Developing Secure Applications
Introduction to the Problems
For something that is so vital to our world, it’s strange that we clearly don’t care much about how secure it is. I’m talking about computers, and software in particular. All kinds of software, from our operating systems through our desktop software tools we use each and every day—and running on anything from large mainframes through small handheld smart mobile devices.
It must be true that we don't care much about security, or we wouldn't keep making the same old mistakes over and over again. Yet when we look at the software we use today, we find security defects repeatedly, even though many of these defects have been known for a decade or more.
Take web applications as an example. If you look at the venerable Open Web Application Security Project (OWASP) Top-10 list of common application vulnerabilities, you’ll find many of them would be eliminated by simply applying positive input validation. But we fail to do it, over and over and over. Why?
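To make that concrete, here is a minimal sketch of positive (allowlist) input validation in Java. The class name, field names, and patterns are illustrative assumptions of mine, not taken from OWASP or any particular application; the point is to define exactly what acceptable input looks like and reject everything else, rather than trying to enumerate dangerous characters.

```java
import java.util.regex.Pattern;

/**
 * Minimal sketch of positive (allowlist) input validation: accept only
 * input that matches an explicitly defined pattern, reject everything else.
 * The field names and patterns here are illustrative assumptions.
 */
public final class InputValidator {

    // Allow only what a username is expected to look like: 3-20 letters,
    // digits, dots, or underscores. Anything else is rejected outright.
    private static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9._]{3,20}$");

    // Allow only simple numeric identifiers up to 9 digits.
    private static final Pattern NUMERIC_ID = Pattern.compile("^[0-9]{1,9}$");

    public static String requireUsername(String input) {
        if (input == null || !USERNAME.matcher(input).matches()) {
            throw new IllegalArgumentException("Invalid username");
        }
        return input;
    }

    public static int requireNumericId(String input) {
        if (input == null || !NUMERIC_ID.matcher(input).matches()) {
            throw new IllegalArgumentException("Invalid id");
        }
        return Integer.parseInt(input);
    }
}
```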
I attribute much of the blame to several things, such as:
• Rush to market. Pressure to take software products to market has never been greater. This has been fueled by the advent of software marketplaces such as Apple's iPhone application store, where a gold rush mentality is prevalent. The result, inevitably, has been that less and less attention is paid to the security of the end products.
• Focus on functionality. Partly due to the divergence of the security and software development disciplines, we've seen almost fanatical attention paid to the basic functionality of software. Security issues, on the other hand, are almost always found in the non-functional aspects—or, perhaps more correctly, the inadvertent functionality—of software.
• Inadequate testing. As with developing software with only its functional aspects in mind, most software testing consists of little more than verification of functional specifications. When security testing, which is largely non-functional in nature, is done at all, it more often than not amounts to little more than cursory penetration testing.
• Failure to learn from history. In pretty much every other engineering discipline in the world, engineers study their failures and learn from them extensively. Paradoxically, this isn’t so much the case in software engineering. Although we see software vulnerabilities documented in security technology communities, it’s rare that software developers have any direct exposure to those publications. This simply wouldn’t be tolerated in other engineering environments, like say civil engineering (where things like bridges are designed).
These are just a few of the factors that are fighting against us, but they’re all important to understanding the situation that we’re in today.
Available Solutions
Of course, the news isn’t entirely bad. In the late 1990s, the software security discipline started to emerge, and significant work has been done since that time. We undoubtedly have a much better handle on the situation today.
Chief among the advances we’ve made are improvements in software processes, understanding of the technical issues, and the development of some tools that can greatly help us.
Secure Development Processes
As our understanding of software security has grown, secure development methodologies have emerged. These have been well documented in several books published in the last few years, including The Security Development Lifecycle, by Michael Howard and Steve Lipner (Microsoft Press), and Software Security, by Gary McGraw (Addison-Wesley). These two books describe the two leading approaches to developing secure software today.
The Microsoft SDL approach lays out a 13-stage process that is both mature and rigorous. Designed by Microsoft for internal use on its own products, it has been subjected to several years of use and refinement, and is considered by many to be the preeminent set of practices in the field today.
Briefly, the stages in the SDL process are as follows:
- Education and awareness
- Project inception
- Define and follow design best practices
- Product risk assessment
- Risk analysis
- Creating security documents, tools, and best practices for customers
- Secure coding policies
- Secure testing policies
- The security push
- The final security review
- Security response planning
- Product release
- Security response execution
SDL, which was obviously built with a large software product developer in mind, is quite prescriptive and rigorous. It also requires substantial changes to an organization's software practices to implement effectively, which is one of the chief hurdles to its more widespread adoption.
By comparison, McGraw documents a process called the "Touchpoints" model in his book, Software Security. The Touchpoints process differs fundamentally from the Microsoft SDL approach in that it is principally review based. It was developed primarily at McGraw's company, Cigital, Inc., which provides security consulting services to its clients.
Rather than substantially changing an existing software development process, the Touchpoints approach integrates into it by inserting reviews of the tangible artifacts produced while developing the software.
The Touchpoints process defines several individual activities as follows:
- Abuse cases
- Security requirements
- Architectural risk analysis
- Code review
- Penetration testing
- Risk-based security testing
- Security operations
- External analysis
Being review-oriented, the Touchpoints allow an organization to adopt them quite easily without fundamentally changing its development process. On the other hand, the process primarily seeks to find mistakes in the various artifacts, and is thus not as prescriptive as the Microsoft SDL.
Knowledge Base
Following sound processes is important, but without a foundation of knowledge to build them on, they will fail. So, it is vital that software developers build an understanding of software security defects as well as how those defects can be attacked. A great place to start learning—at least for web developers—is OWASP.
OWASP’s contributions to the field have been numerous and substantial, including two highly acclaimed tools, WebGoat and WebScarab. As with everything OWASP produces, WebGoat and WebScarab are open source and freely available to the community. These two tools are hugely useful at building that knowledge base.
WebGoat is a simple web application built on top of Apache's Tomcat servlet container. It is an educational tool with deliberately flawed web servlets laid out as easy-to-understand exercises in numerous categories, such as cross-site scripting (XSS), SQL injection, and other OWASP Top-10 areas.
Quite simply, it is undoubtedly the most powerful learning tool I have encountered in my 20+ years’ experience in this field. Without exception, any software technologist—from architect through programmer and tester—who works with web-related technologies ought to be required to successfully complete each and every exercise laid out in WebGoat. That is a bold statement, but it is warranted. WebGoat really is that good.
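To give a flavor of the kind of defect those exercises walk through, the hypothetical fragment below (my own sketch, not code from WebGoat itself) shows a classic SQL injection flaw alongside the parameterized alternative.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class AccountLookup {

    // Vulnerable: the user-supplied value is concatenated directly into the
    // SQL statement, so input like  x' OR '1'='1  changes the query's logic.
    public ResultSet findAccountUnsafe(Connection conn, String accountId) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM accounts WHERE id = '" + accountId + "'");
    }

    // Safer: a parameterized query keeps the user-supplied value as data,
    // never as part of the SQL text.
    public ResultSet findAccountSafe(Connection conn, String accountId) throws SQLException {
        PreparedStatement ps = conn.prepareStatement("SELECT * FROM accounts WHERE id = ?");
        ps.setString(1, accountId);
        return ps.executeQuery();
    }
}
```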
As a complement to WebGoat, WebScarab is a powerful general-purpose web testing tool; specifically, it is an intercepting web application proxy. It works by interposing itself between the tester's web browser and the application being tested, enabling the tester to intercept every HTTP request and response between the client and the server. Intercepted messages can be examined and/or edited however the tester desires. This basic capability, akin to single-stepping through code in a desktop debugger, gives the programmer and tester an incredibly powerful set of tools.
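As an illustration of why that capability matters, consider the hypothetical servlet below (again my own sketch, not part of WebScarab or WebGoat). It trusts a hidden price field supplied by the browser, exactly the kind of flaw an intercepting proxy makes trivial to expose by editing the request in transit.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical checkout servlet: it trusts a hidden "price" field sent by the
// browser. With an intercepting proxy, a tester can change that value in the
// HTTP request before it reaches the server, revealing the missing server-side check.
public class CheckoutServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String item = req.getParameter("item");
        double price = Double.parseDouble(req.getParameter("price")); // client-controlled!

        // The price should be looked up server-side from a catalog,
        // never taken from the request.
        resp.getWriter().println("Charging " + price + " for " + item);
    }
}
```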
Working through the exercises in WebGoat using WebScarab as a means for performing many of its attacks is a powerful approach to learning all the important lessons about web application security.
Code Analysis Tools
Lastly, over the past several years a product market has emerged for software security tools. The most important advance for the software practitioner is without a doubt static source code analysis tools. These evolved out of a couple of research projects in the late 1990s and now form a fairly mature set of commercial product offerings. The major companies in this product space include Coverity, Fortify, and Ounce Labs.
Static code analysis tools work by examining source code for common programming security defects. Most of the tools can also be extended not only to look for defects but also to verify positive compliance with corporate coding guidelines.
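As a rough illustration (my own example, not output from or input to any particular product), the fragment below contains the sorts of defects these tools routinely flag: hard-coded credentials and a predictable random number generator used for a security-sensitive value. The connection URL and class names are made up for the sketch.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Random;

public class ReportJob {

    // Typical findings a static analyzer would flag in code like this:
    //  1. Hard-coded credentials embedded in the source.
    //  2. java.util.Random used where a cryptographically strong generator
    //     (java.security.SecureRandom) is expected for a session token.
    public Connection connect() throws SQLException {
        return DriverManager.getConnection(
                "jdbc:postgresql://db.example.com/reports",
                "report_user",
                "P@ssw0rd!");          // hard-coded password
    }

    public String newSessionToken() {
        return Long.toHexString(new Random().nextLong()); // predictable token
    }
}
```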
When used properly and carefully integrated into the code development process, static analysis tools can be hugely beneficial to development teams and can prevent some common mistakes from making their way into production code.
What Next?
These three things—sound processes, knowledge base, and security tools—can be hugely beneficial to software developers who must write secure software. Security, however, is a constantly moving target, and it is important that we all keep up with new attacks, vulnerabilities, and related technologies as time goes on.