Common Accounts

Username       Passwords                                              Systems Affected
-------------  -----------------------------------------------------  --------------------------------------------------------
Administrator  ""; Admin; admin; administrator; Administrator; root   Windows; Unix and many other platforms and applications
Admin          ""; Admin; admin; administrator; Administrator; root   Windows; Unix and many other platforms and applications
Demo           ""; demo; demos                                        Many
Guest          ""; Guest; guest                                       Windows
Root           ""; root                                               Unix
sa             ""                                                     Windows SQL Server; others
setup          setup                                                  Unix
sys            sys; system; bin                                       Unix
sysadmin       sysadmin                                               Unix
test           ""; test; Test                                         Common to many applications
user           user                                                   Windows; Unix
web            ""; web                                                Windows; Unix

("" denotes a blank password.)
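As a quick sketch of how a tester might put this table to work, the following C loop tries common username/password pairs against the system under test. The authenticate() function and the handful of pairs shown are illustrative assumptions, not part of the book's material:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for the login routine of the system under test.
 * Returns 1 on successful login, 0 otherwise. For demo purposes it
 * pretends the install left one default account enabled. */
static int authenticate(const char *user, const char *pass) {
    return strcmp(user, "sa") == 0 && strcmp(pass, "") == 0;
}

int main(void) {
    /* A few username/password pairs from the common-accounts table;
     * "" is a blank password. */
    static const char *pairs[][2] = {
        {"Administrator", ""}, {"Administrator", "admin"},
        {"Admin", "admin"},    {"Guest", ""},
        {"root", "root"},      {"sa", ""},
        {"test", "test"},      {"user", "user"},
    };
    size_t n = sizeof pairs / sizeof pairs[0];

    for (size_t i = 0; i < n; i++) {
        if (authenticate(pairs[i][0], pairs[i][1]))
            printf("default account still enabled: %s / %s\n",
                   pairs[i][0], pairs[i][1][0] ? pairs[i][1] : "(blank)");
    }
    return 0;
}
```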
Welcome to How to Break Software Security! This course is based on the book of the same name, published in 2003. That book followed How to Break Software and precedes How to Break Web Software. That is a lot of How-To’s, which is a good thing, because that’s what this course is about: understanding security vulnerabilities and doing something about them in your own applications. Whether you are a developer, tester, integrator, manager, or decision-maker, you’ll find this material invaluable for understanding security and security vulnerabilities. Welcome to the wonderful world of breaking things!
We’ve been performing functional testing for decades, and the process is pretty well-entrenched. We have a spec or a test plan that tells us what the application is supposed to do. Say, for example, our test plan tells us to apply input A and that the application should generate output B. As a functional tester, that’s what we do: apply A, watch for B, and when we see it, mark the test case as ‘passed.’ What we are doing here is verifying that the application did what it was supposed to do. But this is both too much and not enough for security testing. It’s too much in that security testers really don’t bother with what the app is supposed to do; we’re concerned more with what the app is not supposed to do! In other words, we apply that same input A but don’t care about the output B that is supposed to occur. Instead, we try to verify that some bad output C does not occur. That’s what you’ll learn in this course: how to anticipate insecure behaviors and test for their absence.
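The contrast can be captured in a small sketch. The process() function below is a hypothetical stand-in for the application under test, and the leftover-credentials file is an invented example of a “bad output C”; this is a minimal illustration in C, not a real test harness:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical function under test: the spec says input "A" yields
 * output "B". This stub stands in for the real application. */
static const char *process(const char *input) {
    return (strcmp(input, "A") == 0) ? "B" : "";
}

int main(void) {
    /* Functional check: apply A, verify the specified output B appears. */
    assert(strcmp(process("A"), "B") == 0);

    /* Security check: apply the same A, but verify that an insecure
     * side effect C is ABSENT -- here, that no scratch file holding
     * sensitive data was left behind (an invented bad behavior). */
    (void)process("A");
    FILE *leak = fopen("scratch-credentials.tmp", "r");
    assert(leak == NULL);   /* pass only if the bad behavior did not occur */
    if (leak) fclose(leak);

    puts("functional and security checks passed");
    return 0;
}
```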
To highlight the difference, let’s examine two bugs, one functional and one security-related, and analyze how they differ.
This screen snap is just for the slides; during the course we will repro the bug in Excel. The bug is in the “scenarios” feature, and the analysis is as follows:
1. The expected functionality DOES NOT WORK: we do not see the required output.
2. The failure symptoms are pretty easy to see.
This is, in essence, a typical functional bug.
This will also be demoed live. The bug in Macromedia Flash (which has since been fixed) doesn’t show up when the application executes this SWF file. The bug has the following properties:
1. The desired result (output) does indeed happen: the file is rendered correctly. This means that the insecure side effect (a buffer overflow) is masked by the fact that the software did what it was supposed to do.
2. Insecurity often happens invisibly; new tools and thought processes are required to find it. This means that testers need to think about what SHOULD NOT HAPPEN when they are doing security testing.
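The Flash player’s source isn’t public, so the following C sketch only illustrates the general pattern: untrusted file data is copied into a fixed-size buffer with no length check, yet the visible output still looks correct, so a purely functional check passes. The render_title() function and field layout are invented for this example:

```c
#include <stdio.h>
#include <string.h>

/* Illustrative pattern only -- not the actual Flash code. The "title"
 * field comes from an untrusted file and is copied with no length check. */
static void render_title(const char *file_data) {
    char title[16];
    strcpy(title, file_data);          /* deliberate bug: overflows when
                                          file_data is 16 chars or more */
    printf("Rendering: %s\n", title);  /* the output still LOOKS correct */
}

int main(void) {
    render_title("short clip");        /* well-formed file: no symptom */

    /* A crafted file overruns 'title' and corrupts adjacent stack memory.
     * The string may still print normally, masking the corruption until
     * the function returns (behavior past this point is undefined). */
    render_title("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA");
    return 0;
}
```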
In order to help us think about security bugs, we offer two models for testers to keep in their heads while they are doing security testing. The first deals with the software itself and teaches us how to think about software behaviors. The second deals with the environment in which the application runs and teaches us to think about how the application interacts with other entities in its environment of use.
On the left-hand side of this diagram we have the specification, or intended behavior of the application. This is what the application is SUPPOSED to do. When the application gets coded (the second, rightmost circle), we have ACTUAL behavior to compare to the EXPECTED behavior. This is the process of testing: find problems, fix problems, and make the two circles merge. But functional testing only finds bugs on the left part of the Venn diagram. These are behaviors that SHOULD happen but DON’T, just like the Excel bug shown earlier. To find the security bugs on the right side, we need to train ourselves to look for what “isn’t there,” to look in places we don’t look in traditional testing. We need to think about what should NOT happen.
If we think about the Macromedia bug for a moment, we realize that we could not see the security bug through the user interface. The UI is rarely the place where security bugs manifest (though it can be, as we will see later). Instead, we have to think more holistically about the execution environment, which the application touches through four interfaces.

The UI is one aspect of the environment. It is the interface where the application receives user input, which must be carefully error-checked. It is also the place where outputs are rendered, and we have to make sure those outputs do not reveal anything useful to an attacker.

The file system is the interface where data from files is read and written. Unlike the UI, this interface is normally invisible and requires special tools to observe the traffic that crosses application boundaries. Another important set of inputs crosses this boundary and must be error-checked. However, error checking here is much less common than at the UI, because developers tend to trust the content of files more than they trust the content of UI text boxes and the like.

The software interface is where data flows to and from third-party controls and applications: network libraries, databases, math libraries, and so forth. This is also an invisible interface requiring special tools.

The kernel interface is where applications get memory and other resources. This is where evidence of memory-based exploits will be found, and it also requires special tools to observe. One such tool is Holodeck, which will be demoed here.
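To make the file-system point concrete, here is a minimal C sketch. The one-byte length-prefixed record format and the read_record() function are invented for illustration; the bounds check shown is exactly the error checking that is routinely applied to UI input but skipped for file input:

```c
#include <stdio.h>
#include <string.h>

/* Illustrative sketch: parse a record with a 1-byte length prefix
 * followed by that many bytes of data (an invented file format). */
static int read_record(FILE *f, char *out, size_t out_size) {
    unsigned char len;
    if (fread(&len, 1, 1, f) != 1)
        return -1;

    /* 'len' came from the file. File content deserves the same distrust
     * as a UI text box; without this check, a crafted file with an
     * oversized length field would overflow the caller's buffer. */
    if ((size_t)len >= out_size)
        return -1;                       /* reject over-long records */

    if (fread(out, 1, len, f) != len)
        return -1;                       /* truncated file */
    out[len] = '\0';
    return 0;
}

int main(void) {
    /* Build a tiny well-formed record in a temp file, then parse it. */
    FILE *f = tmpfile();
    if (!f) return 1;
    fputc(5, f);
    fwrite("hello", 1, 5, f);
    rewind(f);

    char buf[8];
    if (read_record(f, buf, sizeof buf) == 0)
        printf("record: %s\n", buf);
    fclose(f);
    return 0;
}
```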
Here’s where we play the All Your Base Are Belong To Us video, which underscores hackers’ motivations and sheer delight in doing what they do. The lessons learned:
1. Hackers have some free time on their hands; they don’t ship products!
2. Hackers have some skills and they know how to use the tools.
3. Hackers are motivated to break anyone’s application.
Beginning in 1996, we undertook a massive project to analyze bugs. This project was partially funded by industry and government sources, and its goal was to develop a better understanding of important bugs and to describe better techniques to prevent and find defects. We began by studying functional bugs, and the result was How to Break Software by James A. Whittaker. We then turned our attention to security bugs, which resulted in How to Break Software Security by James and Herbert H. Thompson. In both cases, we studied BUGS THAT SHIPPED, because it is this set of bugs that our current processes are the worst at preventing and finding; after all, these are the ones that got away.