Technology Rewind: CS 240: Systems Principles, in the Old Days

In college, I took a required Computer Science class called “Systems Principles”. My professor started the class by listing out the seven key components in a successful system/program development process:

1. Requirements

2. Specifications

3. Design

4. Development

5. Test

6. Test

7. Test

Back in the Stone Age we did not worry so much about intrusion testing applications to help ensure that they could not be attacked from the outside world. In that context, our world was easier. I mean, my first full program was on about 3500 punch cards – it didn’t quite fit in one card box. Yes, I said “punch cards” and "card box". I did not say a full function text editor written in Algol, Fortran, and PL/I. This sounds terribly archaic, but it taught me great habits. I would take my card deck to the window in the computer lab, and they would read the deck, then return it. I could get a listing in a few minutes, or my program would actually run sometime in the next 12-15 hours. The process was a pain, but it absolutely forced you to be disciplined. The professor normally assigned a project on Monday, and it was due Wednesday. You could usually get in two runs by Wednesday, but if you were really lucky, really fast, and went into the computer lab at like 3:00 Tuesday morning, you might be able to get in three runs. You did not have time to waste debugging crap – you just had to get it right, and get it right now. Every time you ran your job after Wednesday, your maximum score dropped a full letter grade. If you wanted an A, you could only count on being able to run your application twice. You simply had to figure out ways to improve the chances that your application was not only error-free, but also error-resistant.

A significant number of people who read that will think it is ancient news, and automatically discount some of this. The world has, after all, changed. But so many of the principles I learned back in the 70s still apply. Kernighan and Plauger’s Elements of Programming Style was the programmer’s bible. We automatically lost 10 points for any GOTO statement in a program. Today, secure programming techniques have become more science than art, and if you are doing web-enabled applications and are NOT conversant in OWASP, and don’t have the OWASP Top 10 bookmarked in your browser, you are simply behind the curve.

Computer Science class and early practice showed that if you had good requirements, accurate specs, a sound design, then developed to the design, your program would work, and you could validate that with testing. Obviously, the initial steps were important. You had control over the program, and with good data, produced the expected results. In Systems Principles, we spent an amazing amount of the semester talking about data validation.

These are simple enough concepts. The identified “data” problem was that you had to assume you had absolutely no control over incoming data. You might expect a 10-character first name, and get something like “Christopher”. You might define the field as alpha only, and get a first name like “Jon-Louis” – For the first 8-10 years after I got my driver's license, my name was “Jonlouis” not “Jon-Louis” (or even “JonLouis”).

One of the office directors from a previous job gave formal approval to every single system before it was fielded. As part of his approval process, one of the things he would do was what he called his "butt test". He would navigate to a data entry field and sit on the keyboard. The field would essentially fill with random characters, depending on how he wiggled while he sat. He would fill a number field with letters, and a letter field with numbers and characters, and every field would get hundreds of characters, regardless of how many characters it was actually supposed to accept. The message to developers was clear - plan for unknown input - and, I thought surprisingly, I saw more than one system crash during "the butt test" (none of mine, of course).

The point is that if you want to get results that you expect, you need to validate the input data to ensure that it matches the type of data that you are expecting. This should be a pretty basic set of data validation tests that includes boundary checking, valid vs. invalid data types, field overflows, escape and control characters, as well as others. My college professor summarized the need for truly studly data validation by crossing the old saying "garbage in, garbage out" off his whiteboard, and writing underneath it "no garbage in." (And yes, he included the period in his quote.)
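Those basic checks can be sketched in code. The following is a minimal, hypothetical example in Python (the field rules, limits, and function name are mine, not from any particular system) showing the kinds of tests the paragraph lists: length/overflow limits, control-character rejection, and a type pattern loose enough to accept real names like "Jon-Louis":

```python
import re

MAX_NAME_LEN = 40  # hypothetical field width

def validate_first_name(raw: str) -> str:
    """Apply 'no garbage in': reject input before it enters the system."""
    if not raw:
        raise ValueError("first name is required")
    if len(raw) > MAX_NAME_LEN:
        # Field-overflow check: the "butt test" fills fields with
        # hundreds of characters regardless of the declared size.
        raise ValueError("first name too long")
    if any(ord(ch) < 32 for ch in raw):
        # Escape and control characters have no place in a name field.
        raise ValueError("control characters not allowed")
    if not re.fullmatch(r"[A-Za-z][A-Za-z' -]*", raw):
        # Alpha plus hyphen/apostrophe, so "Jon-Louis" passes where a
        # strict alpha-only rule would mangle it into "Jonlouis".
        raise ValueError("unexpected characters in first name")
    return raw.strip()
```

The details would differ per field, but the shape is the same everywhere: check the boundary, check the type, and reject anything outside the expectation rather than trying to clean it up afterward.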

In more practical language, studly data validation can go a long way to prevent buffer overflows, SQL injection, cross-site scripting, and a variety of data manipulation attacks.
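For the SQL injection case in particular, validation works best alongside parameterized queries, which keep untrusted input as data rather than executable SQL. A small illustrative sketch using Python's standard sqlite3 module (the table and the hostile string are hypothetical examples, not from the article):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (first_name TEXT)")

# A classic injection attempt arriving as "data" from the outside world.
malicious = "Robert'); DROP TABLE users;--"

# The ? placeholder binds the value; the driver never interprets it as SQL,
# so the DROP TABLE never executes.
conn.execute("INSERT INTO users (first_name) VALUES (?)", (malicious,))

rows = conn.execute("SELECT first_name FROM users").fetchall()
# The hostile string is stored verbatim as text, and the table still exists.
```

Had the value been spliced into the SQL string directly, the same input could have altered the query; bound parameters make that entire class of manipulation moot.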

That in itself also points out something about the current state of application security. While my college professor stressed that application security was concerned with good programming technique and data validation, that is a simplistic view of the world. Security that impacts your application is affected by all of those systems and things that support and surround your application, and some of those things may not even be related.

Obviously, you consider the application itself. You also consider tools and functions used by the application, perhaps third-party plug-ins that let you watch movies or play other enriched media. The application is written in one or more programming languages that may or may not be patched and may or may not have issues. The application then runs on an operating system, which is hopefully patched, which in turn runs on, and is fully supported by, a set of hardware. Middleware allows the application to communicate with a backend system, hopefully in an encrypted manner. That middleware runs on an appropriately patched operating system, which runs on shared hardware. The data eventually enters a database, which, of course, includes the latest db patches, and includes the security options to enable encryption. That database also runs on an operating system, and its own hardware. But, to help control license fees, the database is a set of tables in a single db instantiation, which is shared by a different application (different operating systems, different middleware, different supporting apps and utilities). All of these systems are supported by an enterprise anti-virus, anti-malware solution.

But, unless your application environment is completely segregated from your organizational infrastructure, you also have tangential and unrelated systems. You probably have an email system that does not connect in any direct manner to the application or any system that supports it. That is, until you get an emailed Trojan horse that spreads around your internal network, across other supporting systems, and possibly to the same system(s) that support the application. And, the list goes on…

So web application security and testing, while a critical part of what you do, really is only one layer in what needs to be thought of as a multi-level security approach. You need to harden internal systems, segregate networks, use encryption, patch systems, use anti-malware software, and all of those “other” things to protect your entire environment.

Web application tests are a good way to help identify flaws in web-facing applications. Most modern application scanners do an excellent job of testing the web applications themselves for known vulnerabilities, including data entry errors as well as cookie, header, form, and URL manipulations. Most application scanners also test the components of the web application, which provides a more robust view into application security. You get a better view of the platform, so that you can see the application “as-built”. Like it or not, using web application scanners has become good business practice for any company that has web-facing applications, and that is ultimately a good thing, since good application scanners do reduce an organization’s exposure.

For the most part, we are talking about good security practices, not compliance. Not many public standards actually require an organization to use application scanners to help test their environment. Some of the standards do include requirements for an external intrusion test, but that may not necessarily include a full analysis of the web-facing applications. Section 6.6 of the PCI standard explicitly calls for application testing with the following language: “public-facing web applications, address new threats and vulnerabilities on an ongoing basis…”

Ultimately, there are two problems with this: frequency and coverage.

Like most good standards, PCI includes additional language about frequency of testing.

1. At least annually

2. After any changes

Organizations have gotten very good at completing their annual scans, but not so good at scanning “after any changes”. Even if we limit these scans to “material” or “significant” changes, organizations are not good enough about admitting to themselves when their environment has changed enough that a new scan is warranted. If we do anything to improve this scanning, it should be to better acknowledge that we need to rescan on a more frequent basis.

For coverage, the problem is that the application scanners, for the most part, do an excellent job of testing the application and the system that supports it. But they do not, themselves, test surrounding systems. Without an intrusion test of all external-facing systems, the value of an application scan is limited. And, without intrusion testing internal supporting systems, the value of the application scan is pretty much limited to a picture of the application environment, which is just one layer of the organization. The same logic applies to intrusion scans: if you want a true picture of your environment, any single scan is only one part of a more complete program of testing.

We have all the tools, we just need the right level of discipline to make sure that our testing is a little more complete, a little more repeatable, and a little more effective than a "butt test".

Jon-Louis Heimerl is Director of Strategic Security for Omaha-based Solutionary, Inc., a provider of managed security solutions, compliance and security measurement, and security consulting services. Mr. Heimerl has over 25 years of experience in security and security programs, and his background includes everything from writing device drivers in assembler to running a world-wide network operation center for the US Government. Mr. Heimerl has also performed commercial consulting for a variety of industries, including many Fortune 500 clients. Mr. Heimerl's consulting experience includes security assessments, security awareness training, policy development, physical intrusion tests and social engineering exercises.