It’s almost inevitable that a discussion about industrial control system cyber security will touch on a few touchy topics: the ‘air gap’ myth; the ‘a firewall is good enough’ camp vs. the ‘you must build an anti-cyber-terrorist über bunker’ camp; ‘sharing information helps the community’ vs. ‘you’re publicizing a playbook for hackers’; and my personal favorite, the fatalistic claim that none of it really matters because nothing can detect or prevent the angry employee from yanking a cable or slamming his fist on a button (fans of The Princess Bride will appreciate the analogy of setting the soul-sucking torture machine to ’50’ in the Pit of Despair).
There’s no arguing that each side of each argument has some merit, and the concern over an inside attack is extremely valid. After all, let us assume that you’ve accepted that the air gap is a myth, and you’ve built the strongest defenses since the fully operational Death Star in Episode VI. You’ve installed unidirectional network gateways, also known as data diodes, for layer 1 protection. You’re using 802.1X for layer 2 protection. You’ve put firewalls (layer 3+) and industrial protocol filters (layer 4+) on both ends of your unidirectional gateways. You’ve installed application whitelisting on everything possible, and you’re monitoring everything else with SCADA-aware Intrusion Prevention Systems.
You’ve done all that, and then one day Bob overhears a conversation at the water cooler between the HR director and the Plant Manager. Benefits are changing, or there’s a planned layoff in Bob’s area, or maybe they were just making fun of Bob’s haircut and he lacks the maturity to handle a personal insult. Bob goes rogue. He could do a lot of things, being Bob—the entirely fictional control system operator with admin privileges on more than a few key systems, and with badge access to a lot of sensitive places full of buttons and levers (and HMIs). He knows where the special knot is that will open the door to the Pit of Despair, and he’s hell-bent on setting the torture machine to ’50’.
Here are a few examples of what Bob could do:
He could badge into the control room, boldly log in to an HMI, and change a bunch of set points so that safety parameters are ineffective. This lacks elegance but could sure cause problems, even though those problems would probably be addressed fairly quickly, and Bob would likely be in handcuffs before lunch.
He could badge into the control room, log in to an HMI workstation, and start modifying some system files to cause trouble. He could swap image file names to make ‘on’ indicators look ‘off’ (a picture is worth a thousand worms!), open up some remote access ports, turn on Wi-Fi, or whatever. With admin access you can mess with a system as blatantly or as elegantly as you’d like.
He could walk in with Stuxnet on a USB stick, ignore the disabled USB ports on the server, and instead unplug the USB mouse and use that port (yes, this is a true story, which is one reason why it’s better to secure USB access digitally rather than disable USB interfaces physically; the other reason is that I know an 8-year-old who knows how to open a PC and install or swap out a USB card). He could laugh at USB removable media restrictions. Then again, if he knows any 8-year-olds, he might not bother, because it would be easier to open a wireless interface on the server, connect it to a 4G mobile wireless access point, and download all sorts of files from an online storage or file sharing service.
Bob may even be the admin for the Application Whitelisting system, and could authorize his home-grown Stuxnet variant as a “known good” application so that AWL-protected systems will allow it to execute. Being smart, he could cover his tracks by tailgating someone into the control room, or by attempting to log in using someone else’s credentials.
He could do a lot of things. In short, he could do almost anything. Bob has access and he has privileges. If Bob does lack certain necessary admin privileges (because let’s face it, this guy would never have passed the background checks required to be given full admin access), he could buy a physical key logger from SkyMall for about ten bucks. One shift cycle later, he has every password he needs. Do you physically inspect the back of your servers every day to make sure no one has installed a pass-through key logger? I don’t (although admittedly, I work on a laptop).
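The application whitelisting scenario above is worth a closer look, because it shows why AWL is only as trustworthy as its administrator. Here is a minimal sketch of hash-based whitelisting (all names and byte strings are invented for illustration, not taken from any real AWL product):

```python
import hashlib

# The approved-digest set is managed by the AWL administrator -- which is
# exactly the privilege Bob abuses in the scenario above.
approved: set[str] = set()

def sha256(binary: bytes) -> str:
    """Identify a binary by its SHA-256 digest."""
    return hashlib.sha256(binary).hexdigest()

def approve(binary: bytes) -> None:
    """What an AWL admin does; Bob can call this on his own malware."""
    approved.add(sha256(binary))

def may_execute(binary: bytes) -> bool:
    """Only binaries with an approved digest are allowed to run."""
    return sha256(binary) in approved

good_app = b"\x7fELF...legitimate HMI client"
malware = b"\x7fELF...home-grown Stuxnet variant"

approve(good_app)
assert may_execute(good_app)        # whitelisted software runs
assert not may_execute(malware)     # unknown binaries are blocked...

approve(malware)                    # ...until the admin blesses them
assert may_execute(malware)
```

The mechanism itself works exactly as designed; the problem is that “known good” means nothing more than “approved by whoever holds the admin role.”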
The good news is that, as is often the case, there are existing tools both new and old that can help protect against Angry Bob. One of the best is some sort of change management mechanism. Another tool that goes hand-in-hand with it is an anomaly-based event monitor. Put these two together and you’ve gone a long way towards the first stage of situational awareness: perceiving a threat, and making an informed decision based upon that perception. If you’re monitoring your corporate IT network and your control system correctly, and if you’ve properly inventoried your various assets and cyber systems, you’ll also be better off when it comes to responding to that threat, as the systems, applications and users involved will be clearly identified.
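As a rough sketch of the change management half of that pairing, consider auditing every observed configuration change against a list of approved change tickets (the asset names, parameters, and users below are all hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Change:
    asset: str        # e.g. an HMI workstation
    parameter: str    # e.g. a pressure set point
    user: str
    timestamp: datetime

# In practice these would come from a change-management system,
# not a hard-coded set.
APPROVED_CHANGES = {
    ("HMI-03", "pump_speed_limit"),
}

def is_authorized(change: Change) -> bool:
    """A change is authorized only if a matching ticket exists."""
    return (change.asset, change.parameter) in APPROVED_CHANGES

def audit(observed: list[Change]) -> list[Change]:
    """Return every observed change that lacks an approved ticket."""
    return [c for c in observed if not is_authorized(c)]

events = [
    Change("HMI-03", "pump_speed_limit", "alice",
           datetime(2013, 5, 1, 9, 0)),
    Change("HMI-03", "boiler_high_pressure_setpoint", "bob",
           datetime(2013, 5, 1, 16, 55)),
]
for c in audit(events):
    print(f"ALERT: unauthorized change to {c.parameter} "
          f"on {c.asset} by {c.user}")
```

The real value comes when the alert carries enough inventory context (which asset, which application, which user) to act on immediately, which is the situational awareness point above.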
Here is an example:
Our antagonist shows up for work at a plastics manufacturing company. He badges into the control room, logs in, and everything is business as usual until he subtly increases the high pressure point on a boiler. He does this right before his shift ends so that he’ll be long gone before anything happens. Whoever is taking over for Bob on the new shift might notice right away, but most likely wouldn’t notice until the pressure started rising and things started turning red on the console. At that point, he’d notice and the human element would save the day, but probably not before compromising whatever was in that tank, costing his company time, money, and possibly safety. Serves them right, our disgruntled saboteur thinks to himself as he drives off into the sunset.
Unfortunately for him, his company was able to detect the change when it occurred, using the basic statistical capabilities of their SIEM. In this case, set point changes are being monitored by the SIEM just as if they were events from any other IT system. The SIEM therefore knows that certain changes happen in certain patterns at certain times of the day, week or month, and in turn knows that the change made to the pressure set point was unexpected. An alert was sent to key personnel by email and SMS, and Bob’s replacement rushed straight to that console and set things right before Bob even left the building. The change was clearly made by Bob, as he logged in using his own credentials, and security was waiting for him at the facility exit. Everything happened in near real time because the right data points were being monitored, and running baselines were maintained to ensure that even a small change would register as an anomaly. The sabotage (or was it simply an honest operator error?) was averted.
Read “SCADA Mischief Episode 2: Context and Correlation” as we learn about context and correlation, and what may have occurred in a “smart Bob” scenario.