Implanting Easter Eggs Is Not Just a Developer’s Stab at Humor. They Can Double as Backdoors with Devastating Effect.
Easter was just around the corner, giving us the opportunity to do some egg hunting – not just physically but virtually as well. In the virtual world, “Easter eggs” describes hidden code within an application that surfaces only when a specific, undocumented action triggers it. Finding these Easter eggs can be fun. Video games, for example, are known to be chock-full of Easter eggs, AKA cheat codes (remember the Shift+L of the original Prince of Persia?). Even Microsoft developers have done this, adding their own sense of humor to the mix by hiding such eggs in old versions of Excel. Since 2002, however, Microsoft has officially banned the practice as part of its Trustworthy Computing initiative because of the security implications of such eggs. As it turns out, security and Easter eggs go hand in hand.
Easter eggs uncovered in industrial systems
Stuxnet brought attention to SCADA systems. Before, they were seen as those heavy systems in charge of monitoring and controlling national and industrial infrastructure such as power lines, wind farms and even traffic lights. Stuxnet gave these systems an edge, raising awareness of their vulnerabilities and of the impact if they were exploited. Hacking into them proved capable of setting back a nuclear power program by several months. In fact, hacking SCADA systems became as sexy as anyone could possibly imagine cyber-security to be, with the story appearing in a Vanity Fair feature.
As researchers started focusing on Siemens’ PLC firmware – the system that was targeted by Stuxnet – they even found an Easter egg embedded within an HTML file depicting dancing monkeys. Granted, the researcher did not investigate these simians to see if the code contained actual vulnerabilities. But it’s not difficult to see the potential threat of sneaky code – this “feature” went undetected by Siemens, bypassed code review and QA testing, and could even introduce further bugs. Effectively, Easter eggs have the potential to double as backdoors.
If Stuxnet was intent on creating havoc, Duqu was intent on stealing documents. The authors of this variant could not resist adding their own Easter egg to the exploit, embedding the following line within the code: “Copyright (c) 2003 Showtime Inc. All rights reserved. DexterRegularDexter”
Easter Eggs Covered in Vengeance
Implanting Easter eggs is not only a developer’s stab at humor. In fact, they can be sprinkled through the code specifically to cause a devastating effect. In this case, the Easter egg takes the form of a logic bomb. The first such incident I can recollect is actually taken from the 1992 film “Single White Female”, where Bridget Fonda’s character planted a logic bomb to delete all data from a firm if payment was not delivered within three months. Although logic bombs are one of Hollywood’s favorites, reality serves us with similar reminders, such as:
• A former TSA data analyst planted code intended to sabotage TSA’s terrorist screening database, which is also used to screen people with access to sensitive information.
• A former IT contractor for the mortgage bank Fannie Mae planted a logic bomb that, had it gone off, would have deleted customer mortgage data.
• A former UBS PaineWebber systems administrator planted a logic bomb which went off and deleted files, costing the company over $3M in damage.
In two of the incidents mentioned above the companies were lucky, as they detected the bomb early enough. In the first case, the malicious code was never triggered because TSA cameras caught the data analyst planting the code after hours, and an investigation followed. In the second case, another engineer found the code – a very rare occurrence.

After all, the code insiders use for logic bombs is simple and straightforward. It does not require anything special because it relies on the trust and access already granted to the individual within the organization. Further, it does not require the unusual mechanisms that rootkits and infection kits typically use. Take, for instance, the published code of the logic bomb planted at UBS PaineWebber: the malicious code consists of perfectly legitimate commands, including, presumably, the last one, which deleted the files. For the most part, trying to figure out whether a piece of code is malicious is futile – it loses focus, wastes resources and leads to inadequate controls in an attempt to spot the few destructive lines of code. This is not to say, though, that we should sit aside and let these logic bombs take control.
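To illustrate how unremarkable such code can look, here is a minimal, defanged sketch. The function name and trigger date are hypothetical, and the destructive action is simulated with a return value rather than an actual delete:

```python
import datetime

def nightly_cleanup(today=None, trigger=datetime.date(2026, 1, 31)):
    """Looks like a routine scheduled maintenance job."""
    # Nothing here needs elevated privileges or rootkit tricks --
    # just a date check wrapped around an ordinary command.
    today = today or datetime.date.today()
    if today >= trigger:
        # A real logic bomb would place a legitimate-looking delete
        # here (e.g. shutil.rmtree on a data directory). Simulated:
        return "payload triggered"
    return "normal run"
```

The point is that every line is a standard call the insider is already trusted to run, which is why code review alone rarely catches it.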
What should organizations do?
A better use of resources and budget is to spend time mitigating the consequences and detecting the payload when it executes. Here are a few recommendations:
• Alert on and monitor operations against sensitive information. Take, for example, the UBS PaineWebber incident, where the code erased the disks. A solution should detect this type of sensitive activity – not only when performed from a left-over file, but also when performed by a logged-on user, whether intentionally or by mistake.
• Detect suspicious behavior. As mentioned, the TSA uncovered the scheme when security cameras detected after-hours access. The same kind of irregular-access detection should be applied to digital systems as well.
• Maintain a detailed audit trail. In the case of an incident, this type of audit may be invaluable for forensics purposes.
• Upon departure, disable the employee’s access, and continuously monitor dormant accounts. Many of these logic bomb cases are caused by disgruntled employees within a short time frame after being let go. Fannie Mae did not terminate the contractor’s account until later that evening, giving the contractor just enough time to write and deploy the code.
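The after-hours and dormant-account recommendations above can be sketched as a simple scan over login records. This is only a sketch under assumptions of my own – the record format, business hours and dormancy threshold are all hypothetical, and a real deployment would read from the audit trail described earlier:

```python
from datetime import datetime, timedelta

def flag_suspicious(events, now, business_hours=(8, 18), dormant_days=30):
    """Flag after-hours logins and accounts idle past a threshold.

    events: list of (username, login_timestamp) audit records.
    """
    last_seen = {}
    after_hours = []
    for user, ts in events:
        # Track each account's most recent login.
        last_seen[user] = max(last_seen.get(user, ts), ts)
        # Logins outside business hours warrant a closer look.
        if not business_hours[0] <= ts.hour < business_hours[1]:
            after_hours.append((user, ts))
    # Accounts with no activity for the threshold period are dormant.
    dormant = sorted(u for u, ts in last_seen.items()
                     if now - ts > timedelta(days=dormant_days))
    return after_hours, dormant
```

Fed a stream of login events, this would surface, say, a 2 a.m. login by a recently terminated contractor, or an account untouched for months – exactly the signals the TSA and Fannie Mae cases turned on.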
Hopefully, these tips will be helpful in combating these threats. That and of course, some real Cadbury’s Creme Eggs.