Sandboxing is a relatively new trend in malware analysis. It allows companies, such as antivirus vendors, to execute suspicious software in an isolated environment where it can’t do any real damage. By watching what the executable does, security researchers can determine whether the software is malicious or a legitimate application users genuinely want to install. For example, if an unknown application executed in the sandbox is observed sending passwords to a random website in a foreign country, the executable is likely malware. If no such observations are made, then it’s “probably” goodware.
It’s a trend that’s created an interesting challenge for malware authors, who need to find ways to delay detection by a sandbox. See, the longer they avoid detection, the longer their malware has a chance to infect a larger number of machines. And the more machines that get infected, and the longer they stay infected, the more value attackers can extract.
Enter sandbox evasion, a technique many malware authors have adopted in the past few years. The idea is that if the malware can become aware that it’s being analyzed in a sandbox, it can choose to do nothing. As long as it doesn’t do anything suspicious during analysis, the executable will likely be deemed “goodware” and allowed to proceed, facilitating further infection.
Attackers have figured out several ways to accomplish this. They start by asking a few questions:
• Am I being debugged?
• Am I running in the right environment?
• Do I have an Internet connection?
• Is the system clock up to date?
• Is there a human sitting behind the keyboard?
The list can go on and on, but attackers are simply looking for clues on the system that indicate it might not be a real user desktop. For example, if the malware sees that it is being debugged (a common technique for observing malware activity), then it simply won’t do anything. If the malware suspects analysis, it may ask the user a question such as, “What’s your email address?” If no input is observed, the malware assumes there is no human on the other end and, again, does nothing.
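To make this concrete, here’s a minimal sketch, in Python, of the kinds of checks a sample might run before deciding whether to unleash its payload. The specific Windows API calls (IsDebuggerPresent and GetCursorPos via ctypes), the thresholds, and the delay are illustrative assumptions, not the behavior of any particular malware family:

```python
import ctypes
import datetime
import socket
import time

def looks_like_analysis_environment():
    """Illustrative pre-flight checks; all thresholds are made-up examples."""
    kernel32 = ctypes.windll.kernel32   # Windows-only API access via ctypes
    user32 = ctypes.windll.user32

    # Am I being debugged?
    if kernel32.IsDebuggerPresent():
        return True

    # Do I have an Internet connection?
    try:
        socket.create_connection(("example.com", 80), timeout=3).close()
    except OSError:
        return True

    # Is the system clock up to date? Sandboxes are often frozen in the past.
    if datetime.date.today().year < 2015:
        return True

    # Is there a human behind the keyboard? Wait and see if the mouse moves.
    class POINT(ctypes.Structure):
        _fields_ = [("x", ctypes.c_long), ("y", ctypes.c_long)]

    before, after = POINT(), POINT()
    user32.GetCursorPos(ctypes.byref(before))
    time.sleep(10)
    user32.GetCursorPos(ctypes.byref(after))
    if (before.x, before.y) == (after.x, after.y):
        return True

    return False

# The payload only runs when every check suggests a real user's desktop:
# if not looks_like_analysis_environment():
#     run_payload()
```

If any check trips, the sample simply behaves like harmless goodware for the duration of the analysis.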
Another evasion tactic used by malware authors is to tightly control malware distribution. If the security research labs are unable to obtain a sample of the malware, they obviously can’t analyze or detect it. For such cases, attackers have adopted a malware distribution architecture that validates potential victims prior to disseminating the malware.
This validation often includes things like:
• Is it a known security lab asking for the malware?
• Is the person using a browser I know how to infect?
• Is the person using an operating system I know how to infect?
• Does the person have the vulnerable plug-ins installed?
• Is the user running antivirus applications likely to detect the malware?
• Is the user coming from a country I want to infect?
If all the questions come back affirmative, the client is likely a good candidate to receive the malware. If any come back negative, the malware isn’t served. This kind of client-side vetting makes it exceptionally difficult for security researchers to even obtain the malware for analysis in the first place. Add to that the fact that the environment and requirements for receiving the malware vary from one distribution server to the next, and it often takes hours of testing (the kind of time security researchers generally don’t have to spare) to identify an environment the server is willing to transmit the sample to.
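As a sketch of what this gatekeeping might look like on the distribution server, the following Python function turns the checklist above into code. The network range, user-agent substrings, and country list are hypothetical placeholders, not data from a real campaign:

```python
import ipaddress

# Hypothetical gatekeeping logic for a malware distribution server.
# Every list and value below is an illustrative placeholder, not real data.
KNOWN_LAB_NETWORKS = [ipaddress.ip_network("198.51.100.0/24")]   # documentation range
TARGET_COUNTRIES = {"US", "GB", "DE"}

def should_serve_exploit(client_ip, user_agent, country_code):
    """Serve the exploit only to visitors who look like exploitable victims."""
    addr = ipaddress.ip_address(client_ip)

    # Is it a known security lab asking for the malware?
    if any(addr in network for network in KNOWN_LAB_NETWORKS):
        return False

    # Is the person using a browser and operating system I know how to infect?
    if "Windows NT" not in user_agent or "Firefox" not in user_agent:
        return False

    # Is the user coming from a country I want to infect?
    if country_code not in TARGET_COUNTRIES:
        return False

    return True

# A Linux/Opera visitor is quietly turned away:
print(should_serve_exploit("203.0.113.7",
                           "Opera/9.80 (X11; Linux x86_64)", "US"))   # False
```

A known research lab, a visitor running the wrong browser or operating system, or a request from the wrong country all come away empty-handed, which is exactly why researchers struggle to coax a sample out of these servers.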
Unfortunately, this is bad news for consumers and security researchers alike because it makes protective measures like antivirus far less effective at catching sophisticated threats. The silver lining is that it presents an interesting opportunity to save folks from ever even encountering a virus (or, if they do encounter one, to help them trick the virus into not performing malicious actions on their machine).
For example, say you were to modify your browser to look as though you’re running Linux with the Opera browser, even though you are actually using Windows with the Firefox browser. It’s likely that many malware distribution servers would refuse to serve a piece of malware to your browser (because their Windows malware won’t run on Linux). What’s more, if you were to execute all software with a debugger attached, or if your system clock were a year behind, the malware might not perform malicious actions when executed because it believes it’s being analyzed in a security sandbox.
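Spoofing the environment can be as simple as changing the User-Agent string every request advertises. As a rough sketch, the snippet below uses Python’s standard urllib to send a request that claims to come from Opera on Linux; the header string and URL are just examples:

```python
import urllib.request

# A disguised request: the machine really runs Windows and Firefox, but every
# request advertises Opera on Linux so exploit servers pass it over.
# The User-Agent string and URL are illustrative examples.
SPOOFED_UA = "Opera/9.80 (X11; Linux x86_64) Presto/2.12.388 Version/12.16"

request = urllib.request.Request("https://example.com/",
                                 headers={"User-Agent": SPOOFED_UA})
with urllib.request.urlopen(request) as response:
    print(response.status, response.headers.get("Content-Type"))
```

In a real browser you’d flip the equivalent preference instead (for example, Firefox’s general.useragent.override setting) so every page you visit sees the disguise.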
In short, while attackers are constantly improving their evasion tactics to extend the lifetime of their malware, users can turn those same evasion checks against the malware to help prevent infection in the first place. As the malware landscape evolves further, other strategies that malware authors employ to heighten the success of their exploits may likewise prove exploitable by legitimate users.
It’s time to start turning the tables by exploiting false assumptions made by attackers. Deception will become the future of protection. To butcher a corny mantra, “Assumptions make asses out of, well, malware authors.”