When I worked as a Principal Researcher at RSA Labs around the year 2000, I was tasked with identifying what might affect our revenue stream. “Anything,” I thought. Upon further reflection, I argued that criminals would start trying to steal passwords to gain access to accounts, transfer funds and steal information. This, of course, is what we now call phishing.
My colleagues weren’t convinced. “Nobody would be that dumb,” was the most common response.
Falling for phishing is not about being dumb, but about not paying sufficient attention. Either way, nearly 20 years ago there was no evidence that people could be tricked, so I devised some experiments to test my hypothesis. I couldn’t ask permission, since that would influence the results, or steal passwords, as that would be illegal. Instead, I designed intricate experiments in which I learned whether a password was correctly entered, but not what it was.
The results were fascinating. By varying the attacks, I could determine what type of attack would harvest the most victims. Attackers must be drawn to the most powerful attack, I argued, so by determining what makes an attack powerful, I could predict future fraud trends and identify appropriate countermeasures. I have been engrossed by phishing ever since, and in 2006 I authored “Phishing and Countermeasures: Understanding the Increasing Problem of Electronic Identity Theft.”
Now, as Chief Scientist at Agari, I continue to research and develop insight into emerging attack trends and effective new countermeasures. In these SecurityWeek columns, I will focus on these trends, such as the growing use of targeted attacks including business email compromise (BEC) and spear phishing, that use identity deception to trick victims, as well as their countermeasures, such as authentication and machine learning.
Today, I will focus on endpoint protection blind spots.
We need look no further than the recent WannaCry and NotPetya attacks to underscore the importance of endpoint protection. Endpoint protection is focused on the prevention and detection of malicious threats that can be delivered through a variety of methods. However, even as we work to secure email, documents, web browsers and the OS, the end user remains the weakest link.
Of course, endpoint security models do consider the vulnerability of the end user. Solutions are available today that can prevent attacks even as the user clicks on everything. However, while endpoint protection can prevent malicious email attachments from being executed, it cannot prevent social engineering of the users themselves.
Recently, we have seen the impact of these types of attacks in the spear phishing campaigns against Podesta, Macron and U.S. election officials. Likewise, the FBI recently advised that BEC is now a $5 billion threat.
But this wasn’t always the case. In their formative years, phishing attacks were scattershot: untargeted and not very successful. However, as targeted attack techniques improved, so did the results.
I chronicled some of these techniques in a series of academic publications. I worked with my PhD students to predict emerging attack trends, studying what causes an increased yield and the general inability of typical users to understand identity.
Much of this work was based on experiments identifying weaknesses and risks; this is part of a method we developed to predict online threat trends. By hypothesizing and measuring, we predicted the likely development of threats ranging from ad fraud to political phishing. Predicting trends is of vital importance, as it permits a proactive development of countermeasures.
When we consider endpoint protection, it must include email security that can detect phishing and other social engineering attacks. Unfortunately, traditional countermeasures have been built on the paradigm of blacklisting known-bad URLs. This approach is similar to signature-based anti-virus detection, which has proven itself unreliable time after time.
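To make the limitation concrete, here is a minimal sketch (with hypothetical URLs and a plain Python set standing in for a real blacklist feed) of why exact-match blacklisting fails to generalize: the list enumerates known identities, so any URL not yet reported sails through.

```python
# A toy blacklist of previously reported phishing URLs (hypothetical examples).
BLACKLIST = {
    "http://paypa1-secure.example.net/login",
    "http://bank-0f-america.example.org/verify",
}

def is_blocked(url: str) -> bool:
    # Exact-match lookup: only URLs that have already been
    # reported and listed are stopped.
    return url in BLACKLIST

# A known-bad URL is caught...
print(is_blocked("http://paypa1-secure.example.net/login"))   # True
# ...but the attacker's freshly registered variant is not.
print(is_blocked("http://paypa1-secure2.example.net/login"))  # False
```

The attacker’s cost of registering a new domain is trivial compared to the defender’s cost of discovering and listing it, which is the asymmetry the rest of this column addresses.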
Consider the analogy of a farmer keeping his flock of sheep safe from a wolf in sheep’s clothing. It would be impossible for him to detect every hungry wolf. Similarly, traditional security solutions can’t detect every threat. Blacklisting doesn’t generalize: it merely lists identities. A more sophisticated approach is to look for deviations from known good, while keeping in mind that not everything is dangerous. Instead of looking for “known wolves,” we can look for things that are not “known sheep” even though they look a fair bit like sheep: in other words, a system that identifies sheep’s clothing.
Now replace the farmer with machine learning. Machine learning intrinsically relies on a training set that is consistent with likely future events, making it poorly suited to detecting constantly changing attacks. However, machine learning is perfect for learning what is good, since the good does not change in an adversarial manner. This corresponds to knowing what a sheep looks like. We then detect what may be perceived as a sheep but is not among the known good, where the known good is a known sheep. This is best done using a combination of machine learning (to identify what is a sheep, and what merely looks like a sheep) and a simple expert system that combines the machine learning outputs (what looks like a sheep, but is not a sheep).
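The combination can be sketched in a few lines. This is a deliberately simplified stand-in, not the production approach: a set of trusted sender addresses plays the role of the learned model of known good, string similarity plays the role of the “looks like a sheep” classifier, and one boolean rule is the expert system. All names and addresses are hypothetical.

```python
from difflib import SequenceMatcher

# Stand-in for a learned model of known good: trusted sender identities.
KNOWN_GOOD = {"ceo@acme-corp.com", "payroll@acme-corp.com"}

def similarity(a: str, b: str) -> float:
    # Crude proxy for the "looks like" classifier: string similarity in [0, 1].
    return SequenceMatcher(None, a, b).ratio()

def looks_like_known_good(sender: str, threshold: float = 0.8) -> bool:
    # "Looks like a sheep": closely resembles some trusted identity.
    return any(similarity(sender, good) >= threshold for good in KNOWN_GOOD)

def is_suspicious(sender: str) -> bool:
    # The expert rule combining the two signals:
    # resembles a trusted sender, yet is not actually one of them.
    return looks_like_known_good(sender) and sender not in KNOWN_GOOD

print(is_suspicious("ceo@acme-corp.com"))        # False: a known sheep
print(is_suspicious("ceo@acrne-corp.com"))       # True: sheep's clothing
print(is_suspicious("newsletter@shop.example"))  # False: not near the flock
```

Note that the lookalike address trips the rule precisely because it is close to, but not identical with, a trusted identity, while an unrelated sender is left alone. A blacklist could never flag that lookalike before it was first reported.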
Thus, artificial perception is used to interpret incoming messages, determining whether they are likely to be misinterpreted as safe by the recipients — if so, the system labels them as potential threats, and takes corrective action.
End users cannot be held responsible for their own security, and no amount of training will ever change that. Instead, security professionals must eliminate these blind spots by implementing effective countermeasures that keep malicious emails from ever being delivered to a user’s inbox. Endpoint protection will never be able to catch up with “known wolves,” but machine learning and artificial perception can change the rules of engagement with models of “known good.”