Machine Learning & Security: Making Users Part of the Equation

The Best Security Doesn’t Exclude Users, It Empowers Them

If you’ve been paying attention, we’ve now entered the age of artificial intelligence, machine learning, and expanding automation. The capacity of modern computing to recognize patterns and make decisions based on mind-boggling amounts of data has created new solutions and rising customer expectations. Too many car accidents and mind-numbing traffic? Self-driving cars might be the answer. Can’t get to the bank before it closes? There’s an app for that. Newspapers struggling to make ends meet? AI-based news bots churn out formulaic stories to reduce the cost of labor.

Cybersecurity is no different. Always an area of rapid advancement, the threat landscape now expands dizzyingly, at a pace of hundreds of thousands of new attack variants every day. As the types of attacks broaden and their sophistication deepens, we humans obviously need some help. Enter data science and supporting technologies that have driven breakthrough advances in security, processing and analyzing all of this data orders of magnitude faster than humans ever could.

The downside of this expanding support is that we may become too dependent on our machines. Technology positioned to insulate users, treating them as incompetent victims who must be encased in a protective cocoon, causes their security muscles to atrophy. When these tools further undermine users’ agency by making them feel as though they’re part of the problem, security becomes an external and intractable concern, like an asteroid impact or a lightning strike. If this happens, the organization actually becomes less secure, given the nearly infinite ways a careless (or hapless) employee can negate even the most solid protection. Effective cybersecurity relies on humans participating in a meaningful and cooperative way.

We understand this viscerally in other parts of our lives. When we don’t know what will happen when the system fails, we balk. It’s the reason more than three-quarters of Americans have said they’re afraid to get behind the wheel of a self-driving car — we’re just not quite ready to relinquish complete control to the machines yet. 

The same wariness should hold true in cybersecurity. The best security technology is never 100 percent foolproof or unfailing, as cybercriminals spend their considerable time and intellect trying to outsmart or evade it. The perpetual game of hacker-and-defender cat-and-mouse creates the need for smart users to tip the balance. With all of our collective focus on machine learning, we simply can’t overlook human learning’s critical role in guarding against attacks and protecting the organization.

That’s why organizations must strike a balance between investing in technology and investing in people (namely via education and training) in order to protect themselves over time. There is a complementary relationship between smarter security and smarter users, one that provides the best overall defense.

Consider the age-old expression, “Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime.” Most cybersecurity vendors want to give users the fish — they want to say that their solution can solve the problem. Other vendors highlight the vulnerability of the user and advocate training users as a primary defense. Neither is the right answer. The untrained user runs out of fish eventually, and the user who only trains starves before he or she becomes an adequate fisherman.

The balance is reached when the technologies protecting the organization also reinforce the good security lessons that users are being taught. While users get more practiced in secure behaviors, the technology protects them, offering security and guidance when they stray. Eventually, the balance shifts: the technology has to bear less and less of the load as users’ awareness eliminates more and more of the practical threat surface.

This approach to improving security is readily adopted with balanced investment in the right technologies and corresponding investment in user awareness, understanding, and empowerment to keep the organization safe. The use of AI and machine learning has enabled breakthrough advances in security, and will continue to do so, but ultimately it still comes down to the human factor: helping users understand the importance of security and how to achieve it. Using artificial intelligence to spur and bolster human intelligence is the best way to guarantee the combined effectiveness of both.

Jack Danahy is the co-founder and CTO of Barkly, an endpoint protection platform that is transforming the way businesses protect endpoints. A 25-year innovator in computer, network and data security, Jack was previously the founder and CEO of two successful security companies: Qiave Technologies (acquired by Watchguard Technologies in 2000) and Ounce Labs (acquired by IBM in 2009). Jack is a frequent writer and speaker on security and security issues, and has received multiple patents in a variety of security technologies. Prior to founding Barkly, he was the Director of Advanced Security for IBM, and led the delivery of security services for IBM in North America.