Machine Learning is a Black Box that is Poorly Understood
2017 was the year in which ‘machine learning’ became the new buzzword, almost to the point that no product could be marketed as new unless it included machine learning.
Although the technology has been used in cybersecurity for a decade or more, machine learning is now touted as the solution rather than part of the solution.
But doubts have emerged. Machine learning is a black box that is poorly understood; and security practitioners like to know exactly what it is they are buying and using.
The problem, according to Hyrum Anderson, technical director of data science at Endgame (a vendor that employs machine learning in its own endpoint protection product), is that users don’t know how it works and therefore cannot properly evaluate it. To make matters worse, machine learning vendors do not really understand what their own products do — or at least, how they come to the conclusions they reach — and therefore cannot explain the product to the satisfaction of many security professionals.
The result, Anderson suggests in a blog post this week, is “growing veiled skepticism, caveated celebration, and muted enthusiasm.”
It’s not that machine learning doesn’t work — it clearly does. But nobody really understands how it reaches its decisions.
Anderson quotes Ali Rahimi. “He compared some trends, particularly in deep learning, to the medieval practice of Alchemy. ‘Alchemy “worked”,’ Ali admitted. ‘Alchemists invented metallurgy, ways to dye textiles, our modern glass-making processes, and medications. Then again, Alchemists also believed they could cure diseases with leeches, and turn base metals into gold.’”
“If the physicist’s mantra is Feynman’s ‘What I cannot create, I do not understand’,” he continues, “then the infosec data scientist should adopt, ‘What cannot be understood, should be deployed with care.’ Implied, but not spoken, is ‘if at all’.”
This problem of not understanding how a conclusion is reached could become much worse if a possible interpretation of Article 22 of the EU’s General Data Protection Regulation (GDPR) is enforced to its full potential. This states, “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
This should not directly affect machine-learning malware detection because data subjects are not directly involved, but could have implications for other applications used by both IT and security departments.
GDPR’s Recital 71 clarifies the requirement. It adds, “In any case, such processing should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision.”
Right now, suggests Anderson, this would be largely impossible. “The point is that although some models may reach impressive predictive performance, it may not be clear what information in the data directly determines the decisions. Ironically, machine learning is such that even with full access to the source code and data, it may still be very difficult to determine ‘why’ a model made a particular decision.”
A partial solution for infosec practitioners would come from greater involvement of the machine learning industry in third-party testing. This would at least let practitioners gauge how effective the algorithms are, even if not how they work. Although some so-called next-gen endpoint protection vendors built on machine learning have been slow and reluctant to embrace third-party testing, Endgame is not one of them.
“Fortunately,” writes Anderson, “there are technique-agnostic methods to compare solutions. We have previously argued that AV can be compared apples-to-apples to ML by comparing both false positive and true positive rates, for example, whereas ‘accuracy’ is wholly inadequate and may hide all manner of sins… In the endpoint security space, vendors are beginning to offer holistic breach tests rather than AV-only tests, which help customers value a broader protection landscape.”
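Anderson’s point about accuracy hiding “all manner of sins” is easy to demonstrate. The sketch below (illustrative only, not Endgame’s methodology; the sample sizes are invented) shows how a useless detector that flags nothing can still score 99% accuracy on an imbalanced sample, while the true and false positive rates expose the failure:

```python
def rates(labels, predictions):
    """Compute accuracy, true positive rate (TPR), and false positive rate (FPR)."""
    tp = sum(1 for l, p in zip(labels, predictions) if l and p)
    fp = sum(1 for l, p in zip(labels, predictions) if not l and p)
    positives = sum(labels)
    negatives = len(labels) - positives
    accuracy = sum(1 for l, p in zip(labels, predictions) if l == p) / len(labels)
    tpr = tp / positives if positives else 0.0
    fpr = fp / negatives if negatives else 0.0
    return accuracy, tpr, fpr

# Hypothetical imbalanced test set: 990 benign files (0), 10 malicious files (1)
labels = [0] * 990 + [1] * 10

# A "detector" that never flags anything still looks good on accuracy alone
predictions = [0] * 1000

acc, tpr, fpr = rates(labels, predictions)
print(f"accuracy={acc:.1%}  TPR={tpr:.1%}  FPR={fpr:.1%}")
# → accuracy=99.0%  TPR=0.0%  FPR=0.0%
```

Because malware is rare relative to benign files, accuracy is dominated by the majority class; reporting TPR and FPR side by side, as Anderson argues, is what makes an apples-to-apples comparison between AV and ML products possible.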
But ultimately, it is the lack of visibility into the working of machine learning and AI algorithms that must change. “My call for 2018,” says Anderson, “is to continue to address what is still particularly needed in ML infosec research: more cross-pollination between academia and industry, more open community engagement from security vendors, and more open datasets for reproducible research. By doing this, we’ll continue to move ML in infosec from the dark arts of Alchemy to rigorous Science.”
Endgame was founded in 2008 by Chris Rouland and other executives who previously worked with the CIA and Internet Security Systems. It originally discovered and sold 0-day vulnerabilities, but shifted away from this around 2014. Under current CEO Nate Fick’s leadership, it has grown its commercial offering using more than $100 million in funding, including a $23 million Series B funding round in March 2013 followed by a $30 million Series C round in November 2014.
Related: Inside The Competitive Testing Battlefield of Endpoint Security
Related: Threat Hunting with Machine Learning, AI, and Cognitive Computing