Project Blackfin: Multi-Year Research Project Aims to Unlock the Potential of Machine Intelligence in Cybersecurity
Project Blackfin is an ongoing artificial intelligence (AI) research project challenging the prevailing assumption that deep-learning neural networks are the best way to teach a system to detect anomalous behavior or malicious activity on a network. Run by security firm F-Secure, the project is examining the applicability of distributed swarm intelligence as an alternative approach to decision making.
“People’s expectations that ‘advanced’ machine intelligence simply mimics human intelligence is limiting our understanding of what AI can and should do,” explains Matti Aksela, F-Secure’s VP of artificial intelligence. “Instead of building AI to function as though it were human, we can and should be exploring ways to unlock the unique potential of machine intelligence, and how that can augment what people do.”
Project Blackfin is being run by F-Secure with collaboration between in-house engineers, researchers, data scientists and academic partners. “We created Project Blackfin,” continued Aksela, “to help us reach that next level of understanding about what AI can achieve.” Although it is a long-term project, some early principles are already being incorporated into F-Secure’s own products.
The primary problem with many current anomaly detection AI systems is well-known: too many false positives or too many false negatives. This is difficult to solve because of the way such systems work. Streams of data from endpoints and network traffic are centralized and analyzed on arrival, then stored for later audit or forensic analysis. Because the data arrives from many different sources, correlating events across those sources is difficult. And since attackers often build delays into their attacks, new events may also need to be related to historical events before possibly malicious activity can be contextualized.
The result is that finding the best sensitivity setting for behavior detection is critical. Set the sensitivity high to ensure nothing is missed, and the security team must manually triage huge numbers of false positives. Set it low to reduce the false positives, and the potential for false negatives increases.
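The tradeoff described above can be sketched as a single threshold applied to anomaly scores. This is a hypothetical illustration only; the scores, threshold values, and function names are invented and have nothing to do with F-Secure's actual implementation.

```python
# Hypothetical sketch of the detection-sensitivity tradeoff.
# All scores and thresholds below are invented for illustration.

def classify(scores, threshold):
    """Flag every event whose anomaly score exceeds the threshold."""
    return [score > threshold for score in scores]

# Invented anomaly scores: benign events cluster low, malicious events
# cluster high, but the two distributions overlap -- the root of the problem.
benign = [0.1, 0.2, 0.35, 0.45, 0.6]
malicious = [0.4, 0.55, 0.7, 0.9]
events = benign + malicious

# High sensitivity (low threshold): nothing malicious is missed, but
# several benign events are flagged -> false positives to triage manually.
high_sensitivity = classify(events, threshold=0.3)
false_positives = sum(high_sensitivity[:len(benign)])

# Low sensitivity (high threshold): fewer false alarms, but the stealthy
# malicious event scoring 0.4 slips through -> a false negative.
low_sensitivity = classify(events, threshold=0.5)
false_negatives = len(malicious) - sum(low_sensitivity[len(benign):])
```

Because the benign and malicious score distributions overlap, no single threshold eliminates both error types, which is why triage burden or missed detections follow from either choice.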
Blackfin is exploring distributing the AI as collaborative agents within each endpoint and server of a network. Each agent becomes expert in the acceptable use of its own host. The model is inspired by the patterns of collective behavior found in nature, such as the swarm intelligence of ant colonies or schools of fish. “The project aims to develop these intelligent agents to run on individual hosts,” says F-Secure. “Instead of receiving instructions from a single, centralized AI model, these agents would be intelligent and powerful enough to communicate and work together to achieve common goals.”
Consider the machine-learning predictive text input on individual phones. Each phone quickly learns its owner's writing habits and can rapidly offer probable word completions based on them. This is the type of distributed intelligence being explored by Blackfin: the intelligence lives on the device, but with the added ability to collaborate with the intelligences on adjacent devices. What may be merely suspicious activity in the context of one endpoint can be confirmed as malicious or benign in the context of its effect on adjacent endpoints, each of which has its own endpoint-specific intelligence.
This improves the correlation and contextualization of suspicious activity, since the event is seen immediately, in situ, in the context of both the source and destination hosts. In our phone example, it would be as if the text input intelligence on one phone could collaborate with the intelligence on the destination phone and say, ‘Stop. You should not use that language with your grandmother.’
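The idea of a locally suspicious event being confirmed or dismissed by adjacent hosts can be sketched as follows. Everything here is a toy assumption: the class names, the process-baseline notion of "acceptable use," and the majority-vote rule are invented for illustration and do not describe F-Secure's actual agents.

```python
# Hypothetical sketch of per-host agents corroborating a suspicious event.
# Class names, baselines, and the voting rule are invented for illustration.

class EndpointAgent:
    def __init__(self, host, baseline_processes):
        self.host = host
        # Each agent is "expert in the acceptable use of its own host";
        # here that expertise is simply the set of processes seen before.
        self.baseline = set(baseline_processes)

    def is_suspicious(self, process):
        return process not in self.baseline

    def assess(self, process, peers):
        """Combine the local verdict with those of adjacent agents."""
        if not self.is_suspicious(process):
            return "benign"
        # Suspicious locally -- ask adjacent hosts whether the same
        # process is also anomalous in their own context.
        peer_votes = sum(p.is_suspicious(process) for p in peers)
        return "malicious" if peer_votes > len(peers) / 2 else "benign"

laptop = EndpointAgent("laptop-01", ["chrome.exe", "outlook.exe"])
server = EndpointAgent("srv-01", ["sqlservr.exe"])
share = EndpointAgent("fs-01", ["smbd"])

# An unknown process is suspicious on the laptop; because the adjacent
# hosts also find it anomalous, the local suspicion is escalated.
verdict = laptop.assess("mimikatz.exe", peers=[server, share])

# A process in the laptop's own baseline is cleared without consulting peers.
routine = laptop.assess("outlook.exe", peers=[server, share])
```

The design point the sketch illustrates: no single centralized model is consulted; the decision emerges from agents that each know only their own host.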
“Essentially,” said Aksela, “you’ll have a colony of fast local AIs adapting to their own environment while working together, instead of one big AI making decisions for everyone.”
F-Secure has published the first of what it expects to be regular papers on the progress of Blackfin (PDF). For now, it is exploring different anomaly detection models to detect specific phenomena. “By combining the outputs of multiple different models associated with each of the [different categories],” says the paper, “a contextual understanding of what is happening on a system can be derived, enabling downstream logic to more accurately predict whether a specific event or item is anomalous, and if it is, if it is worth alerting on. This approach enables generic methodologies for detecting attacker actions (or sequences of actions), without baking specific logic into the detection system itself.”
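The paper's idea of combining outputs from multiple per-category models into one contextual verdict could look something like the following. The categories, weights, and alerting threshold are all invented for illustration; the paper does not specify a combination method, and a weighted sum is only one of many possibilities.

```python
# Hypothetical sketch of combining per-category model outputs into a
# contextual verdict. Categories, weights, and threshold are invented.

def contextual_score(model_outputs, weights):
    """Weighted combination of per-model anomaly scores, each in [0, 1]."""
    return sum(weights[name] * score for name, score in model_outputs.items())

# Invented model categories and their relative weights.
weights = {"process": 0.4, "network": 0.35, "file": 0.25}

# No single model is confident on its own, but taken together the
# event looks clearly anomalous -- the "contextual understanding"
# the downstream alerting logic can act on.
event = {"process": 0.6, "network": 0.7, "file": 0.5}
score = contextual_score(event, weights)

ALERT_THRESHOLD = 0.55   # downstream logic decides what is worth alerting on
alert = score > ALERT_THRESHOLD
```
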
Research is ongoing and will continue for several years. Nevertheless, says F-Secure, through Blackfin it has “identified a rich set of interactions between models running on endpoints, servers, and the network that have the potential to vastly improve breach detection mechanisms, forensic analysis capabilities, and response capabilities in future cyber security solutions… we expect to regularly report new results and findings as they present themselves.”
Related: Artificial Intelligence in Cybersecurity is Not Delivering on its Promise
Related: Are AI and Machine Learning Just a Temporary Advantage to Defenders?
Related: The Malicious Use of Artificial Intelligence in Cybersecurity
Related: The Role of Artificial Intelligence in Cyber Security