The Current Limitations and Future Potential of AI in Cybersecurity

A recent NIST study shows the current limitations and future potential of machine learning in cybersecurity.

Published Tuesday in the Proceedings of the National Academy of Sciences, the study focused on facial recognition, comparing the accuracy of a group of 184 humans against that of four of the latest facial recognition algorithms. The humans comprised 87 trained professionals, 13 so-called ‘super recognizers’ (who simply have an exceptional natural ability), and a control group of 84 untrained individuals.

Reassuringly, the trained professionals performed significantly better than the untrained control group. Surprisingly, however, neither human experts nor machine algorithms alone provided the most accurate results. The best performance came from combining a single expert with the best algorithm.

“Our data show that the best results come from a single facial examiner working with a single top-performing algorithm,” commented NIST electronic engineer P. Jonathon Phillips. “While combining two human examiners does improve accuracy, it’s not as good as combining one examiner and the best algorithm.”

“The NIST study used a form of deep learning known as convolutional neural networks that has been proven effective for image recognition because it performs comparative analysis based on pixels rather than the entire image. This is like looking at the individual trees rather than the forest, to use a colloquialism,” explains Chris Morales, head of security analytics at Vectra.
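
As a rough illustration of what Morales is describing, the sketch below (in PyTorch, and not the NIST study's actual model) shows a toy convolutional network that builds a face embedding from small pixel neighbourhoods and then compares two faces by the similarity of their embeddings. The layer sizes, the 112x112 input and the 0.6 threshold are arbitrary assumptions made for the example.

```python
# Minimal sketch (not the NIST study's actual model): a small convolutional
# network that maps face images to fixed-length embeddings, so two faces can
# be compared via pixel-derived local features rather than as whole images.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFaceEmbedder(nn.Module):
    def __init__(self, embedding_dim=128):
        super().__init__()
        # Convolutions look at small pixel neighbourhoods ("the trees"),
        # pooling gradually builds a view of the whole face ("the forest").
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, embedding_dim)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return F.normalize(self.fc(x), dim=1)  # unit-length embedding

# Comparing two hypothetical 112x112 RGB face crops: higher cosine similarity
# suggests the same person. The network is untrained and the inputs random,
# so the output here only demonstrates the mechanics, not a real decision.
model = TinyFaceEmbedder().eval()
face_a = torch.rand(1, 3, 112, 112)
face_b = torch.rand(1, 3, 112, 112)
with torch.no_grad():
    similarity = F.cosine_similarity(model(face_a), model(face_b)).item()
print("same person" if similarity > 0.6 else "different people", similarity)
```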

The question asked by the NIST researchers was how many humans or machines, combined, would produce the lowest error rate when comparing two photos to determine whether they are of the same person, with no errors being a perfect score. The outcome of their research was that combining man and machine gives a single worker a higher rate of accuracy, and therefore higher productivity. This result occurred because man and machine have different strengths and weaknesses that can be leveraged and mitigated by working together.

“What the researchers found,” continued Morales, “was the best machine performed in the same range as the best humans. In addition, they found that combining a single facial examiner with machine learning yielded a perfect accuracy score of 1.0 (no errors). To achieve this same 1.0 accuracy level without machine learning required either four trained facial examiners or three super recognizers.”
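
The paper describes its own fusion procedure; the sketch below is only meant to illustrate the general idea of combining one examiner with one algorithm. It assumes a hypothetical examiner rating scale of -3 to +3 and a hypothetical algorithm score between 0 and 1, normalizes both to the same range and averages them.

```python
# Illustrative sketch of human-plus-algorithm fusion (not the paper's exact
# procedure): an examiner rates a face pair on a -3..+3 "same person" scale,
# the algorithm returns a similarity score, and the two are averaged after
# normalizing each onto the 0..1 range.
def normalize(value, low, high):
    """Map a raw score onto 0..1 so human and machine scales are comparable."""
    return (value - low) / (high - low)

def fused_decision(examiner_rating, algorithm_score, threshold=0.5):
    human = normalize(examiner_rating, low=-3.0, high=3.0)   # hypothetical scale
    machine = normalize(algorithm_score, low=0.0, high=1.0)  # hypothetical scale
    combined = (human + machine) / 2.0
    return combined >= threshold, combined

# Example: examiner leans "same" (+2), algorithm is fairly confident (0.82).
is_same, score = fused_decision(examiner_rating=2, algorithm_score=0.82)
print(is_same, round(score, 3))  # True 0.827
```

In practice the weighting between the human and the machine, and the decision threshold itself, would have to be tuned on labelled data rather than fixed at an even split.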

If these results are typical across the increasing use of artificial intelligence (AI) in cybersecurity (and Morales believes the study is representative of the value of AI), it implies we are rapidly approaching a tipping point. Right now, algorithms are not significantly better than trained professionals, but if used by a trained professional they can improve performance and reduce required manpower levels.

While AI itself is not new, it has grown dramatically in use and capability over just the last few years. “If we had done this study three years ago, the best computer algorithm’s performance would have been comparable to an average untrained student,” NIST’s Phillips said. “Nowadays, state-of-the-art algorithms perform as well as a highly trained professional.”

The implication is that we are not yet ready to rely solely on the decisions of machine learning algorithms, but that day is surely coming if algorithm quality continues to improve. We have, however, already reached the point where AI can decrease our reliance on human resources. The best results came not from a team of experts combined with machine learning, but from a single professional working with the best algorithm.

“It is often the case that the optimum solution to a new problem is found with the combination of human and machine,” comments Tim Sadler, CEO and co-founder of machine learning email security firm Tessian. “However, as more labelled data becomes available, and more researchers look into the problem, machine learning models generally become more accurate and autonomous, reducing the need for a human ‘operator’. A good example of this is medical imaging diagnosis, where deep learning models now greatly outperform radiologists in the early diagnosis of cancerous tissues and will soon become the AI ‘silver bullet’.”

He doesn’t believe that facial recognition algorithms have reached that stage yet.

“Facial recognition technology is fairly new, and although machine learning is quickly disrupting the industry, clearly the technology is not perfect; for example, there have been instances where facial recognition technology has authenticated through family likeness,” Sadler said. “It will take years of close partnership between facial recognition experts and their machine learning counterparts working together, with the experts overriding the machine’s mistakes and correctly labelling the data, before a similar disruption is seen.”

This NIST study is specifically about facial recognition, but the basic principles are likely to be similar across all uses of machine learning in biometrics and cybersecurity. “First, the machine learning algorithm gathers facts about a situation through inputs and then compares this information to stored data and decides what the information signifies,” explains Dr. Kevin Curran, senior IEEE member and professor of cybersecurity at Ulster University. “The computer runs through various possible actions and predicts which action will be most successful based on the collected information.

“AI is therefore increasingly playing a significant role in cybersecurity, especially as more challenges appear with authenticating users. However, these AI techniques must be adaptive and self-learning in complex and challenging scenarios where people have parts of their face obscured or the lighting is quite poor to preserve accuracy and a low false acceptance rate.”
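
Curran's two points, comparing a new input against stored data and keeping the false acceptance rate low, can be illustrated with a short sketch. Everything here (the enrolled template, the score lists, the thresholds) is invented for illustration; it is not drawn from the study or from any production system.

```python
# Illustrative sketch of the workflow Curran describes: compare new input
# features against stored reference data, decide whether they signify the
# same person, and measure the false acceptance rate (FAR) that a given
# similarity threshold produces. All values below are invented.
import math

def similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

ENROLLED_TEMPLATE = [0.21, 0.74, 0.55]   # stored data for a hypothetical user

def accept(measured_features, threshold=0.95):
    """Decide what the new input signifies: the enrolled user, or not."""
    return similarity(measured_features, ENROLLED_TEMPLATE) >= threshold

print(accept([0.20, 0.70, 0.57]))  # True -> close enough to the stored template

# False acceptance rate: fraction of impostor attempts wrongly accepted.
# False rejection rate: fraction of genuine attempts wrongly refused.
impostor_scores = [0.52, 0.71, 0.88, 0.96, 0.61]   # invented different-person scores
genuine_scores  = [0.99, 0.97, 0.93, 0.98, 0.90]   # invented same-person scores

for threshold in (0.90, 0.95, 0.99):
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    print(f"threshold={threshold}: FAR={far:.2f}, FRR={frr:.2f}")
```

The trade-off in the output is the one Curran alludes to: raising the threshold lowers the false acceptance rate but begins rejecting genuine users, which is exactly why obscured faces and poor lighting make the balance hard to hold.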

He cites the use of AI in Apple’s Face ID. “Face ID works by projecting around 30,000 infrared dots on a face to produce a 3D mesh. This resultant facial recognition information is stored locally in a secure enclave on the new Apple A11 Bionic chip. The infrared sensor on the front is crucial for sensing depth. Earlier facial recognition features, e.g. Samsung’s last year, were too easily fooled by face masks and 2D photos. Apple claim their Face ID will not succumb to these methods. However, some already claim that 3D printing someone’s head may fool it, but we have yet to see that hack tested.”
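
Curran’s point about depth is why a flat 2D photo fails: every projected dot comes back at roughly the same distance. The sketch below is not Apple’s implementation, only an illustration of a depth-variation check that could reject a flat spoof before any face matching takes place; all numbers and the threshold are invented.

```python
# Illustrative sketch only (not Apple's implementation): a depth-aware check.
# A flat 2D photo produces almost no depth variation across the sampled
# points, so it can be rejected before any face matching happens.
def looks_three_dimensional(depth_samples_mm, min_depth_range_mm=15.0):
    """Reject inputs whose projected dots all land at roughly the same depth."""
    return (max(depth_samples_mm) - min(depth_samples_mm)) >= min_depth_range_mm

real_face_depths = [412.0, 428.5, 401.2, 439.8, 418.3]      # nose closer, ears farther
printed_photo_depths = [600.1, 600.4, 599.8, 600.2, 600.0]  # flat sheet of paper

print(looks_three_dimensional(real_face_depths))      # True  -> proceed to matching
print(looks_three_dimensional(printed_photo_depths))  # False -> reject as 2D spoof
```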

This NIST study was solely about the efficacy of facial recognition algorithms, and the results cannot be automatically applied to other machine learning algorithms. Nevertheless, the general conclusions are likely to apply across many other uses for AI in both physical security and cybersecurity. AI is improving rapidly. It cannot yet replace human expertise completely, but it is most effective when used in conjunction with a single human expert. The implication is very clear: the correct combination of man and machine already has the potential to both improve performance and reduce payroll costs.

Related: The Impending Facial Recognition Singularity 

Related: It’s Time For Machine Learning to Prove Its Own Hype 

Related: Demystifying Machine Learning: Turning the Buzzword Into Benefits

Related: The Malicious Use of Artificial Intelligence in Cybersecurity

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security; and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
