Bias in Artificial Intelligence: Can AI be Trusted?

Artificial intelligence is more artificial than intelligent.

In June 2022, Microsoft released the Microsoft Responsible AI Standard, v2 (PDF). Its stated purpose is to “define product development requirements for responsible AI”. Perhaps surprisingly, the document contains only one mention of bias in artificial intelligence (AI): algorithm developers need to be aware of the potential for users to over-rely on AI outputs (known as ‘automation bias’).

In short, Microsoft seems more concerned with bias from users aimed at its products than with bias within its products adversely affecting users. This is good commercial responsibility (don’t say anything negative about our products), but poor social responsibility (there are many examples of algorithmic bias having a negative effect on individuals or groups of individuals).

Bias is one of three primary concerns about artificial intelligence in business that have not yet been solved: hidden bias creating false results; the potential for misuse (by users) and abuse (by attackers); and algorithms returning so many false positives that their use as part of automation is ineffective.

Academic concerns

When AI was first introduced into cybersecurity products it was described as a defensive silver bullet. There’s no doubt that it has some value, but there is a growing reaction against faulty algorithms, hidden bias, false positives, abuse of privacy, and potential for abuse by criminals, law enforcement and intelligence agencies.

According to Gary Marcus, professor of psychology and neural science at New York University (writing in Scientific American, June 6, 2022), the problem lies in the commercialization of a still developing science:

“The subplot here is that the biggest teams of researchers in AI are no longer to be found in the academy, where peer review used to be coin of the realm, but in corporations. And corporations, unlike universities, have no incentive to play fair. Rather than submitting their splashy new papers to academic scrutiny, they have taken to publication by press release, seducing journalists and sidestepping the peer review process. We know only what the companies want us to know.”

The result is that we hear about AI positives, but not about AI negatives.

Emily Tucker, executive director at the Center on Privacy & Technology at Georgetown Law, came to a similar conclusion. On March 8, 2022, she wrote,

Starting today, the Privacy Center will stop using the terms ‘artificial intelligence’, ‘AI’, and ‘machine learning’ in our work to expose and mitigate the harms of digital technologies in the lives of individuals and communities… One of the reasons that tech companies have been so successful in perverting the original imitation game [the Turing Test] as a strategy for the extraction of capital is that governments are eager for the pervasive surveillance powers that tech companies are making convenient, relatively cheap, and accessible through procurement processes that evade democratic policy making or oversight.

The pursuit of profit is perverting the scientific development of artificial intelligence. With such concerns, we need to ask ourselves if we can trust AI in the products we use to deliver accurate, unbiased decisions without the potential for abuse (by ourselves, by our governments and by criminals).

AI Fails

Autonomous vehicle 1. A Tesla on Autopilot drove directly toward a worker carrying a stop sign, and only slowed down when the human driver intervened. The AI had been trained to recognize a human and trained to recognize a stop sign, but had not been trained to recognize a human carrying a stop sign.

Autonomous vehicle 2. On March 18, 2018, an Uber autonomous vehicle drove into and killed a pedestrian pushing a bicycle. According to NBC at the time, the AI was unable to “classify an object as a pedestrian unless that object was near a crosswalk”.

Educational assessment. During the Covid-19 lockdowns in 2020, students in the UK were awarded examination results assigned by an AI algorithm. Many (about 40%) were considerably lower than expected. The algorithm placed undue weight on the historical performance of different schools. As a result, students from private schools and previously high-performing state schools had an unmerited advantage over students from other schools, who suffered accordingly.

Tay. Tay was an AI chatbot launched on Twitter by Microsoft in 2016. It lasted just 16 hours. It was intended to be a slang-filled system that learned by imitation. Instead, it was rapidly shut down when it tweeted, “Hitler was correct to hate the Jews.”

Candidate selection. Amazon wished to automate its candidate selection for job vacancies – but the algorithm turned out to be sexist and racist, favoring white males.

Mistaken identity. During the Covid-19 lockdowns, a Scottish football team live-streamed a match using AI-based ball-tracking for the camera. But the system repeatedly mistook the linesman’s bald head for the ball and focused on him rather than the play.

Application rejection. In 2016, Carmen Arroya requested permission for her son – who had just woken from a six-month, accident-induced coma – to move into her apartment. The request was rapidly refused without explanation. Her son was sent to a rehabilitation center for more than a year while Arroya challenged the system. The landlord didn’t know the reason: he was using an AI screening system supplied by a third party. Lawyers eventually found the cause: an earlier citation for shoplifting that had been withdrawn. But the AI system had simply refused the request. Salmun Kazerounian, a staff attorney for the Connecticut Fair Housing Center (CFHC) representing Arroya, commented, “He was blacklisted from housing, despite the fact that he is so severely disabled now and is incapable of committing any crime.”

There are many, many more examples of AI fails, but a quick glance at these highlights two major causes: a failure in design caused by unintended biases in the algorithm, and a failure in learning. The autonomous vehicle examples were a failure in learning. They can be rectified over time by increasing the learning – but at what cost if the failure is only discovered when it happens? And it must be asked if it is even possible to learn every possible variable that does, or might in the future, exist.
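
As a minimal sketch of the learning problem, consider a classifier trained on only two categories: whatever it is shown, it can only answer with the classes it was taught. The two-dimensional data and the “pedestrian” versus “stop sign” framing below are invented purely for illustration.

```python
# Minimal sketch of a "failure in learning": a classifier trained on only two
# known categories still returns one of those categories for something it has
# never seen. Invented 2-D data standing in for "pedestrian" vs "stop sign".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

pedestrians = rng.normal([-3.0, 0.0], 0.5, (200, 2))
stop_signs  = rng.normal([3.0, 0.0], 0.5, (200, 2))
X = np.vstack([pedestrians, stop_signs])
y = np.array([0] * 200 + [1] * 200)        # 0 = pedestrian, 1 = stop sign

model = LogisticRegression().fit(X, y)

# An input far outside the training distribution -- loosely, "a pedestrian
# carrying a stop sign". The model has no way to answer "neither" or "both";
# it must pick one of the two labels it was taught.
novel = np.array([[0.0, 8.0]])
print(model.predict(novel), model.predict_proba(novel).round(3))
```

The gap only becomes visible when such an input actually occurs in the real world, which is exactly the cost question raised above.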

The exam results and Amazon recruitment events were failures in design. The AI included unintended biases that distorted the results. The question here is whether developers can ever exclude biases from their algorithms when they are unaware of their own biases.
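
To illustrate how a design choice can bake bias in, here is a minimal, hypothetical sketch loosely modelled on the exam-results case: a grade predictor that is handed each school’s historical average as a feature will let that feature dominate, whatever the individual student has done. All names and numbers below are invented.

```python
# Hypothetical sketch of design bias: a grade predictor given each school's
# historical average as a feature. All data is invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000

coursework = rng.uniform(40, 100, n)            # the student's own work (0-100)
school_history = rng.choice([55.0, 85.0], n)    # low- vs high-performing school

# The training targets were themselves shaped by school history, so the bias
# sits in the data as well as in the choice of features.
grades = 0.3 * coursework + 0.7 * school_history + rng.normal(0, 3, n)

X = np.column_stack([coursework, school_history])
model = LinearRegression().fit(X, grades)

# Two students with identical coursework but different schools.
print(model.predict(np.array([[90.0, 55.0], [90.0, 85.0]])).round(1))
# The second prediction is roughly 20 points higher for identical coursework,
# purely because of where the student studied.
```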

Misuse and abuse of AI

Misuse entails using the AI for purposes not originally intended by the developer. Abuse involves actions such as poisoning the data used to teach the AI. Generally speaking, misuse is performed by the lawful owner of the product, while abuse involves action by a third party, such as cybercriminals, designed to make the product return incorrect decisions.
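
To make the poisoning half of that definition concrete, here is a minimal sketch (invented data, not any specific product) in which an attacker flips training labels so that the finished model waves through inputs it should flag.

```python
# Minimal sketch of training-data poisoning by label flipping. The data and
# the "benign"/"malicious" framing are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

benign    = rng.normal(-2.0, 0.5, (200, 1))
malicious = rng.normal( 2.0, 0.5, (200, 1))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)            # 0 = benign, 1 = malicious

clean = LogisticRegression().fit(X, y)

# The attacker relabels most of the malicious training samples as benign.
y_poisoned = y.copy()
y_poisoned[200:350] = 0
poisoned = LogisticRegression().fit(X, y_poisoned)

probe = np.array([[1.5]])                      # clearly on the malicious side
print("clean model:   ", clean.predict(probe))      # flags it as malicious
print("poisoned model:", poisoned.predict(probe))   # now waves it through
```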

Misuse

SecurityWeek spoke to Sohrob Kazerounian, brother of the Kazerounian involved in the CFHC case, and himself AI research lead at Vectra AI. Kazerounian believes that the detection and response type of AI used in many cybersecurity products is largely immune to the inclusion of bias that plagues other domains. The inclusion of hidden bias is at its worst when a human-developed algorithm is passing judgment on other humans.

Here, he thinks the real question is one of ethics. He points out that such applications are designed to automate processes that are currently performed manually; and that the manual process has always included bias. “Credit applications, and rental applications… these areas have always had discriminatory practices. The US has a long history of redlining and racist policies, and these existed long before AI-based automation.”

His technical concern is that bias is harder to find and understand when buried deep in an AI algorithm than when it is found in a human being. “You might be able to see the matrix operations in a deep learning model,” he continued. “You might be able to see the calculations that go on and lead to the actual classification – but that won’t necessarily explain why. It’ll just explain the mechanism. I think at a high level, one of the things that we’re going to have to do as a society is to ask, is this something that we think it’s appropriate for AI to act on?”
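
As a small, hedged sketch of that point (everything below is invented, and it is not modelled on any real screening product): the weights of a trained model can be printed in full, yet they describe the mechanism of a decision rather than a human-readable “why”.

```python
# Sketch of the mechanism-versus-meaning point: the parameters of a trained
# model are fully inspectable, but they do not explain a decision in human
# terms. Invented data, invented "applicant" framing.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 8))                    # invented applicant features
y = (X @ rng.normal(size=8) + rng.normal(size=500) > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                      random_state=0).fit(X, y)

applicant = rng.normal(size=(1, 8))
print("decision:", model.predict(applicant)[0])

# Every matrix operation behind that decision can be inspected...
for layer, weights in enumerate(model.coefs_):
    print(f"layer {layer} weight matrix, shape {weights.shape}")
# ...but an 8x16 and a 16x1 block of floating-point numbers does not explain,
# in human terms, why this particular applicant was accepted or rejected.
```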

The inability to understand how deep learning models reach their conclusions was confirmed by an MIT/Harvard study published in the Lancet on May 11, 2022. The study found that AI could identify a patient’s race from medical images such as X-rays and CT scans alone – but nobody understood how. The possible effect is that medical systems may be basing decisions on more than expected – including race, ethnicity, sex, whether the patient is incarcerated, and more.

Anthony Celi, associate professor of medicine at Harvard Medical School, and one of the authors, commented, “Just because you have representation of different groups in your algorithms, that doesn’t guarantee it won’t perpetuate or magnify existing disparities and inequities. Feeding the algorithms with more data with representation is not a panacea. This paper should make us pause and truly reconsider whether we are ready to bring AI to the bedside.”

The problem also encroaches on the cybersecurity domain. On April 22, 2022, Microsoft added its Communications Compliance – Leavers Classifier (part of the Purview suite of governance products) to its product roadmap. The product reached Preview stage in June 2022, and is slated for General Availability in September 2022. 

In Microsoft’s own words, “The leavers classifier detects messages that explicitly express intent to leave the organization, which is an early signal that may put the organization at risk of malicious or inadvertent data exfiltration upon departure.” 

In a separate document published on April 19, 2022, Microsoft noted, “Microsoft Purview brings together data governance from Microsoft Data and AI, along with compliance and risk management from Microsoft Security.” There is nothing that explicitly ties the use of AI to the Leavers Classifier, but circumstantial evidence suggests it is used. 

SecurityWeek asked Microsoft for an interview “to explore the future uses and possible abuses of AI”. The reply was, “Microsoft has nothing to share on this at the moment, but we will keep you in the loop on upcoming news in this area.”

With no direct knowledge of exactly how the Leavers Classifier will work, what follows should not be taken as a critique or criticism of the Microsoft product, but a look at potential problems for any product that uses what amounts to a psycholinguistic AI analysis of users’ communications. 

SecurityWeek highlighted that such products were inevitable back in April 2017: “Users’ decreasing expectation of privacy would suggest that sooner or later psycholinguistic analysis for the purpose of identifying potential malicious insiders before they actually become malicious insiders will become acceptable.”

The potential difficulties include unethical purpose, false positives, and misuse. 

On ethics, the question that must be asked is whether this is a right and proper use of technology. “My intuition,” said Kazerounian, “is that monitoring communications to determine whether someone is considering leaving — especially if the results could have negative outcomes — would not be considered by most people as an appropriate thing to do.” Nevertheless, it is allowed even under GDPR, with just a few limitations.

False positives in AI are generally caused by unintended bias. This can be built into the algorithm by the developers, or ‘learned’ by the AI through an incomplete or error-strewn training dataset. We can assume that the big tech companies have massive datasets on both people and communications. 

Unintended bias in the algorithm would be difficult to prevent and even harder to detect. “I think there will always be some degree of error in these systems,” said Kazerounian. “Predicting whether someone is going to leave is not something that humans can do effectively. It is difficult to see why a future AI system won’t misprocess some of the communications, some of the personal motivations and so on in the same way that humans do.” 

He added, “I may have personal reasons for talking a certain way at work. It may have nothing to do with my desire to stay or not. I might have other motivations for staying or not, that simply won’t be reflected in the types of data that those systems have access to. There’s always going to be a degree of error.”
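
To make the false-positive risk concrete, here is a purely hypothetical sketch of the naive end of such a system – a keyword score over messages. It is not based on any knowledge of how the Leavers Classifier (or any other real product) works; the terms, threshold and messages are invented.

```python
# Purely hypothetical sketch of a naive keyword-based "leaver intent" score.
# It is not based on the Microsoft Leavers Classifier, whose internals are
# not public; it only illustrates why this class of system misfires.

LEAVER_TERMS = ("leaving", "resign", "notice", "last day", "new role")
THRESHOLD = 0.3   # illustrative alert threshold

def leaver_score(message: str) -> float:
    """Fraction of leaver-associated terms that appear in the message."""
    text = message.lower()
    return sum(term in text for term in LEAVER_TERMS) / len(LEAVER_TERMS)

genuine  = "I'm handing in my notice on Friday; my last day is the 30th."
innocent = ("I'm leaving early today; it's my last day before vacation, "
            "just to give you notice.")

for msg in (genuine, innocent):
    score = leaver_score(msg)
    print(f"{score:.1f}  flagged={score >= THRESHOLD}  {msg}")
# Both messages are flagged; only one of them is a genuine leaver.
```

A real system would be statistical rather than keyword-based, but the underlying problem Kazerounian describes – motivations that the available data simply cannot capture – remains.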

Misuse involves how companies will use the data provided by the system. It is difficult to believe that ‘high-likelihood leavers’ will not be the first staff laid off in economic downturns. It is difficult to believe that the results won’t be used to help choose staff for promotion or relegation, or that the results won’t be used in pay reviews. And remember that all of this may be based on a false positive that we can neither predict nor understand.

There is a wider issue as well. If companies can obtain this technology, it is hard to believe that law enforcement and intelligence agencies won’t do likewise. The same mistakes can be made, but with more severe results – and far more extreme in some countries than in others.

Abuse

Alex Polyakov, CEO and founder of Adversa.ai, is more worried about the intentional abuse of AI systems by manipulating the system learning process. “Research studies conducted by scientists and proved by our AI red team during real assessments of AI applications,” he told SecurityWeek, “prove that sometimes in order to fool an AI-driven decision-making process, be it either computer vision or natural language processing or anything else, it’s enough to modify a very small set of inputs.”

He points to the classic phrase ‘eats shoots and leaves’, where the mere inclusion or omission of punctuation changes the meaning between a terrorist and a vegan. “The same works for AI but the number of examples is enormously bigger for each application and finding all of them is a big challenge,” he continued.

Polyakov has already demonstrated twice how easy it is to fool AI-based facial recognition systems – first by showing how people can make the system believe they are Elon Musk, and second by showing how an apparently identical image can be interpreted as multiple different people.
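
A minimal, hedged sketch of the underlying idea – an evasion attack that nudges every input feature by a tiny amount in the direction the model’s weights point – is shown below on an invented linear classifier. It is not a reconstruction of either demonstration.

```python
# Minimal sketch of an adversarial (evasion) perturbation against an invented
# linear classifier -- not a reconstruction of any real facial recognition system.
import numpy as np

rng = np.random.default_rng(2)
d = 10_000                           # stand-in for the pixels of an image

w = rng.normal(size=d)               # invented model weights
x = rng.normal(size=d)               # an input the model currently classifies

def predict(v: np.ndarray) -> int:
    return int(w @ v > 0)            # 1 = "match", 0 = "no match"

score = w @ x
# The smallest uniform per-feature nudge, aligned with the weights, that pushes
# the score across the decision boundary (a sign-gradient style attack).
eps = abs(score) / np.sum(np.abs(w)) * 1.01
x_adv = x - np.sign(score) * eps * np.sign(w)

print(predict(x), predict(x_adv))    # the decision flips...
print(round(float(eps), 4))          # ...for a tiny change to each individual feature
```

On a real deep model the same effect is achieved by following the gradient rather than the raw weights, which is why researchers can produce images that look identical to a human but not to the classifier.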

This principle – manipulating either the inputs an AI sees or the data it learns from – can be applied by cybercriminals to almost any cybersecurity AI tool.

The bottom line is that artificial intelligence is more artificial than intelligent. We are many years away from having computers with true artificial intelligence – even if that is possible. For today, AI is best seen as a tool for automating existing human processes on the understanding that it will achieve the same success and failure rates that already exist – but it will do so much faster, and without the need for a costly team of analysts to achieve those successes and make those mistakes. Microsoft’s warning on users’ automation bias – the over-reliance on AI outputs – is something that every user of AI systems should consider.

Related: Cyber Insights 2022: Adversarial AI

Related: Hunting the Snark with ML, AI, and Cognitive Computing

Related: Are AI and ML Just a Temporary Advantage to Defenders?

Related: The Malicious Use of Artificial Intelligence in Cybersecurity

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security; and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
