Ethical AI, Possibility or Pipe Dream?

Coming to a global consensus on what makes for ethical AI will be difficult. Ethics is in the eye of the beholder.

Ethical artificial intelligence (ethical AI) is a somewhat nebulous term designed to indicate the inclusion of morality into the purpose and functioning of AI systems. It is a difficult but important concept that is the subject of governmental, academic and corporate study. SecurityWeek talked to IBM’s Christina Montgomery to seek a better understanding of the issues.

Montgomery is IBM’s chief privacy officer (CPO) and chair of the IBM AI ethics board. On privacy, she is also an advisory council member of the Center for Information Policy Leadership (a global privacy and data policy think tank) and an advisory board member of the Future of Privacy Forum. On AI, she is a member of the U.S. Chamber of Commerce AI Commission and of the National AI Advisory Committee.

Privacy and AI are inextricably linked. Many AI systems are designed to pass ‘judgment’ on people and are trained on personal information. It is fundamental to a fair society that privacy is not abused in training AI algorithms, and that the resulting judgments are accurate, free of bias, and not misused. This is the purpose of ethical AI.

But ‘ethics’ is a difficult concept. It is akin to a ‘moral compass’ that fundamentally does not exist outside the viewpoint of each individual person. It differs between cultures, nations, corporations and even neighbors, and cannot have an absolute definition. We asked Montgomery: if you cannot define ethics, how can you produce ethical AI?

“There are different perceptions and different moral compasses around the world,” she said. “IBM operates in 170 countries. Technology that is acceptable in one country is not necessarily acceptable in another country. So, that’s the baseline – you must always conform to the laws of the jurisdiction in which you operate.”

Beyond that, she believes that ethical AI is a sociotechnical issue that must balance the wellbeing of people with the operations of technology. “The first question,” she said, “is to ask ourselves not whether this is something we can do with technology, but whether this is something we should do. This is what we do at IBM – we use value-based principles to govern how we operate and what we produce.”

She gives a few examples of this stance in operation. “We were the first major company to come out and say, ‘We are no longer going to sell general purpose facial recognition API’.” This was a value-based decision: IBM’s own moral compass and ethical principles led it to that position.

“There are many companies in the facial recognition space,” she continued. “We chose not to be there because it didn’t align with our principles. We didn’t feel the technology was ready to be deployed in a fair way, and it could also be used in contexts like mass surveillance – which we did not find acceptable from our moral position.”

Compare this to a statement from Cate Cadell, formerly a technology and politics correspondent for Reuters in Beijing and now a national security reporter focusing on China at The Washington Post. The comment appeared in the Sydney Morning Herald (September 4, 2022) and is drawn from a book published on September 6, 2022.

“Local police describe vast, automated networks of hundreds or even thousands of cameras in their area alone, that not only scan the identities of passersby and identify fugitives, but create automated alarm systems giving authorities the location of people based on a vast array of ‘criminal type’ blacklists, including ethnic Uighurs and Tibetans, former drug users, people with mental health issues and known protestors.”

The mass surveillance based on AI-augmented facial recognition that concerned IBM is alive and well in China.

Montgomery’s second example of IBM’s ethical stance on AI came with the COVID-19 pandemic. “When COVID-19 struck, there was much discussion on how technology could be deployed to help address the global pandemic,” she said. One of these discussions was around the use of location data to locate, identify and warn people at risk of infection. This would inevitably involve incursions into people’s personal and healthcare information.

“IBM took a step back,” she said, “and we asked ourselves not what could be done, but what we as a company were willing to do. And we were not willing to develop technology solutions that were going to track individuals to ensure they comply with quarantine. Instead, we focused on a computing consortium that brought together the compute power of supercomputers and leveraged it for things like drug discovery – ultimately leading to the development of a vaccine in a shorter timeframe.”

Limiting development to applications that are not considered unethical is, however, only half a solution. Many apps are not designed to be unethical but become so through undetected and usually unintended bias hidden in their algorithms. This bias can be amplified over time and lead to outcomes that may harm individuals or sections of society.

IBM tackles this with a range of principles. The first is that AI should never be designed to replace human decision-making, but to augment it: the operation of AI should always have human oversight that can monitor for signs of bias.

The second is the use of a concept known as ‘explainable AI’. “Explainable artificial intelligence (XAI),” says IBM, “is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. Explainable AI is used to describe an AI model, its expected impact and potential biases.”

Montgomery explains, “Algorithms are essentially mathematical solutions. If you treat the discovery of bias as a math problem, you can employ other mathematical equations to detect deviations in the expected outcome of an AI algorithm.” Combined with explainable AI, this approach can detect bias and locate its source within the algorithm.
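To make that idea concrete, here is a minimal sketch (not IBM’s method; the data and function name are hypothetical) of one standard way bias detection becomes a math problem: comparing a model’s favorable-outcome rates across demographic groups, a fairness metric known as statistical parity difference.

```python
import numpy as np

def statistical_parity_difference(predictions, groups):
    """Difference in favorable-outcome rates between groups A and B.

    A value near 0 suggests the model treats the two groups similarly
    on this metric; a large gap is a deviation worth investigating.
    """
    preds = np.asarray(predictions)
    grps = np.asarray(groups)
    rate_a = preds[grps == "A"].mean()  # favorable rate, group A
    rate_b = preds[grps == "B"].mean()  # favorable rate, group B
    return rate_a - rate_b

# Hypothetical model outputs (1 = favorable decision) and group labels.
predictions = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(f"Statistical parity difference: "
      f"{statistical_parity_difference(predictions, groups):+.2f}")
# Prints +0.60: group A is favored 80% of the time versus 20% for
# group B, a deviation that explainable-AI tooling would then help
# trace back to its source in the model or training data.
```

Production systems use richer metrics and tooling (IBM’s open-source AI Fairness 360 toolkit, for instance, implements this check among many others), but the principle Montgomery describes is the same: quantify the deviation, then explain it.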

The final piece of the ethical AI jigsaw is preventing the use of an ethical algorithm for unethical purposes by the user. “In some cases, such as facial recognition, we simply won’t sell it,” said Montgomery. “With other types of technology, our decisions may determine who we sell it to and what contract terms and conditions we put in place – what boundaries, what guardrails, what contractual restrictions, what technical restrictions we build into the product to ensure that misuse doesn’t happen.”

Few would doubt that IBM has taken a moral stance on ethical AI. It is, however, IBM’s own view of ethics that prevails, and this may not be shared by everyone. Many countries are trying to develop a formal use of ethical principles – but their decisions and rules will be governed by their own social and cultural mores. Europe, for example, is likely to strengthen an ethical view of privacy, while the US, where privacy remains important, will focus on how ethical AI can be used without impinging on business innovation.

Even China could make an ethical argument. The East does not uphold the importance of the individual in the same manner as the West – China could argue that the health of the nation is more important than the health of the individual, and its use of facial recognition is designed for this purpose.

Coming to a global consensus on what makes for ethical AI will be difficult. Ethics is in the eye of the beholder. Different nations will have different ideals – and it may be that the stance taken by global transnational businesses such as IBM will ultimately be the primary mechanism for a transnational statement on ethics in AI.

Related: Bias in Artificial Intelligence: Can AI be Trusted?

Related: EU Proposes Rules for Artificial Intelligence to Limit Risks

Related: Becoming Elon Musk – the Danger of Artificial Intelligence

Related: Facial Recognition Firm Clearview AI Fined $9.4 Million by UK Regulator

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high-tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security, and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
