Bank of England Will Review the Risks That AI Poses to UK Financial Stability

The Bank of England will make an assessment next year about the risks posed by artificial intelligence and machine learning.

The Bank of England, which oversees financial stability in the U.K., said Wednesday that it will make an assessment next year about the risks posed by artificial intelligence and machine learning.

In its half-yearly Financial Stability Report, the bank said it was getting advice about the potential implications stemming from the adoption of AI and machine learning in the financial services sector, which accounts for around 8% of the British economy and has deep-rooted global connections.

The bank’s Financial Policy Committee, which identifies and monitors risks, said it and other authorities would seek to ensure that the U.K. financial system is resilient to risks that may arise from widespread use of AI and machine learning.

“We obviously have to go into AI with our eyes open,” bank Gov. Andrew Bailey said at a press briefing. “It is something that I think we have to embrace, it is very important and has potentially profound implications for economic growth, productivity and how economies are shaped going forward.”

Over the past year, both the potential benefits and the potential threats of the new technologies have become clearer. Some observers have raised concerns over AI’s as-yet-unknown dangers and have been calling for safeguards to protect people from its existential threats.

A global race is underway to figure out how to regulate AI after OpenAI’s ChatGPT and other chatbots exploded in popularity with their ability to create human-like text and images. Leaders in the 27-nation European Union were trying on Wednesday to agree on world-first AI regulations.

“The moral of the story is if you’re a firm using AI, you have to understand the tool you are using, that is the critical thing,” Bailey said.

Admitting that he is “palpably not” an expert on AI, Bailey said the new technologies have “tremendous potential” and are not simply “a bag of risks.”

Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.
