Artificial Intelligence

AI-Powered Misinformation is the World’s Biggest Short-Term Threat, Davos Report Says

False and misleading information supercharged with cutting-edge AI threatens to erode democracy and polarize society, the World Economic Forum said in a new report.


False and misleading information supercharged with cutting-edge artificial intelligence that threatens to erode democracy and polarize society is the top immediate risk to the global economy, the World Economic Forum said in a report Wednesday.

In its latest Global Risks Report, the organization also said an array of environmental risks pose the biggest threats in the longer term. The report was released ahead of the annual elite gathering of CEOs and world leaders in the Swiss ski resort town of Davos and is based on a survey of nearly 1,500 experts, industry leaders and policymakers.

The report listed misinformation and disinformation as the most severe risk over the next two years, highlighting how rapid advances in technology also are creating new problems or making existing ones worse.

The authors worry that the boom in generative AI chatbots like ChatGPT means that creating sophisticated synthetic content that can be used to manipulate groups of people won’t be limited any longer to those with specialized skills.

AI is set to be a hot topic next week at the Davos meetings, which are expected to be attended by tech company bosses including OpenAI CEO Sam Altman, Microsoft CEO Satya Nadella and AI industry players like Meta’s chief AI scientist, Yann LeCun.

AI-powered misinformation and disinformation is emerging as a risk just as billions of people in a slew of countries, including large economies like the United States, Britain, Indonesia, India, Mexico, and Pakistan, are set to head to the polls this year and next, the report said.

“You can leverage AI to do deepfakes and to really impact large groups, which really drives misinformation,” said Carolina Klint, a risk management leader at Marsh, whose parent company Marsh McLennan co-authored the report with Zurich Insurance Group.

“Societies could become further polarized” as people find it harder to verify facts, she said. Fake information also could be used to fuel questions about the legitimacy of elected governments, “which means that democratic processes could be eroded, and it would also drive societal polarization even further,” Klint said.


The rise of AI brings a host of other risks, she said. It can empower “malicious actors” by making it easier to carry out cyberattacks, such as by automating phishing attempts or creating advanced malware.

With AI, “you don’t need to be the sharpest tool in the shed to be a malicious actor,” Klint said.

It can even poison data that is scraped off the internet to train other AI systems, which is “incredibly difficult to reverse” and could result in further embedding biases into AI models, she said.

The other big global concern for respondents of the risk survey centered around climate change.

Following disinformation and misinformation, extreme weather is the second-most-pressing short-term risk.

In the long term, defined as 10 years, extreme weather was described as the No. 1 threat, followed by three other environment-related risks: critical change to Earth systems; biodiversity loss and ecosystem collapse; and natural resource shortages.

“We could be pushed past that irreversible climate change tipping point” over the next decade as the Earth’s systems undergo long-term changes, Klint said.
