SecurityWeek

Artificial Intelligence

Meta Says It Will Label AI-Generated Images on Facebook and Instagram

Facebook and Instagram users will start seeing labels on AI-generated images that appear on their social media feeds, as the tech industry aims to sort between what’s real and not.

Meta's "Pay for Privacy"

Meta said Tuesday it’s working with industry partners on technical standards that will make it easier to identify images and eventually video and audio generated by artificial intelligence tools.

What remains to be seen is how well it will work at a time when it’s easier than ever to make and distribute AI-generated imagery that can cause harm — from election misinformation to nonconsensual fake nudes of celebrities.

“It’s kind of a signal that they’re taking seriously the fact that generation of fake content online is an issue for their platforms,” said Gili Vidan, an assistant professor of information science at Cornell University. It could be “quite effective” in flagging a large portion of AI-generated content made with commercial tools, but it won’t likely catch everything, she said.

Meta’s president of global affairs, Nick Clegg, didn’t specify Tuesday when the labels would appear but said it will be “in the coming months” and in different languages, noting that a “number of important elections are taking place around the world.”

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” he said in a blog post.

Meta already puts an “Imagined with AI” label on photorealistic images made by its own tool, but most of the AI-generated content flooding its social media services comes from elsewhere.

A number of tech industry collaborations, including the Adobe-led Content Authenticity Initiative, have been working to set standards. A push for digital watermarking and labeling of AI-generated content was also part of an executive order that U.S. President Joe Biden signed in October.
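These labeling standards work by embedding a machine-readable marker in an image's metadata; IPTC's Digital Source Type vocabulary, for example, uses the term "trainedAlgorithmicMedia" to flag AI-generated content. As a rough illustration only (real checkers parse XMP and C2PA structures properly rather than scanning raw bytes), a detector might look like this:

```python
# Illustrative sketch: scan an image file's raw bytes for the IPTC
# Digital Source Type value used to mark AI-generated media.
# Real tools parse the XMP/C2PA metadata structures instead of
# doing a naive byte search like this.

# IPTC term for media created by a generative AI model
AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_labeled(path: str) -> bool:
    """Return True if the file's bytes contain the AI-generated marker."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()
```

A labeled image would carry the marker inside its embedded XMP block, so this check would flag it; an image whose metadata has been stripped would pass silently, which is exactly the gap critics point to.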

Clegg said that Meta will be working to label “images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools.”

Google said last year that AI labels are coming to YouTube and its other platforms.

“In the coming months, we’ll introduce labels that inform viewers when the realistic content they’re seeing is synthetic,” YouTube CEO Neal Mohan reiterated in a year-ahead blog post Tuesday.

One potential concern for consumers is if tech platforms get more effective at identifying AI-generated content from a set of major commercial providers but miss what’s made with other tools, creating a false sense of security.

“There’s a lot that would hinge on how this is communicated by platforms to users,” said Cornell’s Vidan. “What does this mark mean? With how much confidence should I take it? What is its absence supposed to tell me?”
