
Tech Titans Promise Watermarks to Expose AI Creations

Amazon, Google, Meta, Microsoft, OpenAI and other tech firms have voluntarily agreed to AI safeguards set by the White House.

The White House said Friday that OpenAI and others in the artificial intelligence race have committed to making their technology safer with features such as watermarks on fabricated images.

“These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI — safety, security, and trust — and mark a critical step toward developing responsible AI,” the White House said in a release.

Representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI were to join US President Joe Biden later Friday to announce the commitments, which include developing “robust technical mechanisms” such as watermarking systems to ensure that users know when content is AI-generated, according to a White House official.

Worry that imagery or audio created by artificial intelligence will be used for fraud and misinformation has ramped up as the technology improves and the 2024 US presidential election gets closer.

Researchers are seeking ways to tell when audio or imagery has been generated artificially, to prevent people from being duped by fakes that look or sound real.

“They’re committing to setting up a broader regime towards making it easier for consumers to know whether content is AI-generated or not,” the White House official said.

“There is technical work to be done, but the point here is that it applies to audio and visual content, and it will be part of a broader system.”

The goal is for it to be easy for people to tell when online content is created by AI, the official added.


Commitments by the companies include independent testing of AI systems for risks when it comes to biosecurity, cybersecurity, or “societal effects,” according to the White House.

Common Sense Media chief executive James Steyer commended the White House for its “commitment to establishing critical policies to regulate AI technology.”

“That said, history would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations.”

Biden is also working on an executive order intended to ensure that AI is safe and trustworthy, according to the White House official.

Watermarks for AI-generated content were among topics EU commissioner Thierry Breton discussed with OpenAI chief executive Sam Altman during a June visit to San Francisco.

“Looking forward to pursuing our discussions — notably on watermarking,” Breton wrote in a tweet that included a video snippet of him and Altman.

In the video clip Altman said he “would love to show” what OpenAI is doing with watermarks “very soon.”

The White House said it will also work with allies to establish an international framework to govern the development and use of AI.

Related: How Europe is Leading the World in the Push to Regulate AI

Related: ChatGPT, the AI Revolution, and the Security, Privacy and Ethical Implications

Related: Cyber Insights 2023 | Artificial Intelligence

Related: White House Unveils Artificial Intelligence ‘Bill of Rights’

Related: Bias in Artificial Intelligence: Can AI be Trusted?

Related: The Starter Pistol Has Been Fired for Artificial Intelligence Regulation in Europe


Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.
