Artificial Intelligence

New AI Safety Initiative Aims to Set Responsible Standards for Artificial Intelligence

Major software vendors sign on to a new security initiative to create trusted best practices for artificial intelligence deployments.

Several major artificial intelligence software vendors have signed on to a new security initiative out of the not-for-profit Cloud Security Alliance (CSA) to create trusted best practices for generative-AI technology.

The new AI Safety Initiative has attracted participation from tech heavyweights Microsoft, Amazon, Google, OpenAI and Anthropic, and plans to work on tools, templates and data for deploying AI/LLM technology in a safe, ethical and compliant manner.

“The AI Safety Initiative is actively developing practical safeguards for today’s generative AI, structured in a way to help prepare for the future of much more powerful AI systems. Its goal is to reduce risks and amplify the positive impact of AI across all sectors,” the group said in a statement.

The immediate plan is to create security best practices for AI use and deployment and make them freely available. According to the CSA, the goal is to give customers of all sizes the confidence to accelerate responsible adoption, backed by usage guidelines that mitigate risk.

The AI Safety Initiative will complement AI assurance programs within governments “with a healthy degree of industry self-regulation,” and provide what is described as a “forward thinking program to address critical ethical issues and impact to society resulting from significant advances in AI over the next several years.”

Veteran cybersecurity executive Caleb Sima, who is chairing the new initiative, said generative-AI technologies like chatbots and image manipulation tools have begun to reshape the world, but warned that they come with immense risk.

“Uniting to share knowledge and best practices is crucial. The collaborative spirit of leaders crossing competitive boundaries to educate and implement best practices has enabled us to build the best recommendations for the industry,” Sima said.

The group has grown to more than 1,500 expert participants, the largest in the 14-year history of the Cloud Security Alliance.

Related: Harmonic Lands $7M Funding to Secure Generative AI

Related: OpenAI Turns to Security to Sell ChatGPT Enterprise

Related: Microsoft Puts ChatGPT to Work on Automating Security

Related: Cybersecurity VCs Pivot to Safeguarding AI Training Models

Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.