Several major artificial intelligence software vendors have signed on to a new security initiative out of the not-for-profit Cloud Security Alliance (CSA) to create trusted best practices for generative-AI technology.
The new AI Safety Initiative has attracted participation from tech heavyweights Microsoft, Amazon, Google, OpenAI, and Anthropic, and plans to work on tools, templates and data for deploying AI/LLM technology in a safe, ethical and compliant manner.
“The AI Safety Initiative is actively developing practical safeguards for today’s generative AI, structured in a way to help prepare for the future of much more powerful AI systems. Its goal is to reduce risks and amplify the positive impact of AI across all sectors,” the group said in a statement.
The immediate plan is to create security best practices for AI use and deployment and make them freely available. According to the CSA, the goal is to give customers of all sizes the confidence to accelerate responsible adoption, backed by usage guidelines that mitigate risk.
The AI Safety Initiative will complement AI assurance programs within governments “with a healthy degree of industry self-regulation,” and provide what is described as a “forward thinking program to address critical ethical issues and impact to society resulting from significant advances in AI over the next several years.”
Veteran cybersecurity executive Caleb Sima, who is chairing the new initiative, said generative-AI technologies like chatbots and image-manipulation tools have begun to reshape the world, but warned that they come with immense risk.
“Uniting to share knowledge and best practices is crucial. The collaborative spirit of leaders crossing competitive boundaries to educate and implement best practices has enabled us to build the best recommendations for the industry,” Sima said.
The group has grown to more than 1,500 expert participants, making it the largest initiative in the 14-year history of the Cloud Security Alliance.