Looking to cash in on the gold rush for generative-AI computing, OpenAI has rolled out a business edition of its popular ChatGPT app promising “enterprise-grade security” and a commitment not to use client-specific prompts and data in the training of its models.
The security-centric features of the new ChatGPT Enterprise are meant to address ongoing business concerns about the protection of intellectual property and the integrity of sensitive corporate data when using large language models (LLMs).
“You own and control your business data in ChatGPT Enterprise,” OpenAI declared. “We do not train on your business data or conversations, and our models don’t learn from your usage.”
The company said customer prompts and company data are not used for training OpenAI models and all conversations flowing through ChatGPT Enterprise are encrypted in transit (TLS 1.2+) and at rest (AES 256).
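The "TLS 1.2+" requirement for data in transit can be enforced on the client side as well. The sketch below, using only Python's standard `ssl` module, shows one illustrative way to build a connection context that refuses the older TLS 1.0/1.1 protocols; it is not OpenAI's actual configuration, just a minimal example of the same policy.

```python
import ssl

# Build a client-side TLS context with certificate verification enabled
# (the default for create_default_context).
context = ssl.create_default_context()

# Refuse anything older than TLS 1.2, matching a "TLS 1.2+" policy.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Sanity checks: the floor is TLS 1.2 and peer certs are still verified.
print(context.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
print(context.verify_mode == ssl.CERT_REQUIRED)           # True
```

Wrapping a socket with this context (via `context.wrap_socket(...)`) would then fail the handshake against any server that only offers TLS 1.0 or 1.1.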
Taking aim at large-scale enterprise deployments, OpenAI said businesses will get a new admin console with tools to handle bulk member management, SSO (single sign-on) and domain verification.
OpenAI is styling the product as “the most powerful version of ChatGPT yet,” with no usage caps on the GPT-4 chatbot, faster performance, and access to advanced data analysis.
This shift toward the enterprise market is a notable expansion for OpenAI, the hotshot AI startup that raised a whopping $11 billion in funding and counts software giant Microsoft among its early backers.
The company said it has seen unprecedented demand for its chatbot, noting that the new ChatGPT Enterprise has already been deployed at places like Block, Canva, Carlyle, The Estée Lauder Companies, PwC, and Zapier to help craft clearer communications, accelerate coding tasks, rapidly explore answers to complex business questions, and assist with creative work.
The ChatGPT Enterprise rollout comes as organizations ramp up investments in generative-AI use cases beyond chatbots. Microsoft has already put ChatGPT to work on automating cybersecurity tasks, while Google’s software engineers have added AI to expand code coverage in OSS-Fuzz, its open-source fuzz-testing infrastructure.