Looking to cash in on the gold rush for generative-AI computing, OpenAI has rolled out a business edition of its popular ChatGPT app promising “enterprise-grade security” and a commitment not to use client-specific prompts and data in the training of its models.
The security-centric features of the new ChatGPT Enterprise are meant to address ongoing business concerns about the protection of intellectual property and the integrity of sensitive corporate data when using large language models (LLMs).
“You own and control your business data in ChatGPT Enterprise,” OpenAI declared. “We do not train on your business data or conversations, and our models don’t learn from your usage.”
The company said customer prompts and company data are not used for training OpenAI models, and that all conversations flowing through ChatGPT Enterprise are encrypted in transit (TLS 1.2+) and at rest (AES-256).
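The "TLS 1.2+" guarantee means connections negotiated with older protocol versions are refused outright. As a minimal illustration (not OpenAI's actual server configuration), a client can enforce the same floor with Python's standard `ssl` module:

```python
import ssl

# Build a client-side TLS context that refuses anything older than
# TLS 1.2, mirroring an "encrypted in transit (TLS 1.2+)" policy.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0 and 1.1

print(ctx.minimum_version.name)  # → TLSv1_2
```

Handshakes attempted with TLS 1.0 or 1.1 against such a context fail before any application data is exchanged.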
Taking aim at large-scale enterprise deployments, OpenAI said businesses will get a new admin console with tools to handle bulk member management, SSO (single sign-on) and domain verification.
OpenAI is styling the product as “the most powerful version of ChatGPT yet,” with no usage caps on the GPT-4 chatbot, higher performance speeds and access to advanced data analysis.
This shift toward the enterprise market is a notable expansion for OpenAI, the hotshot AI startup that raised a whopping $11 billion in funding and counts software giant Microsoft among its early backers.
The company said it has seen unprecedented demand for its chatbot, noting that the new ChatGPT Enterprise has already been deployed at places like Block, Canva, Carlyle, The Estée Lauder Companies, PwC, and Zapier to help craft clearer communications, accelerate coding tasks, rapidly explore answers to complex business questions, and assist with creative work.
The ChatGPT Enterprise rollout comes as organizations ramp up investments in generative-AI use cases beyond chatbots. Microsoft has already put ChatGPT to work on automating cybersecurity tasks, while Google’s software engineers have added AI to expand code coverage in OSS-Fuzz, its open source fuzz testing infrastructure.
Related: How Europe is Leading the Push to Regulate AI
Related: Microsoft Puts ChatGPT to Work on Cybersecurity
Related: Google Brings AI Magic to Fuzz Testing Infrastructure
Related: Investors Pivot to Safeguarding AI Training Models

Ryan Naraine is Editor-at-Large at SecurityWeek and host of the popular Security Conversations podcast series. He is a security community engagement expert who has built programs at major global brands, including Intel Corp., Bishop Fox and GReAT. Ryan is a founding-director of the Security Tinkerers non-profit, an advisor to early-stage entrepreneurs, and a regular speaker at security conferences around the world.