Artificial Intelligence

OpenAI Turns to Security to Sell ChatGPT Enterprise

ChatGPT Enterprise is a corporate edition of ChatGPT that promises “enterprise-grade security” and a commitment not to use prompts and company data to train AI models.

Looking to cash in on the generative AI gold rush, OpenAI has rolled out a business edition of its popular ChatGPT app, promising “enterprise-grade security” and a commitment not to use client-specific prompts and data to train its models.

The security-centric features of the new ChatGPT Enterprise are meant to address ongoing business concerns about the protection of intellectual property and the integrity of sensitive corporate data when using large language models (LLMs).

“You own and control your business data in ChatGPT Enterprise,” OpenAI declared. “We do not train on your business data or conversations, and our models don’t learn from your usage.”

The company said customer prompts and company data are not used to train OpenAI models, and that all conversations flowing through ChatGPT Enterprise are encrypted in transit (TLS 1.2+) and at rest (AES-256).
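
For readers curious what that baseline looks like in practice, here is a minimal sketch in Python, not OpenAI’s actual implementation, of how a client application might pin outbound connections to TLS 1.2 or newer and protect stored records with AES-256-GCM. The helper names and the idea of storing nonce-prefixed blobs are illustrative assumptions.

# Illustrative sketch only: enforce TLS 1.2+ in transit, AES-256-GCM at rest.
import os
import ssl
import urllib.request
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In transit: refuse any connection that negotiates below TLS 1.2.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

def fetch(url: str) -> bytes:
    """GET a URL over a connection pinned to TLS 1.2 or newer."""
    with urllib.request.urlopen(url, context=ctx) as resp:
        return resp.read()

# At rest: AES-256 in GCM mode (256-bit key, fresh random nonce per record).
key = AESGCM.generate_key(bit_length=256)

def encrypt_record(plaintext: bytes) -> bytes:
    """Return nonce || ciphertext, suitable for writing to storage."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_record(blob: bytes) -> bytes:
    """Split the stored blob back into nonce and ciphertext, then decrypt."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)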

Taking aim at large-scale enterprise deployments, OpenAI said businesses will get a new admin console with tools for bulk member management, SSO (single sign-on) and domain verification.
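
OpenAI has not published the mechanics of its domain verification, but such schemes commonly ask an administrator to publish a provider-issued token in a DNS TXT record, which the provider then looks up. The sketch below illustrates that general pattern with the dnspython package; the token value and record contents are hypothetical.

# Illustrative sketch of a generic DNS TXT-record domain-verification check.
import dns.resolver

EXPECTED_TOKEN = "example-verification-token"  # hypothetical value issued by the provider

def domain_has_token(domain: str, token: str = EXPECTED_TOKEN) -> bool:
    """Return True if any TXT record on the domain contains the expected token."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    for rdata in answers:
        txt = b"".join(rdata.strings).decode("utf-8", "replace")
        if token in txt:
            return True
    return False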

OpenAI is styling the product as “the most powerful version of ChatGPT yet,” with no usage caps on GPT-4, higher performance speeds and access to advanced data analysis.

This shift toward the enterprise market is a notable expansion for OpenAI, the hotshot AI startup that raised a whopping $11 billion in funding and counts software giant Microsoft among its early backers. 

The company said it has seen unprecedented demand for its chatbot, noting that the new ChatGPT Enterprise has already been deployed at places like Block, Canva, Carlyle, The Estée Lauder Companies, PwC, and Zapier to help craft clearer communications, accelerate coding tasks, rapidly explore answers to complex business questions, and assist with creative work.


The ChatGPT Enterprise rollout comes as organizations ramp up investments in generative-AI use cases beyond chatbots. Microsoft has already put ChatGPT to work on automating cybersecurity tasks, while Google’s software engineers have added AI to expand code coverage in OSS-Fuzz, its open-source fuzz-testing infrastructure.

Related: How Europe is Leading the Push to Regulate AI

Related: Microsoft Puts ChatGPT to Work on Cybersecurity

Related: Google Brings AI Magic to Fuzz Testing Infrastructure

Related: Investors Pivot to Safeguarding AI Training Models

