OpenAI has released another report detailing recent actions it has taken to prevent the abuse of its artificial intelligence services by adversaries of the United States and other threat actors.
The AI company’s threat intelligence report for February highlights two operations that are believed to have been conducted by Chinese threat actors.
In one of these operations, dubbed ‘Peer Review’ by OpenAI, ChatGPT accounts were leveraged to aid the development and distribution of spying tools.
According to OpenAI, ChatGPT was used to edit and debug code for what appeared to be AI tools designed to ingest and analyze posts and comments from social media platforms (including X, Facebook, Telegram, Instagram, YouTube and Reddit) in search of conversations on Chinese political and social topics.
“Again according to the descriptions, one purpose of this tooling was to identify social media conversations related to Chinese political and social topics – especially any online calls to attend demonstrations about human rights in China – and to feed the resulting insights to Chinese authorities,” OpenAI explained.
The threat actor also used ChatGPT to generate descriptions and sales pitches for these tools.
While the surveillance tools appeared to leverage AI, OpenAI said they were not actually powered by the company's services — ChatGPT was only used for debugging and for creating promotional materials.
The same group also used ChatGPT to conduct research, translate and analyze screenshots of English-language documents, and generate comments about Chinese dissident organizations.
The chatbot was also abused in a separate China-linked operation that involved generating social media content written in English and long-form news articles written in Spanish. Some evidence suggests this activity may be part of the disinformation campaign known as Spamouflage.
OpenAI’s latest report also reveals that the company has shut down some accounts that may have been used in support of North Korea’s fake IT worker scheme.
OpenAI previously reported shutting down ChatGPT accounts used by Iranian hackers to conduct research into attacking industrial control systems (ICS).
Related: OpenAI Finds No Evidence of Breach After Hacker Offers to Sell 20 Million Credentials
Related: Italy’s Privacy Watchdog Fines OpenAI for ChatGPT’s Violations in Collecting Users’ Personal Data
Related: OpenAI Rolls Out Compliance API and Integrations for ChatGPT Enterprise
