
ChatGPT Integrated Into Cybersecurity Products as Industry Tests Its Capabilities

ChatGPT is increasingly being integrated into cybersecurity products and services as the industry tests its capabilities and limitations.


While there has been a lot of talk about how OpenAI’s ChatGPT could be abused for malicious purposes and how it can pose a threat, the artificial intelligence chatbot can also be very useful to the cybersecurity industry.

Launched in November 2022, ChatGPT has been described by many as revolutionary. It is built on top of OpenAI’s GPT-3 family of large language models and users interact with it through prompts.

There have been numerous articles describing how ChatGPT’s capabilities can be used for malicious purposes, including to write credible phishing emails and create malware. 

However, ChatGPT can bring many benefits to defenders as well, and the cybersecurity industry has been increasingly integrating it into products and services. In addition, some members of the industry have been testing its capabilities and limitations. 

In the past few months, several cybersecurity companies have revealed that they have started using or plan to use ChatGPT, and some researchers have found practical use cases for the chatbot.

Cloud security company Orca was among the first to announce the integration of OpenAI's GPT-3 into its platform. The goal is to enhance the remediation steps provided to customers for cloud security risks.

“By fine-tuning these powerful language models with our own security data sets, we have been able to improve the detail and accuracy of our remediation steps – giving you a much better remediation plan and assisting you to optimally solve the issue as fast as possible,” Orca explained. 
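Under the hood, an integration like this amounts to wrapping a security finding in a carefully constructed prompt and sending it to a language model API. The sketch below is purely illustrative, assuming a hypothetical alert format and the OpenAI chat API; it is not Orca's actual implementation:

```python
# Illustrative sketch: turning a cloud security alert into a remediation
# prompt for a large language model. The alert fields and prompt wording
# are assumptions, not Orca's real data model.

def build_remediation_prompt(alert: dict) -> str:
    """Assemble a prompt asking the model for step-by-step remediation."""
    return (
        "You are a cloud security assistant.\n"
        f"Finding: {alert['title']}\n"
        f"Resource: {alert['resource']}\n"
        f"Severity: {alert['severity']}\n"
        "Provide numbered, concrete remediation steps."
    )

# Sending the prompt would use an LLM API, for example (requires an
# OpenAI API key and the `openai` package, so it is left commented out):
#
#   import openai
#   response = openai.ChatCompletion.create(
#       model="gpt-3.5-turbo",
#       messages=[{"role": "user",
#                  "content": build_remediation_prompt(alert)}],
#   )
#   print(response.choices[0].message.content)

if __name__ == "__main__":
    alert = {
        "title": "S3 bucket allows public read access",
        "resource": "arn:aws:s3:::example-bucket",
        "severity": "high",
    }
    print(build_remediation_prompt(alert))
```

The fine-tuning Orca describes would shape how the model answers such prompts; the prompt-building step itself stays the same.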


Kubernetes security company Armo has integrated ChatGPT’s generative AI into its platform to make it easier for users to create security policies based on Open Policy Agent (OPA).

“Armo Custom Controls pre-trains ChatGPT with security and compliance Regos and additional context, utilizing and harnessing the power of AI to produce custom made controls requested via natural language. The user gets the completed OPA rule produced by ChatGPT, as well as a natural language description of the rule and a suggested remediation to fix the failed control — quickly, simply and with no need to learn a new language,” the company said.
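As an illustration of what natural-language-to-policy generation involves, the sketch below builds a hypothetical request prompt and shows the kind of Rego rule a model might return. Both the prompt wording and the example rule are assumptions, not Armo's code:

```python
# Illustrative sketch of requesting an OPA (Rego) policy from a language
# model based on a natural-language requirement, in the spirit of Armo's
# ChatGPT-backed custom controls. Helper names are assumptions.

def build_rego_request(requirement: str) -> str:
    """Ask the model for a Rego rule plus a plain-English description."""
    return (
        "Write an OPA Rego rule for Kubernetes that enforces the following "
        f"requirement: {requirement}\n"
        "Then describe the rule in plain English and suggest a remediation "
        "for workloads that fail it."
    )

# Example of the kind of Rego output the model would be expected to return
# for the requirement "containers must not run as root":
EXAMPLE_REGO = """
package kubernetes.admission

deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    not container.securityContext.runAsNonRoot
    msg := sprintf("container %s must set runAsNonRoot", [container.name])
}
"""
```

The value of the approach described by Armo is exactly this last mile: the user states the requirement in English and receives the Rego rule, its description, and a remediation, without learning the policy language.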

Logpoint recently announced a ChatGPT integration for its Logpoint SOAR (security orchestration, automation and response) solution in a lab setting.

“The new ChatGPT integration for Logpoint SOAR allows customers to investigate the potential of using SOAR playbooks with ChatGPT in cybersecurity,” the company said. 

Cyber-physical security convergence software company AlertEnterprise has launched a chatbot powered by ChatGPT. It allows users to quickly obtain information on physical access, identity access management, visitor management, door reader analytics, and security and safety reporting. Users can ask the chatbot questions such as “how many new employee badges did we issue last month?” or “show me upcoming employee training expirations for restricted area access”.

Accenture Security has been analyzing ChatGPT’s capabilities for automating some cyber defense-related tasks. 

Cybersecurity companies such as Coro and Trellix are also currently exploring the possibility of embedding ChatGPT in their offerings.

Some members of the cybersecurity community have shared the results of the tests they have conducted using ChatGPT. Training provider HackerSploit, for instance, showed how ChatGPT can be used to identify software vulnerabilities and how it can be leveraged for penetration testing.

A researcher at Kaspersky conducted some indicator of compromise (IoC) detection experiments and found promising results in some areas. The tests included checking systems for IoCs, comparing signature-based rule sets with ChatGPT output to identify gaps, detecting code obfuscation, and finding similarities between malware binaries. 
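The IoC-checking part of such experiments boils down to handing the model a list of host artifacts and asking it to triage them. A minimal sketch of such a prompt, with wording that is purely illustrative rather than Kaspersky's actual methodology:

```python
# Rough sketch of an IoC-triage prompt: feed host artifacts to a language
# model and ask whether any look like indicators of compromise. The
# prompt wording and artifact format are assumptions for illustration.

def build_ioc_prompt(artifacts: list) -> str:
    """Format a list of host artifacts into a triage request."""
    lines = "\n".join(f"- {a}" for a in artifacts)
    return (
        "You are assisting with incident response. For each artifact "
        "below, state whether it is a likely indicator of compromise "
        "and explain why:\n"
        f"{lines}"
    )

if __name__ == "__main__":
    print(build_ioc_prompt([
        "scheduled task running powershell.exe -enc <base64>",
        "outbound connection to 203.0.113.7:4444",
    ]))
```

As the Kaspersky tests suggest, the model's verdicts on such prompts are promising in some areas but still need to be checked against signature-based rule sets rather than trusted outright.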

The online malware sandbox Any.Run used ChatGPT to analyze malware. The chatbot was able to analyze simple samples, but failed when asked to look at more complex code.

NCC Group has conducted a security code review using the AI and found that it “doesn’t really work”. The company found that while it can correctly identify some vulnerabilities, it also provides false information and false positives in many cases, making it unreliable for code analysis. 

Security researchers Antonio Formato and Zubair Rahim have described how they integrated ChatGPT with the Microsoft Sentinel security analytics and threat intelligence solution for incident management. 
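An integration of this kind typically serializes the incident data and asks the model for an analyst-oriented summary. The following sketch assumes an illustrative incident structure; it is not the researchers' actual code and the field names are invented for the example:

```python
# Hypothetical sketch of enriching a security incident with an
# LLM-generated summary, similar in spirit to the Microsoft Sentinel
# integration described above. Incident fields are illustrative only.

import json


def build_incident_summary_prompt(incident: dict) -> str:
    """Serialize an incident and request an analyst-oriented write-up."""
    return (
        "Summarize this security incident for an analyst, list likely "
        "attack techniques, and propose next investigative steps:\n"
        + json.dumps(incident, indent=2)
    )

if __name__ == "__main__":
    incident = {
        "title": "Suspicious sign-in from unfamiliar location",
        "severity": "Medium",
        "entities": ["user@example.com", "198.51.100.23"],
    }
    print(build_incident_summary_prompt(incident))
```

The model's response can then be attached to the incident as a comment, giving analysts a first-pass narrative before they dig into the raw telemetry.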

Juan Andres Guerrero-Saade, security researcher and adjunct lecturer at Johns Hopkins SAIS, recently integrated ChatGPT into a class on malware analysis and reverse engineering. ChatGPT helped the students quickly get answers to ‘dumb questions’, thus preventing disruption of the class. It also made it easier for them to understand the tools they were using, interpret code, and even write scripts. 

Related: Malicious Prompt Engineering With ChatGPT

Related: Bias in Artificial Intelligence: Can AI be Trusted?

Related: Ethical AI, Possibility or Pipe Dream?

Written By

Eduard Kovacs (@EduardKovacs) is a managing editor at SecurityWeek. He worked as a high school IT teacher for two years before starting a career in journalism as Softpedia’s security news reporter. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.
