ChatGPT Integrated Into Cybersecurity Products as Industry Tests Its Capabilities

ChatGPT is increasingly being integrated into cybersecurity products and services as the industry tests its capabilities and limitations.

While there has been a lot of talk about how OpenAI’s ChatGPT could be abused for malicious purposes and how it can pose a threat, the artificial intelligence chatbot can also be very useful to the cybersecurity industry.

Launched in November 2022, ChatGPT has been described by many as revolutionary. It is built on top of OpenAI’s GPT-3 family of large language models, and users interact with it through prompts.

There have been numerous articles describing how ChatGPT’s capabilities can be used for malicious purposes, including writing credible phishing emails and creating malware.

However, ChatGPT can bring many benefits to defenders as well, and the cybersecurity industry has been increasingly integrating it into products and services. In addition, some members of the industry have been testing its capabilities and limitations. 

In the past few months, several cybersecurity companies have revealed that they have started using ChatGPT or plan to do so, and some researchers have found practical use cases for the chatbot.

Cloud security company Orca was among the first to announce the integration of ChatGPT (specifically GPT-3) into its platform. The goal is to enhance the remediation steps it provides to customers for cloud security risks.

“By fine-tuning these powerful language models with our own security data sets, we have been able to improve the detail and accuracy of our remediation steps – giving you a much better remediation plan and assisting you to optimally solve the issue as fast as possible,” Orca explained. 
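
Orca has not published its prompts or fine-tuned models, but the general pattern it describes is straightforward. The sketch below shows roughly what such a call might look like against OpenAI’s completions API; the model name, prompt wording, and finding format are illustrative assumptions, not Orca’s implementation.

```python
# Hedged sketch: asking a GPT-3-family model for cloud remediation steps.
# Model name, prompt, and finding format are assumptions, not Orca's code.
import os
import requests

def suggest_remediation(finding: str) -> str:
    """Send a cloud security finding to OpenAI and return remediation steps."""
    resp = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "text-davinci-003",  # placeholder; Orca fine-tunes its own models
            "prompt": (
                "You are a cloud security assistant. Give concise, numbered "
                "remediation steps for this finding:\n\n" + finding
            ),
            "max_tokens": 400,
            "temperature": 0,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(suggest_remediation(
        "AWS S3 bucket 'example-logs' allows public read access."
    ))
```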

Kubernetes security company Armo has integrated ChatGPT’s generative AI into its platform to make it easier for users to create security policies based on Open Policy Agent (OPA).

“Armo Custom Controls pre-trains ChatGPT with security and compliance Regos and additional context, utilizing and harnessing the power of AI to produce custom made controls requested via natural language. The user gets the completed OPA rule produced by ChatGPT, as well as a natural language description of the rule and a suggested remediation to fix the failed control — quickly, simply and with no need to learn a new language,” the company said.
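
Armo’s pre-training data and custom controls are its own, but the underlying flow it describes, natural language in, Rego rule plus explanation out, can be sketched against OpenAI’s chat completions API. Everything below (model, system prompt, example request) is an assumption for illustration.

```python
# Hedged sketch: natural-language request -> OPA Rego rule via the chat API.
# Armo's actual pre-training and prompts are proprietary; this is illustrative.
import os
import requests

def generate_opa_control(request_text: str) -> str:
    """Ask the model for a Rego rule plus description and remediation."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # assumption; not necessarily what Armo uses
            "messages": [
                {"role": "system",
                 "content": "You write OPA Rego policies for Kubernetes. Return "
                            "the Rego rule, a plain-English description of it, "
                            "and a suggested remediation for a failed control."},
                {"role": "user", "content": request_text},
            ],
            "temperature": 0,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Example: generate_opa_control("Deny pods that run containers as root.")
```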

Logpoint recently announced a ChatGPT integration for its Logpoint SOAR (security orchestration, automation and response) solution in a lab setting.

“The new ChatGPT integration for Logpoint SOAR allows customers to investigate the potential of using SOAR playbooks with ChatGPT in cybersecurity,” the company said. 
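
Logpoint has not detailed the playbook actions publicly, but a typical SOAR integration wraps the model call in an enrichment step that later playbook stages can consume. The sketch below, with invented alert fields and prompt, shows one plausible shape for such a step.

```python
# Hedged sketch of a SOAR-style enrichment step: attach a ChatGPT triage
# summary to an alert. Alert fields, prompt, and model are assumptions.
import os
import json
import requests

def gpt_triage_step(alert: dict) -> dict:
    """Enrich an alert dict with a model-generated triage summary."""
    prompt = (
        "Summarize this SIEM alert for an analyst and suggest the next "
        "three investigation steps:\n\n" + json.dumps(alert, indent=2)
    )
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0,
        },
        timeout=60,
    )
    resp.raise_for_status()
    alert["gpt_triage_notes"] = resp.json()["choices"][0]["message"]["content"]
    return alert  # downstream playbook steps can read gpt_triage_notes

# Example:
# gpt_triage_step({"rule": "Multiple failed logins", "user": "jdoe",
#                  "source_ip": "203.0.113.7", "count": 42})
```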

Cyber-physical security convergence software company AlertEnterprise has launched a chatbot powered by ChatGPT. It allows users to quickly obtain information on physical access, identity access management, visitor management, door reader analytics, and security and safety reporting. Users can ask the chatbot questions such as “how many new employee badges did we issue last month?” or “show me upcoming employee training expirations for restricted area access”.

Accenture Security has been analyzing ChatGPT’s capabilities for automating some cyber defense-related tasks. 

Cybersecurity companies such as Coro and Trellix are also currently exploring the possibility of embedding ChatGPT in their offerings.

Some members of the cybersecurity community have shared the results of the tests they have conducted using ChatGPT. Training provider HackerSploit, for instance, showed how it can be used to identify software vulnerabilities, and how it can be leveraged for penetration testing. 
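
The vulnerability-spotting tests boil down to pasting source code into a prompt and asking the model what is wrong with it. Below is a small, self-contained approximation of that kind of experiment, with a deliberately vulnerable snippet; the prompt wording is our own, not HackerSploit’s.

```python
# Hedged sketch of a vulnerability-identification experiment: hand the model
# a code snippet and ask it to flag security flaws. Prompt wording is our own.
import os
import requests

VULNERABLE_SNIPPET = """
import sqlite3
def get_user(conn, username):
    # user input concatenated into SQL -> classic SQL injection
    cur = conn.execute("SELECT * FROM users WHERE name = '" + username + "'")
    return cur.fetchone()
"""

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{
            "role": "user",
            "content": "List any security vulnerabilities in this code and "
                       "explain how to fix them:\n" + VULNERABLE_SNIPPET,
        }],
        "temperature": 0,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```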

A researcher at Kaspersky conducted some indicator of compromise (IoC) detection experiments and found promising results in some areas. The tests included checking systems for IoCs, comparing signature-based rule sets with ChatGPT output to identify gaps, detecting code obfuscation, and finding similarities between malware binaries. 
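
Kaspersky’s write-up covers several experiment types; one of the simpler ones, asking the model whether a host artifact looks like an indicator of compromise, can be approximated as below. The artifacts and the yes/no prompt format are assumptions made for illustration, not Kaspersky’s methodology.

```python
# Hedged sketch of an IoC-style check: ask the model whether host artifacts
# look malicious. Artifacts and prompt format are illustrative assumptions.
import os
import requests

ARTIFACTS = [
    r"C:\Windows\Temp\svch0st.exe",                    # typosquatted binary name
    "powershell.exe -nop -w hidden -enc SQBFAFgA",     # encoded PowerShell launcher
    r"C:\Program Files\Mozilla Firefox\firefox.exe",   # benign control
]

for artifact in ARTIFACTS:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{
                "role": "user",
                "content": "Does the following host artifact look like an "
                           "indicator of compromise? Answer Yes or No, then "
                           "give a one-sentence reason.\n" + artifact,
            }],
            "temperature": 0,
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(artifact, "->", resp.json()["choices"][0]["message"]["content"])
```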

The online malware sandbox Any.Run used ChatGPT to analyze malware. The chatbot was able to analyze simple samples, but failed when asked to look at more complex code.

NCC Group experimented with using the AI for security code review and found that it “doesn’t really work”. While ChatGPT can correctly identify some vulnerabilities, the company found that it also provides false information and false positives in many cases, making it unreliable for code analysis.

Security researchers Antonio Formato and Zubair Rahim have described how they integrated ChatGPT with the Microsoft Sentinel security analytics and threat intelligence solution for incident management. 
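
Their integration wires ChatGPT into Sentinel’s automation, and that plumbing is not reproduced here. The sketch below only shows the model side of such a setup: turning the fields of a Sentinel-style incident into an analyst-facing summary. The incident schema and prompt are assumptions, and attaching the output back to Sentinel (for example as an incident comment) is omitted.

```python
# Hedged sketch of the model side of a Sentinel-style integration: summarize
# an incident's fields for an analyst. Incident schema and prompt are assumed.
import os
import json
import requests

def summarize_incident(incident: dict) -> str:
    """Return a model-generated summary and response suggestions."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{
                "role": "user",
                "content": "Write a short incident summary and recommended "
                           "response actions for this security incident:\n"
                           + json.dumps(incident, indent=2),
            }],
            "temperature": 0,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Example:
# summarize_incident({"title": "Suspicious sign-in from new country",
#                     "severity": "Medium", "entities": ["jdoe@example.com"]})
```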

Juan Andres Guerrero-Saade, security researcher and adjunct lecturer at Johns Hopkins SAIS, recently integrated ChatGPT into a class on malware analysis and reverse engineering. ChatGPT helped students quickly get answers to ‘dumb questions’ without disrupting the class, and it made it easier for them to understand the tools they were using, interpret code, and even write scripts.

Related: Malicious Prompt Engineering With ChatGPT

Related: Bias in Artificial Intelligence: Can AI be Trusted?

Related: Ethical AI, Possibility or Pipe Dream?
