
The Good, the Bad and the Ugly of Generative AI

Thinking through the good, the bad, and the ugly now is a process that affords us “the negative focus to survive, but a positive one to thrive.”


As humans, we’re naturally wired to be negative. It’s a widely studied concept referred to as negativity bias, and it’s not entirely a bad thing. Dr. Richard Boyatzis, Professor of Organizational Behavior, Psychology and Cognitive Science, is quoted as saying, “You need the negative focus to survive, but a positive one to thrive.” This helps to explain the overwhelming number of gloom-and-doom articles about Generative AI, commonly associated with tools like ChatGPT, Google Bard and Microsoft Bing Chat. But this observation also points to the opportunity we have to identify ways Generative AI can help us thrive.

A recent example from Vulcan’s Q2 2023 Vulnerability Watch report (PDF) helps provide some perspective on the good, the bad and the ugly of Generative AI.

  • The good about ChatGPT is the ability to supplement human workflows and drive efficiencies. For example, it can be used to support software development, including providing recommendations for code optimization, bug fixing and code generation.
  • The bad comes when ChatGPT relies on stale data and information it was trained on to make recommendations that turn out to be ineffective.
  • Things start to get ugly when threat actors take advantage of bad data and gaps. In the absence of relevant data and training, ChatGPT can start to freelance, generating convincing but not necessarily accurate information, which Vulcan refers to as “AI package hallucination.”

However, it’s important to look at this with some historical context. Back in the early days of the internet, bad guys figured out that people fat-finger URLs, so they would spoof a website with a misspelling and take advantage of people to infect their systems with malware or steal their credentials. Weaknesses in email usage and file sharing provided similar opportunities. Bad guys have always looked for gaps to exploit. It’s part of the natural evolution of technology that has led to innovation in cybersecurity solutions, including anti-phishing tools, multi-factor authentication (MFA) and secure file transfer solutions. In the case of AI package hallucination, threat actors look for gaps in responses that they can fill with malicious code, for example by registering a package name that a chatbot invented so that developers who follow the suggestion install malware. This will undoubtedly spur more innovation.
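One practical guardrail against AI package hallucination is simply to refuse any AI-suggested dependency that has not been vetted. The sketch below is illustrative only, not any vendor's tooling: the package names and the `vet_suggested_package` helper are hypothetical, and a real deployment would check against an organization's curated internal index rather than a hardcoded set.

```python
# Hypothetical guardrail against "AI package hallucination": before installing
# a dependency suggested by a generative AI assistant, check it against a
# vetted allowlist instead of trusting the suggestion blindly.
APPROVED_PACKAGES = {"requests", "numpy", "cryptography"}  # example allowlist


def vet_suggested_package(name: str, allowlist: set = APPROVED_PACKAGES) -> bool:
    """Return True only if the AI-suggested package is on the vetted allowlist."""
    return name.strip().lower() in allowlist


# A hallucinated name is often a plausible-looking variant of a real package.
suggestions = ["requests", "reqeusts-pro"]
for pkg in suggestions:
    verdict = "install" if vet_suggested_package(pkg) else "reject (unvetted)"
    print(f"{pkg}: {verdict}")
```

The same idea scales up to pinning dependencies to an internal mirror, so a name that exists nowhere but in a model's output can never reach production.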

Generative AI and security operations
Generative AI also holds great promise to transform security operations. We just have to look for ways to apply it for good and understand how to mitigate the bad and the ugly. Here are some best practices to consider.

Good: AI has a significant role in driving efficiency across the security operations lifecycle. Specifically, natural language processing is being used to identify and extract threat data, such as indicators of compromise, malware and adversaries, from unstructured text in data feed sources and intelligence reports so that analysts spend less time on manual tasks and more time proactively addressing risks.
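To make the extraction step concrete, here is a minimal sketch of pulling indicators of compromise out of unstructured report text. It deliberately uses regular expressions rather than a full NLP pipeline, and the patterns and `extract_iocs` helper are assumptions for illustration, not part of any product described above.

```python
import re

# Minimal sketch: extract common indicator-of-compromise types (IPv4
# addresses, MD5 and SHA-256 hashes) from unstructured report text.
IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "md5": re.compile(r"\b[a-fA-F0-9]{32}\b"),
}


def extract_iocs(text: str) -> dict:
    """Return each IOC type found in the text, de-duplicated in first-seen order."""
    found = {}
    for ioc_type, pattern in IOC_PATTERNS.items():
        matches = list(dict.fromkeys(pattern.findall(text)))
        if matches:
            found[ioc_type] = matches
    return found


report = "Beacon to 203.0.113.7; dropper MD5 d41d8cd98f00b204e9800998ecf8427e."
print(extract_iocs(report))
```

A production pipeline would add context (defanged indicators like `203.0.113[.]7`, entity names, adversary aliases), which is where trained language models earn their keep over simple patterns.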

Machine learning (ML) techniques are being applied to make sense of all this data in order to get the right data to the right systems and teams at the right time to accelerate detection, investigation and response. And a closed-loop model with feedback ensures AI-capable security operations platforms can continue to learn and improve over time.

With Generative AI pushing even further, the ability to learn from existing malware samples and generate new ones is just one example of creating outputs that can aid in detection and strengthen resilience.

Bad and Ugly: Security operations can take a turn for the worse when we start to think we can hand the reins over to AI models completely. Humans need to remain in the loop because analysts bring years of learning and experience that ML and Generative AI must build over time with our help if they are to act as our proxy. More than that, analysts bring trusted intuition – a gut feeling that is out of scope for AI for the foreseeable future.


Equally important, risk management is a discipline that combines IT and business expertise. Humans bring institutional knowledge that needs to be married with an understanding of technical risk to ensure actions and outcomes are aligned with the priorities of the business.

Additionally, Generative AI is a horizontal technology that can be used in a wide variety of ways. Looking at its application too broadly may create additional challenges. Instead, we need to focus on specific use cases. A more measured approach with use cases that are built over time helps unleash the good while reducing any gaps that threat actors can exploit. Generative AI holds great promise, but it’s still early days. Thinking through the good, the bad, and the ugly now is a process that affords us “the negative focus to survive, but a positive one to thrive.”

Written By

Marc Solomon is Chief Marketing Officer at ThreatQuotient. He has a strong track record driving growth and building teams for fast growing security companies, resulting in several successful liquidity events. Prior to ThreatQuotient he served as VP of Security Marketing for Cisco following its $2.7 billion acquisition of Sourcefire. While at Sourcefire, Marc served as CMO and SVP of Products. He has also held leadership positions at Fiberlink MaaS360 (acquired by IBM), McAfee (acquired by Intel), Everdream (acquired by Dell), Deloitte Consulting and HP. Marc also serves as an Advisor to a number of technology companies.
