
Security Experts Describe AI Technologies They Want to See

SecurityWeek interviews a wide spectrum of security experts on AI-driven cybersecurity use-cases that are worth immediate attention.

The cybersecurity business runs on technology hype cycles and buzzwords. From zero trust to blockchain, digital transformation to posture management, defenders have been on a constant search for transformational, leapfrog technologies to stem the flow of cyberattacks.

Over the last year, Artificial Intelligence (AI) and Large Language Models (LLMs) have exploded as the most exciting frontier for security innovation, with OpenAI’s ChatGPT showcasing the power of generative-AI applications.

It’s still early days, but the excitement and promise of AI’s predictive capabilities and rapid data processing feel very real. Venture capital investors are throwing big money at startups promising to integrate AI into cybersecurity, and we’re starting to see unique use-cases appear in the areas of fuzz testing and vulnerability research, automation of incident response and threat hunting, and user education and training.

SecurityWeek interviewed a wide spectrum of security experts on AI-driven cybersecurity use-cases that make the most sense. These professionals, ranging from seasoned CISOs to pioneering researchers, envision a future where AI is not just a reactive tool, but a proactive guardian. Their dream use cases span from real-time, adaptive defense systems that evolve autonomously to counteract new threats, to sophisticated insider threat detection using behavioral analytics.

Ryan Hurst, CEO, Peculiar Ventures:

At its core, AI is about processing vast amounts of data and deriving meaningful insights from it. Historically, we have either failed to extract significant value from the data we already possess or taken too long to make sense of it. I would first consider areas where we generally struggle to extract value from data. Then, I would think about how examining this data on a global scale could potentially be beneficial.

We are getting much better at designing systems capable of sharing refined information while preserving the privacy obligations and needs of each party. This skill in intelligently extracting value from data, combined with such systems, could be exceptionally powerful. For instance, state actors might aim to compromise the US energy sector and begin probing various energy companies until they gain a foothold somewhere. Could AI and trusted computing concepts be amalgamated to enable entire industries to participate in clearinghouses of events? These could maintain k-anonymity while allowing them to share anonymized, distilled data to enhance the security of the entire sector.
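As a toy illustration of the clearinghouse idea Hurst describes, the snippet below sketches a threshold-release gate, a rough stand-in for the k-anonymity property he mentions: an indicator is only shared with the sector once at least k distinct members have reported it, so no single reporter is identifiable. All names, the indicator, and the threshold are invented for illustration.

```python
# Toy sketch of a k-anonymity-style release gate for a sector-wide
# threat clearinghouse. Member IDs, indicator, and K are illustrative.
from collections import defaultdict

K = 3  # anonymity threshold: release only after K distinct reporters

reports = defaultdict(set)  # indicator -> set of reporting members

def submit(member_id, indicator):
    """Record a member's sighting; return the indicator once releasable."""
    reports[indicator].add(member_id)
    if len(reports[indicator]) >= K:
        return indicator  # K+ reporters: no single source is identifiable
    return None

submit("utility-a", "203.0.113.7")           # withheld (1 reporter)
submit("utility-b", "203.0.113.7")           # withheld (2 reporters)
print(submit("utility-c", "203.0.113.7"))    # released after 3rd reporter
```

A production system would add privacy-preserving computation so members never see each other's raw submissions, which is the harder part Hurst points at.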

While human nature and commercial interests have often resisted information sharing for such value propositions, the larger challenge has been the practicality of doing this at scale. These two technological trends are driving innovations that could bridge this gap. Therefore, my first wish (in no particular order) would include making this concept a reality.

Kymberlee Price, CEO/Founder, Zatik Security:

While science fiction has inspired a great many modern technologies, I’m particularly interested in what AI innovations can improve in cybersecurity in this decade, because companies need reliable help now. I don’t dream of a future where security engineers and program managers are replaced by autobots; I hope for meaningful impact through improved security insights and efficiency. And while AI for log analysis and breach detection is top of mind for a lot of companies, that market is already filled with potential solutions in development, and I want to prevent the breach, not just detect it faster.

Often the problems leading to breaches could be avoided during design – a phase of engineering where security engineers are frequently not included. Doing a threat model as part of your final security review is too late, just as throwing more security tools in the CI/CD pipeline doesn’t impact Secure by Design – if you’re only finding issues during deployment or in production, you’ve missed the design phase entirely. Today, software and infrastructure design is largely an artisanal hand-crafted process with few assistive tools or products available – this seems like a real area of market opportunity to me. 

My wish list for AI is primarily assistive, focused on improving software quality and human efficiency – once we master assistive AI, we’ll be able to lean into greater autonomy and generative AI.

Chris Castaldo, CISO, Crossbeam:

There are a few things I have seen consistently over my career building cybersecurity programs at companies. I’d love a co-pilot tool that I could point at all of my existing tooling, infrastructure, and systems and get immediate answers back. Lots of time is spent understanding where the risks are in a business when you first join. Then I’d want that same tool to take actions, beyond what we think of as repetitive SOAR tools that run the same workflow over and over.

Jon Callas, Distinguished Engineer, Zatik Security:

If I were to wave a magic wand and create an AI system with today’s technology that could really make a difference in security, it would tackle a long-time threat: bogus emails.

The obvious place this would be useful is blocking spam, phishing, and business email compromise (BEC), since more than 90% of all cyberattacks begin with phishing. It would be especially good when run within a large ISP, so that newly tuned training would help the entire user base avoid being presented with fraudulent messages. The sort of classification of inconsistencies in emails that is hard to do with regular expressions and Bayesian statistics in today’s spam filters is the bread and butter of AI systems.
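For context on the baseline Callas mentions, here is a minimal naive Bayes word-count classifier of the kind today's spam filters rely on (the training messages are invented). The subtle inconsistencies he describes are exactly what this statistical approach misses and what LLM-based classifiers could catch.

```python
# Minimal naive Bayes spam classifier: the statistical baseline used by
# today's spam filters. Training data is invented for illustration.
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) with label 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        # log prior + log likelihoods with add-one smoothing
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

training = [
    ("win free prize now", "spam"),
    ("urgent wire transfer needed", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch on friday?", "ham"),
]
counts, totals = train(training)
print(classify("free prize transfer", counts, totals))  # spam
print(classify("friday meeting notes", counts, totals))  # ham
```

Word counts can only catch messages that reuse known spammy vocabulary; a well-written BEC email that merely "sounds wrong" in context sails straight through, which is where Callas sees AI adding value.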

Furthermore, AI could do personalized classification of work messages; marketing messages we don’t want to unsubscribe from but typically just delete; and personal messages. An LLM can even catch many impersonators, and more. In more advanced modes, it could guide us and flag things that might be unwanted, as well as surfacing things it predicts we might find interesting.

Costin Raiu, APT malware reversing expert:

I would like a competent AI watching all the logs in my network in real time, 24 hours a day, emulating the eye of a skilled analyst to find suspicious things. All the logs: from my mobile phone, from my router, from my computers and all smart devices in the network. Just watch the logs and tap me on the shoulder whenever something suspicious happens.
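As a toy illustration of that "tap on the shoulder" (event names and counts are invented), a simple frequency baseline stands in here for the trained model a real system would use:

```python
# Toy anomaly flag: alert on log event types that are rare relative to a
# learned baseline. Event names and counts are invented for illustration.
from collections import Counter

baseline = Counter({
    "ssh_login_ok": 950,
    "cron_run": 800,
    "dns_query": 5000,
})

def suspicious(event_type, threshold=0.001):
    """Flag events rarer than `threshold` of baseline traffic."""
    total = sum(baseline.values())
    freq = baseline.get(event_type, 0) / total
    return freq < threshold

print(suspicious("dns_query"))           # False: routine traffic
print(suspicious("ssh_login_root_3am"))  # True: never seen before
```

The analyst's eye Raiu wants goes far beyond rarity, reasoning about sequences and context, but the plumbing is the same: every log line scored as it arrives.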

For malware researchers, I want AI to write and understand APT reports. Simply having the ability to ask an AI that has been trained on all the private APT reports from all the different vendors. Imagine an AI reading and understanding all the APT reports, not just from one vendor but from all the vendors, and then you simply ask a question: what should my company worry about today? The answer to that question, which will be different every day, is extremely valuable; it can save time and make a huge difference for a company, no matter how big or how small.

Rob Ragan, Office of the CTO, Bishop Fox:

The fundamental security challenge that my magic wand would fix involves the use of AI for self-healing technology. Specifically, this would entail providing AI with any complex system and associated code base as input, and having the AI serve as a security engineering team. 

The AI would embrace the role of Security Engineers and Site Reliability Engineers (SREs) that the system needs. This includes understanding business requirements, constructing security requirements, developing cybersecurity-focused unit tests for those requirements, and fixing the code or component that fails those security unit tests.

We could employ multiple AI agents to help evaluate the business purpose and functional requirements. These agents would be responsible for more consistently, efficiently, and transparently evaluating the threats that the target system will face and what it needs to be resilient to. Then, they would undertake the challenging task of defining how to evaluate – through unit tests – whether the system is resilient enough, and even take it a step further by coding the necessary fixes or modifying the configurations if it’s not. In this way, we can achieve self-healing computer systems.
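A hypothetical example of the kind of security unit test such agents might derive from a requirement like "user-supplied filenames must never escape the upload directory"; the `safe_join` helper, paths, and test names are all invented for illustration.

```python
# Sketch: a candidate fix an agent might propose, plus the security unit
# tests that encode the threat model rather than just the happy path.
# POSIX paths assumed; all names here are invented for illustration.
import os

def safe_join(base, user_path):
    """Join a user-supplied path under base, rejecting traversal."""
    full = os.path.normpath(os.path.join(base, user_path))
    if not full.startswith(os.path.abspath(base) + os.sep):
        raise ValueError("path escapes base directory")
    return full

def test_blocks_traversal():
    try:
        safe_join("/srv/uploads", "../../etc/passwd")
        assert False, "traversal not blocked"
    except ValueError:
        pass

def test_allows_normal_file():
    assert safe_join("/srv/uploads", "report.pdf") == "/srv/uploads/report.pdf"

test_blocks_traversal()
test_allows_normal_file()
```

In Ragan's model, the agents would generate hundreds of tests like these from the threat analysis, run them continuously, and patch the code whenever one fails.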

There are so many companies and ideas that were ahead of their time. A lot of them should be revisited with the help of augmenting intelligence and the next phase of automation capabilities. 

Ryan Hurst, Peculiar Ventures:

Another area where we have a lot of data that we do not adequately utilize is binary and configuration data. When examining a deployed system, how do we assess which parts of the code are reachable, whether firewalls would mitigate access to a given vulnerability, or if a change in access control is needed? 

Answering these kinds of questions requires time and expertise. However, this data is all structured, which makes it well-suited for the application of machine learning techniques. We could build models that understand cloud configurations, deployment scripts, system architecture, access control systems, and more. This would help us rapidly reason about true exposure and faults, enabling more proactive responses to security issues. It would also allow for more data-backed staffing requests, helping to better secure the systems we rely on daily.
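As a toy illustration of reasoning over that structured data (the rules and function are invented), even plain code over a parsed firewall config can answer whether a vulnerable service is actually exposed; Hurst's point is that models could do this across whole architectures.

```python
# Toy reachability check over a (hypothetical) parsed firewall config:
# is a vulnerable service actually exposed to the internet?
firewall_rules = [  # order matters: first match wins
    {"action": "allow", "src": "any", "port": 443},
    {"action": "allow", "src": "10.0.0.0/8", "port": 5432},
    {"action": "deny", "src": "any", "port": "any"},
]

def reachable_from_internet(port):
    """Return True if traffic from anywhere can reach the given port."""
    for rule in firewall_rules:
        if rule["port"] in (port, "any") and rule["src"] == "any":
            return rule["action"] == "allow"
    return False

# A CVE in the Postgres service (5432) is mitigated by the firewall;
# one in the TLS frontend (443) is truly exposed.
print(reachable_from_internet(5432))  # False
print(reachable_from_internet(443))   # True
```

Scaled up across cloud configurations, deployment scripts, and access control systems, this kind of automated reasoning is what would let teams triage by true exposure rather than raw CVE counts.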

Additionally, most organizations don’t really understand how bad (or good) they are relative to both competitors and regulatory requirements. I could see how, over time, machine learning and the same trusted computing concepts could be applied to help organizations understand where they stand relative to their peers. This understanding could help them make a case for further investment by mapping their business risks and ability to respond against their competitors.

With enough privacy-preserving computation and machine learning, you could even build predictive capabilities that could help initiate lockdowns in moments of increased risk, and maybe even simplify and automate the associated incident response should that fail.

Related: Google Brings AI Magic to Fuzz Testing With Eye-Opening Results

Related: OpenAI Turns to Security to Sell ChatGPT Enterprise

Related: HiddenLayer Raises Hefty $50M Round for AI Security Tech

Related: Microsoft Puts ChatGPT to Work on Automating Cybersecurity

Written By

Ryan Naraine is Editor-at-Large at SecurityWeek and host of the popular Security Conversations podcast series. He is a security community engagement expert who has built programs at major global brands, including Intel Corp., Bishop Fox and GReAT. Ryan is a founding-director of the Security Tinkerers non-profit, an advisor to early-stage entrepreneurs, and a regular speaker at security conferences around the world.
