Connect with us

Hi, what are you looking for?

SecurityWeekSecurityWeek
AI in Cybersecurity
AI in Cybersecurity

Artificial Intelligence

Cyber Insights 2024: Artificial Intelligence

AI will allow attackers to improve their attacks, and defenders to improve their defense. Over time, little will change — but the battle will be more intense.

SecurityWeek’s Cyber Insights is an annual series discussing the major pain points for cybersecurity practitioners. These pain points differ year by year in line with the evolving cyber ecosphere: this year we include discussion on current pressures on the role of CISO, including the new SEC liability rules. Overall, Cyber Insights 2024 talks to hundreds of industry experts from dozens of companies covering seven primary topics. The purpose is to evaluate what is happening now, and to prepare for what is coming in 2024 and beyond.

Artificial Intelligence (AI) has been used within cybersecurity for well over a decade – largely to search out the threat-needle in the threat-haystack. 2023, however, saw the emergence and general availability of large language model (LLM) AI, commonly known as gen-AI; with ChatGPT being the iconic frontrunner. AI is now front and center for everyone.

In this Insight we will look at how AI will progress in 2024 and beyond. “Generative AI apps ChatGPT and DALL-E, released by OpenAI [late 2022] are the most disruptive technology developments since the world wide web,” comments Paul Brucciani, cyber security advisor at WithSecure.

(It is worth noting that the evolution of AI will permeate all other subjects in this series of Cyber Insights 2024.)

AI will affect areas we haven’t even considered; for example, AI will probably expedite changes to the patent laws. In late December 2023, the UK’s Supreme Court agreed with a similar ruling in the US (the US Supreme Court refused to hear a challenge against a US Patent and Trademark Office ruling): products produced by AI cannot be patented to an AI inventor. Since the owner or developer of the AI cannot claim to be the inventor of the product, the result is clearly untenable.

On top of all other predictions for AI in 2024, we will probably see lawmakers discussing changes to patent law to solve the issue.

AI in 2024 and beyond

Before considering the potential cybersecurity threats that may come from the adversarial use of AI in 2024, we need to consider whether the AI furor is genuine or hype. “The threat posed by AI to cybersecurity is real, but there is certainly a ‘hype’ element in how big a share of highly sophisticated new AI attacks and data breaches are going to pose,” comments Andrew Shikiar, executive director at FIDO Alliance.

Phishing

Phishing is the cyber threat most discussed by security professionals and the media. There is little doubt that gen-AI can provide the means to supercharge attacks: word-perfect, multi-lingual, and super-scaled.

There is more room for doubt, however, whether this is already happening, or will happen in 2024. Shikiar believes it has started, with more than half the population believing there has been an increase in the volume and an improvement in the sophistication of attacks.

Advertisement. Scroll to continue reading.

Kevin Surace, chair at Token, believes, “2024 is poised to bring an onslaught of AI generated phishing attacks.” Already, adds Matt Waxman, SVP and GM for data protection at Veritas Technologies, “Tools like WormGPT make it easy for attackers to improve their social engineering with AI-generated phishing emails that are much more convincing than those we’ve previously learned to spot.”

But Andy Patel, formerly senior researcher at WithSecure, raises a question. “The creation of such content requires expertise in prompt engineering – knowing which inputs generate the most convincing outputs. Will prompt engineering become a service offering? Perhaps.” 

If AI-as-a-service does emerge in 2024, it will undoubtedly lead to an increase in the volume of phishing. “These AI models can provide novice malicious actors with advanced capabilities… which were once the domain of more skilled hackers,” warns Ivan Novikov, founder and CEO at Wallarm The service side of the criminal underground is likely to continue its current course of turning wannabes into effective criminals.

However, at the beginning of 2024, we simply don’t know when gen-AI will be incorporated into mass mainstream phishing, nor how effective it will be.

Deepfakes: phishing’s video and voice accessory

Everybody is waiting for the true arrival of deepfakes into the attackers’ armory. It has been predicted for many years and has been used on occasion but is not yet standard attacker practice. Nevertheless, the inclusion of AI-generated voice and video into targeted spear-phishing and BEC will make attacks far more believable. It may happen during 2024; but the jury is still out.

“Absolutely it will improve and will be used in attacks,” warns Gerald Auger, consultant and adjunct professor at The Citadel (the military college of South Carolina). “Deepfake video of a CFO coupled with the deepfake audio capabilities (especially if there is enough corpus of audio sampling — think of the CFO for a Fortune 50 company that has done public speaking) will be enough to generate compelling content to trick financial analysts into moving money to threat actor controlled accounts.”

Shikiar believes that deepfake development will continue and improve, but will result in few data breaches in 2024. “Social engineering is already the cause of the majority of attacks, and now any fraudster, anywhere in the world, can generate word-perfect phishing attacks that are near-impossible to detect — at a fraction of the effort of creating a deepfake.” It’s not that deepfakes aren’t effective, it’s just that they aren’t necessary. 

Disinformation in elections year

Elia Zaitsev, Global Chief Technology Officer (CTO) at CrowdStrike

2024 is likely to see a spike in malicious disinformation because of elections in the US and Europe, and wars in Ukraine and Gaza. Elia Zaitsev, CTO, CrowdStrike, comments, “Nation-state adversaries—such as Russia, China, and Iran— have an established history of attempting to influence or subvert elections globally through cyber means and information operations. These adversaries have leveraged blended operations to include elements of ‘hack-and-leak’ campaigns, integration of modified or falsified content, and amplification of particular materials or themes.” The arrival and use of gen-AI will make such campaigns easier and more scalable.

“AI will be used to create disinformation and influence operations in the runup to the high-profile elections of 2024. This will include synthetic written, spoken, and potentially even image or video content,” warns Patel. 

But it’s not simply down to AI. “Disinformation is going to be incredibly effective now that social networks have scaled back or completely removed their moderation and verification efforts,” he continues. “Social media will become even more of a cesspool of AI and human-created garbage.”

Shivajee Samdarshi, CPO at Venafi, continues the same theme: “With major elections in the US, UK, and India, we are likely to see AI supercharging election interference in 2024… the concept of trust, identity and democracy itself will be under the microscope.” 

On a more positive note, Christian Schnedler, CISO and cyber practice lead at WestCap, thinks the surge in misusing identities to spread misinformation may prove to be a tipping point in better identity protection in the US – using a mobile driver’s license. “Uptake by consumers has been slowed by the lack of commercial application,” he comments. 

“This may all change in the wake of the disinformation campaigns America is about to experience. Once mDLs become common, the ability to authenticate a digital persona in a privacy-preserving fashion will become ubiquitous. This includes automatically signing up for new accounts by providing de minimis information to the merchant backstopped by authoritative (government) data. Moreover, sign-on and checkout will become as simple as a tap-to-pay transaction.”

Home-grown pilots and protecting the training data

Gen-AI is currently considered a co-pilot. The artificial intelligence is designed to be used by and under the influence of human operators. The development of fully automatic AI pilots is the next step, and organizations will increasingly develop their own pilots for their own applications.

Jonathan Marks, president and co-founder of Quorum, provides a co-pilot example: “As AI advances, we’ll shift from general-purpose text and experience models to more tailored, product-focused ones. Instead of creating broad solutions, companies will design specialized AI assistants explicitly for integration into specific applications. Take government affairs, for example. These AI applications could work alongside users to analyze numerous legislative data sets, identify patterns, and provide contextual recommendations within the parameters of their own purpose-built models.”

This is just the beginning. Sacha Labourey, co-founder and chief strategy officer at Cloudbees, continues: “2024 will see the emergence of true ‘pilot’ solutions that will not just propose changes, but perform them, relying on the overall security and stability test harness of the underlying product to make sure they are the proper changes.”

This will bring a new addition to the attack surface. The old adage of garbage in, garbage out has returned on steroids. If the training data used by in-house pilots can be poisoned by adversaries, the output can be similarly poisoned. Think of it like a very large buffer overflow attack – adversaries could potentially direct pilots to automatically attack the owners. Protecting the in-house training data lake will be a new priority for data defenders.

In this sense, gen-AI will deliver a double whammy for security teams starting from 2024. There is the external threat from adversaries using AI directly against organizations, and there is an internal threat from misuse and abuse of internal gen-AI applications — either by staff or adversaries.

For internal apps, OWASP published the LLM AI Security & Governance Checklist on December 6, 2023.

“The OWASP Top 10 for Large Language Model Applications project,” says OWASP, “aims to educate developers, designers, architects, managers, and organizations about the potential security risks when deploying and managing Large Language Models (LLMs). The project provides a list of the top 10 most critical vulnerabilities often seen in LLM applications, highlighting their potential impact, ease of exploitation, and prevalence in real-world applications.” 

This is a current threat beyond the adversarial use of AI. “We saw the landscape of threats expand dramatically, from the jailbreak of Chevrolet’s Chatbot to prompt and data leakages in OpenAI’s custom GPTs,” comments Alex Polyakov, CEO and co-founder at Adversa AI; “not to mention numerous prompt Injection cases in Google Docs applications.” The OWASP LLM top ten should be required reading for all security personnel in January 2024.

Upgrading automated security defenses may downskill security staff

One of the biggest reasons for integrating AI throughout security during 2024 will be to increase automation. If the rules are right, AI will do more, faster, and without human error. This will reduce pressure on existing staff levels, solving one of security’s enduring problems: how to do more with less.

“In 2024, we’ll see the proliferation of AI and generative AI platforms being integrated into security tools, allowing huge amounts of data to be processed much more quickly, which will speed up operations such as instant response,” comments James Hinton, Director of CST services at Integrity360.

“Where AI can triage data really quickly and provide the results, organizations won’t necessarily require skilled analysts to write custom queries. Indeed, AI can be used to complete such tasks, freeing up highly skilled security professionals to focus on higher value tasks.”

Releasing skilled professionals is the glass half full view. There is also a glass half empty view: will those highly skilled security professionals remain necessary? What about the longer-term?

“As the integration process happens, we’ll also see younger generations not get the same hands-on experiences around workplace tasks as they’re used to, specifically in troubleshooting, outages, and security incidents, as much of this will be automated by AI,” warns Carl Froggett, CIO at Deep Instinct. 

“The question for business leaders then becomes, how do you continue to build and shape people’s skills and careers when opportunities to learn the basic building blocks in the workforce have been removed by AI involvement?”

The knock-on effect may be good for the skills shortage since skills will be less necessary. But employers may start recruiting fewer, less skilled, and lower paid youngsters.

Like so many other questions at this dawn of the AI era, we won’t know the answer until it happens, slowly over the next few years.

Artificial general intelligence

The realization of artificial general intelligence (AGI) is thought to be many years away — but we may see increasing discussion in 2024. AGI is a machine that can learn without being told, solve problems, and effectively reach its own conclusions — something more like the human brain and its thought processes than current gen-AI.

The rumor mill is already primed. In November 2023, Sam Altman was briefly removed from his role as CEO of OpenAI (he was gone for just a few days). No official reason was given for his removal; but Reuters reported on an internal memo that discussed a project known as Q* and its potential to lead to AGI. Reuters hinted that it may have some relevance to the sacking — but it’s all just rumors at this stage.

Also in November 2023, Google DeepMind published a paper titled Levels of AGI: Operationalizing Progress on the Path to AGI. The purpose was to define AGI, and the paper developed six principles: focus on capabilities, not processes; focus on generality and performance; focus on cognitive and metacognitive tasks; focus on potential, not deployment; focus on ecological validity; focus on the path, not a single endpoint.

Noticeably, these principles measure the progress of development, and specify what AGI should achieve without specifying what it should not do. There is no place for Asimov’s first law. There will be much discussion about AGI; and that discussion may start to emerge in 2024.

But it’s worth noting the view of Daniel Li, CEO and co-founder at Plus, and a partner with the Madonna Venture Group: “People regularly overestimate what can happen in a year and underestimate what can happen in 5 or 10 years. With the rapid pace of progress in AI, the world is going to look totally different in 10 years, whether we have gotten AGI or not.”

Polyakov thinks AGI could be here by 2028 – or even earlier. “Some sort of AGI prototype created by the combination of various models and code wrapping may be possible even within the next year or two.”

Peter and the AI wolf

Is the cybersecurity threat of AI real or hype? There is truth in both possible answers: the threat may be largely hype right now, but it will become real as AI continues to increase in capability and decrease in cost. The danger in crying ‘wolf’ too often is that we won’t recognize and will be il-prepared for the real wolf when it arrives.

The hype is not based on any inability of current AI to be a serious threat, but on the criminals’ lack of need to use it. Their current methods work well enough.

But, looking beyond the current low use of adversarial AI, what can we expect in the future, possibly as soon as the latter part of 2024? “We’re bracing for an escalation to multimodal attacks, where hackers simultaneously manipulate text, images, and voice to craft more elusive threats,” suggests Polyakov. 

“Another burgeoning frontier is the exploitation of autonomous AI agents. These nascent entities in the enterprise sphere, particularly in customer support, sales, or marketing, present ripe targets. As these agents become integrated into business processes, we expect complex, multi-step attacks, exploiting interactions between multiple AI agents.”

He continued, “LLMs and similar AI systems are like Swiss Army knives for cybercriminals. They can generate phishing emails indistinguishable from genuine ones, create fake news at an industrial scale, or automate hacking attempts. They’re essentially multiplying the workforce of cybercriminals by thousands without any overhead.”

It is possibly hype to say this is happening now – but it is coming, and we need to consider and implement defenses now. The best defense against adversarial AI is defensive AI – the use of AI to detect the use of AI. There is a hidden question here: can defensive AI get ahead of attacking AI. The answer depends on whether your glass is half full or half empty. 

“AI defense will never get ahead of AI offense,” comments Auger. “Historical reflection reveals that cyber defenders are locked in an act/react relationship with threat actors and their attack techniques.” This, he suggests, will not change.

Polyakov is more optimistic. “To counter AI weaponization, we need to fight fire with fire. Our defense systems must not just learn but adapt in real time,” he says. Automated AI red teaming platforms can learn about new vulnerabilities, construct attacks, and test them against AI defense before any breach occurs.

“The race between AI defense and offense is perpetual, but defense has a secret weapon,” he continues: “collaboration. By pooling collective intelligence and resources, defensive AI can stay a step ahead.”

Government and self-regulation

2024 will be a big year for government intervention in AI development and use. As always, the regulatory dilemma will be how to protect people and privacy without preventing innovation and economic growth. Traditionally, the EU focuses on the former while the US focuses on the latter. “It seems that the EU and the US are headed down diverging paths on AI regulation,” warns Harry Borovick, general counsel at Luminance. “This will cause a compliance headache for businesses who work across multiple markets.”

Brucciani adds, “There is a fundamental difference between the US and the EU over the question of who owns personal data: the collector or the subject? In Europe, it is the subject; in the US and elsewhere, it is the collector… So far, big players, like OpenAI, have avoided scrutiny by simply refusing to detail the data used to create their software. ”

First out of the regulatory blocks is the EU’s Artificial Intelligence Act, which achieved political agreement on December 8, 2023. “By guaranteeing the safety and fundamental rights of people and businesses,” said Ursula von der Leyen, President of the European Commission, “it will support the development, deployment and take-up of trustworthy AI in the EU.” The Act focuses on what it describes as ‘high risk’ areas including within the CNI, recruitment, and use by law enforcement. ‘Minimal risk’ areas, such as spam filters, are given a free pass. ‘Unacceptable risks’, such as manipulating human behavior, social scoring, and some predictive policing, is simply banned.

Anurag Gurtu, CPO at StrikeReady, sees one possible interesting effect of the Act. “The Act’s implications on open-source AI models, which are exempt from certain restrictions,” he notes, “could stimulate interesting shifts in the AI industry, potentially favoring open-source approaches.”

Patel considers this would be a Good Thing: “Open-source AI will continue to improve and be taken into widespread use. These models herald a democratization of AI, shifting power away from a few closed companies and into the hands of humankind.”

At the time of writing, the Act still needs formal approval by the European Parliament and Council, and there will be a staged transitional period before it fully takes effect. During this period, the EU will be initiating an AI Pact, comprising AI developers from around the world that will voluntarily commit to the Act provisions ahead of the legal deadlines.

Like GDPR, the AI Act is likely to have global ramifications — and like GDPR, the financial sanctions could be high: potentially up to €35 million (approximately $37.7 million) or 7% of global turnover.

The US approach to AI regulation is, so far, less restrictive. “The White House Office of Science and Technology Policy released the blueprint for an AI Bill of Rights on October 4, 2022,” explains Brucciani. “The blueprint is a non-binding set of guidelines for the design, development, and deployment of AI systems.”

The difficulty for any administration in applying a federal law across the entire United States is understood. But it will be interesting to see whether individual states start developing more restrictive state-wide AI regulations, loosely aligned with the EU Act, in the same way that many states developed their own privacy regulations loosely based on GDPR.

The irony, as with all cybersecurity regulations, is that while they place additional burdens on user organizations and restrictions on developers, those regulations have no effect on the bad guys.

Nevertheless, Suresh Vittal, CPO at Alteryx, believes that regulations such as the AI Act will promote greater use of gen-AI within organizations. He notes that the biggest user concerns are around privacy and a lack of trust in results, both of which are addressed in the AI Act. He believes, “In 2024, we’ll see enterprises advance their governance frameworks to unlock broad benefits and productivity gains from meaningful application of Generative AI.”

In parallel with regulations, we will continue to see government advice on how to use AI. In late November 2023, ‘global guidelines for AI security‘ were published. These were primarily developed by the UK’s NCSC and CISA in the US but endorsed by government agencies in 18 countries. 

Christina Montgomery, IBM
Christina Montgomery, Vice President and Chief Privacy & Trust Officer for IBM.

On December 11, 2023, NIST published the draft SP 800-226, Guidelines for Evaluating Differential Privacy Guarantees, described as ‘guidance on evaluating a privacy protection technique for the AI era’.

“In 2024, we’ll make strides in governments and corporations working together to advance meaningful movement on AI regulation and responsible adoption. We’re already seeing this with the EU — they just announced a provisional agreement on the world’s first comprehensive AI legislation that focuses on regulating high-risk applications of AI while promoting transparency, explainability, and safety among all AI models,” says Christina Montgomery, VP, chief privacy and trust Officer, and AI ethics board chair at IBM.

“I’m confident that we’ll see collaboration in the US as well. We’ve [IBM] advocated for policymakers and the government to reinforce trust through risk-based precision regulation of AI that prioritizes accountability and supports open innovation, not a licensing regime. At the same time,” she adds, “companies must be proactive and ensure the AI they build and use is explainable, transparent, and fair. This will require a multi-stakeholder approach to be successful – that’s why IBM co-founded the AI Alliance, a diverse group of technology creators, developers, and adopters that is joining forces to advance safe, responsible AI rooted in open innovation.”

The 2024 AI threat

Science we don’t understand is usually dubbed magic. That’s where we are with artificial intelligence, and as with all magic, there are both adherents and skeptics. We don’t understand what it can and cannot do, and we are in awe of the threat it brings to cybersecurity. 

But it isn’t magic.

Yes, it will enable more, faster, and better disguised attacks. But the targets are not changed. The targets will remain credentials, PI, data, and extortion. Defenders have been protecting these for years. It will be necessary to improve our defenses in the AI era, but if we continue to protect our assets, we can still block this new magic.

We must double down on zero trust defenses and network anomaly detection and be more aware of supply chain threats. We must improve our corporate security awareness, anti-phish staff training, and basic security hygiene. We must close the API doorway we keep leaving open. We must, in short, be more efficient at what we already do. And, of course, we can add our own AI into the defensive mix.

AI will allow attackers to improve their attacks, and defenders to improve their defense. Over time, little will change — but the battle will be more intense.

Meanwhile, as Ilia Kolochenko, chief architect at ImmuniWeb, comments, “Despite many sensational reports produced earlier this year projecting a looming surge of cybercrime due to the novel capabilities of generative AI, we will unlikely see a tectonic change there next year or even in 2025. Modern cybercrime is a mature, highly profitable, and well-organized industry. Therefore, AI disruption will have little impact on it.” The attackers simply don’t need AI to carry on being successful.

In short, 2023 was a year of AI hype, 2024 will see the beginning of AI reality, and 2025 will likely see its delivery.

Related: National Security Agency is Starting an Artificial Intelligence Security Center

Related: White House Offers Prize Money for Hacker-Thwarting AI

Related: ‘Grim’ Criminal Abuse of ChatGPT is Coming, Europol Warns

Related: SecurityWeek Cyber Insights 2023 | Artificial Intelligence

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security; and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.

Trending

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest threats, trends, and technology, along with insightful columns from industry experts.

Join the session as we discuss the challenges and best practices for cybersecurity leaders managing cloud identities.

Register

SecurityWeek’s Ransomware Resilience and Recovery Summit helps businesses to plan, prepare, and recover from a ransomware incident.

Register

People on the Move

Mike Dube has joined cloud security company Aqua Security as CRO.

Cody Barrow has been appointed as CEO of threat intelligence company EclecticIQ.

Shay Mowlem has been named CMO of runtime and application security company Contrast Security.

More People On The Move

Expert Insights

Related Content

Artificial Intelligence

The CRYSTALS-Kyber public-key encryption and key encapsulation mechanism recommended by NIST for post-quantum cryptography has been broken using AI combined with side channel attacks.

Artificial Intelligence

The release of OpenAI’s ChatGPT in late 2022 has demonstrated the potential of AI for both good and bad.

Artificial Intelligence

ChatGPT is increasingly integrated into cybersecurity products and services as the industry is testing its capabilities and limitations.

Artificial Intelligence

The degree of danger that may be introduced when adversaries start to use AI as an effective weapon of attack rather than a tool...

Artificial Intelligence

Two of humanity’s greatest drivers, greed and curiosity, will push AI development forward. Our only hope is that we can control it.

Artificial Intelligence

Microsoft and Mitre release Arsenal plugin to help cybersecurity professionals emulate attacks on machine learning (ML) systems.

Application Security

Thinking through the good, the bad, and the ugly now is a process that affords us “the negative focus to survive, but a positive...

Artificial Intelligence

Exposed data includes backup of employees workstations, secrets, private keys, passwords, and over 30,000 internal Microsoft Teams messages.