
What If the Current AI Hype Is a Dead End?

If we should face a Dead-End AI future, the cybersecurity industry will continue to rely heavily on traditional approaches, especially human-driven ones. It won’t quite be business as usual though.


As I discussed in my previous column on Cybersecurity Futurism for Beginners, we are applying methods and approaches commonly used in futures studies, especially horizon scanning and scenario planning, to explore future scenarios for how AI such as LLMs may impact security operations going forward.

To quickly rehash, horizon scanning is not strictly speaking about predicting the future. Rather, it’s about the early detection of weak signals to identify drivers of emerging trends. We’re not trying to identify a single expected future. Instead, we are describing a range of possible futures (Four Futures Model). Planners can then use these futures to further develop scenarios to aid in risk assessment, strategic planning, or similar tasks.

AI Future #1: Dead End AI

The hype man’s job is to get everybody out of their seats and on the dance floor to have a good time.

Flavor Flav

This week we posit a future we’re calling “Dead End AI”, where AI fails to live up to the hype surrounding it. We consider two possible scenarios in such a future. Both have similar near- to mid-term outcomes, so we can discuss them together.

Scenario #1: AI ends up another hype cycle like crypto, NFTs, and the Metaverse.

Scenario #2: AI is overhyped and the resulting disappointment leads to defunding and a new AI winter.

In a Dead-End AI future, the hype currently surrounding artificial intelligence ultimately proves to be unfounded. The excitement and investment in AI dwindle as the reality of the technology’s limitations sets in. The AI community experiences disillusionment, leading to a new AI winter where funding and research are significantly reduced.


Note that this future does not imply, for example, that machine learning has no beneficial applications at all, or that AI is theoretically infeasible. It means that, due to various limitations and constraints, the current wave of AI advances will not lead, through a progressive transition, to full-blown artificial general intelligence (AGI), a.k.a. the technological singularity.

Analysis

We will be discussing key external factors that may contribute towards a Dead-End AI outcome. We will cite signals, strong and weak, where available. You may notice how closely intertwined many of the factors are, and you will also begin to see how some will reappear in slightly different contexts throughout our entire series.

Signals Supporting the Dead-End AI Future

We’ll go through some of the signals and trends that may indicate a Dead-End AI Scenario below.

Economic factors

Investors are rushing into generative AI, with early-stage startup investors committing $2.2B in 2022 (contrast this with $5.8B for the whole of Europe). But if AI fails to deliver the expected return on investment, it will be catastrophic for further funding of AI research and development.

The venture capital firm Andreessen Horowitz (a16z), for example, published a report stating that a typical startup working with large language models spends 80% of its capital on compute costs. The report’s authors also state that a single GPT-3 training run costs anywhere from $500,000 to $4.6 million, depending on hardware assumptions.

Paradoxically, investing these moonshot amounts of money won’t necessarily guarantee economic success or viability, with a leaked Google report recently arguing that there is no moat against general and open-source adoption of these sorts of models. Others, like Snapchat, rushed to market prematurely with an offering, only to crash and burn.

High development costs like these, together with an absence of profitable applications, will not make investors or shareholders happy. They also result in capital destruction on a massive scale, with only a handful of cloud and hardware providers benefiting.

At the same time, a considerable number of business leaders are very vocal about aggressively adopting AI, with China’s Bluefocus Intelligent Communications Group Co. planning to fully replace external copywriters and editors with generative AI, and IBM estimating that up to 50% of regular work can be automated. If these assumptions prove false, many businesses will be caught out and face a painful readjustment.

Some critics are already urging caution, pointing to considerable, as-yet unquantifiable risks, for example regarding copyright for generatively created artwork. Real-world experiences with the generative AI economy have also been mixed: customers of freelance services like Fiverr and Upwork have reported a surge in low-quality work blatantly created using generative models.

Lastly, there is considerable skepticism and understandable fatigue toward yet another new, revolutionary technological innovation from the same businesses that only recently were selling us on crypto, NFTs, and the Metaverse.

Limited progress in practical applications

While we have made significant advancements in narrow AI applications, we have not seen progress towards true artificial general intelligence (AGI), despite unfounded claims that it may somehow arise emergently. Generative AI models have displayed uncanny phenomena, but these are entirely explainable, as are their limitations.

In between the flood of articles gushing about how AI is automating everything in marketing, development, and design, there is also a growing trickle of evidence that the field of application for these sorts of models may be quite narrow. Automation in real-world scenarios requires a degree of accuracy and precision (for example, when blocking phishing attempts) that LLMs aren’t designed to deliver.

Some technical experts are already voicing concern about the vast difference between what the current models actually do and how they are being described and, more importantly, sold, and are already sounding the alarm about a new AI winter.

AI holds immense promise in theory. But the practical applications could fall short of the hype, either due to feasibility issues, a lack of clear use-cases, or the inability to scale solutions effectively.

Privacy and ethical concerns

Another set of growing signals points to increasing concerns around privacy, ethics, and the potential misuse of AI systems. There are surprisingly many voices arguing for stricter regulations, which could hinder AI development and adoption, resulting in a dead-end AI scenario.

Geoffrey Hinton, one of the pioneers of artificial neural networks, recently quit his job at Google so that he could warn the world, free of any conflicts of interest, about what he feels are the risks and dangers of uncontrolled AI. The White House called a meeting with executives from Google, Microsoft, OpenAI, and Anthropic to discuss the future of AI. The biggest surprise is probably a CEO asking to be regulated, something that OpenAI’s Sam Altman urged the US Congress to do. One article even goes as far as advocating that we evaluate the beliefs of the people in control of such technologies, suggesting that they may be more willing to accept existential risks.

The tragedy in our scenario is that the hyperbole surrounding AI may itself drive conclusions about the ethical and societal implications of its potential misuse, leading to stringent regulations that stifle its development. Issues like privacy concerns, job displacement, and “deepfake” technology could provoke a backlash, prompting governments to impose heavy restrictions, even if in practice the technology has only a minor or iterative impact.

If AI systems are seen as untrustworthy, whether high-profile failures and biases in decision-making are real or imagined, public perception could turn against AI.

Environmental Impact

The promise of AI is not just about automation: it also has to be cheap, readily available and, increasingly, sustainable. AI may be technically feasible, but it may be uneconomic, or even bad for the environment.

Much of the available data indicates that AI technologies like LLMs have a considerable environmental impact. A recent study, “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models”, calculated that a typical conversation of 20 to 50 questions consumes 500ml of water, and that up to 700,000 liters of water may have been needed just to train GPT-3.

Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) concluded in its 2023 Artificial Intelligence Index Report that a single training run for GPT-3 emitted the equivalent of 502 tons of CO2, and that even the most energy-efficient model, BLOOM, emitted more carbon than the average American does in a year (25 tons for BLOOM, versus 18 for a person).

We’ve only just started the age of LLMs, and no doubt there will be efficiency improvements, but they will have to be substantial. Newer models are expected to become much larger, and adoption, while historically unparalleled, is still only just starting to kick off. If the promise of hyper-automation and on-demand virtual assistants for everything is to be realized, energy consumption will grow unsustainably. The question is: how will energy-hungry AI models thrive in a low-carbon economy?

Implications for Security Operations

If the current wave of AI technologies is being woefully overhyped and turns out to be a dead end, the implications for security operations could be as follows:

Traditional methods will come back into focus.

With AI failing to deliver on its promise of intelligent automation and analytics, cybersecurity operations will continue to rely on human-driven processes and traditional security measures.

This means that security professionals will have to keep refining existing techniques like zero-trust and cyber hygiene. They will also have to continue to create and curate an endless stream of up-to-date detections, playbooks, and threat intelligence to keep pace with the ever-evolving threat landscape.

Security operations management capabilities, especially the orchestration of workflows like detection and response across heterogeneous infrastructures, will remain difficult and expensive to do well.

Midsized organizations especially will need to rely more on services to close the resulting skills and personnel gap. New service offerings and models will evolve, especially if automation and analytics continue to require specialist expertise and skills.

On the bright side – at least the threat landscape will also only evolve at a human pace.

Automation will plateau.

Without more intelligent machine automation, organizations will continue to struggle with talent shortages in the cybersecurity field. For analysts, the manual workload will remain high. Organizations will need to find other ways to streamline operations.

Automation approaches like SOAR will remain very manual, and still be based on static, preconfigured playbooks. No- and low-code tooling may make automation easier and more accessible, but it will remain essentially scripted and dumb, as the sketch below illustrates.
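To make “scripted and dumb” concrete, here is a minimal sketch of what a static, preconfigured playbook boils down to. The alert types and response actions are hypothetical examples I’ve invented for illustration, not any particular SOAR product’s syntax.

# A minimal sketch of a static, preconfigured response playbook (Python).
# Alert types and actions are invented for illustration; real SOAR tools
# express the same idea in their own playbook formats.

def run_playbook(alert: dict) -> list[str]:
    """Map an alert to a fixed, preconfigured list of response actions."""
    playbook = {
        "phishing": ["quarantine_email", "reset_user_password", "notify_user"],
        "malware": ["isolate_host", "collect_forensics", "open_ticket"],
        "brute_force": ["block_source_ip", "enforce_mfa", "open_ticket"],
    }
    # Anything the playbook author did not anticipate falls through to a human.
    return playbook.get(alert.get("type"), ["escalate_to_analyst"])

print(run_playbook({"type": "phishing", "user": "jdoe"}))
# ['quarantine_email', 'reset_user_password', 'notify_user']

The point is that every branch has to be anticipated and maintained by a person; the automation itself contributes no judgment.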

However, even today’s level of LLM capability is already sufficient to automate basic log parsing, event transformation, and some classification use-cases. These sorts of capabilities will be ubiquitous in almost all security solutions by the end of 2024.
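As a rough illustration of the kind of basic classification use-case meant here, the sketch below asks a language model to label a raw log line. The call_llm helper is a hypothetical stand-in for whichever model API an organization actually uses, and the category names are examples only.

# Hypothetical sketch (Python): using an LLM to classify raw log lines.
# call_llm() is a placeholder for a real model API; categories are illustrative.

CATEGORIES = ["authentication_failure", "malware_alert", "network_scan", "benign"]

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call (e.g. a hosted chat-completion endpoint).
    raise NotImplementedError("wire up your model provider here")

def classify_log_line(line: str) -> str:
    prompt = (
        f"Classify the following log line into exactly one of these "
        f"categories: {', '.join(CATEGORIES)}.\n"
        f"Log line: {line}\n"
        "Answer with the category name only."
    )
    answer = call_llm(prompt).strip().lower()
    # Fall back to human triage if the model answers off-script.
    return answer if answer in CATEGORIES else "needs_review"

Even this naive pattern covers a surprising amount of rote triage work, which is part of why such features are spreading through security products so quickly.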

Threat detection and response will remain slow.

In the absence of AI-driven solutions, threat detection and response times can improve only marginally. Reducing the window of opportunity that hackers have to exploit vulnerabilities and cause damage will mean becoming operationally more effective. Organizations will have to focus on enhancing their existing systems and processes to minimize the impact of slow detection and response times. Automation will be integrated more selectively but aggressively.

Threat intelligence will continue to be hard to manage.

In the absence of AI-driven analysis, threat intelligence will remain difficult for vendors to gather and curate, and challenging for most end users to apply strategically. Security teams will have to rely on manual processes to gather, analyze, and contextualize threat information, potentially leading to delays in awareness of and response to new and evolving threats. The ability to disseminate and analyze large amounts of threat intelligence will have to be enhanced using simpler means, for example visualizations and graph analysis, as sketched below. Collective and collaborative intelligence sharing will also need to be revisited and revised.
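As one example of those simpler means, a small graph that links reports through shared indicators can already surface useful structure without any AI in the loop. The reports and indicators below are invented, and the sketch assumes the open-source networkx library is available.

# Sketch (Python): relating threat reports through shared indicators as a graph.
# Data is invented for illustration; assumes the networkx library is installed.
import networkx as nx

reports = {
    "report_A": {"198.51.100.7", "evil.example.com", "5f4dcc3b5aa765d6"},
    "report_B": {"198.51.100.7", "malware.example.net"},
    "report_C": {"evil.example.com", "203.0.113.9"},
}

G = nx.Graph()
for report, indicators in reports.items():
    for ioc in indicators:
        G.add_edge(report, ioc)  # bipartite report-to-indicator edges

# Indicators shared by the most reports are likely the best pivot points.
shared = sorted((n for n in G if n not in reports), key=G.degree, reverse=True)
print(shared[:3])

The same structure also lends itself directly to visualization, which is often all an analyst needs to spot overlapping campaigns.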

Renewed emphasis on human expertise

If AI fails to deliver, human expertise in cybersecurity operations will become even more critical. Organizations will need to continue to prioritize hiring, training, and retaining skilled cybersecurity professionals to protect their assets and minimize risks. The competition for security experts will remain fierce. New approaches and frameworks will need to be developed to better capture and maintain rare team know-how, improve knowledge management, and cooperate better across teams and domains.

Conclusion

If we should face a Dead-End AI future, the cybersecurity industry will continue to rely heavily on traditional approaches, especially human-driven ones. It won’t quite be business as usual though.

Even some of the less impressive use-cases we are hearing about already solve hard challenges. From translating human-language queries into SQL on the fly, to accelerating how fast an experienced developer can code, to classifying network events, the impact of LLMs is already being felt. It is a tide that lifts all boats: the defenders, but also the attackers.

More important, though, is whether progress will continue. Aside from technological constraints and limitations, a whole range of other drivers and trends can affect whether we experience a new AI winter. The perception alone of the risks that AI could pose, from societal destabilization to autonomous killbots, may be sufficient to trigger a regulatory clampdown that stifles any further progress. Setting expectations too high, too soon could also cause enough disappointment and disillusionment to usher in an AI winter.

Hype can cause far more damage than just annoying people. It can cause FOMO and result in financial loss. It can also kill otherwise promising technology prematurely.

Probability: 0.25 (Unlikely)

While there are many unknowns in the questions that arise when considering the future of AI and cybersecurity operations, I feel confident that the current hype around AI, and generative AI in particular, has at least some foundation in truth.

There are also many knowledgeable people who are bullish on generative AI and on other AI approaches yet to hit the mainstream. We are seeing the benefits already in some places. There are enough green shoots to indicate that this is a true breakthrough. But I see it as evolution, not revolution. If you’ve been using Grammarly, an AI writing assistant, most programming IDEs, or even just your mobile phone to message, you will have experienced this slow slope of evolution. What is different is that, for the first time, it’s production ready. We can now start to build really interesting things and see what happens when we stop trying to build faster horses, which is what Henry Ford quipped he would have ended up doing had he simply asked his customers what they wanted.

Still, the sense of hype is not wrong. There are a lot of well-meaning but not very technical commentators just doing what their job demands, a fair number of grafters, and no small number of desperate businesses and investors, all latching on to any and every claim, no matter how speculative or unsubstantiated.

So my estimate is that both of these futures, one in which there is no real basis to the hype around generative AI and one in which the hype leads to a new AI winter, are unlikely to occur.

What’s your estimate? Share it with me in the comments below, or on LinkedIn or Twitter.

Written By

Oliver has worked as a penetration tester, consultant, researcher, and industry analyst. He has been interviewed, cited, and quoted by media, think tanks, and academia for his research. Oliver has worked for companies such as Qualys, Verizon, Tenable, and Gartner. At Gartner he covered Security Operations topics like SIEM, and co-named SOAR. He is the Chief Futurist for Tenzir, working on the next generation of data engineering tools for security.
