Vector Embeddings – Antidote to Psychotic LLMs and a Cure for Alert Fatigue?

Vector embeddings – data stored in a vector database – can be used to minimize hallucinations from a GPT-style large language model AI system (such as ChatGPT) and perform automated triaging on anomaly alerts. 

In this article, we look at two cybersecurity applications of vector embeddings stored in a vector database. The first is to reduce the tendency of large language model AI (such as ChatGPT) to ‘hallucinate’; the second is to surface the genuinely concerning ‘alerts’ found within network logs – effectively, to triage them automatically.

We’ll discuss vector embeddings, hallucinations, and anomaly analysis.

Vector embeddings

A vector embedding is the numerical representation of an object. The object could be a paragraph of text, the content of an email, an image, a whitepaper, a complete book, or anything else that can be defined as information.

The conversion of information (text or images) into data (numbers) is automated by special-purpose embedding models. An entire book or a single paragraph can be summarized as a sequence of numbers, and the embedding model simplifies and speeds the generation of a vector database.

These models typically use neural network techniques, each with different characteristics. Examples include Word2vec (patented by Google), GloVe (an open-source project at Stanford University), BERT (developed by Google) and FastText (developed by Facebook’s AI Research lab).

An important general characteristic of vector embeddings is that similarity between the source objects is preserved in the output numerical data. “You can think of these vectors as points in space,” explains Elan Dekel, VP of product at vector database provider Pinecone. “Vectors that represent objects that are similar to each other will tend to cluster together – the distance between them will be less than objects that are very different from them.”
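To make Dekel’s ‘points in space’ description concrete, the short sketch below embeds three pieces of text and measures the distance between them. It uses the open-source sentence-transformers library and the all-MiniLM-L6-v2 model purely as an illustration – any of the embedding models named above would play the same role.

```python
# Illustrative only: embed three texts and compare their positions in vector space.
# sentence-transformers / all-MiniLM-L6-v2 are stand-ins for any embedding model.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # maps text to a 384-dimension vector

texts = [
    "Failed SSH login from an unknown IP address",
    "Invalid SSH password attempt from an external host",
    "Quarterly marketing budget approved by finance",
]
vectors = model.encode(texts)  # numpy array of shape (3, 384)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Closer to 1.0 means the two points are nearer to each other in space."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors[0], vectors[1]))  # high: both describe SSH failures
print(cosine_similarity(vectors[0], vectors[2]))  # low: unrelated subject matter
```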

The vector database provides the means to store these embeddings efficiently, index them, and search and query them. The clustering nature of the embeddings means that a single query can return multiple (or indeed, all) related objects – without knowing the precise content of those objects.
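A vector database industrializes that idea with efficient indexing, metadata, and scale. As a sketch only – reusing the vectors, model, and cosine_similarity helper from the previous snippet – a query amounts to finding the stored embeddings nearest to the embedding of the query itself. A real product replaces the linear scan below with proper approximate-nearest-neighbor indexing.

```python
# Toy in-memory "vector index" using a brute-force nearest-neighbor scan.
# A real vector database does the same job with proper indexing at scale.
class ToyVectorIndex:
    def __init__(self):
        self._items = []  # list of (vector, original object)

    def upsert(self, vector, obj):
        self._items.append((vector, obj))

    def query(self, query_vector, top_k=3):
        """Return the top_k stored objects most similar to the query vector."""
        scored = [(cosine_similarity(query_vector, v), obj) for v, obj in self._items]
        return sorted(scored, key=lambda pair: pair[0], reverse=True)[:top_k]

index = ToyVectorIndex()
for vec, text in zip(vectors, texts):
    index.upsert(vec, text)

# Query with text that was never stored; similar stored objects surface first.
query_vec = model.encode(["Repeated SSH authentication failures"])[0]
for score, text in index.query(query_vec, top_k=2):
    print(f"{score:.2f}  {text}")
```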

Hallucinations

‘Hallucination’ is the term most used to describe the tendency for a generative pre-trained transformer (GPT) large language model (LLM) application to return a wrong but compelling and believable response to a user prompt. We’ll call these applications GPTs. OpenAI’s ChatGPT is the foremost example.

It should be noted that the term ‘hallucination’ is not accepted by everyone because it is not physiologically accurate. Arthur Juliani, a postdoctoral researcher at Microsoft, considers terms like ‘confabulation’, ‘bullshitting’ and ‘making things up’ to be closer to the mark. Joe Regensburger, VP of research engineering at Immuta, provides a more detailed explanation.

“The term hallucination,” he told SecurityWeek, “is somewhat imprecise and is used to describe at least two types of outcomes: one in which the LLM produces something that should not follow from the initial prompt, and one in which the response to a prompt contains incorrect or misleading information. It is important to distinguish between these two cases. Recently the term ‘confabulation’ has been used to describe the case when an LLM produces reasonable-looking results that are inaccurate, incorrect, or misleading.”

Nevertheless, for our purposes, we will limit ourselves to the term ‘hallucination’ and its widely used meaning: the return of an inaccurate and effectively made-up response in a manner that is not easily distinguishable from an accurate response.

Dekel explains the mechanism behind such hallucinations. “Think of a large language model as comprising two primary features: its brain (what it has been trained to do); and its knowledge (the data it is trained to use). The brain is required to reply with information from the data.”

But what happens when the data doesn’t contain an answer to the question? The brain is then ‘forced’ to create an answer, because that is its primary function. With no accurate data to use, it simply makes up, or hallucinates, an answer. The problem for the user is that there is no easy way to know whether the response is fact or fiction.

“Once that brain is up and running,” continued Dekel, “it will happily generate text from its training data. It has no idea if it is the right answer – it’s just going to spit out an answer. A hallucination is when you ask it a question that cannot be accurately answered from the training set. So, the brain is just going to return something based on the weights that it was trained on. At that point, you get something that sounds really persuasive, but is just totally made up.”

The threat from, and solutions to, hallucinations

It makes sense for companies to harness the information provided by ChatGPT for their own internal decision-making. The problem is that the training data used by ChatGPT is drawn from the internet. The response to a prompt may be generalized rather than specific to the company concerned – or even worse, a complete hallucination.

The task for any business wishing to use ChatGPT for internal decision-making is to ensure it focuses on company-specific information, and to eliminate the potential for hallucinations.

Mo Patel, VP of technical account management at Tanium, explains the options. “Current state of the art for applying LLMs for internal usage present two options to companies: fine tuning a trained model [the brain] or using retrieval-augmented generation (RAG),” he said. “Fine tuning can be a highly reliable method for building an LLM that can be applied safely. However, there are several limitations to the approach: fine tuning requires open models ready to be modified, AI experts with fine tuning skills, and specialized hardware to fine tune models.”

RAG, however, has fewer drawbacks. “RAG is an approachable method for companies to build LLM applications with a lower barrier to implementation. A vector database provides companies with a reliable method for conducting RAG due to the enterprise grade features such as data management, governance, and security.” Basically, RAG involves providing additional, known-accurate and pertinent data to the GPT and requiring that it be used in generating the response.

“Say you have a query relevant to your own domain,” said Dekel. “First you do a query on your vector database. Because it uses vector embeddings, it can lead you to very specific paragraphs of text relevant to your query, but from multiple (if not all) the sources you have. You generate a new prompt to the LLM, adding the text retrieved through the vector database. The prompt says please answer this question with this context.”

The context is the text retrieved via the vector database. When you send it all to the LLM, the GPT uses its ‘brain’ to understand the submitted text, determine which passages answer the question, and generate an accurate, company-specific response.
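Pulled together, the retrieve-then-prompt flow Dekel describes looks roughly like the sketch below, reusing the toy index and embedding model from the earlier snippets. The call_llm function is a hypothetical placeholder for whichever LLM API an organization actually uses.

```python
# Minimal retrieval-augmented generation (RAG) sketch, reusing ToyVectorIndex and
# model from the earlier snippets. call_llm() is a hypothetical placeholder.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with your LLM provider's API call")

def answer_with_rag(question: str, index: ToyVectorIndex, top_k: int = 3) -> str:
    # 1. Embed the question and retrieve the most relevant internal passages.
    question_vec = model.encode([question])[0]
    passages = [obj for _, obj in index.query(question_vec, top_k=top_k)]

    # 2. Build a prompt that tells the model to answer only from that context.
    context = "\n".join(f"- {p}" for p in passages)
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 3. The retrieved context, not the model's general training data, grounds the answer.
    return call_llm(prompt)
```

Instructing the model to admit when the context lacks an answer is the part that directly targets hallucination: the model is given no reason to invent one.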

With both fine tuning and RAG, the purpose is to ensure the GPT has the information it needs to answer your query from your own data without resorting to hallucinations. Both work, but RAG is probably the simpler solution. 

“People have found that in many cases, RAG is a much more efficient solution,” says Dekel. “For example, if your internal dataset has lots of updates, a vector database is easy to refresh and keep current. Using RAG on this will ensure that the LLM responds to the latest information. With fine tuning, you need to collect all the new information, perform a new training run on the model, and then swap out the old model for the new model. It’s a more expensive and slower process.”

Avoiding bad decisions based on GPT hallucinations requires keeping the GPT’s internal knowledge complete, relevant, and accurate. A vector database, vector embeddings, and retrieval-augmented generation could be the best solution, leading to better business and risk-management decision making.

Security-specific use cases

Vector embeddings can be used for facial recognition without requiring the storage of an actual image, thus avoiding privacy concerns.

Consider access to a secure room. Passwords can be forgotten, physical tokens can be lost, and some people don’t have adequate fingerprints – but everyone carries their face all the time.

The process is conceptually simple. Everybody authorized to enter the room has a facial photograph taken. A vector embedding is derived from the image and stored in the vector database. Metadata attached to the vector can relate it to personnel information held in a traditional HR database. The original photograph can then be destroyed.

When a person wishes to enter the secure room, a camera at the door will take a facial photograph, generate a new vector, and compare it to the vector embeddings already held in the vector database. Again, the facial photograph can be discarded. If the new vector is sufficiently close to one already held in the vector database, that person is considered identified and authorized – and is granted access.
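As a hypothetical sketch of that enrollment-and-match step: embed_face below stands in for whichever facial-embedding model is used, only vectors (never photographs) are retained, and the 0.8 similarity threshold is purely illustrative – a real deployment would tune it to balance false accepts against false rejects.

```python
# Hypothetical sketch of face-vector matching; only vectors are ever stored.
import numpy as np

def embed_face(photo_bytes: bytes) -> np.ndarray:
    raise NotImplementedError("Replace with a facial-embedding model")

enrolled: dict[str, np.ndarray] = {}  # employee ID -> face vector; HR data lives elsewhere

def enroll(employee_id: str, photo_bytes: bytes) -> None:
    enrolled[employee_id] = embed_face(photo_bytes)  # the photo can be discarded afterwards

def identify(photo_bytes: bytes, threshold: float = 0.8):
    """Return the matching employee ID if the closest stored vector is near enough."""
    probe = embed_face(photo_bytes)
    best_id, best_score = None, -1.0
    for employee_id, vec in enrolled.items():
        score = float(np.dot(probe, vec) / (np.linalg.norm(probe) * np.linalg.norm(vec)))
        if score > best_score:
            best_id, best_score = employee_id, score
    return best_id if best_score >= threshold else None  # None means access denied
```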

Reducing alert fatigue

Alert fatigue is a real and common problem. Manually scanning network logs for anomalous incidents indicative of a problem isn’t realistic, simply because of the size of the logs. Automated anomaly detection systems tend to generate large numbers of potentially malicious incidents, each of which needs to be manually triaged by a skilled employee.

With a vector database approach, you can concentrate on the source of truth – the log itself. You create a vector embedding for each log entry as it happens, and store the embedding in the vector database. Similar events will tend to cluster – benign events will become grouped together in some clusters, while risky events will show in other clusters.

This separation into benign and risky events becomes more apparent over time. Embeddings for new log events are continuously generated, and each is compared to those already stored in the vector database.

“Over time, the system learns this cluster here is benign – these events are perfectly safe,” explained Dekel. “But this other cluster comprises events that are highly risky.” For each new event, you check whether it is aligned to a benign or risky cluster.

If a new event is risky, you generate an alert. If it is benign, you just ignore it. It’s a way of surfacing genuine alerts without forcing staff to waste time analyzing security non-events. Since it is all done at machine speed, it can be accomplished in almost real time, and allows expensive skilled staff to concentrate on genuine potentially malicious events.
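A sketch of that triage loop might look like the code below, assuming scikit-learn’s KMeans for clustering and a pre-computed array of log-entry embeddings produced as in the earlier snippets. The two-cluster split, the file name, and the benign/risky labels are illustrative assumptions, not a prescribed design.

```python
# Illustrative triage sketch: cluster historical log-entry embeddings once,
# label the clusters, then route each new event at machine speed.
import numpy as np
from sklearn.cluster import KMeans

historical_embeddings = np.load("log_embeddings.npy")  # hypothetical pre-computed embeddings
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(historical_embeddings)

# An analyst reviews a sample from each cluster once and assigns a label.
cluster_labels = {0: "benign", 1: "risky"}  # illustrative labeling

def triage(new_event_embedding: np.ndarray) -> str:
    """Assign a new log event to its nearest cluster and return that cluster's label."""
    cluster = int(kmeans.predict(new_event_embedding.reshape(1, -1))[0])
    return cluster_labels[cluster]

# Only events that land in a "risky" cluster become alerts for human review.
```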

Vector databases

Vector databases are not new. They’ve been around for almost 20 years, but have primarily been used by data scientists to aid the development of machine learning and search functions. Elan Dekel was formerly product lead for core data serving at Google – work he describes as ‘product lead for the indexing and serving systems powering Google search, YouTube search, Maps, Assistant, Lens, Photos, and many more’.

The use of vector databases within general business organizations, however, is new, and such organizations employ few data scientists. But much of the hard work has already been done by the vector database provider. The ability to search for and surface otherwise hidden proprietary information is a major advantage (as with the RAG, visual authentication, and malicious event triaging discussed here), and other, more specific use cases will likely emerge as companies become more aware of the potential of vector embeddings and their characteristics.

Related: The Good, the Bad and the Ugly of Generative AI

Related: ChatGPT, the AI Revolution, and Security, Privacy and Ethical Implications

Related: XDR and the Age-old Problem of Alert Fatigue

Related: House Panels Probe Gov’t Use of Facial Recognition Software

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security; and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
