Artificial Intelligence

Harmonic Lands $7M Funding to Secure Generative AI Deployments

British startup is working on software to mitigate the ‘wild west’ of unregulated AI apps harvesting company data at scale.

A British startup called Harmonic Security has attracted $7 million in seed-stage investment to build technology to help secure generative AI deployments in the enterprise.

Harmonic, based in London and San Francisco, said it is working on software to mitigate the ‘wild west’ of unregulated AI apps harvesting company data at scale.

The company said the early-stage financing was led by Ten Eleven Ventures, an investment firm focused on cybersecurity startups. Storm Ventures and a handful of security leaders also took equity positions.

Harmonic is entering an increasingly crowded field of AI-focused cybersecurity startups looking to profit as businesses embrace AI and large language model (LLM) technology.

A wave of new startups, including CalypsoAI ($23 million raised) and HiddenLayer ($50 million), has landed major funding rounds to help businesses secure generative AI deployments.

Hotshot company OpenAI is already using security as a sales pitch for ChatGPT Enterprise, while Microsoft and others are putting ChatGPT to work on threat intelligence and other security problems.

Harmonic, the brainchild of Alastair Paterson (who previously led Digital Shadows to a $160 million acquisition by ReliaQuest/KKR), is promising technology to give businesses a complete picture of AI adoption across the enterprise, offering risk assessments for all AI apps and identifying potential compliance, security, or privacy issues.

The company cited a Gartner study showing that 55% of global businesses are piloting or using generative AI, and warned that a majority of these apps are unregulated, with unclear policies on how data will be used, where it will be transmitted, or how it will be kept secure.

“Harmonic provides a risk assessment of all AI apps so that high risk AI services that could lead to compliance, security or privacy incidents are identified. This approach means that organizations can control access to AI applications as required, including selective blocking of sensitive content from being uploaded, without needing rules or exact matches,” the company explained.
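
The quote above sketches the general approach: score each AI app for risk, then gate what data is allowed to flow to the riskier ones. Purely as an illustration of that idea, here is a minimal, hypothetical policy check in Python; the class, function, threshold, and patterns below are invented for this example and are not drawn from Harmonic's product, which the company says works without needing rules or exact matches.

```python
# Hypothetical sketch only: names, patterns, and thresholds are illustrative
# assumptions, not Harmonic's actual product or API.
import re
from dataclasses import dataclass


@dataclass
class AIAppProfile:
    """Assumed risk profile for a third-party generative AI app."""
    name: str
    risk_score: float          # assumed scale: 0.0 (low risk) to 1.0 (high risk)
    data_policy_known: bool    # does the app state how data is used and stored?


# Toy sensitive-content checks for the example; a production system would
# likely rely on classifiers rather than exact-match rules.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]


def allow_upload(app: AIAppProfile, content: str, risk_threshold: float = 0.7) -> bool:
    """Permit the upload unless a risky app would receive sensitive content."""
    risky = app.risk_score >= risk_threshold or not app.data_policy_known
    if risky and any(p.search(content) for p in SENSITIVE_PATTERNS):
        return False  # selectively block sensitive content going to high-risk apps
    return True


if __name__ == "__main__":
    shadow_app = AIAppProfile("unvetted-chatbot", risk_score=0.9, data_policy_known=False)
    print(allow_upload(shadow_app, "Customer SSN is 123-45-6789"))      # False: blocked
    print(allow_upload(shadow_app, "Summarize this public blog post"))  # True: allowed
```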

Related: Investors Pivot to Safeguarding AI Training Models

Related: CalypsoAI Banks $23 Million for AI Security

Related: HiddenLayer Raises $50M Round for AI Security Tech

Related: OpenAI Using Security to Sell ChatGPT Enterprise

Related: Google Brings AI Magic to Fuzz Testing With Solid Results


Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.
