Google refused to share any details on how its Big Sleep AI foiled efforts to exploit a SQLite vulnerability in the wild.
Straiker has emerged from stealth mode with a solution designed to help enterprises secure AI agents and applications.
OpenAI has raised its maximum bug bounty payout to $100,000 (up from $20,000) for high-impact flaws in its infrastructure and products.
SplxAI has raised $7 million in a seed funding round led by LAUNCHub Ventures to secure agentic AI systems.
Microsoft has expanded the capabilities of Security Copilot with AI agents tackling data security, phishing, and identity management.
Cato Networks has discovered a new LLM jailbreak technique that relies on creating a fictional world to bypass a model’s security controls.
A year-old vulnerability in a third-party ChatGPT tool is being exploited against financial entities and US government organizations.
Vulnerabilities in Nvidia Riva could allow hackers to abuse speech and translation AI services that are typically expensive.
Measure the varying levels of risk inherent in gen-AI foundation models and use those measurements to fine-tune the operation of in-house AI deployments.
Two Microsoft researchers have devised a new jailbreak method that bypasses the safety mechanisms of most AI systems.
Researchers have analyzed the ability of the Chinese gen-AI DeepSeek to create malware such as ransomware and keyloggers.
How hyper-agenda-driven threat actors, cybercriminals, and nation-states integrate digital, narrative, and physical attacks to target organizations through their executives.
Exploiting trust in the DeepSeek brand, scammers attempt to harvest personal information or steal user credentials.
Google Cloud’s AI Protection helps discover AI inventory, secure AI assets, and manage threats with detect, investigate, and respond capabilities.
AIceberg has launched a solution that helps governments and enterprises with the safe, secure and compliant adoption of AI.
Knostic provides a “need-to-know” filter on the answers generated by enterprise large language model (LLM) tools.
AI is all about data, and keeping AI’s data confidential both within and between devices is problematic. Intel offers a solution.
Unauthorized AI usage is a ticking time bomb. A tool that wasn’t considered a risk yesterday may introduce new AI-powered features overnight.
In a lawsuit targeting cybercriminals who abuse AI services, Microsoft has named individuals from Iran, the UK, China and Vietnam.
Dreadnode is building “offensive machine learning” tools to safely simulate how AI models might be exploited in the wild.
OpenAI has banned ChatGPT accounts used by Chinese threat actors, including ones leveraged for the development of spying tools.