Code Execution Flaws Haunt NVIDIA ChatRTX for Windows

Artificial intelligence computing giant NVIDIA patches flaws in ChatRTX for Windows and warns of code execution and data tampering risks.

Artificial intelligence computing giant NVIDIA on Wednesday pushed out urgent patches for a pair of software flaws in its ChatRTX for Windows app alongside a warning that users are at risk of code execution and data tampering attacks.

According to an advisory from NVIDIA, the flaws carry high- and medium-severity ratings and could be exploited to run harmful code via privilege escalation and cross-site scripting attacks.

The security defects, flagged as CVE-2024-0082 and CVE-2024-0083, affect ChatRTX for Windows 0.2 and prior versions.

The raw details:

  • CVE-2024-0082 — NVIDIA ChatRTX for Windows contains a vulnerability in the UI, where an attacker can cause improper privilege management by sending open file requests to the application. A successful exploit of this vulnerability might lead to local escalation of privileges, information disclosure, and data tampering. CVSS severity score: 8.2/10.
  • CVE-2024-0083 — NVIDIA ChatRTX for Windows contains a vulnerability in the UI, where an attacker can trigger cross-site scripting over the network by running malicious scripts in users' browsers. A successful exploit of this vulnerability might lead to code execution, denial of service, and information disclosure. CVSS severity score: 6.5/10.

The NVIDIA ChatRTX app is used by developers and AI enthusiasts to run large language models locally on their PCs and connect them to their own files using a popular technique known as retrieval-augmented generation (RAG).
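For readers unfamiliar with the technique, the core RAG loop is simple: retrieve the documents most similar to a user's question, then paste them into the prompt as context before the model answers. The sketch below is a deliberately minimal, hypothetical illustration — it uses bag-of-words cosine similarity in place of the dense vector embeddings and GPU-accelerated models a real system like ChatRTX would use, and the corpus and function names are invented for the example.

```python
from collections import Counter
import math

# Hypothetical toy corpus standing in for a user's local files.
DOCS = [
    "ChatRTX lets you query your own notes with a local LLM.",
    "RAG retrieves relevant documents and adds them to the prompt.",
    "NVIDIA GPUs accelerate local inference on Windows PCs.",
]

def embed(text: str) -> Counter:
    """Crude bag-of-words 'embedding' (real RAG uses dense vectors)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the user's question with retrieved context for the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

The augmented prompt from `build_prompt` is what would be handed to the local model — which is also why vulnerabilities in the app's UI matter: the retrieved user data flows directly into the application's rendering and prompt pipeline.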

Related: The Chaos (and Cost) of the Lapsus$ Hacking Carnage

Related: Dymium Snags $7M to Build Data Security Platform with Secure AI Chat 

Related: Microsoft Catches APTs Using ChatGPT for Vuln Research

Related: NVIDIA Patches Code Execution Vulnerabilities in Graphics Driver



Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.
