Researchers at Tenable have identified vulnerabilities in Microsoft’s Azure Health Bot Service that threat actors could have exploited to gain access to sensitive data.
The Azure Health Bot Service is a cloud platform that healthcare organizations can use to create and deploy AI-powered virtual health assistants.
Depending on what they’re used for, some of these chatbots may need to be given access to sensitive patient information to complete their tasks.
Tenable researchers discovered a data connection feature that allows bots to interact with external data sources. The feature enables the service’s backend to make third-party API requests.
The researchers found a way to bypass the protections that were in place: a server-side request forgery (SSRF) vulnerability that could have allowed an attacker to escalate privileges and access cross-tenant resources.
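Tenable has not published the exact bypass, but the general failure mode behind SSRF filter bypasses is well known: a backend fetches an attacker-influenced URL, and the protection checks the URL as a string rather than the address it actually resolves to. The sketch below is purely illustrative and is not Tenable's exploit; the filter functions and the blocklist are hypothetical, and it shows how a naive string-based check can be defeated by an alternate encoding of a link-local metadata address, while an address-aware check catches it.

```python
from urllib.parse import urlparse
import ipaddress

# Hypothetical denylist of internal targets a backend should never fetch.
BLOCKED_HOSTS = {"169.254.169.254", "localhost"}

def naive_filter(url: str) -> bool:
    """Flawed check: compares the hostname string against a denylist."""
    host = urlparse(url).hostname or ""
    return host not in BLOCKED_HOSTS

def hardened_filter(url: str) -> bool:
    """Address-aware check: parses the host as an IP and rejects
    link-local, loopback, and private ranges. (Real code would also
    resolve DNS names and re-validate after redirects.)"""
    host = urlparse(url).hostname or ""
    try:
        # int(host) handles decimal-encoded IPv4 like "2852039166".
        ip = ipaddress.ip_address(int(host)) if host.isdigit() else ipaddress.ip_address(host)
    except ValueError:
        return True  # a DNS name; out of scope for this sketch
    return not (ip.is_link_local or ip.is_loopback or ip.is_private)

# 2852039166 is the decimal encoding of 169.254.169.254 (the common
# cloud instance-metadata address): the string check misses it.
print(naive_filter("http://2852039166/meta"))     # True  -- bypass succeeds
print(hardened_filter("http://2852039166/meta"))  # False -- request rejected
```

The broader point matches Tenable's finding: SSRF defenses must validate the effective destination of every request the backend makes, including after redirects, not just the URL the caller supplied.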
Tenable’s analysis did not determine exactly what type of data was exposed, but the company noted that a threat actor may have been able to gain management capabilities and move laterally within Azure customer environments, potentially gaining access to sensitive patient data.
“The vulnerabilities […] involve flaws in the underlying architecture of the AI chatbot service rather than the AI models themselves,” Tenable explained.
Microsoft was immediately informed about the vulnerabilities and released server-side patches in July.
Tenable has not found any evidence to suggest that the flaws have been exploited by malicious actors.
Related: AWS Patches Vulnerabilities Potentially Allowing Account Takeovers
Related: Docker Patches Critical AuthZ Plugin Bypass Vulnerability Dating Back to 2018
Related: Citrix Patches Critical NetScaler Console Vulnerability