JFrog’s Security Research team uncovers critical vulnerabilities in widely used machine learning platforms, highlighting the urgent need for enhanced security measures in the rapidly evolving field.
JFrog’s Security Research team has identified dozens of software vulnerabilities in popular open-source machine learning (ML) platforms, a discovery that highlights the security challenges facing the rapidly advancing fields of ML and artificial intelligence (AI). The report’s findings underline the sector’s relative immaturity in security preparedness, even as it becomes integral to a growing range of industries and applications.
JFrog’s investigation uncovered 22 unique vulnerabilities across 15 ML projects, including widely used frameworks and tools. The work forms part of a two-part series aimed at strengthening the open-source ecosystem by examining vulnerabilities in both server-side and client-side components of ML software.
Among the more unsettling findings are privilege escalation vulnerabilities in tools such as Weights & Biases (WANDB) and ZenML, which demonstrate how malicious actors could manipulate ML tooling in unforeseen ways and pose significant risks to the enterprises that rely on it.
Server-Side Vulnerabilities Highlight Concerns
One major area of concern highlighted in the report is server-side vulnerabilities, which offer attackers potential entry points into enterprise systems and, from there, control over essential ML assets such as model registries and data pipelines. A notable example is a directory traversal vulnerability in WANDB’s Weave server (CVE-2024-7340), addressed in version 0.50.8, which allowed attackers to read any file on the system and potentially escalate their privileges to an administrative role.
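The report does not reproduce Weave’s patched code, but directory traversal flaws of this class typically stem from a file-serving endpoint that joins a user-supplied path onto a base directory without validating the result. The sketch below is a minimal illustration of the bug class and a common fix; the paths and function names are hypothetical, not WANDB’s actual code.

```python
from pathlib import Path

BASE_DIR = Path("/srv/weave-files")  # hypothetical served directory

def read_file_vulnerable(user_path: str) -> bytes:
    # VULNERABLE: input like "../../../etc/passwd" escapes BASE_DIR
    # because the joined path is never validated.
    return (BASE_DIR / user_path).read_bytes()

def read_file_fixed(user_path: str) -> bytes:
    # Resolve ".." segments and symlinks, then confirm the result
    # still lives inside the served directory before reading it.
    target = (BASE_DIR / user_path).resolve()
    if not target.is_relative_to(BASE_DIR.resolve()):
        raise PermissionError("path escapes the served directory")
    return target.read_bytes()
```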
Similarly, a critical improper access control vulnerability was found in ZenML Cloud. This flaw allowed attackers to escalate their privileges by manipulating role permissions within ZenML’s tenant-based structure, granting them administrative access and the ability to extract sensitive information such as credentials from the platform’s Secret Store.
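ZenML Cloud’s server code is not public, so the following is only a schematic of the improper-access-control pattern described above: authority taken from the request payload rather than derived from the authenticated caller. All names and the role model here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    tenant: str
    role: str  # e.g. "viewer", "editor", or "admin"

def change_role_vulnerable(caller: User, target: User, new_role: str) -> None:
    # VULNERABLE: nothing checks who the caller is, so any
    # authenticated user can promote any account, including their
    # own, to "admin".
    target.role = new_role

def change_role_fixed(caller: User, target: User, new_role: str) -> None:
    # Derive authority from the authenticated caller, never from the
    # request payload: only an admin in the same tenant may set roles.
    if caller.role != "admin" or caller.tenant != target.tenant:
        raise PermissionError("caller may not modify roles in this tenant")
    if new_role not in {"viewer", "editor", "admin"}:
        raise ValueError(f"unknown role: {new_role}")
    target.role = new_role
```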
JFrog’s Senior Director of Security Research, Shachar Menashe, says these vulnerabilities underscore the need for organisations to exercise caution when adopting AI/ML tools. He stresses that applying security patches and regular updates is crucial, noting that fixes tend to become available quickly given how actively the field is being developed.
Database Exploits and Prompt Injection Perils
The vulnerabilities extend beyond server-side frameworks: ML database frameworks and natural language processing tools are also at risk. Deep Lake, an AI-optimised database, was found to harbour a command injection vulnerability (CVE-2024-6507) that allows attackers to execute operating-system commands by supplying crafted input through the API, posing a significant threat to data integrity in ML environments.
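Command injection of this kind generally means caller-controlled input reaching a shell. Below is a minimal sketch of the bug class and its standard remediation; `dataset-cli` and both function names are hypothetical stand-ins, not Deep Lake’s actual API.

```python
import subprocess

def fetch_dataset_vulnerable(name: str) -> None:
    # VULNERABLE: the name is spliced into a shell command line, so
    # input like "cats; rm -rf /" runs an attacker-chosen command.
    subprocess.run(f"dataset-cli download {name}", shell=True, check=True)

def fetch_dataset_fixed(name: str) -> None:
    # An argument list bypasses the shell entirely: the name reaches
    # the tool as a single argv entry and is never parsed as code.
    subprocess.run(["dataset-cli", "download", name], check=True)
```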
Prompt injection attacks pose a further risk to ML applications built on large language models (LLMs). Vanna.AI, a tool designed to translate natural-language queries into SQL, was found vulnerable to such an attack (CVE-2024-5565): by embedding malicious instructions in a prompt, attackers can steer the model’s generated output and ultimately achieve remote code execution, gaining access to backend systems, manipulating data, and causing significant disruption in data-dependent environments.
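The Vanna.AI finding reflects a broader hazard: executing LLM output as code. The sketch below uses a stand-in `llm_generate` function, not Vanna’s actual API, to show why model output must be treated as untrusted data whenever attacker-supplied text can reach the prompt.

```python
def llm_generate(prompt: str) -> str:
    # Stand-in for a real LLM call; an attacker who controls any part
    # of the prompt can also influence what the model returns.
    return "print('model output would run here')"

def ask_vulnerable(question: str) -> None:
    # VULNERABLE: model output is executed verbatim, so injected
    # instructions in the question become arbitrary Python.
    code = llm_generate(f"Write plotting code for: {question}")
    exec(code)  # attacker-influenced question -> attacker-controlled code

def ask_safer(question: str) -> str:
    # Safer: treat model output as data, returning generated SQL for
    # review and parameterised execution rather than exec()-ing it.
    return llm_generate(f"Write a read-only SQL query for: {question}")
```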
Navigating the Future of ML and Security
With these vulnerabilities now exposed, it is evident that ML software faces a wide array of security threats. Attackers can target both backend infrastructure, such as databases and model registries, and front-end ML tools. Nor is the problem confined to lesser-known projects: widely adopted platforms like Weights & Biases and ZenML are also susceptible.
To mitigate these risks, JFrog recommends that organisations enforce stringent access controls and prioritise regular security patching of their ML tools. It also advocates a proactive approach to vulnerability management, including regular threat scans and the secure configuration of any third-party ML libraries or tools an enterprise uses.
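One lightweight control that follows from this advice is failing fast when a deployed tool predates its first patched release. Below is a minimal sketch, assuming the patched Weave release cited above corresponds to the `weave` Python package and that the third-party `packaging` library is installed; the version table is illustrative, not exhaustive.

```python
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

# Minimum patched releases named in the research; extend this table
# as advisories for other ML tools are published.
MIN_SAFE = {"weave": Version("0.50.8")}

def check_patch_levels() -> list[str]:
    findings = []
    for package, minimum in MIN_SAFE.items():
        try:
            installed = Version(version(package))
        except PackageNotFoundError:
            continue  # package not installed, nothing to flag
        if installed < minimum:
            findings.append(f"{package} {installed} predates patched {minimum}")
    return findings

if __name__ == "__main__":
    for finding in check_patch_levels():
        print("WARNING:", finding)
```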
As organisations continue to explore the potential of machine learning, securing the underlying infrastructure becomes ever more crucial to prevent exploitation by malicious entities. JFrog’s report serves as a timely exposé of the need for a security-first approach to ML operations, bridging the gap between innovative capability and security imperatives.
Source: Noah Wire Services
More on this & sources
- https://jfrog.com/blog/data-scientists-targeted-by-malicious-hugging-face-ml-models-with-silent-backdoor/ – Discusses vulnerabilities in machine learning models on Hugging Face, highlighting the potential for malicious code execution and the need for robust security measures.
- https://securitybrief.com.au/story/jfrog-exposes-vulnerabilities-in-machine-learning-platforms – Details JFrog’s research on vulnerabilities in machine learning platforms, including remote code execution and other MLOps security risks, and the call for heightened awareness from JFrog’s Senior Director of Security Research, Shachar Menashe.
- https://jfrog.com/help/r/jfrog-security-documentation/scanning-malicious-ai-models – Explains JFrog Xray’s scanning of malicious AI models, including the model types scanned and the scan frequency, and the role of regular patching and proactive vulnerability management in mitigating ML risks.
- https://thehackernews.com/2024/08/researchers-identify-over-20-supply.html – Reports on more than 20 vulnerabilities in the machine learning software supply chain, including flaws in libraries such as JupyterLab and MLflow that could lead to arbitrary or client-side code execution and other security breaches.
- https://siliconangle.com/2024/11/04/jfrog-report-highlights-critical-security-flaws-machine-learning-platforms/ – Highlights the critical flaws detailed above, including the Weights & Biases Weave directory traversal (CVE-2024-7340), the ZenML Cloud improper access control flaw, the Deep Lake command injection (CVE-2024-6507), and the Vanna.AI prompt injection (CVE-2024-5565), and reiterates the necessity of a security-first approach in ML operational practices.