A thorough investigation by Protect AI’s Huntr bug bounty platform uncovers critical security flaws in major open-source AI tools, prompting immediate attention from developers.
More than 36 vulnerabilities have been flagged in various open-source AI and machine learning tools, with potential impacts ranging from remote code execution to unauthorised data access. The flaws were identified through an investigation carried out by Protect AI’s Huntr bug bounty platform, shedding light on significant weaknesses in widely used AI tools and models.
Among the most critical are flaws in Lunary, a production toolkit for large language models (LLMs). Two of these vulnerabilities, CVE-2024-7474 and CVE-2024-7475, each carry a CVSS score of 9.1, indicating critical risk. The first, an Insecure Direct Object Reference (IDOR) flaw, could allow an authenticated user to access or delete other users’ data, posing a severe risk of unauthorised data access and potential data loss. The second involves improper access control, potentially allowing an attacker to alter the SAML configuration and gain unauthorised access to sensitive information.
Another notable Lunary vulnerability, CVE-2024-7473 (CVSS score: 7.5), is also an IDOR: by intercepting a request and manipulating a user-controlled parameter, an attacker can make unauthorised updates to other users’ prompts.
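The core of an IDOR flaw is trusting a client-supplied object identifier without verifying that the requester owns the object. The sketch below is a minimal illustration of the missing check; all names (`PROMPTS`, `update_prompt`, the sample users) are hypothetical and not taken from Lunary’s actual code.

```python
# Hypothetical sketch of the ownership check whose absence creates an
# IDOR flaw. Names and data are illustrative, not Lunary's real code.

PROMPTS = {
    "p1": {"owner": "alice", "text": "Summarise this report"},
    "p2": {"owner": "bob", "text": "Draft a reply"},
}

def update_prompt(prompt_id: str, new_text: str, current_user: str) -> bool:
    """Update a prompt only if the requester owns it."""
    prompt = PROMPTS.get(prompt_id)
    if prompt is None:
        return False
    # Vulnerable code would skip this ownership check and trust the
    # client-supplied prompt_id, letting any authenticated user modify
    # other users' prompts -- the essence of the IDOR class of bug.
    if prompt["owner"] != current_user:
        return False
    prompt["text"] = new_text
    return True
```

With the check in place, a request from `alice` targeting `bob`’s prompt `p2` is refused even though `alice` is authenticated; without it, changing the `prompt_id` parameter in an intercepted request is all an attacker needs.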
ChuanhuChatGPT, another widely used AI tool, has a critical path traversal flaw (CVE-2024-5982, CVSS score: 9.1) in its user upload feature, which could lead to arbitrary code execution and exposure of sensitive data.
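Path traversal in an upload handler arises when a user-supplied filename such as `../../etc/cron.d/job` is joined to the upload directory without validation. A common defence, sketched below under the assumption of a hypothetical upload root (this is not ChuanhuChatGPT’s actual code), is to resolve the final path and refuse anything that escapes the intended directory:

```python
from pathlib import Path

# Hypothetical upload directory for illustration only.
UPLOAD_ROOT = Path("/srv/app/uploads")

def safe_upload_path(filename: str) -> Path:
    """Resolve an uploaded filename, rejecting path traversal attempts."""
    candidate = (UPLOAD_ROOT / filename).resolve()
    # A name like "../../etc/passwd" resolves outside the upload root;
    # refusing it blocks the traversal class of flaw.
    if not candidate.is_relative_to(UPLOAD_ROOT.resolve()):
        raise ValueError(f"rejected traversal attempt: {filename!r}")
    return candidate
```

Resolving before comparing matters: a naive string prefix check can be fooled by `..` segments or symlinks that only a resolved path exposes.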
LocalAI, an open-source project popular for enabling self-hosted LLMs, is affected by two flaws. The first (CVE-2024-6983, CVSS score: 8.8) concerns arbitrary code execution via a malicious configuration file. The second (CVE-2024-7010, CVSS score: 7.5) could enable attackers to discern valid API keys through response-time analysis, a method known as a timing attack.
Another significant security issue involves the Deep Java Library (DJL), where a remote code execution flaw (CVE-2024-8396, CVSS score: 7.8) is attributed to an arbitrary file overwrite bug in its untar function.
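This class of bug, often called a tar-slip, occurs when archive entries containing `..` segments are extracted without validation and land outside the destination directory, overwriting arbitrary files. A defensive extraction routine, sketched here in Python for illustration (DJL itself is Java), validates every entry before extracting:

```python
import tarfile
from pathlib import Path

def safe_untar(archive: str, dest: str) -> None:
    """Extract a tar archive, refusing entries that escape dest."""
    dest_path = Path(dest).resolve()
    with tarfile.open(archive) as tar:
        for member in tar.getmembers():
            target = (dest_path / member.name).resolve()
            # An entry named "../../home/user/.bashrc" would overwrite a
            # file outside dest -- the arbitrary-overwrite class behind
            # the DJL untar flaw.
            if not target.is_relative_to(dest_path):
                raise ValueError(f"blocked unsafe entry: {member.name}")
        tar.extractall(dest_path)
```

Recent Python versions (3.12+) also offer `tarfile`’s built-in `filter="data"` extraction filter, which rejects such entries automatically.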
These disclosures have prompted immediate action from companies such as NVIDIA, which has issued patches to its NeMo generative AI framework to address a path traversal flaw (CVE-2024-0129, CVSS score: 6.3) that could lead to code execution and data tampering.
Complementing these security updates, Protect AI has introduced Vulnhuntr, an open-source Python static code analyser. Using large language models, Vulnhuntr scans for zero-day vulnerabilities in Python codebases, promising a more robust and automated approach to identifying potential security weaknesses.
Furthermore, a new jailbreak method proposed by Mozilla’s 0Day Investigative Network (0Din) revealed how malicious input encoded in hexadecimal and emojis could circumvent safeguards in OpenAI’s ChatGPT, allowing the crafting of exploits for known security vulnerabilities. The tactic abuses the model’s instruction-following behaviour, prompting it to decode the obfuscated input and inadvertently generate harmful output it would otherwise refuse, and demonstrates a gap in how its guardrails handle encoded instructions.
These developments highlight the critical need for continuous vigilance and adaptation in AI safety measures, particularly as the integration of AI and machine learning systems becomes increasingly prevalent across various sectors.
Source: Noah Wire Services