Critical Deserialization Flaw in Meta's Llama Framework Allows Remote Code Execution
January 24, 2025

In recent cybersecurity news, a high-severity vulnerability has been disclosed in Meta's Llama large language model (LLM) framework. This flaw, if exploited, could enable attackers to execute arbitrary code on the Llama Stack inference server. Here, we delve into the technical details and implications of this flaw, alongside other notable AI-related vulnerabilities.
Understanding the Vulnerability
CVE-2024-50050: The Deserialization Flaw
Tracked as CVE-2024-50050, this vulnerability has been assigned a CVSS score of 6.3 out of 10. Supply chain security firm Snyk, however, rates it as critical with a severity score of 9.3. The root cause lies in the deserialization of untrusted data within Llama Stack, a component that defines API interfaces for AI application development.
Researcher Avi Lumelsky of Oligo Security explained that the flaw originates in the framework's Python Inference API implementation. Specifically, it uses Python's pickle library to automatically deserialize incoming objects, a serialization format that can trigger arbitrary code execution when fed untrusted or malicious data.
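To see why this matters, here is a minimal, self-contained sketch (not Llama Stack code) of how pickle turns deserialization into code execution: any class can abuse the __reduce__ hook to make unpickling run an attacker-chosen callable.

```python
import pickle


class MaliciousPayload:
    """pickle asks __reduce__ how to reconstruct the object; returning
    (os.system, ("...",)) makes deserialization run a shell command."""

    def __reduce__(self):
        import os
        # Harmless command for demonstration; a real attacker runs anything.
        return (os.system, ("echo pwned",))


# The attacker serializes the gadget...
malicious_bytes = pickle.dumps(MaliciousPayload())

# ...and the command executes the moment a victim deserializes the bytes.
pickle.loads(malicious_bytes)  # prints "pwned": code ran on load
```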
The Exploitation Path
In setups where the ZeroMQ socket is exposed over the network, attackers could craft malicious objects and send them to the socket. The recv_pyobj function, which relies on pickle, would then deserialize these objects, allowing the attacker to execute arbitrary code on the host system.
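The pattern is easy to sketch. The following is an illustrative reconstruction under stated assumptions (a REP socket bound to a network interface, a hypothetical host and port), not Meta's actual server code; pyzmq's recv_pyobj is a thin wrapper around pickle, so whoever can reach the socket controls what gets unpickled.

```python
import zmq

ctx = zmq.Context()

# --- Vulnerable receiver pattern (runs on the server) ---
server = ctx.socket(zmq.REP)
server.bind("tcp://0.0.0.0:5555")   # socket exposed on the network
request = server.recv_pyobj()       # ~ pickle.loads(server.recv()):
                                    # arbitrary code can run right here
server.send_pyobj({"status": "ok"})

# --- Attacker side (runs on any host that can reach port 5555) ---
client = ctx.socket(zmq.REQ)
client.connect("tcp://victim-host:5555")
client.send_pyobj(MaliciousPayload())  # the gadget class sketched earlier
```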
Meta’s Response
After responsible disclosure on September 24, 2024, Meta addressed the issue in version 0.0.41 of the framework, released on October 10. The fix replaced the risky pickle serialization format with JSON for socket communication. A similar deserialization issue in pyzmq, the Python binding for the ZeroMQ messaging library, has also been remediated.
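The safer pattern is simple to illustrate. A minimal sketch using pyzmq's built-in JSON helpers, mirroring the fix's intent rather than Meta's exact code:

```python
import zmq

ctx = zmq.Context()
server = ctx.socket(zmq.REP)
server.bind("tcp://127.0.0.1:5556")

# recv_json()/send_json() use json.loads()/json.dumps() under the hood.
# JSON expresses only plain data (dicts, lists, strings, numbers, booleans),
# so a client cannot smuggle executable objects into the receiving process.
request = server.recv_json()
server.send_json({"status": "ok", "echo": request})
```

The trade-off is that JSON cannot carry arbitrary Python objects, so message schemas must be defined explicitly, which is exactly what makes it safe.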
Broader Implications for AI Frameworks
Recurrent Deserialization Vulnerabilities
This is not the first deserialization flaw found in AI frameworks. In August 2024, Oligo Security highlighted a "shadow vulnerability" in TensorFlow's Keras framework: a bypass of the fix for CVE-2024-3660 (CVSS 9.8) that abused Python's marshal module to achieve arbitrary code execution.
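marshal is, if anything, worse than pickle for untrusted input, because it can round-trip compiled code objects. A minimal illustration of the primitive involved (not the Keras exploit itself):

```python
import marshal

# marshal can serialize a compiled code object -- something JSON never could.
code = compile("print('code executed from marshaled bytes')", "<payload>", "exec")
blob = marshal.dumps(code)

# Any component that calls marshal.loads() on attacker-controlled bytes and
# later executes the result hands the attacker arbitrary code execution.
exec(marshal.loads(blob))
```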
Risks in OpenAI’s ChatGPT Crawler
Another notable vulnerability was recently disclosed in OpenAI's ChatGPT crawler. An API endpoint accepted HTTP POST requests containing lists of URLs without deduplicating or capping them, so a single request listing thousands of duplicate URLs could direct a flood of crawler traffic at a target website from OpenAI's Azure-based infrastructure, effectively enabling distributed denial-of-service (DDoS) attacks. OpenAI has since patched this flaw.
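OpenAI has not published the fix's details, but the defensive pattern for this class of bug is straightforward: deduplicate and cap user-supplied URL lists before fanning out crawler requests. A hedged sketch with illustrative names and limits, not OpenAI's implementation:

```python
MAX_URLS_PER_REQUEST = 10  # assumed cap, chosen for illustration


def sanitize_url_list(urls: list[str]) -> list[str]:
    """Collapse duplicates and cap the fan-out a single request can trigger."""
    deduped = list(dict.fromkeys(urls))  # order-preserving deduplication
    return deduped[:MAX_URLS_PER_REQUEST]


# A request body containing thousands of copies of one target URL...
attacker_body = ["https://victim.example/"] * 5000
# ...now results in at most one fetch of that URL.
assert sanitize_url_list(attacker_body) == ["https://victim.example/"]
```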
The Role of AI in Amplifying Cyber Threats
AI models, including LLMs, are increasingly being leveraged to enhance cyberattacks. According to Deep Instinct researcher Mark Vaitzman, LLMs streamline various phases of the attack lifecycle, making cyber threats more efficient and scalable. From payload delivery to command-and-control operations, the integration of AI poses evolving challenges to security professionals.
Emerging Research in AI Security
Model Genealogy and ShadowGenes
Recent advancements in AI security have introduced techniques such as ShadowGenes for identifying model genealogy. By analyzing a model's computational graph for recurring subgraph patterns, researchers can determine its architecture, family, and lineage. This helps organizations keep an accurate inventory of the models running in their environment and manage their AI security posture accordingly.
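A faithful reimplementation is beyond a short example, but the core idea can be sketched naively: linearize a model's graph into operator types and count recurring n-grams as a crude architectural signature (illustrative code, assuming an ONNX model file):

```python
from collections import Counter

import onnx  # any graph IR exposing operator types would work similarly


def op_type_ngrams(model_path: str, n: int = 3) -> Counter:
    """Crude signature: count recurring n-grams of operator types.

    Real genealogy analysis matches structural subgraphs; counting n-grams
    over ONNX's topologically sorted node list is only a flat approximation.
    """
    model = onnx.load(model_path)
    ops = [node.op_type for node in model.graph.node]
    return Counter(tuple(ops[i:i + n]) for i in range(len(ops) - n + 1))


# Heavily repeated patterns such as ('MatMul', 'Add', 'Softmax') hint at
# attention blocks; their count and arrangement hint at a model family.
# print(op_type_ngrams("model.onnx").most_common(5))
```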
Unsafe Coding Practices
Reports from Truffle Security reveal that AI-powered coding assistants often propagate risky practices, such as hard-coding API keys and passwords. These habits can teach novice developers to treat insecure patterns as normal, further compounding security vulnerabilities.
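The risky habit and its fix are easy to contrast (a minimal sketch; the key and variable names are hypothetical):

```python
import os

# Pattern coding assistants frequently suggest: the secret ships with the
# code, then leaks via version control, logs, and even model training data.
API_KEY = "sk-live-abc123"  # hypothetical hard-coded secret -- never do this

# Safer pattern: pull the secret from the environment (or a secrets manager)
# so the source can be shared and committed without exposing credentials.
API_KEY = os.environ.get("MY_SERVICE_API_KEY")  # hypothetical variable name
if API_KEY is None:
    raise RuntimeError("Set MY_SERVICE_API_KEY instead of hard-coding the key")
```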
Takeaways for Organizations
- Patch Management: Ensure all AI frameworks and libraries are updated to their latest versions.
- Security Best Practices: Avoid unsafe serialization formats like pickle and adopt alternatives such as JSON.
- AI-Specific Monitoring: Implement monitoring tailored to AI workloads to detect exploitation attempts and anomalous behavior.
- Educate Developers: Train developers to recognize and avoid insecure coding practices.
As AI continues to play a pivotal role in technology and cybersecurity, addressing its vulnerabilities will be crucial to maintaining robust and secure systems. Organizations must stay vigilant, adapting to the evolving landscape of AI-driven threats.