ReversingLabs (RL), a leader in file and software security, has uncovered “nullifAI,” a novel malware attack targeting Hugging Face, the popular AI community platform. The attack used deliberately corrupted Pickle files to evade detection and affected two ML models hosted on the platform. RL’s latest research post, “Malicious ML Models Discovered on Hugging Face Platform,” details the discovery, alongside a white paper, “AI is the Supply Chain,” that examines the broader landscape of AI-driven cybersecurity threats.
Key Findings on the ‘nullifAI’ Attack
1. Exploiting AI Supply Chains for Malware Distribution
- Attackers used corrupted Pickle files to bypass Hugging Face’s security scanning.
- Because the Pickle format can embed executable instructions, loading the compromised models triggered the malicious code even though the files themselves were broken (see the sketch after this list).
- Hugging Face has since removed the affected models.
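The core weakness here is that Python’s Pickle format can instruct the deserializer to call arbitrary functions, and the unpickler executes those instructions one by one as it streams through the file. A payload placed before a point of corruption can therefore run even though loading ultimately fails with an error. The minimal Python sketch below illustrates this with a harmless echo command; the Payload class and the truncation step are illustrative stand-ins, not the actual nullifAI code.

```python
import pickle

class Payload:
    """Illustrative attacker object: pickle stores the instruction
    returned by __reduce__, and the unpickler runs it automatically
    during deserialization."""
    def __reduce__(self):
        import os
        # Harmless stand-in for an attacker's command.
        return (os.system, ("echo payload executed",))

# Serialize the payload, then "corrupt" the stream by truncating it
# so the terminating STOP opcode is missing.
data = pickle.dumps(Payload())
broken = data[:-1]

try:
    pickle.loads(broken)
except Exception as exc:
    # Deserialization fails on the truncated stream, but only AFTER
    # the os.system call has already run, because pickle executes
    # opcodes one by one as it reads them.
    print(f"load failed ({exc!r}), yet the command above already ran")
```

This is also why a scanner that merely rejects such a file as invalid Pickle can miss the fact that part of it is still executable.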
2. Growing Security Risks in AI-Powered Software Development
- Threat actors are leveraging AI communities and model-sharing platforms to distribute hard-to-detect malware.
- Reliance on AI-generated code raises the risk that outdated or compromised code ends up in production software.
- 75% of enterprise software engineers will use AI code assistants by 2028 (Gartner).
3. Strengthening AI Cybersecurity with Modern Solutions
- AI-driven software supply chains demand stronger security controls.
- RL’s Spectra Assure offers deep binary analysis for AI models, detecting malware, tampering, and vulnerabilities.
- Organizations must implement robust security measures, including safer defaults for loading third-party models (sketched below), to protect against AI supply chain threats.
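As one concrete example of such a measure, teams can avoid code-executing serialization formats for model weights entirely. The sketch below shows two common options in Python, assuming PyTorch and the safetensors package are installed; the file names are placeholders.

```python
import torch
from safetensors.torch import load_file

# Preferred: safetensors stores raw tensors only, so loading a file
# never executes embedded code.
tensors = load_file("model.safetensors")  # placeholder file name

# If a Pickle-based checkpoint is unavoidable, restrict deserialization
# to tensors and primitive types; torch.load then rejects files that
# try to reference arbitrary callables.
state_dict = torch.load("pytorch_model.bin", weights_only=True)
```

Restricting what a loader will deserialize shrinks the attack surface regardless of where a model file came from.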
The discovery of nullifAI underscores the urgent need for improved AI security strategies. As AI accelerates software development, the risk of AI-driven cyber threats grows. ReversingLabs remains committed to securing AI platforms with advanced threat intelligence and security solutions like Spectra Assure, ensuring software integrity before deployment.