Pillar Security, a pioneering company in generative AI (GenAI) security solutions, has unveiled the industry’s first “State of Attacks on GenAI” research report. This report is based on an in-depth analysis of over 2,000 AI applications and highlights real-world attack data rather than relying on theoretical risk surveys.
Key Findings from the Report
The report presents alarming insights into the security landscape surrounding GenAI applications. Some of the most notable findings include:
- High Success Rate of Data Theft: 90% of successful attacks led to the leakage of sensitive data.
- Alarming Bypass Rate: 20% of jailbreak attack attempts managed to bypass the guardrails of GenAI applications.
- Rapid Attack Execution: Attackers required an average of just 42 seconds to execute an attack.
- Minimal Interaction Needed: On average, attackers needed only five interactions with GenAI applications to complete a successful attack.
- Widespread Vulnerabilities: Attacks exploited vulnerabilities at every stage of interaction with GenAI systems, highlighting the urgent need for comprehensive security measures.
- Increase in Frequency and Complexity: The analyzed attacks indicate a clear increase in both the frequency and complexity of prompt injection attacks, as users employ more sophisticated techniques to bypass safeguards.
Insights on Attack Techniques and Motivations
Pillar Security’s research also sheds light on specific attack techniques and the motivations behind them:
- Top Jailbreak Techniques:
- Ignore Previous Instructions: Attackers direct AI systems to disregard initial programming.
- Base64 Encoding: Malicious prompts are encoded to evade security filters.
- Primary Attacker Motivations:
- Stealing sensitive data and proprietary business information.
- Circumventing content filters to generate disinformation, hate speech, phishing messages, and malicious code.
- Curated Attack Analysis: The report provides a detailed analysis of the top attacks observed in real-world production AI applications.
Looking Ahead to 2025
Pillar Security projects significant shifts in the AI landscape by 2025, including the evolution from chatbots to copilots and autonomous agents. This transition, alongside the rise of locally deployed AI models, democratizes access but also expands attack surfaces, posing new security challenges for organizations.
Expert Insights
Dor Sarig, CEO and co-founder of Pillar Security, emphasized the need for actionable insights: “The widespread adoption of GenAI in organizations has opened a new frontier in cybersecurity. Our report goes beyond theoretical risks and highlights the actual attacks occurring in the wild.”
Jason Harrison, CRO of Pillar Security, added, “Static controls are no longer sufficient in this dynamic AI-enabled world. Organizations must invest in AI security solutions capable of anticipating and responding to emerging threats in real-time.”
Pillar Security’s “State of Attacks on GenAI” report is a wake-up call for organizations leveraging generative AI technologies. By providing critical insights into real-world attacks and their implications, the report underscores the necessity for enhanced security measures and a proactive approach to safeguarding GenAI applications against evolving threats.