New features let companies discover unsafe AI models in the software development pipeline and remediate them quickly, helping ensure that deployed code is compliant and secure
Legit Security, provider of a leading application security posture management platform, has announced new capabilities to help customers discover and mitigate risks posed by unsafe AI models within their software development environments. The move aims to strengthen AI supply chain security throughout the software development lifecycle (SDLC).
- Challenges in AI Supply Chain Security:
  - Risks associated with third-party AI models in software development.
  - Potential threats like “AI-Jacking,” highlighted by Legit’s research team.
- Expanded Capabilities:
  - Identification of risks in AI models used across the SDLC.
  - Actionable remediation steps to address security vulnerabilities.
- Empowering Security and Development Teams:
  - Flagging unsafe models, such as those with insecure storage formats or a low reputation.
  - Coverage of market-leading AI model hubs, starting with HuggingFace.
- Complementary Features:
  - Discovering AI-generated code and enforcing policies for code review.
  - Guardrails to prevent vulnerable code from reaching production.
- Insights from Legit’s CTO:
  - Importance of a responsible AI framework with continuous monitoring.
  - Safeguarding development practices end to end against AI-related risks.
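To make the model-flagging idea above concrete, here is a minimal sketch of what such a check could look like. This is not Legit Security's implementation: the regex, the `flag_model` function, and the thresholds are all illustrative assumptions. It extracts HuggingFace model references from source code and flags models that use pickle-based weight formats (which can execute arbitrary code when loaded) or have a low reputation signal.

```python
import re

# Hypothetical pattern: find model IDs passed to from_pretrained() calls.
HF_REF = re.compile(r"""from_pretrained\(\s*["']([\w.-]+/[\w.-]+)["']""")

# Serialization formats that can run arbitrary code on load
# (e.g. Python pickle inside PyTorch .bin checkpoints).
UNSAFE_FORMATS = (".bin", ".pkl", ".pt")

def find_model_refs(source: str) -> list[str]:
    """Extract HuggingFace-style model IDs referenced in source code."""
    return HF_REF.findall(source)

def flag_model(model_id: str, files: list[str], downloads: int,
               min_downloads: int = 1000) -> list[str]:
    """Return the reasons (if any) a referenced model should be reviewed.

    `files` and `downloads` stand in for metadata an ASPM tool would
    fetch from the model hub; the threshold is an illustrative default.
    """
    reasons = []
    if any(f.endswith(UNSAFE_FORMATS) for f in files):
        reasons.append("insecure storage format (pickle-based weights)")
    if downloads < min_downloads:
        reasons.append("low reputation (few downloads)")
    return reasons
```

In practice the metadata lookup would query the model hub's API; the point of the sketch is that "unsafe model" can be reduced to concrete, checkable signals.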
Legit Security’s enhanced AI discovery capabilities offer organizations key benefits, including reduced risk from third-party components, alerts for unsafe models, and protection of the AI supply chain. This advancement reflects Legit’s commitment to providing comprehensive solutions for securing AI in software development.
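The guardrail concept mentioned among the complementary features could be wired into a CI pipeline as a simple policy gate. The sketch below is a hypothetical illustration, not Legit's product: it assumes some upstream scan has produced a mapping of flagged model IDs to reasons, and fails the build when any finding exists.

```python
import sys

def policy_gate(findings: dict[str, list[str]]) -> int:
    """Return a CI exit code: non-zero when any AI model was flagged.

    `findings` maps a model ID to the list of reasons it was flagged;
    an empty dict means the change is clear to merge.
    """
    if not findings:
        print("AI supply chain check passed")
        return 0
    for model_id, reasons in findings.items():
        print(f"BLOCKED {model_id}: {'; '.join(reasons)}", file=sys.stderr)
    return 1
```

A gate like this is what keeps a flagged model from silently reaching production: the scan produces findings, and the pipeline refuses to proceed until they are resolved.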