Wallarm’s 2025 API ThreatStats Report sounds an alarm on the explosive growth in AI-driven vulnerabilities. Researchers tracked 439 AI-related CVEs in 2024—a 1,025% leap from the prior year—nearly 99% tied to API security gaps. Memory corruption flaws rank high on the list, largely fueled by AI’s reliance on high-performance binary APIs. Wallarm’s new Top-10 category, Memory Corruption & Overflows, stems from how AI workloads interact with hardware, triggering buffer and integer overflows that lead to data leaks or system crashes.
Adoption of AI has skyrocketed. More than half of surveyed companies report multiple AI projects underway, which means more API endpoints and a corresponding rise in security blind spots. APIs connect AI models to the outside world, but they also open new avenues for injection, misconfigurations, and logic flaws. Wallarm’s data shows that over 57% of AI-powered APIs are externally accessible and 89% rely on weak authentication. Only 11% implement robust security controls, leaving most endpoints vulnerable to attacks.
This is consistent with broader trends. More than 50% of all CISA-exploited vulnerabilities now affect APIs—up 30% from the prior year—making APIs the predominant attack surface. Traditional kernel or supply chain exploits still matter, but APIs have become the prime target for hackers seeking direct access to critical data. High-profile incidents at Twilio and Tech in Asia show how weak authentication or misconfigurations let attackers gain unauthorized access. These breaches also highlight how memory errors—once an afterthought in API security—can now be exploited to compromise AI models, steal intellectual property, or even alter machine learning pipelines from within.
Wallarm’s threat intelligence uncovers striking examples of AI frameworks like PaddlePaddle and MLflow being exploited at API endpoints. Attackers slip malicious payloads into training data or siphon proprietary knowledge, leveraging injection flaws and memory overflows. As AI workloads push hardware limits, APIs can break under pressure. That’s why memory corruption and overflow vulnerabilities shot to the top of the chart. The same performance-boosting binary endpoints that power AI’s speed become gateways for buffer or integer overflows, enabling arbitrary code execution and system takeovers.
Legacy APIs aren’t off the hook either. Outdated .php endpoints and AJAX backends remain widespread in industries like telecom, healthcare, and government, creating easy targets for attackers exploiting old authentication and session handling. Even “modern” RESTful APIs can be equally at risk if they lack secure coding practices or use off-the-shelf code with known CVEs.
API security is now the linchpin for organizations embracing AI. While AI promises transformative capabilities, it also raises the stakes of an attack, with potentially bigger data leaks, more severe disruptions, and higher reputational damage. Real-time API security controls—automated vulnerability detection, threat intelligence, and memory-safety checks—are critical to reduce risk.
Wallarm’s research team used a rigorous methodology to capture 99% of all API-centric CVEs and bug bounty disclosures from 2024, mapping them to CWE categories and validating the results through empirical testing. The outcome is a high-coverage dataset whose trends underscore how urgent it is to close these glaring security gaps.
Enterprises that secure their AI-powered APIs position themselves to lead. Those that overlook these mounting risks invite breaches, system downtime, and loss of consumer trust. It’s clear that AI security is API security, and that strong API defenses form the key protective layer for modern, data-driven businesses.
Download the report: https://www.wallarm.com/resources/2025-api-threatstats-report-ai-security-at-raise