A majority of security leaders are struggling to defend AI systems with tools and skills that are not fit for the challenge, according to the AI and Adversarial Testing Benchmark Report 2026 from Pentera.
The report, based on a survey of 300 US CISOs and senior security leaders, examines how organizations are securing AI infrastructure and highlights critical gaps tied to skills shortages and reliance on security controls not designed for the AI era.
AI adoption is outpacing security visibility
AI systems are rarely deployed in isolation. They are layered into existing corporate technology, from cloud platforms and identity systems to applications and data pipelines. With ownership spread across disparate teams, centralized oversight has effectively collapsed.
As a result, 67 percent of CISOs reported limited visibility into how AI is being used across their organization. No respondents reported full visibility; instead, all acknowledged being aware of, or tolerating, some form of unmanaged or unsanctioned AI usage.
Without a clear view of where AI systems operate or what resources they can access, security teams struggle to assess risk effectively. Basic questions, such as which identities AI systems rely on, what data they can reach, or how they behave when controls fail, often remain unanswered.
Skills, not budget, are the primary barrier
Although AI security is now a regular topic in boardrooms and executive discussions, the study shows that the biggest challenges are not financial.
CISOs identified the following as their top obstacles to securing AI infrastructure:
- Lack of internal expertise (50 percent)
- Limited visibility into AI usage (48 percent)
- Insufficient security tools designed specifically for AI systems (36 percent)
Only 17 percent cited budget constraints as a primary concern. This suggests that many organizations are willing to invest in AI security, but do not yet have the specialized skills needed to evaluate AI-related risks in real environments.
AI systems introduce behaviors that security teams are still learning to assess, including autonomous decision-making, indirect access paths, and privileged interactions between systems. Without the right expertise and active testing, it becomes difficult to evaluate whether existing controls perform as intended.
Legacy controls are carrying most of the load
In the absence of AI-specific best practices, skills, and tooling, most enterprises are extending existing security controls to cover AI infrastructure.
The study found that 75 percent of CISOs rely on legacy security controls, such as endpoint, application, cloud, or API security tools, to protect AI systems. Only 11 percent reported having security tools designed specifically to secure AI infrastructure.
This approach reflects a familiar pattern from previous technology shifts, in which organizations initially adapt existing defenses before more tailored security practices emerge. While this can provide basic coverage, controls built for traditional systems may not account for how AI changes access patterns and expands potential attack paths.
A familiar challenge, now applied to AI
Taken together, the findings show that AI security challenges stem from foundational gaps rather than a lack of awareness or intent.
As AI becomes a core part of enterprise infrastructure, the report suggests that organizations will need to focus on building expertise and improving how they validate security controls across environments where AI is already operating.
To explore the full findings, download the AI and Adversarial Testing Benchmark Report 2026 for a deeper discussion of the data and key takeaways.
Note: This article was written by Ryan Dory, Director, Technical Advisors at Pentera.