While 9 in 10 security leaders feel prepared to act immediately on a vulnerability, a new “trust gap” report finds that only 25% trust all the data in their own security tools, leading to slow remediation and delayed AI adoption.

A stark disconnect between cybersecurity confidence and the reality of security data is leaving organizations exposed, according to a new report. While 90% of cybersecurity leaders say their organization is prepared to take immediate action on a vulnerability, only a quarter of them (25%) trust all the data in their own security tools.

This fundamental “trust gap” is the central finding of new research commissioned by Axonius, a cyber asset intelligence company. The study, which surveyed 500 US director-level and above cybersecurity and IT leaders at companies with more than 500 employees, found that this data trust deficit directly impacts performance. Among the leaders who do not fully trust their security data, the primary reasons cited are inconsistent data (36%), incomplete data (34%), and inaccurate data (33%).

“Many organizations mistakenly believe they have a clear picture of their security posture, but that confidence often rests on flawed or what some call ‘dirty data’—information that’s incomplete, inaccurate, or out of date,” says Ryan Knisley, chief product strategist at Axonius, in a release. “Effective exposure management depends on reliable, trustworthy data. No amount of automation or AI integration can make up for a broken data foundation. Until that gap is addressed, the risk of a serious breach only grows.”

Industry analysts agree that this data integrity issue is a primary obstacle to security modernization.

“The [chief information security officers] we talk to are investing heavily in automation and AI, but it is unclear how many of these projects will actually deliver on their promise,” says Andrew Braunberg, principal analyst at Omdia, in a release. “The reason is simple: AI algorithms are only as good as the data they’re fed. A single, credible view of all assets and their exposures is critical for organizations to train accurate, predictive, and up-to-date models.”

Key findings from the report include:

  • Execution Lags Behind Confidence: Despite feeling prepared, roughly 4 in 5 organizations take more than 24 hours to remediate a critical vulnerability (81%) or exposure (80%), giving attackers a wide-open window to exploit security weaknesses. This is compounded by key operational challenges, including difficulty with prioritization and risk assessment (29%) and a lack of integration between security tools (27%).
  • CTEM Adoption Is a Priority but Faces Hurdles: While 58% of organizations report having adopted a continuous threat exposure management (CTEM) framework to become more proactive, they face significant challenges. The top obstacles include integrating CTEM tools across platforms (38%), measuring ROI (35%), and automating remediation (34%).
  • AI’s Potential Is Hindered by Bad Data: Organizations are eager to use AI and automation for tasks like automated patching (42%) and AI-driven risk prioritization (40%). However, the top challenge to incorporating these technologies is integration issues with existing systems (38%)—a problem rooted in a weak data foundation.

“The industry is chasing the promise of proactive, predictive security, but you can’t predict threats if your view of the battlefield is a mirage,” says Knisley in a release. “The path forward requires a real commitment to establishing the right context: a consolidated view across environments for what exists in an environment and how it’s exposed. Only then can teams close the gap between feeling ready and actually being ready, enabling them to preemptively tackle threats and build lasting cyber resilience.”

Axonius will present the research findings and demonstrate the Axonius Asset Cloud at Black Hat USA 2025, Aug. 6–7.

