Uncovering the Hidden Risks of the AI Race

Contents

1.0 Introduction
2.0 The AI security confidence paradox
4.1 Speed prioritized over governance
4.2 Rampant shadow AI
5.0 Identity at the core of AI's biggest risks
6.1 Visibility comes first
6.2 Machine-speed security for machine-speed threats
6.3 Zero standing privilege is the endgame
6.4 Zero-trust principles are more important than ever

1.0 Introduction

The AI mandate from business leaders is clear: to remain competitive, you must accelerate AI adoption. Operationally, organizations can't afford to let security friction hold up agentic AI deployments. Executive teams are under pressure to innovate quickly and keep pace with competitors. Even everyday non-tech workers …

Unfortunately, legacy security models built for humans aren't evolving fast enough to fully monitor new agentic AI operating models. In the rush to innovate, identity controls are often relaxed, inconsistently applied, or left …

To understand how organizations are navigating these challenges, Delinea commissioned Censuswide to run a global survey of 2,001 IT decision-makers who are actively using or piloting AI in their environments across …

The study finds that in moving to agentic AI operating models, organizations are introducing substantial identity-related risk, much of which remains …

The findings reveal that organizations express high confidence in their security readiness for AI while simultaneously acknowledging gaps in identity discovery, monitoring, and privilege control. Facing the opportunity cost of falling behind in the AI race, risk managers are under constant pressure to loosen identity controls.

Clearly, organizations can't afford to slow down AI adoption. But the study indicates that identity security must evolve alongside it: leaders must modernize the way they discover and protect access and identities.

2.0 The AI security confidence paradox

The survey findings reveal a clear gap in agentic AI preparedness.
Throughout the research, respondents held two conflicting beliefs: most are highly confident in their security readiness for AI, yet they also concede that governance is deficient around AI systems. This dynamic exposes what we call the "AI security confidence paradox."

Respondents were twice as likely (2x) to give low marks to their ability to discover and govern identities in AI environments compared to identities accessing legacy systems. The pattern held across multiple lines of questioning. Confidence in the discovery and protection of identities in AI environments was consistently …

For example, while 82% of organizations said they're very confident in their ability to discover non-human identities (NHIs) with access to production systems, fewer than 1 in 3 actually validate NHI and AI agent usage in real time.

This paradoxical thinking indicates organizations may be advancing agentic AI without fully modernizing the identity controls required to support it. They may not yet realize the level of risk incurred by agentic AI.

[Chart: 82% very confident in NHI discovery vs. fewer than 1 in 3 validating NHI/AI usage in real time]

3.0 The identity visibility gap: What you don't …

Our data shows that most organizations haven't yet built the mechanisms to fully see how identities are used, whether by humans or AI agents. An overwhelming 90% of respondents admit to …

The number one gap was machine and NHI accounts, including those used by AI agents. Respondents reported that the identity discovery gaps most likely to persist over time were in AI-related environments, at nearly …

Persistent discovery gaps despite existing controls

The challenge is that awareness doesn't always translate into corrective action. Many organizations have not yet experienced a security incident linked to AI-related identity weaknesses.
Without a measurable failure or …

Most respondents acknowledge concerns about AI agent access. Approximately 42% of organizations admit that AI expansion is one of the top factors that has increased their NHI risk in the past 12 months, far more than increased automation and CI/CD velocity (26%) or growth in cloud-native workloads (26%). Fewer than 1 in 10 said that there's no particular risk …

Until they close the visibility gap, these concerns remain vague worries while threats accumulate unseen in their environments. Without visibility into NHI and AI agent activity, … Even more concerning, the lack of visibility into machine identities also …

"The business is accepting AI risk to stay competitive, but because AI is such a new paradigm, they're accepting it without actually understanding, qualitatively or quantitatively, …"
Dr. Gerald Auger, Ph.D., Head of Simply Cyber

[Chart: Identity type currently creating the largest visibility gap in …]

The fact that NHIs only narrowly beat out workforce identities speaks to another, deeper issue: organizations haven't yet perfected identity visibility governance for human users …

4.0 Why identity weaknesses in …

Our survey analysis found three major areas that contribute to the systemic lack of visibility into AI-related identity risks:

Speed prioritized over governance: Innovators accelera…