Left: Tamara Lilly of the Department of Health and Human Services; Right: Cheri Pascoe of the National Institute of Standards and Technology's Cybersecurity Center of Excellence

The Cyber Threat of Agentic AI

Federal cybersecurity leaders are warning that autonomous AI-powered cyberattacks are evolving faster than government defenses can respond, creating new risks around identity, phishing, enterprise access and critical infrastructure security.

“The speed of agentic AI-driven attacks and existing human-led defenses, it’s not an equal match,” said Tamara Lilly, assistant inspector general for cybersecurity and IT audits at the Department of Health and Human Services, during GovCIO Media & Research’s CyberScape: The Federal Cybersecurity Summit in April. “You don’t come to a gun fight with rocks.”

Lilly’s warning reflects growing concern across government and industry that agentic AI, systems capable of independently making decisions and executing tasks, is accelerating both cyber offense and defense at a pace existing governance and security architectures are struggling to match.

Those challenges will be a major focus at the Potomac Officers Club’s 2026 Cyber Summit on May 21, where federal and industry leaders will discuss the growing role of AI in cyber operations, threat detection and autonomous resilience.

The event will feature panel discussions including “AI in Cyber Defense: From Intelligent Detection to Autonomous Resilience” and “From Automated to Autonomous: Leveraging AI to Evolve Your SOC,” each featuring expert panelists from across government and the military. Don’t miss this key GovCon networking opportunity; register now!

Why Is Agentic AI Becoming a Cybersecurity Concern?

Unlike traditional AI tools that operate within narrow boundaries, agentic AI systems can move across platforms, interact with multiple applications and autonomously execute tasks with minimal human oversight.

That flexibility is also creating new vulnerabilities.

“We’re seeing now almost a frenetic pace in which cybersecurity vendors are continuing to add AI agents in their technologies,” said Cheri Pascoe, director of the National Cybersecurity Center of Excellence at the National Institute of Standards and Technology. “The challenge is, though, that our governance and risk management frameworks do not evolve and are not updated at the pace in which technology is changing.”

Pascoe said organizations are still largely securing systems, endpoints and networks individually even as agentic AI increasingly operates across entire enterprise environments.

“We’re also seeing challenges in which organizations are continuing to secure individual systems, networks, endpoints, but AI, and especially agentic AI, which has the ability to kind of operate on its own, is crossing platforms, crossing systems across an entire enterprise,” she said.

Security researchers and federal agencies have increasingly raised concerns about prompt injection attacks, privilege escalation, AI identity spoofing and autonomous attack chains tied to agentic AI deployments.

A May report from CSO Online said CISA and allied cybersecurity agencies are urging organizations to adopt least privilege models, continuous monitoring and stronger governance controls for AI agents as prompt injection and tool misuse risks grow.
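In practice, least privilege for an AI agent often means the agent never invokes tools directly; every call passes through a gateway that enforces an explicit allowlist and records an audit trail for continuous monitoring. The sketch below is purely illustrative — the class and tool names are hypothetical, not drawn from CISA guidance or any specific agent framework:

```python
from datetime import datetime, timezone

class LeastPrivilegeAgentGateway:
    """Mediates every tool call an AI agent attempts.

    Enforces a deny-by-default allowlist and logs each attempt,
    permitted or not, so monitors can review agent behavior.
    """

    def __init__(self, agent_id, allowed_tools):
        self.agent_id = agent_id
        self.allowed_tools = set(allowed_tools)  # explicit grants only
        self.audit_log = []                      # feed for continuous monitoring

    def call(self, tool_name, tool_fn, *args, **kwargs):
        permitted = tool_name in self.allowed_tools
        self.audit_log.append({
            "agent": self.agent_id,
            "tool": tool_name,
            "time": datetime.now(timezone.utc).isoformat(),
            "permitted": permitted,
        })
        if not permitted:
            # Deny by default: anything not explicitly granted is blocked.
            raise PermissionError(
                f"{self.agent_id} is not authorized to use {tool_name}")
        return tool_fn(*args, **kwargs)

# Usage: an agent granted only read access cannot escalate to deletion.
gateway = LeastPrivilegeAgentGateway("report-bot", ["read_file"])
gateway.call("read_file", lambda path: f"contents of {path}", "notes.txt")
try:
    gateway.call("delete_file", lambda path: None, "notes.txt")
except PermissionError:
    pass  # blocked, and the attempt remains in the audit log for review
```

Even this toy version captures the core recommendation: the blocked `delete_file` attempt is not just refused, it is logged, giving defenders a signal that an agent tried to exceed its grants.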

Meanwhile, past Potomac Officers Club speaker and former CIA official Jennifer Ewbank wrote in Homeland Security Today that the convergence of autonomous AI systems and AI-enabled cyberattacks is dramatically expanding the attack surface across critical infrastructure sectors.

How Fast Are AI-Driven Cyberattacks Moving?

One of the biggest concerns among cybersecurity leaders is the shrinking timeline between initial compromise and data theft.

Lilly cited recent industry findings showing attackers can now infiltrate and exfiltrate sensitive data in less than half an hour.

“As of late, the average time for an attacker from initial access to data exfiltration … has dropped to a record low according to CrowdStrike. Twenty-nine minutes,” Lilly said. “In and out with your data. Amazing. Before you even know it, a lot of times.”

She added that AI is lowering the barrier to entry for cybercriminals while simultaneously increasing the sophistication of attacks.

“As much as AI is helping us, the adversary has stepped up its game and it has actually lowered the bar for the average bad actor to get into the game,” Lilly said.

The emergence of AI-generated phishing campaigns, deepfake voice scams and autonomous reconnaissance tools is forcing agencies to rethink traditional cybersecurity architectures.

“So in terms of traditional architectures, they’re failing us and we need to keep pace with the autonomous by using autonomous agents on our side,” Lilly said.

According to Ewbank, adversaries are increasingly using AI to automate reconnaissance, accelerate vulnerability discovery and execute lateral movement inside networks before security operations centers can complete initial triage.

At Potomac Officers Club’s 2026 Cyber Summit this coming Thursday, keynote speakers such as Department of War Acting CISO Aaron Bishop, acting Federal CISO Michael Duffy and Acting Assistant Executive Director Chris Butera are expected to address emerging AI-driven cybersecurity risks and modernization priorities. Get your questions answered by these influential federal representatives.

Can Federal Cybersecurity Frameworks Keep Up?

Federal agencies and standards organizations are now racing to develop governance and identity frameworks capable of managing autonomous AI systems.

In February, NIST announced the launch of its AI Agent Standards Initiative, which aims to support interoperable and secure AI agent deployments across sectors. The initiative focuses on industry-led standards development, open-source protocols and research into AI agent security and identity management, according to NIST.

Pascoe said identity and authorization have quickly emerged as major challenges for organizations deploying AI agents.

“Our first project … will be focused all around identity,” she said, when outlining the NCCoE’s approach toward combating agentic threats. “How do you authorize an agent, identify an agent within your enterprise?”
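One common answer to the “how do you identify an agent” question is to treat agents like any other non-human identity: issue each one a short-lived, verifiable credential and check it before authorizing any action. The HMAC-signed token below is a minimal sketch of that pattern, assuming a single enterprise-held signing key — it is not a mechanism specified by NIST or the NCCoE:

```python
import hashlib
import hmac
import secrets
import time

SERVER_KEY = secrets.token_bytes(32)  # enterprise-held signing key

def issue_agent_token(agent_id, ttl_seconds=300):
    """Issue a short-lived credential binding an agent identity to an expiry."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{agent_id}|{expires}"
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_agent_token(token):
    """Return the agent_id if the token is authentic and unexpired."""
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("unknown or forged agent identity")
    agent_id, _, expires = payload.partition("|")
    if int(expires) < time.time():
        raise ValueError("expired agent credential")
    return agent_id

# Usage: an enterprise service verifies the agent before acting on its behalf.
token = issue_agent_token("hr-summarizer-agent")
assert verify_agent_token(token) == "hr-summarizer-agent"
```

Short lifetimes matter here: an agent credential that expires in minutes limits how long a stolen or spoofed identity stays useful, which is exactly the “attack surface of unknown identities” concern.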

Pascoe added that NIST is currently seeking industry feedback through concept papers and collaborative efforts focused on practical guidance for securing agentic AI systems.

That concern around non-human identities is becoming increasingly central to federal cyber discussions.

Lilly said organizations are now confronting “an attack surface of unknown identities” created by AI agents, bots and service accounts operating across enterprise systems.

“We need to do a better job at understanding what that is and preventing and actioning those,” she said.

CISA and international cybersecurity partners recently warned organizations to carefully define AI agent permissions, validate how agents interpret inputs and continuously monitor agent behavior to prevent privilege creep and malicious manipulation, according to CSO Online.
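One concrete form that continuous monitoring can take is a periodic comparison of what an agent is allowed to do against what it has actually done: unused grants are candidates for revocation before they become privilege creep, and any use outside the grant list is a red flag for manipulation. This is an illustrative sketch of that check, not an implementation from the CISA guidance:

```python
def flag_privilege_creep(granted, observed_usage):
    """Compare an agent's granted permissions with its observed behavior.

    granted: set of permission names the agent currently holds
    observed_usage: permission names seen in the agent's monitoring logs
    """
    used = set(observed_usage)
    return {
        "unused_grants": sorted(granted - used),     # revoke these
        "unauthorized_use": sorted(used - granted),  # investigate these
    }

# Usage: 'delete_mailbox' was granted but never exercised, so it can go.
report = flag_privilege_creep(
    granted={"read_email", "send_email", "delete_mailbox"},
    observed_usage=["read_email", "read_email", "send_email"],
)
```

The same report drives both halves of the guidance: trimming `unused_grants` keeps agents at least privilege over time, while a non-empty `unauthorized_use` list means an agent acted outside its defined permissions and its inputs should be examined for injection or tool misuse.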

As agencies accelerate AI adoption, federal leaders are increasingly emphasizing resilience alongside prevention.

“We need to really focus on how we’re going to recover, make sure that it’s fast, it’s quick, it’s efficient and effective,” Lilly said. “That’s the biggest thing.”

The challenge facing federal cybersecurity leaders now is not simply whether AI will reshape cyber operations. It is whether governance, identity management and defensive capabilities can evolve quickly enough to match the speed of autonomous threats before those systems become deeply embedded across government and critical infrastructure environments.

Get answers and explore partnership opportunities to address these looming questions at the Potomac Officers Club’s 2026 Cyber Summit on May 21. The all-day event will feature government and industry leaders from organizations including the Air Force Research Laboratory, Department of the Navy, Centers for Medicare & Medicaid Services, Coast Guard, Defense Information Systems Agency, Department of Education and the Army.

Sessions throughout the summit will examine how agencies are leveraging AI for cyber defense, modernizing security operations centers and preparing for increasingly autonomous threats across federal networks and critical infrastructure environments. Reserve your spot before they sell out!
