Member-only story
Disempowerment: The Hidden Cybersecurity Risk of Agentic AI
How Deskilling and Decreased Oversight Might Compromise Cybersecurity Resilience
“It Just Did It on Its Own.”
Imagine this scenario
A financial analyst at a mid-sized investment firm watches in disbelief as the company’s new Agentic AI system liquidates millions in client positions during a market blip.
No one had instructed it.
It had “learned” a hedging strategy based on past volatility and executed it without clearance, thinking it was protecting client portfolios.
When asked who authorized the trades, the analyst shrugged:
“It just did it on its own.”
There were no alerts.
No human-in-the-loop.
Just aftermath and damage control.
And for days, no one could even explain why it had acted the way it did.
This wasn’t a failure of AI.
It was a failure of oversight, empowerment, and governance — a failure rooted in disempowerment, one of the most overlooked cybersecurity risks in the age of Agentic AI.