The Skills Are There. They're Just in the Wrong People.
Why the real bottleneck in AI agent adoption isn't the technology — it's where the right skills are sitting in your organization.
Most organizations already have everything they need to work effectively with AI agents. The problem isn't the technology, the budget, or even the strategy. The problem is that the skills required to manage AI agents are distributed in exactly the wrong way: the people who have them don't want to apply them to AI, and the people who need them don't want to acquire them.
A Transition We've Seen Before
Anyone who has managed technical teams has watched this movie before. A brilliant developer gets promoted to engineering manager. Within weeks, the frustration sets in — not because the work is harder, but because it's different. Writing code gives immediate, tangible feedback. Managing people who write code means living in the abstract: briefing, delegating, reviewing, iterating on output you didn't produce yourself. Many technically brilliant people either fail the transition or quietly return to individual contributor roles.
That same transition is now being forced on entire technical teams — not as a promotion, but as a side effect of a tool update. And most teams are not ready for it.
Side One: Developers Discovering the Hard Skills Are Soft
I've seen this pattern consistently in the teams I work with. A senior developer — ten years of experience, deep technical credibility — receives access to a coding agent. He runs a few tests, gets mediocre output, and concludes the tool doesn't work for his type of problem. What actually happened: he wrote vague, ambiguous prompts. Not because he isn't intelligent, but because he has never had to write a formal specification for someone else. He always held the full context in his own head.
Working with AI agents requires exactly the skills that technical staff have historically handed off to product managers, project leads, and business analysts: decomposing problems into delegable units, writing clear acceptance criteria, reviewing output against intent rather than personal preference. InfoWorld captures this precisely — skills dismissed as "soft" turn out to be the hard ones.
The data confirms what I observe in the field. Engineers working with coding agents are already reporting that 70% or more of their time has shifted from writing new code to reviewing and revising agent output. But nobody trained them to review. Code review was always secondary — a step before merging, not a primary craft. Now it is the job.
Side Two: Executives Who Won't Make the Same Crossing
This is where the problem becomes structurally harder to solve.
The skills needed to manage AI agents effectively aren't exotic. They're the daily toolkit of any experienced manager: articulate a goal, break it into tasks, assign with context, review the output, iterate. Ethan Mollick at Wharton makes the argument explicitly — management experience is the hidden AI superpower that nobody is talking about.
So why aren't managers and executives using it?
Because using it directly means re-engaging with the technical layer they spent years leaving behind. I've seen this with CTOs who have declared their organizations "AI-first" while never personally interacting with an agent. Every AI task is still routed through a tech lead, who prompts the agent, reviews the output, and delivers a result — the same delegation chain that existed before AI arrived. The strategic vision still passes through a human translation layer, losing precision at every step.
McKinsey is direct about this: "Delegating the agentic transformation to your technology leader, as you would with a software deployment, will not suffice." The executives who could most naturally manage AI agents, because they already manage people using the same skills, are choosing not to engage.
The Crossing Nobody Wants to Make
This is the structural paradox at the center of most organizations' AI adoption problems:
- Technical staff are being pushed toward management skills they have historically avoided.
- Managers and executives are being pushed back toward technical engagement they deliberately left behind.
The skills exist in the organization. They're just sitting in the wrong layer — and neither layer wants to move.
HBR identified this gap in early 2026 and proposed a new role: the Agent Manager — someone who combines technical fluency with management discipline to govern how AI agents operate within a team. It's a reasonable response. But it's also a workaround. Hiring an Agent Manager is easier than changing the behavior of ten senior developers and a CTO. It doesn't solve the underlying structural problem.
The more uncomfortable truth is that AI agent adoption requires a genuine redistribution of skills — not a new headcount.
What This Means for Technology Leaders
Organizations that treat AI adoption as a tooling problem will plateau quickly. The bottleneck is not the model, the license, or the infrastructure. 80% of organizations are deploying AI agents while only 20% have mature governance for them — and governance isn't a technical problem, it's a management one.
The question worth asking isn't "which AI tools should we deploy?" It's a harder one: who in this organization is willing to work differently — and what would it take to make that crossing less threatening?
That's where the real transformation begins.
Sources:
- InfoWorld — AI coding requires developers to become better managers
- Ethan Mollick, One Useful Thing — Management as AI superpower
- McKinsey — The Agentic Organization: Contours of the next paradigm for the AI era
- Harvard Business Review — To Thrive in the AI Era, Companies Need Agent Managers
- Gloat — 10 Key AI Workforce Trends in 2026
- Fortune — Weary managers of the world, get ready to learn a new skill
- Medium / Sathish Jayaram — Everyone is a Manager Now: The Art of Delegating to AI Agents
- Anthropic — 2026 Agentic Coding Trends Report