Agents at Work
Microsoft put a number on the problem. The number is worse than I thought.
Microsoft published its 2026 Work Trend Index this month. Twenty-five pages, a foreword by Harvard’s Karim Lakhani, a headline that reads “Agents, human agency, and the opportunity for every organization,” and a subtitle that I am going to ask you to read twice.
“As AI and agents take on execution, our own agency expands. The question is whether organizations are built to capture it.”
If you have been on this page for the last six months, you have read a version of that sentence twice already. Once in March, in The Age of the Orchestrator. Once at the end of March, in Burn Rate. Microsoft, with twenty thousand survey respondents and a year of Copilot telemetry, has now done the work of providing the data.
The data is worse than I expected.
The Frontier Paradox
The report sorts AI users into five zones across two axes. Their own readiness to work with AI. Their organisation’s readiness to absorb it.
The Frontier zone, where individual capability and organisational readiness both run high and reinforce each other: 19%. The Stalled zone, where both sides are flat: 16%. The Unclaimed Capacity zone, where the company is ready and the worker is not: 5%. The Emergent zone, the messy middle where both sides are sort of trying: 50%.
That leaves 10%. Microsoft has a name for the 10%. They call it Blocked Agency.
Blocked Agency means skilled people, building real capability with AI, sitting inside companies that have not built anything around them. They are doing the work. Nothing is catching it. The output runs off into the sand.
Now read the subtitle one more time. The question is whether organizations are built to capture it.
For the thirty-one percent of the workforce in the Stalled, Blocked, and Unclaimed zones, the answer is no. For the fifty percent in the Emergent middle, the answer is approximately, sometimes, on a good day. Only one in five is in the place where the system actually compounds the work.
This is what Microsoft calls the Transformation Paradox. In their own words: “The Transformation Paradox is, at its core, a systems problem. And systems don’t fix themselves. They have to be redesigned.”
So far, so unsurprising. Then they ran the numbers.
The number that ends the argument
Microsoft tested twenty-nine factors against self-reported AI impact. Creativity. Work quality. Ability to do high-value work. Willingness to stay at the company because of AI. The full kitchen sink. Job level, industry, generation, AI familiarity. Manager support, organisational culture, governance maturity, talent practices.
They came back with one number that ought to settle a long-running argument.
Organisational factors account for sixty-seven percent of the variance in AI impact. Individual factors account for thirty-two percent. The strongest single predictor of whether AI is doing any useful work inside your organisation is the organisation’s AI culture. It is, in the report’s own words, “about two and a half times as strong a signal as the top individual factor.”
Not your training budget. Not your AI literacy programme. Not the prompt engineering certification your team just paid for. Not how clever your people are with a chat window. The thing that determines whether your investment in AI returns anything is what the place looks like around the AI.
Two thirds. Twenty-nine factors. Twenty thousand respondents across ten markets. If you wanted a number to put on the question of where the value sits, this is the number.
The story everyone wants to tell about AI is a story about people. The data, as it almost always does, has other ideas.
How the report will get misread
The report is going to get read two ways inside most organisations. Both ways are wrong.
The first reading is the training reading. Send everyone on an AI bootcamp. Buy the Coursera enterprise licence. Build the prompt library. Reward the early adopters. Get certified. Solve. This reading focuses on the thirty-two percent and ignores the sixty-seven.
The second reading is the tools reading. Deploy more agents. Copilot for everyone. Build the multi-agent orchestration layer. Plug it into Slack, ServiceNow, the CRM, the helpdesk. Solve. This reading mistakes activity for absorption and assumes that buying capability is the same as building it.
Microsoft’s own data refuses both readings.
The thing that turns AI investment into AI value is, in the report’s language, “the system that connects leadership, culture, management practices, and how work is measured.” Strategy at the top. Metrics that reward redesign over delivery. Managers who model AI use and set quality standards for AI output. Governance that has kept pace with what the agents are now allowed to do without human review. Talent practices that build the right skills and then create the space to apply them.
That is not a training programme. That is not a software licence. That is operating model work. Slow, structural, expensive, and the kind of thing most leadership teams would rather not look at directly because looking at it requires answering questions that have, until very recently, been comfortable to leave open.
Owned Intelligence
There is a line in the Organizations section of the report that I had to read three times.
“Every Frontier Firm needs to build Owned Intelligence. Institutional know-how that compounds over time, is unique to the firm, and is hard to replicate.”
That is a Microsoft research paper from May 2026, giving a name to the thing we have been arguing for since long before any of this had a name. The thing AI cannot give you, no matter how many agents you deploy and how cheap the tokens get, is the captured, codified, governed thinking of your own organisation. The narrative that says what you stand for, before anything you publish is judged against it. The structure that says what crosses the line and what does not. The workflow that says whose name is on the thing before it goes into the world.
Without that asset, AI is producing content the way an engine revving in neutral produces heat. There is plenty of energy in the system. None of it is doing useful work.
Microsoft calls it Owned Intelligence. We have been calling it Narrative, Structure, Workflow. Same asset. Same gap. Same reason most organisations cannot answer the question of who, exactly, is accountable for what the AI just shipped.
What this looks like inside the content function
If the Frontier Paradox is the macro picture, the content function is the micro one. Look at any communications or marketing team that has folded AI into its workflow in the last eighteen months. The pattern is the same.
Output is up. Speed is up. The number of pieces moving through the system has gone vertical.
The number of pieces that anyone has actually read, signed off on, and stands behind has not moved.
That gap is not a content problem. It is the Frontier Paradox happening inside a single function. Skilled people producing more than the governance system around them can absorb. Blocked Agency on a small scale, repeated everywhere a marketing team has access to a chat window.
The fix is not an editorial AI. It is not a smarter content tool. It is the system around the work. The narrative framework that defines what the organisation stands for before anyone is judging output against it. The governance layer that draws the line between what can ship without review and what cannot. The accountability infrastructure that means there is always a name attached to what goes out, whether it was written by a person or generated by an agent.
Without that, you have a faster version of the same problem you had two years ago. With it, you have something that compounds.
What Lakhani is actually asking
Lakhani frames the next decade’s management question in the foreword. “How should work itself be designed when intelligence can be embedded, distributed, and increasingly delegated?”
It is a good question. It is the question. And most organisations are still trying to answer it by buying tools, hiring AI leads, and running pilots. The report is, in its careful Harvard Business School way, telling them that none of that is the answer.
We have been less careful about it.
The answer is to build the system around the agent before you deploy the agent. The narrative. The structure. The workflow. The accountability gates. The version control on what the organisation is allowed to say. The audit trail on who said it. The compounding intelligence asset that does not exist by default and never will, unless someone goes and builds it.
That is the work. We have been doing it for nineteen years, since long before the word agent meant what it now means.
If the Frontier Paradox is showing up in your content function as more output and less trust, the fix is not faster output. It is the system around it. And the system is not going to build itself.
Not sure where your content system breaks? Take the five-minute diagnostic and find out →
Red Pen builds the content governance, verification, and accountability infrastructure that complex organisations need to scale content at speed, without losing control.