The Age of the Orchestrator
Intelligence is commoditised. Everyone can do everything. The value is in what comes next.
On 4 February 2026, Anthropic released an update to Claude called Cowork, which includes agentic plugins built to automate contract reviews, regulatory compliance tracking, and sales forecasting: the kind of high-volume, repetitive knowledge work that millions of people do every day.
Within hours, India’s benchmark IT stock index dropped nearly 6%.
The technology wasn’t unexpected. The industry had been warned. But no one had quite reckoned with what it would feel like when a single product release made the whole model visible: selling human effort by the hour, at scale, had just hit a wall. Airbags deployed, car totalled.
Tata Consultancy Services, India’s largest IT firm and employer of hundreds of thousands, had already announced its biggest-ever layoffs: 12,000 roles, citing AI-driven skill mismatches. Campus hiring across major Indian IT firms dropped to its lowest level in two decades. Analysts at State Street noted that Indian IT operating margins had fallen below their long-term average, with AI now viewed as a deflationary force reducing billable hours, compressing pricing, and accelerating a shift from headcount-based models to outcome-based delivery.
This isn’t really an Indian IT story. That’s just the most visible crack in the wall. The same pressure is bearing down on every industry that has historically monetised expertise by the unit: law, consulting, accounting, marketing, software development, and content production.
The question isn’t whether AI will replace specialists; it already is. The question is what replaces the business model built on them.
If you lead content or communications inside a complex organisation, this is your problem specifically.
The end of expensive intelligence
For decades, the economics of professional services were relatively simple. Intelligence was scarce and expensive. Organisations paid for subject-matter experts because genuinely knowing a domain and consistently applying that knowledge were difficult to find and costly to retain. The value sat in the knowing.
That scarcity is now gone.
AI is fast becoming a utility. Cheap, available, and getting cheaper. As one commentator put it recently: we’re watching the commoditisation of intelligence in real time. It’s becoming plumbing: essential, invisible, and boring. The “AI Company” label is already peeling off, just as the “Internet Company” label did after the dot-com crash. When everyone has access to the same capability, the capability stops being the differentiator.
Anyone with a browser and a subscription can now draft a legal brief, produce a market analysis, write a pitch deck, or turn out product descriptions in six languages before lunch. The tools are out there. Most people have them.
Which means doing doesn’t mean squat. The value lies in orchestrating.
Enter the orchestrator
This is where it gets interesting.
Machines handle execution. The orchestrator handles judgment, governance, and everything the algorithm can’t be trusted with.
We’re not heading into an economy where AI replaces humans and everyone goes home. We’re heading into one where a single person or a very small team can orchestrate AI agents across multiple domains to deliver what entire departments used to. Where someone with the right judgement and the right systems can direct a constellation of tools to produce work that previously required dozens of specialists.
This is already happening. Software engineers are describing the shift from coder to conductor to orchestrator, arranging work across multiple autonomous agents rather than writing every line themselves. Management thinkers are writing about a future where three human specialists orchestrate agent teams that do the work of twenty. Deloitte estimates that the autonomous AI agent market could reach $8.5 billion by 2026, and that organisations with mature orchestration will capture two to three times more value than those without.
The era of the single-domain specialist, the person paid for one skill, applied repeatedly, is giving way to something different. The orchestrator isn’t just a more senior specialist. It’s a different kind of role entirely.
What orchestration actually requires (and why it’s harder than it sounds)
There’s a tempting narrative I want to push back on before it calcifies. The version that goes: AI does the work, humans just supervise. Easy. It isn’t.
Orchestration is not the same thing as supervision. Supervision is watching things happen. Orchestration is knowing what good looks like across multiple domains, setting intent before anything gets produced, building governance into the workflow so that output can actually be trusted, and standing behind what goes out. The orchestrator makes the decisions the machines can’t make, and takes accountability for the ones they shouldn’t.
PwC’s 2026 AI predictions research makes a point that doesn’t get nearly enough airtime: technology delivers roughly 20% of the value in any AI initiative. The other 80% comes from redesigning work: how tasks are performed, checked, approved, and reinforced through management. Eighty per cent. And almost nobody is building for that 80%.
That number should stop every executive who thinks buying an AI platform is the same as transforming their operations. The tool is 20% of the answer. The system around the tool is the rest: the governance, the standards, the accountability, and the human judgement applied at the right moments.
The accountability problem nobody is talking about
Here’s what I see in practice, every week, inside the organisations we work with.
AI has accelerated output dramatically. Teams are producing more content, more analysis, and more deliverables than ever before. And almost none of it is being verified. There are no editorial standards applied to AI output, no governance layer determining what can go out without human review, and no clear ownership model for who is accountable when something is wrong.
The result is content that exists everywhere and works nowhere. More gets published. Less gets trusted. Different teams, channels, and regions say different things under the same brand. Strategy gets agreed in workshops and stalls before it reaches execution. And leadership can’t quite see why the investment in content, or AI, never quite delivers.
This isn’t a volume problem. It never was. It’s a systems problem, and AI is making it worse, faster. In the organisations where we see this most acutely, all three of those failures are present at once.
The European Business Review captured something worth sitting with: the idea of a “quiet re-ranking” already underway. Organisations that have their systems right, that deliver faster, with fewer errors, with consistent quality across touchpoints, are pulling ahead. Not in headline-grabbing ways. In small, repeated differences: shorter buying cycles, less rework, higher client confidence. Those differences compound. And the gap between organisations that have built the infrastructure and those that haven’t is widening every month.
Commoditisation doesn’t flatten everyone to the same level. It exposes who has built real operating capability, and who has just been buying tools.
The organisations closing that gap aren’t doing it by buying more tools. They’re building the operating infrastructure that makes AI output trustworthy: the governance layer between strategic intent and what actually goes out.
Why Red Pen exists now
Red Pen exists to build that infrastructure. It has three components: the Narrative frameworks that define what the organisation stands for before any content is created; the Structure that makes consistency possible at scale; and the Workflow accountability that ensures nothing goes out unauthorised.
Without all three working together, AI doesn’t solve your content problem. It scales it.
The age of the orchestrator is here. The question is whether you’re ready.
The shift has arrived. What happened in Indian IT wasn’t an anomaly; it was the first clear view of the wall. Every industry built on selling specialist expertise by the unit is facing the same pressure, on the same trajectory.
Future success is not about early AI adoption. It depends on building systems that enable orchestration: governance, accountability, and human judgement applied at the right moments.
The age of the orchestrator rewards people who can see across domains, set clear intent, and take responsibility for what comes out. It’s unforgiving of those who mistake access to tools for the ability to use them well.
If your organisation is producing more than ever but trusting it less, the issue isn’t the tools. It’s the system. And fixing that system is exactly what we do.
Got something to add?
If you’d like to talk about how we could help your organisation or continue the conversation, reach out today.