The web is being redesigned to serve agents. Here's what it strips out.
Alex Taylor
Feb 24, 2026

Last week, two separate companies announced two separate initiatives that, between them, describe the same project:
Making the web more accessible to AI agents.
Google announced a beta of WebMCP for Chrome, a new protocol that gives agents structured tools to interact with websites directly: booking a flight, filing a support ticket, navigating complex data, all without the ambiguity of trying to parse a page built for the human eye.
Around the same time, Cloudflare introduced what it calls “Markdown for Agents.”
The idea is simple:
when an AI agent requests a page from a Cloudflare-hosted site, the network automatically converts the HTML to Markdown before serving it.
No layout.
No scripts.
Just text.
Very similar to what Scrunch.com has been doing for the past year, and something the blankspace team has been watching closely.
CDNs were perfectly positioned for this.
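Cloudflare hasn't published its converter's internals, but the mechanic is straightforward to sketch with Python's standard-library `html.parser`: keep headings, links, and text; drop everything else, scripts included. This is an illustration of the idea, not Cloudflare's implementation.

```python
from html.parser import HTMLParser

class HtmlToMarkdown(HTMLParser):
    """Minimal HTML -> Markdown sketch: keeps headings, links, and text;
    silently drops scripts, styles, and everything else."""

    def __init__(self):
        super().__init__()
        self.out = []
        self.skip = 0     # depth inside <script>/<style>
        self.href = None  # current link target

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1
        elif tag in ("h1", "h2", "h3"):
            self.out.append("\n" + "#" * int(tag[1]) + " ")
        elif tag == "a":
            self.href = dict(attrs).get("href")
            self.out.append("[")
        elif tag == "p":
            self.out.append("\n")

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip -= 1
        elif tag == "a" and self.href:
            self.out.append(f"]({self.href})")
            self.href = None

    def handle_data(self, data):
        if not self.skip:
            self.out.append(data)

def to_markdown(html: str) -> str:
    parser = HtmlToMarkdown()
    parser.feed(html)
    return "".join(parser.out).strip()

page = '<h1>Review</h1><script>trackVisit()</script><p>Read our <a href="/pick">pick</a>.</p>'
print(to_markdown(page))
```

Note what survives the round trip: the heading and the link come through as Markdown, while the `trackVisit()` call simply disappears, never executed and never even present in the output.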
Both announcements were covered as good news.
And for the people building AI agents and the users who benefit from better answers, they are.
If you want AI to interact with the web more reliably, more efficiently, and with less friction, both of these are genuine progress.
But for publishers, there’s a cost that’s buried deep in both of them.
And almost nobody is actually talking about it.
What’s being stripped out
To understand the cost, you need to understand what publishers have spent two decades building on top of HTML.
The publisher’s commercial infrastructure: the thing that turns a page view into revenue.
It runs almost entirely on JavaScript.
When a user lands on a page, the browser loads the HTML,
then executes the scripts:
the analytics tag fires, recording the visit.
The ad tags run, triggering a programmatic auction that happens in milliseconds and serves a display ad.
The affiliate links activate.
The tracking pixels load for retargeting.
The paywall overlay appears, checking subscription status.
All of that happens in the JavaScript layer.
None of it is in the HTML itself.
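The gap can be made concrete with a toy model (every name below is hypothetical): the same page yields identical content to a browser and to an agent, but only the browser path fires monetisation events.

```python
# A toy model of the same page served two ways. The "scripts" stand in
# for the commercial layer; only a rendering browser executes them.
events = []

def analytics_tag():  events.append("pageview recorded")
def ad_auction():     events.append("display ad served")
def paywall_check():  events.append("subscription checked")

PAGE = {
    "content": "Best laptops of 2026: our top pick is...",
    "scripts": [analytics_tag, ad_auction, paywall_check],
}

def browser_visit(page):
    # A human's browser: parse the HTML, then execute every script.
    for script in page["scripts"]:
        script()
    return page["content"]

def agent_visit(page):
    # A live search agent: take the text, execute nothing.
    return page["content"]

browser_visit(PAGE)
print(len(events))   # 3: the commercial layer fired
events.clear()
agent_visit(PAGE)
print(len(events))   # 0: same content, zero monetisation events
```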
What’s happening now:
Markdown for Agents takes the HTML and converts it to Markdown before the agent ever sees it.
Markdown is text.
It contains the article, the headings, the links.
Nothing else.
WebMCP gives agents structured tools to interact with the site directly, bypassing the page render entirely.
In both cases, the agent gets the content.
The JavaScript layer, though, which is where the publisher’s commercial infrastructure lives, sits there patiently waiting to be called while millions of visitors go by unmonetised.
This isn’t a new problem.
Live Search Agents* have never executed JavaScript.
*Live Search Agents: the bots dispatched by ChatGPT, Perplexity, Claude, and others, not to train their models, but to collect information for a specific user prompt in real time.
They make an HTTP request, receive the HTML, and parse the text.
The publisher’s analytics don’t fire.
The ad auction doesn’t run.
The affiliate links don’t activate.
The bot reads the content, takes what it needs, and leaves.
The publisher gets nothing.
No data, no revenue, no record it even happened.

What Google and Cloudflare did last week was build an industry-standard infrastructure layer that makes this access pattern faster, cleaner, and more scalable.
They’ve taken a behaviour that was already happening and made it the intended behaviour.
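The intended behaviour can be sketched server-side. The user-agent markers below are illustrative examples of known AI crawlers, not an official list, and `respond` is a hypothetical handler, not part of either protocol.

```python
# A sketch of the access pattern these protocols standardise: the server
# detects an agent and serves pre-converted Markdown instead of full HTML.
AGENT_MARKERS = ("GPTBot", "ChatGPT-User", "PerplexityBot", "ClaudeBot")

def respond(user_agent: str, html: str, markdown: str) -> str:
    """Return the representation suited to the visitor."""
    if any(marker in user_agent for marker in AGENT_MARKERS):
        return markdown  # clean text: no layout, no scripts to execute
    return html          # full page: analytics, ads, and paywall all fire

print(respond("Mozilla/5.0 (Windows NT 10.0)", "<html>...</html>", "# Article"))
print(respond("PerplexityBot/1.0", "<html>...</html>", "# Article"))
```

The branch itself is trivial; the point is which side of it a growing share of traffic lands on.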
Why this matters beyond the technical detail
Publishers lose an estimated $2 billion annually to AI bot traffic, according to IAB Tech Lab.

That figure reflects revenue that should have been generated from programmatic ads or affiliate clicks but wasn’t, because the bots visiting publisher pages aren’t triggering any of the infrastructure that generates it.
That number was already growing.
Personal AI agent traffic grew 15x in 2025 alone (Cloudflare).
51% of internet traffic is now automated, projected to reach 90% by 2030 (IAB Tech Lab).
As AI assistants become the default way people find information, and as those assistants increasingly visit publisher content to form their answers, the proportion of publisher traffic that executes no JavaScript will grow with it.
To the point where maybe you don’t have much human traffic left… but a shitload of AI agent traffic.

WebMCP & Markdown for Agents don’t cause this problem.
But they do accelerate it.
They signal that the industry is building towards an agentic web where clean, structured content access is the standard,
and where the JavaScript-dependent commercial infrastructure that publishers rely on is,
at best, an afterthought.
There’s a version of the industry’s response to this that involves trying to make JavaScript work for agents too.
That path is technically difficult, commercially hard to justify, and probably not worth pursuing.
Agents are designed to retrieve information efficiently.
Adding back the friction of experiences built for human eyes would just cost LLM providers millions more a day.
What the response actually looks like
The more productive question isn’t how to make the existing infrastructure work for agents.
It’s how to build infrastructure that operates in the layer agents do use.
Agents receive HTML today; increasingly, they will receive Markdown.
They parse text content.
They read and process what’s there before any JavaScript would have fired.
If a publisher’s commercial opportunity with agent traffic is going to exist at all,
it has to live in that layer.
Not in the JavaScript execution layer, which agents bypass.
Not in display ads, which agents don’t render.
In the text itself.
Contextually relevant, dynamically inserted, present in the content the agent reads.
This is the architecture that makes sense for an agentic web right now: not a JavaScript tag running after the fact, but content-layer monetisation running in real time, before the agent processes the page.
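As a minimal sketch of what that could look like (the phrase, URL, and `monetise` helper are all hypothetical): a relevant partner link is woven into the Markdown before it is served, so it lives in the one layer the agent actually reads.

```python
import re

# Content-layer monetisation sketch: insert a contextually relevant
# affiliate link into the text itself, before the page is served.
PARTNER_LINKS = {
    "noise-cancelling headphones": "https://example.com/headphones?aff=pub123",
}

def monetise(markdown: str) -> str:
    for phrase, url in PARTNER_LINKS.items():
        # Link only the first occurrence; the lookbehind skips phrases
        # that are already inside a Markdown link.
        pattern = re.compile(rf"(?<!\[)\b{re.escape(phrase)}\b")
        markdown = pattern.sub(f"[{phrase}]({url})", markdown, count=1)
    return markdown

article = "Our pick for noise-cancelling headphones this year is unchanged."
print(monetise(article))
```

Because the link is plain Markdown, it survives exactly the pipeline described above: an HTML-to-Markdown conversion keeps it, and an agent that only parses text still sees it.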
Google and Cloudflare aren’t the problem here.
They’re building efficient infrastructure for a world that’s already arriving.
The question for publishers is whether their monetisation strategy is built for the world that’s already arriving too, or for the one that’s leaving.
Those aren’t the same world.
And the protocols launched this week made that clearer than anything has in a while.