
Google Just Changed How AI Interacts With Websites. Here Is What Every Kenyan Developer and Business Owner Needs to Know.

For the past two years, AI agents browsing the web have been doing something deeply inefficient. When an AI assistant tries to book a flight, fill a form, or add a product to a cart on your behalf, it does not understand your website the way a human does. It looks at it like a photograph — analysing pixels, guessing which button to click, scraping raw HTML, and burning through hundreds or thousands of tokens just to figure out where the search bar is.

If a button moves five pixels between deployments, the agent breaks. If you change your CSS class names, the agent breaks. If your checkout flow has a dynamic step, the agent breaks.

This is the messy, expensive, unreliable reality of how AI has been interacting with the web until now. Google wants to change that — and what they shipped earlier this month has implications that reach well beyond Silicon Valley, including for developers and businesses building on the web in Kenya.

The Old Way: AI as a Confused Tourist

To understand why WebMCP matters, you need a clear picture of the problem it is solving.

When an AI agent visits a website today, it has essentially three tools available. It can scrape the raw HTML and try to parse meaning from the structure. It can take a screenshot and send it to a multimodal vision model to interpret visually. Or it can try to manipulate the DOM directly — finding elements by their IDs, classes, or positions and clicking them programmatically.

All three approaches share the same fundamental weakness: the agent is guessing. It is inferring intent from presentation. A button labelled "Continue" could mean "proceed to payment" or "dismiss this popup" — the agent has to figure that out from context, and it frequently gets it wrong.

The result is agents that are slow, expensive to run, and brittle. A single UI change on your website can break an automation that was working perfectly the day before. And for every task completed successfully, there are dozens of token-burning attempts that fail silently or produce the wrong outcome.

This is the problem that has limited browser-based AI agents from becoming genuinely useful at scale. Until now, the gap between what AI agents could theoretically do on the web and what they could reliably do in practice has been enormous.

What WebMCP Is

On February 10, 2026, the Google Chrome team launched WebMCP — the Web Model Context Protocol — as an early preview in Chrome 146 Canary. It was developed jointly by engineers at Google and Microsoft and is being incubated through the W3C's Web Machine Learning community group, which means this is not a Google-only experiment. It is the beginning of a proposed web standard with cross-industry backing.

The core idea is straightforward: instead of making AI agents guess how your website works, WebMCP lets your website tell them directly.

It works through a new browser API called `navigator.modelContext`. Using this API, your website publishes what Google calls a "Tool Contract" — a structured list of things the agent is allowed to do and exactly how to do them. Instead of an agent trying to visually locate a booking form and click through it field by field, your website simply exposes a function like `bookFlight(destination, date, passengers)`. The agent calls that function directly. No screenshot analysis. No DOM guessing. No pixel counting.
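To make that concrete, here is a sketch of what exposing such a Tool Contract could look like. The `registerTool()` option names used below (`name`, `description`, `inputSchema`, `execute`) follow the early-preview examples, but the API is experimental and these details may change before the standard ships:

```javascript
// Hypothetical sketch of a WebMCP Tool Contract for a flight-booking site.
// The option shape is an assumption based on early-preview examples.
const bookFlightTool = {
  name: "book-flight",
  description: "Book a flight on this site for the current user.",
  inputSchema: {
    type: "object",
    properties: {
      destination: { type: "string", description: "City or airport code, e.g. NBO" },
      date: { type: "string", description: "Departure date, YYYY-MM-DD" },
      passengers: { type: "integer", minimum: 1 }
    },
    required: ["destination", "date", "passengers"]
  },
  // The handler calls the same internal logic your own UI uses,
  // so the agent never touches the DOM at all.
  async execute({ destination, date, passengers }) {
    const booking = { destination, date, passengers, status: "confirmed" };
    return { content: [{ type: "text", text: JSON.stringify(booking) }] };
  }
};

// Register only where the API actually exists
// (Chrome 146 Canary behind the "WebMCP for testing" flag).
if (typeof navigator !== "undefined" && "modelContext" in navigator) {
  navigator.modelContext.registerTool(bookFlightTool);
}
```

The key point is not the exact syntax but the shape: a machine-readable name, a description the model can reason about, a typed parameter schema, and a handler that runs your existing business logic.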

Google's André Cipriani Bandarra, who led the announcement, put it this way: by defining these tools, you tell agents how and where to interact with your site — whether it is booking a flight, filing a support ticket, or navigating complex data. The direct communication channel eliminates ambiguity and allows for faster, more robust agent workflows.

The early numbers reflect that. Benchmarks from the preview show approximately a 67% reduction in computational overhead compared to traditional visual agent-browser interactions, with task accuracy sitting around 98%. Both figures need broader real-world validation, but the directional improvement is significant.

The Two APIs: Declarative and Imperative

WebMCP gives developers two ways to make their sites agent-ready, depending on how complex their workflows are.

The Declarative API is the simpler option. It works by adding new attributes directly to standard HTML forms. If your website already has a well-structured form — a contact form, a search bar, a booking widget — you can expose it to agents with minimal additional code. This is the entry point for most websites, and it requires no new backend infrastructure.
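As a sketch of the idea only — the attribute names below are illustrative placeholders, not the real spec, and the actual declarative attributes are documented in the Early Preview Program — an existing search form might be annotated roughly like this:

```html
<!-- Illustrative placeholders only: "tool-name" and "tool-description"
     are NOT the actual WebMCP attributes. The point is the shape:
     the form you already have, plus a machine-readable description
     of what it does. -->
<form action="/search" method="get"
      tool-name="search-flights"
      tool-description="Search available flights by destination and date">
  <input name="destination" type="text" required>
  <input name="date" type="date" required>
  <button type="submit">Search</button>
</form>
```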

The Imperative API is for more complex, dynamic interactions. It uses JavaScript's `navigator.modelContext.registerTool()` method to define richer tool schemas that can handle multi-step workflows, conditional logic, and dynamic state. If you are building something like a multi-step checkout, a product configurator, or a support ticket system with branching logic, this is the API you would use.
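A hedged sketch of an imperative tool with branching logic, again assuming the `registerTool({ name, description, inputSchema, execute })` shape seen in the early preview (field names may change before release):

```javascript
// Hypothetical support-ticket tool. The conditional routing happens
// inside the handler, so the agent never needs to understand the
// branching UI that a human would click through.
const fileTicketTool = {
  name: "file-support-ticket",
  description: "File a support ticket. Billing issues go to the billing queue.",
  inputSchema: {
    type: "object",
    properties: {
      category: { type: "string", enum: ["billing", "technical", "other"] },
      summary: { type: "string" }
    },
    required: ["category", "summary"]
  },
  async execute({ category, summary }) {
    // Conditional step: dynamic state the agent is shielded from.
    const queue = category === "billing" ? "billing-queue" : "general-queue";
    return { content: [{ type: "text", text: `Ticket filed in ${queue}: ${summary}` }] };
  }
};

if (typeof navigator !== "undefined" && "modelContext" in navigator) {
  navigator.modelContext.registerTool(fileTicketTool);
}
```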

Both APIs are built with security and user consent in mind. The browser acts as a secure proxy between the agent and your site, and sensitive actions require explicit user confirmation before execution. There is also a `clearContext()` method to wipe shared session data, preventing agents from retaining information across sessions without consent.
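A minimal sketch of the session-hygiene side, assuming the `clearContext()` method named in the preview behaves as described (exact semantics may change):

```javascript
// Hypothetical: wipe any context shared with agents when the user
// signs out, so nothing carries over into the next session.
function onSignOut() {
  if (typeof navigator !== "undefined" && "modelContext" in navigator) {
    navigator.modelContext.clearContext();
  }
}
```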

The specification is explicit that WebMCP is designed for cooperative, human-in-the-loop workflows — not silent, unsupervised automation. The user is present. The agent acts on their behalf with their knowledge.

This Is Not Anthropic's MCP — But They Work Together

If you have been following AI infrastructure news, you will notice the name overlap. Anthropic's Model Context Protocol — MCP — has been gaining significant adoption among developers building server-side AI integrations. The names are similar and the goals are related, but they solve different problems.

Anthropic's MCP operates on the server side. It connects AI platforms to back-end services through API integrations — no browser required. It is designed for automation that happens without a human watching, such as an AI agent querying a database, calling an external API, or managing files on a server.

WebMCP operates on the client side, inside the browser tab. It is designed for situations where the user is present and the interaction benefits from the shared visual context of an active browser session. Think of it this way: Anthropic's MCP is for when no human is watching. WebMCP is for when the user is right there, ready to step in if something goes wrong.

The two standards do not compete — they complement each other. A travel company, for example, might run an Anthropic MCP server for direct API integrations with ChatGPT and Claude, while simultaneously implementing WebMCP on their consumer-facing booking page so that browser-based agents can interact with the checkout flow in the context of the user's live session.

The Cloudflare Connection: Reading and Acting

We covered Cloudflare's Markdown endpoint for AI agents in a previous piece — a feature that lets AI systems read your website's content cleanly, without parsing HTML noise, by exposing a structured text version of any page.

WebMCP completes the other half of that picture.

Cloudflare's Markdown endpoint solves the read problem: how do AI agents consume your content accurately and efficiently? WebMCP solves the action problem: how do AI agents do things on your website accurately and efficiently?

Together, these two developments represent the infrastructure layer of what is increasingly being called the agentic web — a version of the internet designed not just for human visitors, but for AI agents acting on their behalf. The read layer and the action layer, shipped within weeks of each other, by two of the most widely used web infrastructure companies in the world.

This is not a coincidence. It is a signal.

What This Means for Kenyan Developers and Businesses

Here is where the story gets practically important for anyone building on the web in Kenya.

The shift to agentic web interactions is going to create a new axis of competitive advantage — and the gap between websites that are agent-ready and those that are not will become as significant as the gap between websites that had mobile-responsive design and those that did not. That transition took several years to play out. This one will move faster.

For developers, WebMCP is worth exploring now, not later. The early preview program gives you access to documentation and demos before the broader rollout. Joining now means you understand the standard before your clients start asking about it. Given that WebMCP reuses existing front-end JavaScript rather than requiring new backend infrastructure, the implementation cost for a well-structured site is lower than it might appear. You are not rebuilding your site — you are adding a structured description of what it already does.

For Kenyan startups and businesses with transactional websites — e-commerce, booking platforms, fintech products, SaaS tools — the case for paying attention is direct. As AI assistants become more capable and more widely used, users will increasingly ask their agents to complete tasks on their behalf rather than navigating websites manually. A Kenyan e-commerce platform that has implemented WebMCP will be one that AI assistants can use reliably on behalf of their users. One that has not will be one that agents fail on, abandon, or route around.

The use cases Google has highlighted at launch — booking, e-commerce, and customer support — are precisely the categories where Kenyan digital businesses are growing fastest.

For everyone else, the honest answer is that WebMCP is still an early preview and formal browser support across all major browsers is expected sometime in mid-to-late 2026, with Google I/O flagged as a probable venue for a broader announcement. You do not need to implement anything today. But you do need to understand what is coming and factor it into how you think about your web presence over the next 12 to 18 months.

How to Access the Preview

If you are a developer who wants to start testing now, here is how to get access:

WebMCP is currently available in Chrome 146 Canary behind the "WebMCP for testing" flag at chrome://flags. To access the full documentation and demos, you need to join Google's Early Preview Program (EPP) via the Chrome for Developers blog at developer.chrome.com/blog/webmcp-epp.

The EPP gives you early access to Chrome 146 features, direct feedback channels with the team, and the ability to test how different large language models interpret your tool descriptions before the standard goes public. Given that vague tool descriptions can cause models to produce incorrect outputs, the testing period is genuinely valuable — not just a formality.

The Bigger Picture

The USB-C analogy that Chrome's Khushal Sagar has used to describe WebMCP is worth sitting with. USB-C did not just make one type of device faster — it created a single, standardised interface that any device could use, replacing a fragmented landscape of proprietary connectors. WebMCP is attempting the same thing for AI agent interactions: one standard interface that any agent can use to interact with any website, replacing the current tangle of bespoke scraping strategies and fragile automation scripts.

Whether WebMCP achieves that depends on adoption — by browser vendors beyond Chrome and by web developers who choose to implement it. With Google and Microsoft jointly shipping the code and the W3C providing institutional backing, the foundation is solid. The next six months will tell us whether the broader ecosystem follows.

What is already clear is that the direction of travel is set. The web is becoming agentic. The infrastructure to support that — from Cloudflare's read layer to Google's action layer — is being built right now. The question for Kenyan developers and businesses is not whether this shift will happen. It is whether you will be ready when it does.
