#009

Dynamic UI in the Age of Agents

Research synthesis: SDUI, Generative UI, Machine Experience & the Agent-Native Web

Your product's frontend is about to become optional. AI agents already browse the web, compare prices, and execute tasks without touching a pixel you designed. When users chose generative UI over traditional websites 90% of the time, the message was clear: the interface you spent months building may soon be generated on the fly and thrown away after a single session.

From Server-Driven to AI-Generated

Server-Driven UI (SDUI) means the server controls what components render on the client. Instead of shipping a fixed app, the server sends a layout blueprint. Airbnb's Ghost Platform is the gold standard: one backend controls layout across web, iOS, and Android.
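The mechanics fit in a few lines: the server ships a declarative blueprint, and a thin client maps component types to renderers. A minimal sketch (the component names and fields are hypothetical, not Airbnb's actual schema):

```python
# Server side: a declarative layout blueprint, not compiled UI code.
layout = {
    "screen": "listing_detail",
    "components": [
        {"type": "hero_image", "props": {"url": "https://example.com/a.jpg"}},
        {"type": "title", "props": {"text": "Cozy loft in Lisbon"}},
        {"type": "price_row", "props": {"amount": 120, "currency": "EUR"}},
    ],
}

# Client side: a registry mapping component types to render functions.
# Shipping a new layout needs no app release, only a new blueprint.
def render_hero_image(props):
    return f"<img src='{props['url']}'>"

def render_title(props):
    return f"<h1>{props['text']}</h1>"

def render_price_row(props):
    return f"<p>{props['amount']} {props['currency']}/night</p>"

REGISTRY = {
    "hero_image": render_hero_image,
    "title": render_title,
    "price_row": render_price_row,
}

def render(layout):
    # Unknown component types are skipped, so older clients degrade gracefully.
    return "\n".join(
        REGISTRY[c["type"]](c["props"])
        for c in layout["components"]
        if c["type"] in REGISTRY
    )
```

The key property is that the payload is data, not code: the same blueprint can drive a web, iOS, or Android registry.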

AI did not replace SDUI. It supercharged it. Traditional SDUI provided the delivery mechanism (server decides what to show). AI adds the decision mechanism (AI decides what should be shown).

Before AI, the server sent layout A or B based on simple rules: region, device type, A/B test group. Two options, maybe ten. With AI, the server sends a unique layout per user based on real-time behavior, purchase history, time of day, cognitive load, and session context. No two users see the same interface.

SDUI was always heading here. AI made the "audience of one" economically viable. You generate the right layout on the fly instead of designing ten and picking the best.

The Three Eras

Three eras shaped this evolution. Static UI gave everyone one interface: build it once, ship it, let users adapt. SDUI moved the decision to the server, letting Airbnb, Netflix, and Spotify push layout changes without app releases. Generative UI completes the logic: Google's A2UI lets agents exchange whole UIs, Vercel's v0 generates React code from natural language, and when researchers found users chose GenUI over traditional websites 90% of the time, the conclusion was hard to avoid. UI shifts from product to byproduct, generated for a specific context and discarded after use.

AI Agents Are Already Web Consumers

AI agents browse the web today. Claude for Chrome launched in August 2025, letting users instruct Claude across browser tabs with scheduled tasks and autonomous navigation. OpenAI's Atlas Agent Mode followed in October, turning "find and compare flight prices to Tokyo" into a hands-off task: the agent opened tabs, navigated airline sites, extracted prices, and presented comparisons without a single click. By January 2026, Google Chrome Auto Browse powered autonomous scrolling, clicking, and navigation using Gemini 3 for AI Pro subscribers. GPTBot and Claude web requests now make up roughly 20% of Googlebot's traffic, making AI agents a significant and growing audience for every website.

One Web, Two Consumers

The human user clicks, scrolls, reads. Visual appeal, intuitive flow, and emotion are what drive the experience. For the AI agent, the task is to parse the content, extract what's relevant, and execute. Semantic structure and machine-readable markup matter in ways that have nothing to do with aesthetics. What satisfies a human and what satisfies an agent are different design constraints on the same page.

The agent knows intent, budget, and preferences before it lands on the page. It can act on goals immediately, and it might have access to financial info needed to close the deal. The agent holds an unfair advantage over human browsing.

New Standards for the Two-Audience Web

Three standards now govern how machines access your content. Traditional robots.txt controls search crawlers but was never designed for AI agents. The newer llms.txt is a curated Markdown overview that tells AI what content matters most, so models return accurate information about your product. MCP (the Model Context Protocol), Anthropic's open standard for connecting AI agents to tools and data, has reached 97 million monthly SDK downloads, with both OpenAI and Google adopting it.
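Per the llms.txt proposal, the file is plain Markdown served at your site root: an H1 with the site name, a blockquote summary, then sections of annotated links. A hypothetical example (the product name and URLs are invented for illustration):

```markdown
# ExampleApp

> ExampleApp is a project management tool for small teams.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): Install and first project
- [API reference](https://example.com/docs/api.md): Endpoints and authentication

## Optional

- [Changelog](https://example.com/changelog.md): Release history
```

The point is curation: instead of letting a model crawl everything and guess, you hand it the pages that actually describe your product.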

Machine Experience is the design discipline that names this shift. You design for two audiences now: people and the machines that read, interpret, and summarize your content before users see it. When someone asks Perplexity "what's the best project management tool?", your website's Machine Experience determines whether you appear in the answer.

Beyond A/B Testing

Traditional A/B testing splits users into two groups and measures which version wins. AI-driven personalization gives every user a unique interface.

Netflix's Thumbnail Engine creates 10 or more variants per title, with ML picking the one most likely to make you click. One user sees the romance angle; another sees the action. Spotify's reinforcement learning processes half a trillion events daily, changing layout based on time of day, location, and activity. Liquid Layouts shift priorities based on context: when a user opens an app while traveling, the UI foregrounds local logistics; at home, it shifts to virtual try-on mode.
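The selection loop behind a system like this is, at its simplest, a multi-armed bandit: show variants, record clicks, shift traffic toward what works. An illustrative epsilon-greedy sketch (not Netflix's actual engine, which conditions on the individual viewer):

```python
import random

class ThumbnailBandit:
    """Epsilon-greedy selection over thumbnail variants.

    Illustrative only: production systems use contextual models
    per viewer, not a single global click-through average.
    """

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.clicks = {v: 0 for v in variants}
        self.shows = {v: 0 for v in variants}

    def choose(self):
        # Explore a random variant epsilon of the time...
        if random.random() < self.epsilon:
            return random.choice(list(self.shows))
        # ...otherwise exploit the best observed click-through rate.
        return max(self.shows, key=lambda v: self.clicks[v] / max(self.shows[v], 1))

    def record(self, variant, clicked):
        self.shows[variant] += 1
        self.clicks[variant] += int(clicked)
```

Swap the global averages for a model conditioned on the viewer and you get the "romance angle for one user, action for another" behavior described above.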

McKinsey found that companies using advanced personalization generate 40% more revenue than their peers. Machine learning models achieve 89% accuracy and can reconfigure the UI in 1.2 seconds. Julie Bornstein said there is no reason you and I have to have the same store in the digital world, and she built personalized storefronts per user. With generative UI, you no longer pre-build those stores.

The interface generates itself.

The Trust Boundary

Agents with access to your preferences, budget, and purchase history can act on your behalf. The question: how much autonomy should they have?

A human approval layer is essential, but the boundary is personal. It scales with what you can afford to lose. A billionaire sets $5M aside for AI to invest autonomously; a student caps it at $20. Reordering detergent? Let the agent handle it. Signing a contract? That needs a human. The permission layer flexes per person, per category, per risk tolerance.
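Concretely, that flexing permission layer is just a policy the agent consults before acting. A minimal sketch, with hypothetical categories and limits:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    # Per-category spend limits below which the agent may act alone.
    auto_limits: dict

    def requires_human(self, category, amount):
        """True when the action must be escalated to the user."""
        # Unknown categories default to a zero limit: always escalate.
        limit = self.auto_limits.get(category, 0)
        return amount > limit

# Same mechanism, different numbers: the boundary scales with
# what each person can afford to lose.
student = Policy(auto_limits={"household": 20, "investing": 0})
whale = Policy(auto_limits={"household": 500, "investing": 5_000_000})
```

The student's agent can reorder detergent but must ask before investing a cent; the billionaire's agent trades freely up to its $5M envelope.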

Connecting the Dots

These threads run through everything I've been studying. Bornstein's THE YES predicted personalized storefronts (#004); GenUI makes it real, the store generating itself per user. Dawar's Alexa entry showed platforms as gatekeepers for commerce (#008), but agent reality is bigger: agents become the gatekeepers for all digital interfaces. The technofeudalism framing from entry #001 applies cleanly. If the agent is the UI and platforms own the agent, platforms own the entire experience layer. The product company supplies data.

The Four UI Futures

Four UI futures coexist. Traditional UI persists for exploratory, creative, identity-expressive experiences like luxury shopping where the browsing is the point. Conversational UI uses chat and voice mediated by AI, where "find me a flight to Tokyo" replaces 47 clicks for 70% of customer journeys by 2028. Zero UI means no visual interface at all, relying on voice commands and ambient computing for 40% of banking interactions by 2026. Agentic UI has AI agents navigate interfaces on your behalf — you see no UI, and the agent is the interface.

At the extreme, the agent becomes the interface. No pixel you designed, no screen you built — just the agent consuming your data and surfacing it as image, video, conversation, or summary in whatever form fits the moment. You need an API and clean structured data. The frontend is optional.

What This Means for Nyantrace

The platform that knows the most about you across the most categories serves you best. Switching costs compound with every preference learned, every transaction completed. Nyantrace sits in the middle: middleware for AI agents means I am building the observability layer for the thing replacing traditional UI.

Fixed dashboard first, dynamic interface later. Right now, the Nyantrace play is nailing one agent framework's observability so well that switching away hurts. The longer arc: the dashboard itself becomes AI-generated, each developer seeing a different layout based on what they're debugging, which agents they're watching, what's failing. Go deep first. Embed so deeply in the platform's ecosystem that removing you hurts. Then expand.

Three Layers to Get Recommended by AI

Three layers determine whether you get recommended by AI. Being machine-readable is table stakes: structured data, semantic HTML, server-side rendering, an llms.txt file. Most companies still skip this, like not having a website in 2005. Being agent-accessible means building an MCP server so that when an agent needs to do what your product does, it calls your API directly instead of scraping your UI. Being the answer means your docs, content, and community generate enough signal that the model recommends you on merit when someone asks "what's the best tool for X?"
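The machine-readable layer is mostly mechanical. For instance, the schema.org JSON-LD block that makes a product legible to crawlers can be emitted straight from existing product data. A sketch (the field names follow schema.org's Product and Offer types; the product itself is hypothetical):

```python
import json

def product_jsonld(name, description, price, currency):
    """Build a schema.org Product block for embedding in a page
    inside a <script type="application/ld+json"> tag."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "description": description,
        "offers": {
            "@type": "Offer",
            "price": str(price),
            "priceCurrency": currency,
        },
    }, indent=2)
```

A few lines per template, and every product page becomes parseable by agents without scraping heuristics.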

For Nyantrace, I'm building infrastructure agents run on. If agents need observability to operate safely and Nyantrace is already integrated, removing me breaks the stack.

The through-line of all this: SDUI was always heading toward an audience of one, and AI made it economically viable. UI is becoming a byproduct — interfaces generated on demand and discarded after use, not designed once and maintained. The web now has two audiences, humans and agents, and GPTBot and Claude traffic already make up roughly 20% of Googlebot's. Every product must serve both. Machine Experience is the new design discipline: you are designing for the machines that read your content before humans ever see it.

Economy of scope applies to middleware too. The more agent frameworks I support at Nyantrace, the more cross-platform insight I accumulate, and the more painful I am to rip out. At 99.co I built one interface for millions. Now I would build a million interfaces for one, or let the agent consume the data and present it however serves the human best. THE YES predicted personalized storefronts. Dawar predicted AI platforms as gatekeepers. The endgame is that the product company's job shifts from designing screens to structuring data. Nyantrace sits in the middle of that shift.

Building
Nyantrace.ai

Observability and governance for AI agent systems. If you're building with agents, I'd like to talk.

nyantrace.ai →