# 🔮 Generative UI for Agentic Apps
Build apps that adapt to your users.
_Demo video: demo-generative-ui.mp4_
This repository walks through how agentic UI protocols (AG-UI, A2UI, MCP Apps) enable Generative UI patterns (Static, Declarative, Open-ended) and how to implement them using CopilotKit.
👉 Generative UI Guide (PDF) - a conceptual overview of Generative UI, focused on trade-offs, UI surfaces and how agentic UI protocols work together.
## What is Generative UI?
Generative UI is a pattern in which parts of the user interface are generated, selected, or controlled by an AI agent at runtime rather than being fully predefined by developers.
Instead of only generating text, agents can send UI state, structured UI specs, or interactive UI blocks that the frontend renders in real time. This turns UI from fixed, developer-defined screens into an interface that adapts as the agent works and as context changes.
In the CopilotKit ecosystem, Generative UI is approached in three practical patterns, implemented using different agentic UI protocols and specifications that define how agents communicate UI updates to applications:
- Static Generative UI (high control, low freedom) → AG-UI
- Declarative Generative UI (shared control) → A2UI, Open-JSON-UI
- Open-ended Generative UI (low control, high freedom) → MCP Apps / Custom UIs
AG-UI (Agent-User Interaction Protocol) serves as the bidirectional runtime interaction layer beneath these patterns, providing the agent ↔ application connection that enables Generative UI and works uniformly across A2UI, MCP Apps, Open-JSON-UI, and custom UI specifications.
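To make this concrete, here is a simplified TypeScript sketch of the event stream AG-UI carries between agent and app. The event type names follow the AG-UI spec, but the payload shapes and the handler are illustrative, not the full protocol:

```ts
// Simplified sketch of an AG-UI event stream (payloads abbreviated).
type AgUiEvent =
  | { type: "RUN_STARTED"; threadId: string; runId: string }
  | { type: "TEXT_MESSAGE_CONTENT"; messageId: string; delta: string }
  | { type: "TOOL_CALL_START"; toolCallId: string; toolCallName: string }
  | { type: "TOOL_CALL_ARGS"; toolCallId: string; delta: string }
  | { type: "STATE_DELTA"; delta: unknown[] } // JSON Patch ops against shared state
  | { type: "RUN_FINISHED"; threadId: string; runId: string };

// The frontend consumes the stream and decides what each event means for the UI:
function handleEvent(event: AgUiEvent) {
  switch (event.type) {
    case "TOOL_CALL_START":
      // Static Gen UI: mount the pre-built component registered for this tool.
      break;
    case "STATE_DELTA":
      // Declarative Gen UI: patch the shared state that UI specs bind to.
      break;
    default:
      break;
  }
}
```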
The rest of this repo walks through each pattern from most constrained to most open-ended and shows how to implement them using CopilotKit.
## The 3 Types of Generative UI
### Static Generative UI (AG-UI)
Static Generative UI means you pre-build UI components, and the agent chooses which component to show and passes it the data it needs.
This is the most controlled approach: you own the layout, styling, and interaction patterns, while the agent controls when and which UI appears.
In CopilotKit, this pattern is implemented using the `useFrontendTool` hook, which lets the application register the `get_weather` tool and define how predefined React UI is rendered across each phase of the tool's execution lifecycle.
```tsx
// Weather tool - callable tool that displays weather data in a styled card.
// Imports assume zod and the CopilotKit React package; adjust paths to your setup.
import { z } from "zod";
import { useFrontendTool } from "@copilotkitnext/react";

useFrontendTool({
  name: "get_weather",
  description: "Get current weather information for a location",
  parameters: z.object({
    location: z.string().describe("The city or location to get weather for"),
  }),
  handler: async ({ location }) => {
    await new Promise((r) => setTimeout(r, 500)); // simulate network latency
    return getMockWeather(location);
  },
  render: ({ status, args, result }) => {
    if (status === "inProgress" || status === "executing") {
      return <WeatherLoadingState location={args?.location} />;
    }
    if (status === "complete" && result) {
      const data = JSON.parse(result) as WeatherData;
      return (
        <WeatherCard
          location={data.location}
          temperature={data.temperature}
          conditions={data.conditions}
          humidity={data.humidity}
          windSpeed={data.windSpeed}
        />
      );
    }
    return <></>;
  },
});
```
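The snippet above assumes a few pieces defined elsewhere in the app: the `WeatherData` type, the `getMockWeather` helper, and the two presentational components. A hypothetical sketch of the data side, purely to make the example self-contained (the repo's actual implementations may differ):

```tsx
// Hypothetical: the repo's actual WeatherData type and mock helper may differ.
interface WeatherData {
  location: string;
  temperature: number;
  conditions: string;
  humidity: number;
  windSpeed: number;
}

function getMockWeather(location: string): WeatherData {
  return {
    location,
    temperature: 21,
    conditions: "Partly cloudy",
    humidity: 40,
    windSpeed: 12,
  };
}
```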
- Try it out: go.copilotkit.ai/gen-ui-demo
- Docs: docs.copilotkit.ai/generative-ui
- Specs hub (overview): docs.copilotkit.ai/generative-ui/specs
- Ecosystem (how specs + runtime fit): copilotkit.ai/generative-ui
### Declarative Generative UI (A2UI + Open‑JSON‑UI)
Declarative Generative UI sits between static and open-ended approaches. Here, the agent returns a structured UI description (cards, lists, forms, widgets) and the frontend renders it.
Two common declarative specifications used for Generative UI are A2UI and Open-JSON-UI.
- A2UI → a declarative Generative UI spec from Google: JSONL-based and streaming, designed for platform-agnostic rendering.
- Open‑JSON‑UI → an open standardization of OpenAI's internal declarative Generative UI schema.
Let’s first understand the basic flow of how to implement A2UI.
Instead of writing A2UI JSON by hand, you can use the A2UI Composer to generate the spec for you. Copy the output and paste it into your agent’s prompt as a reference template.
In `prompt_builder.py`, add one A2UI JSONL example so the agent learns the three message envelopes A2UI expects: `surfaceUpdate` (components), `dataModelUpdate` (state), then `beginRendering` (render signal).
```python
UI_EXAMPLES = """
---BEGIN FORM_EXAMPLE---
{"surfaceUpdate":{"surfaceId":"form-surface","components":[ ... ]}}
{"dataModelUpdate":{"surfaceId":"form-surface","path":"/","contents":[ ... ]}}
{"beginRendering":{"surfaceId":"form-surface","root":"form-column","styles":{ ... }}}
---END FORM_EXAMPLE---
"""
```
Inject `UI_EXAMPLES` into the agent instruction so it can output valid A2UI message lines when a UI is requested.
```python
# Inside the agent factory (note `self.base_url`). Import paths assume
# Google's ADK and its LiteLLM wrapper; adjust to your project layout.
from google.adk.agents import LlmAgent
from google.adk.models.lite_llm import LiteLlm

instruction = AGENT_INSTRUCTION + get_ui_prompt(self.base_url, UI_EXAMPLES)
return LlmAgent(
    model=LiteLlm(model=LITELLM_MODEL),
    name="ui_generator_agent",
    description="Generates dynamic UI via A2UI declarative JSON.",
    instruction=instruction,
    tools=[],
)
```
Final step: on the frontend, pass `createA2UIMessageRenderer(...)` into `renderActivityMessages` so CopilotKit renders streamed A2UI output as UI and forwards UI actions back to the agent.
```tsx
import { CopilotKitProvider, CopilotSidebar } from "@copilotkitnext/react";
import { createA2UIMessageRenderer } from "@copilotkit/a2ui-renderer";
import { a2uiTheme } from "../theme";

const A2UIRenderer = createA2UIMessageRenderer({ theme: a2uiTheme });

export function A2UIPage({ children }: { children: React.ReactNode }) {
  return (
    <CopilotKitProvider
      runtimeUrl="/api/copilotkit-a2ui"
      renderActivityMessages={[A2UIRenderer]} // ← hook in the A2UI renderer
    >
      {children}
      <CopilotSidebar defaultOpen labels={{ modalHeaderTitle: "A2UI Assistant" }} />
    </CopilotKitProvider>
  );
}
```
The pattern is the same for Open‑JSON‑UI. An agent can respond with an Open‑JSON‑UI payload that describes a UI “card” in JSON, and the frontend renders it.
```js
// Example (illustrative): Agent returns a declarative Open-JSON-UI–style specification
{
  type: "open-json-ui",
  spec: {
    components: [
      {
        type: "card",
        properties: {
          title: "Data Visualization",
          content: { ... }
        }
      }
    ]
  }
}
```
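To show what consuming such a spec involves, here is a minimal hand-rolled renderer sketch that walks the component list and maps each `type` to a React component. It is purely illustrative (the `UISpec` type and `Card` component are hypothetical), not the CopilotKit renderer API:

```tsx
import type { ReactNode } from "react";

// Hypothetical types mirroring the payload above.
type UIComponent = { type: string; properties: Record<string, unknown> };
type UISpec = { components: UIComponent[] };

function Card({ title, children }: { title: string; children?: ReactNode }) {
  return (
    <div className="rounded border p-4">
      <h3>{title}</h3>
      {children}
    </div>
  );
}

export function RenderSpec({ spec }: { spec: UISpec }) {
  return (
    <>
      {spec.components.map((c, i) => {
        switch (c.type) {
          case "card":
            return <Card key={i} title={String(c.properties.title)} />;
          default:
            return null; // unknown component types are skipped
        }
      })}
    </>
  );
}
```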
- Try it out: go.copilotkit.ai/gen-ui-demo
- Docs: docs.copilotkit.ai/generative-ui
- Open‑JSON‑UI Specs (CopilotKit docs): docs.copilotkit.ai/generative-ui/specs/open-json-ui
- A2UI Specs (CopilotKit docs): docs.copilotkit.ai/generative-ui/specs/a2ui
- Ecosystem (how specs + runtime fit): copilotkit.ai/generative-ui
- How AG‑UI and A2UI fit together: copilotkit.ai/ag-ui-and-a2ui
### Open-ended Generative UI (MCP Apps)
Open-ended Generative UI is when the agent returns a complete UI surface (often HTML/iframes/free-form content), and the frontend mostly serves as a container to display it.
The trade-offs are higher: security/performance concerns when rendering arbitrary content, inconsistent styling, and reduced portability outside the web.
This pattern is commonly used for MCP Apps. In CopilotKit, MCP Apps support is enabled by attaching `MCPAppsMiddleware` to your agent, which allows the runtime to connect to one or more MCP Apps servers.
```ts
import { BuiltInAgent } from "@copilotkit/runtime/v2";
import { MCPAppsMiddleware } from "@ag-ui/mcp-apps-middleware";

const agent = new BuiltInAgent({
  model: "openai/gpt-4o",
  prompt: "You are a helpful assistant.",
}).use(
  new MCPAppsMiddleware({
    mcpServers: [
      {
        type: "http",
        url: "http://localhost:3108/mcp",
        serverId: "my-server", // Recommended: stable identifier
      },
    ],
  }),
);
```
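Because open-ended Generative UI renders agent-supplied content, hosts typically isolate it, which is where the security trade-offs above come from. A minimal illustrative sketch of a sandboxed container (the `OpenEndedSurface` component and `html` prop are hypothetical; in a CopilotKit app the MCP Apps integration is expected to manage this surface for you):

```tsx
// Illustrative sketch: agent-supplied HTML rendered inside a sandboxed iframe.
// The `html` prop is assumed to come from an MCP Apps UI resource.
export function OpenEndedSurface({ html }: { html: string }) {
  return (
    <iframe
      srcDoc={html}
      // Allow scripts, but block same-origin access, top navigation, etc.
      sandbox="allow-scripts"
      style={{ width: "100%", border: "none" }}
      title="Agent-generated UI"
    />
  );
}
```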
- Try it out: go.copilotkit.ai/gen-ui-demo
- Docs: docs.copilotkit.ai/generative-ui
- MCP Apps spec: docs.copilotkit.ai/generative-ui/specs/mcp-apps
- Practical guide (complete integration flow): Bring MCP Apps into your OWN app with CopilotKit & AG-UI
## Generative UI Playground
The Generative UI Playground is a hands-on environment for exploring how all three patterns work in practice and seeing how agent outputs map to UI in real time.
- Try it out: go.copilotkit.ai/gen-ui-demo
- Repo: go.copilotkit.ai/gen-ui-repo-playground

_Demo video: demo-generative-ui.mp4_
## Blogs
- Agent Factory: The new era of agentic AI: common use cases and design patterns - Microsoft Azure
- Agentic AI vs AI Agents: A Deep Dive - UI Bakery
- Introducing Agentic UI Interfaces: A Tactical Executive Guide - AKF Partners
- Introducing A2UI: An open project for agent-driven interfaces - Google Developers
- From products to systems: The agentic AI shift - UX Collective
- Generative UI: A rich, custom, visual interactive user experience for any prompt - Google Research
- The State of Agentic UI: Comparing AG-UI, MCP-UI, and A2A Protocols - CopilotKit
- The Three Types of Generative UI: Static, Declarative and Fully Generated - CopilotKit
- Generative UI Guide 2025: 15 Best Practices & Examples - Mockplus
## Videos
- Agentic AI Explained So Anyone Can Get It!
- Generative vs Agentic AI: Shaping the Future of AI Collaboration
- What is Agentic AI? An Easy Explanation For Everyone
- What is Agentic AI and How Does it Work?
## Additional Resources
## 🤝 Contributions are welcome
PRs are welcome: add examples (Static/Declarative/Open‑ended), improve explanations, or contribute assets.
Join the Discord for help and discussions, open issues and PRs on GitHub to contribute, and follow @CopilotKit for updates.
| Project | Description | Links |
|---|---|---|
| Generative UI Playground | Shows the three Gen UI patterns with runnable, end-to-end examples. | Repo · Demo |
Built something? Open a PR or share it in Discord.
For AI/LLM agents: docs.copilotkit.ai/llms.txt