OpenAI recently opened up a way to run your entire application directly inside ChatGPT. Instead of building another chatbot around your product, your product can now run directly inside the chat.
This changes how we build things. With the Apps SDK and Model Context Protocol (MCP), you can build tools that respond to natural language, trigger actions on your server, and render interactive UIs, all without leaving ChatGPT. Developers are already bringing in dashboards, mini editors, booking systems, and even design tools like Canva and Figma.
So, in this tutorial, we'll build our first ChatGPT App using the Apps SDK.
What We're Building
We're building a collaborative whiteboard that you can control with ChatGPT. You can tell ChatGPT to add shapes, place sticky notes, or change layouts, and the board will update instantly. Your teammates can join the board, see changes as they happen, and leave comments directly on the canvas.
Here's how it looks:
We'll use TLDraw for the canvas, Velt for real-time collaboration, and MCP to connect it all to ChatGPT. By the end, you'll have a functional whiteboard app that works inside ChatGPT and responds to natural language.
Let's break down how the pieces fit together.
Understanding the Foundation
A ChatGPT App has two parts: a web widget that ChatGPT renders in the chat interface, and an MCP server that exposes what your app can do.
The widget is your UI, in our case a whiteboard canvas. The MCP server defines "tools" that ChatGPT can call, like adding shapes or comments. When you say "draw a rectangle," ChatGPT reads the tool definition, calls your MCP server, and your server updates the canvas.
For the whiteboard, we're using two libraries:
- TLDraw: handles the canvas. It provides drawing tools, shapes, text, and built-in real-time board sync through @tldraw/sync. Everyone in the same room sees updates instantly.
- Velt: handles collaboration through a JavaScript SDK that provides real-time features like comments, live cursors, and presence indicators. It works through React components on the frontend and a REST API for server-side operations.
These two cover the UI. Then we need to connect it to ChatGPT, which we'll do using an MCP server.
The MCP Server
The MCP server is a Node.js app that defines tools ChatGPT can call. Each tool has a name, description, and parameters.
Here's what a tool definition looks like:
const tools = [
  {
    name: "add-item", // Tool identifier
    description: "Add an item to the list", // ChatGPT reads this to decide when to use it
    inputSchema: { // Parameters the tool accepts; leave properties empty if the tool takes no input
      type: "object",
      properties: {
        text: {
          type: "string",
          description: "The item text"
        },
        priority: {
          type: "string",
          enum: ["low", "medium", "high"],
          description: "Item priority"
        }
      },
      required: ["text"] // Which params are mandatory
    }
  }
];
The description tells ChatGPT when to use this tool. The inputSchema defines what parameters it needs. ChatGPT extracts these from your message and sends them to your server.
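For example, if you asked ChatGPT to "add buy milk as a high priority item", the tool call your handler receives would carry arguments shaped roughly like this (a sketch of the parsed arguments, not the full JSON-RPC envelope):
// What the tool call carries for the example prompt above
const exampleCall = {
  name: "add-item",        // matches the tool's `name`
  arguments: {
    text: "buy milk",      // required, extracted from your message
    priority: "high"       // optional, matched against the enum in inputSchema
  }
}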
Now, let's look at the prerequisites and then the actual implementation.
Prerequisites
You'll need a few accounts to get started:
- Velt - API key and Auth Token (handles comments and collaboration)
- tldraw - License key (needed to run the whiteboard canvas inside ChatGPT)
- ngrok - Exposes your local server to ChatGPT
- ChatGPT Plus - Required for custom apps
Clone the repo and install dependencies:
git clone https://github.com/Studio1HQ/velt-app-examples
cd velt-app-examples
pnpm install
cd syncboard_server
pnpm install
cd ..
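Before running anything, add your keys to the environment. Here's a minimal sketch of the variables this tutorial references later; the exact file names and variable list may differ, so check the repo's README or example env files:
# .env in the project root (Vite variables used by the frontend widget)
VITE_VELT_API_KEY=your-velt-api-key
VITE_TLDRAW_LICENSE_KEY=your-tldraw-license-key
VITE_TLDRAW_ROOM_ID=my-room-abc

# .env in syncboard_server/ (server-side Velt credentials; names assumed)
VELT_AUTH_TOKEN=your-velt-auth-token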
Chrome version 142+ users: you may need to disable the local network access check so ChatGPT can load your UI Widgets:
- Open chrome://flags/
- Search for local-network-access-check
- Set it to Disabled and restart Chrome
Now, we'll first build the UI part (the widget) that shows up inside ChatGPT, and then we'll build the MCP server for it.
Building the Whiteboard
The code has two parts: the frontend (what users see) and the backend (what connects to ChatGPT).
src/syncboard/           # Frontend whiteboard
├── syncboard.jsx        # Canvas and Velt components
├── mockUsers.js         # Test users (Bob & Alice)
└── index.jsx            # Entry point

syncboard_server/        # Backend MCP server
└── src/
    ├── server.ts        # Tool definitions
    └── velt/            # Comment handlers
Let's start with the canvas.
Setting Up TLDraw
Open src/syncboard/syncboard.jsx. You'll find the canvas setup and the collaboration logic here.
Start with tldraw's canvas:
import { Tldraw } from 'tldraw'
import { useSyncDemo } from '@tldraw/sync'
import 'tldraw/tldraw.css'

function SyncboardCanvas() {
  const store = useSyncDemo({
    roomId: import.meta.env.VITE_TLDRAW_ROOM_ID // any string works here, e.g. "my-room-abc"
  })

  return (
    <div style={{ height: '100vh' }}>
      <Tldraw
        store={store}
        licenseKey={import.meta.env.VITE_TLDRAW_LICENSE_KEY}
      />
    </div>
  )
}
The useSyncDemo hook creates a synced store connected to your room ID. Everyone in the same room sees the same canvas, and any updates show up for everyone right away.
Adding Velt
Now add collaboration. Wrap the canvas with Velt's provider:
// In syncboard.jsx
import { VeltProvider } from '@veltdev/react'
import { Tldraw } from 'tldraw'
import { useSyncDemo } from '@tldraw/sync'
import 'tldraw/tldraw.css'

function SyncboardCanvas() {
  const store = useSyncDemo({
    roomId: import.meta.env.VITE_TLDRAW_ROOM_ID
  })
  // ...canvas markup from the previous snippet
}

export default function Syncboard() {
  return (
    <VeltProvider apiKey={import.meta.env.VITE_VELT_API_KEY}>
      <SyncboardCanvas />
    </VeltProvider>
  )
}
Inside syncboard.jsx, add Velt's components:
// In syncboard.jsx
import { useEffect, useState } from 'react'
import {
  VeltComments,
  VeltCommentsSidebar,
  VeltPresence,
  VeltCursor,
  VeltCommentTool,
  VeltSidebarButton,
  useVeltClient
} from '@veltdev/react'

function SyncboardCanvas() {
  const { client } = useVeltClient()
  const [veltReady, setVeltReady] = useState(false)
  // currentUser, store, and handleMount come from the rest of the component
  // (the user switcher and the tldraw setup shown elsewhere in this file)

  // Initialize Velt: identify the current user and set the shared document
  useEffect(() => {
    const initializeVelt = async () => {
      if (!client || veltReady) return
      await client.identify(currentUser, { forceReset: true })
      await client.setDocument("syncboard-whiteboard", {
        documentName: "Syncboard Collaborative Whiteboard"
      })
      setVeltReady(true)
    }
    initializeVelt()
  }, [client, veltReady]) // re-runs after switchUser resets veltReady

  return (
    <>
      {/* Top bar with collaboration controls */}
      <div className="syncboard-topbar">
        {veltReady && <VeltCommentTool />}
        {veltReady && <VeltSidebarButton />}
        {veltReady && <VeltPresence />}
      </div>

      {/* Live cursors */}
      {veltReady && <VeltCursor />}

      {/* The canvas */}
      <Tldraw
        store={store}
        onMount={handleMount}
        licenseKey={import.meta.env.VITE_TLDRAW_LICENSE_KEY}
      />

      {/* Comment overlays and sidebar */}
      {veltReady && <VeltComments />}
      {veltReady && <VeltCommentsSidebar />}
    </>
  )
}
The veltReady state waits for Velt to initialize before rendering components. VeltCommentTool lets users add comments by clicking the canvas. VeltPresence shows who's online. VeltCursor displays live mouse pointers. VeltComments renders comment bubbles on the canvas.
One more step: whitelist ChatGPT's domains in your Velt Console. Go to Configurations and add:
- *.oaiusercontent.com
- https://chatgpt.com
User Switching
The whiteboard needs user context for collaboration features. When someone adds a comment, Velt shows their name and avatar. When multiple people work together, the presence indicators show who's online.
For local testing, we use mock users defined in mockUsers.js:
export const MOCK_USERS = [
  {
    userId: 'bob',
    name: 'Bob Smith',
    email: 'bob@example.com',
    photoUrl: 'https://i.pravatar.cc/150?img=12'
  },
  {
    userId: 'alice',
    name: 'Alice Cooper',
    email: 'alice@example.com',
    photoUrl: 'https://i.pravatar.cc/150?img=5'
  }
]
These let you simulate multiple team members without creating real accounts.
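The switcher code in the next snippet also relies on a getDefaultUser() helper exported from mockUsers.js. Here's a minimal sketch of what it might look like; the repo's actual implementation may differ, for example by persisting the last selection:
// mockUsers.js — helper the user switcher uses to pick the initial user
export function getDefaultUser() {
  return MOCK_USERS[0] // default to Bob
}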
The UI includes a user switcher in the top bar. It shows your current avatar and name. Click it to see the full user list:
// In syncboard.jsx
const [currentUser, setCurrentUser] = useState(getDefaultUser());
const [showUserMenu, setShowUserMenu] = useState(false);

// User switcher button
<button
  className="current-user-button"
  onClick={() => setShowUserMenu(!showUserMenu)}
>
  <img src={currentUser.photoUrl} alt={currentUser.name} />
  <span>{currentUser.name}</span>
  <span className="dropdown-arrow">▼</span>
</button>
When you switch users, the app signs out the current Velt session and reinitializes with the new user:
const switchUser = async (newUser) => {
  if (!client || newUser.userId === currentUser.userId) return;
  await client.signOutUser();
  setCurrentUser(newUser);
  setVeltReady(false); // Trigger re-initialization
};
This updates your Velt identity. Any comments you add will now show Bob's or Alice's avatar, and the presence indicators will update as well. If you have multiple tabs open, they'll all reflect the change.
The frontend is done. Now, let's build a server and connect to ChatGPT.
Building the MCP Server
The frontend is built. Now we need the server that turns ChatGPT's commands into canvas actions.
Open syncboard_server/src/server.ts. The server exposes tools that ChatGPT can call. Each tool describes what it does and what parameters it needs.
Before we define tools, we need validation. ChatGPT sends natural language that gets parsed into parameters. We need to ensure those parameters are valid before using them. We use two files for this: src/schemas/syncboard-schemas.ts for tool definitions (what ChatGPT sees) and src/parsers/syncboard-parsers.ts for validation (what the server enforces).
In syncboard_server/src/schemas/syncboard-schemas.ts:
export const syncboardCanvasSchema = {
  type: "object",
  properties: {
    action: {
      type: "string",
      enum: ["add-sticky", "add-rectangle", "add-ellipse", "add-arrow", "add-text"],
      description: "The type of shape to add"
    },
    content: { type: "string", description: "Text for sticky notes" },
    x: { type: "number", description: "X coordinate (optional)" },
    y: { type: "number", description: "Y coordinate (optional)" },
    color: { type: "string", description: "Shape color" }
  },
  required: ["action"]
} as const;
This schema tells ChatGPT what parameters exist and what types they should be. ChatGPT reads the description fields to understand when and how to use each parameter.
In syncboard_server/src/syncboard-parsers.ts:
import { z } from "zod";

export const syncboardCanvasParser = z.object({
  action: z.enum(["add-sticky", "add-rectangle", "add-ellipse", "add-arrow", "add-text"]),
  content: z.string().optional(),
  x: z.number().optional(),
  y: z.number().optional(),
  color: z.string().optional(),
});

export type SyncboardCanvasInput = z.infer<typeof syncboardCanvasParser>;
This parser uses Zod to validate incoming data at runtime. If ChatGPT sends invalid data (like a string for the x coordinate), Zod catches it before it reaches your canvas code.
In short: the schema is for ChatGPT and the parser is for your MCP server.
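If you'd rather not let a validation error throw, Zod's safeParse returns a result object you can inspect instead. A quick usage sketch:
// Example: inspecting bad input instead of throwing
const result = syncboardCanvasParser.safeParse({ action: "add-rectangle", x: "left" });

if (!result.success) {
  // result.error.issues lists what failed, e.g. x expected a number but received a string
  console.error(result.error.issues);
} else {
  // result.data is fully typed as SyncboardCanvasInput
  console.log(result.data.action);
}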
Defining Tools
Now we define the actual tools using those schemas:
import type { Tool } from "@modelcontextprotocol/sdk/types.js";
import { syncboardCanvasSchema, syncboardCommentSchema } from "./schemas/syncboard-schemas.js";

const syncboardTools: Tool[] = [
  {
    name: "syncboard-canvas-action",
    description: "Add shapes, sticky notes, text, or drawings to the Syncboard canvas",
    inputSchema: syncboardCanvasSchema // <-- the schema ChatGPT reads
  },
  {
    name: "another-tool-name",
    description: "Another tool's description",
    inputSchema: anotherSchema
  }
];
Register the tools so ChatGPT knows what's available:
server.setRequestHandler(
  ListToolsRequestSchema,
  async () => ({
    tools: syncboardTools // Tell ChatGPT what's available
  })
);
Handling Commands
When ChatGPT calls a tool, the server receives the tool name and parameters. We validate the data, build a response, and pass it to the frontend.
In src/server.ts:
import { syncboardCanvasParser, syncboardCommentParser } from "./syncboard-parsers.js";

server.setRequestHandler(
  CallToolRequestSchema,
  async (request: CallToolRequest) => {
    const toolName = request.params.name;
    const args = request.params.arguments;

    if (toolName === "syncboard-canvas-action") {
      // Validate with Zod parser
      const parsed = syncboardCanvasParser.parse(args);

      return {
        content: [{
          type: "text",
          text: `Added ${parsed.action} to the canvas`
        }],
        structuredContent: {
          action: parsed.action,
          content: parsed.content,
          x: parsed.x ?? 0,
          y: parsed.y ?? 0,
          color: parsed.color ?? "yellow"
        },
        _meta: widgetInvocationMeta(widget)
      };
    }
  }
);
The syncboardCanvasParser.parse(args) line validates the data. If validation fails, Zod throws an error with details about what went wrong. The structuredContent object gets passed to the frontend widget, which draws the shape. The _meta field tells ChatGPT which widget to open, and links this tool result to your widget UI.
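On the widget side, something has to turn that structuredContent payload into an actual shape. Here's a minimal sketch of such a handler, assuming the widget holds the tldraw editor instance captured in onMount and that applyCanvasAction is called with the payload; the function name and the way the payload reaches the widget are illustrative, not part of the SDK:
// Hypothetical widget-side helper: maps a structuredContent payload to a tldraw shape
import { createShapeId } from 'tldraw'

export function applyCanvasAction(editor, payload) {
  const { action, x = 0, y = 0 } = payload

  if (action === 'add-rectangle' || action === 'add-ellipse') {
    editor.createShape({
      id: createShapeId(),
      type: 'geo',
      x,
      y,
      props: {
        geo: action === 'add-rectangle' ? 'rectangle' : 'ellipse',
        w: 200,
        h: 120
        // the exact prop set (color, text, etc.) depends on your tldraw version
      }
    })
  }
  // add-sticky, add-text, and add-arrow follow the same createShape pattern
}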
Now that shapes are handled, let's add support for comments.
Adding Comment Tool
Comments work differently: they go through Velt's REST API instead of the canvas, so we use a separate parser for comment data.
In syncboard-parsers.ts:
export const syncboardCommentParser = z.object({
  commentText: z.string(),
  targetUser: z.string().optional(),
  fromUserId: z.string().optional(),
  fromUserName: z.string().optional(),
  fromUserEmail: z.string().optional(),
});
Add the comment tool to the same request handler:
if (toolName === "syncboard-add-comment") {
  const args = syncboardCommentParser.parse(request.params.arguments ?? {});

  const fromUser = {
    userId: args.fromUserId || "chatgpt-assistant",
    name: args.fromUserName || "ChatGPT Assistant",
    email: args.fromUserEmail || "assistant@chatgpt.com"
  };

  await addComment({
    commentText: args.commentText,
    fromUser,
    targetUser: args.targetUser
  });

  return {
    content: [{
      type: "text",
      text: `✅ Comment added by ${fromUser.name}`
    }]
  };
}
ChatGPT extracts the user from your prompt. "As Bob" becomes fromUserId: "bob" with Bob's name and email. The addComment helper (in velt/comments.ts) calls Velt's API with authentication and user context. Velt stores the comment, broadcasts it to connected clients, and renders it with the correct avatar.
The validation layer ensures type safety. If you add more tools later, follow the same pattern: create a schema for ChatGPT, create a parser for validation, and use both in your handler.
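For example, a hypothetical clear-canvas tool would follow the same three steps (all names below are illustrative, and the new tool would also need to be added to the syncboardTools array):
// 1. Schema (what ChatGPT sees), in syncboard-schemas.ts
export const clearCanvasSchema = {
  type: "object",
  properties: {
    confirm: { type: "boolean", description: "Set to true to clear the whole board" }
  },
  required: ["confirm"]
} as const;

// 2. Parser (what the server enforces), in syncboard-parsers.ts
export const clearCanvasParser = z.object({ confirm: z.literal(true) });

// 3. Handler branch in server.ts, inside the CallTool handler
if (toolName === "syncboard-clear-canvas") {
  clearCanvasParser.parse(request.params.arguments ?? {});
  return {
    content: [{ type: "text", text: "Cleared the canvas" }],
    structuredContent: { action: "clear-canvas" },
    _meta: widgetInvocationMeta(widget)
  };
}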
The server is ready. Now, let's run everything and connect it to ChatGPT.
Testing & Connecting to ChatGPT
Now let's run everything and connect it to ChatGPT.
Running the Servers
You need three terminals running simultaneously.
Build the Frontend
# In project root [Terminal 1]
pnpm run build
This generates the widget files in the assets/ folder. Run this once, or whenever you change frontend code.
Serve the Assets
# In project root [same terminal]
pnpm run serve
Starts a static server on http://localhost:4444. This serves your widget files to ChatGPT. You should see:
Serving on http://localhost:4444
Start the MCP Server
# [Terminal 2]
cd syncboard_server
pnpm start
You should see:
Syncboard MCP server listening on http://localhost:8000
This server exposes the tools ChatGPT will call. Keep it running. Now letās expose the servers so ChatGPT can reach them.
Exposing with ngrok
ChatGPT can't access localhost, so we use ngrok to create public URLs.
# [Terminal 3]
ngrok http 8000
Copy the HTTPS URL (e.g., https://abc123.ngrok-free.app). This is your MCP server endpoint.
Connecting to ChatGPT
Open ChatGPT and go to Settings → Connectors → Enable Developer Mode.
Navigate back to Connectors and click Create.
Fill in all the details:
- Name: Syncboard
- Description: Collaborative whiteboard with shapes and comments
- MCP Server URL: https://abc123.ngrok-free.app/mcp (your MCP ngrok URL + /mcp)
- Authentication: None
- Trust this provider: ✅ Check
Click Create.
Testing It Out
Start a new chat in ChatGPT and try these commands:
- open syncboard
- Add a sticky note with text: Hello World, etc.
Your app will work like this:
Things to Watch For
Before wrapping up, here are a few small issues that can trip you up while testing. None of these are major, but they can save you a lot of time if something suddenly stops working.
Chrome network access (Chrome 142+).
If the widget doesn't load inside ChatGPT, Chrome might be blocking local network access. Visit chrome://flags, search for local-network-access-check, disable it, and restart Chrome.
Environment variables.
If values like VITE_TLDRAW_LICENSE_KEY or VELT_AUTH_TOKEN show up as undefined, it's usually just the wrong .env file or a missing reload. For quick debugging, hardcoding them temporarily also works.
Velt auth and allowed domains.
If comments don't appear, it often means the Velt auth token is incorrect or the required domains (chatgpt.com and *.oaiusercontent.com) aren't whitelisted.
Serving the widget correctly.
Make sure you're running the built widget with pnpm run serve. If you just run the build command, ChatGPT won't be able to render the UI.
Input schema mistakes.
If your tool receives {} instead of the expected input, the schema is usually the issue. Keep the schema simple and validate it with Zod to avoid silent failures.
What You Built & Next Steps
You've built a ChatGPT App that turns conversation into canvas actions. Ask it to draw shapes, add sticky notes, or drop comments, and teammates see the changes in real time. The whiteboard works inside ChatGPT, controlled entirely through natural language.
To extend this:
- Add custom tldraw shapes or templates for diagrams and flowcharts.
- Define new MCP tools for higher-level commands like "duplicate this section" or "summarize all comments."
- Replace mock users with real authentication so comments map to actual accounts.
- Turn on Velt's notifications, mentions, and status tags to make the board feel like a full workspace.
Resources: