Your users are complaining on Discord about cryptic errors from Cursor-generated code. Some silently abandon your library for a competitor. Sound familiar?
Coding agents are currently opaque to library maintainers. You can’t debug your customers’ prompts or see their environments until it’s too late. Still, after building libraries for a few years and spending hundreds of dollars on coding agents, I can share some tips for making them behave.
Documentation
Every library has a “Quick Start” page that we all optimize for human developers to get the library working, well, quickly. But AI agents need more than that - they need precision.
Your quickstart is now a script the LLM will execute. Any ambiguity becomes a creative exercise prone to errors. The key is scope - your guide needs a clear finish line, a concrete goal the LLM can aim for.

Bad version: “Quickstart: Install @yourlib/sdk or @yourlib/next-sdk and add API keys”
Good version: “Next.js Quickstart: 1. Install @yourlib/next-sdk. 2. Add API keys to .env.local. 3. Wrap your app’s layout.tsx with <Provider>. 4. Verify by calling the /api/test endpoint with curl.”
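Notice the concrete finish line in step 4. The verification endpoint itself can be trivial - here’s a minimal sketch of what an /api/test handler might look like in the Next.js App Router (the implementation is my assumption, not part of the quickstart above):
// app/api/test/route.ts - hypothetical verification endpoint for step 4
export async function GET() {
  // If this responds, the install and wiring worked; extend it to ping your SDK if needed.
  return Response.json({ ok: true });
}
Now “curl http://localhost:3000/api/test returns { "ok": true }” is an unambiguous success signal the agent can check on its own.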
Code Examples
LLMs follow code examples very literally. Provide complete, runnable examples for each integration step - not pseudocode or partial snippets - and you’ll be able to guide the LLM with great control.
Here’s an example from our WorkOS AuthKit integration prompt. Instead of describing how to set up middleware, we show the complete file. A short excerpt from the instructions:
Create your middleware.ts file:
// middleware.ts
import { authkitMiddleware } from '@workos-inc/authkit-nextjs';

export default authkitMiddleware();
Create the callback route:
// app/callback/route.ts
import { handleAuth } from '@workos-inc/authkit-nextjs';

export const GET = handleAuth();
As you can see:
- Complete imports (not `import { ... } from 'library'`)
- Exact file paths (`app/callback/route.ts`)
- Full working code (no `// ... rest of implementation`)
Each example is copy-pasteable and doesn’t require additional stitching.
Backwards Compatibility
LLMs have long training cycles. If your library made it into a previous training dataset, the model has learned the old API. For any missing details, it fills in methods from its training data - even if they’re deprecated.
Fix: Explicit migration instructions.
❌ DO NOT use `getUser()` - this is deprecated
✅ USE `withAuth()` instead
// Old (don't use):
const user = await getUser();
// New (correct):
const { user } = await withAuth({ ensureSignedIn: true });
Our WorkOS integration guide includes a “NEVER DO THE FOLLOWING” section that explicitly calls out deprecated patterns:
// ❌ DO NOT use old SDK patterns
import { WorkOS } from '@workos-inc/node'; // Incorrect for Next.js
This prevents the LLM from mixing training data with current docs.
Single Responsibility Principle
LLM quirks are not the only issue, though. A lot of the time, LLM failures can be traced back to design problems. For example, if your SDK provides three ways to authenticate, the LLM will pick one at random - or worse, mix them, as in the sketch below.
You’d get confused if you saw multiple seemingly equal approaches in the docs, right? So will the agent.
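Here’s what that mixing might look like, using the @yourlib/sdk placeholder from earlier (the entry points are hypothetical - each pattern is fine on its own; the combination is the bug):
// Hypothetical SDK exposing three auth entry points
import { createClient, withAuth, getSession } from '@yourlib/sdk';

const client = createClient({ apiKey: process.env.API_KEY }); // pattern A: explicit client
const { user } = await withAuth();                            // pattern B: request-scoped helper
const session = await getSession();                           // pattern C: a third source of truth for "who is signed in"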
Audit your public API:
- Multiple initialization paths? Pick one canonical way.
- Three different auth patterns? Document one as “recommended.”
- Aliases for the same function? Deprecate them.
LLMs read your codebase, including `node_modules`. Use JSDoc comments to guide them:
/**
* @deprecated Use createClient() instead
* This method is kept for backwards compatibility only
*/
export function initClient() { ... }
/**
* Recommended way to initialize the SDK
* @example
* const client = createClient({ apiKey: process.env.API_KEY });
*/
export function createClient() { ... }
Reduce the Impact Area
SDKs that require a nearly full refactor of the codebase naturally give the LLM more surface area to make mistakes.
High impact (risky): “Wrap your entire app, convert all pages to server components, add middleware to every route, restructure your auth layer”
Low impact (safer): “Add one route handler at /api/auth/callback and wrap your app in <Provider>”
The smaller the changeset, the fewer errors the LLM will make.
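For shape, here’s what the low-impact version might look like with the hypothetical @yourlib/sdk from earlier (handleCallback and Provider are assumed names, not a real API):
// app/api/auth/callback/route.ts - the single new file
import { handleCallback } from '@yourlib/sdk';
export const GET = handleCallback();

// app/layout.tsx - the only existing file you touch
import { Provider } from '@yourlib/sdk';

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body>
        <Provider>{children}</Provider>
      </body>
    </html>
  );
}
Two files, one of them new - there’s very little room for the agent to improvise.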

Testability
Even after you’ve done all of the steps above, LLMs will still hallucinate. Design your SDK so errors surface quickly and clearly.
Type Restrictions
Bad:
function setMode(mode: string) { ... }
// LLM might generate: setMode("develpment") // typo
Good:
type Mode = 'development' | 'production' | 'staging';
function setMode(mode: Mode) { ... }
// TypeScript catches typos before runtime
Avoid Nullable Types
Bad:
interface Config {
apiKey?: string;
endpoint?: string;
timeout?: number;
}
// LLM might omit required fields
Good:
interface Config {
apiKey: string;
endpoint: string;
timeout?: number; // Only truly optional fields are optional
}
// TypeScript forces required fields
Fail Fast with Clear Errors
Bad:
function initialize(config) {
this.config = config;
// Fails later during API call with generic "unauthorized"
}
Good:
function initialize(config: Config) {
if (!config.apiKey) {
throw new Error(
'Missing WORKOS_API_KEY. Add it to your .env.local file.\n' +
'See: https://docs.workos.com/quickstart#environment-variables'
);
}
if (!config.apiKey.startsWith('sk_')) {
throw new Error(
'Invalid API key format. Expected key starting with "sk_"\n' +
'Find your API key at: https://dashboard.workos.com/api-keys'
);
}
this.config = config;
}
This is partly why TypeScript and other type-safe languages became so popular for AI-assisted coding - they provide the most natural and fastest feedback mechanism for coding agents.
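To make the loop concrete: reusing the Mode example from above, the typo’d call fails at compile time with a message the agent can act on directly (exact compiler wording below is approximate):
type Mode = 'development' | 'production' | 'staging';
declare function setMode(mode: Mode): void;

setMode('develpment');
// error TS2345: Argument of type '"develpment"' is not assignable to parameter of type 'Mode'.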
Runtime Validation
Consider adding an example protected page as part of your quickstart - something both the user and the LLM can verify after setup:
// app/dashboard/page.tsx
import { withAuth } from '@yourlib/sdk';
export default async function DashboardPage() {
const { user } = await withAuth({ ensureSignedIn: true });
return (
<div>
<h1>Protected Dashboard</h1>
<p>Welcome, {user.email}</p>
<p>✅ Authentication is working correctly!</p>
</div>
);
}
And include clear verification steps in your quickstart:
Verify your setup:
- Run `npm run dev`
- Navigate to http://localhost:3000/dashboard
- You should be redirected to sign in
- After signing in, you should see your email displayed
This gives both the LLM and the user a concrete success indicator - if the protected page works, the integration is correct, and anyone can check it in seconds.