A lot of frustration with AI comes from expecting it to behave like a genius that invents ideas from nothing. But that’s not what it does. AI works by transforming what you give it - turning a small seed of input into a larger, structured output. It generates meta-layers: summaries, specs, diagrams, explanations, code - all built on the context you supply.
That’s why in AI, everything is meta. The model doesn’t generate ideas from nowhere - it only generates content from the context you give it (starting with your first prompt). Once you appreciate that, you stop trying to make AI “be creative” in the human sense, and start using it for what it’s actually good at: scaling, shaping, and layering context with the goal of reaching the best possible results.
What is “meta”?
The term “meta” refers to something that reflects on or refers to itself. It’s a way of stepping outside something to examine or describe it. For example, metadata is information that describes other data, and a meta-narrative is a story that comments on the structure of storytelling itself.
In this sense, AI is fundamentally self-referential. Every LLM is inherently stateless and can only generate content using the content and context we give it. It doesn’t invent from nothing; it reflects, reshapes, and extends what’s already there.
It’s output built directly on our input.
That output looks new, but it’s always rooted in what you provided. Large language models work by recognising patterns in existing text and then producing new text that follows those patterns. They don’t truly invent from scratch.
Because AI is trained on vast amounts of human-created content, it comes loaded with implicit knowledge. But it doesn’t invent - it reuses. It generates new output by combining your input with that built-in knowledge. And that process of reuse and recombination is, at its core, deeply meta.
Patterns and Remixing
At first, the idea of AI generating “content from content” might sound like a limitation. But in reality, it mirrors how human creativity works: we build by reusing, reinterpreting, and recombining what already exists.
As Newton said,
“If I have seen further it is by standing on the shoulders of giants.”
All creativity builds on prior knowledge. Students learn by imitation. Artists, researchers, and musicians evolve their work through reference and iteration. Very little is truly original — and that’s not a flaw. It’s how progress happens.
AI follows the same principle, just at scale and speed. It draws from a vast reservoir of human-created content to reshape your input into something useful. It’s “everything is a remix,” accelerated.
And the reality is, you don’t even need to craft the perfect prompt to make that happen. Modern LLMs are remarkably good at interpreting intent. You can ask the AI to help write the prompts. It’s a very meta approach – using AI to write the prompt for the AI – and it often works surprisingly well.
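For example, you might tell it: “Here’s what I’m trying to achieve: [your goal]. Write a detailed prompt I could give an LLM to get the best possible result.” You fill in the goal; the model handles the prompt engineering.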
Layered Iterative Meta
One of the most effective ways to use AI is through iterative prompting – creating some content, then asking the AI to build more content from it, layer by layer. You get an answer, then follow up with more detail or a different angle, gradually homing in on what you need. The skill is knowing how to design the steps that guide the LLM into producing what you want. Each step uses the previous output as meta-input for the next.
For instance, you might start by asking,
“Our API fails silently when a required field is missing — how should we handle this better?”
The AI suggests validation strategies and improved error handling. You follow up with:
“Write a spec for how the API should respond to invalid input, including status codes and message formats.”
Then:
“Generate updated endpoint code in Node.js using that spec.”
And finally:
“Write tests to cover valid and invalid requests, plus a contract test to enforce the spec.”
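To make this concrete, here’s a rough sketch of the kind of endpoint code step three might yield. Everything in it – the /users route, the field names, the error format – is hypothetical, standing in for whatever your actual spec defines:

```typescript
import express, { Request, Response } from "express";

const app = express();
app.use(express.json());

// Hypothetical endpoint following the spec from the previous step:
// fail loudly, not silently, when a required field is missing.
app.post("/users", (req: Request, res: Response) => {
  const body = req.body ?? {};
  const missing = ["name", "email"].filter((field) => !body[field]);

  if (missing.length > 0) {
    // Per the (assumed) spec: 400 Bad Request with a machine-readable error body
    return res.status(400).json({
      error: "VALIDATION_ERROR",
      message: `Missing required field(s): ${missing.join(", ")}`,
      fields: missing,
    });
  }

  // Happy path: create the user and return 201 Created
  return res.status(201).json({ name: body.name, email: body.email });
});

app.listen(3000);
```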
This kind of stepwise refinement is basically meta content feeding into more meta content. It’s essentially what a conversation with an AI is: each message builds on the last, adding context, clarifying, and zooming in on the goal.
Building Context, One Layer at a Time
A simple, real-world example is using AI to create an image: you prompt ChatGPT to generate a detailed image description from a simple concept, iterate until it’s right, then ask it to create the image. This layered meta strategy – refining and expanding content in stages – is often far more effective than expecting a perfect result in one shot.
In software development, this layering approach has been formalised as Spec-Driven Development (SDD) — where the AI works step by step, evolving an initial idea into code through structured iterations. It typically starts with a user prompt, which the AI turns into a high-level design. The user refines that design, and the AI then transforms it into a detailed specification. Finally, the spec is used to generate implementation tasks, and only then is all this context used to generate the code.
Each step — from design to spec to implementation to code — becomes a new layer of meta content, built directly on the context of the one before it. The AI’s ability to build on these layers illustrates how it doesn’t just generate — it iterates, using previous outputs as context to move forward with precision.
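As a purely illustrative sketch – the llm function below stands in for any model client and is not a real API – the pipeline amounts to each stage’s output becoming the next stage’s input:

```typescript
type Llm = (prompt: string) => Promise<string>;

// Each stage consumes the previous stage's output as its context.
async function specDrivenDevelopment(llm: Llm, idea: string): Promise<string> {
  const design = await llm(`Turn this idea into a high-level design:\n\n${idea}`);
  const spec = await llm(`Write a detailed specification for this design:\n\n${design}`);
  const tasks = await llm(`Break this spec into implementation tasks:\n\n${spec}`);
  // Only now, with all the layered context in hand, is code generated.
  return llm(`Implement these tasks, following the spec:\n\n${spec}\n\n${tasks}`);
}
```

In practice a human reviews and refines each layer before it feeds the next; the sketch only shows the data flow.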
Codebases as Meta
Much of what makes up a software project isn’t the code that runs in production — it’s the supporting meta content around it. Unit tests are meta — they describe what the code should do. Documentation? That’s meta too — it explains how the code works, why it exists, or how to use it. Even your commit messages, code comments, and architectural diagrams — they’re all layers that exist about the code, not inside it.
This is exactly the kind of material AI thrives on.
AI is remarkably good at consuming, generating, and managing these layers, because it excels at spotting patterns and context in what already exists. It can write tests based on your functional logic. It can suggest updates to documentation when the code changes. This is all meta content about your production code. When something falls out of sync — like stale docs or untested changes — it can often catch that too.
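A generated test might look something like this minimal sketch, using Node’s built-in test runner (the add function is an invented stand-in for your real logic):

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Stand-in for real production code.
function add(a: number, b: number): number {
  return a + b;
}

// The test is meta: it never ships to production;
// it describes what the production code should do.
test("add returns the sum of two numbers", () => {
  assert.equal(add(2, 3), 5);
  assert.equal(add(-1, 1), 0);
});
```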
In many ways, this is AI doing what it does best: taking one kind of content and turning it into another — layering, summarising, translating, and connecting the pieces. It’s meta all the way down. And when used well, that makes your code not just more complete, but more coherent and supported.
Leverage the Meta with Context
AI works best when you give it the right context – the better your input and supporting information, the better the result will be. In practice, this means a single-line prompt can generate a whole coherent response if it’s backed by sufficient context - implicit or supplied.
Modern AI coding tools will automatically apply that context for you. For example, Anthropic’s Claude Code assistant can scan your project with a single command (/init) and generate a condensed summary in a CLAUDE.md file. That summary is meta content: context the AI has written about your own code, which it can then reuse to understand your project without rediscovering everything from scratch each time.
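The file itself is plain markdown; a made-up example might look like:

```markdown
# CLAUDE.md

## Commands
- npm test: run the test suite
- npm run build: compile TypeScript to dist/

## Architecture
- src/api/: Express endpoints (input validation in src/api/middleware/)
- src/services/: business logic, no HTTP concerns
```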
Claude Code also supports sub-agents — specialised AI helpers for focused tasks like writing tests, refactoring, or reviewing code. When creating a sub-agent, you start by providing a simple description of what it should do — for example, “Help me write unit tests.” Claude then generates a small markdown file that defines the sub-agent, combining your prompt with the existing context of your project.
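The result is a small definition file, roughly like the following (the fields and wording here are illustrative, not copied from any real project):

```markdown
---
name: test-writer
description: Writes unit tests for new or changed code in this project
---

You are a testing specialist for this codebase. When given a source file,
write focused unit tests that cover expected behaviour and edge cases,
following the project's existing test conventions.
```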
Each sub-agent gives the AI the exact instructions and context it needs to perform a specific task — ready to be used only when required. But the real power is how easily they’re created: from a simple prompt and your existing project context, you get a fully defined, purpose-built agent with almost no extra effort.
AI Isn’t Autopilot — Even When It Acts Autonomously
Because AI is meta — building each output on top of the last — small errors can snowball. Misunderstand a spec, and the code, tests, and documentation that follow all inherit that flaw.
But that doesn’t mean humans need to manage every step. Modern agent-based systems can handle complex workflows: refining their own outputs, verifying progress, and coordinating across subtasks. They can operate with a degree of autonomy — as long as the context is sound.
That’s why the human role shifts: from operator to orchestrator. You’re not needed for every prompt or change, but you do need to stay in the loop — reviewing key outputs, correcting course when needed, and ensuring the system doesn’t drift from the goal. This “human in the loop” oversight becomes especially important when the model is working across multiple layers of output. Try to get the model to do too much in one go, and small missteps will compound — especially in meta systems, where each output becomes the next input.
Spec-driven development is a clear example of this risk. Each phase — idea, design, spec, code, tests — builds directly on the last. If the context or intent is off early, the whole chain inherits that misalignment. A human in the loop ensures coherence, quality, and alignment at every step.
In meta workflows, you control the outcome by managing the context — and without a human in the loop, small missteps in early inputs can cascade into larger failures down the line.
Conclusion
Creating vast amounts of meta content is now trivially easy with AI. Drafts, summaries, expansions, translations, analyses – all are just a prompt away. But the quality of what the AI produces still hinges on us. We humans must provide the right seeds and guide the process, ensuring the AI has quality material and clear direction to work from. Yes, in AI everything is meta — but it’s up to us to make sure it’s the right meta: the kind that serves our purpose and adds value to the world.