A company recently raised $8 million for an AI-powered legal assistant, according to industry reporting. Impressive, right? Except when you dig into the product, it’s essentially GPT with some prompt engineering and a document upload interface. The entire “AI” part is OpenAI’s API. The entire “company” is a wrapper.
This isn’t an isolated case. Most of what’s getting funded as “AI companies” right now isn’t AI at all. It’s interfaces to someone else’s AI.
Customer service chatbots that are really just GPT-5 with custom prompts. Content generation tools that are Claude with a nice editor. Analytics platforms that are essentially API calls to various models with dashboards on top. An entire ecosystem of companies whose core technology is “we call someone else’s API and make it look pretty.”
And the scale of this is massive. According to various industry analyses and reports, somewhere between 65% and 92% of AI startups launched in the past two years are primarily wrappers. Not companies training models. Not companies doing AI research. Just companies making it easier to use someone else’s AI.
This raises uncomfortable questions. Is this real innovation or are we watching a bubble inflate in real time? Will these companies exist in three years? And maybe most importantly: how is this different from all the companies that wrapped AWS services in a UI and sold them as products?
What We’re Actually Looking At
Let me be specific about what these wrappers look like in practice.
The typical pattern: A founder identifies a specific problem (legal document review, fitness coaching, HR candidate screening, whatever). They build an interface where users input their data. Behind the scenes, that data gets formatted into prompts and sent to OpenAI or Anthropic APIs. The response comes back, gets formatted nicely, and gets presented to the user as if the company had built the intelligence itself.
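The whole pattern fits in a few dozen lines. Here is a minimal sketch of a hypothetical legal-review wrapper: all names are illustrative, and the provider API call is stubbed with a fake function so the sketch runs offline (a real product would call a provider SDK at that point).

```python
# Sketch of the wrapper pattern: format input into a prompt, call someone
# else's model, dress up the response. Names are hypothetical.

SYSTEM_PROMPT = (
    "You are a legal document reviewer. Identify risky clauses "
    "and summarize them in plain English."
)

def build_prompt(document_text: str, user_question: str) -> list[dict]:
    """Format the user's data into the chat messages a provider API expects."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": f"Document:\n{document_text}\n\nQuestion: {user_question}"},
    ]

def review_document(document_text: str, user_question: str, call_model) -> str:
    """The entire 'product': format input, call the model, format output."""
    messages = build_prompt(document_text, user_question)
    raw_answer = call_model(messages)           # the only 'AI' in the stack
    return f"Analysis\n--------\n{raw_answer}"  # presentation layer

# Stand-in for the provider API so this sketch is self-contained.
def fake_model(messages):
    return f"(model response to {len(messages)} messages)"

print(review_document("Clause 4: unlimited liability...",
                      "What is risky here?", fake_model))
```

Everything proprietary here is the prompt text and the formatting. That is the point: the intelligence lives entirely on the other side of `call_model`.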
The barriers to entry are astonishingly low now. You can build an MVP in weeks using tools like LangChain or LlamaIndex to orchestrate API calls. You don’t need a research team. You don’t need GPU clusters. You need product intuition and decent engineering to make the wrapper feel seamless.
The economics are attractive too. No R&D costs for model development. No infrastructure for training. Just API costs that scale roughly with usage. A founder can launch, find product market fit, and start generating revenue before a traditional AI company even finishes recruiting their research team.
And it’s working. ProfilePicture.AI reportedly made over $2 million in its first year generating headshots using Stable Diffusion. AI email writers for Shopify stores are doing six figures monthly. Numerous meeting transcription tools, resume builders, and code documentation generators have launched and found paying customers. All wrappers. All making real money.
But here’s the catch. In March 2023, OpenAI reportedly raised API prices by up to 20% for some tiers. Companies built entirely on GPT suddenly saw their margins compress overnight. They couldn’t negotiate. They couldn’t switch easily (because all their prompts were tuned for GPT). They just had to eat the cost or pass it to customers and risk churn.
These businesses are built on foundations they don’t control. When the model providers decide to compete directly in their vertical, what protection do they have? When a new open source model emerges that’s 80% as good but runs for pennies, how fast does their competitive advantage evaporate?
The Legitimacy Question
So is this a real business or just timing the hype cycle?
The bear case is straightforward. These aren’t defensible businesses. They have no moats. Anyone can replicate them. Users are starting to notice they’re just paying markup on API calls they could make themselves. Churn rates are brutal (industry reports suggest 60-65% annual churn for some wrapper categories, nearly double typical SaaS benchmarks). When the AI hype settles, these companies disappear.
The critique that stings most: they’re not building anything that lasts. Every improvement to the underlying models happens without them. Every innovation comes from somewhere else. They’re entirely dependent on the goodwill and pricing decisions of their API providers. That’s not a technology company. That’s a reseller with extra steps.
The bull case is more nuanced. Yeah, these are wrappers. So what? Most successful SaaS companies are wrappers around something. The value isn’t in rebuilding infrastructure. The value is in solving specific problems really well.
A marketing agency doesn’t need to train their own models. They need AI that integrates with their CRM, understands their workflow, and produces content in their brand voice. A wrapper that solves that specific problem is valuable even if the underlying intelligence comes from OpenAI.
The key word here is “specific.” Generic wrappers (basic ChatGPT interfaces with minimal customization) are commodity plays with no future. Specific wrappers (AI that solves exact problems in particular verticals) can build real businesses.
I think both arguments have merit. The legitimacy comes down to value addition. If all you’re doing is saving users a trip to ChatGPT, you’re not adding value. If you’re integrating AI into workflows in ways that genuinely solve problems users can’t solve themselves, you’re building something real.
The question each wrapper company needs to answer: could my users get 80% of this value by just using ChatGPT directly? If yes, you’re in trouble.
The AWS Comparison
This feels familiar because we’ve seen it before. Huge sections of the SaaS economy are wrappers around AWS services.
Take database management tools. Many are just interfaces to RDS and DynamoDB. Take deployment platforms. Many are orchestrating EC2, Lambda, and S3 with nice UIs. Take monitoring tools. Many aggregate CloudWatch data with better visualization.
These companies built billion-dollar businesses by wrapping AWS. So why wouldn’t AI wrappers work the same way?
The similarity is real. In both cases, you’re building on infrastructure you don’t own, adding a layer of abstraction, and charging for the convenience and specialization. The playbook is proven.
But there are critical differences.
AWS is stable. API contracts rarely break. Pricing changes are gradual and predictable. Services have long deprecation cycles. You can build on AWS and expect your foundation to look similar in three years.
AI is chaotic. Models improve dramatically every few months. API features change. Pricing is unpredictable. An update to GPT can break carefully tuned prompts. Open source alternatives appear overnight and undercut commercial APIs. You can build on OpenAI today and have no idea what your foundation looks like next year.
AWS has competition. You can architect for portability between AWS, Azure, and GCP. Lock-in exists but it’s manageable. Multi-cloud strategies work.
AI has concentration. OpenAI and Anthropic dominate. Open source models are catching up but aren’t there yet for many use cases. Switching costs are real because prompts don’t transfer cleanly between models.
The biggest difference: AWS wrappers succeeded because they added orchestration value in a stable environment. AI wrappers need to add value in an environment that’s changing faster than they can adapt.
The survivors will be those who build genuine workflow integration, proprietary data advantages, or multi-model strategies that reduce dependency on any single provider. Just like Snowflake succeeded by being cloud agnostic, AI wrappers might succeed by being model agnostic.
But many won’t make it. The speed of change in AI is just fundamentally different from the speed of change in cloud infrastructure.
Will This Last?
Here’s the honest assessment: most won’t. But some will.
The ones that won’t last are generic wrappers with no differentiation. If your value proposition is “ChatGPT but easier,” you have maybe 18 months before either OpenAI makes their interface good enough or users figure out they don’t need you. I’ve already seen this happen with early wave ChatGPT wrapper apps that briefly had traction and are now ghost towns.
The ones that might last are building real moats. Take Harvey AI, the legal assistant that reportedly raised over $100 million. According to public information, it’s built on language models they didn’t create, but they’re training on legal-specific data, integrating deeply with law firm workflows, and building features around compliance and confidentiality that generic models don’t handle. The wrapper was the entry point. The moat is everything they built around it.
Or look at what Jasper has done in content marketing, based on publicly available information about their evolution. They reportedly started as a wrapper around GPT-3 for marketing copy, then built brand voice training, integrated with marketing tools, added workflow management for teams, and created templates for specific use cases. They went from “GPT but easier” to “content workflow platform that happens to use AI.” That’s defensible.
The pattern is clear: wrappers work as starting points, not end points. You use the wrapper to validate demand and find product market fit fast. Then you build something that’s hard to replicate. That might mean:
Going deep in a vertical where you understand domain-specific problems better than anyone. It’s not enough to wrap GPT for legal work. You need to understand legal document structure, compliance requirements, confidentiality standards, and how lawyers actually work. That knowledge becomes your moat.
Or it means accumulating proprietary data that makes your AI better than generic alternatives. Every customer interaction trains your system on industry-specific edge cases. Over time, you’re not just calling an API anymore. You’re calling an API plus your accumulated learning.
Or it means integrating so deeply into customer workflows that switching costs become real. When your AI features are embedded in tools teams use every day, tied to their data, and customized to their processes, you’re not competing on model quality anymore. You’re competing on ecosystem integration.
The companies I’m skeptical of are those treating the wrapper as the entire business. They found a prompt that works well. They built a nice interface. They got some initial traction. Now they’re trying to ride that for as long as possible without building anything defensible underneath.
That doesn’t work. Either model providers will compete directly (OpenAI is already doing this in multiple categories), or competitors will replicate your wrapper in days, or customers will figure out they can do it themselves, or API prices will crush your margins.
Sustainability in AI wrappers requires a path from wrapper to platform. If you can’t articulate that path, you’re building a timing play, not a company.
What This Means If You’re Building One
If you’re building an AI wrapper (or thinking about it), here’s what you need to do in the first 90 days:
Pick a vertical and go deep. Don’t build “AI for content.” Build “AI for technical documentation in regulated industries.” Specificity is your only protection against commodity competition. You need to understand your vertical better than any generalist competitor ever will.
Plan your moat on day one. Before you write your first line of code, answer: what will be hard to replicate 12 months from now? If the answer is “nothing,” don’t build it. Your moat might be proprietary data accumulation, deep integrations, domain expertise, or network effects. But you need to know what it is before you start.
Build for model agnosticism from the start. Don’t tightly couple to GPT-5. Abstract your model layer so you can swap providers, use multiple models for different tasks, or switch to open source alternatives as they mature. The companies that survive will be those that can adapt when (not if) the model landscape shifts.
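One common way to get that abstraction is a thin provider interface that application code depends on instead of any vendor SDK. This is a sketch under assumed names; the provider classes here return canned strings so it runs offline, and in production each `complete` method would call the actual SDK.

```python
# Sketch of a model-abstraction layer: application code depends only on
# the Provider protocol, never on a specific vendor. Names are illustrative.
from typing import Protocol

class Provider(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider:
    def complete(self, prompt: str) -> str:
        # In production: call the OpenAI SDK here.
        return f"[openai] {prompt}"

class LocalModelProvider:
    def complete(self, prompt: str) -> str:
        # In production: call a self-hosted open source model here.
        return f"[local] {prompt}"

class Assistant:
    """Application code sees one interface; the provider is swappable."""
    def __init__(self, provider: Provider):
        self.provider = provider

    def answer(self, question: str) -> str:
        return self.provider.complete(f"Answer concisely: {question}")

# Swapping providers is a constructor argument, not a rewrite.
print(Assistant(OpenAIProvider()).answer("What is churn?"))
print(Assistant(LocalModelProvider()).answer("What is churn?"))
```

The abstraction isn’t free (prompts still need per-model tuning), but it turns a provider switch from a rewrite into a migration.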
Track your unit economics religiously. If API costs are 40% of revenue and climbing, you don’t have a business. You have a temporary arbitrage that ends the moment your provider raises prices or your customer realizes they can call the API directly.
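The arithmetic is worth making explicit. A back-of-envelope check with illustrative figures shows how fast a 40%-of-revenue cost structure erodes under a price hike:

```python
# Illustrative unit-economics stress test. All figures are made up.
def gross_margin(revenue: float, api_cost: float) -> float:
    return (revenue - api_cost) / revenue

revenue = 100_000   # monthly revenue, illustrative
api_cost = 40_000   # API spend at 40% of revenue

print(f"margin today: {gross_margin(revenue, api_cost):.0%}")

# Provider raises prices 50%: the same usage now costs 60% of revenue.
print(f"margin after hike: {gross_margin(revenue, api_cost * 1.5):.0%}")
```

A 60% gross margin drops to 40% from a single pricing decision you had no vote in. Run this check against your own numbers before a provider runs it for you.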
Focus on workflow, not features. Don’t just add AI capabilities. Integrate them into how users actually work. The wrapper that saves users three steps becomes essential. The wrapper that adds one AI feature to an existing workflow becomes optional.
Have a 12-month defensibility roadmap. What are you building this quarter that makes you harder to replace? If your answer is “we’re improving the prompts and the UI,” you’re not building defensibility. You’re just iterating on your wrapper.
The hard truth: if your entire value proposition is “I make it easier to use GPT,” you’re one product update away from irrelevance. ChatGPT’s interface gets better every month. Their enterprise features improve. Their API capabilities expand. If ease of use is all you offer, they’ll eat your lunch.
And if you’re evaluating AI companies (as an investor, potential customer, or someone considering joining), look past the AI claims. Ask what they’re actually building. Ask where the intelligence comes from. Ask what happens if OpenAI raises prices by 50%. Ask what their plan is when GPT-5 makes their current approach obsolete.
The companies with good answers to those questions might be worth betting on. The ones without answers are just riding the wave until it breaks.
The Real Question Nobody’s Asking
Here’s what keeps me up at night about the wrapper economy: we’re watching hundreds of millions in venture capital fund businesses whose core assumption is that the AI layer stays stable and accessible.
But what if it doesn’t?
What happens when OpenAI or Anthropic decide they’d rather own the application layer themselves? They have the models, the distribution, the brand recognition, and increasingly, the understanding of which use cases matter. Every API call is a signal about what customers want. They’re literally watching the entire market test product ideas in real time.
Why would they let wrapper companies keep that value when they could just build it themselves?
We’ve seen this movie before. AWS launched services that competed directly with their biggest customers. Google built features that killed entire categories of apps. Platform providers always move up the stack eventually.
The bet every AI wrapper company is making is that they can build defensible businesses faster than platform providers can build competing features. Maybe some will. But most won’t.
The AI wrapper boom is real. The money is real. The traction is real. But so is the fragility. We’re in the phase where everything works until suddenly it doesn’t.
Treat wrappers as starting points, not destinations. Use them to find product market fit fast, then build something that survives contact with an evolving platform. The companies that get this will thrive. The ones that don’t are just timing the hype cycle.
And if you’re building one right now? You’ve got maybe 12-18 months to figure out what makes you defensible. After that, the platform providers will have learned what works and the easy money will be gone.
The clock is ticking.
Disclaimer: This article mentions specific companies and products as examples for illustrative and educational purposes only. All information, including revenue figures, funding amounts, and business strategies, is based on publicly available sources, industry reports, and media coverage available at the time of writing. I have not independently verified all claims and cannot guarantee their accuracy. The analysis and opinions expressed are my own and do not represent statements of fact about any company’s current operations or performance. I have no financial interest, business relationship, or affiliation with any companies mentioned. This content is commentary and analysis, not investment, legal, or business advice. If any company believes information about them is inaccurate, please contact me and I will review and update as appropriate.