“That’s just an AI wrapper.”
The put‑down feels familiar to people developing something new using Artificial Intelligence.
The push-back feels just as familiar.
“Everything is a wrapper. OpenAI is a wrapper around Nvidia and Azure. Netflix is a wrapper around AWS. Salesforce is an Oracle database wrapper valued at $320 billion,” says Perplexity CEO Aravind Srinivas1.
For those not familiar with the term “AI Wrapper,” here’s a good definition2.
It is a dismissive term that refers to a lightweight application or service that uses existing AI models or APIs to provide specific functionality, typically with minimal effort or complexity involved in its creation. A popular example of an AI wrapper is an app that enables users to “chat” with a PDF. This type of AI application allows users to upload a PDF document, such as a research paper, and interact with an AI model to quickly analyze and obtain answers about the specific content. In the early days of ChatGPT, uploading documents as part of the prompt or creating a custom GPT was not possible, so these apps became very popular, very fast.
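To make the term concrete, here is a minimal sketch of what a chat-with-a-PDF wrapper could look like under the hood. The libraries, model name, and prompt are illustrative assumptions, not a description of any particular product.

```python
# Hypothetical "chat with a PDF" wrapper: extract the text, hand it to a hosted
# model, relay the answer. No proprietary data, no learning loop, no workflow.
from pypdf import PdfReader   # pip install pypdf
from openai import OpenAI     # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_pdf(pdf_path: str, question: str) -> str:
    # 1. Pull the raw text out of the document (long PDFs would need chunking).
    text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)

    # 2. Pass everything to someone else's model and return its answer verbatim.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model would do
        messages=[
            {"role": "system", "content": "Answer questions using only the provided document."},
            {"role": "user", "content": f"Document:\n{text}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

# Example: print(ask_pdf("research_paper.pdf", "What is the main finding?"))
```

The entire “product” is a prompt and two library calls, which is exactly why the label is so often used dismissively.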
In my view, this AI wrapper debate misses a larger point. Wrappers are not all the same. Some enjoy a brief run and last only until big platforms bundle them into their suites. But products that can live where users already work, make use of proprietary data, and/or withstand incumbent distribution advantages can endure. The wrapper label is a distraction from what I think actually matters: (1) is it a feature or a product, and (2) how big is the market segment?
Let’s first look at that earlier example of a wrapper that lets you chat with a PDF. Such a tool solves one narrow problem of answering questions about a document. It does not create new documents or edit existing ones. It typically does not capture any unique data, or learn from user behavior. So to me, it is a capability rather than an end-to-end solution. A means to an end, if I may. As a result, this kind of feature belongs inside a document viewer or editor, or in the flagship applications of model providers. So when the foundation models themselves (OpenAI/ChatGPT, Anthropic/Claude, Google/Gemini) bundle this feature natively, the standalone tool becomes redundant. This is classic feature behavior - easy to copy, no end-to-end job, no moat or long-term defensibility.
One caveat, though: even those that are features can be interesting indie businesses that make money until the platforms build the same capability into their apps3.
PDF.ai $500K MRR, PhotoAI $77K MRR, Chatbase $70K MRR, InteriorAI $53K MRR4.
Jenni AI went from $2,000 to over $333,000 MRR in just 18 months5.
Some wrappers are genuine products but live in market segments so large that model builders and big tech platforms cannot ignore them. Two vectors of competition come into play: (1) model access, and (2) distribution.
Coding assistants illustrate both. Tools such as Cursor have turned a wrapper into an AI integrated development environment (IDE) that reads the repo, edits files, generates code, reverts changes, runs coding agents, and reimagines the developer experience for the AI era. The market justifies the attention. Software developers represent roughly 30% of the workforce at the world’s five largest market cap companies, all of which are technology firms as of October 20256. Development tools that boost productivity by even modest percentages unlock billions in value. That makes this segment a prime target for both model builders and incumbents that already own distribution channels.
But Cursor and other such tools depend almost entirely on access to Anthropic, OpenAI, and Gemini models. Developer forums are filled with complaints from paying subscribers about rate limits. In my own projects, I exhausted my Claude credits in Cursor mid-project and, despite preferring Cursor’s user interface and design, migrated to Claude Code (and now pay ten times more to avoid rate limits). The interface may be better, but model access proved decisive.
This foundation model competition extends to every category that OpenAI Applications CEO Fidji Simo flagged as strategic (Knowledge/Tutoring, Health, Creative Expression, and Shopping) as well as other large market segments such as Writing Assistants, Legal Assistants, etc.
Distribution poses the second threat. Even where model builders stay out, startups face a different competition question - can they build a user base faster than incumbents with existing products and distribution can add AI features? This is the classic Microsoft Teams vs. Slack dynamic7. The challenge is in establishing a loyal customer base before Microsoft embeds Copilot in Excel/PowerPoint, or Google weaves Gemini into Workspace, or Adobe integrates AI across its creative suite. A standalone AI wrapper for spreadsheets or presentations must overcome not just feature parity but also bundling/distribution advantages and switching costs.
This distribution competition from incumbents also holds in other large markets such as healthcare and law. In these markets, regulatory friction and control of systems of record8 favor established players such as Epic Systems in healthcare. For example, a clinical note generator that cannot write to the Electronic Health Record (EHR) will likely come up against Epic’s distribution advantages sooner or later.
Three caveats here: (1) Speed to market can create exit options even without long-term defensibility; tools like Cursor may lack control over their core dependency (model access), but rapid growth makes them attractive targets for model builders seeking instant market presence. (2) Superior execution occasionally beats structural advantage; Midjourney’s product quality convinced Meta to use it despite Meta’s substantially larger budget and distribution power. (3) Foundation models may avoid certain markets despite their size; regulatory burden in healthcare and legal, or reputational risk from AI companions or adult content, may leave opportunities for operators willing to face extreme regulatory scrutiny or controversy.
The opportunity remains large9, but competition (and/or acquisition) can come knocking.
Cursor went from zero to $100 million in recurring revenue in 18 months, and became the subject of recurring OpenAI acquisition rumors.
Windsurf, another coding assistant, struck a $2.4B licensing deal with Google.
Gamma reached $50 million in revenue in about a year.
Lovable hit $50 million in revenue in just six months.
Galileo AI was acquired by Google for an undisclosed amount.
Not every market gap attracts model builders or big tech. A long tail of jobs exists that are too small for venture scale but large enough to support multimillion-dollar businesses. These niches suit frugal founders with disciplined scope and lean operations.
Consider those Astrology or Manifestation or Dream Interpreter AI apps. A dream interpreter that lets users record dreams each morning, generates AI videos based on them, maintains some kind of dream journal, and surfaces patterns over time solves a complete job. Yes, users could describe dreams to ChatGPT and it even stores history/memory, but a dedicated app can structure the dream capture with specific fields (recurring people, places, things, themes etc.) and integrate with sleep tracking data in ways a general chatbot likely cannot. Such a niche is small enough to avoid model attention but seems large enough to sustain a profitable indie business.
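As a purely hypothetical illustration of what “structured capture” adds over a general chatbot, a dedicated app might enforce a schema like the one below and compute patterns across entries; the field names and helper function are assumptions, not a description of any real product.

```python
# Hypothetical schema a dedicated dream-journal app might enforce, in contrast
# to the free-form history of a general chatbot. Field names are illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DreamEntry:
    entry_date: date
    raw_description: str                              # the user's morning dictation
    people: list[str] = field(default_factory=list)   # recurring people
    places: list[str] = field(default_factory=list)   # recurring places
    themes: list[str] = field(default_factory=list)   # e.g. "falling", "being late"
    sleep_hours: float | None = None                  # joined from sleep-tracker data

def recurring_themes(entries: list[DreamEntry], min_count: int = 3) -> list[str]:
    """Surface themes that repeat across the journal, something a one-off chat
    session has no structured history to compute."""
    counts: dict[str, int] = {}
    for entry in entries:
        for theme in entry.themes:
            counts[theme] = counts.get(theme, 0) + 1
    return [theme for theme, count in counts.items() if count >= min_count]
```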
While the previous categories frame opportunities for new ventures, incumbents face their own strategic choices in the wrapper debate when model builders arrive. Those that navigate model builder competition, in my view, will share two characteristics.
First, they will own the outcome even when they don’t own the model. Applications already embedded in user workflows (Gmail/Calendar, Sheets, EHR/EMR, Figma) require no new habit formation, and building these platforms from scratch is much harder than adding AI capability to existing ones. When these applications ship actions directly into a proprietary system of record (controlling the calendar event, filing the claim, creating the purchase order, and so on), “done” happens inside the incumbent’s environment. AI becomes another input to an existing workflow rather than a replacement for it.
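A minimal sketch of that pattern, with an entirely hypothetical calendar API standing in for the incumbent’s system of record, might look like this; the helper names are assumptions for illustration only.

```python
# Sketch of "AI as another input to an existing workflow": the model only
# drafts a proposal, while the incumbent's own API writes the final artifact.
# `llm.extract_event` and `calendar_api.create_event` are hypothetical
# stand-ins for a real model call and a real system of record.
from dataclasses import dataclass

@dataclass
class EventProposal:
    title: str
    start_iso: str
    end_iso: str
    attendees: list[str]

def schedule_from_email(email_text: str, llm, calendar_api) -> str:
    # 1. The model proposes; it never touches the system of record directly.
    proposal: EventProposal = llm.extract_event(email_text)

    # 2. "Done" happens inside the incumbent's environment: the event is
    #    created, permissioned, and synced by the existing workflow.
    return calendar_api.create_event(
        title=proposal.title,
        start=proposal.start_iso,
        end=proposal.end_iso,
        attendees=proposal.attendees,
    )
```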
Second, successful incumbents will build proprietary data from customer usage. Corrections, edge cases, approvals, and any human feedback become training data that refines the product over time and that a frontier model will not have access to. Cursor, though not an incumbent and despite its dependence on external models, plans to compete by capturing developer behavior patterns, as CEO Michael Truell notes in his Stratechery interview:
Ben: Is that a real sustainable advantage for you going forward, where you can really dominate the space because you have the usage data, it’s not just calling out to an LLM, that got you started, but now you’re training your own models based on people using Cursor. You started out by having the whole context of the code, which is the first thing you need to do to even accomplish this, but now you have your own data to train on.
Michael: Yeah, I think it’s a big advantage, and I think these dynamics of high ceiling, you can kind of pick between products and then this kind of third dynamic of distribution then gets your data, which then helps you make the product better. I think all three of those things were shared by search at the end of the 90s and early 2000s, and so in many ways I think that actually, the competitive dynamics of our market mirror search more than normal enterprise software markets.
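For readers wondering what turning corrections into training data looks like mechanically, here is a hypothetical sketch of the capture step; the schema and storage are assumptions, not a description of Cursor’s actual pipeline.

```python
# Hypothetical capture of user corrections as feedback data. Every accepted,
# edited, or rejected suggestion becomes a labeled example that the product
# owner controls and the upstream model provider never sees.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class FeedbackRecord:
    context: str        # e.g. the surrounding code before the suggestion
    suggestion: str     # what the model proposed
    final_text: str     # what the user actually kept
    accepted: bool      # accepted as-is versus edited or rejected
    timestamp: str

def log_feedback(context: str, suggestion: str, final_text: str,
                 path: str = "feedback.jsonl") -> None:
    record = FeedbackRecord(
        context=context,
        suggestion=suggestion,
        final_text=final_text,
        accepted=(suggestion == final_text),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # Append-only log that can later be turned into fine-tuning or preference pairs.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```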
Both critics and defenders of AI wrappers have a point, and both miss something. The critics are right that some wrappers lack defensibility and will disappear when platforms absorb their features. The defenders are right that every successful software company wraps something.
But I think the insight lies between these positions. Even if a new application starts as a wrapper, it can endure if it embeds itself in existing workflows, writes to proprietary systems of record, builds proprietary data and learns from usage, and/or captures distribution before incumbents bundle the feature. More importantly, wrappers that continue to swiftly ship features that solve users’ needs even as competition arrives are difficult to compete with. These are the same traits that separate lasting products from fleeting features.
If you enjoyed this post, please consider sharing it on your socials or with someone who might also find it interesting. Follow me on X.com or LinkedIn to discuss tech and business trends as they happen.
Perplexity AI CEO Aravind Srinivas, pushing back on criticism about the business potential of Perplexity.
I use the term system(s) of record to mean the final artifact produced when a job-to-be-done is completed; for example, controlling the calendar event, filing the claim, creating the purchase order, and so on.