The AI ecosystem seems to be full of contradictions. I and many of my peers in machine learning and software engineering roles are facing more and more pressure to incorporate AI into the software we build, but it comes without clarity on the business goals we’re trying to achieve. At the same time, market analysts and “thought leaders” in tech continue to center massive amounts of their energy and attention on AI, seemingly at the expense of any other possible topic of interest. Executives and leaders hear from boards, analysts, and advisors that “deploying AI” or being “AI driven” is required for business success, and they come back to their teams asking for ways to carry out these instructions, but no one is clear about why it needs to be done. And all the while, across social media, regular users are complaining about having unwanted AI injected into apps and products they liked fine before.
What’s driving these apparently conflicting forces? I have a hypothesis about the bigger picture that I want to share.
Tech startups usually have a central goal: build a product with functionality that will meet the needs of customers (product-market fit, as those of us in the space often hear). Those customers will pay for the product, use it to improve their own business performance, and success for all involved will ensue. This model is well established and gives us a path to follow. Identify the customer’s needs or problems, investigate what features could solve them, build those features, sell the product to the customers facing those problems, rinse, and repeat.
Where does “become AI driven” fit in this? Why is this AI mania so powerful at the current moment, with seemingly no regard for the customer’s actual pain points? It’s like a wave of carpentry companies all becoming “screwdriver driven” — sure, a screwdriver might be very useful and even necessary for a carpentry project, but it’s certainly not the only tool in your arsenal. AI is the same, and deciding to apply it should only come after the problem has been analyzed and assorted approaches have been considered.
So, if AI is not the best solution to your customers’ pressing challenges, what do you do in the face of overwhelming pressure to use it regardless? As engineers, we might be tempted to just ignore this demand, given that it does not fit into the model of startup development I’ve described. Some leaders and companies may be able to just brush off the AI hype tsunami and carry on working as they have been. If an AI tool happens to be the best solution for something, they can use it, but it doesn’t have to drive their whole roadmap.
However, for most of us, this probably isn’t a viable path. So, where do we go? And how did we get here? I’ve got some ideas, so let’s look at the major players involved.
Inside the startup
As I have explained, software startups generally set out to create some functionality that will help customers solve a problem. Uber created the ride-share app, which helped people find convenient and comfortable rides at much lower prices than taxis. (This had and continues to have serious negative externalities and consequences, but let’s set that aside for the moment.) Other startups have tried to create solutions to inefficiencies and inconveniences in myriad other sectors. They come up with an idea, solicit investment from venture capital or others, and use that money to turn the idea into a real product. They pick up customers, start earning money, and become profitable. Eventually, an IPO or an acquisition by a larger company may happen.
Having a good idea isn’t all there is to the startup approach, however — there’s often also an element of showmanship. Because of the funding model, a lot of startups build a minimum viable product, or MVP, and then carry on “building the plane while flying it” as the old cliché goes. This means that you’re selling a rudimentary (but hopefully still useful) initial product with promises of future improvements, functionality extensions, and so on. This can work, but you need to have some amount of hype to get people interested in investing (as customers or as investors) in the potential of those future promises.
We’d like to think that you just need to have the best idea, and the smartest plan for how to achieve it, and funding will follow — but that’s not the reality. If you need to get attention in order to succeed, you need to be able to promise what people are looking for, and to demonstrate that you are the most current, fresh, advanced option out there. Theoretically, the argument is that a tech company whose offering is not “cutting edge” will not provide customers the best value for money, or will not offer the most useful functionality. Even tech companies that have established, solid products, far more developed than an MVP, are still trying to grow their customer bases. They still need market attention and sizzle to keep the attention of prospective buyers.
This is where some of the demands for becoming “AI powered” come into play. Over 50% of venture capital investment in all of 2025 went to “AI” companies. Incorporating AI functionality into software isn’t inherently a bad idea, but neither is it a silver bullet. It’s just a tool, like any other, and if you’re hoping that just throwing AI around will actually improve your product, you’re deeply mistaken. Just like the carpenters with their screwdrivers, the tool choice shouldn’t be driving the strategy. You can end up building something that is not functional, or that, even if it is functional, misses key requirements for being useful or appealing to customers.
But nonetheless, this is where the available money is going right now, so understandably this is the instruction leadership is giving engineers at many, many companies. When the CEO of your company comes to the engineering department and instructs you that AI needs to be the new strategic direction, you might find yourself thinking “what does that mean?” or “how does that fit into all our established plans?” I’d actually argue that these are the wrong questions to ask.
Inside the boardroom
What you really want to know is “Why is my leader asking for this? What do they really want?” And to answer these, you need to consider a little bit more about the startup CEO’s pressures and influences.
The role of the startup CEO is actually pretty difficult. You’re leading a whole organization, which is already complicated, but you don’t have unlimited autonomy, in most cases. Frequently, you report to a board of directors who are there to ensure that the business goes well. That’s where a lot of the pressure to increase AI involvement and visibility in the product can come from.
A CEO might organically be bought into the AI hype, and be perfectly willing to push incorporation of this tool on the rest of the organization without much concern for the problem being solved. However, they might also cynically look at the economic environment and recognize that the way to media attention and analyst applause is incorporating AI in any way that can be marketed. Simultaneously, they may simply be limited in their choices if their board of directors is very bought into the AI narrative.
CEOs and board members alike are hearing from lots of media and analyst sources about the prestige and importance of AI, and many of them will believe this, which leads to fears about not being seen as tech-forward. Even if they are skeptical, they still want to get the advantage of attention from the market analysts and maintain prestige to drive business, which means playing the game.
If that board pressure is there, as it is for many tech leaders right now, it’s not something you can just ignore. A board of directors has a very significant amount of influence and power in most startups. They can usually fire the CEO, for example. So if the CEO doesn’t think blanket, general application of AI is a good business decision for your company, the CEO can try to push back, but there comes a point where they have to accept the board’s instructions.
No CEO wants to gain a reputation for being technologically stale and outdated, even if they keep their role. Pushing back on the common wisdom of the moment comes at a cost. Political capital is not infinite, and executives have to spend a lot of their time deciding which battles to fight. And all of this assumes the CEO isn’t fully bought into the AI hype themselves, which they very well might be.
Where are market analysts getting this?
I’ve mentioned market analysts in passing, but they’re important to this overall ecosystem. Let’s dig into what their role really is.
Companies like Gartner, Forrester, and other market analysis firms make their money by researching and examining companies in different industry spaces, ranking and scoring them, and selling those reports to people trying to decide which business to contract with. These agencies are kind of like credit rating agencies for banks — they vouch for you so that other companies and purchasers who don’t have the time to do that research themselves can easily decide whether you’re for real or vaporware.
How do they decide which companies to recommend? Well, there’s usually some kind of rubric or set of measurement standards, such as which products have the broadest functionality, which ones seem to do the best job solving certain problems, how satisfied current customers are with the products, and so on.
As you might have guessed, there’s a new element that’s sucking all the oxygen out of the room right now, which is “if this company is using a lot of AI”. What does that even mean? Unfortunately, based on what I’ve seen so far, it frequently means “does this company’s product have an AI chatbot in it?” It’s hard work to take a deep look at a company’s product and all the different ways that machine learning or AI could be incorporated into its various functions and offerings under the surface. It’s pretty easy, on the other hand, to look for a chat window and listen for “AI” in the marketing pitch.
There are endless ways, good and bad, that machine learning and AI can be incorporated into any given software product, and I would argue that a chatbot is rarely the best one for most use cases. But it’s flashy, and obvious, and people who know little or nothing about how the technology actually works can spot it a mile away, so a lot of organizations are landing on this.
It’s functioning as a shorthand outsiders can use to assert that “this company is using lots of AI”, which in turn is shorthand for “this company is technologically advanced and innovative.” Unfortunately, this line of logic is very misguided. AI is not a measure of technical savvy or quality, especially not right now, when low-quality plug-and-play AI solutions for jamming into software products are being sold by every other vendor out there. Slapping a chatbot on your website has nothing to do with how good your codebase is, the quality of your engineering talent, your strategic savvy, or anything else.
Does the market analyst realize this? Maybe, but as with the board and the CEO, it doesn’t really matter, because the broader ecosystem is already so frantic with AI hype. Consider what would happen if you ran a market analysis firm, and your competitor firms were all making great hay about which startups have the most “advanced” AI technology, and your market report stuck to the meat and potatoes of basic features. Could you do it? Perhaps. But the readers of your reports are the board members, executives, and other leaders around the industry, and what are they getting from all sides right now? AI hype. They’re going to want to know whether these companies have AI, not because they know why they should care, or what the AI has to do with the company’s business. They want to know because they’ve been solemnly informed by media and AI companies that this is the cutting edge and anyone who misses it is going to be left behind. (Readers may be reminded of the Web 3.0/blockchain craze that left us with assorted companies throwing their lot into blockchain-based business models that really made no sense at all.)
Why are AI companies the way they are?
This brings us full circle to the AI companies themselves. These are the entities with the obvious motivation to convince all the rest of us that incorporating AI into more or less all software is necessary, because their business model is providing the underlying functionality to make that possible.
The nuanced and difficult part is that AI isn’t always the wrong choice. AI can be very useful for a number of different things! But AI isn’t the right choice for everything, and that’s the difference. It’s a tool to use to pursue a goal, and we should be slow and thoughtful about where we incorporate it. This is for many reasons — for one thing, building AI functionality has opportunity costs, and prevents you from using your time and resources to build something else that the customer might need. But additionally, as I’ve described many times, AI is extraordinarily environmentally, socially, and economically expensive. The cost of building this is so much higher than we can see from our desks, so it needs to be used only in the most appropriate and necessary scenarios.
Even considering this, I sincerely think that if AI companies took a measured approach, providing AI capabilities where needed, there could be a healthy albeit not extraordinary market for this technology. Unfortunately, this is not the business model for AI companies — instead, hundreds of billions of dollars have been invested by big tech firms and investors into OpenAI, Anthropic, and others, and they expect this investment to pay off in one way or another.
Simultaneously, OpenAI in particular is engaged in some strange financial machinations, promising investments to hardware providers that vastly exceed any possible measure of their financial resources. Matt Levine at Bloomberg covered this in his most recent Money Stuff column, noting “if you owe the bank $100, that’s your problem. If you owe Broadcom $500 billion (emphasis in original), that’s Broadcom’s problem. If you owe every big tech company hundreds of billions of dollars, that is their problem. Surely they’ll find a solution! Or you will. The money will figure itself out.” He goes on to explain that taking on massive debt is likely the strategy that will be undertaken to find the money to make good on such promises, if anything is done at all. But if OpenAI is leveraged up to their ears, what is the end game not just for them, but for their creditors and the companies that they’ve promised investments to? A lot of influential and large companies have existential incentives to make this AI economy work.
I think it’s very possible that tech giants and major investors have already sunk more money into the AI companies than they can ever possibly get back in return. We are seeing a frenzied, hype-driven AI marketing force because in order for the AI companies to make good on their revenue and profit promises, they can’t just settle for the customers who have a thoughtful, intentional use case for AI within their product. They need every company out there to be desperately spending on AI, no matter the price, because hundreds of billions of dollars need to be found to keep the wheel spinning. And let’s not forget that for the companies supplying the foundational models, like Anthropic, OpenAI, and xAI, it’s unclear whether reasonable retail prices can cover their costs, suggesting they may lose money when people use the product. Many second-layer AI solution providers, such as coding tools, are raising prices and reducing plan usage allocations to try to close this financial gap.
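To make that unit-economics worry concrete, here’s a minimal back-of-envelope sketch in Python. Every number in it is a hypothetical placeholder, not a real figure from any provider; the point is only the shape of the problem: if the price charged per token doesn’t cover the cost of serving it, growth makes the losses bigger, not smaller.

```python
# Hypothetical unit economics for a foundation model provider.
# All numbers below are illustrative assumptions, not real figures.

price_per_1m_tokens = 10.00  # assumed retail price per million tokens served
cost_per_1m_tokens = 14.00   # assumed compute + infrastructure cost per million tokens

margin_per_1m_tokens = price_per_1m_tokens - cost_per_1m_tokens

# When cost exceeds price, every new customer deepens the monthly loss.
for monthly_volume_m in [1_000, 10_000, 100_000]:  # millions of tokens per month
    monthly_margin = margin_per_1m_tokens * monthly_volume_m
    print(f"{monthly_volume_m:>7,}M tokens/month -> margin ${monthly_margin:>12,.2f}")

# Output:
#   1,000M tokens/month -> margin $   -4,000.00
#  10,000M tokens/month -> margin $  -40,000.00
# 100,000M tokens/month -> margin $ -400,000.00
```

Scale doesn’t rescue a negative per-unit margin; only higher prices, cheaper serving, or a different business model can.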
In tech more broadly, we refuse to accept that a successful company can be satisfied with moderate size and healthy profits — we instead demand gargantuan size and extraordinary profits from every startup, or it’s deemed a failure. This manifests in venture capital expectations of not just 2x or 5x returns on an investment, but 50x or 100x returns. Yet the dollar amounts being invested in AI make achieving anything close to this seem utterly unrealistic, as a quick calculation shows.
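Here is that calculation as a deliberately simple scale check. The inputs are hypothetical round numbers chosen for illustration, not actual investment totals:

```python
# Hypothetical scale check on venture-style return expectations.
# Both inputs are illustrative assumptions, not actual figures.

invested_dollars = 100e9  # assume $100B deployed into AI startups
target_multiple = 50      # the kind of return multiple being chased

required_exit_value = invested_dollars * target_multiple
print(f"${invested_dollars / 1e9:,.0f}B invested at {target_multiple}x "
      f"implies roughly ${required_exit_value / 1e12:,.1f}T in eventual value")
# -> $100B invested at 50x implies roughly $5.0T in eventual value
```

Trillions of dollars of new value would have to materialize somewhere for the math to work, and that is the scale problem in a nutshell.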
What happens to the startups?
Now that we’ve traced the phenomena back to their origin, how does the cycle complete? Your startup shoehorns AI into its product, in a flashy enough way to be noticed, and pays fees to the AI companies for use of the model. Those fees are significant, pay-per-use, and often rise as new models are released.
If the AI application is well thought out and carefully constructed, it might solve a problem a customer actually has, and this can work. Revenue growth may ensue, and this may contribute to business success. This is the ideal scenario, certainly! Will the revenue this brings in be enough to pay increasing prices to the underlying model provider, so that the company can scale up and return billions to investors? That’s much less clear. And if you do come to the conclusion that this functionality isn’t the right path, or isn’t worth the maintenance, untangling it from your application could be very challenging.
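The startup-side version of that question can also be sketched with simple arithmetic. Again, every number here is a hypothetical assumption for the sake of the exercise; the point is that a pay-per-use feature which is profitable at today’s model prices can quietly flip to loss-making when the provider raises them:

```python
# Hypothetical break-even check for a pay-per-use AI feature in a SaaS product.
# All values are illustrative assumptions, not real pricing.

added_revenue_per_user = 4.00  # assumed monthly price bump attributable to the feature
requests_per_user = 200        # assumed monthly AI requests per active user
fee_per_request = 0.015        # assumed per-request fee paid to the model provider


def monthly_margin_per_user(fee: float) -> float:
    """Incremental revenue minus model fees, per user per month."""
    return added_revenue_per_user - requests_per_user * fee


print(f"at today's fee:   ${monthly_margin_per_user(fee_per_request):+.2f} per user")
# A provider price hike (or a pricier default model) flips the sign:
print(f"after a 50% hike: ${monthly_margin_per_user(fee_per_request * 1.5):+.2f} per user")
# -> at today's fee:   $+1.00 per user
# -> after a 50% hike: $-0.50 per user
```

Re-running a check like this whenever the provider changes its pricing is a cheap way to know whether the feature still pays for itself.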
But if the AI integration is not strategic, and is instead sloppy and driven by hype rather than by customer needs or product-market fit, the result can be disastrous. Customer backlash is a real possibility, as are data privacy or security failures, PR crises if a model hallucinates particularly badly, and other dangers. In the worst case, you could lose customer confidence and watch users churn. Then you’re not even paying much to the AI companies at the core of this, because no one is using the functionality, but you’ve lost time, wasted other opportunities, and hurt your business all in one go.
In the most banal scenario, you build out an AI chatbot for your product, and customers may simply find it uninteresting. The amount of technical labor required to set this up is not negligible, and the opportunity cost is real. What happens if you get through all this and find out it just wasn’t worth it? You may not see any increase in revenue, but providing this AI functionality still costs you money when it is used. Do you undo all that hard work getting it implemented, and go back to how you were before? What will your customers, leaders, board, investors, and the market say?
This is daunting, for sure — and it’s theoretically possible for organizations and individuals to stay out of this whole fray. You can keep on keepin’ on as your software startup, not fretting about AI, and just keeping it as a tool in your toolbox if the right problem should come along. But you’re taking a chance. Are you sure you’re never going to need a funding round? Is your board absolutely okay with this approach, or do you just not have one? It’s possible to take this path, but it’s dangerous. Businesses operate in an economy, not a vacuum, and they are never immune from the pressures the broader world might bring to bear on them.
Conclusion
Lots of people online these days complain about dissatisfaction with AI functionalities that have been added to otherwise beloved or valued software products. They ask, “why can’t I just have it without the AI, for the old price?” when a new AI offering is released and it comes with a subscription fee hike that is evidently not optional. I think when you take a look at the broader picture, the answer is pretty clear. Tech leaders were instructed by their boards and the broader media ecosystem that they needed to have AI, they implemented it, and now they need to find a way to justify the investment that was made. It’s the sunk cost fallacy in action: throwing good money after bad instead of cutting your losses.
AI implementation isn’t cheap, and in many if not most cases, the business’s AI functionality comes with an ongoing fee paid to the AI providers, like OpenAI or Anthropic. The business keeps incurring costs whenever you use the product, so the product has to keep costing you extra. The board was convinced by all the influences they listened to that offering AI was vital to continued relevance and success, and they assumed that this AI would go over well with customers. The gap everyone missed was checking whether this was true: whether the AI would actually solve problems the customers had, in a way that was desirable.
If the business needs AI, and the AI implementation actually solves customers’ problems, then this whole AI economy can work! But if you do it badly, slapping some poorly-thought-through AI functionality onto the product in a way that doesn’t make sense, the whole cycle can collapse. Who’s going to end up holding the bag? Small startups who spend their limited funds trying to make it work? AI companies who can never pay back the investments? Big tech companies that have sunk so much into this venture that it actually impacts their main business?
In fact, I’d argue that the survival of the AI economy in any form, if it’s possible, depends on a serious change of perspective. We could have a moderate but successful AI economy with AI implementation being thoughtful, careful, and conservative, but we have to learn to accept that moderation. Instead of exorbitant spending on training the next model version for very marginal improvement, we could be prioritizing efficiency, environmental sustainability, and practical uses of LLMs. We know that building a product that fits the market, solves somebody’s problem, and is within their budget is the way to SaaS startup success. Just because AI has entered the scene does not change that core reality. People won’t buy your product if it doesn’t solve their problems, and if they do buy it based on hype that doesn’t bear out, they’ll drop you and churn, and you’ll be left holding the bag.
We are only going to have success if we realize that AI is a tool. It’s not the end goal; it’s just one possible way, out of many, of achieving an end goal. It’s also not magic; it’s software. Building it well requires just as much hard work, careful planning, expertise, and skill as any other kind of software.
What to Do
I’ve written about this issue before, from different angles. If you’re a machine learning engineer or leader and your CEO has come to you demanding “something with AI”, I have advice for how to handle that. However, individual players in this game usually do not have the power to really change it. What I didn’t initially understand is that while many CEOs are deeply bought into the AI hype, many others are under pressure from forces they can’t rebel against, at least not to any meaningful effect.
I’ve also been part of projects with other experts in the field discussing how you actually do build AI solutions for production that can work, and that won’t just be boondoggles. Be warned, it’s hard! It takes real effort and is not something just anyone can throw together in a weekend. But if done well, there can be real value generated by targeted AI functionality in service of real customer needs.
I recommend that companies feeling the pressure to put in AI for marketing or prestige reasons step back and really work to align it with the goals of the business. Recognize that there is opportunity cost in the choices you make, and that doing this wrong has consequences. Don’t pretend that what you choose affects only you and your business, and make this decision with all the effects clearly in mind.
Read more of my work at www.stephaniekirmer.com.
Further Reading
https://tech.yahoo.com/ai/article/nvidia-investing-100-billion-openai-175159210.html
https://www.eastbaytimes.com/2025/10/03/ai-is-dominating-2025-vc-investing-pulling-in-192-7-billion/
https://techcrunch.com/2025/10/10/the-billion-dollar-infrastructure-deals-powering-the-ai-boom/
AI coding tools aren’t cheap anymore
https://seekingalpha.com/article/4829811-broadcom-joins-the-openai-bubble-club
https://www.bloomberg.com/opinion/newsletters/2025-10-13/openai-keeps-doing-deals
https://futurism.com/future-society/ai-data-centers-finances
Joe Wilkins, AI Investment Is Already So Much Larger Than the Subprime Mo…
https://www.stephaniekirmer.com/writing/dosomethingwithai
https://towardsdatascience.com/deploying-ai-safely-and-responsibly/