While this was definitely the year of Agentic AI, what is less clear is the effect LLM generation will have on otherwise neutral projects. We’ve seen a few software products ride the marketing wave to give themselves time to pivot, but the public expectation from AI has not settled.
This leads to the question of which types of project can be undertaken without being negatively impacted by AI. This post looks at how AI exposes products that fail to express their true value proposition clearly.
How AI Changes User Expectations and Exposes Risk
The problem that creative industries (of which the games industry is the easiest to identify) are having with generative AI is not their use or abuse of it, but how their products are now seen by users. Should a game contain only human-generated assets, like a classic work of art? This is no longer an academic question.
We understand that if you try to pass off a piece of poorly generated art in place of human-created art, it will be rightly described as “AI slop” and will cause upset because of the perceived lower value. But if AI is part of an enhanced workflow, even where humans have been paid for their work, users may need to know more. For example, presented with a robotic enemy in the recently launched multiplayer game ARC Raiders, players were unsurprised that the robot’s movements were generated with AI help. But on hearing human voices in the game, some players were upset that not every line was voice acted. Even though the voice actors were paid and AI was only used to generate variable voice lines, some players felt this was wrong.
All workflows and processes impacted by LLMs that interact directly with human users are at risk of having their value questioned.
The bottom line is that all workflows and processes impacted by LLMs that interact directly with human users are at risk of having their value questioned. We all considered ChatGPT novel when we first interacted with it, yet early attempts to replace human help with chatbot-generated responses were rejected. That said, where a company has correctly described what the user is likely to get (e.g. 24-hour help with common problems), most users have accepted the proposition.
Simon Willison regularly asks any LLM model he is evaluating to draw a pelican riding a bicycle. The results are usually comically bad, but they reinforce the understanding that LLMs can only extrude an approximation of objective reality from the web text they have ingested (which is most of it by now). This is why we are right not to value LLM image creation highly at this time. As an example, the SVG image below was the response from Mistral Large 3:
Simon is not doing this to ridicule LLMs; he is making the point that every model produces a unique and (to human eyes) strange attempt at the task, since they possess no real-world model to work with in their heads. Indeed, they have no heads.
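If you want to reproduce the test yourself, the sketch below uses the Python API of Simon Willison’s llm library. It is a minimal sketch, not a definitive recipe: it assumes you have installed the llm-mistral plugin and set an API key, and the exact model ID registered on your machine may differ.

```python
# pip install llm llm-mistral
# llm keys set mistral   (paste your Mistral API key when prompted)
import llm

# Load a model by ID. The "mistral/..." prefix assumes the llm-mistral
# plugin; run `llm models` to list the IDs actually available to you.
model = llm.get_model("mistral/mistral-large-latest")

# Simon Willison's standard benchmark prompt.
response = model.prompt("Generate an SVG of a pelican riding a bicycle")

# The response text should be (or contain) an SVG document.
with open("pelican.svg", "w") as f:
    f.write(response.text())

print("Wrote pelican.svg; open it in a browser to judge the result")
```

Run it a few times: each attempt will come out differently, which is rather the point.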
The Pitfalls of ‘AI-Infused’ Products
The Ladybird web browser project I mentioned earlier this year has a simple premise: it is built from scratch with no components from other browsers. As well as promising no monetisation deals, it focuses on just being a web browser. So now it has an extra unspoken boon: it does not have an LLM service baked in.
Contrast this with, for example, Google, which is so busy integrating AI into Chrome and other products that it is no longer clear to users what they are getting. Technically, the AI Overviews service in search, which produces a short generated response rather than simply a list of links, is separate from the browser. But these necessary separations are now muddied by Google presenting Chrome as an AI browser. I have written about Atlas, OpenAI’s browser “with ChatGPT built in,” which takes us further away from the original web. Nobody wants a pelican on a bike.
The very worst thing to do is present a product or experience as “AI native” or “AI first.” Everything should aim to be user first.
Again, concerns arise not because LLMs are some sort of poison, but because large companies are failing to clarify how their services bring value. Instead, they choose to present LLMs as witchcraft: a mystical effect that cannot be quantified, but should nevertheless be infused across an organisation for maximum effect. This works against most people’s experience of using ChatGPT; they can see exactly what it does well and what it does not. The very worst thing to do is present a product or experience as “AI native” or “AI first.” Everything should aim to be user-first. I regularly report on how Agentic AI is successfully giving developers new ways of developing software (for instance, see my review of Conductor), even if it is still too early to measure the true effectiveness. However, users (developers, in this case) can actually see the benefits.
I understand that pressing a brake pedal to slow down a car is usually still a mechanical process (i.e., hydraulic force applied to the brakes), but can now also be done by wire (i.e., the pedal sends an electronic signal to the brakes). I understand that Electric Vehicles (EVs) may still use a mechanical backup. I vaguely understand how ABS works. The information is available, but we accept that the engineering is done holistically. Yet I wouldn’t be attracted to a car pronouncing itself “wire native” or “wire first.” I simply expect that any carmaker values using safe and efficient components in its vehicles.
Understanding the Legal Risks of Using LLMs
The final risk from LLMs is more familiar. Midjourney has been successful in producing a very good image generation product with open source models, but it is now facing several legal issues. A larger company could better manage the legal risk, or the relationships with the businesses it is undermining. Midjourney should clearly have seen and combatted the “blatant copyright infringement” that people warned it about from the beginning, but it hasn’t done enough innovation at the user prompt layer or the output layer to circumvent this. By comparison, YouTube works hard at spotting infringements in videos uploaded to its platform.
The problem for other projects is that even with small amounts of LLM generation, they are now exposed to legal threats, and increasingly so if this litigation is successful. By defining their value proposition tightly early on, they may stop users from exploring areas where infringement is likely.
Why a Clear Value Proposition Matters in the AI Era
I support the idea that LLMs are just tools, yet they clearly will affect many existing and new software projects in unpredictable ways. What LLMs have introduced is the need to fully explore and explain how your product’s value proposition may now have altered, or whether that proposition was mature enough in the first place.
Projects need to work harder to independently explain what their products produce, what outcomes are expected, and in what domain these outcomes fall. Then the project’s area of responsibility will be clearer. “The LLM made it” will no longer count as a valid answer when examining these boundaries in the future.