Satisficing is one of the most important and yet least understood ideas in marketing. The idea comes from Nobel Prize-winning economist Herbert Simon and is a portmanteau of "satisfy" and "suffice." The basic premise is that a much more reasonable model of human behavior than the popular economic concept of utility maximization is that, when we make decisions, we ensure that we clear an arbitrary satisfaction threshold (satisfy) and then give up excess utility for ease (suffice). Here’s Simon from his 1956 paper "Rational choice and the structure of the environment":
*The central problem of this paper has been to construct a simple mechanism of choice that would suffice for the behavior of an organism confronted with multiple goals. Since the organism, like those of the real world, has neither the senses nor the wits to discover an "optimal" path — even assuming the concept of optimal to be clearly defined — we are concerned only with finding a choice mechanism that will lead it to pursue a "satisficing" path, a path that will permit satisfaction at some specified level of all of its needs.*
Simon won a Nobel for his work on bounded rationality, of which satisficing is a component. To me, it’s a perfect way to articulate why emotional messages resonate more than intellectual ones. Consumers realize, even if they can’t articulate it, that in most categories the differences between products are relatively small (despite the protestations of each brand). So, rather than spending time making a perfectly rational decision about the optimal product, they go with the easiest-to-buy option that also meets their standards for price, quality, and so on. My go-to example is toothpaste: you could read the back of every box in CVS to decide the optimal brand to purchase, or you could trust that CVS wouldn’t carry junk and choose the first one you recognize (it’s the one with Scope for me; I can’t even remember the brand at the moment). What’s easiest to buy is usually the thing that’s a) available in front of you and b) recognizable.
Enter AI.
A fundamental question I have about these models: if you assume they will continue to become more important mediators of product decisions for consumers (which I do), then marketers will have to figure out how to persuade and market to the models, and whether that represents a fundamentally different communications approach than the one they’ve historically taken with consumers. Specifically, I’m curious whether the kind of rational persuasion that marketers shy away from—“feeds and speeds” is the pejorative term some folks in the industry use—will actually be the thing that convinces a language model to recommend your brand or product.
Or maybe, and I think this is more likely based on my own experience playing with these models, it’s just content and communications that look rational. As we covered recently in the BRXND newsletter, research supports this intuition. Springboards.ai ran a creativity benchmark with nearly 700 marketing professionals evaluating outputs across major LLMs. When they had the models themselves judge the same work, reasoning models like o3 strongly preferred outputs with clear logical progression—they "don’t want big creative leaps," as Springboards CEO Pip Bingemann put it. Humans, meanwhile, were drawn to messier, more subjective work. This is telling: the models aren’t reasoning their way to better answers—they’re pattern-matching on what they think persuasive writing looks like. They’ve been RLHF’d to please us and, apparently, have concluded that humans want things that sound professional and structured. It’s a kind of emotional reasoning dressed up in a blazer.
One of the complaints we all have about the output of these systems is that they often give us stuff that looks professional but reads like a high school sophomore doing their best to sound the way they think a grown-up sounds. It’s possible—and critically, we don’t really know yet—that the models will respond better to stuff that looks like rational writing, whether or not that writing is actually rational.
In that way, AI creates a funny marketing paradox. We think of consumers as purely emotional beings who fail to think rationally, even though Herbert Simon showed that their emotional approach was actually economically rational. The models, on the other hand, which we think of as perfect embodiments of logical thinking, are far more emotional, aiming to give us what they think we want rather than acting rationally.
Speculating on AI in 2017, Daniel Kahneman said, “The robot will be much better at statistical reasoning and less enamored with stories and narratives than people are.” Which sounds right until you realize that the model’s main goal is to “act as a helpful assistant.” The fundamental question, I think, is what satisficing will look like for these models. Going back to our toothpaste example, AI can easily read all those boxes in parallel, so clearly the equation will be fundamentally different from our practice in the pharmacy aisle.
My guess is there’s a lot of room for backstory. In the aisle, you get a box, but in a conversation, you get to explain the box: Why this ingredient? What problem does it solve? This is merchandising 101, but with space to talk. Which is funny, because Kahneman thought the robot would be less enamored with narratives. Five years later, ChatGPT RLHF’d its way into our hearts. Sadly, Kahneman passed away in March of 2024, but I suspect he would have updated his thinking. When you RLHF a model to be a "helpful assistant," you’re essentially training it to care about context, explanation, and story—exactly the things he thought the robot would skip past.