In a previous article, we covered key theoretical concepts that underpin expected value analysis — which involves probabilistic weighting of uncertain outcomes — and focused on the relevance to AI product management. In this article, we will zoom out and consider the bigger picture, looking at how probabilistic thinking based on expected values can help AI teams tackle broader strategic problems such as opportunity identification and selection, product portfolio management, and countering behavioral biases that lead to irrational decision making. The target audience of this article includes AI business sponsors and executives, AI product leaders, data scientists and engineers, and any other stakeholders engaged in the conception and execution of AI strategies.
Identifying and Selecting AI Opportunities
How to spot value-creating opportunities to invest scarce resources, and then optimally select among these, is an age-old problem. Advances in the theory and practice of investment analysis over the past five hundred years have given us such useful tools and concepts as net present value (NPV), discounted cash flow (DCF) analysis, return on invested capital (ROIC), and real options, to name but a few. All these tools acknowledge the uncertainty inherent in making decisions about the future and try to account for this uncertainty using educated assumptions and — unsurprisingly — the notion of expected value. For example, NPV, DCF, and ROIC all require us to forecast expected returns (or cash flows) over some future time period. This fundamentally involves estimating the probabilities of potential business outcomes along with their associated returns in that time period and combining these estimates to compute the expected value.
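To make this concrete, here is a minimal sketch of an NPV calculation in which each year's cash flow is itself an expected value over scenarios. All probabilities, cash flows, and the discount rate below are hypothetical figures chosen purely for illustration:

```python
# Minimal sketch: NPV of a hypothetical AI project whose yearly cash
# flows are expected values over assumed scenarios (all figures invented).

def expected_value(outcomes):
    """Probability-weighted average of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Per-year scenarios: (probability, cash flow in $).
yearly_scenarios = [
    [(0.6, 50_000), (0.4, 10_000)],   # year 1: strong vs. weak adoption
    [(0.5, 120_000), (0.5, 30_000)],  # year 2
    [(0.4, 250_000), (0.6, 60_000)],  # year 3
]

discount_rate = 0.10          # assumed cost of capital
initial_investment = 150_000  # assumed upfront build cost

# Discount each year's expected cash flow back to today and net
# against the upfront investment.
npv = -initial_investment + sum(
    expected_value(scenarios) / (1 + discount_rate) ** (t + 1)
    for t, scenarios in enumerate(yearly_scenarios)
)
print(f"NPV: ${npv:,.0f}")  # a positive NPV suggests the project creates value
```

The output is only as good as the probability and cash flow estimates that go in, but making those estimates explicit is precisely the disciplined, probabilistic conversation these frameworks are designed to provoke.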
With an understanding of expected value, powerful, field-tested methods of investment analysis such as those mentioned above can be leveraged by AI product teams to identify and select investment opportunities (e.g., projects to work on and features to ship to customers). In this publication by appliedAI, a European institute fostering industry-academic collaboration and the promotion of responsible AI, the authors outline an approach to computing the ROIC of AI products using expected values. They show a tree diagram of the ROIC calculation, which breaks down the “return” term of the formula into the “benefits” of the AI product (based on the quantity and quality of model predictions) and the uncertainty/expected costs of these benefits. They set these returns against the cost of investment, i.e., the total cost of the resources needed (IT, labor, and so on) to develop, operate, and maintain the AI product. Calculating the ROIC of different AI investment opportunities using expected values can help product teams identify and select promising opportunities despite the inherent uncertainty involved.
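As a rough illustration of the general idea (this is a simplified sketch, not the appliedAI formula itself), the following computes an expected-value ROIC for a hypothetical AI product; the model accuracy, per-prediction values, volumes, and costs are all assumed figures:

```python
# Simplified, hypothetical expected-value ROIC for an AI product.
# The benefit of each prediction is probability-weighted by model accuracy;
# every number below is an illustrative assumption.

p_correct = 0.92          # assumed accuracy of the model's predictions
value_if_correct = 5.00   # assumed $ saved per correct prediction
cost_if_wrong = -2.00     # assumed $ remediation cost per error

expected_benefit_per_prediction = (
    p_correct * value_if_correct + (1 - p_correct) * cost_if_wrong
)

predictions_per_year = 1_000_000
expected_annual_benefit = predictions_per_year * expected_benefit_per_prediction

invested_capital = 2_000_000     # assumed build cost (IT, labor, and so on)
annual_operating_cost = 500_000  # assumed cost to operate and maintain

expected_annual_return = expected_annual_benefit - annual_operating_cost
roic = expected_annual_return / invested_capital
print(f"Expected annual return: ${expected_annual_return:,.0f}")
print(f"Expected ROIC: {roic:.0%}")
```

Running the same calculation for several candidate AI products yields one comparable number per opportunity, which is exactly what is needed for prioritization.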
The use of real options can give teams even more flexibility in their decision making (see more information on real options here and here). Common types of real options include the option to expand (e.g., increasing the functionality of an AI product, offering the product to a broader set of customers), the option to contract or reduce (e.g., only offering the product to premium customers in the future), the option to switch (e.g., having the flexibility to move AI workloads from one hyperscaler to another), the option to wait (e.g., deferring the decision to build an AI product until market readiness can be ascertained), and the option to abandon (e.g., sunsetting a product). In order to decide whether to invest in one or more of these options, product teams can estimate the expected value of each option and proceed accordingly.
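One simple way to operationalize this is to score each option by its expected value under a set of assumed scenarios, as in the sketch below; the option names, probabilities, and payoffs are invented for illustration:

```python
# Hypothetical real options scored by expected value.
# Each option maps to (probability, net payoff in $) scenarios;
# all numbers are illustrative assumptions.

options = {
    "expand to a new customer segment": [(0.3, 1_000_000), (0.7, -100_000)],
    "wait a year for market readiness": [(0.5, 400_000), (0.5, 0)],
    "abandon and redeploy the team":    [(1.0, 150_000)],
}

for name, scenarios in options.items():
    ev = sum(p * payoff for p, payoff in scenarios)
    print(f"{name}: expected value ${ev:,.0f}")
```

Expected value alone will not settle every choice: when two options score similarly, the team's risk appetite and how easily each option can be reversed become the natural tie-breakers.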
Check out the video below for hands-on examples of how standard frameworks (NPV, DCF) and real option analysis can lead to different conclusions about the attractiveness of investment decisions:
AI Portfolio Management
At any given time, businesses (especially large ones) tend to be active on multiple fronts, launching new products, expanding or streamlining existing products, and sunsetting others. Product leaders are thus faced with the never-ending and non-trivial challenge of product portfolio management, which involves allocating scarce resources (budget, staffing, and so on) across an evolving portfolio of products that may be at different stages of their lifecycle, with due consideration of internal factors (e.g., the company’s strengths and weaknesses) and external factors (e.g., threats and opportunities pertaining to macroeconomic trends and changes in the competitive landscape). The challenge becomes especially daunting as new AI products fight for space in the product portfolio with other essential products and initiatives (e.g., related to overdue technology migrations, modernization of user interfaces, and improvements targeting the reliability and security of core services).
Although primarily associated with the field of finance, modern portfolio theory (MPT) is a concept that relies on expected value analysis and can be used to manage AI product portfolios. In essence, MPT can help product leaders construct portfolios that combine different types of assets (products) to maximize expected returns (e.g., revenue, usage, and customer satisfaction over a future time period) while minimizing risk (e.g., due to mounting technical debt, threats from competitors, and regulatory pushback). Probabilistic thinking in the form of expected value analysis can be used to estimate expected returns and account for risks, allowing a more sophisticated, data-driven assessment of the portfolio’s overall risk-return profile; this assessment, in turn, can lead to actionable recommendations for optimally allocating resources across the different products.
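For a flavor of the underlying computation, here is a minimal MPT-style sketch: given assumed expected returns, risks, and correlations for three hypothetical products, it evaluates the risk-return profile of one candidate resource allocation:

```python
# Minimal MPT-style sketch for a product portfolio.
# Expected returns, volatilities, and correlations are assumed figures.
import numpy as np

products = ["core platform", "GenAI assistant", "legacy tool"]
expected_returns = np.array([0.08, 0.25, 0.02])  # assumed annual returns
volatilities = np.array([0.10, 0.40, 0.05])      # assumed risk (std. dev.)
correlations = np.array([                        # assumed pairwise correlations
    [1.0, 0.2, 0.5],
    [0.2, 1.0, 0.1],
    [0.5, 0.1, 1.0],
])
covariance = np.outer(volatilities, volatilities) * correlations

weights = np.array([0.5, 0.3, 0.2])  # candidate resource allocation

portfolio_return = weights @ expected_returns
portfolio_risk = np.sqrt(weights @ covariance @ weights)
print(f"Expected portfolio return: {portfolio_return:.1%}")
print(f"Portfolio risk (std. dev.): {portfolio_risk:.1%}")
```

Sweeping over many candidate weight vectors (or solving the optimization directly) traces out the efficient frontier, from which leaders can pick the allocation that best matches their risk tolerance.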
See this video for a deeper explanation of MPT:
Countering Behavioral Biases
Suppose you have won a game and are presented with the following three prize options: (1) a guaranteed $100, (2) a 50% chance of winning $200, and (3) a 10% chance of winning $1100. Which prize would you choose, and how would you rank the prizes overall? Whereas the first prize guarantees a certain return, the latter two come with varying degrees of risk. However, the expected return of the second prize is $200*0.5 + $0*0.5 = $100, so we ought to (at least in theory) be indifferent to receiving either of the first two prizes; after all, their expected returns are the same. Meanwhile, the third prize offers an expected return of $1100*0.1 + $0*0.9 = $110, so clearly, we should (in theory) choose this prize option over the others. In terms of ranking, we would give the third prize option the top rank, and jointly give the other two prize options the second rank. Readers who wish to gain a deeper understanding of the above discussion are encouraged to review the theory section and selected case studies in this article.
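For readers who prefer code to arithmetic, the same comparison takes only a few lines:

```python
# The three prize options from the example above, ranked by expected value.
prizes = {
    "A: guaranteed $100":     [(1.0, 100)],
    "B: 50% chance of $200":  [(0.5, 200), (0.5, 0)],
    "C: 10% chance of $1100": [(0.1, 1100), (0.9, 0)],
}

ranked = sorted(
    prizes.items(),
    key=lambda item: -sum(p * payoff for p, payoff in item[1]),
)
for name, outcomes in ranked:
    ev = sum(p * payoff for p, payoff in outcomes)
    print(f"{name}: EV = ${ev:.0f}")
# Prints C ($110) first; A and B tie at $100.
```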
The preceding analysis assumes that we are what economists might refer to as perfectly rational agents, always making optimal choices based on the available information. But in reality, of course, we tend to be anything but perfectly rational. As human beings, we are plagued by a number of so-called behavioral biases (or cognitive biases), which — despite their potential evolutionary rationale — can often impair our judgment and lead to suboptimal decisions. One important behavioral bias that may have affected your choice of prize in the above example is called loss aversion: our greater sensitivity to losses than to equivalent gains. Since the first prize option represents a certain gain of $100 (i.e., no feeling of loss), whereas the third prize option comes with a 90% probability of gaining nothing, loss aversion (closely related to risk aversion) may lead you to opt for the first — theoretically suboptimal — prize option. In fact, even the way the prize options are framed or presented can affect your decision. Framing the third prize option as “a 10% chance of winning $1100” may make it seem more attractive than framing it as “a 90% risk of getting nothing and a 10% chance of getting $1100,” since the latter framing suggests the possibility of a loss (compared to the guaranteed $100), and makes no explicit mention of “winning.”
Guarding against suboptimal decisions resulting from behavioral biases is vital when developing and executing a sound AI strategy, especially given the hype surrounding generative AI since ChatGPT was released to the public in late 2022. Nowadays, the topic of AI has board-level attention at companies across industry sectors, and calling a company “AI-first” is likely to boost its stock price. The potentially game-changing impact of AI (which could significantly bring down the cost of creating many goods and services) is often compared to pivotal moments in history such as the emergence of the Internet (which reduced the cost of distribution) and cloud computing (which reduced the cost of IT ownership). The hype around AI, even if it may be justified in some cases, puts tremendous pressure on decision makers in leadership positions to jump on the AI bandwagon despite often being ill-prepared to do so effectively. Many companies lack access to the kind of data and AI talent that would let them build competitive AI products. Piggybacking on third-party providers may seem expedient in the short term, but entails long-term risks due to vendor lock-in.
Against this backdrop, company leaders can use probabilistic thinking — and the concept of expected value, in particular — to counter common behavioral biases such as:
- Herd mentality: Decision makers tend to follow the crowd. If a CEO sees her counterparts at other companies making substantial investments in generative AI, she may feel compelled to do the same, even though the risks and limitations of the new technology have not been thoroughly evaluated, and her product teams may not yet be ready to properly take on the challenge. This bias is closely related to the so-called fear of missing out (FOMO). Product leaders can help steer colleagues in the C-suite away from potentially misguided, FOMO-driven “follow the herd” decisions by arguing in favor of creating a diverse set of real options and prioritizing these options based on expected value.
- Overconfidence: Product leaders may overestimate their ability to predict the success of new AI-powered products. They might think that they understand the underlying technology and the likely receptiveness of customers to the new AI products better than they actually do, leading to unwarranted confidence in their investment decisions. Overconfidence can lead to excessive risk-taking, especially when dealing with unproven technologies such as generative AI. Expected value analysis can help temper this confidence and lead to more prudent decision making.
- Sunk cost fallacy: This logical fallacy is often referred to as “throwing good money after bad.” It happens when product leaders and teams believe that past investments in something justify additional future investments, even if the return on all these investments may be negative. For example, product leaders today may feel compelled to allocate more and more resources to products built using generative AI, even though the expected returns may be negative due to issues related to hallucinations, data privacy, safety, and security. Thinking in terms of expected value can help guard against this fallacy (see the toy sketch after this list).
- Confirmation bias: Company leaders and managers may tend to seek out information that confirms their existing beliefs, leaving them blind to vital information that might counter these beliefs. For instance, when evaluating (generative) AI, product managers might selectively focus on success stories and findings from user research that align with their preconceptions, making it harder to objectively assess limitations and risks. By analyzing the expected value of AI investments, product managers can challenge unfounded assumptions, and make rational decisions without being swayed by prior beliefs or selective information. Crucially, the concept of expected value allows beliefs to be updated based on new information and encourages a prudent, long-term view of decision making.
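To make the sunk cost point from the list above concrete, here is a toy comparison with hypothetical figures; note that the money already spent never enters the decision:

```python
# Toy illustration of resisting the sunk cost fallacy (figures invented).
# A rational decision depends only on *future* expected values.

sunk_cost = 3_000_000  # already spent and irrecoverable either way

# Assumed scenarios if the team keeps investing: (probability, future net payoff).
continue_scenarios = [(0.25, 2_000_000), (0.75, -1_000_000)]
ev_continue = sum(p * payoff for p, payoff in continue_scenarios)

ev_stop = 0  # assume stopping incurs no further cost

decision = "continue" if ev_continue > ev_stop else "stop"
print(f"EV(continue) = ${ev_continue:,.0f}, EV(stop) = ${ev_stop:,}")
print(f"Rational decision: {decision} (the sunk ${sunk_cost:,} is irrelevant)")
```

Here the expected value of continuing is negative, so the rational move is to stop, regardless of how much has already been invested.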
See this Wikipedia article for a more exhaustive list of such biases.
The Wrap
As this article demonstrates, probabilistic thinking in terms of expected values can help shape a company’s AI strategy in several ways, from discovering real options and constructing robust product portfolios to guarding against behavioral biases. The relevance of probabilistic thinking is perhaps not entirely surprising, given that most companies today operate in a so-called “VUCA” business environment, which is characterized by varying degrees of volatility, uncertainty, complexity, and ambiguity. In this context, expected value analysis encourages decision makers to recognize and quantify the uncertainty of future pay-offs, and act prudently to capture value while mitigating risks. Overall, probabilistic thinking as a strategic toolkit is likely to gain importance in a future where uncertain technologies such as AI play an outsized role in shaping company growth and shareholder value.