Main
Climate misinformation1,2 exploits the existence of uncertainty in climate science to sow doubt about how much scientists agree on climate science3,4,5,6,7 and fuel climate change scepticism8. We complement that perspective by proposing that some forms of uncertainty communication can be a pernicious source of perceived disagreement, while others can foster more positive perceptions. We suggest that the negative verbal probabilities prescribed by the Intergovernmental Panel on Climate Change (IPCC) to communicate low-probability outcomes might inadvertently sow doubt about their evidential and consensual base.
The IPCC has standardized guidelines for communicating uncertainty. They prescribe the use of a set of probability terms (also called verbal probabilities) that are associated with numerical probability bands9,10,11. Like many international organizations (for example, the European Food Safety Agency (EFSA)12, the North Atlantic Treaty Organization (NATO)13 and the Financial Accounting Standards Board (FASB)14), the IPCC mandates the use of negative verbal expressions to characterize probabilities in the lower part of the scale (for example, ‘unlikely’ for <33%). We posit that this terminology is not ideal because of connotations implied by negative terms.
Verbal probabilities can have either a positive (affirming) or a negative (refuting) directionality. Positive verbal probabilities (for example, ‘a chance’) direct attention towards the possibility that an event might happen, whereas negative verbal probabilities (for example, ‘unlikely’) suggest that the event might not happen after all—even when they convey a similar probability15,16. One way to determine the directionality of a verbal probability is to ask for reasons in a sentence completion task. Statements including positive verbal probabilities, such as ‘There is a chance of flood, because …’, will normally be completed with supporting or enabling reasons for why a flood might occur (for example, ‘it has rained a lot’). In contrast, a negative verbal probability, such as ‘It is unlikely there will be a flood, because …’ will be associated with disabling circumstances (for example, ‘it has rained less than last year’). Negative directionality is often, but not exclusively, determined by the presence of a negation (as in ‘unlikely’). For example, ‘the likelihood is low’ (suggested by the IPCC as an alternative to ‘unlikely’)9 does not have a negation but would also elicit reasons against the event occurring.
It might seem sensible to use negative verbal probabilities for low-probability outcomes (<50%). However, in many domains, low-probability events are those for which preparedness is most needed17. A 20% chance might be ‘small’ technically and might seem inconsequential for mundane forecasts, but probabilities of this magnitude are not to be neglected for high-impact events18. Calling such events ‘unlikely’ might therefore be misguided: it can undermine people’s awareness that they could happen and obstruct decisions to prepare against these risks16,19,20. For example, in a computer game, people who were told that a landslide was ‘unlikely’ were less likely to evacuate the area than when the same landslide risk was described as having ‘a small probability’ of occurring19. The effect of directionality is akin to its better-known cousin, the attribute framing effect, where logically equivalent information leads to different decisions (for example, a 20% chance of success versus an 80% chance of failure)21.
In addition to shaping attention and decisions, negative verbal probabilities might suggest a lack of scientific consensus. In everyday discourse, verbal probabilities with a negative directionality are typically used to voice disagreement with an over-confident speaker. For instance, a 30% probability was more often described negatively (‘it is unlikely’) when following a 50% estimate, but positively as ‘a chance’ when following a 10% estimate22. It is hence plausible that negative verbal probabilities would suggest contention, whereas positive ones, even those that indicate small chances, might emphasize that P > 0 and reflect scientific consensus. A corollary of this hypothesis about scientific consensus is that events described with negative verbal probabilities might not be perceived as being well grounded in scientific evidence, making them less trustworthy23.
Negative directionality might indicate dissent (and poor data) because speakers often use negative verbal probabilities, such as ‘unlikely’, to describe outcomes that are so extreme that they are not supported by existing data24,25. For instance, when asked what is ‘unlikely’ in a distribution of quantitative outcomes (for example, sea-level rise and temperature rise), people tend to select extreme outcomes from beyond the range of possible values24,25,26—whereas they associate positive directionality with less extreme outcomes24,27,28.
In eight experiments, we demonstrate that the IPCC negative uncertainty terminology inadvertently undermines the perception of scientific consensus, relative to a positive uncertainty terminology (preregistrations, materials, data and analytical scripts accessible at https://osf.io/ch4wf/; ref. 29).
Negative terms imply disagreements and extreme outcomes
In experiment 1, a total of 301 UK residents allocated to four conditions completed an online questionnaire about verbal forecasts of various climate events (full description in Supplementary Information Section A). The forecasts included two negative verbal probabilities recommended by the IPCC for low probabilities (‘unlikely’ and ‘the likelihood is low’)9 and two alternatives (‘there is a small probability’ and ‘there is a small possibility’) assumed to be positive. We expected the four verbal probabilities to convey the same numeric probability but to elicit different attention focus. An additional task (shown in Extended Data Fig. 1) tested whether the two negative verbal probabilities were more often associated with a fringe climate change outcome (from beyond the range of possible outcomes) and a final task examined whether climate scientists using positive or negative verbal probabilities were perceived to agree or disagree with their colleagues in a round-table discussion. In this and the following studies, we report analyses of variance, χ2 tests of independence and independent t-tests, with two-tailed P values for tests comparing two values.
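As an illustration of the χ2 test of independence and the φ effect size used throughout, the following is a minimal pure-Python sketch. The counts are hypothetical, chosen only to show the computation; they are not the study data.

```python
def chi_square_independence(table):
    """Pearson chi-square test of independence for a 2D contingency table.

    Returns the chi-square statistic and the phi coefficient
    (phi = sqrt(chi2 / N), the effect size reported in the text).
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    phi = (chi2 / n) ** 0.5
    return chi2, phi

# Hypothetical 2x2 table: rows = phrase directionality (negative/positive),
# columns = reason chosen in the completion task (con/pro)
chi2, phi = chi_square_independence([[90, 10], [20, 80]])
```

In practice a library routine (for example, scipy.stats.chi2_contingency) would also return degrees of freedom and a P value; the sketch above only shows where the statistic and effect size come from.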
The results show that, as expected, all four verbal probabilities were estimated to indicate probabilities in the 10–30% range, peaking around 15–20% (Table 1). An analysis of variance (ANOVA) revealed no statistically significant difference, F(3, 300) = 2.16, P = 0.093, ηp2 = 0.02. The presumed negative verbal probabilities (‘It is unlikely [the likelihood is low] that the area will be flooded, because …’) were mostly explained by con-reasons (‘It is far from the river’), whereas the two presumed positive phrases were mostly completed by pro-reasons (‘It is close to the river’), χ2(3) = 104.93, P < 0.001, φ = 0.59 (Table 1, upper panel). Despite conveying a similar probability, the two negative verbal probabilities called attention to the non-occurrence of the target outcome, whereas the two positive ones attracted readers’ attention towards its occurrence.
The consensus inference task showed that, when set in a hypothetical round-table discussion, a scientist using the negative probability statements describing temperature rise was perceived as indicating disagreement more often than a scientist using either of the two positive probability statements, χ2(6) = 21.12, P = 0.002, φ = 0.27, V = 0.19 (Table 1).
In the outcome selection task, participants completed the probability statement with an outcome value taken from a graphical display of sea-level rise projections ranging from 0.3 m to 1 m. The outcome chosen was categorized as moderate (within the boundaries), minimum/maximum (at the boundaries) or as extreme (beyond the range). The positive versus negative verbal probability conditions differed, χ2(6) = 36.98, P < 0.001, φ = 0.35, V = 0.25. When asked which sea-level rise magnitude was ‘unlikely’, most participants selected outcomes outside the confidence intervals of the climate models shown in the graph (over 1 m, or below 0.1 m). Such outcomes were also frequently selected for ‘the likelihood is low’ (Table 1), but not for ‘a small probability’ or ‘a small possibility’ (Supplementary Information Section B).
Experiment 2 aimed to replicate the effect of directionality on outcome selection and consensus perception. It also aimed to test the relationship between extreme outcome selection and judgements of consensus while controlling for graph comprehension. A sample of 481 UK residents completed the same four tasks as in experiment 1, randomly allocated to one of two conditions: positive (‘there is a small possibility’) or negative (‘it is unlikely’). To simplify the outcome selection task, we included a graph explicitly showing the number of models supporting each sea-level rise value25 and we measured graph comprehension using three questions (Extended Data Fig. 2).
Again, both verbal phrases indicated probabilities around 20%, with ‘unlikely’ conveying slightly higher values than ‘there is a small possibility’, t(479) = 3.04, P = 0.003, Cohen’s d = 0.28 (lower panel of Table 1). Despite this, most participants associated ‘unlikely’ with reasons against the event occurrence, whereas ‘a small possibility’ was explained with reasons supporting the event occurrence, χ2(1) = 293.72, P < 0.001, φ = 0.78 (also Table 1).
Participants believed that a scientist describing a temperature rise as ‘unlikely’ disagreed with other scientists more often than when a scientist used ‘a small possibility’, χ2(2) = 45.77, P < 0.001, φ = 0.31. Furthermore, most participants in the negative condition associated ‘unlikely’ with extreme values (that did not occur in any models), whereas only 1 in 10 did so for ‘a small possibility’, χ2(2) = 136.08, P < 0.001, φ = 0.53. Importantly, the tendency to select extreme outcomes and to perceive disagreements were related. Of people who chose an outcome that had occurred in the models, 30% believed the speaker disagreed, against 46% of participants who selected an out-of-range value, χ2(4) = 22.08, P < 0.001, φ = 0.21.
Most participants (86%) correctly answered the three graph comprehension questions and could accurately identify the number of models showing different sea-level rises. Yet they selected extreme values as often as those who failed one or more questions. More detailed results can be found in Supplementary Information Section C.
The robustness of these findings was tested in two further experiments (experiments SI-A and SI-B) reported in Supplementary Information Sections D and E, respectively. Experiments SI-A and SI-B included a wider range of negative and positive verbal probabilities (for example, unlikely, the likelihood is low, the chance is small versus a small probability, a small chance, a small possibility), with similar results.
Negative terms suggest lower consensus and poorer evidence
The setting in the previous experiments, an imagined round-table discussion, might have facilitated perceived disagreement between adversaries. In experiment 3, climate projections were embedded in newspaper headlines about two future climate events: temperature increase in Australia and precipitation decrease in Africa (for example, ‘Climate scientist says: there is a small possibility [it is unlikely] that by 2050…’). A sample of 414 UK residents assessed the perceived scientific consensus (‘How many out of 100 climate scientists would agree?’) and the perceived strength of supporting evidence (‘How solid is the evidence for such a forecast?’).
Perceived consensus was higher for positive than for negative verbal probabilities for both the temperature and precipitation projections, t(399.37) = 11.89, P < 0.001, Cohen’s d = 1.17, t(412) = 2.18, P = 0.030, Cohen’s d = 0.21, respectively (Fig. 1). Furthermore, consistent with our expectations, participants believed that the negative probability projections relied less on scientific evidence than the positive ones. This difference was statistically significant in the temperature rise scenario, but not in the precipitation decrease scenario, t(404.38) = −7.85, P < 0.001, Cohen’s d = 0.77, t(412) = −1.62, P = 0.107, Cohen’s d = −0.16. The weaker effects in this scenario could be due to its content (precipitation decrease might sound more disputable than an increase in temperatures worldwide—see experiment 5 about the role of context).
Fig. 1: Ratings of consensus and scientific evidence for negative and positive verbal probability projections of temperature and precipitation (experiment 3, N = 414).
a,b, Consensus perception (0–100 scientists) and scientific evidence judgements (0, no objective evidence; 100, fully based on objective evidence) are shown for statements about temperature rise (a) and precipitation decrease (b). ****P < 0.0001, *P < 0.05, not significant (NS) P > 0.05 in independent t-tests. Bars show means, error bars show 95% CI and points show participants’ responses.
Complementary likely framing raises perceived consensus
To avoid the adverse effect of negative directionality, communicators could instead focus on the complementary high-probability (‘likely’) event, which would be expected to lead to better consensus perception. Instead of saying that a very high temperature increase is unlikely, one might say that a lower increase is likely. Examining the sixth IPCC report, we find the term ‘likely’ used 26 times more often than ‘unlikely’ (Supplementary Information Section F).
Participants in experiment 4 (N = 497 UK residents, representative for age, gender and ethnicity) reported their personal beliefs in climate change and judged outcomes based on two climate projections—temperature rise based on low GHG emissions and sea-level rise based on medium GHG emissions. The unlikely projections were taken from one tail of the distribution (more than 2 °C temperature increase and less than 0.5-m sea-level rise) and the likely ones from the middle and other tail of the distribution (less than 2 °C temperature increase and more than 0.5-m sea-level rise).
Participants believed that more scientists would agree with the ‘likely’ than the ‘unlikely’ projections and that the former also relied more strongly on scientific evidence, despite the formal equivalence of both statements (Fig. 2). Analyses of variance showed a main effect of directionality on perception of consensus and evidence, F(1, 495) = 51.56, P < 0.001, ηp2 = 0.09; F(1, 495) = 27.33, P < 0.001, ηp2 = 0.05. These effects remained after adding participants’ climate change beliefs as a covariate (Supplementary Information Section F). We also identified a main effect of the climate outcome (judgements were more negative for the low-emission pathway temperature projections than the medium-emission pathway sea-level rise) and a predicted interaction effect between verbal probabilities and the climate change outcome (Supplementary Information Section F).
Fig. 2: Ratings of consensus and scientific evidence for negative and positive verbal probability projections (experiment 4, N = 497).
a,b, Consensus perception (0–100 scientists) and scientific evidence judgements (0, no objective evidence; 100, fully based on objective evidence) are shown for statements about temperature rise (a) and sea-level rise (b). ****P < 0.0001, **P < 0.01 and *P < 0.05 in independent t-tests. Bars show means, error bars show 95% CI and points show participants’ responses.
Positive terms and numerical ones raise perceived consensus
Experiment 5 tested (1) the robustness of verbal directionality effects for projections described as issued by the IPCC; (2) the drawback of projections reframed as ‘likely’; and (3) the directionality of numerical probability expressions (expected to be positive). Overall, 802 participants from a UK representative sample (quota sampling for gender, age and political preference) read and judged one of four temperature projections, all stated to be issued by the IPCC: an ‘unlikely’ and a ‘small probability’ projection of warming reaching 3 °C, a corresponding numerical 10–33% probability projection and a ‘likely’ projection of temperatures not reaching 3 °C. Participants evaluated consensus (out of 100 scientists), attention focus (occurrence versus non-occurrence) and degree of concern.
Participants estimated different levels of consensus across projections, F(3, 801) = 31.73, P < 0.001, η2 = 0.11 (Table 2). They inferred more consensus for the ‘small probability’ and the numerical probability projections than for the ‘unlikely’ and ‘likely… not’ projections (pairwise comparisons in Supplementary Information Section G). These effects were independent of individual differences in knowing the IPCC, education level and political conservatism (Supplementary Information Section G).
Reported outcome focus was clearly dependent on the wording of the statement, χ2(2) = 135.06, P < 0.001, φ = 0.41 (Table 2). ‘A small probability’ and the 10–33% numerical probability directed attention to the possibility that the outcome might happen, whereas the ‘unlikely’ and ‘likely… not’ pointed to the possibility that it might not happen.
Contrary to our expectations, levels of concern caused by the projections did not differ across conditions, F(3, 801) = 1.13, P = 0.337, ηp2 < 0.01 (Table 2). One participant commented that the negative frame could cause alarm by suggesting denial of the severity of the event. Hence, it is possible that positive and negative verbal probabilities could cause alarm for different reasons.
Prior beliefs as boundary conditions
Experiment 6 tested whether the effects of directionality on perceived consensus, evidence support and climate change concerns are moderated by prior beliefs and knowledge about climate change in a sample of 873 UK residents (representative based on quota sampling for gender, age and political voting preference). To introduce variation in participants’ familiarity and prior beliefs, we included projections about temperature and winter precipitation changes, with the former expected—and found—to be more familiar. In addition, in a between-subjects design, we manipulated the GHG emission levels underpinning the projections. Low-emission projections were expected to clash more with participants’ beliefs than high-emission projections—which they did. We report baseline perceptions in Supplementary Information Section H.
For the temperature projection (Fig. 3, upper panel), directionality had the expected effect on consensus, scientific evidence and concerns (Table 3). Participants perceived higher consensus and scientific evidence and reported greater concerns about climate change in the positive than in the negative condition. The effect of directionality was larger in the low GHG emission than in the high-emission scenario condition (where it was not significant). For UK winter precipitation, the effect of directionality was smaller and only statistically significant for concerns (Table 3). These variations can be explained by participants’ baseline expectations about the outcomes. Negative (versus positive) directionality led to more negative perceptions when the projected values were believed to actually be likely to occur (for example, participants expected temperatures to warm more than the low GHG estimates). Projected values that were higher than participants’ expectations (for example, based on high GHG emissions) showed smaller or null effects of directionality.
Fig. 3: Perceived consensus, evidence and concerns for positive and negative climate projections about temperature and winter precipitation in experiment 6 (N = 872 participants).
a,b, Consensus perception (0–100 scientists) (i), scientific evidence judgement (0, no objective evidence; 100, fully based on objective evidence) (ii) and concerns (1, not at all; 4, completely) (iii) for temperature projections (ai–aiii) and winter precipitation projections (bi–biii) based on low versus high greenhouse gas (GHG) emission scenarios. ****P < 0.0001, *P < 0.05, not significant (NS) P > 0.05 in independent t-tests. Bars show means, error bars show 95% CI and points show participants’ responses.
Discussion
Uncertainty is inherent to scientific endeavours and how it should be communicated to the public represents an important challenge8. It should not fuel false beliefs about a lack of agreement between climate scientists30,31,32. A body of findings suggests that admitting uncertainty (versus appearing confident) suggests a lack of competence33,34,35,36, whereas other studies indicate that admitting uncertainty might actually increase people’s trust in scientists37,38. We show here that how uncertainty is communicated plays an important role in shaping these effects.
Consistent with past research15,16,19, we found that negatively framed expressions, such as the IPCC-recommended terms ‘unlikely’ and ‘the likelihood is low’9, divert people’s attention away from the occurrence of the target outcome and make it appear insignificant. A low-probability but potentially impactful hazard described negatively may accordingly be downplayed as an event that will not happen (experiments 1, 2 and 4) and/or can be taken lightly (experiment 5). This dismissal is reinforced by the implied extremity of the outcome: prototypical ‘unlikely’ outcomes are pictured as exceptional (experiments 1 and 2 and experiment SI-A of Supplementary Information Section D). Importantly, we provide evidence that, in addition, ‘unlikely’ projections are perceived as expressing more dissent (experiments 1–6 and experiments SI-A and SI-B of Supplementary Information Sections D and E) and are believed to be less based on solid scientific evidence (experiments 2, 3 and 5) than the same projections expressed in positive, affirmative terms. This adds to other drawbacks of negations, which have previously been associated with wider variability in interpretation39 and less cautious decisions19.
Results also addressed the role of personal beliefs. While ‘unlikely’ projections consistently shifted attention away from the outcome and reduced concerns, their effect on perceived consensus and scientific evidence was more dependent on context and prior beliefs. Projections that were far off from people’s prior beliefs and expectations were less affected by how low probabilities were phrased (experiment 6). Self-assessed knowledge about climate change did not attenuate the directionality effects in a sample of UK adults, but further research should test whether these effects generalize to climate scientists and policy-makers.
Instead of describing unexpected outcomes or rare events as ‘unlikely’, climate scientists could adhere to IPCC guidelines by characterizing their complements as being ‘likely’. This seems to be the frame preferred by IPCC authors9 who used ‘likely’ much more frequently than ‘unlikely’. However, this usage would not help raise people’s awareness and concerns about such potentially important issues (experiments 4 and 5) and it might even lead to lower perceived consensus when it features a negation (experiment 5), or to an increase in uncertainty when focusing on a wider estimate33.
We recommend using affirmative low-probability expressions (for example, ‘a low probability of severe drought’) whenever possible and favouring them over high-probability expressions of outcome complements (‘a high probability that severe drought will not occur’). Affirmative low-probability terms refer directly to the target event, rather than negating it, aligning more closely with numeric probability ranges (10–33% probability), which are often regarded as more scientific and trustworthy40,41.
Our studies showcase the importance of studying verbal uncertainty expressions beyond their quantitative interpretations42,43. The implicit meaning of negative (versus positive) verbal phrases tends to be overlooked, but it might include information about the speaker’s attitude and recommendation16 or about the nature of the event44,45. Negative verbal probabilities are not inherently inappropriate or misleading. They could be effective in conveying that a claim lacks support and can be put aside (for example, ‘it is unlikely that the current climate change is natural’). It is essential to consider and explore pragmatic aspects of our estimative lexicon when developing scales and guidelines. Verbal probabilities are a toolbox, from which science communicators should choose carefully.
Communicating clearly how much climate change scientists agree about different possible outcomes is essential to inform citizens and policy-makers. Uncertainty should not fuel scepticism46, but serve as an informative and useful component of risk communication. We provide evidence that by using negative verbal probabilities, the IPCC and other organizations (for example, EFSA12, NATO13 and FASB14) might unknowingly undermine the information they aim to convey. When communicating low-probability outcomes that are supported by scientific evidence, positive verbal probabilities should be preferred, as they are more likely to signal stronger scientific consensus and evidential support.
Methods
All the studies were preregistered. Preregistrations, materials, data and analytical code are available on the Open Science Framework at https://osf.io/ch4wf/ (ref. 29).
Method experiment 1
Participants
Overall, 301 UK residents were recruited from Prolific for an online questionnaire, all of whom completed it fully and met the preregistered minimal completion time (>1.5 min for a study with a median completion time of 5 min). The sample size provides sufficient power to detect a small–medium effect in a chi-square test comparing four groups with 90% power (alpha = 5%, Cohen’s w = 0.21, two-sided test). Participants were between 18 and 76 years old (M = 39.53, s.d. = 13.13); 50% were men, 49% women, 1% non-binary and 0.3% preferred not to say. Information about education, ethnicity and English proficiency is provided in Supplementary Information Section A.
Design, materials and procedure
Participants read projections concerning low-probability climate-related outcomes in four separate tasks (a probability task, a directionality task, an outcome selection task and a consensus task). In each task, climate change-related outcomes were described with one of four verbal probabilities presumed to be either negative or positive (between-subjects random allocation within each task):
Negative verbal probabilities
It is unlikely that…
The likelihood is low that…
Positive verbal probabilities
There is a small probability that…
There is a small possibility that…
The two verbal probabilities assumed to be negative are both recommended by the IPCC to describe probabilities in the 0–33% range9. The ‘low likelihood’ terminology was introduced in the 2021 IPCC report9, where it was used more often than ‘unlikely’ (115 versus 99 times). ‘There is a small probability’ has been shown to be positive and to convey the same probability as ‘unlikely’19. Possibilities and probabilities are not the same47, but when graded, ‘There is a small possibility’ was expected to convey a similar expectation as ‘a small probability’ and to retain a positive directionality.
Probability task
The assumed probabilistic equivalence was tested by asking participants to ‘translate’ the verbal phrases into corresponding numeric probabilities. Each participant provided their judgements about one of the four phrases (random allocation) on a sliding scale ranging from 0% (labelled ‘impossible’) to 100% (‘certain’).
Directionality task
Directionality was tested by sentence completion, asking participants to select the most appropriate of two opposite reasons, related to a flood forecast. The statement to be explained included one of the four verbal probabilities (random allocation). For instance, ‘It is unlikely that the area will be flooded, because…’. Participants could choose a reason supporting the occurrence of flood (‘It is close to the river’), or a reason against flood (‘It is far from the river’). Supporting reasons indicated that the verbal probability had a positive (affirmative) directionality, whereas reasons against indicated a negative directionality. Then, participants read the consensus and outcome tasks, presented in random order.
Consensus task
Participants were asked to imagine a TV round-table discussion with several climate scientists. One of them stated how much warmer the climate would be in the year 2100, saying: ‘It is unlikely [the likelihood is low] [there is a small probability] [there is a small possibility] that temperature will rise 3 degrees Celsius’. On the basis of this statement, participants assessed whether this scientist agreed with the other scientists. Participants could answer that the scientist probably ‘agreed’, ‘disagreed’ or that it was ‘impossible to say’.
Outcome selection task
Participants read about projected sea-level rise between 2000 and 2100, accompanied by the IPCC Figure SPM.6 sea-level rise projections48 shown in Extended Data Fig. 1. The graph shows mean estimates of sea-level rise along with their frequencies in the projections (ranging from 0.25 m to 0.95 m). Participants selected an outcome value to complete a probability statement that featured one of the four verbal probabilities (for example, ‘It is unlikely that the sea level will rise by … metres’). Participants could select the outcome from a list of eight values spanning from below the range shown in the graph (0–0.2 m) up to values above the range (>1.4 m), in intervals of 0.2 m, as shown in Extended Data Fig. 1. Answers falling outside the range of the confidence intervals (≤0.2 m or ≥1 m) were coded as out-of-range, extreme values. Values from the lower and upper parts of the confidence intervals (0.2–0.4 m and 0.8–1.0 m) were coded as minimum/maximum values, whereas less extreme values within the uncertainty intervals were coded as moderate.
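The coding rule for the outcome selection task can be sketched in Python. This is a minimal illustration of the categorization as described; the function name and the handling of exact boundary values are our assumptions, since participants chose from interval midpoints rather than boundary values.

```python
def code_outcome(value_m: float) -> str:
    """Categorize a selected sea-level rise value (in metres) following the
    coding scheme described in the text. Boundary handling is an assumption.
    """
    if value_m <= 0.2 or value_m >= 1.0:
        return "extreme"            # outside the confidence intervals
    if value_m <= 0.4 or value_m >= 0.8:
        return "minimum/maximum"    # lower/upper part of the intervals
    return "moderate"               # well within the uncertainty intervals

# Midpoints of the eight response options, in 0.2-m steps from below
# the graphed range up to above it (values are illustrative midpoints)
options = [0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3, 1.5]
codes = [code_outcome(v) for v in options]
```

Applied to the option midpoints, the rule yields two minimum/maximum categories flanking the moderate middle, with everything at or beyond the interval ends coded as extreme.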
Method experiment 2
Participants
Overall, 481 participants, recruited by Prolific, completed the study fully and they all met the completion time that we deemed appropriate (>1.5 min for a study that had a median