A friend of mine was recently asked to consider the possibility that facts can change. Since she brought her thoughts to me, I’ve been thinking about the different ways in which that’s possible. For one, there’s reality and then there’s our knowledge of reality; the two needn’t coincide. While a statement like ‘facts are facts’ could mean that reality doesn’t change, what we know about reality can still change. For example, our methods of acquiring information about reality may have been flawed before and are less flawed now, so what we know about reality, i.e. our facts, changes.
A second interpretation concerns items in the political sphere that are institutional or conventional: they count as facts only by social consensus. Such facts can lose their identity as such if they lose that consensus. Some examples include money, laws, cultural norms like ‘red light means stop’, and, closer to science, the decision to use p-values as a meaningful statistical threshold; consensus among scientists as to how to define different units of measurement; and the convention of dividing time into years, months, and days.
A third interpretation is that misinformation or disinformation can get in the way of a person understanding which information is factual and which isn’t. The important thing here is that there’s still a constituency of people, the scientists, for whom some information is factual even if, for a different constituency, it isn’t.
In the first interpretation, dispute arises within the expert community as evidence changes; in the second, dispute concerns a socially instituted status across society (including experts). That is, in the first case, scientists’ own knowledge of that information can be updated, sometimes drastically. In the second case, society (including scientists) may disagree over whether a convention should count as a fact for coordination. In the third case, expert consensus holds that the information is factual but segments of the publics reject it.
These three possibilities leave behind a practical question: when we (non-experts) can’t settle a question of factuality by ourselves, whether because the evidence is evolving, because we can’t agree on conventions, or because not everyone accepts the expert consensus, on what grounds can we justifiably defer to experts? That is, what makes deference rational?
Epistemic dependence names a basic fact of modern life: for most of what we claim to know, we rely on other people’s testimony rather than our own inspection of the evidence. John Hardwig (the American philosopher also known for his ‘duty to die’ argument) has contended that this dependence isn’t a defect but a rational and defensible strategy in complex societies. For instance, individuals can’t master the mathematics of cryptography, the molecular biology of vaccines, the econometrics of inflation, and the engineering of bridges all at once — yet they still trust these fields and the recommendations of their practitioners in order to make their own decisions.
Their challenge isn’t to acquire this scientific knowledge (which is often impossible) but to develop reliable ways to distinguish trustworthy from untrustworthy sources of scientific wisdom and to design institutions that make accurate testimony likely and deception expensive. In short, Hardwig’s point is that epistemic responsibility typically involves, rather than rejects, responsible deference to scientists, with the ‘responsibility’ reinforced by the scientists’ track records, incentive structures, and the error-correcting mechanisms operating in the context of their work.
The German sociologist Max Weber’s typology of authority is relevant here because it helps structure deference. Weber drew lines between traditional authority, charismatic authority, and rational-legal authority. The authority of science aspires to the third because it’s less grounded in who speaks and more in the procedures by which statements are vetted. For instance, a research finding that survives peer review, replication attempts, and other forms of critical scrutiny post-publication bears an impersonal authority — one that doesn’t demand allegiance to a particular leader or a lineage.
This rational-legal form also defines how sanctions in science work. Retractions, loss of funding, and reputational damage follow codified rules and shared expectations of disclosure and transparency rather than express the wrath of a sovereign. The non-expert’s deference to scientific claims is thus a portable deference to procedures that the non-expert believes track the truth rather than merely reflect the social prestige of scientists. The flip side is that the non-expert must endeavour constantly to maintain these procedures.
Further, when scientific procedures are politicised or when charismatic or traditional authorities claim jurisdiction over empirical questions, the basis for deference erodes. That is to say, appeals to ‘trust science’ work only to the extent that science’s rational-legal authority remains credible.
The sociology of expertise has refined these observations by describing how expertise is distributed and recognised. In particular, the sociologists Harry Collins and Robert Evans have distinguished between contributory expertise and interactional expertise. Contributory experts can produce and evaluate new knowledge within a field; they’re called contributory because their competence lies in their ability to contribute meaningfully to research. Interactional experts can’t contribute original work but they can speak the language of the field fluently enough to engage credibly with contributory experts.
Policymakers, journalists, and ethicists embedded in laboratories often need this interactional fluency to translate findings across domains and to interrogate claims without performing the experiments themselves. This distinction helps separate legitimate from irrational deference. A well-equipped non-expert or policymaker still can’t adjudicate between competing models in climate dynamics, say, but an interactional expert should be able to parse which disagreements are little more than noise and which are symptoms of deeper methodological divides.
(Aside: The idea isn’t unlike Bora Zivkovic’s notion of journalists as bearers of ‘temporary expertise’, because the topics they’re conversant with in the interactional sense can be transient, from anthropology this week to zoology the next. But for the purpose of this post, this nuance can be set aside.)
Further, peer review, gatekeeping, and credentialing don’t only protect quality but also control who’s inside the conversation and who isn’t. These practices can devolve into exclusion and conservatism but they’re also useful to guard against diluting standards. In their paper, Collins and Evans proposed that the legitimacy of expert advice in public matters depends on both the technical adequacy of contributory experts and the social processes that connect them to decision-makers and the affected publics. And deference is both rational and democratic when those processes are transparent, include mechanisms for non-experts to challenge experts, and acknowledge uncertainty.
Robert Merton’s widely cited norms of communalism, universalism, disinterestedness, and organised scepticism underpin these arrangements. Communalism holds that scientific knowledge is a common resource and that results should be shared, methods disclosed, data made available, and so on. Universalism requires claims to be evaluated by impersonal criteria, independent of the claimant’s identity or status. Disinterestedness expects scientists to subordinate their personal or financial incentives to the pursuit of truth, to declare conflicts of interest, and to design protections against bias. Organised scepticism institutionalises doubt in the form of peer review, replication studies, and methodological criticism.
Together, these Mertonian norms offer a sort of moral economy for the production of reliable beliefs — but the trouble is that reality is almost always messier. Empirical studies often reveal ‘counter-norms’ and tensions, while competition for grants and prestige can incentivise scientists to chase hype (e.g. Brian Keating), salami-slice their results (e.g. Brian Wansink), or resort to p-hacking (e.g. Francesca Gino). Commercialisation and intellectual property regimes can restrain communalism. Social hierarchies can undermine universalism through the Matthew effect, where credit accrues to already eminent scientists. People can be insufficiently sceptical of research findings when they align with dominant paradigms or market interests.
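To make the p-hacking mechanism concrete, here’s a minimal simulation (my own illustration, not drawn from any of the cases above) of a ‘study’ in which no real effect exists, the researcher measures ten outcomes, and the result gets written up if any one outcome crosses p < 0.05. It assumes only NumPy and SciPy.

```python
# A sketch of p-hacking via multiple outcomes: every simulated study has
# zero true effect, yet reporting whichever outcome clears p < 0.05
# produces 'significant' findings far more often than the nominal 5%.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_studies = 5000   # simulated studies, each with no true effect
n_outcomes = 10    # outcomes measured per study (researcher degrees of freedom)
n_per_group = 30   # participants per arm

reported = 0
for _ in range(n_studies):
    # Treatment and control drawn from the same distribution: the null is true.
    treatment = rng.normal(0.0, 1.0, size=(n_outcomes, n_per_group))
    control = rng.normal(0.0, 1.0, size=(n_outcomes, n_per_group))
    pvals = [ttest_ind(treatment[i], control[i]).pvalue for i in range(n_outcomes)]
    if min(pvals) < 0.05:   # report the study if *any* outcome looks significant
        reported += 1

print(f"Share of null studies reporting 'significance': {reported / n_studies:.2f}")
# Expect roughly 1 - 0.95**10, i.e. about 40%, rather than 5%.
```

The point isn’t the exact number but the mechanism: selective reporting alone can fill a literature with statistically significant results that don’t reflect real effects.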
The replication crisis in parts of psychology and biomedicine also revealed how structural incentives could produce a research literature high in statistical significance but low in reliability. Yet the very diagnosis of a replication crisis also illustrates the self-correcting aspiration of the Mertonian norms: attempts at reform in the form of registered reports, data-sharing mandates, stricter statistical thresholds, and post-publication review are simply forms of organised scepticism turned inward on itself. The point isn’t that Merton’s norms are fully realised but that they set expectations against which research practice can be judged and corrected.
Taken together, these ideas suggest that epistemic dependence is unavoidable — and perhaps desirable. Authority rooted in rational-legal procedures can channel that dependence through institutions explicitly designed to reward truth and punish errors. In parallel, the sociology of expertise explains how technical competence is recognised, translated, and connected to publics, while the Mertonian norms articulate the moral constraints that make the whole arrangement credible.
When this system in toto functions well, non-experts don’t need to track every inference in a paper to hold a justified belief: it’s enough that they trust a claim has been produced in conditions that make accuracy more likely than not and that there are durable pathways for them to detect and fix mistakes. Likewise, when the system falters because incentives have become misaligned, boundaries have hardened into dogma, or norms are being honoured in the breach, deference ceases to be rational and starts to resemble a more reductive allegiance.
To be clear, two caveats are in order. First, punishing errors isn’t the essence of scientific credibility so much as transparency in the face of organised criticism. Sanctions against scientists are important to uphold incentives for them to pay attention, conduct replication studies, and disclose their methods and data — but punishment without openness can quickly become arbitrary. Second, rational deference is compatible with democratic debate about how expertise is mobilised in policy. A technically sound result can still be challenged on the grounds of the values and trade-offs involved in acting on it.
In practice, then, the non-expert’s trust is best anchored not in claims about the moral virtue of scientists or assertions that “science says something” but in the visibility of institutions that embody Mertonian norms, the availability of interactional experts who can translate and interrogate scientific knowledge, and the continuity of disciplinary mechanisms that correct errors in public view.
It follows that deference to any “alternative system” of knowledge is indefensible when it asks for authority without submitting to the same procedures that justify deference to science. The problem isn’t the origin of a claim but how tests of its reliability are governed. When the so-called “Indian knowledge system” is advanced as an epistemic substitute, for instance, it grounds authority in identity, heritage, and scriptural precedence — all bases that don’t instantiate the mechanisms that make testimony trustworthy in complex domains, including public methods, reproducible tests, data disclosure, independent scrutiny, and routine exposure to organised criticism.
Scientific authority is portable because its procedures are impersonal, i.e. a result is credible irrespective of who produced it, provided it survives scrutiny. Alternative systems invert this logic by privileging who speaks — the text, the lineage, the nation — over how claims are vetted. This inversion erodes Mertonian communalism by restricting access to methods or sources to insider circles and blunts organised scepticism by classifying critical appraisal as disloyalty. Once criticism becomes pathologised in this way, incentives to detect and report error fade and testimony ceases to be a rational basis for belief.