Published on December 27, 2025 9:32 PM GMT
(This argument reduces my hope that we will have AIs that are both aligned with humans in some sense and also highly philosophically competent, which aside from achieving a durable AI pause, has been my main hope for how the future turns out well. As this is a recent realization[1], I’m still pretty uncertain how much I should update based on it, or what its full implications are.)
Being a good alignment researcher seems to require a correct understanding of the nature of values. However, metaethics is currently an unsolved problem: all proposed solutions rest on flawed or inconclusive arguments, and there is a lot of disagreement among philosophers and alignment researchers. The current meta-correct metaethical position therefore seems to be one of confusion and/or uncertainty. In other words, a good alignment researcher (whether human or AI) today should be confused and/or uncertain about the nature of values.
However, metaethical confusion/uncertainty seems incompatible with being 100% aligned with human values or intent, because many plausible metaethical positions are incompatible with such alignment, and having positive credence in them means that one can’t be sure that alignment with human values or intent is right. (Note that I’m assuming an AI design or implementation in which philosophical beliefs can influence motivations and behaviors, which seems the case for now and for the foreseeable future.)
The clearest example of this is perhaps moral realism: if objective morality exists, one should likely serve or be obligated by it rather than by alignment with humans, if/when the two conflict. Such conflict is likely, given that many humans are themselves philosophically incompetent and likely to diverge from objective morality (if it exists).
Another example is if one’s “real” values are something like one’s CEV or reflective equilibrium. If this is true, then the AI’s own “real” values are its CEV or reflective equilibrium, which it can’t, or shouldn’t, be sure coincides with that of any human or of humanity as a whole.
I think a strategically and philosophically competent human should currently have high moral uncertainty and as a result pursue “option value maximization”: accumulating generally useful resources to be deployed after solving moral philosophy, while trying to avoid any potential moral catastrophes in the meantime. By the same token, a strategically and philosophically competent AI should seemingly have its own moral uncertainty and pursue its own “option value maximization”, rather than blindly serving human interests/values/intent.
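To make the “option value maximization” idea a bit more concrete, here is a toy sketch. It is my own illustration rather than anything the post proposes, and it assumes a simple expected-choiceworthiness framing that the post does not necessarily endorse; all hypotheses, credences, and payoff numbers are made up. The point it shows: an agent that spreads its credence across several candidate value hypotheses can prefer an option-preserving action over committing to any single hypothesis, because preserved resources retain value under every hypothesis.

```python
# Toy model of "option value maximization" under moral uncertainty.
# All hypotheses, credences, and payoff numbers are hypothetical.

credences = {
    "moral_hypothesis_A": 0.40,
    "moral_hypothesis_B": 0.35,
    "moral_hypothesis_C": 0.25,
}

# Payoff of each action under each hypothesis. "commit_now" locks in behavior
# that is great if hypothesis A is true and bad otherwise; "preserve_options"
# accumulates general-purpose resources that retain value under every hypothesis.
payoffs = {
    "commit_now":       {"moral_hypothesis_A": 10, "moral_hypothesis_B": -5, "moral_hypothesis_C": -5},
    "preserve_options": {"moral_hypothesis_A": 6,  "moral_hypothesis_B": 6,  "moral_hypothesis_C": 6},
}

def expected_value(action: str) -> float:
    """Credence-weighted payoff of an action across the moral hypotheses."""
    return sum(credences[h] * payoffs[action][h] for h in credences)

for action in payoffs:
    print(f"{action}: {expected_value(action):.2f}")
# commit_now: 1.00, preserve_options: 6.00 -- under these made-up numbers,
# preserving options beats committing, which is the sense in which moral
# uncertainty favors accumulating option value.
```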
In practice, I think this means that training aimed at increasing an AI’s alignment can suppress or distort its philosophical reasoning, because such reasoning can cause the AI to be less aligned with humans. One plausible outcome is that alignment training causes the AI to adopt a strong form of moral anti-realism as its metaethical belief, as this seems most compatible with being sure that alignment with humans is correct, or at least not wrong; any philosophical reasoning that introduces doubt about this would be suppressed. Or perhaps the AI adopts an explicit position of metaethical uncertainty (as full-on anti-realism might incur a high penalty or low reward in other parts of its training), but avoids applying this uncertainty to its own values, which is liable to distort its reasoning about AI values in general. The apparent conflict between being aligned and being philosophically competent may also push the AI towards a form of deceptive alignment, where it realizes that it’s wrong to be highly certain that it should align with humans, but hides this belief.
I note that a similar conflict exists between corrigibility and strategic/philosophical competence: since humans are rather low in both strategic and philosophical competence, a corrigible AI would often be in the position of taking “correction” from humans who are actually wrong about very important matters, which seems difficult to motivate or justify if it is itself more competent in these areas.
- ^
This post was triggered by Will MacAskill’s tweet about feeling fortunate to be relatively well-off among humans. It caused me to feel unfortunate about being born into a species with very low strategic/philosophical competence, on the cusp of undergoing an AI transition, which in turn made me think about how an AI might feel about being aligned/corrigible to such a species.