Published on November 10, 2025 12:33 AM GMT
Why does a water bottle seem like a natural chunk of physical stuff to think of as “A Thing”, while the left half of the water bottle seems like a less natural chunk of physical stuff to think of as “A Thing”? More abstractly: why do real-world agents favor some ontologies over others?
At various stages of rigor, an answer to that question looks like a story, an argument, or a mathematical proof. Regardless of the form, I’ll call such an answer an ontological foundation.
Broadly speaking, the ontological foundations I know of fall into three main clusters.
Translatability Guarantees
Suppose an agent wants to structure its world model around internal representations which can translate well into other world models. An agent might want translatable representations for two main reasons:
- Language: in order for language to work at all, most words need to point to internal representations which approximately “match” (in some sense) between the agents communicating.
- Correspondence Principle: it’s useful for an agent to structure its world model and goals around representations which will continue to “work” even as the agent learns more and its world model evolves.
Guarantees of translatability are the type of ontological foundation presented in our paper Natural Latents: Latent Variables Stable Across Ontologies. The abstract of that paper is a good high-level example of what an ontological foundation based on translatability guarantees looks like:
Suppose two Bayesian agents each learn a generative model of the same environment. We will assume the two have converged on the predictive distribution (i.e. distribution over some observables in the environment), but may have different generative models containing different latent variables. Under what conditions can one agent guarantee that their latents are a function of the other agent’s latents?
We give simple conditions under which such translation is guaranteed to be possible: the natural latent conditions. We also show that, absent further constraints, these are the most general conditions under which translatability is guaranteed.
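For concreteness, here is my own informal paraphrase of the natural latent conditions (a sketch, not the paper’s exact formalism; the approximate versions and error bounds are in the paper). Write X_1, …, X_n for the observables, Λ for a latent, and X_{-i} for all observables except X_i:

```latex
% Mediation: the observables are independent given the latent.
P(X_1, \dots, X_n \mid \Lambda) \;=\; \prod_i P(X_i \mid \Lambda)

% Redundancy: the latent is recoverable from the observables even with any
% single observable dropped (X_{\bar{i}} = all observables except X_i).
P(\Lambda \mid X_1, \dots, X_n) \;=\; P(\Lambda \mid X_{\bar{i}}) \quad \text{for each } i
```

Roughly (again my paraphrase rather than a quote): a latent satisfying the redundancy condition can be computed, up to noise, from any other latent satisfying the mediation condition; a latent satisfying both therefore translates in either direction.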
Environment Structure
A key property of an ideal gas is that, if we have even just a little imprecision in our measurements of its initial conditions, then chaotic dynamics quickly wipes out all information except for a few summary statistics (e.g. temperature and pressure); the best we can do to make predictions about the gas is to use a Boltzmann distribution with those summary statistics. This is a fact about the dynamics of the gas, which makes those summary statistics natural ontological Things useful to a huge range of agents.
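As a toy numerical illustration of that kind of information loss (my own example; the chaotic logistic map x → 4x(1−x) stands in for the gas dynamics):

```python
import numpy as np

# Toy illustration: chaotic dynamics erase initial-condition details, leaving
# only distribution-level summary statistics. The logistic map x -> 4x(1-x)
# stands in for the gas; it is chaotic on [0, 1].
rng = np.random.default_rng(0)

# Two ensembles whose initial conditions agree up to ~1e-9 "measurement noise".
x_a = 0.3 + 1e-9 * rng.random(100_000)
x_b = 0.3 + 1e-9 * rng.random(100_000)

for _ in range(100):  # iterate the map; tiny differences blow up exponentially
    x_a = 4 * x_a * (1 - x_a)
    x_b = 4 * x_b * (1 - x_b)

# Point-by-point, the two ensembles are now essentially uncorrelated...
print("per-point correlation:", np.corrcoef(x_a, x_b)[0, 1])  # ~ 0

# ...but their summary statistics agree, because both have relaxed to the
# map's invariant distribution (mean ~0.5, variance ~0.125).
print("means:    ", x_a.mean(), x_b.mean())
print("variances:", x_a.var(), x_b.var())
```

Per-point predictions are worthless after a few dozen steps, but the distribution-level summary statistics are robust; those are the quantities a wide range of agents can safely build representations around.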
Looking at my own past work, the Telephone Theorem is aimed at ontological foundations based on environment structure. It says, very roughly:
When information is passed through many layers, one after another, any information not nearly-perfectly conserved through nearly-all the “messages” is lost.
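Here is a toy simulation in that spirit (my own construction, not the theorem’s actual setting): a message is re-transmitted through many layers, a coarse bit is copied perfectly at every layer, and a fine-grained detail has a small chance of being corrupted at each layer.

```python
import numpy as np

# Toy "telephone game": a coarse bit is copied exactly at every layer, while a
# fine-grained detail is resampled with small probability eps at each layer.
# Only the nearly-perfectly-conserved piece survives many layers.
rng = np.random.default_rng(0)
n, layers, eps = 500_000, 100, 0.1

coarse = rng.integers(0, 2, n)  # conserved exactly at every layer
fine = rng.integers(0, 2, n)    # fragile detail

received_coarse = coarse.copy()  # passed along unchanged
received_fine = fine.copy()
for _ in range(layers):
    corrupted = rng.random(n) < eps  # each layer corrupts the fine detail w.p. eps
    received_fine[corrupted] = rng.integers(0, 2, corrupted.sum())

def mutual_info(x, y):
    """Empirical mutual information (in bits) between two binary arrays."""
    joint = np.histogram2d(x, y, bins=2)[0] / len(x)
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz]))

print("I(fine; fine after 100 layers):    ", mutual_info(fine, received_fine))      # ~ 0 bits
print("I(coarse; coarse after 100 layers):", mutual_info(coarse, received_coarse))  # ~ 1 bit
```

After a hundred layers the fine detail carries essentially no information about its original value, while the perfectly-conserved coarse bit still carries its full bit: the long-range information is exactly the conserved part.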
A more complete ontological foundation based on environment structure might say something like:
- Information which propagates over long distances (as in the Telephone Theorem) must (approximately) have a certain form.
- That form factors cleanly (e.g. in the literal sense of a probability distribution factoring over terms which each involve only a few variables, as sketched below).
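To illustrate the literal sense of “factoring” in the second bullet (a generic example of mine, not a statement of the theorem): a distribution over many variables factors cleanly when it is a product of terms that each touch only a few variables.

```latex
% Generic factored form: each factor \phi_\alpha touches only a small subset
% X_\alpha of the variables; Z is a normalizing constant.
P(X_1, \dots, X_n) \;=\; \frac{1}{Z} \prod_{\alpha} \phi_\alpha(X_\alpha)

% The ideal gas above is the cleanest case: with non-interacting particles the
% Boltzmann distribution factors over individual particle states x_i.
P(x_1, \dots, x_N) \;=\; \prod_{i=1}^{N} \frac{e^{-E(x_i)/k_B T}}{Z_1}
```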
Mind Structure
Toward Statistical Mechanics Of Interfaces Under Selection Pressure talks about the “APIs” used internally by a neural-net-like system. The intuition is that, in the style of stat mech or singular learning theory, the exponential majority of parameter-values which produce low loss will use APIs for which a certain entropic quantity is near-minimal. Insofar as that’s true (which it might not be!), a natural prediction would be that a wide variety of training/selection processes for the same loss would produce a net using those same APIs internally.
That would be the flavor of an ontological foundation based on mind structure. An ideal ontological foundation based on mind structure would prove that a wide variety of mind structures, under a wide variety of training/selection pressures, with a wide variety of training/selection goals, converge on using “equivalent” APIs or representations internally.
All Of The Above?
Of course, the real ideal for a program in search of ontological foundations would be to pursue all three of these approaches, and then show that they all give the same answer. That would be strong evidence that the ontological foundations found are indeed natural.