Published on November 2, 2025 7:42 PM GMT
I think that we reason too much about AI specifically, and not enough about intelligence more broadly.
This is a fairly obvious criticism. There are, of course, ways in which biological and artificial intelligence clearly differ.
However, most AI discussion is largely upstream of those biological-artificial differences, and so the discussion should be about intelligence, not AI.
Two areas where I believe this focus on AI is actively degrading reasoning are discussions of alignment and communication.
Alignment
Alignment is about synchronized values between agents in general, not just between humans and AI. The vast majority of agent-agent interactions we can draw conclusions from are interactions in which neither agent is an AI: they are between human-based entities (organizations), individual humans, or other animals, or between two different types of these agents.
People have already made this point, but I'm pretty sad to see that it mostly hasn't caught on yet. When people talk about aligning ASI, they're usually not really talking about ASI; they're just talking about SI: most ASI discussion applies equally well to biological superintelligences.
Unlike ASI, some forms of biological superintelligence already exist, and have for a long time: we call them corporations, nation states, and other human organizations. There has been some alignment-oriented study of these entities, but far less than I'd like, especially of interactions between entities that differ significantly in intellectual capability. For example, individuals almost always lose when they go against major corporations. The way this usually plays out is an incredibly large and well-paid team of lawyers hired by the corporation going up against a much smaller and poorer team hired by the individual. This is analogous to human-ASI interactions. Of course, human-based entities are superintelligent in a different way than ASI probably will be, but I think that difference is irrelevant in many discussions involving ASI.
Communication
I enjoyed this recent post about why humans communicating via LLM-generated text is bad. I agree that it is bad, but I think the argument against it is much stronger as a specific case of generally bad agent-agent communication patterns than as a set of mostly LLM-specific arguments. Here is that more general argument, examining long quotes and lying.
Relying on long quotes from other agents seems bad whether or not you're quoting an LLM. The point of discussion is to engage, not to act as an intermediary between two other agents. Neither LLMs nor, especially, humans writing in the past have the full context for the current discussion. Link to or briefly quote other agents' views, but only as a supplement to your own.
If an LLM says "I enjoy going on long walks" or "I don't like the taste of coffee", it is obviously lying, because LLMs do not have access to those experiences or sensations. But a human saying those things might also be lying; you just can't tell quite as easily. There is nothing wrong with an LLM saying these things beyond the wrongness of lying itself, just as with humans.
If an LLM gives a verifiable mathematical proof, it is very easy to tell whether or not it’s lying, which you do in exactly the same way you would if a human presented the same proof.
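As a small, hypothetical illustration of that agent-agnostic verification (not from the linked post), here is a minimal Lean 4 sketch: the proof checker accepts or rejects a proof term purely on its content, with no reference to whether a human or an LLM wrote it.

```lean
-- Minimal sketch (hypothetical example): Lean's kernel checks a proof term
-- on its own merits, regardless of which agent produced it.

theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A false claim would be rejected the same way for any author, e.g.:
-- theorem bogus (a b : Nat) : a + b = a * b := Nat.add_comm a b  -- fails to check
```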
I think the argument against communicating via LLM-generated text hits harder as a general, agent-agnostic examination of long quotes and lying and why they’re bad.
The linked post additionally argues that LLMs are always lying when they say "I think..." or "I believe..." (just as they're lying by claiming to go on long walks or taste coffee). Since I disagree with only that additional argument, this framing also makes my point of disagreement clearer.
Conclusion
There are certainly times when the specific "shape" of AI (easier self-improvement, copyability over shorter time scales, significantly different resource requirements) does matter, and that shape is why there is so much more discussion about AI than about, say, gene editing or selective breeding.
But the current base assumption seems to be that differences in shape between artificial and biological intelligence matter to the discussion of <current topic>. I think this is usually false, and that this false assumption is degrading reasoning; if one believes the differences are impactful for a given topic, a justification of their impact should be given for that topic.