# Transformers Must Hallucinate

This is not a critique of engineering choices.

It is a structural argument: **given how transformers are built, hallucination is not a bug — it is inevitable.**

By “hallucination,” I mean confident, specific assertions that are **not supported by the available evidence**, rather than obvious nonsense.

The claim here is simple:

> Any system that always produces an answer, while collapsing evidence into a single point before checking consistency, must hallucinate on some inputs.

Transformers do exactly this.
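To make the "always produces an answer" part concrete, here is a minimal NumPy sketch (my illustration, not from the post) of why softmax normalization forces a full-confidence answer even when no signal actually supports the query:

```python
import numpy as np

def softmax(scores):
    # Subtract the max for numerical stability; does not change the result.
    exps = np.exp(scores - np.max(scores))
    return exps / exps.sum()

# Strong evidence: one signal clearly matches the query.
print(softmax(np.array([8.0, 0.1, 0.2])))      # ~[0.999, ...]: peaked, and justified

# No evidence: every score is tiny and indistinguishable.
print(softmax(np.array([0.01, 0.02, 0.015])))  # ~[0.333, 0.336, 0.331]: still a full distribution

# In both cases the weights sum to exactly 1. The mechanism has no way
# to output "none of the above": it must commit to some mixture.
```

The second case is the structural problem: the output distribution looks just as legitimate as the first, but it encodes the absence of evidence rather than its presence.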

---

## 1. What a Transformer Actually Does

At the heart of every transformer is attention. Stripped to its essentials, an attention head:

1. Takes a set of signals (tokens, memories, documents, tools)
2. Scores them against a query…
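Filling in the standard scaled dot-product formulation those steps describe (a sketch under that assumption, not the author's code): score each signal against the query, normalize the scores with softmax, and collapse the value vectors into a single weighted average.

```python
import numpy as np

def attention_head(query, keys, values):
    """Scaled dot-product attention for a single query.

    query:  (d,)   what we are looking for
    keys:   (n, d) one key per signal
    values: (n, d) the content carried by each signal
    """
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)   # score each signal against the query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()             # softmax: weights always sum to 1
    return weights @ values              # collapse all evidence to a single point

rng = np.random.default_rng(0)
q = rng.standard_normal(8)
K = rng.standard_normal((5, 8))
V = rng.standard_normal((5, 8))
out = attention_head(q, K, V)            # one vector out, with no "abstain" option
```

Note what the last line does: whatever the quality of the match between query and keys, the head returns one point in value space. Consistency between the signals is never checked before the collapse.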
