Hallucination is fundamental to how transformer-based language models work. In fact, it's their greatest asset: it is the mechanism by which language models find links between sometimes disparate concepts. But hallucination can become a curse when language models are applied in domains where the truth matters. Examples range from questions about health care policies to code that correctly uses third-party APIs. With agentic AI, the stakes are even higher, as autonomous bots can take irreversible actions, like sending money, on our behalf.

The good news is that we have methods for making AI systems follow the rules, and the engines underlying those tools are also scaling dramatically each year. This branch of AI is called automated reasoning (a.k.a. symbolic AI)…
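To make the idea concrete, here is a minimal sketch of what an automated-reasoning check can look like in practice, using the open-source Z3 SMT solver (the `z3-solver` Python package). The policy rule and the model claim below are hypothetical examples, not drawn from the article: we encode a rule as a logical formula, then ask the solver whether a claim extracted from a model's answer is consistent with it.

```python
# A minimal sketch of an automated-reasoning guardrail, assuming the
# z3-solver package (pip install z3-solver). The HR policy and the
# LLM claim are hypothetical, for illustration only.
from z3 import Int, Bool, Solver, And, sat

hours = Int("hours_worked")
eligible = Bool("eligible_for_leave")

# Formal encoding of a (hypothetical) policy rule:
# an employee is eligible for leave iff they worked at least 20 hours.
policy = eligible == (hours >= 20)

# Claim extracted from a model's answer: "someone who worked 15 hours
# is eligible for leave."
claim = And(hours == 15, eligible)

# Check whether the claim is logically consistent with the policy.
s = Solver()
s.add(policy, claim)
if s.check() == sat:
    print("Claim is consistent with the policy.")
else:
    print("Claim contradicts the policy; flag the answer.")
```

Unlike a second LLM grading the first, the solver's verdict is a mathematical proof: if the formulas are jointly unsatisfiable, the claim provably violates the encoded rule, which is what makes this approach attractive when the truth matters.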
