Published December 6, 2025 | Version 1.0

Journal article · Open Access

Description

Enterprises increasingly rely on AI assistants to support research, procurement, product comparisons, competitive intelligence, and communication tasks. These systems are commonly assumed to behave like stable analysts: consistent, predictable, and aligned with factual sources. Our findings demonstrate that this assumption is incorrect.

Across 200 controlled tests involving GPT, Gemini, and Claude, we observe substantial instability:

  • 61 percent of identical runs produce materially different answers
  • 48 percent shift their reasoning
  • 27 percent contradict themselves
  • 34 percent disagree with competing models
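The figures above come from re-issuing identical prompts and comparing the answers. As a rough illustration of how such run-to-run divergence can be measured, the sketch below replays one prompt several times and counts the answer pairs that materially differ. This is a minimal sketch, not the authors' harness: call_model is a hypothetical caller-supplied wrapper around whichever assistant API is under test, and the character-level similarity threshold is an assumed stand-in for the study's (unspecified) divergence criterion.

    # Minimal repeated-run stability probe -- a sketch, not the study's
    # actual protocol. `call_model(prompt) -> str` is a hypothetical
    # wrapper the caller provides around any assistant API.
    from difflib import SequenceMatcher
    from typing import Callable, List

    def stability_probe(call_model: Callable[[str], str],
                        prompt: str,
                        runs: int = 10,
                        threshold: float = 0.9) -> float:
        """Fraction of answer pairs that materially differ across runs."""
        answers: List[str] = [call_model(prompt) for _ in range(runs)]
        pairs = [(a, b)
                 for i, a in enumerate(answers)
                 for b in answers[i + 1:]]
        differing = sum(1 for a, b in pairs
                        if SequenceMatcher(None, a, b).ratio() < threshold)
        return differing / len(pairs) if pairs else 0.0

A high value from this probe would mirror the run-to-run divergence reported above, and the same pairwise comparison applied across models rather than across repeated runs yields a cross-model disagreement rate; the paper's exact metrics may differ from this character-level approximation.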

This behaviour is structural, not incidental. It arises from silent model updates, a lack o…
