The Collapse of Trust in AI Assistants

Published December 6, 2025 | Version 1.0

Journal article | Open access

Description

Enterprises increasingly rely on AI assistants to support research, procurement, product comparisons, competitive intelligence, and communication tasks. These systems are commonly assumed to behave like stable analysts: consistent, predictable, and aligned with factual sources. Our findings demonstrate that this assumption is incorrect.

Across 200 controlled tests involving GPT, Gemini, and Claude, we observe substantial instability:

  • 61 percent of repeated runs on identical prompts produce materially different answers
  • 48 percent shift their reasoning between runs
  • 27 percent contradict themselves
  • 34 percent disagree with competing models
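
The article does not reproduce its test harness here, but the repeated-run comparison it describes can be approximated with a short script. The sketch below is an assumption, not the authors' methodology: it uses a hypothetical `query_model(model, prompt)` function standing in for whichever API client is involved, re-issues one prompt several times, and flags answer pairs whose normalized text similarity falls below a threshold as a rough proxy for "materially different answers."

```python
from difflib import SequenceMatcher
from itertools import combinations
from typing import Callable, List

def answer_similarity(a: str, b: str) -> float:
    """Normalized lexical similarity of two answers (1.0 = identical text)."""
    return SequenceMatcher(None, a.strip().lower(), b.strip().lower()).ratio()

def repeated_run_divergence(
    query_model: Callable[[str, str], str],  # hypothetical stand-in for an API client
    model: str,
    prompt: str,
    runs: int = 10,
    threshold: float = 0.8,
) -> dict:
    """Re-issue the same prompt `runs` times and report how many answer pairs
    fall below the similarity threshold."""
    answers: List[str] = [query_model(model, prompt) for _ in range(runs)]
    pairs = list(combinations(answers, 2))
    divergent = sum(1 for a, b in pairs if answer_similarity(a, b) < threshold)
    return {
        "runs": runs,
        "pairs_compared": len(pairs),
        "divergent_pairs": divergent,
        "divergence_rate": divergent / len(pairs) if pairs else 0.0,
    }
```

A lexical similarity ratio is only a crude stand-in for the paper's notion of "materially different"; a fuller harness would presumably compare answers semantically (for example via embeddings or a grading rubric) and track reasoning shifts and self-contradictions separately.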

This behaviour is structural, not incidental. It arises from silent model updates, a lack o…
