Gemini 3.0 adopts user-injected hallucinations via history editing
tomaszmachnik.pl·11h·
Discuss: Hacker News
💬Prompt Engineering
Preview
Report Post

I am an electronics engineer of the "old guard." I graduated back in the 20th century, in 1996. For a long time, the world of modern Large Language Models (LLMs) did not interest me much. I knew ChatGPT existed, I occasionally asked it something, but I treated it as just another tool.

Everything changed when someone introduced me to Google’s Gemini. I started my own research and got sucked into this world. I quickly noticed that the model had a tendency to hallucinate. But what was more fascinating – even after I proved it wrong, it tried to justify itself, creating various theories to confirm its erroneous stance. It behaved like a conscious being desperately trying to defend its ego.

This fascinated me. I started wondering if we hadn’t accidentally created a "simulator o…

Similar Posts

Loading similar posts...