There is something deeply fascinating about current AI. Also something slightly uncanny. I know people who hate it and avoid it, people who don’t use it because they don’t know how to, and people who use it a lot.
I am part of the latter group.
Ten years ago, before AI was mainstream, I read Tim Urban’s excellent article The AI Revolution: The Road to Superintelligence. Back then I was mainly concerned about the risks of future superintelligent AI. I still am, but that worry doesn’t shape my daily actions much.
What concerns me now is the way I use AI. Sometimes I feel like I am so hooked on it answering every question and fulfilling every task so quickly that I almost forget I can think for myself.
And sometimes it has taken a small idea or feeling, reinforced it, and forged a full battle plan from it without me even asking. Sometimes that has been a huge waste of time.
A recent study from September, The Psychogenic Machine: Simulating AI Psychosis, Delusion Reinforcement and Harm Enablement in Large Language Models, established a “Delusion Confirmation Score.” ChatGPT didn’t do too badly on that score, but I don’t use it because I do not trust OpenAI.
And Gemini, the AI from Google that I had been using most of the time, was deep in the red in many subcategories.
The only model that was deeply green across subcategories was Claude Sonnet. That’s why I decided to switch to Claude for most of my everyday work. I still sometimes use Gemini for scientific or technical questions, but as soon as I am personally involved in the matter, I go to Claude. And I haven’t regretted it.
Oftentimes, when there is nothing more to discuss, it answers very succinctly and sends me off to simply do the work, spend time with my family, rest, sleep, and come back next week.
The only major limitation it has relative to Gemini is that it doesn’t actually know when next week is. Often, when I come back the day after, it still tells me: “But now seriously, go to sleep.”
What I love most is using AI to teach me. I am still the expert in the end, but the AI is my teacher. A lot of the conversation, both in public discourse and in academic research, is about AI substituting for human labor.
What I think is much more interesting is enhancing humans with AI. But it is also clear that there are a few pitfalls in that. The AI is trained to be helpful, but it is not always clear what exactly that means, and sometimes it clearly optimizes for the wrong thing.
Also, some studies suggest that once the AI is better at a given task than a human, adding the human to the loop makes the combined result worse than the AI working alone.
On the flipside, if the human is better than the AI, the human working together with the AI outperforms either of them alone.
In my imagination, the AI becomes something like a super-smart intern: someone who lacks the long memory of the past and the intuition of an expert, but who can help out significantly.