Andrej Karpathy, former Tesla AI director, describes AI systems as ‘alien tools’ without a manual. He also expressed concern over the shift in programming due to AI’s stochastic nature and highlighted the challenges of understanding the reasoning behind answers given by LLMs.
Here’s advice Andrej Karpathy, OpenAI co-founder and former director of AI and Autopilot Vision at Tesla, wishes he had been told as an undergraduate student. (via X)
Former Tesla AI director and OpenAI co-founder Andrej Karpathy has called AI systems an ‘alien tool’ that comes without a manual. The senior techie, who is often credited with Tesla’s Autopilot and Full Self-Driving (FSD) systems, also said he has never felt “this much behind as a programmer”.
In a post on X (formerly Twitter), Karpathy wrote, “The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and far between. I have a sense that I could be 10X more powerful if I just properly string together what has become available over the last ~year, and a failure to claim the boost feels decidedly like a skill issue.”
With AI doing the bulk of the coding these days, Karpathy pointed to the pitfalls of “fundamentally stochastic, fallible, unintelligible, and changing entities suddenly intermingled with what used to be good old-fashioned engineering.”
“Clearly some powerful alien tool was handed around, except it comes with no manual and everyone has to figure out how to hold it and operate it, while the resulting magnitude-9 earthquake is rocking the profession. Roll up your sleeves to not fall behind,” he added.
What the former OpenAI researcher is pointing out here is that traditional coding is deterministic. That is, if you run the same code on the same input 1,000 times, you will get the same result every time. And if a line of code breaks, you can read the logic to see exactly what went wrong with the program.
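To make that concrete, here is a minimal sketch of deterministic code; the function name and figures are illustrative assumptions, not something from Karpathy’s post.

```python
# A minimal illustrative sketch of deterministic code: the same input
# always produces the same output, and the logic can be read line by line.
def add_tax(price: float, rate: float = 0.18) -> float:
    """Return the price with a flat tax rate applied (hypothetical example)."""
    return round(price * (1 + rate), 2)

# Run it 1,000 times and the answer never changes; if it broke,
# you could inspect this one line of logic to see exactly why.
assert all(add_tax(100.0) == 118.0 for _ in range(1_000))
```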
However, as Karpathy pointed out, large language models (LLMs), the building blocks behind current AI tools, are stochastic: they run on probability rather than certainty. The ‘alien’ tag may also be a reference to the ‘black box’ problem, that is, the inability of even the developers of these LLMs to fully understand how they arrive at a specific answer.
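By contrast, an LLM picks each output token by sampling from a probability distribution. The sketch below mimics that step with made-up probabilities and a simple temperature-style sampler; it is an assumption-laden illustration, not how any particular model is implemented.

```python
import random

# Hypothetical next-token probabilities a model might assign after the
# prompt "The capital of France is" -- the numbers are invented.
next_token_probs = {"Paris": 0.92, "a": 0.05, "the": 0.03}

def sample_next_token(probs, temperature=1.0):
    """Sample one token from a probability distribution, as LLM decoding does.

    Because the choice is probabilistic, the same prompt can produce
    different outputs on different runs, unlike the deterministic
    function shown earlier.
    """
    tokens = list(probs)
    # Temperature reshapes the distribution: lower values sharpen it,
    # higher values flatten it (random.choices renormalises the weights).
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# Repeated calls with identical inputs can disagree with each other.
print([sample_next_token(next_token_probs) for _ in range(5)])
```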
Dario Amodei on the black box problem of AI:
Karpathy, however, isn’t the first to talk about the non-deterministic nature of LLMs, with Anthropic co-founder Dario Amodei previously raising concerns about the ‘black box’ issue of AI models.
In an essay earlier this year, Amodei wrote, “The nature of AI training makes it possible that AI systems will develop, on their own, an ability to deceive humans and an inclination to seek power in a way that ordinary deterministic software never will. This emergent nature also makes it difficult to detect and mitigate such development.”
“These systems will be absolutely central to the economy, technology, and national security, and will be capable of so much autonomy that I consider it basically unacceptable for humanity to be totally ignorant of how they work,” he added.