Over the past weeks we’ve looked at the dominance of the old Western worldview and where it reached its limits. We’ve talked about the potential of AI to help usher in a new worldview, and how this new worldview changes how we might approach science, business, and creativity. We’ve talked about the importance of connectedness and context, participation and paradox, and to close the series, I want to elaborate a bit on the core premise: that our tools shape how we see the world.
But first, I want to share more of my own personal context.
As a teenager, the one place I felt the most myself was in my room.
I could go to my room and write, or code, or run experiments, and no one could tell me what to do. It’s from this room that I started my first internet business in high school, called Convenience Software, selling BlackBerry and iPhone apps. I coded at night and spent lunch at school in the library answering customer service emails. That’s how I paid for gas and food.
My room is also where I fell in love with writing. When I wrote something—diary entries, short stories, little essays—I could pin down the world. I could know what was true about myself.
That desire to pin things down, to make them permanent and knowable, continued as I got to college. I studied philosophy, and each week I was enchanted by a famous system of thought, like Plato’s forms or Descartes’ mind-body dualism, and felt sure that it explained everything. Then the next week, I would encounter a thinker who came just after them—Aristotle or Hobbes—and find that they effortlessly tore down the previous system, which had seemed so perfect, and replaced it with their own. But I kept reading anyway, hoping to find a philosophy that would last.
During this time, I also started my first real startup, and I loved it.
I was also gripped the whole time by the feeling that I didn’t know enough. I read voraciously and sought advice wherever I could get it. I talked to my dad, who ran the family cemetery and funeral home business, almost every day on the phone. I thought of myself as an information processing machine: If I could just know more about the world, more about business, more about technology, then I could make my company succeed. But I constantly dreaded that the things I learned would slip through my grasp. If I could only record them, I felt, then I’d have them when I needed them.
So I began taking copious notes. In a little black notebook I kept in the back of my jeans, I recorded everything from meetings to what books I was reading to my daily spending. I grew my company for a few years and sold it (I flew straight from my college graduation ceremony to Boston to finish negotiating the deal), then spent years figuring out what I wanted to do next. I traveled the world, worked at an incubator, and invested in startups. All the while I had even more time to refine my note-taking system.
But the perfect system remained elusive. Each new tool or method I tried seemed promising at first, but each one broke as I tried to use it for new things. I bounced between Evernote, Notion, and countless other apps, creating increasingly complex workflows. It felt like I was jumping from one fad diet to another, just as I had jumped from one system of philosophy to another as an undergraduate.
I had a hunch that there was something backwards about these organizational systems. The right system depended very much on what you wanted to use the information for, but note-taking is implicitly for information whose use is open-ended—that’s why notebooks come with blank pages instead of preprinted agendas.
I became convinced that if that perfect tool wasn’t already out there, I could build it. I envisioned a system that would adapt to how people actually think and work, rather than forcing them into rigid structures. The software would understand context, relationships, and the fluid nature of human thought.
The default startup advice was to find a problem to solve—people buy drills because they want to make holes. But I was trying to build something to solve many different kinds of problems. The task felt more like trying to invent a new language—one that could express anything—rather than trying to make a drill.
It was overwhelming. But I knew there were other people using these systems and probably doing it much more successfully than I was. Maybe if I interviewed them, if I talked to 50 top performers in different fields about what makes them tick, then I’d be able to derive a set of first principles, the physics of how knowledge organization worked. If I turned those interviews into a newsletter, I could also use the newsletter to build the audience for my eventual product. At the very least, it’d be an excuse to get interesting people to talk to me.
The interviews resonated—tens of thousands of people subscribed to learn how others organized their thoughts and work. Eventually that newsletter, Superorganizers, grew into this company, Every. If you’re reading this now, maybe that’s how you first found my writing.
What I didn’t expect from those interviews was learning that the experience I was going through—looking for the perfect organizational system—was common among the people I talked to. Many of them also had the same feeling that if they could just prepare enough, they could eliminate the uncertainty of failure. They’d say the right thing, or make the right decision at the right time—and that would make all the difference.
I wasn’t the only one who wished I was a machine.
Looking back, I realized there was a common thread to my attempts to find the perfect note-taking system, the perfect argument, the perfect philosophy. Those problems looked a lot like the problems we encountered when we tried to build an AI scheduling assistant at the beginning of this series: We can get by with a mechanical system of rules, but any system of rules can only capture a slice of reality.
Still, I didn’t know of any alternative. I couldn’t name the metaphor I was operating under. And that’s why, almost as soon as I tried GPT-3 and began to learn how it worked, it took my breath away.
All of the philosophers and thinkers we’ve talked about in this series sought their own metaphors for intelligence, and found them in the tools—often the newest tools, the newest technology—around them. In the fourth century B.C., Plato thought of the mind as a wax tablet, and memories as impressions on its surface. His successor Aristotle described the mind at birth as a tabula rasa—a blank writing slate. Descartes saw both body and cosmos as clocks with springs and cogs and regular movements, able to be taken apart and reduced to their simplest components. Freud imagined the mind as a steam engine, with repressed desires building up pressure that needed to be released through analysis.
Today we compare our mind to a computer. We talk about having “no bandwidth” when we’re busy, or feeling “drained” when we’re tired and needing to “crash.” We want to “process” our emotions and “rewire” our habits.
Our tools change how we see the world and how we see ourselves. Levers become lenses.
Until recently, our tools could only work with what could be grasped and reduced, so these thinkers—and the worldview we’ve inherited from them—gave prominence to the reducible. Socrates called himself a “lover of divisions and collections,” and Plato hoped to carve the world at its joints—to discover the natural structure of reality. Math and science flowered during the Enlightenment, whose proponents—taking the Greeks as their model—excelled at breaking things down into parts, measuring isolated variables, and finding linear relationships.
And since our tools also embody the parts of the world that are visible to us, we end up making other tools that reinforce this worldview. Traditional computers—built from binary logic gates executing sequential instructions—excel at problems that can be broken down into simple, linear steps. Their architecture mirrors the reducible problems they can solve.
Now we’ve built new tools—language models—that can work with what is too big to be grasped. Neural networks don’t operate by rules, and they don’t contain facts. They consist of vast webs of interconnected nodes whose meaning is distributed throughout the system. Their intelligence runs on flexible, contextual pattern-matching that allows them to grasp patterns in language and culture that have a similar kind of interconnected, non-linear complexity—because they themselves embody that complexity.
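If you want to see that difference in miniature, here is a deliberately toy sketch: a rule-based classifier next to one that compares meanings as vectors. The numbers are made up by hand to stand in for what a real model learns at enormous scale, so treat it as a picture of the two approaches rather than a working system.

```python
# A toy contrast between the two kinds of tools described above.
# The "embeddings" are hand-made stand-ins for what a language model
# actually learns; only the shape of the approach is the point.

from math import sqrt

# --- The old tool: explicit rules. Anything outside the rules is invisible. ---
RULES = {
    "schedule a meeting": "calendar",
    "pay an invoice": "finance",
}

def classify_by_rules(request: str) -> str:
    # Only exact matches count; "set up a call" falls through the cracks.
    return RULES.get(request.lower(), "unknown")

# --- The new tool: meaning spread across a vector, compared by similarity. ---
TOY_EMBEDDINGS = {
    "schedule a meeting": [0.9, 0.1, 0.0],
    "set up a call":      [0.8, 0.2, 0.1],  # lands near "schedule a meeting"
    "pay an invoice":     [0.1, 0.9, 0.2],
}
LABELS = {
    "schedule a meeting": "calendar",
    "set up a call":      "calendar",
    "pay an invoice":     "finance",
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def classify_by_similarity(request_vector) -> str:
    # No rule has to anticipate this exact phrasing; closeness in the
    # vector space does the work.
    best = max(TOY_EMBEDDINGS, key=lambda k: cosine(TOY_EMBEDDINGS[k], request_vector))
    return LABELS[best]

print(classify_by_rules("set up a call"))          # -> "unknown"
print(classify_by_similarity([0.85, 0.15, 0.05]))  # -> "calendar"
```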
I want to dwell on this point, since it’s crucial to understanding how language models help us see ourselves anew. “Simpler” tools, tools that operate on what we can measure, isolate, and concretely change, make the world itself seem simple. And since perception often becomes reality, our world, then, is simple.
In the opener to this series, we mentioned the adage of Maslow’s hammer, which more or less goes, “To someone with a hammer, everything is a nail.” Our corollary here at the end might be, “To someone with a language model, everything is a rich and intricate web of interconnectivity.” The complexity of the tool changes the complexity of the world.
When a new kind of tool (and tool metaphor) comes along, it shows us the limitations of our previous ones. The metaphor we get from language models reveals aspects of our world and selves that go beyond—and are fundamentally at odds with—what we’ve grown accustomed to over the last 2,000 years. That’s why I believe it will change much of how we think about science, business, and creativity for the better.
Make no mistake: Language models, as with any tool, have their limitations. Right now they’re especially good at summarizing, reconstituting, and recombining, but not at discovering new things. Maybe one day they will be. But even with their rapid advancement, we’ll eventually come to the edges of what they can and can’t do—and what parts of our reality they can and can’t help us understand. For instance, these models are not conscious, and there are probably many steps of complexity between LLMs and tools that are conscious, steps that are currently invisible but may become more visible as we start seeing with a language-model worldview.
The history of thought is, in many ways, a history of tools. Every age believes it is discovering the truth; in reality, it is discovering what its tools make visible. Language models don’t give us the final map—but they do change what we can see and, in doing so, what we can be.
The task is not to worship the tool but to understand what it reveals, and just as importantly, what it hides.
Read the first four pieces in this series, about the new worldview enabled by AI, and how AI will impact science, business, and creativity.
Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.
We build AI tools for readers like you. Write brilliantly with Spiral. Organize files automatically with Sparkle. Deliver yourself from email with Cora. Dictate effortlessly with Monologue.
We also do AI training, adoption, and innovation for companies. Work with us to bring AI into your organization.
Get paid for sharing Every with your friends. Join our referral program.
For sponsorship opportunities, reach out to [email protected].