I vividly recall that, when I was a graduate student in the late 1990s, I would often see the two volumes of *Parallel Distributed Processing: Explorations in the Microstructure of Cognition* (MIT Press, 1986) on my professors’ bookshelves. These two “PDP” volumes were everywhere; I did not see a single bookcase without them. The two volumes were edited by David Rumelhart and James McClelland, trailblazers who set out on an important mission: to test whether a collection of things as single-minded and unintelligent as neurons could ever, when configured a certain way, give rise to complex processes and intelligent behavior. These neural networks led to the present form of artificial intelligence, one in which no sophisticated rules or knowledge are pre-built into the system. The elements of the network were neuron-like, single-minded, and, frankly, dumb. Would these neurons fail? As captured in those two volumes, they did not. These networks could perform complex operations, such as recalling a memory from a single cue, completing a pattern, identifying a pattern, selecting which word to use to name an object, and much, much more. Some of the big questions in the 1990s then became, “How much can these (relatively simple) networks do?” and “What can’t they do?”
It is important to note that, at the outset, these networks were meant to model how neurons operate. Hence, by seeing what these networks can do and how they solve problems, we can gain insights into how brains carry out similar functions. These neuron-like PDP networks also differed from earlier forms of artificial intelligence because, unlike them, they had no complex rules or knowledge pre-built in. The PDP networks were not pre-wired with any syntactic rules or task-specific modes of operation. Yet, just by specifying units, connections, activation rules, and learning rules, these networks could recognize objects, retrieve memories, integrate context, decide among options, and even learn from sparse experience. For students interested in consciousness, they revealed the sophisticated things that (presumably) unconscious networks can achieve. To many, this invited the deep question, “What does consciousness add, if anything, to all these neural interconnections?”
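For readers who like to see those four ingredients concretely, here is a minimal sketch, not taken from the book, of a tiny pattern associator built from nothing but units, connections, an activation rule, and a learning rule. The patterns, network size, and parameter values are all illustrative choices, and the learning rule shown (the classic delta rule) is just one of the rules discussed in the PDP tradition.

```python
# A minimal sketch of the four PDP ingredients: units, connections
# (weights), an activation rule, and a learning rule. A tiny two-layer
# network learns, via the delta rule, to map input patterns onto
# output patterns. All patterns and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Units: 4 input units, 2 output units. Connections: a 2x4 weight matrix.
weights = rng.normal(scale=0.1, size=(2, 4))

# Two input patterns and the output patterns they should evoke.
inputs = np.array([[1.0, 0.0, 1.0, 0.0],
                   [0.0, 1.0, 0.0, 1.0]])
targets = np.array([[1.0, 0.0],
                    [0.0, 1.0]])

def activate(pattern):
    """Activation rule: each output unit's activity is a squashed
    (logistic) function of its summed, weighted input."""
    net_input = weights @ pattern
    return 1.0 / (1.0 + np.exp(-net_input))

# Learning rule: nudge each connection in proportion to the error on
# the unit it feeds into (the delta rule), repeated over many sweeps.
learning_rate = 0.5
for _ in range(2000):
    for pattern, target in zip(inputs, targets):
        error = target - activate(pattern)
        weights += learning_rate * np.outer(error, pattern)

for pattern in inputs:
    print(np.round(activate(pattern), 2))  # outputs approach the targets
```

Nothing in this sketch is told which pattern goes with which response; the knowledge ends up distributed across the connection weights, which is the sense in which the behavior is “emergent.”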
When reading those volumes, I thought that one day there would be a popular book about these fascinating developments. Everyone would love to hear about them. That day has come, and fortunately, the great McClelland is one of its authors. The book is aptly titled *The Emergent Mind: How Intelligence Arises in People and Machines*. The mind is “emergent” because the processing units themselves are not mindful or intelligent. Given all the developments in artificial intelligence, this book is even more relevant today than it would have been back in the 1990s, and it would have been very relevant then! The other author of the book, Prof. Gaurav Suri, is a faculty member in my department at San Francisco State University. He was trained in neural networks by McClelland at Stanford University. It was through Suri that one day I had the treat of meeting McClelland. When I met him, I recalled seeing all of the PDP volumes (with the two books always next to each other) in all those bookcases, a form of memory retrieval that is today explained by these very networks (an “autoassociator”).
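For the curious, here is an illustrative sketch, again not drawn from the book, of that kind of retrieval: a Hopfield-style autoassociator that stores one pattern in its connection weights through Hebbian learning and then reconstructs the whole pattern from a degraded cue. The pattern and the settling procedure are simplified for brevity.

```python
# An illustrative autoassociator sketch (Hopfield-style, with +1/-1 units).
# A memory is stored by Hebbian learning in the connection weights and
# later retrieved from a partial cue by letting the unit activations settle.
import numpy as np

memory = np.array([1, -1, 1, -1, 1, -1, 1, -1])
n = len(memory)

# Hebbian storage: strengthen connections between co-active units.
weights = np.outer(memory, memory).astype(float)
np.fill_diagonal(weights, 0.0)  # no unit connects to itself

# A degraded cue: only the first half of the pattern is intact.
cue = memory.copy()
cue[n // 2:] = 1  # the second half is partly corrupted

# Retrieval: repeatedly update every unit from its weighted input.
state = cue.astype(float)
for _ in range(5):
    state = np.sign(weights @ state)

print("cue:      ", cue)
print("retrieved:", state.astype(int))
print("matches stored memory:", bool(np.array_equal(state, memory)))
```

The cue here plays the role of the partial reminder (seeing McClelland), and the settled state is the completed memory (the two volumes side by side on the shelf).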
The questions of today are similar to those of the late 1990s: “How much can these (relatively simple) networks do?” “What can’t they do?” and “How does consciousness, which obviously exists, fit into the picture with this proposed ‘micro-structure’ of cognition?” The authors deal with this last question in Chapter 10. My hunch is that, thanks to all this excellent work on the microstructure of cognition over the last 40 years, we will soon (in less than 40 years’ time) have a better understanding of the relationship between consciousness and this neural machinery.
References
Rumelhart, D. E., McClelland, J. L., and the PDP Research Group (1986). *Parallel distributed processing: Explorations in the microstructure of cognition* (Vols. 1–2). Cambridge, MA: MIT Press.