by @IOTAGLOBAL
The final article in the series Why Everything You Know about Your "Self" Is Wrong. The series explores how our understanding of selfhood affects our sense of individuality, our interpersonal relationships, and our politics.
We must believe in free will. We have no choice.
-- Isaac Bashevis Singer
What Kind of Computer Is the Brain?
Computers can't do everything humans do--not yet, anyway--but they're gaining on us. Some believe that, within this century, human intelligence will be seen as a remarkable, but nonetheless primitive, form of machine intelligence. Put the other way round, it's likely that we will learn how to build machines that do everything we do--even create and emote. As computer pioneer Danny Hillis famously put it, "I want to build a machine who is proud of me."
The revolutions wrought by the Copernican and Darwinian models shook us because they were seen as an attack on our status. Without proper preparation, the general public may experience the advent of sophisticated thinking machines as an insult to human pride and throw a tantrum that dwarfs all prior reactionary behavior.
At the present time, there are many candidate models of brain function, but none is so accurate and complete as to subsume all the others. Until the brain is understood as well as the other organs that sustain life, a new sense of self will co-exist with the old.
The computer pioneer John von Neumann expressed the difference between the machines we build and the brains we've got by dubbing them "serial" and "parallel" computers, respectively. A serial computer carries out one command after another, sequentially, while in a parallel computer thousands of processes go on at once, side by side, influencing one another. The brain differs in a further way: every interaction--whether with the world, with other individuals, or with parts of itself--rewires the menome, the ever-changing web of neural connections that is the individual counterpart of the genome. The brain that responds to the next input differs, at least slightly, from the one that responded to the last. When we understand how brains work well enough to build better ones, the changes to our sense of self will swamp those of prior intellectual revolutions.
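To make the contrast concrete, here is a minimal Python sketch of my own (a toy illustration, not von Neumann's formalism): the serial function performs one operation after another, while the parallel step computes every unit's next state from the current states of all the others, so those updates could in principle happen simultaneously.

```python
import random

def serial_sum(values):
    # Serial computer: one operation after another, in strict sequence.
    total = 0
    for v in values:
        total += v
    return total

def parallel_step(states, weights):
    # Parallel style: every unit computes its next state at the same time,
    # each influenced by the current states of all the others.
    # (Python still runs this loop serially; the point is that no unit's
    # new state depends on another unit's *new* state, so all updates
    # could in principle proceed side by side.)
    return [
        1 if sum(w * s for w, s in zip(row, states)) > 0 else 0
        for row in weights
    ]

if __name__ == "__main__":
    values = [random.random() for _ in range(10)]
    print("serial result:", serial_sum(values))

    n = 5
    states = [random.choice([0, 1]) for _ in range(n)]
    weights = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    for _ in range(3):
        states = parallel_step(states, weights)
        print("parallel states:", states)
```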
The genome that characterizes a species emerges via a long, slow Darwinian process of natural selection. The menomes that characterize individuals also originate via a Darwinian process, but the selection is among neural circuits and occurs much more rapidly than the natural selection that drives speciation. That the brain can be understood as a self-configuring Darwinian machine, albeit one that generates outcomes in fractions of a second instead of centuries, was first appreciated in the 1950s by Peter Putnam. Though the time constants differ by orders of magnitude, Putnam's functional model of the nervous system recognized that the essential Darwinian functions of random variation and natural selection are mirrored in the brain in processes that he called random search and relative dominance.
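Putnam's actual model is far more subtle, but the bare variation-and-selection loop the analogy rests on can be sketched in a few lines. The candidate "circuits," the scoring function, and the reinforcement step below are invented placeholders, not Putnam's formalism; they only illustrate how random variation plus selection of the dominant variant converges on a configuration.

```python
import random

# Toy variation-and-selection loop: randomly varied candidates compete,
# and the most "dominant" one is retained. Scoring and mutation here are
# invented stand-ins meant only to illustrate the Darwinian analogy.

def score(circuit):
    # Hypothetical fitness: how well a candidate "circuit" matches a target pattern.
    target = [1, 0, 1, 1, 0]
    return sum(1 for a, b in zip(circuit, target) if a == b)

def mutate(circuit, rate=0.2):
    # Random search: flip each element with some small probability.
    return [1 - bit if random.random() < rate else bit for bit in circuit]

best = [random.choice([0, 1]) for _ in range(5)]
for generation in range(20):
    variants = [mutate(best) for _ in range(10)]   # random variation
    challenger = max(variants, key=score)          # relative dominance
    if score(challenger) >= score(best):           # the winner is kept
        best = challenger
print("selected circuit:", best, "score:", score(best))
```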
In 1949, Donald O. Hebb enunciated what is now known as the "Hebb Postulate": "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." Peter Putnam's "Neural Conditioned Reflex Principle" is an alternative statement of Hebb's postulate that extends it to cover the establishment and strengthening of inhibitory, or negative, facilitations as well as the excitatory, or positive, correlations Hebb described. The Hebb-Putnam postulate can be summed up as "Neurons that fire together wire together."
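In code, the rule reduces to a one-line weight update. The sketch below is a deliberately simplified illustration; the learning rate and the weakening term for cells that fire apart are my own assumptions, not anything Hebb or Putnam wrote down, but it captures "fire together, wire together."

```python
def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen the connection when pre- and postsynaptic cells fire
    together; weaken it when one fires without the other (an assumed,
    Putnam-flavored negative facilitation)."""
    if pre_active and post_active:
        return weight + rate      # fire together -> wire together
    elif pre_active != post_active:
        return weight - rate      # fire apart -> connection weakens
    return weight                 # neither fires -> no change

w = 0.0
# Repeated co-activation of cells A and B raises A's efficiency in firing B.
for pre, post in [(1, 1), (1, 1), (1, 0), (1, 1), (0, 1)]:
    w = hebbian_update(w, pre, post)
    print(f"pre={pre} post={post} -> weight={w:+.1f}")
```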
The reason replicating, or even simulating, brain function sounds like science fiction is that we're used to relatively simple machines--clocks, cars, washing machines, and serial computers. But, just as certain complex, extended molecules exhibit properties that we call life, so sufficient complexity and plasticity are likely to endow neural networks with properties essentially indistinguishable from the consciousness, thought, and volition that we regard as integral to selfhood.
We shouldn't sell machines short just because the only ones we've been able to build to date are "simple-minded." When machines are as complex as our brains, and work according to the same principles, they're very likely to be as awe-inspiring as we are, notwithstanding the fact that it will be we who've built them.
Who isn't awed by the Hubble telescope or the Large Hadron Collider at CERN? These, too, are "just" machines, and they're not even machines who think. (Here I revert to who-language. The point is that who-language and what-language work equally well. What is uncalled for is reserving who-language for humans and casting aspersions on other animals and machines as mere "whats." With each passing decade, that distinction will fade.)
The answer to "Who am I?" at the dawn of the age of smart machines is that, for the time being, we ourselves are the best model-building machines extant. The counter-intuitive realization that the difference between us and the machines we build is a bridgeable one has been long in coming, and we owe it to the clear-sighted tough love of many pioneers, including La Mettrie, David Hume, Mark Twain, John von Neumann, Donald Hebb, Peter Putnam, Douglas Hofstadter, Pierre Baldi, Susan Blackmore, David Eagleman, and a growing corps of neuroscientists.
Yes, it's not yet possible to build a machine that exhibits what we loosely refer to as "consciousness," but, prior to the discovery of the genetic code, no one could imagine cellular protein factories assembling every species on the tree of life, including one species--Homo sapiens--that would explain the tree itself.
The Self Is Dead. Long Live the Superself.