Are We Headed For The Matrix?
by Ray Kurzweil
Most viewers of The Matrix consider the more fanciful elements--intelligent computers, downloading information into the human brain, virtual reality indistinguishable from real life--to be fun as science fiction, but quite remote from real life. Most viewers would be wrong. As renowned computer scientist and entrepreneur Ray Kurzweil explains, these elements are very feasible and are quite likely to be a reality within our lifetimes.
The Matrix is set in a world one hundred years in the future, a world offering a seemingly miraculous array of technological marvels--sentient (if malevolent) programs, the ability to directly download capabilities into the human brain, and the creation of virtual realities indistinguishable from the real world. For most viewers these developments may appear to be pure science fiction, interesting to consider, but of little relevance to the world outside the movie theatre. But this view is shortsighted. In my view, these developments will become a reality within the next three to four decades.
I became a student of technology trends as an outgrowth of my career as an inventor. If you work on creating technologies, you need to anticipate where technology will be at points in the future so that your project is feasible and useful when it's completed, not just when you started. Over a few decades of making such forecasts, I have developed mathematical models of how technologies in different areas are developing.
This has given me the ability to invent with the materials of the future, rather than limiting my ideas to the resources we have today. Alan Kay has noted, "To anticipate the future we need to invent it." We can invent with future capabilities if we have some idea of what they will be.
Perhaps the most important insight I've gained--one that people are quick to agree with but very slow to truly internalize, with all of its implications--is the accelerating pace of technological change itself.
One Nobel laureate recently said to me: "There's no way we're going to see self-replicating nanotechnological entities for at least a hundred years." And yes, that's actually a reasonable estimate of how much work it will take. It'll take a hundred years of progress, at today's rate of progress, to get self-replicating nanotechnological entities. But the rate of progress is not going to remain at today's rate; according to my models, it's doubling every decade.
We will make a hundred years of progress, at today's rate, in 25 years. The next ten years will be like 20, and the following ten years will be like 40. The 21st century will therefore be like 20,000 years of progress--at today's rate. The twentieth century, as revolutionary as it was, did not see a hundred years of progress at today's rate; because we were still accelerating up to today's rate, it amounted to about 20 years of progress. The 21st century will thus bring about a thousand times more change and paradigm shift than the 20th century did.
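To make that arithmetic concrete, here is a minimal sketch in Python under the simplifying assumption stated above--that each decade delivers twice the equivalent progress of the decade before it, with the first decade of the century counting as roughly 20 years of progress at today's rate:

```python
# A back-of-the-envelope sketch of the accelerating-progress arithmetic,
# assuming each decade delivers twice the equivalent progress of the one
# before it ("the next ten years will be like 20, the following ten like 40").

def progress_in_century(first_decade_equivalent_years=20.0, decades=10):
    """Sum the 'years of progress at today's rate' over ten doubling decades."""
    total = 0.0
    decade_equivalent = first_decade_equivalent_years
    for _ in range(decades):
        total += decade_equivalent
        decade_equivalent *= 2  # the rate of progress doubles each decade
    return total

print(progress_in_century())  # ~20,460 -- roughly the 20,000 years cited above
```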
A lot of these trends stem from thinking about the implications of Moore's Law. Moore's Law refers to integrated circuits and famously states that the computing power available for a given price will double every twelve to twenty-four months. Moore's Law has become a synonym for the exponential growth of computing.
I've been thinking about Moore's Law and its context for at least twenty years. What is the real nature of this exponential trend? Where does it come from? Is it an example of something deeper and more profound? As I will show, the exponential growth of computing goes substantially beyond Moore's Law. Indeed, exponential growth goes beyond just computation, and applies to every area of information-based technology, technology that will ultimately reshape our world.
Observers have pointed out that Moore's Law is going to come to an end. According to Intel and other industry experts, we'll run out of space on an integrated circuit within fifteen years, because the key features will only be a few atoms in width. So will that be the end of the exponential growth of computing?
That's a very important question as we ponder the nature of the 21st century. To address it, I put 49 famous computers on an exponential graph. Down at the lower left-hand corner is the data processing machinery used in the 1890 American census (calculating equipment using punch cards). In 1940, Alan Turing developed a computer based on telephone relays that cracked the German Enigma code and gave Winston Churchill a transcription of nearly all the Nazi messages. Churchill needed to use these transcriptions with great discretion, because he realized that using them could tip off the Germans prematurely.
If, for example, he had warned Coventry authorities that their city was going to be bombed, the Germans would have seen the preparations and realized that their code had been cracked. However, in the Battle of Britain, the English flyers seemed to magically know where the German flyers were at all times.
In 1952, CBS used a more sophisticated computer, based on vacuum tubes, to predict the election of a U.S. president, Dwight D. Eisenhower. In the upper right-hand corner is the computer sitting on your desk right now.
One insight we can see on this chart is that Moore's Law was not the first but the fifth paradigm to provide exponential growth of computing power. Each vertical line represents the movement into a different paradigm: electro-mechanical, relay-based, vacuum tubes, transistors, integrated circuits. Every time one paradigm ran out of steam, another came along and picked up where it left off.
People are very quick to criticize exponential trends, saying that ultimately they'll run out of resources, like rabbits in Australia. But every time one particular paradigm reached its limits, another, completely different method continued the exponential growth. Engineers kept making vacuum tubes smaller and smaller but finally reached a point where they couldn't make them any smaller and still maintain the vacuum. Then transistors came along, and they are not just small vacuum tubes; they're a completely different paradigm.
Every horizontal level on this graph represents a hundredfold multiplication of computing power. Because the vertical axis is logarithmic, a straight line means exponential growth, and what we see here is that the rate of exponential growth is itself growing exponentially: we doubled computing power every three years at the beginning of the twentieth century, every two years in mid-century, and we're now doubling it every year.
It's obvious what the sixth paradigm will be: computing in three dimensions. After all, we live in a three-dimensional world and our brain is organized in three dimensions. The brain uses a very inefficient type of circuitry. Neurons are very large "devices," and they're extremely slow. They use electrochemical signaling that provides only about 200 calculations per second, but the brain gets its prodigious power from parallel computing resulting from being organized in three dimensions. Three-dimensional computing technologies are beginning to emerge. There's an experimental technology at MIT's Media Lab that has 300 layers of circuitry. In recent years, there have been substantial strides in developing three-dimensional circuits that operate at the molecular level.
Nanotubes, which are my favorite, are hexagonal arrays of carbon atoms that can be organized to form any type of electronic circuit. You can create the equivalent of transistors and other electrical devices. They're physically very strong, with 50 times the strength of steel. The thermal issues appear to be manageable. A one-inch cube of nanotube circuitry would be a million times more powerful than the computing capacity of the human brain.
Over the last several years, there has been a sea change in the level of confidence in building three-dimensional circuits and achieving at least the hardware capacity to emulate human intelligence. This has brought a more salient objection to the fore, namely that "Moore's Law may be true for hardware, but it's not true for software."
From my own four decades of experience with software development, I believe that is not the case. Software productivity is increasing very rapidly. As an example from one of my own companies: in 15 years, we went from a $5,000 speech-recognition system that recognized a thousand words poorly and could not handle continuous speech, to a $50 product with a hundred-thousand-word vocabulary that is far more accurate. That's typical for software products. With all of the effort going into new software development tools, software productivity has also been growing exponentially, albeit with a smaller exponent than we see in hardware.
Many other technologies are improving exponentially. When the genome project was started about 15 years ago, skeptics pointed out that, at the rate at which we could then scan the genome, it would take 10,000 years to finish the project. The mainstream view was that there would be improvements, but there was no way the project could be completed in 15 years. Yet the price-performance and throughput of DNA sequencing doubled every year, and the project was completed in less than 15 years. In twelve years, the cost of sequencing a DNA base pair went from $10 to a tenth of a cent.
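As a rough sanity check on these two examples--the speech-recognition systems above and DNA sequencing--the sketch below computes the constant annual improvement factor implied by the overall gains quoted in the text (the input figures are simply restated from the preceding paragraphs):

```python
# Implied annual improvement factor for a given overall gain over a given
# number of years, applied to the two examples quoted in the text.

def annual_factor(overall_improvement, years):
    """Constant yearly multiplier that compounds to the overall improvement."""
    return overall_improvement ** (1.0 / years)

# Speech recognition: $5,000 -> $50 (100x cheaper) and 1,000 -> 100,000 words
# (100x larger vocabulary) over 15 years, i.e. ~10,000x better price-performance.
print(annual_factor(100 * 100, 15))   # ~1.85x per year

# DNA sequencing: $10 -> $0.001 per base pair (10,000x cheaper) in 12 years.
print(annual_factor(10 / 0.001, 12))  # ~2.15x per year, i.e. roughly a yearly doubling
```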
Even longevity has been improving exponentially. In the 18th century, every year we added a few days to human life expectancy. In the 19th century, every year we added a few weeks. We're now adding about 120 days every year to human life expectancy. And with the revolutions now in an early stage in genomics, therapeutic cloning, rational drug design, and the other biotechnology transformations, many observers, including myself, anticipate that within ten years we'll be adding more than a year, every year. So, if you can hang in there for another ten years, we'll get ahead of the power curve and be able to live long enough to see the remarkable century ahead.
Miniaturization is another very important exponential trend. We're making things smaller by a factor of about 5.6 per linear dimension per decade. Bill Joy, in the essay following this one, recommends, among other things, that we essentially forgo nanotechnology. But nanotechnology is not a single unified field worked on only by nanotechnologists. It is simply the inevitable end result of the pervasive trend toward making things smaller, which we've been pursuing for many decades.
Above is a chart of computing's exponential growth, projected into the 21st century. Right now, your typical $1000 PC is somewhere between an insect brain and a mouse brain. The human brain has about 100 billion neurons, with about 1,000 connections from one neuron to another. These connections operate very slowly, on the order of 200 calculations per second, but 100 billion neurons times 1,000 connections yields 100 trillion connections operating in parallel. Multiplying that by 200 calculations per second gives 20 million billion calculations per second, or, in computing terminology, 20 billion MIPS. We'll have 20 billion MIPS for $1000 by the year 2020.
Now that won't automatically give us human levels of intelligence, because the organization, the software, the content and the embedded knowledge are equally important. Below I will address the scenario in which I envision achieving the software of human intelligence, but I believe it is clear that we will have the requisite computing power. By 2050, $1000 of computing will equal one billion human brains. That might be off by a year or two, but the 21st century won't be wanting for computational resources.
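For readers who want to retrace the arithmetic in the two preceding paragraphs, here is a short sketch; the neuron counts and calculation rates are the estimates quoted above, not independent measurements:

```python
# Reproducing the brain-capacity arithmetic from the text.

neurons = 100e9             # ~100 billion neurons
connections_per_neuron = 1_000
calcs_per_second = 200      # per connection, per the estimate in the text

total_connections = neurons * connections_per_neuron        # ~100 trillion
brain_calcs_per_sec = total_connections * calcs_per_second  # ~2e16
brain_mips = brain_calcs_per_sec / 1e6                      # ~20 billion MIPS

print(f"{brain_calcs_per_sec:.0e} calculations/sec, {brain_mips:.0e} MIPS")

# Going from one brain-equivalent per $1000 in 2020 to a billion
# brain-equivalents per $1000 in 2050 implies price-performance doubling
# roughly once a year, since 2**30 is about 1.07 billion.
print(2 ** (2050 - 2020))   # ~1.07e9
```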
Now let's consider the virtual-reality framework envisioned by The Matrix--a virtual reality that is indistinguishable from true reality. This will be feasible, but I do quibble with one point: the thick cable entering Neo's brainstem made for a powerful visual, but it's unnecessary; all of these connections can be wireless. Let's go out to 2029 and put together some of the trends I've discussed. By that time, we'll be able to build nanobots, microscopic robots that can go inside your capillaries, travel through your brain, and scan it from inside. We can almost build these kinds of circuits today. We can't yet make them quite small enough, but we can make them fairly small.
The Department of Defense is developing tiny robotic devices called "Smart Dust." The current generation is one millimeter across--still too big for this scenario--but these tiny devices can be dropped from a plane and can find their positions with great precision. You can have many thousands of them on a wireless local area network. They can then take visual images, communicate with each other, coordinate, send messages back, act as nearly invisible spies, and accomplish a variety of military objectives.
We are already building blood-cell-sized devices that go inside the bloodstream, and there are four major conferences on the topic of "bioMEMS" (biological MicroElectroMechanical Systems). The nanobots I am envisioning for 2029 will not necessarily require their own navigation. They could move passively through the bloodstream and, as they pass by different neural features, communicate with them in the same way that we now communicate with the different cells of a cellular phone system.
Brain-scanning resolution, speeds, and costs are all exploding exponentially. With every new generation of brain scanning we can see with finer and finer resolution. There's a technology today that allows us to view many of the salient details of the human brain. Of course, there's still no full agreement on what those details are, but we can see brain features with very high resolution, provided the scanning tip is right next to the features. We can scan a brain today and see the brain's activity with very fine detail; you just have to move the scanning tip all throughout the brain so that it's in close proximity to every neural feature.
Now how are we going to do that without making a mess of things? The answer is to send the scanners inside the brain. By design, our capillaries travel by every interneuronal connection, every neuron and every neural feature. We can send billions of these scanning robots, all on a wireless local area network, and they would all scan the brain from inside and create a very high-resolution map of everything that's going on.
What are we going to do with the massive database of neural information that develops? One thing we will do is reverse-engineer the brain, that is, understand the basic principles of how it works. This is an endeavor we have already started. We already have high-resolution scans of certain areas of the brain. The brain is not one organ; it comprises several hundred specialized regions, each organized differently. We have scanned certain areas of the auditory and visual cortex, and have used this information to design more intelligent software. Carver Mead at Caltech, for example, has developed powerful, digitally controlled analog chips based on these biologically inspired models from the reverse engineering of portions of the visual and auditory systems. His visual sensing chips are used in high-end digital cameras.
We have demonstrated that we are able to understand these algorithms, but they're different from the algorithms that we typically run on our computers. They're not sequential and they're not logical; they're chaotic, highly parallel, and self-organizing. They have a holographic nature in that there's no chief-executive-officer neuron. You can eliminate any of the neurons, cut any of the wires, and it makes little difference--the information and the processes are distributed throughout a complex region.
Based on these insights, we have already developed a number of biologically inspired models. This is the field I work in, using techniques such as evolutionary "genetic algorithms" and "neural nets." Today's neural nets are mathematically simplified, but as we gain a more powerful understanding of the principles of operation of different brain regions, we will be in a position to develop much more powerful, biologically inspired models. Ultimately we will be able to create and recreate these processes, retaining their inherently massively parallel, digitally controlled analog, chaotic, and self-organizing properties. We will be able to recreate the types of processes that occur in the hundreds of different brain regions, and create entities--they actually won't be in silicon; they'll probably use something like nanotubes--that have the complexity, richness, and depth of human intelligence.
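To give a concrete flavor of one of these biologically inspired techniques, here is a toy genetic algorithm in Python. The target bit string, population size, and mutation rate are arbitrary illustrative choices--this is not a model of any system described in this essay--but it shows the essential loop of selection, recombination, and mutation from which a solution self-organizes without being explicitly programmed:

```python
import random

# Toy genetic algorithm: evolve a population of bit strings toward a target
# pattern. The target and parameters are arbitrary illustrations; the point
# is the selection/crossover/mutation loop, with no explicit program for
# the solution itself.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
POP_SIZE, MUTATION_RATE, GENERATIONS = 60, 0.02, 200

def fitness(candidate):
    # Number of bits matching the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def crossover(a, b):
    # Single-point recombination of two parents.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(candidate):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in candidate]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[: POP_SIZE // 2]  # survival of the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(generation, population[0], fitness(population[0]))
```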
Our machines today are still a million times simpler than the human brain, which is one key reason that they still don't have the endearing qualities of people. They don't yet have our ability to get the joke, to be funny, to understand people, to respond appropriately to emotion, or to have spiritual experiences. These are not side effects of human intelligence, or distractions; they are the cutting edge of human intelligence. It will require a technology of the complexity of the human brain to create entities that have those kinds of attractive and convincing features.
Getting back to virtual reality, let's consider a scenario involving a direct connection between the human brain and these nanobot-based implants. There are a number of different technologies that have already been demonstrated for communicating in both directions between the wet, analog world of neurons and the digital world of electronics. One such technology, called a neuron transistor, provides this two-way communication. If a neuron fires, this neuron transistor detects that electromagnetic pulse, so that's communication from the neuron to the electronics. It can also cause the neuron to fire or prevent it from firing.
For full-immersion virtual reality, we will send billions of these nanobots to take up positions by every nerve fiber coming from all of our senses. If you want to be in real reality, they sit there and do nothing. If you want to be in virtual reality, they suppress the signals coming from your real senses and replace them with the signals you would have been receiving if you were in the virtual environment.
In this scenario, we will have virtual reality from within, and it will be able to recreate all of our senses. These will be shared environments, so you can go there with one person or many people. Going to a Web site will mean entering a virtual-reality environment encompassing all of our senses--and not just the five senses, but also emotions, sexual pleasure, and humor. There are actually neurological correlates of all of these sensations and emotions, which I discuss in my book The Age of Spiritual Machines.
For example, surgeons conducting open-brain surgery on a young woman who was awake found that stimulating a particular spot in her brain would cause her to laugh. At first they thought they were simply triggering an involuntary laugh reflex. But they discovered that they were stimulating the perception of humor: whenever they stimulated this spot, she found everything hilarious. "You guys are just so funny standing there" was a typical remark.
Using these nanobot-based implants, you will be able to enhance or modify your emotional responses to different experiences. That can be part of the overlay of these virtual-reality environments. You will also be able to have different bodies for different experiences. Just as people today project their images from Web cams in their apartments, people will beam their whole flow of sensory and even emotional experiences out onto the Web, so you can, à la the plot concept of the movie Being John Malkovich, experience the lives of other people.
Ultimately, these nanobots will expand human intelligence and our abilities and faculties in many different ways. Because they're communicating with each other wirelessly, they can create new neural connections. These can expand our memory, cognitive faculties, and pattern-recognition abilities. We will expand human intelligence by expanding its current paradigm of massive interneuronal connections as well as through intimate connection to nonbiological forms of intelligence.
We will also be able to download knowledge, something that machines can do today and that we are unable to do. For example, we spent several years training one research computer to understand human speech using biologically inspired models--neural nets, Markov models, genetic algorithms, self-organizing patterns--that are based on our crude current understanding of self-organizing systems in the biological world. A major part of the engineering project was collecting thousands of hours of speech from different speakers in different dialects, exposing this to the system, and having it try to recognize the speech. When it made mistakes, we had it adjust automatically and self-organize to better reflect what it had learned.
Over many months of this kind of training, it made substantial improvements in its ability to recognize speech. Today, if you want your personal computer to recognize human speech, you don't have to spend years training it in the same painstaking way that we train every human child. You can just load the evolved models--this is called "loading the software." So machines can share their knowledge. We don't have quick downloading ports on our brains. But as we build nonbiological analogs of the neurons, interconnections, and neurotransmitter levels where our skills and memories are stored, we won't leave out the equivalent of downloading ports. We'll be able to download capabilities as easily as Trinity downloads the program that allows her to fly a B-212 helicopter.
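A minimal sketch of what "loading the software" means in practice: one program learns a trivial task the slow way, saves its learned parameters, and a second, untrained program simply loads them. The task (a perceptron learning logical AND) and the file name are hypothetical illustrations, not related to the speech-recognition systems described above:

```python
import json

# Toy illustration of "loading the software": one machine learns a task the
# slow way, then a second machine simply loads the learned weights.
# The AND task and the file name are illustrative assumptions only.

class Perceptron:
    def __init__(self, weights=None, bias=0.0):
        self.weights, self.bias = weights or [0.0, 0.0], bias

    def predict(self, x):
        return 1 if sum(w * xi for w, xi in zip(self.weights, x)) + self.bias > 0 else 0

    def train(self, examples, epochs=20, lr=0.1):
        # Classic perceptron learning rule: nudge the weights after each mistake.
        for _ in range(epochs):
            for x, target in examples:
                error = target - self.predict(x)
                self.weights = [w + lr * error * xi for w, xi in zip(self.weights, x)]
                self.bias += lr * error

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

teacher = Perceptron()
teacher.train(data)                                    # the slow, "childhood" way
with open("and_model.json", "w") as f:                 # share the learned knowledge
    json.dump({"weights": teacher.weights, "bias": teacher.bias}, f)

with open("and_model.json") as f:                      # a fresh machine just loads it
    params = json.load(f)
student = Perceptron(params["weights"], params["bias"])
print([student.predict(x) for x, _ in data])           # [0, 0, 0, 1]
```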
When you talk to somebody in the year 2040, you will be talking to someone who may happen to be of biological origin but whose mental processes are a hybrid of biological and electronic thinking processes, working intimately together. Instead of being restricted, as we are today, to a mere hundred trillion connections in our brain, we'll be able to expand substantially beyond this level. Our biological thinking is flat: the human race performs an estimated 10^26 calculations per second, and that biologically determined figure is not going to grow. But nonbiological intelligence is growing exponentially. The crossover point, according to my calculations, comes in the 2030s; some people call this the Singularity.
As we get to 2050, the bulk of our thinking--which in my opinion will still be an expression of human civilization--will be nonbiological. I don't believe that the Matrix scenario of malevolent artificial intelligences in mortal conflict with humans is inevitable, because the nonbiological portion of our thinking will still be human thinking: it will be derived from human thinking. Its programming will be created by humans, or by machines that are created by humans, or by machines that are based on reverse engineering of the human brain or on downloads of human thinking, or through one of many other intimate connections between human and machine thinking that we can't even contemplate today.
A common reaction to this is that it is a dystopian vision, because I am "replacing humanity with the machines." But that reaction reflects a prejudice most people have against machines. Most observers don't truly understand what machines are ultimately capable of, because all the machines they've ever "met" are very limited compared to people. That won't be true of machines circa 2030 and 2040. When machines are derived from human intelligence and are a million times more capable, we'll have a different respect for them, and there won't be a clear distinction between human and machine intelligence. We will effectively merge with our technology.
We are already well down this road. If all the machines in the world stopped today, our civilization would grind to a halt. That wasn't true as recently as thirty years ago. In 2040, human and machine intelligence will be deeply and intimately melded. We will become capable of far more profound experiences of many diverse kinds. We'll be able to "recreate the world" according to our imaginations and enter environments as amazing as the one in The Matrix--but, hopefully, worlds more open to creative human expression and experience.