Engineered Intelligence: Creating a Successor Species
World Healthcare Innovation and Technology Conference
Keynote Address of Congressman Brad Sherman
December 10, 2007 (Updated October 2013)
I believe that the impact of science on this century will be far greater than the enormous impact science had on the last century. As futurist Christine Peterson notes: “If someone is describing the future 30 years from now and they paint a picture that seems like it is from a science fiction movie, they might be wrong. But, if someone is describing the future a generation from now and they paint a picture that doesn’t look like a science fiction movie, then you know they are wrong.” We are going to live in a science fiction movie; we just don’t know which one.
There is one issue that I think is more explosive than even the spread of nuclear weapons: engineered intelligence. By that I mean the efforts of computer engineers and bio-engineers who may create intelligence beyond that of a human being. In testimony before the House Science Committee,1 the consensus of the experts was that in roughly 25 years we would have a computer that passed the Turing Test2 and, more importantly, exceeded human intelligence.
As we develop more intelligent computers, we will find them useful tools in creating ever more intelligent computers, a positive feedback loop. I don’t know whether we will create the maniacal HAL from 2001, or the earnest Data from Star Trek — or perhaps both.
There are those who say don’t worry, even if a computer is intelligent and malevolent — it is in a box and it cannot affect the world. But I believe that there are those of our species who would lend a hand to Beelzebub in return for a good stock tip.
I do draw solace from the fact that a computer that is intelligent, or even self-aware, is not necessarily ambitious. By ambitious, I mean possessing a survival instinct together with a desire to affect the environment so as to ensure survival, and usually a desire to propagate or expand.
My washing machine does not seem to care whether I turn it off or not. My pet mouse does seem to care. So even a computer possessing great intelligence may simply have no ambition, survival instinct, or interest in affecting the world.
DARPA3 is the government agency on the cutting edge of supercomputer research. I have urged DARPA to develop computer systems designed to maximize the computer’s utility, while avoiding self-awareness, or at least ambition.
Bio-engineers should be able to start with human DNA and create a 2,000-pound mammal with a 300-pound brain designed to beat your grandkids on the LSAT. No less troubling, they might start with canine DNA and create a mammal with sub-human intelligence and no civil rights.
DNA is inherently ambitious. Those microbes which didn’t seek to survive or replicate, didn’t. Birds seem to care whether they or their progeny survive, and they seek to affect their environment to achieve that survival.
In any case, you have the bio-engineers and the computer engineers both working toward new levels of intelligence. I believe in our lifetime we will see new species possessing intelligence which surpasses our own.
The last time a new higher level of intelligence arose on this planet was roughly 50,000 years ago. It was our own ancestors, who then said hello to the previously most intelligent species, Neanderthals. It did not work out so well for the Neanderthals.
I used to view this as a contest between the bio-engineers and the computer engineers (or if you use the cool new lingo, wet nanotechnology and dry nanotechnology), in an effort to develop a new species of superior intelligence. I felt that the last decision that humans would make would be whether our successors are carbon-based or silicon-based:4 the product of bio-engineering or of computer engineering.
Now I believe we are most likely to see combinations that will involve nature, computer engineering, and bio-engineering: humans with pharmaceutical intelligence boosters; DNA enhancements; computer-chip implants; or all three. First, this will be used to cure disease, then to enhance human capacity. The enhanced human will precede the transhuman.
Now how should we react to all of this? It is important that we benefit from science, even as we consider its more troubling implications. I chaired the House Subcommittee on Nonproliferation, which deals with the only other technologies that pose an existential threat to humankind, namely the proliferation of nuclear and biological weapons.
The history of nuclear technology is instructive. On August 2, 1939, Einstein sent Roosevelt a letter saying a nuclear weapon was possible; six years later, nuclear technology literally exploded onto the world scene. Only after society saw the negative effects of nuclear technology did we see the prospects for nuclear power and nuclear medicine.
The future of engineered intelligence will be different. The undeniable benefits of computer and DNA research will arrive long before the problematic possibilities. Their introduction will be gradual, not explosive. And fortunately, we will have far more than six years to consider the implications — unless we choose to squander the next few decades. My fear is that our philosophers, ethicists, and society at large will ignore the issues that will inevitably present themselves until . . . they actually present themselves. And these issues require more than a few years of thought.5
I am confident that if we plan ahead we can obtain the utility of supercomputers, and the benefits of bio-engineering, without creating new levels of intelligence. We can then pause and decide whether we in fact wish to create a new intelligent species or two.
Finally, I would quote Oliver Wendell Holmes, who said 100 years ago, “I think it not improbable that man, like the grub that prepares a chamber for the winged thing it never has seen but is to be — that man may have cosmic destinies that he does not understand.”6
Likewise, it is possible that our grandchildren — or should I say “our successors” — will have less resemblance to us than a butterfly has to a caterpillar. I don’t know whether to cry or rejoice, but I do know that our best minds in philosophy, science, ethics and theology ought to be focused on this issue. Now.
1. On April 9, 2003, the U.S. House of Representatives, Committee on Science, held a hearing titled The Societal Implications of Nanotechnology. The transcript is available at https://commdocs.house.gov/committees/science/hsy86340.000/hsy86340_0f.htm
2. A test to determine whether a computer can demonstrate intelligence matching a human’s. A human judge sends text-only messages to both a computer and another human located in a different room. If the judge cannot determine whether the responses are composed by the computer or by the human, then the computer has passed the Turing Test. It should also be noted that one route to developing a computer with human intelligence is by reverse-engineering the human brain, perhaps using nanobots.
3. The Defense Advanced Research Projects Agency (DARPA).
4. While I realize that supercomputers may not use chips with a silicon substrate, I still prefer to call computer chips “silicon.”
5. This issue is discussed in Jamie Metzl, “Brave New World War,” Democracy: A Journal of Ideas, Issue 8, Spring 2008.
6. Oliver Wendell Holmes. “Law and the Court,” speech at the Harvard Law School Association of New York, 15 February 1913.