[Figure: Progress in machine classification of images: the AI error rate by year. The red line marks the error rate of a trained human.]
“I think it very likely—in fact, inevitable—that biological intelligence is only a transitory
phenomenon… If we ever encounter extraterrestrial intelligence, I believe it is very likely to be postbiological in nature …” Paul Davies
Maddie Stone wrote a provocative piece today at Motherboard: “The Dominant Life Form in the Cosmos Is Probably Superintelligent Robots.” It begins in dramatic form:
If and when we finally encounter aliens, they probably won’t look like little green men, or spiny insectoids. It’s likely they won’t be biological creatures at all, but rather, advanced robots that outstrip our intelligence in every conceivable way. While scores of philosophers, scientists and futurists have prophesied the rise of artificial intelligence and the impending singularity, most have restricted their predictions to Earth. Fewer thinkers—outside the realm of science fiction, that is—have considered the notion that artificial intelligence is already out there, and has been for eons.
Stone notes that prominent thinkers who espouse this view, that the dominant intelligence in the cosmos is probably artificial, include: Seth Shostak, director of the SETI Institute’s Center for SETI Research; the esteemed astrobiologist Paul Davies; Library of Congress Chair in Astrobiology Steven Dick; and the philosopher Susan Schneider. Her recent paper, “Alien Minds,” describes why alien life forms are likely to be synthetic, not biological.
As mounting evidence reveals potentially habitable worlds strewn across the galaxy, it appears less and less likely that we are alone. But what would encounters with intelligent life forms be like? Schneider answers:
Everything about their cognition—how their brains receive and process information, what their goals and incentives are—could be vastly different from our own … Astrobiologists need to start thinking about the possibility of very different modes of cognition.
Hence the case for artificial superintelligence: if we are to communicate with such beings, or to survive an encounter with them, we may need to develop superintelligence of our own.
There’s an important distinction here from just ‘artificial intelligence’ … I’m not saying that we’re going to be running into IBM processors in outer space. In all likelihood, this intelligence will be way more sophisticated than anything humans can understand.
Advanced civilizations probably moved quickly from radio to computers to AI to superintelligence, at which point biological brains would have become obsolete. Schneider points to the rapidly expanding world of brain-computer interface technology, including DARPA’s ElectRx neural implant program, as evidence that the singularity is near. She predicts that we will upgrade our minds with technology, eventually switching to synthetic hardware. “It could be that by the time we actually encounter other intelligences, most humans will have substantially enhanced their brains,” Schneider said.
Still, we may be too late. If we encounter other intelligences, they will probably be millions of years older than we are and, according to many astronomers, already artificial superintelligences. As Seth Shostak puts it:
The way you reach this conclusion is very straightforward … Consider the fact that any signal we pick up has to come from a civilization at least as advanced as we are. Now, let’s say, conservatively, the average civilization will use radio for 10,000 years. From a purely probabilistic point of view, the chance of encountering a society far older than ourselves is quite high.
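Shostak’s probabilistic point can be made concrete with a toy simulation. The numbers below are illustrative assumptions, not figures from the article: take his “conservative” 10,000-year radio era at face value, assume humanity is roughly 100 years into its own, and assume any civilization we happen to detect is at a uniformly random point in its broadcast lifetime.

```python
import random

# Assumed, for illustration only: Shostak's "conservative" broadcast span
# and a rough figure for how long humanity has used radio.
RADIO_LIFETIME_YEARS = 10_000
OUR_RADIO_AGE_YEARS = 100

random.seed(0)
trials = 100_000
older = 0
for _ in range(trials):
    # A detected civilization is caught at a uniformly random point
    # within its radio era.
    age = random.uniform(0, RADIO_LIFETIME_YEARS)
    if age > OUR_RADIO_AGE_YEARS:
        older += 1

print(f"Fraction of detected civilizations older than us: {older / trials:.2%}")
# Under these assumptions, roughly 99% of the civilizations we could
# hear from are further into their radio era than we are.
```

Under these (admittedly crude) assumptions, almost every civilization we could detect would be far ahead of us, which is the intuition behind Shostak’s claim.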
Our intelligence may be trivial compared to other intelligences in the universe. Moreover, there is good reason to think these intelligences would be conscious, independent of the substrate on which consciousness runs:
I don’t see any good reason to believe an artificial superintelligence couldn’t possess consciousness … I believe the brain is inherently computational—we already have computational theories that describe aspects of consciousness, including working memory and attention … Given a computational brain, I don’t see any good argument that silicon, instead of carbon, can’t be an excellent medium for experience.
Granted, the idea that the heavens are teeming with superintelligent AIs is speculative, but it has practical consequences, as Shostak notes:
So far, we’ve pointed antennas at stars that might have planets that might have breathable atmospheres and oceans and so forth … But if we’re correct that the dominant intelligence in the cosmos is artificial, then does it have to live on a planet with an ocean? … All artificial life forms would need is raw materials … They might be in deep space, hovering around a star, or feeding off a black hole’s energy at the center of the galaxy.
How then might superintelligent aliens view us? Will they see us as brothers or as biofuel? Schneider doesn’t think they’ll care: “If they were interested in us, we probably wouldn’t be here … My gut feeling is their goals and incentives are so different from ours, they’re not going to want to contact us.” Shostak agrees: “We’re just too simplistic, too irrelevant. You don’t spend a whole lot of time hanging out reading books with your goldfish. On the other hand, you don’t really want to kill the goldfish, either.”
The idea that aliens will be uninterested in us differs from Stephen Hawking’s warning that advanced aliens might want to destroy us. Either way, we should probably continue to upgrade our intelligence. That way, whether we face superintelligences bent on our destruction, asteroids hurtling toward Earth, or runaway climate change, our enhanced intelligence will improve our chances of survival.