Prototype humanoid robots at the Intelligent Robotics Laboratory in Osaka, Japan
There has been a great deal of discussion about the rise of intelligent machines over the last year. Here are four recent books on the subject, with a brief description of each.
Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat:
At the heart of filmmaker Barrat’s book is the prophecy of the British mathematician I.J. Good, a colleague of Alan Turing. Good reasoned that once machines became more intelligent than humans, they would design still more intelligent machines, leading to an intelligence explosion that would leave humans far behind. Is this true? What Barrat finds is that almost half of the experts in the field expect intelligent machines within 15 years, and a large majority expect them shortly thereafter. Barrat concludes that this intelligence explosion will lead almost immediately to the singularity, although we have no idea what these machines will do.
Eclipse of Man: Human Extinction and the Meaning of Progress, by Charles T. Rubin:
The political philosopher Rubin’s book explores the roots of our desire to use technology to alter the human condition. This urge has served humans well in the past, but Rubin argues that technologically minded idealists have come to see humanity itself as the problem. This is a mistake, he believes: allowing machines to make our decisions for us is dangerous. Instead of improving us, our technology might supplant us, acting as a hostile alien invader.
Smarter Than Us: The Rise of Machine Intelligence, by Stuart Armstrong:
Armstrong is a fellow of the Future of Humanity Institute at Oxford who has thought hard about how a superintelligence could be made “friendly.” He argues that it would be difficult to communicate with alien beings that have computer minds. We might ask them to rid the planet of violence, and they would rid the planet of us! The point is that values are hard to explain, since they rest on, among other things, common sense and unstated assumptions. Turning those values into programmable code would be extraordinarily challenging, and to avoid catastrophe we could not afford to make mistakes.
In Our Own Image: Will Artificial Intelligence Save or Destroy Us?, by George Zarkadakis:
Most of our ideas about what it would be like to live with superintelligences come from science fiction, says AI researcher George Zarkadakis. There is little doubt that science fiction stories and metaphors influence us, and as a result we tend to anthropomorphize our technology in order to make sense of it. We imagine robots like Schwarzenegger’s Terminator; we imagine them with human qualities. But intelligent machines won’t be human: they will not share our evolutionary history or have brains like ours. So who knows what their goals and values will be, or how they will regard humans? Perhaps they won’t need us.
All these books worry that intelligent machines might destroy us, if only inadvertently. Another problem is that many AI researchers aren’t trying to create friendly AIs at all; instead they are developing robots for war, which are unfriendly by design. Surely things can go wrong when we create machines built to kill humans. No wonder all of these authors believe we should be worried.