Four Recent Books About The Rise of the Machines

Prototype humanoid robots at the Intelligent Robotics Laboratory in Osaka, Japan

(This article was reprinted in Humanity+ Magazine, May 5, 2015)

There has been a lot of discussion about the rise of intelligent machines in the last year. Here are four recent books on the subject, with a brief description of each.

Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat:

At the heart of filmmaker Barrat’s book is the prophecy of the British mathematician I.J. Good, a colleague of Alan Turing. Good reasoned that once machines became more intelligent than humans, those machines would design still better machines, leading to an intelligence explosion that would leave humans far behind. Is this plausible? Barrat finds that almost half of the experts in the field expect intelligent machines within 15 years, and a large majority expect them shortly thereafter. Barrat concludes that this intelligence explosion will lead almost immediately to the singularity, although we have no idea what these machines will do.

Eclipse of Man: Human Extinction and the Meaning of Progress, by Charles T. Rubin:

The political philosopher Rubin explores the roots of our desire to use technology to alter the human condition. This urge has aided humans greatly in the past, but Rubin believes that technologically minded idealists now regard humanity itself as a problem. This is a mistake, he argues, and allowing machines to make our decisions is dangerous. Instead of improving us, our technology might supplant us, like a hostile alien invader.

Smarter Than Us: The Rise of Machine Intelligence, by Stuart Armstrong:

Armstrong is a fellow of the Future of Humanity Institute at Oxford who has thought hard about how a superintelligence could be made “friendly.” He argues that it would be difficult to communicate with an alien being that has a computer mind. We might ask it to rid the planet of violence, and it might rid the planet of us! The point is that values are hard to explain, since they rest on, among other things, common sense and unstated assumptions. To turn those values into programming code would be extraordinarily challenging, and to avoid catastrophe, we could not afford mistakes.

In Our Own Image: Will Artificial Intelligence Save or Destroy Us?, by George Zarkadakis:

Most of our ideas about what it would be like to live with superintelligences come from science fiction, says the AI researcher George Zarkadakis. There can be little doubt that science fiction stories and metaphors have influenced us; as a result, we tend to anthropomorphize our technology in order to make sense of it. We imagine robots like Schwarzenegger’s Terminator; we imagine robots and superintelligences with human qualities. But intelligent machines won’t be human: they will not share our evolutionary history, and they will not have brains like ours. Who knows what their goals and values will be, or how they will regard humans? Perhaps they will have no need for us.

All these books worry that intelligent machines might destroy us, even if only inadvertently. Moreover, many AI researchers aren’t even concerned with the problem of creating friendly AI. In fact, a large part of AI research is dedicated to developing robots for war, that is, to developing unfriendly AI. Things could surely go wrong if the machines we mostly create are machines that kill humans. All of these authors believe we should be worried.
