What Are Slaughterbots?

The above video was produced by

While lethal fully autonomous weapons systems, or killer robots, aren’t yet able to select and attack targets without human control, a number of countries are developing such devices. And a number of organizations, including the Future of Life Institute, Human Rights Watch, and the International Committee for Robot Arms Control, have warned against their development.

For those interested, you can sign an open letter against weaponizing AI at: http://autonomousweapons.org/

Also, Professor Stuart Russell of the computer science department at UC-Berkeley gave a TED Talk a few months ago which explored the issue:

And the science fiction writer Daniel Suarez explored the same theme in this TED talk:

Finally, consider that the political party in the USA most associated with toughness and defense is the same one that is anti-science and anti-intellectual. It doesn’t promote our security to undermine the science and technology that is the source of US military power. If other countries develop AI, robots, and autonomous weapons first, then nuclear weapons may be obsolete. So it is counterproductive for a country that wants to dominate others or defend itself to make it almost impossible for bright foreign students to get H-1B visas. Of course, the primary enemies of the USA today are domestic ones.


One thought on “What Are Slaughterbots?”

  1. I’ve done a lot of software development, and I can assure you that it is almost impossible to design foolproof software. For simple systems, such as Internet security (believe me, that’s a fairly simple problem), very few operations actually work flawlessly; hackers seem to wriggle into all sorts of places they shouldn’t be able to get to. For a wildly complex system like an autonomous lethal weapon, I will state flatly that it is impossible to design a foolproof system. And what if somebody hacks the weapon system?

    One of the most gorily funny examples of this problem was presented way back in the 1980s in the first RoboCop movie. When they demonstrated their first autonomous system, there was a bug and… well, watch it for yourself:

    “Dick, I’m very disappointed…”
