Offsetting lethal autonomy with empathy | CityAM


curated by | December 27, 2014

“The benefits of giving artificial intelligence a bigger role in the military are obvious – on the front line, machines, rather than human lives, would be put at risk (in the case of the attacker, at least), and the potential damage inflicted by a highly efficient and powerful machine far exceeds that inflicted by a human. But with this impact comes greater risk. What if a robot is programmed incorrectly? It could result in thousands of human lives being ended by accident, or in the unintended destruction of huge amounts of expensive infrastructure.”

Source: www.cityam.com

This is a decent overview of the risks of investing lethal machines with autonomy, as well as the risks (vulnerability to hacking) of not doing so. The argument for the proposed solution is sketchy, but the piece provides enough information for the reader to locate primary sources.

Via https://twitter.com/DeepStuff
