Offsetting lethal autonomy with empathy | CityAM
“The benefits of giving artificial intelligence a bigger role in the military are obvious – on the front line, machines, rather than human lives, would be put at risk (in the case of the attacker, at least), and the potential damage inflicted by a highly efficient and powerful machine far exceeds that inflicted by a human. But with this impact comes greater risk. What if a robot is programmed incorrectly? It could result in thousands of human lives being ended by accident, or in the unintended destruction of huge amounts of expensive infrastructure.”
This is a decent overview of the risks of investing lethal machines with autonomy, and also of the risks (chiefly vulnerability to hacking) of declining to do so. The argument for the article's proposed solution is sketchy, but it provides enough information for the reader to locate the primary sources.