Robot machines have been shaping the future of war since the first siege engines appeared in ancient times (I like to think the Trojan Horse was motorized). Now, with technology significantly extending our military reach and impact, small surgical-strike warfare is becoming more assured, though not, I'm glad to say, cheaper. Humans are still the best cost-to-benefit weapons for the battlefield and will be for quite a while. That offers hope, as the personal risks, regardless of ideology, become universally recognized as too damn high.
What also reassures me comes from my years of robot gaming: when autonomous robots battle each other, people just don't care, because no human egos are bolstered or defeated by the outcome. Machines beating on machines holds no emotional connection for us, and that may be where tolerance for robot soldiers finds its ceiling.
What will be horrific is the first time a humanoid soldier machine is broadcast hurting or killing humans in a war setting. When I worked in vision technology, we were asked how a machine would tell the difference between a group of soldiers and a troop of Boy Scouts from a distance. It couldn't, which is why human judgment is still the means by which a trigger is pulled. Even so, when that video broadcasts, everyone in the world will know that our place as the dominant species has just become … less so.
But the question comes down to this: who gets blamed when a robot commits an atrocity? Without human frailty to take the blame on site, is it the remote pilots, the generals, the politicians? Sadly, the precedent for blame-free robot conflict is being settled by Beltway lawyers now. A new cold war where you'll be able to legally and blamelessly use a killer-drone app, though you'll still go to jail for downloading a movie.
Because that’d be immoral.