This concludes the ShanghAI Lecture series of 2012. After a wrap-up of the class, we announce the winners of the EmbedIT and NAO competitions and close with an outlook on the future of the ShanghAI Lectures.
Then there are three guest lectures: Tamás Haidegger (Budapest University of Technology and Economics) on surgical robots, Aude Billard (EPFL) on how the body shapes the way we move (and how humans can shape the way robots move), and Jamie Paik (EPFL) on soft robotics.
For the next week, Robohub will host a special focus on robots and jobs, featuring original articles from leading experts in the fields of robotics and automation. The goal of the series is to explore the shifting employment landscape as robots become more prevalent in the workplace, and we’ve got a great lineup!
In this episode we hear how the Spanish robotics startup Adele is creating a marketplace for robotics software. Through their platform, robot developers can buy software components for their robots, and software developers can sell their code, in a practical way. Examples of these software components, which Adele calls "sparks", include speech recognition, synthetic speech, vision systems and user interface components. Their flagship project FIONA (Framework for Interactive-services Over Natural-conversational Agents) allows users to create intelligent and interactive virtual avatars.
So if you really think about it, today’s comic brings up several interesting issues regarding robots and war. One that comes to mind, though it is certainly nowhere near a reality, is this: if robots were self-aware, would that change the morality of having them fight? (I’m thinking of Asimov.) If robots are used as soldiers, would they ever fight human soldiers? And if robot soldiers are fighting other robot soldiers, is anything really accomplished? I guess you would be costing each other money, and that’s definitely something, but you’d think it would be easier to just gamble at that point. In some ways, I’m reminded of the Star Trek episode in which two countries determined casualties randomly, without ever fighting, and if you “lost” the lottery, you reported to be killed. Deep stuff!
Surprise! Robots are much more popular than expected!
Robots on Tour started off on Friday 8 March 2013 with a scientific symposium at café/bar Sphères in Zurich, Switzerland. Big names from the robotic community shared and discussed their research with the audience. The venue was packed, and many people sat on the floor as there weren’t enough chairs.
Robot machines have been shaping the future of war since the first siege engines appeared in ancient times (I like to think the Trojan Horse was motorized). Now with technology significantly extending our military reach and impact, small surgical-strike war is becoming more assured, though not — I’m glad to say — cheaper. Humans are still the best cost-to-benefit weapons for the battlefield and will be so for quite a while. That offers hope as the personal risks, regardless of ideology, become universally recognized as too damn high.
What also reassures me comes from my years of robot gaming: when battles are fought between autonomous robots, people just don’t care, because no human egos are bolstered or defeated by the outcome. Machines beating on machines holds no emotional connection for us, and that may be where tolerance for robot soldiers hits its ceiling.
What will be horrific is the first time a humanoid soldier machine is broadcast hurting or killing humans in a war setting. When I worked in vision technology, we were asked how a machine would tell the difference between a group of soldiers and a pack of boy scouts from a distance. It couldn’t, which is why human judgement is still the means by which a trigger is pulled. Even so, when that video broadcasts, everyone in the world will know that our place as the dominant species has just become … less so.
But the question comes down to this: who gets blamed when a robot commits an atrocity? Without human frailty on site to take the blame, is it the remote pilots, the generals, the politicians? Sadly, the precedent for blame-free robot conflict is being settled by beltway lawyers now. We are headed for a new cold war in which you’ll be able to legally and blamelessly use a killer-drone app, though you’ll still go to jail for downloading a movie.
Because that’d be immoral.
How will robots shape the future of war? I don’t know. I think that the more important question, however, is: what role should robots have in warfare?
In my answer I have tried (as much as is humanly possible) to put myself in the role of an alien dispassionately analyzing the situation. And when I do, I keep returning to the following conclusion: the best possible outcome for humanity would be for robots to play no part in warfare whatsoever (with the possible exception of purely defensive roles such as defusing mines).
If I were an alien, this is what I would first observe:
And this is what I would then conclude:
Dominant powers are being seduced by the advantages that robots can bring to the battlefield. In the short term, this is a perfectly rational strategy. In the long term, however, this leads to an arms race. Even though a dominant power may be able to maintain its lead by continually developing robotic weapons, the capabilities of its adversaries, while inferior, will co-develop and eventually reach levels that will allow them to inflict catastrophic damage.
Furthermore, asymmetric warfare insidiously erodes the sense of fairness outlined in point 3, with detrimental consequences for both sides. The losing side is disenfranchised, which, coupled with points 1 and 2 above, is extremely destabilizing. The winning side loses its moral compass, and the fabric that holds its society together begins to unravel, leading to home-grown disenfranchisement and destabilization there as well.
The net result of this robotic arms race will be a high-volatility stalemate, with dangerous weapons available to the masses and a lack of social restraint to prevent their indiscriminate use.
If I were an alien, and thus immune to personal and economic factors that could influence my impartiality (such as having a loved one in combat, or being employed by a weapons dealer or manufacturer), I could only conclude that humanity would greatly benefit from imposing strict and far-reaching bans on the use of robotic technology in warfare.
As a human, not only am I skeptical that this will happen, but I admit that my personal views are situation-dependent: if my daughter were in combat, I wouldn’t care about asymmetry or fairness; I would want her to be as safe as possible. I don’t think that this makes me a hypocrite; it just makes me human.