The Singapore Ministry of Defence convened the sixth edition of the Island Forum in November 2014 to explore the emerging future of artificial intelligence and robotics, developments that could have enormous implications for societies, economies, culture, and security. Representatives from the Ministry and the Singapore Armed Forces were joined by some of the world’s leading roboticists, economists, military strategists, and ethicists, who shared their insights in deep conversations over the two-day event. This international, multi-disciplinary group addressed how these emerging technologies could disrupt labour markets; how human-robot interaction might evolve; how robotic agents might shape a new kind of conflict; and the role of autonomy and ethics across these themes. The Island Forum is held in Singapore every two to three years and focusses on a wide range of key issues.
The Egyptian writer Naguib Mahfouz said you can tell whether a man is clever by his answers, and whether a man is wise by his questions. In this spirit, the Island Forum was a superb prompt for questioning the impact of advances in robotics and artificial intelligence (AI). A frisson of excitement accompanies any discussion of robotics and AI; we start with questions on what’s new, temper our expectations with what might disappoint us, try to anticipate what might surprise us for better or worse, and end with some takeaway questions.
Dr Andrew Ng, founder of the Google Brain project and now chief scientist at Baidu, observed that the performance of traditional learning algorithms plateaus beyond a certain dataset size. Deep learning, by contrast, continues to improve as datasets grow. This improvement opens up ways to interface with the Internet beyond conventional text, such as speech and images. There is ready demand – 10% of Baidu’s search queries today are made by speech – and deep learning may one day enable speech and image “killer apps” that let users unfamiliar with technology interact with it. The audience had a glimpse of Baidu’s deep learning products: the Baidu CoolBox, a speech-based music streaming service for the home, and Baidu Eye, a device to rival Google Glass that analyses images around the wearer.
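As a toy illustration of this scaling contrast (our own sketch, not Baidu’s systems; the synthetic task, model sizes and training slices are assumptions for illustration only), one can compare a linear classifier with a small neural network on progressively larger slices of a deliberately non-linear dataset:

```python
# Illustrative sketch only (not Baidu's methodology): a linear model
# vs. a small neural network as the training set grows.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(20_000, 10))
# A deliberately non-linear labelling rule, so that extra data helps
# the deeper model more than the linear one.
y = ((X[:, 0] * X[:, 1] + X[:, 2] ** 2 - X[:, 3]) > 0).astype(int)
X_test, y_test = X[15_000:], y[15_000:]   # held-out evaluation slice

for n in (500, 2_000, 8_000, 15_000):
    linear = LogisticRegression(max_iter=1_000).fit(X[:n], y[:n])
    deep = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                         random_state=0).fit(X[:n], y[:n])
    print(f"n={n:>6}  linear={linear.score(X_test, y_test):.3f}"
          f"  deep={deep.score(X_test, y_test):.3f}")
```

On a task like this, the linear model’s test accuracy tends to flatten early while the deeper network keeps improving as more data arrives – the pattern Ng described.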
“Deep” learning is not “deep” reasoning, however, and the kind of AI favoured by science fiction writers is still decades away. Similarly, robots working alongside humans (“co-bots”) are emerging in structured environments like factory floors, warehouses, hospitals and airports, but social environments are far less structured; homes and offices may be among the last spaces to see automation. Several speakers worried that the current robotics and AI hype echoes past fads (e.g. the Segway) and will hurt the industry’s prospects when it inevitably gives way to disappointment.
Joseph Dyer, a retired US Navy vice admiral and former senior executive at iRobot, observed that robots raise productivity in ways that favour developed economies, which offer the combination of market demand, manufacturing/infocomm infrastructure and skillsets. Emerging economies that lack this combination can be badly hit, leading to de-industrialisation. Baidu’s Andrew Ng put it slightly differently: economies with the requisite hardware and software ecosystem for robots are well positioned for growth.
On a different track, attorney Tim Hwang of Pacific Social works with social bots – lines of code deployed on social networks like Twitter and Facebook (what he terms “infrastructures of influence”) – to steer the flow of information and alter the social landscape and the behaviour of users. Bots were active in promoting candidates in the 2012 Mexican presidential election, though they were, at the time, crude and easily detectable. Even so, these imaginary citizens of the Internet have surprising power to make celebrities, presidential candidates and companies look more popular than they really are, swaying public opinion and influencing the social and political agenda. Hwang observed that we handle viruses far better than we handle spam: humans still fall for spam, and sophisticated social bots will be harder to spot still. Would a social group even be able to tell that it had been “hacked”? As organisations adopt more sophisticated and subtle bot strategies, they will be able to shape the social landscape with less risk of detection and controversy.
James Kuffner of Google observed how cloud robotics enables “robotsourcing”, which, like human crowdsourcing, can help scale hard semantic and quality-control problems globally. Markus Waibel, of Verity Studios and ETH Zurich, called this the evolution of individual robots towards specialisation and coordination. Shared infrastructure enables complex cooperation and integration, allowing collective learning to overcome the limits of any single machine. For example, a robot may break down before logging the (say) 10,000 hours of experience needed to master a problem, but cloud robotics overcomes this: 10,000 robots contributing an hour each can pool their experience and approach mastery together, as the sketch below illustrates.
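To make the arithmetic concrete, here is a toy sketch (our own construction, not a system presented at the Forum; the grasping task, success rate and trial rate are hypothetical): each robot uploads one “hour” of trial outcomes to a shared store, and the fleet’s pooled estimate is far more reliable than what any single robot could learn alone.

```python
# Toy illustration of pooled fleet experience (hypothetical task and
# figures): 10,000 robots x 1 hour each vs. one robot's single hour.
import random

TRUE_SUCCESS_RATE = 0.73   # assumed ground truth for some grasp skill
TRIALS_PER_HOUR = 60       # assumed trial rate per robot

def robot_hour(rng):
    """One robot's hour of trials: returns (successes, attempts)."""
    successes = sum(rng.random() < TRUE_SUCCESS_RATE
                    for _ in range(TRIALS_PER_HOUR))
    return successes, TRIALS_PER_HOUR

rng = random.Random(0)
pooled_successes = pooled_attempts = 0
for _ in range(10_000):            # 10,000 robots, one hour each
    s, n = robot_hour(rng)
    pooled_successes += s
    pooled_attempts += n

print(f"one robot, one hour: {robot_hour(rng)[0] / TRIALS_PER_HOUR:.2f}")
print(f"fleet, pooled      : {pooled_successes / pooled_attempts:.4f}")
```

The single robot’s one-hour estimate is noisy, while the pooled fleet estimate converges on the true rate – the same logic extends to pooling richer experience, such as training data for shared models.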
Carl Frey of Oxford University argued that while the prevailing worry is that robotics and AI will cause technological unemployment, robot-human interaction is the key to robotics taking off in society over the next ten years, e.g. through unmanned/manned teaming. Robots would then be companions, augmenting rather than replacing workers. Human behaviour recognition, which would let robots anticipate human actions, remains understudied. For example, autonomous vehicles that take into account the range of possible near-future actions (a few seconds out) of human pedestrians may be safer; a simple sketch of the idea follows.
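As a minimal sketch of that idea (our own illustration; the constant-velocity model and the noise figures are assumptions, not Frey’s method), a vehicle might sample the range of positions a pedestrian could plausibly occupy a few seconds ahead:

```python
# Minimal sketch (illustrative only): propagate a pedestrian's state
# under an assumed constant-velocity model, with speed and heading
# noise, to bound their plausible positions a few seconds out.
import math
import random

def predict_positions(x, y, speed, heading, horizon_s=3.0,
                      n_samples=200, seed=0):
    """Sample possible future positions (metres) of a pedestrian."""
    rng = random.Random(seed)
    positions = []
    for _ in range(n_samples):
        s = max(0.0, rng.gauss(speed, 0.3))        # speed noise (m/s)
        h = rng.gauss(heading, math.radians(20))   # heading noise
        positions.append((x + s * horizon_s * math.cos(h),
                          y + s * horizon_s * math.sin(h)))
    return positions

# Pedestrian at the kerb, walking 1.4 m/s roughly across the road.
futures = predict_positions(x=0.0, y=0.0, speed=1.4,
                            heading=math.radians(90))
xs, ys = zip(*futures)
print(f"3 s envelope: x in [{min(xs):.1f}, {max(xs):.1f}] m, "
      f"y in [{min(ys):.1f}, {max(ys):.1f}] m")
```

A planner that treats the whole sampled envelope, rather than a single predicted point, as space the pedestrian might occupy can brake or steer more conservatively.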
Ron Arkin of Georgia Tech shared how robots, building on deep learning advances in speech and image interaction, can interact with humans in subtle and sophisticated ways, such as learning our body language and how to live with us. Humans can fall in love with what is essentially a chunk of metal, and this human-robot relationship can help bridge relationships between humans, for example between a caregiver and a patient. Joseph Dyer observed that if this uncanny valley were crossed, robots could augment humans and extend the productive years of an ageing population.
Robots will become part of tomorrow’s “hard power”, and bots part of tomorrow’s “soft power”. Will armed conflict be more likely? Possibly: with fewer humans involved, unmanned conflict could ironically lead to higher levels of conflict and destruction, because societies and governments may feel disconnected from fighting that involves no human loss. Will conflict be more likely to escalate? Yes and no, depending on whether humans are involved. There may be a persistent low level of unmanned conflict between robots and bots – between tomorrow’s hard and soft powers. Will conflict be more unequal? In one scenario, more technologically advanced societies would be more willing to launch unmanned operations against less capable societies; but, as we have seen elsewhere, militaries with technical superiority do not necessarily overcome highly motivated asymmetric forces.
The bigger question governments will face is how to regulate the degree of autonomy allowed in military and non-military service robots. To what degree should autonomous weapons systems be designed with a “human in the loop”? Can robots autonomously decide to take human life? The spectre of “killer robots” waging war without human intervention or guidance, popularised by the media, could spark a political backlash that constrains the future actions of governments. A UN moratorium on autonomous weapons systems is already under discussion. These ethical questions must be posed before such robots are ever designed or allowed into service.
Authors LEE Chor Pharn and Aaron MANIAM were guests of the Island Forum.