
Hewlett-Packard Professor of Electronic Engineering, UWE Bristol
Visiting Professor, Department of Electronics, University of York
Director of the UWE Science Communication Unit
EPSRC Senior Media Fellow

I am deeply interested in mobile robots for two reasons: (1) they are complex and potentially useful machines that embody just about every design challenge and discipline there is, and (2) robots allow us to address some deep questions about life, emergence, culture and intelligence in a radically new way: by building models. Thus robotics is, for me, both engineering and experimental philosophy. I'm committed to the widest possible dissemination of research and ideas in science, engineering and technology, and I believe that robots provide us with a wonderful vehicle for public engagement. Actually I would go a stage further and argue that intelligent robots will become ubiquitous in the near future, and we therefore need to start a dialogue now about the ethical and moral questions that will arise.



by   -   July 31, 2018

A few weeks ago we had the kick-off meeting, in York, of our new four-year EPSRC-funded project Autonomous Robot Evolution (ARE): cradle to grave. We – Andy Tyrrell and Jon Timmis (York), Emma Hart (Edinburgh Napier), Gusti Eiben (Free University of Amsterdam) and myself – are all super excited. We’ve been trying to win support for this project for five years or so, and have only now succeeded. This is a project we’ve been thinking and writing about for a long time, so to have the opportunity to try out our ideas for real is wonderful.

by   -   July 16, 2018

Readers of this blog will know that I’ve become very excited by the potential of robots with simulation-based internal models in recent years. So far we’ve demonstrated their potential in simple ethical robots and as the basis for rational imitation. Our most recent publication examines the potential of robots with simulation-based internal models for safety. It’s not hard to see why: the ability to model and predict the consequences of both your own actions and those of others helps you to navigate the world more safely than you could without it.

Our paper Simulation-Based Internal Models for Safer Robots demonstrates the value of anticipation in what we call the corridor experiment. Here a smart robot (equipped with a simulation-based internal model which we call a consequence engine) must navigate to the end of a corridor while maintaining a safe space around it at all times, despite five other robots moving randomly in the corridor – in much the same way you and I might have to navigate down a busy office corridor while others are coming in the opposite direction.

Here is the abstract from our paper:

In this paper, we explore the potential of mobile robots with simulation-based internal models for safety in highly dynamic environments. We propose a robot with a simulation of itself, other dynamic actors and its environment, inside itself. Operating in real time, this simulation-based internal model is able to look ahead and predict the consequences of both the robot’s own actions and those of the other dynamic actors in its vicinity. Hence, the robot continuously modifies its own actions in order to actively maintain its own safety while also achieving its goal. Inspired by the problem of how mobile robots could move quickly and safely through crowds of moving humans, we present experimental results which compare the performance of our internal simulation-based controller with a purely reactive approach as a proof-of-concept study for the practical use of simulation-based internal models.

So, does it work? Thanks to some brilliant experimental work by Christian Blum the answer is a resounding yes. The best way to understand what’s going on is with the wonderful gif animation of one experimental run below. The smart robot (blue) starts at the left and has the goal of safely reaching the right-hand end of the corridor – its actual path is also shown in blue. Meanwhile five (red) robots are moving randomly (including bouncing off walls) and their actual paths are also shown in red; these robots are equipped only with simple obstacle avoidance behaviours. The larger blue circle shows blue’s ‘attention radius’ – to reduce computational effort blue will only model red robots within this radius. The yellow paths in front of the red robots within blue’s attention radius show blue’s predictions of how those robots will move (taking into account collisions with the corridor walls, with blue and with each other). The light blue projection in front of blue shows which of blue’s 34 possible next actions, each internally modelled, is actually chosen as the next action (which, as you will see, is sometimes standing still).
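
For readers who like to see the shape of the idea in code, here is a minimal, self-contained sketch of one consequence-engine cycle in Python. Everything in it is an illustrative assumption rather than the paper’s implementation: nearby robots are predicted with a simple constant-velocity model (the real consequence engine simulates wall bounces and mutual collisions), the numerical parameters are invented, and the scoring rule that prefers safe actions first and goal progress second is my own stand-in.

```python
import numpy as np

# Illustrative parameters only - not the values used in the paper.
ATTENTION_RADIUS = 1.5   # metres: only robots inside this are modelled
SAFETY_RADIUS = 0.5      # metres: clearance the smart robot tries to keep
HORIZON = 10             # prediction steps per internal simulation
DT = 0.1                 # seconds per prediction step
SPEED = 0.3              # metres/second assumed for the smart robot

def predict_others(others, steps):
    """Constant-velocity prediction for each (position, velocity) pair."""
    return [[p + v * DT * t for t in range(1, steps + 1)] for p, v in others]

def select_action(position, others, goal):
    """One consequence-engine cycle: internally simulate each candidate
    action and return the velocity whose predicted outcome is safest,
    breaking ties by progress towards the goal."""
    # Attention radius: ignore robots too far away to matter this cycle.
    nearby = [(p, v) for p, v in others
              if np.linalg.norm(p - position) < ATTENTION_RADIUS]
    trajectories = predict_others(nearby, HORIZON)

    # 33 headings plus standing still = 34 candidate next actions.
    candidates = [np.zeros(2)] + [
        SPEED * np.array([np.cos(a), np.sin(a)])
        for a in np.linspace(0.0, 2 * np.pi, 33, endpoint=False)]

    best, best_key = None, None
    for v in candidates:
        # Internal simulation of our own motion under this action.
        own = [position + v * DT * t for t in range(1, HORIZON + 1)]
        # Smallest predicted distance to any modelled robot at any step.
        clearance = min((np.linalg.norm(own[t] - traj[t])
                         for traj in trajectories
                         for t in range(HORIZON)), default=np.inf)
        safe = clearance >= SAFETY_RADIUS
        progress = -np.linalg.norm(own[-1] - goal)
        key = (safe, progress)   # safety dominates, then goal progress
        if best_key is None or key > best_key:
            best, best_key = v, key
    return best

# Example: smart robot at the origin, one red robot approaching head-on.
action = select_action(np.array([0.0, 0.0]),
                       [(np.array([1.0, 0.0]), np.array([-0.2, 0.0]))],
                       goal=np.array([5.0, 0.0]))
```

The shape of the loop is the important thing: every candidate action gets its own internal simulation before anything is executed, and only robots inside the attention radius are modelled, which is what keeps the computational cost bounded.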

What do the results show us? Christian ran lots of trials – 88 simulations and 54 real robot experiments – over four experiments: (1) the baseline in simulation, in which the blue robot has only a simple reactive collision avoidance behaviour, (2) the baseline with real robots, (3) using the consequence engine (CE) in the blue robot in simulation, and (4) using the consequence engine in the blue robot with real robots. In the results below, (a) shows the time taken for the blue robot to reach the end of the corridor, (b) shows the distance the blue robot covers while reaching the end of the corridor, (c) shows the “danger ratio” experienced by the blue robot, and (d) shows the number of consequence engine runs per timestep in the blue robot. The danger ratio is the percentage of the run time that another robot is within the blue robot’s safety radius.
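
As a concrete illustration, the danger ratio is straightforward to compute from logged trajectories. The sketch below is my own, not code from the paper; the array shapes and the default safety radius are assumptions.

```python
import numpy as np

def danger_ratio(own_path, other_paths, safety_radius=0.5):
    """Percentage of timesteps at which any other robot is inside the
    smart robot's safety radius.

    own_path:    (T, 2) array of the smart robot's positions
    other_paths: (N, T, 2) array, one row of positions per other robot
    """
    # Distance from the smart robot to every other robot at every timestep.
    dists = np.linalg.norm(other_paths - own_path[None, :, :], axis=-1)
    unsafe = (dists < safety_radius).any(axis=0)  # per-timestep danger flag
    return 100.0 * unsafe.mean()                  # percentage of run time
```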

For a relatively small cost in additional run time and distance covered, panels (a) and (b), the danger ratio is very significantly reduced from a mean value of ~20% to a mean value of zero, panel (c). Of course there is a computational cost, and this is reflected in panel (d); the baseline experiment has no consequence engine and hence runs no simulations, whereas the smart robot runs an average of between 8 and 10 simulations per timestep. This is exactly what we would expect: predicting the future clearly incurs a computational overhead.


Full paper reference:
Blum C, Winfield AFT and Hafner VV (2018) Simulation-Based Internal Models for Safer Robots. Front. Robot. AI 4:74. doi: 10.3389/frobt.2017.00074


Acknowledgements:
I am indebted to Christian Blum, who programmed the robots, set up the experiment and obtained the results outlined here. Christian lead-authored the paper, which was also co-authored by my friend and research collaborator Verena Hafner, who was Christian’s PhD advisor.

by   -   June 21, 2018

Here are the slides from my York Festival of Ideas keynote yesterday, which introduced the festival focus day Artificial Intelligence: Promises and Perils.

Last week my colleague Dieter Vanderelst presented our paper The Dark Side of Ethical Robots at AIES 2018 in New Orleans.


This blogpost is a round-up of the various sets of ethical principles of robotics and AI that have been proposed to date, ordered by date of first publication. The principles are presented here (in full or abridged) with notes and references but without commentary. If there are any (prominent) ones I’ve missed, please let me know.

Here are the slides I gave recently as a member of the panel Sci-Fi Dreams: How visions of the future are shaping the development of intelligent technology, at the Centre for the Future of Intelligence 2017 conference. I presented three short stories about robot stories.

NAO robot. Photo courtesy: Paul Bremner/UWE

I was asked to write a short op-ed on the European Parliament Law Committee’s recommendations on civil law rules for robotics. In the end, the piece didn’t get published, so I am posting it here:

Last week I had the pleasure of debating the question “does AI pose a threat to society?” with friends and colleagues Christian List, Maja Pantic and Samantha Payne. The event was organised by the British Academy and brilliantly chaired by the Royal Society’s director of science policy Claire Craig. Here follows my opening statement:

Part 2: Autonomous Systems and Transparency

In my previous post I argued that a wide range of AI and Autonomous Systems (from now on I will just use the term AS as shorthand for both) should be regarded as Safety Critical. I include both autonomous software AI systems and hard (embodied) AIs such as robots, drones and driverless cars. Many will be surprised that I include in the soft AI category apparently harmless systems such as search engines. Of course no-one is seriously inconvenienced when Amazon makes a silly book recommendation, but consider very large groups of people. If a truth (such as global warming) is – because of accidental or wilful manipulation – presented as false, and that falsehood is believed by a very large number of people, then serious harm to the planet (and we humans who depend on it) could result.

With machine intelligence emerging as an essential tool in many aspects of modern life, Alan Winfield discusses autonomous systems, safety and regulation.


We tend to assume that automation is a process that continues – that once some human activity has been automated there’s no going back. That automation sticks. But, as Paul Mason pointed out in a recent column, that assumption is wrong.

Alan Winfield introduces the recently published IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems…


When I was interviewed on BBC Radio 4’s Today programme in 2014, Justin Webb’s final question was, “If you can make an ethical robot, doesn’t that mean you could make an unethical robot?” The answer, of course, is yes. But at the time, I didn’t realise quite how easy it is to transform a robot from ethical to unethical. In a new paper, we show how.


Ever since Elon Musk’s recent admission that he’s a simulationist, several people have asked me what I think of the proposition that we are living inside a simulation. My view is very firmly that the Universe we are right now experiencing is real. Here are my reasons.

To design a gendered robot is a deception. Robots cannot have a gender in any meaningful sense. To impose a gender on a robot, either by the design of its outward appearance or by programming gender-stereotypical behaviour, can have no purpose other than deception – to make humans believe that the robot has a gender, or gender-specific characteristics.