Robohub.org
 

The next big things in robotics

by Alan Winfield
08 July 2014




Last week I attended the launch event for a new NESTA publication called Our work here is done: Visions of a Robot Economy. It was an interesting event, and not at all what I was expecting; in fact I didn't know what to expect. Even though I contributed a chapter to the book I had no idea, until last week, who else had written for it – or the scope of those contributions and the book as a whole. I was very pleasantly surprised: first, because it was great to find myself in such good company – economists, philosophers, historians, (ex-)financiers and all-round deep thinkers – and second, because the volume faces up to some of the difficult societal questions raised by second-wave robotics.

The panel discussion was excellent, and the response by economist Carlota Perez was engaging and thought-provoking – check here for the Storified tweets and pictures. Perhaps the thing that surprised me the most, given the serious economists on the panel (FT, The Economist), was that the panel ended up agreeing that the Robot Economy will necessitate something like a Living Wage. Music to this socialist's ears.

In my contribution, The Next Big Things in Robotics (pages 38-44), I do a bit of near-future gazing and suggest four aspects of robotics that will, I think, be huge. They are:

  • Wearable Robotics
  • Immersive Teleoperated Robots
  • Driverless Cars
  • Soft Robotics

To see why I chose these – and to read the other great articles – please download the book. Let me know if you disagree with my choices, or if you'd like to suggest other Next Big Things in robotics. I end my chapter with a section called What's not coming soon: super intelligent robots:

“My predicted things that will be really big in robotics don’t need to be super intelligent. Wearable robots will need advanced adaptive (and very safe and reliable) control systems, as well as advanced neural-electronics interfaces, and these are coming. But ultimately it’s the human wearing the robot who is in charge. The same is true for teleoperated robots: again, greater low-level intelligence is needed, so that the robot can operate autonomously some of the time but ask for help when it can’t figure out what to do next. But the high-level intelligence remains with the human operator and – with advanced immersive interfaces as I have suggested – human and robot work together seamlessly. The most autonomous of the next big things in robotics is the driverless car, but again the car doesn’t need to be very smart. You don’t need to debate philosophy with your car – just trust it to take you safely from A to B.”





Alan Winfield is Professor in robotics at UWE Bristol. He communicates about science on his personal blog.






©2021 - ROBOTS Association