The ability of humans to generalize their knowledge and experiences to new situations is remarkable, yet poorly understood. For example, imagine a human driver who has only ever driven around their city in clear weather. Even though they have never encountered true diversity in driving conditions, they have acquired the fundamental skill of driving and can adapt reasonably quickly to driving in neighboring cities, in rainy or windy weather, or even to driving a different car, without much practice or additional driving lessons. While humans excel at adaptation, building intelligent systems with common-sense knowledge and the ability to quickly adapt to new situations is a long-standing problem in artificial intelligence.
Most of the ocean is unknown. Yet we know that it holds some of the most challenging environments on the planet. Understanding the ocean in its totality is, as the United Nations has proclaimed, a key component of the sustainable development of human activities and of the mitigation of climate change. We are glad to share our perspective on the role of soft robots in ocean exploration and offshore operations at the outset of the Ocean Decade (2021-2030).
Minimally invasive surgeries, in which surgeons gain access to internal tissues through natural orifices or small external incisions, are common practice in medicine. They are performed for problems as diverse as delivering stents through catheters, treating abdominal complications, and performing transnasal operations at the skull base in patients with neurological conditions.
Schools of fish exhibit complex, synchronized behaviors that help them find food, migrate, and evade predators. No one fish or sub-group of fish coordinates these movements, nor do fish communicate with each other about what to do next. Rather, these collective behaviors emerge from so-called implicit coordination — individual fish making decisions based on what they see their neighbors doing.
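The article itself does not include any code, but as a purely illustrative aside, the kind of implicit coordination described above can be sketched with local rules in the spirit of Reynolds' classic boids model: each simulated fish adjusts its heading using only the neighbors it can "see", and school-like motion emerges without any leader. The rules, weights, and perception radius below are arbitrary choices made for illustration, not the researchers' model.

```python
# Illustrative sketch of implicit coordination: each simulated fish steers using
# only locally visible neighbors. Parameters are arbitrary, chosen for the demo.
import numpy as np

rng = np.random.default_rng(0)
N, RADIUS, STEP = 50, 0.2, 0.01

pos = rng.uniform(0, 1, size=(N, 2))                  # positions in the unit square
vel = rng.normal(size=(N, 2))
vel /= np.linalg.norm(vel, axis=1, keepdims=True)     # unit headings

def update(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        # Each fish only perceives neighbors within its local radius.
        d = np.linalg.norm(pos - pos[i], axis=1)
        neighbors = (d < RADIUS) & (d > 0)
        if not neighbors.any():
            continue
        align = vel[neighbors].mean(axis=0)              # match neighbors' heading
        cohere = pos[neighbors].mean(axis=0) - pos[i]    # move toward neighbors
        separate = (pos[i] - pos[neighbors]).sum(axis=0) # avoid crowding
        steer = 1.0 * align + 0.5 * cohere + 1.5 * separate
        new_vel[i] = vel[i] + 0.1 * steer
        new_vel[i] /= np.linalg.norm(new_vel[i])
    return (pos + STEP * new_vel) % 1.0, new_vel         # wrap around the square

for _ in range(200):
    pos, vel = update(pos, vel)
```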
How do honeybees land on flowers or avoid obstacles? One would expect such questions to be mostly of interest to biologists. However, the rise of small electronics and robotic systems has also made them relevant to robotics and Artificial Intelligence (AI). For example, small flying robots are extremely restricted in terms of the sensors and processing that they can carry onboard. If these robots are to be as autonomous as the much larger self-driving cars, they will have to use an extremely efficient type of artificial intelligence – similar to the highly developed intelligence possessed by flying insects.

The UK Robotics Growth Partnership (RGP) aims to set the conditions for success to empower the UK to be a global leader in Robotics and Autonomous Systems, whilst delivering a smarter, safer, more prosperous, sustainable and competitive UK. The aim is for smart machines to become ubiquitous, woven into the fabric of society, in every sector, every workplace, and at home. If done right, this could lead to increased productivity and improved quality of life. It could enable us to meet Net Zero targets and support workers as their roles transition away from menial tasks.
By Leah Burrows / SEAS communications
Newly engineered slinky-like strain sensors for textiles and soft robotic systems survive washing machines, cars and hammers.
By Caitlin Dawson
USC researchers have developed a method that could allow robots to learn new tasks, like setting a table or driving a car, from observing a small number of demonstrations.
A few weeks ago I gave a short paper at the excellent International Conference on Robot Ethics and Standards (ICRES 2020), outlining a case study in Ethical Risk Assessment – see our paper here. Our chosen case study is a robot teddy bear, inspired by one of my favourite movie robots: Teddy, in A. I. Artificial Intelligence.
By Conn Hastings, science writer
Controlling a swarm of robots to paint a picture sounds like a difficult task. However, a new technique allows an artist to do just that, without worrying about providing instructions for each robot. Using this method, the artist can assign different colors to specific areas of a canvas, and the robots will work together to paint the canvas. The technique could open up new possibilities in art and other fields.
By Oleh Rybkin, Danijar Hafner and Deepak Pathak
To operate successfully in unstructured open-world environments, autonomous intelligent agents need to solve many different tasks and learn new tasks quickly. Reinforcement learning has enabled artificial agents to solve complex tasks both in simulation and in the real world. However, it requires collecting large amounts of experience in the environment for each individual task.
February 24, 2021
Need help spreading the word?
Join the Robohub crowdfunding page and increase the visibility of your campaign