
Haptics and Virtual Interactions

University of Southern California


interview by Shihan Lu   -   February 17, 2020

In this episode, Shihan Lu interviews Dr. Heather Culbertson, Assistant Professor in the Computer Science Department at the University of Southern California, about her work in haptics. Dr. Culbertson discusses data-driven realistic texture modeling and rendering, haptic technologies for social touch, the combination of haptics and robots, and the expectations and obstacles for haptics over the next five years.

University of Washington         


interview by Lauren Klein   -   February 2, 2020

In this episode, Lauren Klein interviews Human-Robot Interaction researcher Patrícia Alves-Oliveira. Alves-Oliveira tells us about the upcoming RSS Pioneers workshop at the 2020 Robotics: Science and Systems Conference; the workshop brings senior PhD students and postdoctoral researchers together to collaborate and discuss their work with distinguished members of the robotics field. She also describes her own research designing robots to encourage creativity in children.

Australian Centre for Robotic Vision         


interview by Lilly   -   January 29, 2020


In this episode, Lilly interviews Juxi Leitner, a Postdoctoral Research Fellow at the Queensland University of Technology and Co-Founder/CEO of LYRO Robotics. LYRO spun out of Team ACRV's win of the 2017 Amazon Robotics Challenge. Here Juxi discusses deep learning, computer vision, intent in grasping and manipulation, and bridging the gap between abstract and low-level understandings of the world. He also discusses why robotics is really an integration field, the Amazon and other robotics challenges, and what's important to consider when spinning an idea into a company.

by   -   January 26, 2020

By Glen Berseth

All living organisms carve out environmental niches within which they can maintain relative predictability amidst the ever-increasing entropy around them (1), (2). Humans, for example, go to great lengths to shield themselves from surprise — we band together in millions to build cities with homes, supplying water, food, gas, and electricity to control the deterioration of our bodies and living spaces amidst heat and cold, wind and storm. The need to discover and maintain such surprise-free equilibria has driven great resourcefulness and skill in organisms across very diverse natural habitats. Motivated by this, we ask: could the motive of preserving order amidst chaos guide the automatic acquisition of useful behaviors in artificial agents?
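A minimal sketch of what such a surprise-minimizing objective could look like in a reinforcement-learning loop, assuming a simple running Gaussian density model over states (the model and reward definition here are illustrative assumptions, not the authors' exact method):

```python
import numpy as np

class GaussianStateModel:
    """Running Gaussian estimate of visited states; log-density = negative surprise."""
    def __init__(self, dim, eps=1e-6):
        self.n = 0
        self.mean = np.zeros(dim)
        self.var = np.ones(dim)
        self.eps = eps

    def update(self, s):
        # Incremental mean/variance update over the states seen so far.
        self.n += 1
        delta = s - self.mean
        self.mean += delta / self.n
        self.var += (delta * (s - self.mean) - self.var) / self.n

    def log_prob(self, s):
        var = self.var + self.eps
        return -0.5 * np.sum((s - self.mean) ** 2 / var + np.log(2 * np.pi * var))

def surprise_minimizing_reward(model, state):
    # The agent is rewarded for visiting states its own model already finds likely,
    # so policies that keep the environment stable and predictable score well.
    reward = model.log_prob(state)
    model.update(state)
    return reward

# Usage inside a rollout (hypothetical):
#   model = GaussianStateModel(dim=obs.shape[0])
#   r = surprise_minimizing_reward(model, obs)
```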

by   -   January 26, 2020

Like the city that hosts it, the Consumer Electronics Show (CES) is full of noise on the show floor. Sifting through the lights, sounds and people can be an arduous task even for the most experienced CES attendees. Hidden past the North Hall of the Las Vegas Convention Center (LVCC) is a walkway to a tech oasis housed in the Westgate Hotel. This new area hosting SmartCity/IoT innovations is reminiscent of the old Eureka Park, complete with folding tables and ballroom carpeting. The fact that such enterprises require their own area separate from the main halls of the LVCC and the startup pavilions of the Sands Hotel is an indication of how urbanization is being redefined by artificial intelligence.

by   -   January 23, 2020

An AI model developed at MIT and Qatar Computing Research Institute that uses only satellite imagery to automatically tag road features in digital maps could improve GPS navigation, especially in countries with limited map data.
Image: Google Maps/MIT News

By Rob Matheson

A model invented by researchers at MIT and Qatar Computing Research Institute (QCRI) that uses satellite imagery to tag road features in digital maps could help improve GPS navigation.  
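To make the idea concrete, here is a toy sketch of a network that predicts a single road attribute (say, lane count) from a satellite image tile; the architecture and names are hypothetical and far simpler than the researchers' actual model:

```python
import torch
import torch.nn as nn

class RoadAttributeNet(nn.Module):
    """Toy CNN that predicts a road attribute (e.g. 1-4 lanes) from a satellite tile."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, tile):
        # tile: (batch, 3, H, W) RGB satellite patch -> logits over attribute classes
        return self.head(self.features(tile))

# Usage (hypothetical): logits = RoadAttributeNet()(torch.randn(8, 3, 64, 64))
```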

interview by Audrow Nash   -   January 10, 2020


In this episode, our interviewer Audrow Nash speaks to Gil Weinberg, Professor in Georgia Tech’s School of Music and the founding director of the Georgia Tech Center for Music Technology. Weinberg leads a research lab called the Robotic Musicianship group, which focuses on developing artificial creativity and musical expression for robots and on augmenting human musicians. Weinberg discusses several of his improvisational robots and how they work, including Shimon, a multi-armed robot marimba player, as well as his work on prosthetic devices for musicians.

by   -   December 24, 2019

Thanks to all those who sent us their holiday videos. Here’s a selection of 20+ videos to get you into the spirit this season.

interview by   -   December 22, 2019

Welcome to the 300th episode of the Robohub podcast! You might not know that the podcast has been going in one form or another for 14 years. Originally called “Talking Robots,” the podcast was started in 2006 by Dario Floreano and several of his PhD students at EPFL in Switzerland, including Sabine Hauert, Peter Dürr, and Markus Waibel, who are all still involved in Robohub today. Since then, the podcast team has become international, with most of its interviewers based in the United States and Europe; all of its members are volunteers.

To celebrate 300 episodes of our podcast, we thought we would catch up with some of our former and current volunteers from around the world to find out why and how they got involved in the podcast, how their involvement has impacted their lives and careers, and what they’re doing in their day jobs now.

by   -   December 18, 2019

A group of EPFL researchers have developed a foldable device that can fit in a pocket and can transmit touch stimuli when used in a human-machine interface.

When browsing an e-commerce site on your smartphone, or a music streaming service on your laptop, you can see pictures and hear sound snippets of what you are going to buy. But sometimes it would be great to touch it too – for example to feel the texture of a garment, or the stiffness of a material. The problem is that there are no miniaturized devices that can render touch sensations the way screens and loudspeakers render sight and sound, and that can easily be coupled to a computer or a mobile device.

by   -   December 18, 2019

A researcher’s hand hovers over the water’s surface in the Intelligent Towing Tank (ITT), an automated experimental facility guided by active learning to explore vortex-induced vibrations (VIVs), revealing a path to accelerated scientific discovery.
Image: Dixia Fan and Lily Keyes/MIT Sea Grant

By Lily Keyes/MIT Sea Grant

In its first year of operation, the Intelligent Towing Tank (ITT) conducted about 100,000 total experiments, essentially completing the equivalent of a PhD student’s five years’ worth of experiments in a matter of weeks.
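The kind of active-learning loop that can drive such a facility can be sketched as follows, assuming a Gaussian-process surrogate that always queries the most uncertain experiment next (the surrogate and acquisition rule are illustrative assumptions, not the ITT's published algorithm):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_experiment(x):
    # Stand-in for one physical towing-tank trial at parameter x (hypothetical).
    return np.sin(3 * x) + 0.1 * np.random.randn()

# Candidate experimental parameters, e.g. reduced velocities to test.
candidates = np.linspace(0, 2, 200).reshape(-1, 1)
X, y = [[0.0]], [run_experiment(0.0)]  # seed with one trial

gp = GaussianProcessRegressor()
for _ in range(20):
    gp.fit(np.array(X), np.array(y))
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]   # query where the model is least certain
    X.append(x_next.tolist())
    y.append(run_experiment(x_next.item()))
```

Each iteration fits the surrogate to all trials so far and picks the next experiment where predictive uncertainty is highest, which is how an automated tank can cover a parameter space far faster than a fixed grid of runs.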

interview by Audrow Nash   -   December 9, 2019

From Robert the Robot, 1950s toy ad

In this episode, we take a closer look at the effect of novelty in human-robot interaction. Novelty is the quality of being new or unusual.

The typical view is that while something is new, or “a novelty”, it will initially make us behave differently than we normally would. But over time, as the novelty wears off, we will likely return to our regular behaviors. For example, a new robot may cause a person to behave differently at first, as it’s introduced into the person’s life, but after some time the robot won’t be as exciting, novel, and motivating, and the person might return to their previous behavioral patterns, interacting less with the robot.

To find out more about the concept of novelty in human-robot interaction, our interviewer Audrow caught up with Catharina Vesterager Smedegaard, a PhD student at Aarhus University in Denmark whose field of study is philosophy.

Catharina sees novelty differently from how we typically see it: she thinks of it as projecting what we don’t know onto what we already know, which has implications for how human-robot interactions are designed and researched. She also speaks about her experience in philosophy more generally and gives us advice on philosophical thinking.

by   -   December 7, 2019


That’s right! You better not run, you better not hide, you better watch out for brand new robot holiday videos on Robohub!

by   -   December 7, 2019

By Aviral Kumar

One of the primary factors behind the success of machine learning approaches in open world settings, such as image recognition and natural language processing, has been the ability of high-capacity deep neural network function approximators to learn generalizable models from large amounts of data. Deep reinforcement learning methods, however, require active online data collection, where the model actively interacts with its environment. This makes such methods hard to scale to complex real-world problems, where active data collection means that large datasets of experience must be collected for every experiment – this can be expensive and, for systems such as autonomous vehicles or robots, potentially unsafe. In a number of domains of practical interest, such as autonomous driving, robotics, and games, there exist plentiful amounts of previously collected interaction data, which consists of informative behaviors that are a rich source of prior information. Deep RL algorithms that can utilize such prior datasets will not only scale to real-world problems, but will also lead to solutions that generalize substantially better. A data-driven paradigm for reinforcement learning will enable us to pre-train and deploy agents capable of sample-efficient learning in the real world.
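As a bare-bones illustration of learning from a static dataset, the sketch below fits a Q-function to logged transitions with no environment interaction. The network, data format (observations stored as torch tensors), and hyperparameters are assumptions for illustration; real offline RL methods, including the one this post discusses, add constraints to handle out-of-distribution actions:

```python
import random
import torch
import torch.nn as nn

def offline_q_learning(dataset, obs_dim, n_actions, gamma=0.99, steps=5000, batch=64):
    """Fit a Q-function from a fixed list of logged transitions
    (obs, action, reward, next_obs, done); the environment is never queried."""
    q = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
    opt = torch.optim.Adam(q.parameters(), lr=1e-3)
    for _ in range(steps):
        o, a, r, o2, d = zip(*random.sample(dataset, batch))  # sample the static dataset
        o, o2 = torch.stack(o), torch.stack(o2)               # observations as tensors
        a = torch.tensor(a)                                   # actions -> int64 tensor
        r = torch.tensor(r)                                   # rewards -> float tensor
        d = torch.tensor(d, dtype=torch.float32)              # done flags
        with torch.no_grad():                                 # bootstrapped TD target
            target = r + gamma * (1 - d) * q(o2).max(dim=1).values
        pred = q(o).gather(1, a.unsqueeze(1)).squeeze(1)      # Q(s, a) for logged actions
        loss = nn.functional.mse_loss(pred, target)
        opt.zero_grad(); loss.backward(); opt.step()
    return q
```

This naive version shows only the data flow; without extra regularization, the max over actions can exploit errors of the Q-function on actions never seen in the dataset, which is exactly the failure mode offline RL research aims to fix.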

At Danfoss in Gråsten, the Danish Technological Institute (DTI) is testing several robot technologies as part of a pilot project in the European robot network ROBOTT-NET: manipulation using force sensors, simpler separation of items, and a 3D-printed three-in-one gripper for handling capacitors, nuts and a socket handle.
