

interview by   -   December 9, 2019

From Robert the Robot, 1950s toy ad

In this episode, we take a closer look at the effect of novelty in human-robot interaction. Novelty is the quality of being new or unusual.

The typical view is that while something is new, or “a novelty”, it will initially make us behave differently than we normally would. But over time, as the novelty wears off, we will likely return to our regular behaviors. For example, a new robot may cause a person to behave differently at first, as it’s introduced into the person’s life, but after some time the robot won’t be as exciting, novel and motivating, and the person might return to their previous behavioral patterns, interacting less with the robot.

To find out more about the concept of novelty in human-robot interactions, our interviewer Audrow caught up with Catharina Vesterager Smedegaard, a PhD-student at Aarhus University in Denmark, whose field of study is Philosophy.

Catharina sees novelty differently from how we typically see it. She thinks of it as projecting what we don’t know onto what we already know, which has implications for how human-robot interactions are designed and researched. She also speaks about her experience in philosophy more generally, and gives us advice on philosophical thinking.

by   -   December 7, 2019


That’s right! You better not run, you better not hide, you better watch out for brand new robot holiday videos on Robohub!

by   -   December 7, 2019

By Aviral Kumar

One of the primary factors behind the success of machine learning approaches in open world settings, such as image recognition and natural language processing, has been the ability of high-capacity deep neural network function approximators to learn generalizable models from large amounts of data. Deep reinforcement learning methods, however, require active online data collection, where the model actively interacts with its environment. This makes such methods hard to scale to complex real-world problems, where active data collection means that large datasets of experience must be collected for every experiment – this can be expensive and, for systems such as autonomous vehicles or robots, potentially unsafe. In a number of domains of practical interest, such as autonomous driving, robotics, and games, there exist plentiful amounts of previously collected interaction data, which consists of informative behaviours that are a rich source of prior information. Deep RL algorithms that can utilize such prior datasets will not only scale to real-world problems, but will also lead to solutions that generalize substantially better. A data-driven paradigm for reinforcement learning will enable us to pre-train and deploy agents capable of sample-efficient learning in the real world.
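The core idea of this data-driven (offline) paradigm is that the agent learns entirely from a fixed, previously collected dataset of transitions, never interacting with the environment during training. As a minimal sketch of that idea (not the method from the post), the toy environment, dataset sizes, and hyperparameters below are all illustrative assumptions: tabular Q-learning fitted to a static batch of transitions from a small chain world.

```python
import numpy as np

# Hypothetical toy environment: a 5-state chain where action 1 moves right,
# action 0 moves left, and reaching the last state yields reward 1.0.
n_states, n_actions = 5, 2
rng = np.random.default_rng(0)

def step(s, a):
    s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

# A fixed dataset of (s, a, r, s') transitions gathered ahead of time --
# during learning, the agent only ever reads from this batch.
dataset = []
for _ in range(2000):
    s = int(rng.integers(n_states))
    a = int(rng.integers(n_actions))
    s2, r = step(s, a)
    dataset.append((s, a, r, s2))

# Offline Q-learning: repeatedly sweep the static dataset instead of
# collecting new experience online.
Q = np.zeros((n_states, n_actions))
gamma, lr = 0.9, 0.1
for _ in range(50):
    for s, a, r, s2 in dataset:
        target = r + gamma * Q[s2].max()
        Q[s, a] += lr * (target - Q[s, a])

# Recover a greedy policy from the learned values; here it should always
# choose "right" (action 1), since reward lies at the end of the chain.
policy = Q.argmax(axis=1)
print(policy)
```

In a tabular setting like this, offline learning works whenever the batch covers the state-action space well; the deep-RL case the post discusses is harder precisely because function approximation can exploit gaps in that coverage.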

At Danfoss in Gråsten, the Danish Technological Institute (DTI) is testing, as part of a pilot project in the European robot network ROBOTT-NET, several robot technologies: Manipulation using force sensors, simpler separation of items and a 3D-printed three-in-one gripper for handling capacitors, nuts and a socket handle.

by   -   December 7, 2019

An MIT-invented model demonstrates an understanding of some basic “intuitive physics” by registering “surprise” when objects in simulations move in unexpected ways, such as rolling behind a wall and not reappearing on the other side.
Image: Christine Daniloff, MIT
By Rob Matheson

Humans have an early understanding of the laws of physical reality. Infants, for instance, hold expectations for how objects should move and interact with each other, and will show surprise when they do something unexpected, such as disappearing in a sleight-of-hand magic trick.
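One common way to operationalize this kind of “surprise” is as prediction error: a model extrapolates where an object should be next, and a large gap between prediction and observation registers as surprise. The sketch below is an illustrative assumption, not the MIT model itself; the constant-velocity predictor and the example trajectories are invented for demonstration.

```python
import numpy as np

def predict_next(p_prev, p_curr):
    # Constant-velocity extrapolation: assume the object keeps moving
    # with the same displacement it showed on the last step.
    return p_curr + (p_curr - p_prev)

def surprise(p_prev, p_curr, p_next):
    # Surprise = distance between the predicted and observed positions.
    return float(np.linalg.norm(np.asarray(p_next) -
                                predict_next(np.asarray(p_prev),
                                             np.asarray(p_curr))))

# Object rolling steadily to the right: prediction matches, low surprise.
low = surprise([0.0, 0.0], [1.0, 0.0], [2.0, 0.0])

# Object that should have reappeared past the wall but stayed put:
# prediction is violated, high surprise.
high = surprise([0.0, 0.0], [1.0, 0.0], [1.0, 0.0])

print(low, high)
```

A real intuitive-physics model replaces the hand-coded extrapolator with a learned dynamics model, but the signal it thresholds is the same: mismatch between expected and observed motion.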

by   -   December 7, 2019

By Sudeep Dasari

This post is cross-listed at the SAIL Blog and the CMU ML blog.

In the last decade, we’ve seen learning-based systems provide transformative solutions for a wide range of perception and reasoning problems, from recognizing objects in images to recognizing and translating human speech. Recent progress in deep reinforcement learning (i.e. integrating deep neural networks into reinforcement learning systems) suggests that the same kind of success could be realized in automated decision making domains. If fruitful, this line of work could allow learning-based systems to tackle active control tasks, such as robotics and autonomous driving, alongside the passive perception tasks to which they have already been successfully applied.

by   -   December 7, 2019

The elephant in the room loomed large two weeks ago at the inaugural Internet of Things Consortium (IoTC) Summit in New York City. Almost every presentation began apologetically with the refrain “in a 5G world,” practically challenging the industry’s rollout goals. At one point Brigitte Daniel-Corbin, IoT Strategist with Wilco Electronic Systems, sensed the need to reassure the audience by exclaiming, “It’s not a matter of if, but when, 5G will happen!” Frontier-tech pundits too often prematurely predict hyperbolic adoption cycles, falling into the trap of most soothsaying visions. The IoTC Summit’s willingness to pull back the curtain left its audience with a sober roadmap forward, one that will ultimately drive greater innovation and profit.

by   -   December 7, 2019

MIT researchers have invented a way to efficiently optimize the control and design of soft robots for target tasks, which has traditionally been a monumental undertaking in computation.

interview by   -   November 26, 2019

In this episode Lilly Clark interviews Marlyse Reeves, PhD student at MIT, about her work in cognitive robotics and hybrid activity-motion planning. Reeves discusses the role of robotics in space, the challenges of multi-vehicle missions, planning under uncertainty, and her work on an underwater exploration mission.

by   -   November 21, 2019

The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (#IROS2019) was held in Macau earlier this month. The theme this year was “robots connecting people”.

by   -   November 21, 2019

The new “growing robot” can be programmed to grow, or extend, in different directions, based on the sequence of chain units that are locked and fed out from the “growing tip,” or gearbox.
Image courtesy of researchers, edited by MIT News

In today’s factories and warehouses, it’s not uncommon to see robots whizzing about, shuttling items or tools from one station to another. For the most part, robots navigate pretty easily across open layouts. But they have a much harder time winding through narrow spaces to carry out tasks such as reaching for a product at the back of a cluttered shelf, or snaking around a car’s engine parts to unscrew an oil cap.

by   -   November 21, 2019
In lane-merging scenarios, a system developed at MIT could distinguish between altruistic and egoistic driving behavior.
Image courtesy of the researchers.

Self-driving cars are coming. But for all their fancy sensors and intricate data-crunching abilities, even the most cutting-edge cars lack something that (almost) every 16-year-old with a learner’s permit has: social awareness.

interview by   -   November 11, 2019

In this episode, we hear from Brad Hayes, Assistant Professor of Computer Science at the University of Colorado Boulder, who directs the university’s Collaborative AI and Robotics lab. The lab’s work focuses on developing systems that can learn from and work with humans—from physical robots or machines, to software systems or decision support tools—so that together, the human and system can achieve more than each could achieve on their own.

Our interviewer Audrow caught up with Dr. Hayes to discuss why collaboration may at times be preferable to full autonomy and automation, how human narration can be used to help robots learn from demonstration, and the challenges of developing collaborative systems, including the importance of shared models and safety for the adoption of such technologies in the future.

by   -   November 6, 2019

The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (#IROS2019) is being held in Macau this week. The theme this year is “robots connecting people”.

by   -   November 6, 2019

The Wyss Institute’s and SEAS’s robotics team built different models of the soft-actuator-powered RoboBee. Shown here are a four-wing, two-actuator model and an eight-wing, four-actuator model, the latter being the first soft-actuator-powered flying microrobot capable of controlled hovering flight. Credit: Harvard Microrobotics Lab/Harvard SEAS
By Leah Burrows

The sight of a RoboBee careening towards a wall or crashing into a glass box may have once triggered panic in the researchers in the Harvard Microrobotics Laboratory at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), but no more.

On the Novelty Effect in Human-Robot Interaction
December 9, 2019
