
human-robot interaction

By Christoph Salge, Marie Curie Global Fellow, University of Hertfordshire

How do you stop a robot from hurting people? Many existing robots, such as those assembling cars in factories, shut down immediately when a human comes near. But this quick fix wouldn’t work for something like a self-driving car that might have to move to avoid a collision, or a care robot that might need to catch an old person if they fall. With robots set to become our servants, companions and co-workers, we need to deal with the increasingly complex situations this will create and the ethical and safety questions this will raise.

Interview, July 8, 2017



In this episode, MeiXing Dong conducts interviews at the 2017 Midwest Speech and Language Days workshop in Chicago. She talks with Michael White of Ohio State University about question interpretation in a dialogue system; Dmitriy Dligach of Loyola University Chicago about extracting patient timelines from doctor’s notes; and Denis Newman-Griffiths of Ohio State University about connecting words and phrases to relevant medical topics.

A subject plays a computer game as part of a neural security experiment at the University of Washington.
Patrick Bennett, CC BY-ND

By Eran Klein, University of Washington and Katherine Pratt, University of Washington

 

In the 1995 film “Batman Forever,” the Riddler used 3-D television to secretly access viewers’ most personal thoughts in his hunt for Batman’s true identity. By 2011, the metrics company Nielsen had acquired Neurofocus and had created a “consumer neuroscience” division that uses integrated conscious and unconscious data to track customer decision-making habits. What was once a nefarious scheme in a Hollywood blockbuster seems poised to become a reality.

The device named “Spark” flew high above the man on stage as his hands waved in the direction of the flying object. In a demonstration of DJI’s newest drone, the audience marveled at the Coke can-sized device’s most compelling feature: gesture controls. Instead of a traditional remote control, this flying selfie machine follows hand movements across the sky. Gestures are among the most innate forms of mammalian communication, and teaching robots to respond to these primal movements marks a new milestone in co-existence.

In this video, Philip “Robo-Phil” English offers a tutorial on programming your NAO robot for human interaction. Enjoy!

Last week I had the pleasure of debating the question “does AI pose a threat to society?” with friends and colleagues Christian List, Maja Pantic and Samantha Payne. The event was organised by the British Academy and brilliantly chaired by the Royal Society’s director of science policy Claire Craig. Here follows my opening statement:

April 29, 2017

Engineers and researchers are already speculating about the next phase of UI development, especially for robotics control. So far, the leading candidate is gesture-based control—the use of physical gestures to relay commands.

Researcher Joffrey Becker explores why robots can sometimes appear as strange creatures to us and seeks to better understand people’s tendency to anthropomorphise machines.

The demands posed by a rapidly ageing global population are leading manufacturers of robots to develop technology for providing care and rehabilitation for elderly and impaired people in their own homes.

Robots are the technology of the future. But the current legal system is incapable of handling them. This generic statement is often the premise for considerations about the possibility of awarding rights (and liabilities) to these machines at some less-than-clearly identified point in time. Discussing the adequacy of existing regulation in accommodating new technologies is certainly necessary, but the ontological approach is incorrect. Andrea Bertolini explains.

Interview, March 18, 2017



In this episode, Audrow Nash interviews Bradley Knox, founder of bots_alive. Knox speaks about an add-on to the Hexbug, a six-legged robotic toy, that makes the bot behave more like a character. They discuss the novel way Knox uses machine learning to create a sense of character, the limitations of technology in emulating living creatures, and how the bots_alive robot was built within those limitations.

March 13, 2017

Automated cars are hurtling towards us at breakneck speed, with all-electric Teslas already running limited autopilot systems on roads worldwide and Google trialling its own autonomous pod cars. However, before we can reply to emails while being driven to work, we need a foolproof way to determine when drivers can safely take control and when it should be left to the car.

March 6, 2017
The feedback system enables human operators to correct the robot’s choice in real time – Jason Dorfman, MIT CSAIL

For robots to do what we want, they need to understand us. Too often, this means having to meet them halfway: teaching them the intricacies of human language, for example, or giving them explicit commands for very specific tasks. But what if we could develop robots that were a more natural extension of us and that could actually do whatever we are thinking?

Artificial intelligence (AI) already plays a major role in human economies and societies, and it will play an even bigger role in the coming years. To ponder the future of AI is thus to acknowledge that the future is AI. But how bright is that future? Or how dark?

February 22, 2017

If a machine can think, decide and act on its own volition, if it can be harmed or held responsible for its actions, should we stop treating it like property and start treating it more like a person with rights?



Midwest Speech and Language Days 2017 Posters
July 8, 2017

