by   -   April 18, 2018

In this episode of Robots in Depth, Per Sjöborg speaks with David Johan, Co-Founder and CEO of Shape Robotics.

by   -   April 17, 2018

In a basement at New York University in 2013, Dr. Sergei Lupashin wowed a room of one hundred leading technology enthusiasts with one of the first indoor Unmanned Aerial Vehicle (UAV) demonstrations. During his presentation, Dr. Lupashin of ETH Zurich attached a dog leash to an aerial drone while declaring to the audience that “there has to be another way” of flying robots safely around people. Lupashin’s creativity eventually led to the invention of Fotokite and one of the most successful Indiegogo campaigns.

by   -   April 17, 2018

The choice of gait, that is, whether we walk or run, comes to us so naturally that we hardly ever think about it. We walk at slow speeds and run at high speeds. If we get on a treadmill and slowly crank up the speed, we start out walking, but at some point we switch to running; involuntarily, and simply because it feels right. We are so accustomed to this that we find it rather amusing to see someone walking at high speed, for example during the racewalk at the Olympics. This automatic choice of gait happens in almost all animals, though sometimes with different gaits. Horses, for example, tend to walk at slow speeds, trot at intermediate speeds, and gallop at high speeds. What is it that makes walking better suited to low speeds and running better suited to high speeds? How do we know that we have to switch, and why don’t we skip or gallop like horses? What exactly constitutes walking, running, trotting, galloping, and all the other gaits found in nature?
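One classic way to think about the walk-to-run switch is the dimensionless Froude number, Fr = v²/(g·L), which compares forward speed to leg length under inverted-pendulum walking; humans typically break into a run near Fr ≈ 0.5. A minimal sketch of that heuristic (the 0.9 m leg length and the 0.5 threshold are standard textbook values, not taken from this article):

```python
G = 9.81  # gravitational acceleration, m/s^2

def froude_number(speed_m_s: float, leg_length_m: float) -> float:
    """Ratio of centripetal to gravitational force for inverted-pendulum walking."""
    return speed_m_s ** 2 / (G * leg_length_m)

def predicted_gait(speed_m_s: float, leg_length_m: float = 0.9) -> str:
    """Predict walk vs. run from the classic Fr ~ 0.5 transition heuristic."""
    return "run" if froude_number(speed_m_s, leg_length_m) > 0.5 else "walk"

# Slow treadmill speeds favor walking; cranking the speed up forces the switch.
for v in (1.0, 2.0, 3.0):
    print(f"{v} m/s -> {predicted_gait(v)}")
```

On this model, a person with 0.9 m legs walks comfortably at 1–2 m/s and is predicted to run by 3 m/s, matching the involuntary treadmill switch described above.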

by   -   April 17, 2018

Great news here at Dreaming Robots as we’ve just been able to confirm with Sky Atlantic and NowTV that we will be live tweeting again each episode of the new series of HBO’s completely fantastic Westworld, starting with the Season 2 premiere on 23 April.

interview by   -   April 14, 2018
Toyota HSR Trained with DART to Make a Bed.

In this episode, Audrow Nash speaks with Michael Laskey, a PhD student at UC Berkeley, about a method for robust imitation learning called DART. Laskey discusses how DART relates to previous imitation learning methods, how the approach has been used for folding bed sheets, and the importance of robotics leveraging theory from other disciplines.
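The core idea behind DART is noise injection during demonstration collection: the robot executes a noised version of the supervisor's action while the dataset is labeled with the clean action, so the learner also sees the slightly off-distribution states it will visit from its own small errors. A toy sketch of that loop (the linear "steer toward the origin" supervisor and simple additive dynamics are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def supervisor_action(state):
    # Stand-in expert: steer the state toward the origin (illustrative only).
    return -0.5 * state

def collect_dart_demos(n_steps=100, noise_std=0.3):
    """Collect (state, expert_label) pairs while executing a NOISED expert action.

    The label is always the clean supervisor action, but the trajectory is
    driven by the noisy action, so the dataset covers states near, but not
    exactly on, the expert's trajectory.
    """
    state = np.array([2.0])
    data = []
    for _ in range(n_steps):
        clean = supervisor_action(state)
        noisy = clean + rng.normal(0.0, noise_std, size=state.shape)
        data.append((state.copy(), clean))  # supervise with the clean action
        state = state + noisy               # but step the system with the noisy one
    return data

demos = collect_dart_demos()
```

Compared with on-policy corrections (as in DAgger), this keeps the human supervisor in control during collection while still providing off-distribution coverage.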

by   -   April 11, 2018

Motion control problems have become standard benchmarks for reinforcement learning, and deep RL methods have been shown to be effective for a diverse suite of tasks ranging from manipulation to locomotion. However, characters trained with deep RL often exhibit unnatural behaviours, bearing artifacts such as jittering, asymmetric gaits, and excessive movement of limbs. Can we train our characters to produce more natural behaviours?
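A common remedy in motion-imitation work is to blend the task reward with a term rewarding similarity to a motion-capture reference pose, so the policy is pulled toward natural movement while still pursuing its goal. A minimal sketch of such a blended reward (the weights, the squared-error pose distance, and the exponential shaping are illustrative assumptions, not this article's exact formulation):

```python
import numpy as np

def blended_reward(pose, ref_pose, goal_reward, w_imitate=0.7, w_goal=0.3):
    """Blend a task reward with similarity to a reference (mocap) pose.

    r_imitate is 1 when the character matches the reference exactly and
    decays exponentially with squared pose error.
    """
    pose_err = np.sum((np.asarray(pose) - np.asarray(ref_pose)) ** 2)
    r_imitate = np.exp(-2.0 * pose_err)
    return w_imitate * r_imitate + w_goal * goal_reward
```

With only the goal term, RL is free to find jittery, asymmetric solutions; the imitation term penalizes exactly those artifacts.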

The NHTSA/SAE “levels” of robocars are not just incorrect. I now believe they are contributing to an attitude towards “level 2” autopilots that plays a small but real role in the recent Tesla fatalities.

by   -   April 11, 2018


It all started with 166 companies spread across 12 European countries applying for a “golden ticket” to ROBOTT-NET’s Voucher Program. 64 companies received a voucher and highly specialized consultancy from a broad range of the brightest robotics experts around Europe. Now five of the 64 projects have been selected for a ROBOTT-NET pilot.

by   -   April 11, 2018

In the constantly changing landscape of today’s global digital workspace, AI’s presence grows in almost every industry. Retail giants like Amazon and Alibaba are using algorithms written by machine learning software to add value to the customer experience. Machine learning is also prevalent in the new Service Robotics world as robots transition from blind, dumb and caged to mobile and perceptive.

by   -   April 11, 2018
Aude Oliva (right), a principal research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, and Dan Gutfreund (left), a principal investigator at the MIT–IBM Watson AI Laboratory and a staff member at IBM Research, are the principal investigators for the Moments in Time Dataset, one of the projects related to AI algorithms funded by the MIT–IBM Watson AI Laboratory.
Photo: John Mottern/Feature Photo Service for IBM

By Meg Murphy
A person watching videos that show things opening — a door, a book, curtains, a blooming flower, a yawning dog — easily understands the same type of action is depicted in each clip.

by   -   April 4, 2018

In this episode of Robots in Depth, Per Sjöborg speaks with Harsha Prahlad, Co-Founder and Chief Technology and Products Officer at Grabit Inc. Harsha talks about his novel gripper for soft goods manufacturing, and how he got into robotics via the aerospace industry.

The Uber car and Tesla’s Autopilot, both in the news for fatalities, are really two very different things. This table outlines the differences. Also, see below for some new details on why the Tesla crashed, and more.

by   -   April 4, 2018

The long-anticipated, Steven Spielberg-helmed Ready Player One has just been released in UK cinemas this week, and as a film of obvious interest to DreamingRobots and Cyberselves everywhere, we went along to see what the Maestro of the Blockbuster has done with Ernest Cline’s 2011 novel (which the author himself helped to adapt to the screen).

interview by   -   March 31, 2018



In this interview, Audrow speaks with Andrea Bajcsy and Dylan P. Losey about a method that allows robots to infer a human’s objective through physical interaction. They discuss their approach, the challenges of learning complex tasks, and their experience collaborating between different universities.
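One way to frame learning from physical interaction is that when a human pushes the robot onto a corrected trajectory, the robot nudges its objective weights toward the features the correction increased. A toy sketch of that kind of feature-based update (the learning rate, the feature vectors, and the update form are illustrative assumptions, not necessarily the method discussed in the interview):

```python
import numpy as np

def update_objective(theta, features_robot, features_corrected, lr=0.1):
    """Shift objective weights toward the features of the human-corrected trajectory.

    theta weights a set of trajectory features (e.g. distance to a table,
    end-effector height); the correction tells us which features the human
    cares about more than the robot's current objective does.
    """
    theta = np.asarray(theta, dtype=float)
    delta = np.asarray(features_corrected, dtype=float) - np.asarray(features_robot, dtype=float)
    return theta + lr * delta
```

Repeated small corrections then accumulate into an objective that better matches the human's preferences, without the person ever programming the robot explicitly.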

by   -   March 30, 2018

In this episode of Robots in Depth, Per Sjöborg speaks with Franziska Kirstein, Human-Robot Interaction Expert and Project Manager at Blue Ocean Robotics, about her experience as a linguist working with human-robot interaction.
