In modern factories, human workers and robots are two major workforces. Out of safety concerns, the two are normally kept separate, with robots confined in metal cages, which limits both the productivity and the flexibility of production lines. In recent years, attention has turned toward removing the cages so that human workers and robots can collaborate, creating a human-robot co-existing factory.
The Robot Launch global startup competition is over for 2017. We’ve seen startups from all over the world and all sorts of application areas – and we’d like to congratulate the overall winner, Semio, and runners-up Apellix and Mothership Aeronautics. All three startups met the judges’ criteria: to be an early-stage platform technology in robotics or AI with great impact, large market potential and a near-term customer pipeline.
Three very different robotics startups have been battling it out over the last week to win the “Robohub Choice” award in our annual startup competition. One was social, one was medical and one was agricultural! Also, one was from the UK, one was from Ukraine and one was from Canada. Although nine startups entered the voting, it was clear from the start that it was a three-horse race – thanks to our Robohub readers and the social media efforts of the startups.
By Sylvia Herbert, David Fridovich-Keil, and Claire Tomlin
The Problem: Fast and Safe Motion Planning
Real-time autonomous motion planning and navigation is hard, especially when we care about safety. It becomes even harder for systems with complicated dynamics, external disturbances (like wind), and a priori unknown environments. Our goal in this work is to “robustify” existing real-time motion planners so that they guarantee safety during navigation of dynamic systems.
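One common way to robustify a planner along these lines is to precompute a worst-case tracking error bound between the real dynamics and the simpler planning model, then plan with every obstacle inflated by that bound. The sketch below illustrates the idea only; the bound value, obstacle set, and function name are all hypothetical, not taken from this work.

```python
# Hedged sketch: plan with obstacles inflated by a precomputed
# tracking error bound (TEB). All numbers here are made up.
teb = 0.5  # assumed worst-case tracking error of the real system

# Obstacles as (center, radius) pairs in the planner's 2D frame.
obstacles = [((3.0, 2.0), 1.0)]

def safe_to_plan_through(point, obstacles, teb):
    """A planned point is safe if it clears every obstacle inflated by the TEB."""
    x, y = point
    for (cx, cy), r in obstacles:
        if ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 <= r + teb:
            return False
    return True

print(safe_to_plan_through((0.0, 0.0), obstacles, teb))  # True: well clear
print(safe_to_plan_through((3.2, 2.1), obstacles, teb))  # False: inside inflated region
```

Because the planned path keeps at least the tracking error bound away from every true obstacle, the real system stays collision-free even when it deviates from the plan by up to that bound.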
The nature, promise and risks of new technologies enter into our shared thinking through narrative – explicit or implicit stories about the technologies and their place in our lives. These narratives can determine what is salient about the technologies, influencing how they are represented in media, culture and everyday discussion. The narratives can influence the dynamics of concern and aspiration across society; the ways and the contexts in which different groups and individuals become aware of and respond to mainstream, new and emerging technologies. The narratives available at a particular point in time, and who tells them, can affect the course of technology development and uptake in subtle ways.
In the lead up to the finals of the Robot Launch 2017 competition on December 14, we’re having one round of public voting for your favorite startup from the Top 25. While in previous years we’ve had public voting for all the startups, running alongside the investor judging, this year it’s an opt-in, because many of the startups seeking investment are not yet ready to publicize. Each year the startups get better and better, so we can’t wait to see who you think is the best! Make sure you vote for your favorite – below – by 6pm PST, 10 December and spread the word through social media using #robotlaunch2017.
Enabling robots to act autonomously in the real-world is difficult. Really, really difficult. Even with expensive robots and teams of world-class researchers, robots still have difficulty autonomously navigating and interacting in complex, unstructured environments.
Shortly after SoftBank acquired his company last October, Marc Raibert of Boston Dynamics confessed, “I happen to believe that robotics will be bigger than the Internet.” Many sociologists regard the Internet as the single biggest societal invention since the dawn of the printing press in 1440. To fully understand Raibert’s point of view, one needs to analyze his zoo of robots, which are best known for their awe-inspiring gait, balance and agility. The newest creation to walk out of Boston Dynamics’ lab is SpotMini, the latest evolution of mechanical canines.
In Imitation Learning (IL), also known as Learning from Demonstration (LfD), a robot learns a control policy by analyzing demonstrations of the policy performed by an algorithmic or human supervisor. For example, to teach a robot to make a bed, a human would tele-operate the robot through the task to provide examples. The robot then learns a control policy, a mapping from images/states to actions, which we hope will generalize to states that were not encountered during training.
Wanda Tuerlinckx and Erwin R. Boer have fused their scientific and photographic interests in robots, traveling the world since 2016 to visit roboticists and to discuss and photograph their creations. The resulting set of photographs documents the technical robot revolution unfolding before us. The portfolio below presents the androids from Wanda’s collection of robot photographs.
The acquisition and processing of a video stream can be very computationally expensive. Typical image processing applications split the work across multiple threads: one acquiring the images, and another running the actual algorithms. In MATLAB we can get multi-threading by interfacing with other languages, but there is a significant cost associated with exchanging data across the resulting language barrier. In this blog post, we compare different approaches for getting data through MATLAB’s Java interface, and we show how to acquire high-resolution video streams in real time and with low overhead.
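The two-thread split described above is a classic producer/consumer pattern: the acquisition thread pushes frames into a bounded buffer, and the processing thread drains it. A minimal Python sketch of the pattern follows (the post itself concerns MATLAB's Java interface; here the camera is simulated and the frame contents are placeholders).

```python
import queue
import threading

frames = queue.Queue(maxsize=8)   # bounded buffer between the two threads
results = []

def acquire(n_frames):
    """Producer: stands in for the camera-acquisition thread."""
    for i in range(n_frames):
        frames.put([i] * 4)       # placeholder for a captured image
    frames.put(None)              # sentinel: no more frames

def process():
    """Consumer: stands in for the thread running the actual algorithms."""
    while True:
        frame = frames.get()
        if frame is None:
            break
        results.append(sum(frame))  # placeholder for real image processing

producer = threading.Thread(target=acquire, args=(5,))
consumer = threading.Thread(target=process)
producer.start(); consumer.start()
producer.join(); consumer.join()
print(results)  # [0, 4, 8, 12, 16]
```

The bounded queue is what keeps acquisition from outrunning processing: when the buffer fills, the producer blocks instead of dropping or hoarding frames, which is the same back-pressure any cross-language frame-exchange scheme ultimately needs.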
Governor Andrew Cuomo of the State of New York declared last month that New York City will join 13 other states in testing self-driving cars: “Autonomous vehicles have the potential to save time and save lives, and we are proud to be working with GM and Cruise on the future of this exciting new technology.” For General Motors, this represents a major milestone in the development of its Cruise software, since the knowledge gained on Manhattan’s busy streets will be invaluable in accelerating its deep learning technology. In the spirit of one-upmanship, Waymo went one step further by declaring this week that it will be the first car company in the world to ferry passengers completely autonomously (without human engineers safeguarding the wheel).
The Soft Orthotic Physiotherapy Hand Interactive Aid (SOPHIA) is the culmination of work by Alistair C. McConnell (lead researcher) through his PhD and the SOPHIA team, and it forms the foundation for our future research into soft robotic rehabilitation systems.
At #WebSummit 2017, I was part of a panel on what the future will bring in 2030 with John Vickers from Blue Abyss, Jacques Van den Broek from Randstad and Stewart Rogers from Venture Beat. John talked about how technology will allow humans to explore amazing new places. Jacques demonstrated how humans were more complex than our most sophisticated AI and thus would be an integral part of any advances. And I focused on how the current technological changes would look amplified over a 10–12 year period.