In this episode, Abate interviews David Mindell, co-founder of Humatics. David discusses a system they developed that can locate a special tracking device with centimeter-level accuracy; they are currently developing a device that can localize down to millimeter-level accuracy. This addresses the core problem of localization for robots. David discusses the technology behind these products and their applications.
A simulation system invented at MIT to train driverless cars creates a photorealistic world with infinite steering possibilities, helping the cars learn to navigate a host of worst-case scenarios before cruising down real streets.
Researchers from the University of Zurich and NCCR Robotics have demonstrated a flying robot that can detect and avoid fast-moving objects. This is a step towards drones that can fly faster in harsh environments, accomplishing more in less time.
Reinforcement learning has seen a great deal of success in solving complex decision-making problems ranging from robotics to games to supply chain management to recommender systems. Despite these successes, deep reinforcement learning algorithms can be exceptionally difficult to use, due to unstable training, sensitivity to hyperparameters, and generally unpredictable and poorly understood convergence properties. Multiple explanations, and corresponding solutions, have been proposed for improving the stability of such methods, and we have seen good progress over the last few years on these algorithms. In this blog post, we will dive deep into analyzing a central and underexplored reason behind some of the problems with the class of deep RL algorithms based on dynamic programming, which encompasses the popular DQN and soft actor-critic (SAC) algorithms – the detrimental connection between data distributions and learned models.
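The feedback loop the post refers to can be seen even in a minimal sketch (illustrative only; the post concerns deep RL, and this toy MDP, its states, and its rewards are invented for the example): in dynamic-programming-style methods, the data used for Bellman backups is gathered by a policy derived from the current value estimates, and the regression target itself also uses those estimates, so the training distribution shifts as the model changes.

```python
import random

random.seed(0)

N_STATES, N_ACTIONS = 3, 2
# Tabular Q-values; in DQN/SAC these would be neural networks.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy chain MDP: action 1 moves right, action 0 resets to the start.
    Reward is earned only for reaching (or staying at) the last state."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else 0
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

def epsilon_greedy(state, eps=0.3):
    # The behavior policy depends on the learned Q, so the data
    # distribution is itself a function of the current model --
    # the coupling the blog post analyzes.
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])

gamma, alpha = 0.9, 0.5
state = 0
for _ in range(2000):
    action = epsilon_greedy(state)
    next_state, reward = step(state, action)
    # Bellman backup: the regression target also uses the learned Q.
    target = reward + gamma * max(Q[next_state])
    Q[state][action] += alpha * (target - Q[state][action])
    state = next_state
```

In the tabular case this converges despite the moving target and shifting data; with function approximation, as in DQN and SAC, the same coupling is a source of the instability discussed above.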
Training interactive robots may one day be an easy job for everyone, even those without programming expertise. Roboticists are developing robots that can learn new tasks solely by observing humans. At home, you might someday show a domestic robot how to do routine chores. In the workplace, you could train robots like new employees, showing them how to perform many duties.
Of all the cool things about octopuses (and there are a lot), their arms may rank among the coolest.
Two-thirds of an octopus’s neurons are in its arms, meaning each arm literally has a mind of its own. Octopus arms can untie knots, open childproof bottles, and wrap around prey of any shape or size. The hundreds of suckers that cover their arms can form strong seals even on rough surfaces underwater.
In this interview, Lilly interviews Vijay Kumar, Professor and Dean at the University of Pennsylvania. He discusses coordination, cooperation, and collaboration in multi-robot systems. He also explains where he draws inspiration from in his research, and why robotics has yet to meet science fiction.
In this episode, Shihan Lu interviews Dr. Heather Culbertson, Assistant Professor in the Computer Science Department at the University of Southern California, about her work in haptics. Dr. Culbertson discusses data-driven modeling and rendering of realistic textures, haptic technologies for social touch, the combination of haptics and robots, and her expectations for haptics, and the obstacles it faces, over the next five years.
In this episode, Lauren Klein interviews Human-Robot Interaction researcher Patrícia Alves-Oliveira. Alves-Oliveira tells us about the upcoming RSS Pioneers workshop at the 2020 Robotics: Science and Systems Conference; the workshop brings senior PhD students and postdoctoral researchers together to collaborate and discuss their work with distinguished members of the robotics field. She also describes her own research designing robots to encourage creativity in children.
In this episode, Lilly interviews Juxi Leitner, a Postdoctoral Research Fellow at the Queensland University of Technology and Co-Founder/CEO of LYRO Robotics. LYRO spun out of Team ACRV's win of the 2017 Amazon Robotics Challenge. Here Juxi discusses deep learning, computer vision, intent in grasping and manipulation, and bridging the gap between abstract and low-level understandings of the world. He also discusses why robotics is really an integration field, the Amazon and other robotics challenges, and what's important to consider when spinning an idea into a company.
Like the city that hosts it, the Consumer Electronics Show (CES) has a lot of noise on the show floor. Sifting through the lights, sounds, and people can be an arduous task even for the most experienced CES attendees. Hidden past the North Hall of the Las Vegas Convention Center (LVCC) is a walkway to a tech oasis housed in the Westgate Hotel. This new area hosting SmartCity/IoT innovations is reminiscent of the old Eureka Park, complete with folding tables and ballroom carpeting. The fact that such enterprises require their own area separate from the main halls of the LVCC and the startup pavilions of the Sands Hotel is an indication of how urbanization is being redefined by artificial intelligence.
In this episode, our interviewer Audrow Nash speaks to Gil Weinberg, Professor in Georgia Tech's School of Music and the founding director of the Georgia Tech Center for Music Technology. Weinberg leads a research lab called the Robotic Musicianship group, which focuses on developing artificial creativity and musical expression for robots, and on augmenting humans. Weinberg discusses several of his improvisational robots and how they work, including Shimon, a multi-armed robot marimba player, as well as his work in prosthetic devices for musicians.