
CAIS Center for AI in Society         


interview by   -   June 11, 2019


In this episode, Lauren Klein interviews Milind Tambe, Professor of Computer Science and Industrial and Systems Engineering at the University of Southern California, about his research using artificial intelligence for wildlife conservation. Dr. Tambe describes his team’s use of security games to combat poaching, and his experience deploying his algorithms to inform park ranger schedules internationally.

by   -   June 3, 2019


By Avi Singh

Communicating the goal of a task to another person is easy: we can use language, show them an image of the desired outcome, point them to a how-to video, or use some combination of all of these. Specifying a task to a robot for reinforcement learning, on the other hand, requires substantial effort. Most prior work that applies deep reinforcement learning to real robots makes use of specialized sensors to obtain rewards, or studies tasks where the robot’s internal sensors can be used to measure reward: for example, thermal cameras for tracking fluids, or purpose-built computer vision systems for tracking objects. Since such instrumentation must be repeated for every new task we may wish to learn, it poses a significant bottleneck to the widespread adoption of reinforcement learning for robotics, and it precludes using these methods directly in open-world environments that lack this instrumentation.
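To make the contrast concrete, here is a minimal, hypothetical sketch (in Python) of the two reward styles described above: a reward computed from task-specific instrumentation, and a reward computed directly from camera images using a learned success classifier. The function and variable names are illustrative assumptions, not code from the post.

```python
import numpy as np

# Hypothetical instrumented reward: assumes an external tracking system
# (e.g. a purpose-built vision rig) reports the manipulated object's position.
def instrumented_reward(object_position, goal_position):
    # Negative distance to the goal; requires new instrumentation per task.
    return -float(np.linalg.norm(np.asarray(object_position) - np.asarray(goal_position)))

# Hypothetical image-based reward: assumes a learned success classifier that
# maps a raw camera image to the probability the task is complete, so no
# task-specific sensors are needed in the robot's environment.
def image_based_reward(image, success_classifier):
    return float(success_classifier(image))  # probability of success in [0, 1]
```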

by   -   June 3, 2019

MIT researchers have developed a low-cost, sensor-packed glove that captures pressure signals as humans interact with objects. The glove can be used to create high-resolution tactile datasets that robots can leverage to better identify, weigh, and manipulate objects.
Image: Courtesy of the researchers
By Rob Matheson

Wearing a sensor-packed glove while handling a variety of objects, MIT researchers have compiled a massive dataset that enables an AI system to recognize objects through touch alone. The information could be leveraged to help robots identify and manipulate objects, and may aid in prosthetics design.

Robonomics Platform         


interview by   -   May 28, 2019



In this episode, Lilly Clark interviews Aleksandr Kapitonov, professor in the “robot economics” academic society at Airalab, about his work on the Robonomics Platform, an Ethereum network infrastructure for integrating robots and cyber-physical systems directly into the economy. Kapitonov discusses the advantages of using blockchain, use cases including a fully autonomous vending machine, and the Robonomics technology stack.

by   -   May 27, 2019

By Marvin Zhang and Sharad Vikram

Imagine a robot trying to learn how to stack blocks and push objects using visual inputs from a camera feed. In order to minimize cost and safety concerns, we want our robot to learn these skills with minimal interaction time, but efficient learning from complex sensory inputs such as images is difficult. This work introduces SOLAR, a new model-based reinforcement learning (RL) method that can learn skills – including manipulation tasks on a real Sawyer robot arm – directly from visual inputs with under an hour of interaction. To our knowledge, SOLAR is the most efficient RL method for solving real-world, image-based robotics tasks.
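As a rough illustration of why model-based methods can be so sample-efficient, below is a generic model-based RL loop sketched in Python. It is not the SOLAR algorithm itself; the `encoder`, `dynamics_model`, and `policy` objects and their methods are assumptions used only to show the collect-data / fit-model / improve-policy structure.

```python
def model_based_rl(env, encoder, dynamics_model, policy,
                   num_iterations=10, episode_length=100):
    """Generic model-based RL loop (a sketch, not SOLAR itself)."""
    dataset = []
    for _ in range(num_iterations):
        # 1. Collect a small batch of real experience with the current policy.
        obs = env.reset()
        for _ in range(episode_length):
            action = policy.act(encoder(obs))          # act in a learned latent space
            next_obs, reward, done, _ = env.step(action)
            dataset.append((obs, action, reward, next_obs))
            obs = next_obs
            if done:
                break
        # 2. Fit a latent-space dynamics model to all experience collected so far.
        dynamics_model.fit(dataset, encoder)
        # 3. Improve the policy using the learned model rather than more real
        #    rollouts, which is what keeps real-robot interaction time low.
        policy.improve(dynamics_model, encoder)
    return policy
```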

by   -   May 23, 2019

To bring more human-like reasoning to autonomous vehicle navigation, MIT researchers have created a system that enables driverless cars to check a simple map and use visual data to follow routes in new, complex environments.
Image: Chelsea Turner

By Rob Matheson

With aims of bringing more human-like reasoning to autonomous vehicles, MIT researchers have created a system that uses only simple maps and visual data to enable driverless cars to navigate routes in new, complex environments.

by   -   May 21, 2019

The IEEE International Conference on Robotics and Automation (ICRA) is being held this week in Montreal, Canada. It’s one of the top venues for roboticists and attracts over 4,000 attendees.

by   -   May 15, 2019
Drone delivery. Credit: Wing

Returning from vacation, I found my inbox overflowing with emails announcing robot “firsts.” At the same time, my relaxed post-vacation disposition was quickly rocked by the news of the day and recent discussions regarding the extent of AI bias within New York’s financial system. These seemingly unrelated items are very much connected: together they represent the paradox of today’s accelerating inventions.

interview by   -   May 13, 2019


In this episode, Audrow Nash interviews Bernt Børnich, CEO, CTO, and co-founder of Halodi Robotics, about Eve (EVEr3), a general-purpose, full-size humanoid robot capable of a wide variety of tasks. Børnich discusses how Eve can be used in research, how Eve’s motors have been designed to be safe around humans (including why they use a low gear ratio), how they do direct force control and the benefits of this approach, and how they use machine learning to reduce cogging in their motors. Børnich also discusses the long-term goal of Halodi Robotics and how they plan to support researchers using Eve.

by   -   May 12, 2019

By Edmund Hunt, University of Bristol

From flocks of birds to fish schools in the sea, or towering termite mounds, many social groups in nature exist together to survive and thrive. This cooperative behaviour can be used by engineers as “bio-inspiration” to solve practical human problems, and by computer scientists studying swarm intelligence.

by   -   May 12, 2019

This blog post is an updated round-up of the various sets of ethical principles of robotics and AI that have been proposed to date, ordered by date of first publication.

by   -   May 12, 2019
Figure 1: Our model-based meta reinforcement learning algorithm enables a legged robot to adapt online in the face of an unexpected system malfunction (note the broken front right leg).

By Anusha Nagabandi and Ignasi Clavera

Humans have the ability to seamlessly adapt to changes in their environments: adults can learn to walk on crutches in just a few seconds, people can adapt almost instantaneously to picking up an object that is unexpectedly heavy, and children who can walk on flat ground can quickly adapt their gait to walk uphill without having to relearn how to walk. This adaptation is critical for functioning in the real world.

by   -   May 12, 2019
Fluid flows from the reservoir to the pump, which has three connections: 1. the accumulator (top), 2. the relief valve (bottom), and 3. the control valve. The control valve feeds the cylinder, which returns fluid through a filter and back to the reservoir.

Hydraulics are sometimes considered an alternative to electric motors.

by   -   May 12, 2019

Adversarial examples are slightly altered inputs that cause neural networks to make classification mistakes they normally wouldn’t, such as classifying an image of a cat as a dog.
Image: MIT News Office

By Rob Matheson

MIT researchers have devised a method for assessing the robustness of machine-learning models known as neural networks across a variety of tasks, by detecting when the models make mistakes they shouldn’t.
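For context on what such a mistake looks like, here is a minimal Python (PyTorch) sketch of the classic fast gradient sign method for generating an adversarial example. It illustrates the adversarial-example concept described above and is not the MIT team’s evaluation method; the model and loss function are assumed to be supplied by the caller.

```python
import torch

def fgsm_adversarial_example(model, loss_fn, image, label, epsilon=0.01):
    # Fast gradient sign method: nudge the input slightly in the direction
    # that most increases the loss, so a correctly classified image can be
    # pushed across the decision boundary (e.g. "cat" becomes "dog").
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    # Keep the perturbed image in a valid pixel range.
    return perturbed.clamp(0.0, 1.0).detach()
```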

interview by   -   May 1, 2019


In this episode, Lauren Klein interviews Hae Won Park, a Research Scientist in the Personal Robots Group at the MIT Media Lab, about storytelling robots for children. Dr. Park elaborates on enabling robots to understand how children are learning, and how they can help children with literacy skills and encourage exploration.
