Dig below the surface of some of today’s biggest tech controversies and you are likely to find an algorithm misfiring. These errors are not primarily caused by biased data that makes algorithms discriminatory, or by their inability to improvise creatively. No, they stem from something more fundamental: even when algorithms generate routine predictions from unbiased data, they will make errors. To err is algorithm.
For the last in our series of blog posts on machine learning in research, we spoke to Dr Nathan Griffiths to find out more about machine learning in transport. Nathan is a Reader in the Department of Computer Science at the University of Warwick, whose research into the application of machine learning to autonomous vehicles (or “driverless cars”) has been supported by a Royal Society University Research Fellowship.
How can we create robots that can carry out important tasks in dangerous environments? Machine learning is supporting advances in the field of robotics. To find out more, we talked to Dr Rustam Stolkin, Royal Society Industry Fellow for Nuclear Robotics, Professor of Robotics at the University of Birmingham, and Director at A.R.M Robotics Ltd, about his work combining machine learning and robotics to create practical solutions to nuclear problems.
Northwestern University mechanical engineering professor Todd Murphey and his team are engineering robots that, one might say, could make robotic assistance as seamless as “humanly” possible. With support from the National Science Foundation (NSF), the team is using novel tools, such as a drawing robot, to develop the algorithms, or rules of behavior, that would greatly enhance a robot’s ability to adapt to human unpredictability.
In this episode, Audrow Nash interviews Bradley Knox, founder of bots_alive. Knox speaks about an add-on to a Hexbug, a six-legged robotic toy, that makes the bot behave more like a character. They discuss the novel way Knox uses machine learning to create a sense of character. They also discuss the limitations of technology in emulating living creatures, and how the bots_alive robot was built within those limitations.
If you take humans out of the driving seat, could traffic jams, accidents and high fuel bills become a thing of the past? As cars become more automated and connected, attention is turning to how best to choreograph the interaction between the tens or hundreds of automated vehicles that will one day share the same segment of Europe’s road network.
For robots to do what we want, they need to understand us. Too often, this means having to meet them halfway: teaching them the intricacies of human language, for example, or giving them explicit commands for very specific tasks. But what if we could develop robots that were a more natural extension of us and that could actually do whatever we are thinking?
Artificial intelligence (AI) already plays a major role in human economies and societies, and it will play an even bigger role in the coming years. To ponder the future of AI is thus to acknowledge that the future is AI. But how bright is that future? Or how dark?
Current legal AI systems do not think like human lawyers. But, as their capabilities improve, the temptation grows to use such systems not only to supplement legal staff but to eliminate the need for some personnel altogether. Ron Yu examines how this might affect the legal profession and the future development of legal AI.
Computer scientist Regina Barzilay is working with MIT students and medical doctors in an ambitious bid to revolutionize cancer care. She is relying on a tool largely unrecognized in the oncology world but deeply familiar to hers: machine learning.
There’s a great deal of concern over artificial intelligence: what it means for our jobs, whether robots will one day replace us in the workplace, and whether it will lead to robot wars. But current research projects show that artificial intelligence (AI) can also be used for the greater good. Here are five global problems that machine learning could help us solve.