We are only in the earliest stages of so-called algorithmic regulation – intelligent machines deploying big data, machine learning and artificial intelligence (AI) to regulate human behaviour and enforce laws – but it already has profound implications for the relationship between private citizens and the state.
How do you stop a robot from hurting people? Many existing robots, such as those assembling cars in factories, shut down immediately when a human comes near. But this quick fix wouldn’t work for something like a self-driving car that might have to move to avoid a collision, or a care robot that might need to catch an old person if they fall. With robots set to become our servants, companions and co-workers, we need to deal with the increasingly complex situations this will create and the ethical and safety questions this will raise.
In the 1995 film “Batman Forever,” the Riddler used 3-D television to secretly access viewers’ most personal thoughts in his hunt for Batman’s true identity. By 2011, the metrics company Nielsen had acquired Neurofocus and had created a “consumer neuroscience” division that uses integrated conscious and unconscious data to track customer decision-making habits. What was once a nefarious scheme in a Hollywood blockbuster seems poised to become a reality.
Current legal AI systems do not think like human lawyers. But as their capabilities improve, the temptation grows to use such systems not only to supplement legal staff but to replace some personnel entirely. Ron Yu examines how this might affect the legal profession and the future development of legal AI.
On 15 November 2016, the IEEE’s AI and Ethics Summit posed the question: “Who does the thinking?” In a series of keynote speeches and lively panel discussions, leading technologists, legal thinkers, philosophers, social scientists, manufacturers and policy makers considered such issues as:
The social, technological and philosophical questions orbiting AI.
Proposals to program machines with ethical algorithms that embody human values.
The social implications of the applications of AI.
The population of the scenic ski resort Davos, nestled in the Swiss Alps, swelled by nearly 3,000 people between the 17th and 20th of January. World leaders, academics, business tycoons, press and interlopers of all varieties were drawn to the 2017 World Economic Forum (WEF) Annual Meeting. The WEF is the foremost creative force for engaging the world’s top leaders in collaborative activities to shape the global, regional and industry agendas for the coming year and beyond. Perhaps unsurprisingly, given recent geopolitical events, the theme of this year’s forum was Responsive and Responsible Leadership.
The MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University will serve as the founding anchor institutions for a new initiative aimed at bridging the gap between the humanities, the social sciences, and computing by addressing the global challenges of artificial intelligence (AI) from a multidisciplinary perspective.
In the wake of BSI’s standard BS 8611 on robots and robotic devices, Yueh-Hsuan Weng interviews Prof. Joanna Bryson of the University of Bath about her take on roboethics and regulating the future of human-robot relationships.
Yueh-Hsuan Weng interviews Prof. Hiroko Kamide about her theory of “One Being for Two Origins”, derived from the teachings of the Buddha, and how her philosophy might impact the emerging field of roboethics.
Should you always do what other people tell you to do? Clearly not. Everyone knows that. So should future robots always obey our commands? At first glance, you might think they should, simply because they are machines and that’s what they are designed to do. But then think of all the times you would not mindlessly carry out others’ instructions – and put robots into those situations.