
ethics

We are only in the earliest stages of so-called algorithmic regulation – intelligent machines deploying big data, machine learning and artificial intelligence (AI) to regulate human behaviour and enforce laws – but it already has profound implications for the relationship between private citizens and the state.

By Christoph Salge, Marie Curie Global Fellow, University of Hertfordshire

How do you stop a robot from hurting people? Many existing robots, such as those assembling cars in factories, shut down immediately when a human comes near. But this quick fix wouldn’t work for something like a self-driving car that might have to move to avoid a collision, or a care robot that might need to catch an old person if they fall. With robots set to become our servants, companions and co-workers, we need to deal with the increasingly complex situations this will create and the ethical and safety questions this will raise.

A subject plays a computer game as part of a neural security experiment at the University of Washington.
Patrick Bennett, CC BY-ND

By Eran Klein, University of Washington and Katherine Pratt, University of Washington


In the 1995 film “Batman Forever,” the Riddler used 3-D television to secretly access viewers’ most personal thoughts in his hunt for Batman’s true identity. By 2011, the metrics company Nielsen had acquired Neurofocus and had created a “consumer neuroscience” division that uses integrated conscious and unconscious data to track customer decision-making habits. What was once a nefarious scheme in a Hollywood blockbuster seems poised to become a reality.

by   -   February 21, 2017

Current legal AI systems do not think like human lawyers. But, as their capabilities improve, the temptation grows to use such systems not only to supplement but to eliminate the need for some personnel. Ron Yu examines how this might affect the legal profession and the future development of legal AI.

Image: IEEE

On 15 November 2016, the IEEE’s AI and Ethics Summit posed the question: “Who does the thinking?” In a series of keynote speeches and lively panel discussions, leading technologists, legal thinkers, philosophers, social scientists, manufacturers and policy makers considered such issues as:

  • The social, technological and philosophical questions orbiting AI.
  • Proposals to program machines with ethical algorithms that embody human values.
  • The social implications of the applications of AI.

With machine intelligence emerging as an essential tool in many aspects of modern life, Alan Winfield discusses autonomous systems, safety and regulation.

by   -   January 20, 2017


The population of Davos, a scenic ski resort nestled in the Swiss Alps, swelled by nearly 3,000 people between the 17th and 20th of January. World leaders, academics, business tycoons, press and interlopers of all varieties were drawn to the 2017 World Economic Forum (WEF) Annual Meeting. The WEF is the foremost creative force for engaging the world’s top leaders in collaborative activities to shape the global, regional and industry agendas for the coming year and beyond. Perhaps unsurprisingly given recent geopolitical events, the theme of this year’s forum was Responsive and Responsible Leadership.

Join Professor Brian Cox as he brings together experts on AI and machine learning (including Robohub’s own Sabine Hauert) to discuss key issues that will shape our technological future.

by   -   January 10, 2017
Image: Gerd Altmann

The MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University will serve as the founding anchor institutions for a new initiative aimed at bridging the gap between the humanities, the social sciences, and computing by addressing the global challenges of artificial intelligence (AI) from a multidisciplinary perspective.

Alan Winfield introduces the recently published IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems…

In the wake of the BSI standard BS 8611 on robots and robotic devices, Yueh-Hsuan Weng interviews Prof. Joanna Bryson of the University of Bath about her take on roboethics and regulating the future of human-robot relationships.

by   -   December 8, 2016

Robotics and artificial intelligence enthusiast Thosha Moodley gives a summary of her experience at European Robotics Week 2016’s central event in Amsterdam, where the theme was service robots.

Yueh-Hsuan Weng interviews Prof. Hiroko Kamide about her theory of “One Being for Two Origins”, derived from the teachings of the Buddha, and how her philosophy might impact the emerging field of roboethics.

by   -   October 19, 2016

Algorithms are prone to errors, biases and predictable malfunctions, writes Frank Pasquale.

Photo by Jiuguang Wang

Should you always do what other people tell you to do? Clearly not. Everyone knows that. So should future robots always obey our commands? At first glance, you might think they should, simply because they are machines and that’s what they are designed to do. But then think of all the times you would not mindlessly carry out others’ instructions – and put robots into those situations.




