
ethics

by   -   July 23, 2021

As robots become more affordable and technically feasible, the challenge of designing robots that act in accordance with context-specific social norms becomes increasingly pronounced. Researchers in human-robot interaction and roboethics have worked on this issue for the past few decades, and while progress has been made, there is an urgent need to address the ethical implications of service robots in practice. In an attempt to take a more solution-focused approach to these challenges, we are happy to announce a brand-new competition.

I took part in the first panel at the BSI conference The Digital World: Artificial Intelligence. The subject of the panel was AI governance and ethics. Emma (my co-panelist) and I each gave short opening presentations prior to the Q&A. The title of my talk was "Why is Ethical Governance in AI so Hard?", something I’ve thought about a lot in recent months.

The European Commission has published a report by an independent group of experts on Ethics of Connected and Automated Vehicles (CAVs). This report advises on specific ethical issues raised by driverless mobility for road transport. The report aims to promote a safe and responsible transition to connected and automated vehicles by supporting stakeholders in the systematic inclusion of ethical considerations in the development and regulation of CAVs.

by   -   March 8, 2021

Traditional computer scientists and engineers are trained to develop solutions for specific needs, but aren’t always trained to consider their broader implications. Each new technology generation, and particularly the rise of artificial intelligence, leads to new kinds of systems, new ways of creating tools, and new forms of data, for which norms, rules, and laws frequently have yet to catch up. The kinds of impact that such innovations have in the world have often not been apparent until many years later.

by   -   February 20, 2021

My coding project is to start building an ethical black box (EBB) or, to be more accurate, a module that will allow a software EBB to be incorporated into a robot. Conceptually, the EBB is very simple: it is a data logger, the robot equivalent of an aircraft Flight Data Recorder or an automotive Event Data Recorder.
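The data-logger idea can be sketched in a few lines. This is a minimal illustration only, not the author's actual EBB module; the class and field names here are hypothetical, and a real EBB would specify exactly which sensor, actuator, and decision data must be recorded.

```python
import json
import time
from collections import deque


class EthicalBlackBox:
    """Minimal sketch of an ethical black box (EBB): a bounded,
    rolling data logger, analogous to an aircraft flight data
    recorder that overwrites its oldest entries."""

    def __init__(self, max_records=1000):
        # A bounded buffer: once full, the oldest records are
        # discarded automatically, like a looped recorder.
        self.records = deque(maxlen=max_records)

    def log(self, sensors, actuators, decision):
        # One timestamped record per control cycle.
        self.records.append({
            "timestamp": time.time(),
            "sensors": sensors,      # e.g. range readings, battery level
            "actuators": actuators,  # e.g. commanded wheel speeds
            "decision": decision,    # which behaviour the robot selected
        })

    def dump(self):
        # Serialise the whole buffer for post-incident inspection.
        return json.dumps(list(self.records), indent=2)


# Usage: log five cycles into a buffer that keeps only the last three.
ebb = EthicalBlackBox(max_records=3)
for step in range(5):
    ebb.log({"range_cm": 100 - step}, {"speed": 0.2}, "advance")
print(len(ebb.records))  # prints 3: only the most recent records survive
```

The bounded buffer is the key design choice: like its aviation counterpart, an EBB need only preserve the window of data leading up to an incident, not the robot's entire operational history.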

As the field of robotics matures, our community must grapple with the multifaceted impact of our research; in this article, we describe two previous workshops hosting robotics debates and advocate for formal debates to become an integral, standalone part of major international conferences, whether as a plenary session or as a parallel conference track.

A few weeks ago I gave a short paper at the excellent International Conference on Robot Ethics and Standards (ICRES 2020), outlining a case study in Ethical Risk Assessment – see our paper here. Our chosen case study is a robot teddy bear, inspired by one of my favourite movie robots: Teddy, in A. I. Artificial Intelligence.

interview by   -   December 9, 2019

From Robert the Robot, 1950s toy ad

In this episode, we take a closer look at the effect of novelty in human-robot interaction. Novelty is the quality of being new or unusual.

The typical view is that while something is new, or “a novelty”, it will initially make us behave differently than we would normally. But over time, as the novelty wears off, we will likely return to our regular behaviors. For example, a new robot may cause a person to behave differently initially, as it’s introduced into the person’s life, but after some time the robot won’t be as exciting, novel and motivating, and the person might return to their previous behavioral patterns, interacting less with the robot.

To find out more about the concept of novelty in human-robot interactions, our interviewer Audrow caught up with Catharina Vesterager Smedegaard, a PhD student at Aarhus University in Denmark, whose field of study is philosophy.

Catharina sees novelty differently to how we typically see it. She thinks of it as projecting what we don’t know onto what we already know, which has implications for how human-robot interactions are designed and researched. She also speaks about her experience in philosophy more generally, and gives us advice on philosophical thinking.

interview by   -   March 19, 2018



In this episode, Audrow Nash speaks with Maja Matarić, a professor at the University of Southern California and the Chief Science Officer of Embodied, about socially assistive robotics. Socially assistive robotics aims to endow robots with the ability to help people through individual non-contact assistance in convalescence, rehabilitation, training, and education. For example, a robot could help a child on the autism spectrum to connect to more neurotypical children and could help to motivate a stroke victim to follow their exercise routine for rehabilitation (see the videos below). In this interview, Matarić discusses the care gap in health care, how her work leverages research in psychology to make robots engaging, and opportunities in socially assistive robotics for entrepreneurship.

We are only in the earliest stages of so-called algorithmic regulation – intelligent machines deploying big data, machine learning and artificial intelligence (AI) to regulate human behaviour and enforce laws – but it already has profound implications for the relationship between private citizens and the state.

By Christoph Salge, Marie Curie Global Fellow, University of Hertfordshire

How do you stop a robot from hurting people? Many existing robots, such as those assembling cars in factories, shut down immediately when a human comes near. But this quick fix wouldn’t work for something like a self-driving car that might have to move to avoid a collision, or a care robot that might need to catch an old person if they fall. With robots set to become our servants, companions and co-workers, we need to deal with the increasingly complex situations this will create and the ethical and safety questions this will raise.
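The contrast drawn here can be made concrete with a toy example. The policies below are purely illustrative (the function names, thresholds, and context labels are invented for this sketch, not taken from any real robot controller): a blanket "stop when a human is near" rule works for a caged factory arm, but for a self-driving car or a care robot the safe action depends on context, and freezing can itself cause harm.

```python
def naive_policy(human_distance_m: float) -> str:
    # Factory-arm style quick fix: freeze whenever a person is close.
    return "stop" if human_distance_m < 2.0 else "continue"


def context_policy(human_distance_m: float, context: str) -> str:
    # For a self-driving car or a care robot, stopping is not always
    # the safe choice; the right action depends on the situation.
    if context == "collision_imminent":
        return "swerve"          # braking in lane may not avoid the crash
    if context == "person_falling":
        return "move_to_catch"   # freezing guarantees the fall isn't broken
    # In ordinary situations, fall back to the conservative rule.
    return naive_policy(human_distance_m)


print(naive_policy(1.0))                      # prints "stop"
print(context_policy(1.0, "person_falling"))  # prints "move_to_catch"
```

Even this toy version shows why the problem is hard: someone must decide, in advance, which contexts override the conservative default, and that is an ethical judgement as much as an engineering one.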

A subject plays a computer game as part of a neural security experiment at the University of Washington.
Patrick Bennett, CC BY-ND

By Eran Klein, University of Washington and Katherine Pratt, University of Washington


In the 1995 film “Batman Forever,” the Riddler used 3-D television to secretly access viewers’ most personal thoughts in his hunt for Batman’s true identity. By 2011, the metrics company Nielsen had acquired Neurofocus and had created a “consumer neuroscience” division that uses integrated conscious and unconscious data to track customer decision-making habits. What was once a nefarious scheme in a Hollywood blockbuster seems poised to become a reality.

by   -   February 21, 2017

Current legal AI systems do not think like human lawyers. But, as their capabilities improve, the temptation grows to use such systems not only to supplement but to eliminate the need for some personnel. Ron Yu examines how this might affect the legal profession and the future development of legal AI.

Image: IEEE

On 15 November 2016, the IEEE’s AI and Ethics Summit posed the question: “Who does the thinking?” In a series of keynote speeches and lively panel discussions, leading technologists, legal thinkers, philosophers, social scientists, manufacturers and policy makers considered such issues as:

  • The social, technological and philosophical questions orbiting AI.
  • Proposals to program machines with ethical algorithms that embody human values.
  • The social implications of the applications of AI.

With machine intelligence emerging as an essential tool in many aspects of modern life, Alan Winfield discusses autonomous systems, safety and regulation.




