
Jason Millar teaches robot ethics in the philosophy department at Carleton University in Ottawa, Canada, though he is not himself a robot. His research investigates the interplay between technology design and the social implications of technologies including robotics, health technologies, and biotechnologies. His current research involves the development of design methods to help anticipate the social implications of technology. Designed with social implications in mind, technology becomes more trustworthy and ethical. Users and customers tend to like those features. Jason has authored book chapters, reports and articles on robot ethics, design ethics, privacy, and science and technology policy. As an engineer Jason designed processes and hardware for leading aerospace and telecommunications corporations. He enjoys working alongside engineers in multidisciplinary teams dedicated to improving design through ethical design frameworks.

Facebook’s algorithms are making decisions about what kind of person you appear to be to your friends.


Image credit: Craig Berry

We are moving closer to having driverless cars on roads everywhere, and naturally, people are starting to wonder what kinds of ethical challenges driverless cars will pose. One of those challenges is choosing how a driverless car should react when faced with an unavoidable crash scenario. Indeed, that topic has been featured in many major media outlets of late. Surprisingly little of the debate, however, has addressed who should decide how a driverless car reacts in those scenarios. This “who” question is of critical importance if we are to design cars that are trustworthy and ethical.