Will your next doctor be an app? A cost-cutting NHS wants more patients to act as “self-carers,” with some technologized assistance. A series of flowcharts and phone trees might tell parents whose children have chicken pox how best to care for them—no visits to surgeries required. Or a mole-checking app might tell a worrywart when a given skin discoloration looks harmless, and when to go to a dermatologist, by comparing it to thousands of images in a database.
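A flowchart-style triage of the kind described can be sketched as a small decision tree. The questions and the advice strings below are invented for illustration only; they are assumptions for the sketch, not clinical guidance.

```python
# Illustrative sketch of a chicken-pox triage flowchart.
# The questions and thresholds are hypothetical, NOT medical guidance.

def triage(answers):
    """Walk a small decision tree over yes/no answers and return advice."""
    if answers.get("fever_over_39c") or answers.get("trouble_breathing"):
        return "see a doctor"
    if answers.get("rash_spreading_to_eyes"):
        return "see a doctor"
    if answers.get("child_under_1_month"):
        return "see a doctor"
    return "self-care at home"

print(triage({"fever_over_39c": False, "rash_spreading_to_eyes": False}))
# self-care at home
```

The point of the essay stands in miniature here: every branch encodes a judgment someone had to make, and a wrong threshold propagates silently to every user.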
Cost-cutters in the legal field also promise an algorithmically cheapened future. Tax software simplifies the process of filing by walking the filer through a series of questions. Documents that might have taken human attorneys months to read can be scanned for keywords in a matter of seconds. Predictive policing promises to deploy force with surgical precision.
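The keyword-scanning task can be sketched in a few lines. This is a deliberately naive illustration, with hypothetical filenames and keywords; real "predictive coding" systems use statistical models rather than literal string matching.

```python
# Minimal sketch of keyword scanning over a document set.
# Filenames, contents, and keywords are hypothetical examples.

KEYWORDS = {"indemnify", "breach", "termination"}

def scan(documents):
    """Return, per document, the set of keywords it mentions."""
    hits = {}
    for name, text in documents.items():
        found = {kw for kw in KEYWORDS if kw in text.lower()}
        if found:
            hits[name] = found
    return hits

docs = {
    "contract_a.txt": "Termination of this agreement requires notice. "
                      "Breach of clause 4 voids the warranty.",
    "memo_b.txt": "Lunch schedule for the week.",
}
print(scan(docs))
```

The speed is real, but so is the reductionism the essay warns about: a scan like this finds strings, not meaning, which is exactly why professional review of inputs and outputs still matters.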
All these initiatives have some promise, and may make health care and legal advice more accessible. But they are also prone to errors, biases, and predictable malfunctions. Last year, the US Federal Trade Commission settled lawsuits against firms that claimed their software could aid in the detection of skin cancer by evaluating photographs of the user’s moles. The FTC argued that there was not sufficient evidence to support such claims. The companies are now prohibited from making any “health or disease claims” about the impact of the apps on the health of users unless they provide “reliable scientific evidence” grounded in clinical tests. If algorithms designed merely to inform patients aren’t ready for prime time, why presume diagnostic robots are imminent?
Legal automation has also faced some serious critiques lately. The University of North Carolina legal scholar Dana Remus has questioned the value and legitimacy of the “predictive coding” now deployed in many discovery proceedings. She and co-author Frank S. Levy (of MIT) raise serious questions about more advanced applications of legal automation as well. The future cannot be completely anticipated in contracts, nor can difficult judgment calls be perfectly encoded into the often-reductionist formulae of data processing. Errant divorce software may have recently caused thousands of errors in the UK, just as US software systems have disrupted or derailed the proper disposition of benefits applications.
Moreover, several types of opacity impede public understanding of algorithmic ranking and rating processes in even more familiar contexts, like credit scoring or search rankings. Consumers do not understand all the implications of the US credit scoring process, and things are about to get worse as “alternative” or “fringe” data moves into the lending mix for some startups. If the consequences of being late on a bill are not readily apparent to consumers, how can they hope to grasp new scoring systems that draw on their social media postings, location data, and hundreds of other data points? At the level of companies, many firms do not feel that Google, Facebook, and Amazon are playing a fair game in their algorithmic rankings of websites, ads, and products. Efforts to address these concerns, too, are stymied by the widespread secrecy of both the algorithms and the data fed into them.
In response, legal scholars have focused on remediable legal secrecy (curbing trade secrets and improving monitoring by watchdogs) and complexity (forbidding certain contractual arrangements when they become so complicated that regulators or citizens cannot understand their impact). I have recommended certain forms of transparency for software—for example, permitting experts to inspect both the code at suspect firms and the communications between managers and technical staff. The recent Volkswagen scandal served as yet another confirmation of the need for regulators to understand code.
But there is a larger lesson in these failures of algorithmic ordering. Rather than trying to replace the professions with robots and software, we should instead ask how professional expertise can better guide the implementation of algorithmic decision-making procedures. Ideally, doctors using software in medical settings should be able to inspect the inputs (data) that go into them, restrict the contexts in which they are used, and demand outputs that avoid disparate impacts. The same goes for attorneys, and other professionals now deploying algorithmic arrangements of information. We will be looking at “The Promise and Limits of Algorithmic Accountability in the Professions” at Yale Law School this Spring, and welcome further interventions to clarify the complementarity between professional and computational expertise.
This post was originally published on the website of Nesta.
Should you always do what other people tell you to do? Clearly not. Everyone knows that. So should future robots always obey our commands? At first glance, you might think they should, simply because they are machines and that’s what they are designed to do. But then think of all the times you would not mindlessly carry out others’ instructions – and put robots into those situations.
Self-driving cars are already cruising the streets today. And while these cars will ultimately be safer and cleaner than their manual counterparts, they can’t avoid accidents altogether. How should the car be programmed if it encounters an unavoidable accident? Patrick Lin navigates the murky ethics of self-driving cars in this TED-Ed lecture.
A mouthwatering array of over 750 events has been taking place throughout Europe this week as the continent celebrates Robotics Week 2015. The festivities began with an eye-opening debate on “Robots and Society” in the UK city of Bristol on Tuesday, with experts versed in strategy, business, academia, law and policy. But, for many, the star of the show was Nao, in his guise as robot avatar.
In this episode, Audrow Nash interviews M. Bernardine Dias, Associate Research Professor at the Robotics Institute at Carnegie Mellon University, about TechBridgeWorld. TechBridgeWorld is an organization, founded by Dias, that develops technology to serve developing communities. This interview focuses on a device that helps the blind learn to write.
The Icelandic Institute of Intelligent Machines (IIIM) has become the first R&D centre in the world to adopt a policy that repudiates development of robotic technologies intended for military operations.
After years of alarmist comments from robo-ethicists, futurists, technologists, business leaders and pundits, including Elon Musk, Bill Gates and Stephen Hawking, two journalists bring some reality to the issue.
With life expectancy continuously increasing and the number of people aged 65+ on the rise, it is no wonder that many roboticists have been discussing the use of robots as companions or caregivers for the elderly. Here’s a reality check: the US Department of Health and Human Services’ Administration on Aging projected that by 2030, the number of seniors over 65 will be 72.1 million. That’s 19% of the population – a significant increase from the 13% it was in 2009 (39.6 million). In Japan, the country with the world’s oldest population (22.3% of the population is over 65), the problem of meeting the increased demand for caregiving has been labelled the “Japan elder crisis” and has triggered the launch of many projects to tackle the issue, including substituting or augmenting human caregivers with robotic ones.
In early 2012, I raised a variation of the classic thought experiment to argue that there is not always a single absolute right choice in the design of automated vehicles — and that engineers should not presume to always know it. While this remains true, the kind of expert comments that concerned me three years ago have since become more the exception than the norm. Now, to their credit, engineers at Google and within the automotive industry openly acknowledge that significant technical hurdles to fully automated vehicles remain and that such vehicles, when they do exist, will not be perfect.
A major new sci-fi movie, Automata, promises not only a feast for the eyes (see below for a clip from the film), but also an overdue opportunity to spotlight some of the ethical dilemmas arising from autonomous systems.
Hollywood has been quick off the mark to explore issues thrown up by robotics, in movies such as Simulation and Robot & Frank. Automata, by Spanish director Gabe Ibanez, promises to provide no small amount of food for thought and to throw a more immediate moral debate into the mix.
The film stars Antonio Banderas as an insurance agent living in the year 2044, where robots are now a common sight. To keep these metallic slaves under our control, there’s a law that expressly forbids them from modifying themselves – but nothing, it seems, can stop the rise of artificial intelligence.
I like Star Wars. I like technology. I like philosophy. I like teaching. I like the occasional meme. I like robots. I like a lot of things.
But if I were to give an accurate account of the things that define me, that truly make up my identity, I would focus on my liking philosophy, the way it informs my thoughts about technology and the world. I would probably describe some of the research I do. There’s a good chance I’d talk about one of the courses I teach. If the conversation went long enough, I just might mention something about Star Wars.
Human-robot interaction is a fascinating field of research in robotics. It also happens to be the field most closely related to many of the ethical concerns raised with regard to interactive robots. Should human-robot interaction (HRI) practitioners keep in mind things such as human dignity, psychological harm, and privacy? What about how robot design relates to racism and sexism?
We are moving closer to having driverless cars on roads everywhere, and naturally, people are starting to wonder what kinds of ethical challenges driverless cars will pose. One of those challenges is choosing how a driverless car should react when faced with an unavoidable crash scenario. Indeed, that topic has been featured in many of the major media outlets of late. Surprisingly little debate, however, has addressed who should decide how a driverless car should react in those scenarios. This who question is of critical importance if we are to design cars that are trustworthy and ethical.
Google completed a major step in its long and extensive self-driving cars project by presenting its first purpose-built autonomous car, which is designed from scratch for its role and is not a modified conventional Toyota.
The as-yet-unnamed car is very small (it looks smaller than a Smart) and can accommodate two people and some luggage. It’s probably electric, and its maximum speed is limited to 25mph (~40km/h). Its most striking characteristic is that it doesn’t have any controls — no steering wheel, accelerator, or brake pedals — so you ride it strictly as a passenger, which is probably a strange feeling, but according to Google’s video not an entirely unpleasant one.