By Meg Murphy. K. Daron Acemoglu, the Elizabeth and James Killian Professor of Economics at MIT, is a leading thinker on the labor market implications of artificial intelligence, robotics, automation, and new technologies. His innovative work challenges the way people think about how these technologies intersect with the world of work. In 2005, he won the John Bates Clark Medal, an honor shared by a number of Nobel Prize recipients and luminaries in the field of economics.
In the past decade, countries and regions around the globe have developed strategic roadmaps to guide investment in and development of robotic technology. Roadmaps from the US, South Korea, Japan and the EU have been in place for some years and have had time to mature and evolve. Meanwhile, roadmaps from other countries, such as Australia and Singapore, are just now being developed and launched. How did these strategic initiatives come to be? What do they hope to achieve? Have they been successful, and how do you measure success?
This blogpost is a round-up of the various sets of ethical principles for robotics and AI that have been proposed to date, ordered by date of first publication. The principles are presented here (in full or abridged) with notes and references but without commentary. If there are any (prominent) ones I’ve missed, please let me know.
The International Conference on Robot Ethics and Safety Standards (ICRESS-2017) took place in Lisbon, Portugal, from 20th to 21st October 2017. Maria Isabel Aldinhas Ferreira and João Silva Sequeira coordinated the conference with the aim of creating a vibrant multidisciplinary discussion around the pressing safety, ethical, legal and societal issues raised by the rapid introduction of robotic technology into many environments.
As AI surpasses human abilities in Go and poker – two decades after Deep Blue trounced chess grandmaster Garry Kasparov – it is seeping into our lives in ever more profound ways. It affects how we search the web and receive medical advice, and whether our banks grant us finance.
We are only in the earliest stages of so-called algorithmic regulation – intelligent machines deploying big data, machine learning and artificial intelligence (AI) to regulate human behaviour and enforce laws – but it already has profound implications for the relationship between private citizens and the state.
China has recently announced its long-term goal to become #1 in A.I. by 2030. It plans to grow its A.I. industry to over $22 billion by 2020, $59 billion by 2025 and $150 billion by 2030. China did this same type of long-term strategic planning for robotics – to build an in-country industry and to transform the country from a low-cost labor source into a high-tech manufacturing resource – and it’s working.
How do you stop a robot from hurting people? Many existing robots, such as those assembling cars in factories, shut down immediately when a human comes near. But this quick fix wouldn’t work for something like a self-driving car that might have to move to avoid a collision, or a care robot that might need to catch an old person if they fall. With robots set to become our servants, companions and co-workers, we need to deal with the increasingly complex situations this will create and the ethical and safety questions this will raise.
Join us at the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) for a full day workshop that will bring together international stakeholders in robotics to examine best practices for accelerating robotics innovation through strategic policy frameworks.
I’m examining the perception of autonomous cars using hypothetical scenarios. Each hypothetical scenario is accompanied by an image to help illustrate the scene – using grey tones and nondescript human-like features – along with the option to listen to the question spoken aloud, so that respondents can fully visualise the situation.
If you live in the UK, you can take this survey and help contribute to my research!
In the 1995 film “Batman Forever,” the Riddler used 3-D television to secretly access viewers’ most personal thoughts in his hunt for Batman’s true identity. By 2011, the metrics company Nielsen had acquired NeuroFocus and created a “consumer neuroscience” division that uses integrated conscious and unconscious data to track customer decision-making habits. What was once a nefarious scheme in a Hollywood blockbuster seems poised to become a reality.
The world’s brightest minds in Artificial Intelligence (AI) and humanitarian action will meet with industry leaders and academia at the AI for Good Global Summit, 7-9 June 2017, to discuss how AI will assist global efforts to address poverty, hunger, education, healthcare and the protection of our environment. In parallel, the event will explore means to ensure the safe, ethical development of AI, protecting against unintended consequences of advances in AI.
After a successful 2016 first edition, our next summer school cohort on The Regulation of Robotics in Europe: Legal, Ethical and Economic Implications will take place in Pisa at the Scuola Sant’Anna, from 3–8 July.