Alex Leveringhaus, author of a recent Oxford Martin School policy paper titled Robo-Wars: The Regulation of Robotic Weapons, discusses the ethics of autonomous weapons, urges governments to recognise the increasing prominence of these weapons in contemporary and future forms of warfare, and proposes steps towards suitable regulation.
October 14, 2014 12:00PM EST Featuring Randal O’Toole, Senior Fellow, Cato Institute; Marc Scribner, Research Fellow, Competitive Enterprise Institute; and Adam Thierer, Senior Research Fellow, Mercatus Center; moderated by Matthew Feeney, Policy Analyst, Cato Institute.
On June 10th, the FAA issued a press release announcing their approval of the first commercial UAS flight over land. According to Transportation Secretary Anthony Foxx, this represented an “important step toward broader commercial use of unmanned aircraft” in the US. What do experts inside the drone community think of this development?
We are moving closer to having driverless cars on roads everywhere, and naturally, people are starting to wonder what kinds of ethical challenges driverless cars will pose. One of those challenges is choosing how a driverless car should react when faced with an unavoidable crash scenario. Indeed, that topic has been featured in many of the major media outlets of late. Surprisingly little debate, however, has addressed who should decide how a driverless car should react in those scenarios. This "who" question is of critical importance if we are to design cars that are trustworthy and ethical.
Posting on the Slate blog Future Tense, James Bessen takes issue with the notion that technology causes unemployment, illustrating his point by debunking a pair of frequently cited examples: textile workers in the early nineteenth century and telephone operators in the mid-twentieth.
Amidst a climate of fiscal austerity and vibrant debates over the growing importance of unmanned vehicles in foreign policy and homeland security, the 2013 AUVSI Unmanned Systems Conference returned to Washington, D.C., last week after hosting the 2012 event in Las Vegas. The event was not without controversy, however, as activist group Code Pink held a demonstration outside the venue and disrupted a keynote address. The show itself was a tale of two storylines: the exhibit hall demonstrated that applications for defense and law enforcement are still the lifeblood of the unmanned systems industry, while the technical program and panel discussions pointed to a growing interest in moving into commercial markets. Here's what you missed:
Policy is fundamentally about long-term thinking, something we ought to do but, for various reasons, rarely manage. Though China is a notable exception, very few governments make long-term planning a priority.
Corporations are more disciplined and less beholden to conflicting interests than governments, so long-term planning is a regular part of their management practice. But corporations have neither ethics nor loyalties, and they often do marginally (if not outright) immoral things, preserving the profitability of the company over the welfare of the community and workforce.
Economic policy may not jump to mind as a hot topic for roboticists, but it is a fundamental and influential driver behind the failure or success of the robotics community as a whole. After all, economic policy is what’s behind how governments set their interest rates, determine their budgets, enforce their rules for the labour market and deal with questions of national ownership.
This month we asked Robotics by Invitation panel members Rich Mahoney and Frank Tobe for their take on what policy-makers need to do to keep economic development apace with important developments in robotics. Here’s what they have to say …
I am not sure how to describe the specifics of what policy makers should do, but I think there are two gaps that policy makers should think about that are associated with the economic development impact of robotics: sufficient funding to support an emerging robotics marketplace; and detailed descriptions of the innovations needed to solve specific problems …
[RBI Editors] As an active robotics investor, a leading authority on the business of robotics, and the author of The Robot Report and Everything Robotic, you are at the pulse of the field’s economic development. In a nutshell, what’s happening in robotics today?
I think the biggest thing happening today is the acceptance of the low-cost Baxter and Universal robots into SMEs and small factories everywhere. Sales will likely be 2% of the total this year, 5% in 2014, and possibly 15% in 2015. That's growth! And that's before the big four robot makers start selling their low-cost entry robots for SMEs. This has more near-term promise than unmanned aerial or ground vehicles in agriculture and elsewhere. These co-robots are proving that we need more high-tech people and fewer low-skilled people in this globally competitive economy.
On April 8-9, Stanford Law School held the second annual robotics and law conference, We Robot. This year’s event focused on near-term policy issues in robotics and featured panels and papers by scholars, practitioners, and engineers on topics like intellectual property, tort liability, legal ethics, and privacy. The full program is here.
This is the first of our posts recapping the event. Check back this week for more coverage!
Seeing the two words together is enough to conjure up images of chaos and destruction, images all too familiar from the science fiction of Isaac Asimov and Arthur C. Clarke. It's also a concept that many AI researchers will gladly tell you they've been confronted with at least once by friends or colleagues. But how much of a real ethical concern does it pose for society?