Robohub.org
 

Robots and the two-edged blade of new technology


by Frank Tobe
29 November 2017




There’s a scare-tactic video going around on social media, and I wanted to weigh in on it—this particular video has gone from 500,000 views to almost 2 million in the past 10 days. As a matter of principle, I will not link to it. It presents a scary future in which killer robotic drones—controlled by any terrorist organization or government—run rampant.

The twin issues of killer robots and robots taking our jobs are the result of the two-edged blade of new technology, i.e., technologies that can be used for both good and evil. Should these new technologies be stopped entirely or regulated? Can they even be regulated? Once you see a video like this one, you doubt whether they can ever be controlled. It's fear-mongering media that waits irresponsibly long to disclose that it is fake.

Videos like this one—and there are many—are produced for multiple purposes. The issues often get lost in the drama of the message. They are the result of, or fueled by, headline-hungry news sources, social media personalities, and commercial and political strategists. This particular shock video—fake as it is—is promoting a longer, more balanced documentary and a non-profit organization dedicated to stopping autonomous killing machines. Yet there are also factual videos of the U.S. military's Perdix drones swarming just like the drones in the shock video. Worse still, the same technologists who teach future roboticists at MIT are developing those Perdix drones and their swarming capabilities.

My earlier career was in political strategy and I know something about the tactics of fear and manipulation—of raising doubts for manipulative purposes, as well as the real need for technologies to equalize the playing field. Again, the two-edged sword.

At the present time, we are under very real threat militarily and from the cyber world. We must invest in countering those threats and inventing new preventative weaponry. Non-militarily, jobs ARE under threat—particularly the dull, dirty and dangerous (DDD) ones easily replaced by robots and automation. In today’s global and competitive world, DDD jobs are being replaced because they are costly and inefficient. But they are also being replaced without too much consideration for those displaced.

It’s hard for me as an investor and observer (and in the past as a hands-on participant) to reconcile what I know about the state of robotics, automation and artificial intelligence today with the future use of those very same technologies.

I see the speed of change. For example, for many years Google has had thousands of coders building its self-driving system and compiling the relevant and necessary databases and models. Then along comes George Hotz and other super-coders who single-handedly write code that writes code to accomplish the same thing. Code that writes code is what Elon Musk and Stephen Hawking fear, yet it is inevitable and will soon be commonplace. Ray Kurzweil popularized the term for this trajectory, claiming that the 'singularity' will arrive by 2045, with an interim milestone in 2029 when AI achieves human-level intelligence. The exponential technological growth on which Kurzweil's forecasts are predicated is clearly evident in the Google/Hotz example.

Pundits and experts suggest that when machines become smarter than human beings, they’ll take over the world. Kurzweil doesn’t think so. He envisions the same technology that will make AIs more intelligent giving humans a boost as well. It’s back to the two-edged sword of good and evil.

In my case, as a responsible writer and editor covering robotics, automation and artificial intelligence, I think it's important to stay on topic, not to fan the flames of fear, and to present the positive side of the sword.




Frank Tobe is the owner and publisher of The Robot Report, and is also a panel member for Robohub's Robotics by Invitation series.





