
Frank Tobe on “What were the highlights at IROS/iREX this year?”

November 13, 2013

Two images remain in my mind from IROS 2013 last week in Tokyo. The respect for Professor Emeritus Mori and his charting of the uncanny valley in relation to robotics, and the need for a Watson-type synthesis of all the robotics-related scientific papers produced every year.

Let me explain.

Almost all of the presentations at IROS were abstract and technical except for the discussion about Prof. Mori’s Uncanny Valley theory. First of all, he was there and described how he came to observe the uncanny valley under different situations and circumstances. Secondly, all of the presenters and audience were respectful of Prof. Mori’s work, his theory, and him as a person. Third, and most interesting to me, each of the other speakers in this special lecture session described how the uncanny valley theory was relevant in different settings and disciplines. In art, philosophy, psychology — in the works of David Hanson and Hiroshi Ishiguro (both of whom were there) — as well as in medicine, prosthetics and in robotics in general. To me it was a reminder that robotics crosses sciences and connects with humans in many different forms, and this tribute presentation at IROS brought the personal relationships and the breadth of their reach to the forefront, and away from the abstract, theoretical and mechanical side of IROS.

In this video by IEEE Spectrum, filmed outside the door of the room where the session was held, one can clearly see the multi-science and psychological/philosophical aspects of the theory:

Worse, 90% of scientists don’t even know whether their research is “new” or not.

Ever since I learned of the IBM Watson Jeopardy project my mind has been fascinated with possibilities for practical applications. IBM is on that trail as well and is using Watson to help with medical diagnoses and legal research and briefing. My idea is to get the NSF and IEEE (and other organizations) to commission a Watson project to synthesize robotics and AI-related science papers into a meaningful resource for all to use. At present, there are so many papers published that a researcher cannot possibly read them all. Consequently we don’t even know what we already know. But with Watson, we could know — and we could redirect research activities truly into the unknown without reinventing things over and over.
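To make the “we don’t even know what we already know” problem concrete, here is a toy sketch of a novelty check over paper abstracts. This is only an illustration of the idea, not any real Watson, NSF, or IEEE system; the corpus, the titles, and the word-overlap scoring are all invented for the example, and a real synthesis tool would need far more than word overlap.

```python
# Toy "do we already know this?" check: compare a new abstract against a
# corpus of prior abstracts using Jaccard similarity over word sets.
# Everything here (corpus, titles, threshold-free scoring) is illustrative.

def word_set(text):
    """Lowercase a text and return its set of words."""
    return set(text.lower().split())

def jaccard(a, b):
    """Jaccard similarity of two word sets (0.0 = disjoint, 1.0 = identical)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def most_similar(new_abstract, corpus):
    """Return (title, score) of the closest prior abstract in the corpus."""
    scored = [(title, jaccard(word_set(new_abstract), word_set(text)))
              for title, text in corpus.items()]
    return max(scored, key=lambda pair: pair[1])

corpus = {
    "Grasping with compliant fingers":
        "we present a compliant gripper for grasping unknown objects",
    "SLAM for indoor drones":
        "a lightweight slam system for indoor drone navigation",
}

title, score = most_similar(
    "we present a compliant gripper design for grasping novel unknown objects",
    corpus)
print(title, round(score, 2))
```

A high score against an existing paper is exactly the signal a researcher would want before starting down a path someone has already walked.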

Read more answers →

Frank Tobe is the owner and publisher of The Robot Report, and is also a panel member for Robohub's Robotics by Invitation series.


We pose timely questions of importance to the robotics community and ask our panel of experts to answer.


    by   -   March 14, 2014

    The short answer is: a lot of patience and perseverance!

    More seriously, this is one of the most frustrating aspects of entrepreneurship. High-tech entrepreneurs are supposed to be innovators, but, even more, they are supposed to be visionaries. They have to see the value that a certain technology can bring into the market.

    The naïve approach when one gets to a technological breakthrough is to think that a new product will succeed because it is better. In the head of the entrepreneur, the added value, the competitive advantages, and the future use of the product are already clear.

    by   -   March 12, 2014

    This is a really important question, and one that our community should focus on more. That said, the answer is not truly profound or particularly obscure. It takes three things: doing something people really want, doing something profitable, and a lot of hard work.

    by   -   March 12, 2014

    The rise of online crowdfunding platforms over the last decade has created a whole new pathway for some robot startups. In the process, crowdfunding campaigns have helped to catapult hardware and robots into the public eye, captivating our imaginations in the process. Quite simply, crowdfunding is a form of entertainment just as much as it is a form of fundraising. And learning how to tell your story to others is a critical part of turning your idea or project into a product.

    by   -   March 12, 2014

    There are two parts to this process: the invisible and visible.

    Rodin once said that sculpture is an art dedicated to holes. What he meant is that great work is invisible: if you are building a technology, or a company, or a product, if it is truly good, then most of the work will not be seen. It’s the stuff that, like dark matter, holds everything else together. You have to dedicate yourself to that.

    by   -   February 12, 2014

    Judging by the levels of media coverage and frenzied speculation that have followed each acquisition, the short answer to “what does it mean?” is: endless press exposure. I almost wrote ‘priceless exposure’, but these are companies with very deep pockets; nevertheless, the advertising value equivalent must be very high indeed. The coverage illustrates the fact that these companies have achieved celebrity status.

    by   -   February 12, 2014

    Google is the wild card for me. With more acquisitions (DeepMind, Boston Dynamics, Redwood Robotics, Industrial Perception, Meka, Schaft, and others) than Apple, Amazon, Facebook, and Microsoft combined, the GOOG looks to be rigging up a kit that would offer excellent image recognition + navigation + mobility.

    by   -   February 12, 2014

    We have reasons to feel both excited and uneasy about giant corporations’ investment in robotics.

    It’s exciting for the robotics community that the giants (Google, Apple, and Amazon) are actively investing in robotics.

    by   -   January 15, 2014

    There are two kinds of cyborgs – those that have broken the skin, and those that have not. Iron Man comes to mind as a cyborg of the second category, in that he can remove his enhancement (save for that pacemaker, of course). Being able to fly would be great, but we have planes. A hardshell carapace would be fun if I was into doing things like running into walls and falling from buildings. Though I have little super-hero ambition I do think there’s something that Iron Man has that I’d like, and that’s J.A.R.V.I.S., Tony Stark’s A.I. assistant.

    I’d like a personal gentleman’s gentleman, if you will, someone that is there to both advise and help. A Sancho Panza, a Samwise Gamgee, a Dr. Gonzo, or a Dean Moriarty. A Ron Weasley or a Huckleberry Finn. A real companion to help me through life.

    Though I have not spent more than a few minutes with it, Marvel did build an app that is intended to do just this. Someone had the right idea, but it is a thin semblance of what we need. Unfortunately, what Marvel missed was what makes J.A.R.V.I.S. so intelligent: his street smarts, his worldly knowledge and his personality.

    J.A.R.V.I.S. is based on Reginald Jeeves, the fictional valet of Bertie Wooster, from the writing of P. G. Wodehouse (1881–1975). Jeeves offered Bertie advice, assisted him with daily operations, helped him keep track of things, run systems, and do it via natural language. Jeeves was someone that enhanced Bertie’s knowledge, understanding, amplified his perception and wisdom and even fixed him the occasional hangover cure. So I’d like a Jeeves – an advisor of the most intimate sort that’s there as a consultant, teacher, confidante, and companion. Especially for the morning of January 1st, when I suspect I’ll have a bit of a hangover. He would, after all, know exactly what I’d had to drink that night, and would have probably been the one that had called the cab for me to get home.

    Read more answers →

    by   -   January 15, 2014

    As a researcher in robotics, I tend to cringe whenever someone asks how long it will take until people start to see Terminator-like robots on the streets. It’s a fun question to think about, but it is often asked with all too much seriousness, as though a world with Terminators is the inevitable future that lies ahead of us.

    But when I was asked this month’s Robotics by Invitation question, I gladly put on my imagination hat without much hesitation or cringing. Part of it might have something to do with the fact that no one will come after me and ask “so, when do you think that kind of technology will be available in the future?” So I felt very much free to let my imagination do what it does best.

    The first thing that crossed my mind was a vision Mr. John S. Canning of the Naval Surface Warfare Center Dahlgren Division had discussed many years ago (in 2009, I believe) in a talk he titled “A Concept of Operations for Armed Autonomous Systems”. After thirty-something PowerPoint slides, he summarized the talk with “Let the machines target machines – not people”. I think it’s a cool notion to think about building robots that are not built as ultimate killing machines, but as the ultimate weapon-neutralizing machines. Imagine if, instead of targeted killing of humans, you could send robots for targeted neutralization of weapons.

    After coming across that summary, I remember thinking how useful it would be if I had an expandable, hidden robotic device implanted on my forearm, such that when I (if ever) need to go neutralize someone’s weapon, or protect myself from someone attacking me (for whatever reason), the device will automatically activate, expand into a bullet-proof shield, and help me detect dangerous weapons in the area to neutralize. If it comes with a mini jet-pack that allows me to fly, that’s even better. I’d be the ultimate superwoman whose day-job is to do research in robotics, but with a side job to fly to random places and help out with conflict situations. Ok, that sounds like a plot from a comic book.

    Some of you might think I sound like I’m dreaming of being a female version of Iron Man. But I am thinking of something more subtle (at least while the device isn’t activated), like Inspector Gadget (for those of you who don’t know him, Inspector Gadget was a cartoon character who could hide all of his cyborg gadgetry inside his trench coat). I would look just like a normal person, except that, when necessary, my ‘implanted devices’ would activate to serve whatever purposes I need.

    That’s only if you are asking me about implants. But if you are asking me about robotic accessories, then that’s a whole different story. Wouldn’t it be amazing if there was a foldable and light pocket-sized device that you could carry with you while travelling (or grocery shopping), so that when you don’t want to carry heavy things, you could just activate it, and it would become a full sized stair-climber and a follow-bot? It would have come in very handy if I had such a device during my trip to Europe, hopping between trains and planes with my luggage. I don’t think I’d use anything bigger or heavier than my purse for this purpose, because that defeats the purpose.

    Anyone have one of these available for testing yet?

    Read more answers →

    by   -   January 15, 2014

    The potential of robotic implants is limitless, but I am not interested in super-human powers. Instead, I’d be happy with human powers, and in particular the ability to remember. Growing up, I would read a used book and then sell it back to the store with the mistaken notion that I had copied the book into “the vault” of my mind. That was true while I was hot out of the gate, but after a while, well, I don’t remember forgetting half of what I read and learned.

    And that’s why I would consider adopting a type of neural implant called a “memory prosthetic.” Future versions of these implants could improve short-term memory retention and also help with the transfer of short-term memory to long-term. Implanted, my experiences could finally be locked away safely in my brain, instead of being allowed to dribble slowly out of my ears over the decades. In thirty years I don’t want to have hard drives teeming with photos of forgotten trips, or scrapbooks stuffed with my kids’ childhoods, or your name hovering on the tip of my tongue. All I want, you see, is what’s mine.

    Read more answers →

    by   -   December 11, 2013

    ‘David and Goliath’ was the most exciting story in robotics this year. 2013 has seen huge companies showing an interest in robotics, starting with Apple launching Anki live on stage at their global developer forum in June; followed by strategics like Flextronics and Samsung opening seed funds and accelerators; law firms and banks announcing robotics departments; and just about everybody including Amazon announcing drone delivery.

    Probably the biggest Goliath of them all is Google’s Andy Rubin acquiring seven top robotics companies for a secret new complex in Palo Alto, with some more acquisitions still to be revealed. Boston Dynamics coming to Silicon Valley is one of the popular suggestions, but neither Boston Dynamics nor Google has responded publicly.

    Goliath stories are great for stimulating interest in robotics and investment in robotics companies but if Google is buying them all, where can investors find innovative new robot startups?

    Fortunately, there were some David stories in 2013 too. Unbounded Robotics is the best example I know of a small, fast-moving team doing big things from a short runway. Unbounded is a team of ex-Willow Garage engineers who have turned their experience building both the PR2 and TurtleBots into the UBR, a new mobile manipulation platform capable of more than a PR2 but at a fraction of the price. Willow Garage has transitioned from one big research lab into many small startups. Here’s hoping that for every Goliath, there are many more Davids.

    Read more answers →

    by   -   December 11, 2013

    As Editor and Publisher of the Robot Report, I follow robotics news closely, especially in business and finance. Here are my top picks for 2013:

    by   -   December 11, 2013

    2013 was a year filled with talk of drones.

    I’m not saying this just because I’m biased by the recent news reporting on how large companies (Amazon, DHL, and UPS, to be exact) are exploring the use of drones as a new delivery mechanism. If this is news to you, don’t worry. The robotics community came across this only a couple of weeks ago.

    by   -   November 13, 2013

    I found the plenary speeches at IROS to be especially interesting. Marc Raibert gave an entertaining talk on the robots being developed at Boston Dynamics. It’s encouraging to see that robots are becoming more and more robust, even for very challenging domains. Marc emphasized his company philosophy of pushing robots until they break, and then learning from those breakdowns to improve robot performance and reliability. Learning from failure is often overlooked in robotics, but is critically important for achieving usable systems. It’s also a good life lesson!

    Masayuki Yamato gave an inspiring talk on transplantable cell sheets, and how they can help speed the recovery from many different diseases and surgeries. He showed, complete with surgery videos (not for the faint of heart!), how his cell sheet therapy technology can address many medical problems in the eye, heart, esophagus, etc.  While robotics isn’t a main part of the research, it is clear that robotics is an important tool for enabling these clinical applications that can change people’s lives.

    In the third plenary, Tim Lüth challenged the audience to not automate for the sake of automation, but to show how automation can improve the outcome in people’s lives. He showed a variety of successful devices that his team has developed for medical applications, and made a compelling argument that new technologies can be more readily accepted if they are quickly designed and close in nature to the non-automated medical approaches. He argued that simpler robots that are custom designed and manufactured for a specific patient and/or procedure might revolutionize medicine in the future.

    One last highlight to mention from IROS is the iREX robot exhibition. The sheer number of industrial and automation robots on display was so impressive. The robots were fast, precise, and made excellent use of advanced vision technologies. And that massive FANUC arm with a 1300+ kg payload capacity was a sight to behold! It puts a whole new twist on the issue of robot safety!

    Read more answers →


    by   -   November 13, 2013

    For me, the highlight of IROS was the Uncanny Valley special session, although the sheer size of the IROS conference and the parallel iREX industrial and service robot expo also gave much food for thought. In particular, the new co-working robots from Kawada [video] and ABB look very interesting, but it’s clear that it still takes a long time for research to transition into robust applied robotics.

    by   -   October 9, 2013

    The premise of this question is that robotics companies are manufacturers and that there is a choice between an open source and a closed source business model. Robotics companies are best thought of as service companies (even manufacturers, especially when moving beyond early adopters), and openness is not an ‘either/or’ choice, but rather a continuum. In this day and age the question is: ‘What do you need to keep open to create value for your customers?’

    My company, TerrAvion, has been building a data delivery system for our robotics system on Amazon Web Services (AWS).  AWS (question for another day: is AWS a cloud-services robot?) is an excellent example of how to think about handling openness.  The platform is very open: the essential premise is that the customer can build any kind of web application on AWS they can imagine and code, while never having to buy or run a physical server.  Almost everything that can be touched by a customer is open, and there is a ton of reusable open source code and tooling freely available to developers on AWS.

    However, not everything is open.  We collect almost a terabyte of data a day when we run our system, so we store a lot of idle data in a sub-service of AWS, called Glacier, which is one of the cheapest ways to store data in the cloud, but it has long retrieval times.  Amazon publishes a lot of information about Glacier: speed, redundancy, expected loss over time, retrieval procedures — but nobody knows how Amazon really does it.  A magnetic tape storage solution is the most common belief, but there is also speculation that they use retired hard drives that are generally powered off.  Not to mention there are a bunch of back-end code and human procedures that remain secret.
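Some rough arithmetic shows why a cold tier like Glacier matters at that data rate. The per-gigabyte prices below are illustrative assumptions in the spirit of 2013-era published rates, not a statement of actual AWS pricing:

```python
# Back-of-envelope sketch of why idle data goes to a cold tier like Glacier.
# Prices are illustrative assumptions, not current AWS rates; the ~1 TB/day
# figure comes from the text above.

TB = 1024  # GB per TB

def monthly_storage_cost(gb_stored, price_per_gb_month):
    """Monthly bill for a given volume at a given per-GB-month rate."""
    return gb_stored * price_per_gb_month

# Assume ~1 TB collected per day, accumulated over a 30-day month.
gb_after_month = 30 * 1 * TB

hot_cost = monthly_storage_cost(gb_after_month, 0.095)   # assumed hot-tier rate
cold_cost = monthly_storage_cost(gb_after_month, 0.01)   # assumed cold-tier rate

print(f"hot tier:  ${hot_cost:,.0f}/month")
print(f"cold tier: ${cold_cost:,.0f}/month")
print(f"savings:   {1 - cold_cost / hot_cost:.0%}")
```

At those assumed rates the cold tier cuts the storage bill by roughly ninety percent, which is the trade one accepts in exchange for Glacier’s long retrieval times.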

    In a sense, these tools could be open source.  I’d wager that whatever tool AWS uses to run Glacier runs on Linux in an open source language.  Some Glacier code may even be shared back with the community in bits and pieces.  It wouldn’t be crazy to think that the open software runs with some vendor-sourced proprietary components.  However, that whole side of the business is concealed from us.  All we know is that AWS’s combination of technology and organization allows them to store data at a price almost no one else can match.

    Amazon has achieved openness Nirvana.  They have a solution that functions as totally open from the customer’s perspective.  Customers completely understand how to use, functionally duplicate, hack, and interface with the AWS service.  However, at the same time Amazon manages to achieve secrecy and differentiation around their unique advantage, which is having the technology and processes for price leadership.

    If robotics is actually a service industry, Amazon’s approach points to the correct way of thinking about the problem of openness.  What openness will help your customers?  What is it that your company does that is differentiated?  What actually enables you to create and maintain that advantage?  Robotics can be mostly open source and extract the advantages of unique and differentiated intellectual property.  Try on this idea for your domain, and let me know what you think.

    Read more answers →

    by   -   October 9, 2013

    The IT economy has powerfully demonstrated what happens when companies can leverage open source infrastructure when they build new products and services.  A company like Google would never have come into existence had they not been able to rely from the beginning on solid open source tools like Python and GCC. IBM would arguably have not been able to make its immensely successful pivot from products to services without Linux.  How many startups these days begin as a cloud-hosted machine running some derivative of the venerable LAMP stack?  And increasingly the underlying cloud infrastructure itself is open.

    While arguing by analogy is fraught with peril, I believe that the similarities between robotics and the rest of the IT world are strong enough to justify it.  In robotics, we have many shared problems to solve when developing a product or service, from low-level drivers to high-level capabilities, and all the developer libraries and tools in between.  I have yet to see a successful robotics business for whom any of that stuff is the competitive advantage.  Rather, success comes from the innovative composition and application of that technology in a form that somebody will pay for.  The hard part is figuring out what the robot should *do*.  By working together on the common underlying problems, we end up with better, more reliable solutions, and we free ourselves to spend more time at the application level, which is where we can differentiate ourselves.

    In other words, I believe that open source is a great model for the robotics business as a whole.  Now, is it a good model for any individual company?  It certainly can be.  As examples, we see small-to-medium companies, such as Clearpath Robotics, Rethink Robotics, and Yujin Robot, which use ROS directly in their products. And we see larger companies, such as Bosch and Toyota, using ROS in R&D and prototyping efforts.  These are all profit-motivated companies making what is presumably a rational economic decision to rely on open source software.  They’re each holding something back that is their “special sauce,” whether that’s higher level application software, configuration data, customizations to the open source code, or the designs for the hardware.  And that’s expected: unless you’re in a pure consulting business (selling your time), then you need to own and control something that forms the basis of your product or service offering (to allow you to sell something other than your time).

    Fortunately, open source software is entirely compatible with such business models.  In fact, it was our hope to one day see such commercial users of ROS that led us to choose a permissive license (BSD, or Apache 2) for the code that we developed.  We’re now witnessing, with the debut of so many new robotics companies, the fruits of those earlier labors in building a shared development platform.

    Read more answers →

    by   -   October 9, 2013

    To be able to choose between proprietary software packages is to be able to choose your master. Freedom means not having a master. Freedom means not using proprietary software.

    – Richard Stallman, free software advocate

    Certainly robotics has its share of proprietary software and control systems. Each robot manufacturer markets their products based on the need for secure, proprietary and un-shared systems so that they can ensure stability and control. Whole industries have been set up to bridge those proprietary barriers so that multi-vendor solutions can happen.

    Two prominent people in the robotics industry had a discussion on the subject last year. In a spirited cocktail party debate in Lyon, France at InnoRobo 2012, an innovation forum and trade show for service robotics, Colin Angle and Robert Bauer argued their points of view.

    Left: Robert Bauer, Executive Director, Commercialization, Willow Garage. Right: Colin Angle, Chairman of the Board, co-founder and CEO, iRobot (NASDAQ:IRBT).

    Angle suggested that freely providing such a key and critical component as the robot operating and simulation system – and the extensive libraries that go with it – as the Open Source Robotics Foundation (previously Willow Garage) does with the open source Robot Operating System (ROS), is tantamount to letting the biggest consumer giants gobble up any mass-market applications and re-market them globally at low cost: they already have (or could easily reverse-engineer) the hardware and could produce it cheaply, the operating system is free courtesy of ROS, and the only real cost is the acquisition of the applications.

    Angle thought this was dangerous, arguing that it could mean losing a potential American/European market to offshore commodity conglomerates, and said:

    Robotics innovation represents a tremendous opportunity for economic growth akin to automobiles, aerospace and information technology. If we are to freely share our ‘intellectual capital’ on the open market we risk losing the economic engine that will advance our economies and send growth and jobs overseas.

    Cover of 3/19/2012 issue of Bloomberg Businessweek magazine.

    The issue of losing trade secrets to foreign conglomerates has been a continuing focus at Bloomberg Businessweek magazine:

    In November, 14 U.S. intelligence agencies issued a report describing a far-reaching industrial espionage campaign by Chinese spy agencies. This campaign has been in the works for years and targets a swath of industries: biotechnology, telecommunications, and nanotechnology, as well as clean energy. “It’s the greatest transfer of wealth in history,” said General Keith Alexander, director of the National Security Agency.

    Bauer said that Willow Garage’s objective with ROS was to stimulate the industry by enabling participants not to have to reinvent the many cross-science elements of robotics ventures, and to reuse software because it saves developer time and allows researchers to focus on research. By giving them free access to the tools, libraries and simulation capabilities of ROS, and access to the PR2s that are available for testing and experimentation, Willow Garage hoped to advance the state of the art in autonomous robotics technologies.

    Bauer also said that, once a successful app was developed, at that point the new endeavor would likely lock down the operating system and application software in order to protect their invention.

    Angle suggested that what the robotic industry needs for inspiration is successful robotics companies – profitable companies with millionaire employees selling in-demand products; not more notches on the oversized belts of big offshore conglomerates. Further, he said that unless ROS is protected and made stable and secure, it could never be used for sensitive (defense, space, security) solutions, and until it became rugged, secure and stable, it could never be used in factories, which cannot afford down time from either their robots or software.

    Since that time, solutions that bridge the open vs. shut debate are showing up in many sectors:

    • Willow Garage has transitioned ROS to two different non-profit foundations to continue development of ROS and ROS-Industrial: The Open Source Robotics Foundation and the
    • ROS-Industrial is a new effort to enable closed industrial systems to at least have a “front end” to make available the introduction of new sensors, make robot programing and simulation easier, and take advantage of the wealth of new talent exposed to ROS in academia.
    • Start-up companies selling co-robots are using ROS and beginning to share application software. Danish Universal Robots and Rod Brooks’ Rethink Robotics both use ROS for software development but not for control systems. Rethink Robotics plans to offer an SDK capability with an app store for robotics applications shared by other Baxter users sometime in 2014. The SDK is already available in the academic version of Baxter.
    • Industrial robot makers are beginning to provide ROS-like capabilities in the form of updated software and simulation suites; e.g., ABB Robotics recently introduced RobotStudio, a graphical interface to ABB’s proprietary internals for robot simulation and programming.

    Thus as the debate rages on, so too do the very pragmatic solutions that are necessary to make things move forward and work.

    The best solutions often involve multiple vendors. Look at the Tesla factory. Integrating their software and control systems into the larger manufacturing system, or even between different systems on a line, involves serious and talented programming — a process that everyone agrees needs to be simplified and made less costly.

    ROS-like products are fine for development and simulation, and because they are prevalent in most of academia, new hires are familiar with what they do and how they work. But on the job, those new hires are confronted with the complexities of proprietary software and teach pendants. I’ve heard it said that it’s like going back to the mainframe era of computing. At the least, it involves learning old-style coding languages.

    Most of the big robot manufacturers are beginning to make an effort to improve their training and programming methods, to get them onto more practical tablets, and to provide offline simulation. But the going is slow, hence the argument for open source rages on. The truth appears to be in the middle: older systems need to be updated and yet still retain their proprietary nature. Mix and match between vendors is a fact of life and needs to be accommodated either by the use of ROS-Industrial or by the robot manufacturers themselves in the form of a new set of standards and interfaces.

    Read more answers →

    by   -   September 16, 2013

    A quality learning experience centered on robotics is hard to find for many students who lack STEM resources through their own schools. Although new science standards hope to improve the situation, K-12 schools are struggling to provide a basic STEM education, let alone opportunities involving more specialized lessons in robotics. So on more than one occasion, I have talked to parents who are struggling to find rewarding opportunities for their children. Fortunately, even if one lives in a “robot desert,” today there are many online and physical resources that can provide rich, self-guided education on robotics.

    by   -   September 15, 2013

    We look for good people from all over the world who have had some formal education in robotics theory, particularly in the basics of kinematics, perception, and cognition. Many universities offer courses in these areas.

    In addition, so much of robotics today depends on software that it is important for roboticists to be well versed in programming languages (C++, Java, Python), if not computer science and software engineering in general.

    I always tell people “Take every computer course you can! Learn everything there is to know about computers!”

    Read more answers →

    by   -   September 15, 2013

    In the past, a robotics education started with any inspiration that filtered through the sparse media of the time. Imagine a dull illness during a bland winter, black and white TV on a fuzzy channel, and then out of nowhere, mom drops a Jack Kirby ‘Fantastic Four’ comic on your sickbed.

    In full color.

    For those who remember, King Kirby was a genius at thickly rendered, forced-perspective sci-fi illustration: spaceships, weapons, and best of all robots in immaculate detail, exciting situations, and traceable isometric projection.

    A robotics education starts there, tracing and drawing your plans, usually in crayon. That kind of inspiration is vital to sustain the obsessiveness needed to face the thousands of hours before you have something you can be proud of (or paid for).

    Following the sketches come the personal discoveries and skills needed to remove your hardware fears: Tinkertoys, Lego, Meccano, balsa airplanes, general disassembly (no alarm clock is safe!), car repair, welding shop, and (if you can afford it) servo-based RC items which give an instinctive feel for  set-point positioning and materials strength.

    (Your electric screwdriver is your best learning tool, so get a good one.)

    After that, a quality robotics education can be picked up pretty much anywhere, provided you’re an ADHD polyglot with a hankering for electronics, electrics, power systems, industrial and product design, acoustics, physics, statics, materials science, animation, behavioral rendering, dynamics, AI, firmware and app programming, illumination focusing and filters, sensors, vision systems, gradient optimization, interfaces and protocols, haptics … (list continues ad infinitum as the speaker fades into the distance, then back up) … and you’ll be fine.

    But first and always …

    When I occasionally get to lecture before K-through-12s with a dozen various robots, I like to point out that: ‘Robotics isn’t one thing, it’s *everything* that makes technology cool brought together. What you’re learning in school *now* applies to how these work.’

    Then I get one of my robots to burp animatedly, to emphasize the point.

    Afterwards the class plays with the bots and fights over the remotes, but sometimes you get a kid who asks insightful questions, wants specific details, and shows a deliberate interest and a fascination with what might now be possible. Something he never thought accessible before.

    Inspiration delivered?

    One in a thousand.

    Good luck kid.

    Read more answers →

    by   -   September 15, 2013

    At the high school or middle school level there is no single best way for students to get a robotics education: there are many ways, and each way reaches the students differently. The easiest way is for students to join an established team: FIRST® Robotics (FLL, FTC, or FRC) teams, VEX robotics teams, BEST Robotics teams, and Botball teams. For a shorter-term experience, students can enroll in various summer camps at a local science center or other locations.  Finally, their teachers can offer to integrate commercially-available Hummingbird Robotics kits into the curriculum at multiple levels.

    My educational robotics experience ranges from FRC- and FLL-level FIRST robotics teams to using the Hummingbird Robotics kits in the classroom and at summer camp.  For the past three years I have worked with the Girls of Steel FIRST Robotics team at Carnegie Mellon University as a high school faculty advisor/business mentor, and in each of the past two years my high school human anatomy students at The Ellis School used Hummingbird Robotics kits to create a robotic arm as a part of the unit on muscles. Most recently, my robot arm lab lesson was successfully adapted for middle school students in a C-MITES summer camp at CMU called “Anatomy and Robotics.”


    The greater Pittsburgh area in Pennsylvania is a wonderful place to get a robotics education if you are a middle school or high school student.  Teachers here have access to workshops and professional development programs so they can be trained to bring robotics into the classroom using the Hummingbird Robotics kit or the LEGO Robotics kits. Students can enroll in summer camps to learn robotics at the Carnegie Science Center, Carnegie Mellon University, or Sarah Heinz House, and they can join FIRST or VEX community robotics teams at CMU (Girls of Steel) and Sarah Heinz House or the teams at their schools.

    Read more answers →

    by   -   September 15, 2013

    We are looking for researchers who are highly motivated, and who are passionate about seeing the results of their research come to fruition and be used by industry or the public. They should have a demonstrated ability to conduct internationally-recognized, cutting-edge research in robotics, and have a great publication record.

    We expect senior staff to have experience in acquiring new project funding and transitioning technologies to applications. Our roboticists should also have a strong interest in deploying robotic technologies in new domains, and in demonstrating these technologies in the field.

    Our staff come from all over the world. We have no preference for geographic region. We want the best staff possible – it’s as simple as that.

    Read more answers →

    by   -   September 15, 2013

    When hiring at BlueBotics, we first assess the personal profile, soft competencies, and team compatibility. After that, we go into a deep technical assessment.

    Today, product sales of the ANT navigation product line are becoming more important than the customer-specific engineering services we provide for mobile robotics. This means that we mainly hire specialists, and we plan for them to be active in production, quality control, deployment, and support.

    We still also hire R&D personnel, but we primarily want the best specialists, not necessarily robotics generalists. Of course, we value a background in robotics, but it is not a must.

    We first search within our network, and especially look to our contacts at EPFL and ETH. I also look into all the unsolicited applications we regularly receive, and post the profile to my LinkedIn network. Finally, we use local headhunters.

    We then get quite a number of CVs, which we assess internally before starting face-to-face meetings.

    Read more answers →

    by   -   August 15, 2013

    As a robot animator I can attest to the fact that robots don’t “need” heads to be treated as social entities. Research has shown that people will befriend a stick as long as it moves properly [1].

    We have a long-standing habit of anthropomorphizing things that aren’t human by attributing to them human-level personality traits or internal motivations based on cognitive-affective architectures that just aren’t there. Animators have relied on the audience’s willingness to suspend disbelief and, in essence, co-animate things into existence: from a sack of flour to a magic broom. It’s possible to incorporate the user’s willingness to bring a robot to life by appropriately setting expectations and being acutely aware of how the context of interaction affects possible outcomes.

    In humans, a head circumscribes a face, whereas on a robot a face can be placed anywhere. In high degree-of-freedom (DOF) robot heads, the facial muscles, though wonderfully complex, can be challenging to orchestrate with sufficient timing precision. If your robot’s design facilitates expression through careful control of the quality of its motion, a head isn’t necessary to communicate essential non-verbal cues; all you need is some means of revealing the robot’s internal state. A robot’s intentions can be conveyed through expressive motion and sound regardless of its form or body configuration.

    [1] Harris, J., & Sharlin, E. (2011, July). Exploring the affect of abstract motion in social human-robot interaction. In RO-MAN, 2011 IEEE (pp. 441-448). IEEE.


    by   -   August 15, 2013

    Are you curious about what your future robotic assistants will look like?

    My bet is that by the time you buy your very first robotic butler, it will have a friendly head on it that moves. In fact, it would be a good idea to make robots with heads if they are intended to share spaces and objects with people. That’s because the head is a really expressive part of our body we naturally use (a lot) to convey essential information to each other.  Robots will need to do the same if they are going to hang out with us soft-tissued human beings at our homes and offices.

    For example, when people are attending to something, they tend to look at the thing they are attending to. People also look in the direction they are headed when they walk, and make eye contact when they talk. People nod when they want to show agreement with what is being said. Without these nonverbal cues from the head, interacting with each other would be much more difficult, because we wouldn’t know what the other person was doing.

    Rodney Brooks, a pioneer in robotics and now Chairman and CTO of Rethink Robotics, had this in mind when he built Baxter. Although Baxter’s arms are as bulky-looking as those of its traditional industrial predecessors, one of its innovative components is a moving head that makes its interaction with not-so-trained users very intuitive.

    If robots are to do meaningful things around us in a safe manner, it’s essential that we know what the robot is attending to, where it is headed, and what it is about to do – a lot of which a robot head can help with. That way, we won’t have to be roboticists to know when it is safe to be around a robot holding a giant knife to make us cucumber salad.

    by   -   August 15, 2013

    I don’t know about you, but if something has a head I assume it has thoughts. When watching a movie I stare at the characters’ faces because I want to know what they feel. So for me a head’s a pretty important thing. If I’m going to talk with a robot I’d like it to have some kind of discernible head. It’s a useful thing if you want people to have warm fuzzy feelings about your robot. It’s useful if people are interfacing with the robot.

    Simply: a head allows a face, and a face allows interface.

    So a head’s only needed if the robot has to interface with people (or other headed animals, say). A head is a design feature, but the main function of an android is its form: it has to look like a human. Giving it a head is a function-follows-form decision. Wasn’t it Hunter S. Thompson who wrote, “Kill the head and the body will die”? Well, this should not be the case for military robots: the beheaded design can be improved upon. Saying that all robots need to have faces is like saying all animals need to have gills. For the deadly, dangerous, and downright dastardly work that robots today need to perform, like gastro-intestinal surgery or military surveillance, a head won’t do much more than get stuck or blown off.

    A head, like hands or a face, is a design decision that’s best left for the robots working directly with humans.

    by   -   August 15, 2013

    The obvious answer to this question is “No: there are lots of robots without heads.” It’s not even clear that social robots necessarily require a head, as even mundane robots like the Roomba are anthropomorphized (taking on human-like qualities) without a head. A follow-up question might be, “How are heads useful?” For humans, the reasons are apparent: food intake, a vessel for our brain, a locus for sensors (eyes and ears), and high-bandwidth communication via expression. What about for robots …?

    • Food intake: Probably not.
    • Computational storage: Again, probably not.
    • Location for sensors: Indeed, the apex of a robot is a natural, obstacle-free vantage point for non-contact sensors. But a “head” form factor is not a strict requirement.
    • Emotion and expression: Ah, the real meat of this question… “Do robots need to express emotion?”

    This is a funny question to ask someone who once (in)famously advocated for either (A) extremely utilitarian designs: “I want my eventual home robot to be as unobtrusive as a trashcan or dishwasher”, or (B) designs unconstrained by the human form factor: “Why not give robots lots of arms (or only one)? Why impose human-like joint limits, arm configurations, and sensing? We can design our own mechanical lifeforms!”

    My views have softened a bit over time. Early (expensive) general-purpose home robots will almost certainly have humanoid characteristics and have heads with the ability to express emotions (i.e. be social) — if nothing else, to appeal to the paying masses. And these robots will be useful: doing my laundry, cleaning my dishes, and cooking my meals. In the early attempts, I will still find their shallow attempts at emotion mundane and I will probably detest the sales pitches about “AI” and “robots that feel.” But as the emotional expressions become more natural and nuanced, and the robots become more capable, I will probably warm up to the idea myself.

    TL;DR: No, many robots do not need heads. Even social robots may not need heads, but (whether I want them to or not) they probably will, because paying consumers will expect it.

    by   -   July 16, 2013

    Policy is really about long-term thinking — a process we should do but don’t do for various reasons. Though China is a notable exception, very few governments make long-term planning a priority.

    Corporations are more disciplined and less prevailed upon by conflicting interests than governments; hence long-term planning is a regular part of their management practice. But corporations have neither ethics nor loyalties, and often do marginally (if not outright) immoral things to preserve the profitability of the company over the welfare of the community and workforce.

    by   -   July 15, 2013

    I am not sure how to describe the specifics of what policy makers should do, but I think there are two gaps that policy makers should think about that are associated with the economic development impact of robotics:

    1. sufficient funding to support an emerging robotics marketplace, and
    2. detailed descriptions of the innovations needed to solve specific problems.
    by and   -   July 15, 2013

    [RBI Editors]
    As an active robotics investor, a leading authority on the business of robotics, and the author of The Robot Report and Everything Robotic, you have your finger on the pulse of the field’s economic development. In a nutshell, what’s happening in robotics today?

    [Frank Tobe]
    I think the biggest thing happening today is the acceptance of the low-cost Baxter and Universal robots into SMEs and small factories everywhere.  Sales will likely be 2% of the total; 5% in 2014, and possibly 15% in 2015. That’s growth! And that’s before the big four robot makers start selling their low-cost entry robots for SMEs. This has more near-term promise than unmanned aerial or ground vehicles in agriculture and elsewhere. These co-robots are proving that we need more high-tech people and fewer low-skilled people in this globally competitive economy.

    by   -   June 17, 2013

    Well, I’m lucky enough to be a gentleman scientist, so I concurrently study problems on minimal dynamical control systems (optimizing performance to silicon ratios), power regeneration and efficiency, alien robot morphologies (weird bodies outside the conventional biomimetic), sensor design, situational awareness and integration, motor and mechanical operational extension, locomotion and loading, and anything else that allows for useful, clean, interesting, semi-perpetual automatons.

    Y’know, the basics.  Bringing good things to life.  Moo Ha ha.

    So robotics research is excellent for those with ADHD – the field’s problem, and its feature, is that it’s not just anything, it’s everything that’s techno fun. However, every now and again something skitters, flops, pronks, spins, walks, tumbles, or bounces across the desk that could really use … a brain.

    So the short answer is I’d put (other people’s) money into researching affordable competent minds that could help organize any mechanical body, sensor or environment they are given.  Small, quick, cheap, and with a voice interface so I can encourage it to effectiveness without a million keystrokes.  Power on and it asks “Hello, what is my name?”

    Yes, that’d be handy.  Do they have an App for that yet?

    Read more answers →

    by   -   June 17, 2013

    Community empowerment through massive robotic sensing.

    There is no question we live in a world that is changing. Pollutants are changing the dynamics of the air we breathe, the water we drink and even the soil on which we live. Yet the power to measure pollution, measure human behavior (including Emergency Room visits) and correlate the values is held tightly by government and corporate players.  They have the money to focus on sensors and values that make their case, and they have the marketing skills to then present those values in the best possible light for reelection and for corporate profit.

    But in fact those most touched by a changing world are ordinary citizens, and it is the citizen who has the potential to make decisions that immediately impact health and future legislation, from what neighborhood to live in to which politician to elect. Robotic sensing technologies are rapidly becoming less expensive, and with the right infusion of research I believe we could develop the networking, data visualization and interaction smarts to have global, publicly accessible information about all sources of pollution. This would empower citizens and communities to make far more informed decisions, and to fight biased information presentations with their own re-interpretation of source data. This will take new innovation in sensing technologies, networking, Big Data storage, search, retrieval and evaluation.

    It is the stuff of robotics, through and through, applied to the deep goal of community empowerment at an international scale.

    Read more answers →

    by   -   May 15, 2013

    Well, it depends on what you mean by mainstream. For a number of major industry sectors robotics is already mainstream: in assembly-line automation, for instance, or undersea oil well maintenance and inspection. You could argue that robotics is well established as the technology of choice for planetary exploration. And in human culture too, robots are already decidedly mainstream. Make-believe robots are everywhere, from toys and children’s cartoons to TV ads and big-budget Hollywood movies. Robots are so rooted in our cultural landscape that public attitudes are, I believe, informed – or rather misinformed – primarily by fictional rather than real-world robots.

    But I understand the sentiment behind the question. In robotics we have a shared sense of a technology that has yet to reach its true potential; of a dream unfulfilled.

    The question asks what is the single biggest obstacle. In my view some of the biggest immediate obstacles are not technical but human. Let me explain with an example. We already have some very capable tele-operated robots for disaster response. They are rugged, reliable, and some are well field-tested. Yet why is it that robots like these are not standard equipment with fire brigades? I see no technical reason that fire tenders shouldn’t have, as standard, a compartment with a tele-operated robot – charged and ready for use when it’s needed. There are, in my view, no real technical obstacles. The problem, I think, is that such robots need to become accepted by fire departments and the firefighters themselves, with all that this entails for training, in-use experience and revised operational procedures.

    In the longer term we need to ask what it would mean for robotics to go mainstream. Would it mean everyone having a personal robot, in the same way we all now have personal computing devices? Or when all cars are driverless, perhaps? Or when everyone whose life would be improved by a robot assistant could reasonably expect to be able to afford one? Some versions of mainstream are maybe not a good idea: I’m not sure I want to contemplate a world in which there are as many personal mobile robots as there are mobile phones now (~4.5 billion). Would this create robot smog, as Illah Nourbakhsh calls it in his new book Robot Futures?

    Right now I don’t have a clear idea of what it would mean for robots to go mainstream, but one thing’s for sure: we should be thinking about what kind of sustainable, humanity-benefiting and life-enhancing mainstream robot futures we really want.

    Read more answers →

    by   -   May 15, 2013

    The biggest obstacle to broader adoption of robotics is that only experienced roboticists can develop robotics applications.  To make a robot reliably and robustly do something useful, you need a deep understanding of a broad variety of topics, from state estimation to perception to path planning.  While few people in the world have this expertise, many people can write software.  What we need is more of those software developers involved in the business of developing robotics applications.

    I say “applications” to distinguish this work from that of developing new algorithms or core building blocks.  Making an analogy to traditional software development, I don’t need to understand how process schedulers, or file systems, or memory managers work in order to develop useful desktop applications.  And I don’t need to know the details of DNS, web servers, or web sockets to develop portable web applications.  Knowing more about the underpinnings of the system will always be useful, of course.  But the key is that, once the building blocks are established, understood, documented, and tutorialized, the barrier has been greatly lowered: you just need to be able to write code.

    Beyond just getting more people working with robots, we need better ideas for how robotics technology can be usefully and profitably employed to support people in their everyday lives.  My experience in the robotics community over the last 15 years has convinced me that roboticists are pathologically bad at coming up with application ideas.  We’re enamored of the technology, which is good in that it motivates us to work hard on important problems.  But it also leads us to concentrate on “robotic” solutions to problems, without regard to what people who experience those problems really need.  We can fix this problem by adding orders of magnitude more developers to our community, each of whom comes with a new and different perspective. And we can do that by making the development of robotics applications accessible to any competent programmer.

    The Android and iOS platforms made it possible for people with no more than a passing understanding of 3G, GPS, or touch screens to build useful, even world-changing mobile applications.  We can do the same for robotics.  We’re on the right path, with a lot of effort going into open, shared software platforms for robotics.  We just need to keep pushing, and to keep the non-robotics engineer in mind when we’re building things.

    Read more answers →

    by   -   May 15, 2013

    From experience, the single biggest obstacle to personal robotics markets is cost, in both money and time. Robots have the disadvantage of being over-promoted in fictional media while over-priced on the shelves. Sci-fi is fine for inspiration if builders feel the money-time is justified, but half-finished robots internationally greatly outnumber completed ones, more’s the pity.

    Frustration is compounded when the budding obsessive-compulsive hits what I call the ‘complexity barrier’, where for a linear increase in device competence, an exponential increase in money-time is required.  This often leads to a condition I call FIBA (for “F&@k It, Build Another”) where enthusiasm fades as prototypes fall further into dusty closets.

    The problem with FIBA is that the evolutionary discoveries and skills that come from completing the mechanoid never materialize. Also, unlike failed code, film, books, or other virtual projects, the half-finished device will be too expensive to throw away and will likely haunt the builder’s ambitions for years (dammit!).

    It doesn’t inspire, especially when Hollywood (and nature) makes it seem so easy. There has to be a way to reduce all the cost factors so that robots can be put together at a price that not only encourages robo-evolution but also allows users to put them at risk. Right now conventional personal robots are precious things subject to deceleration trauma, but my best discoveries have come not from artificial life, but from artificial danger. We can simulate a robot, but not the real world it lives in, and how it deals with real-world problems provides vital clues for subsequent generations (robots and builders alike). We have to get them cheap enough that they can make mistakes, or they/we will never learn … the vital step to proctoring these creatures into reality.

    It’s a forced evolution, and sometimes painful/funny to watch, but if you’ve got a dozen of these dumbos in the lab, let’s see what the dog thinks of one.

    Speaking of, anyone know how to get fang marks out of aluminum?

    Read more answers →

    by   -   April 15, 2013

    These days it is hard to read an article about the future of robots that does not include a reference to jobs. As a pure roboticist I object to the constant connection between the two, but as a concerned citizen I think it is a very worthwhile discussion. Since the year 2000, the US has lost more than 6 million manufacturing jobs — more than 1/3 of all direct manufacturing jobs in the US, and the fastest drop in a single decade on record.

    by   -   April 14, 2013

    There is no doubt that robots, and automation in general, replace humans in the workforce: all productivity-enhancing tools, by definition, result in a decrease in the number of man-hours required to perform a given task.

    This Robotics By Invitation contribution is part of Robohub’s Jobs Focus.

    There may be some regional effects that result in an immediate increase in jobs (for example, setting up a new manufacturing plant and hiring workers to maintain the machines), but the global effect is indisputable: overall, robots replace human workers.

    What is also true, however, is that robots create jobs as well. This is simply Economics 101: there is a redistribution of labor from low-skilled jobs – what robots can do now and in the foreseeable future – to higher-skilled jobs. An analogy from the North Carolina Department of Agriculture: “In 1790, 93% of the population of the United States was rural, most of them farmers. By 1990, only 200 years later, barely 2% of our population are farmers.” What is also true is that there are many more software engineers now than there were in 1790; or mechanics; or physiotherapists; or professional athletes; or artists.

    So the debate about robots replacing human workers is, for the most part, a tired and old one; just replace the word ‘robot’ with any productivity-enhancing tool or development. And as long as the process is gradual, one can reasonably argue that society benefits as a whole.

    But the question does have merit, because human workers are at an artificial disadvantage relative to their robot counterparts, and the culprit is artificially low interest rates. Large companies such as Procter and Gamble can issue 10-year corporate bonds with astronomically low yields of 2.3%. With money so cheap, productivity tools – such as robots – that would not be economically viable under normal interest rates and yields are now a bargain. Why should a company ‘rent’ labor (a human worker) when it can ‘buy’ it (a robot)? Have we not seen this storyline before?
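    The ‘rent vs. buy’ point lends itself to back-of-the-envelope arithmetic. In the sketch below, only the 2.3% bond yield comes from the text; the robot price, worker cost, and five-year service life are illustrative assumptions, not reported figures:

```python
# Back-of-the-envelope "rent labor vs. buy a robot" comparison.
# Only the 2.3% yield is from the article; other numbers are assumed.

def annual_payment(principal, rate, years):
    """Level annual payment that amortizes `principal` at `rate` over `years`."""
    if rate == 0:
        return principal / years
    return principal * rate / (1 - (1 + rate) ** -years)

ROBOT_PRICE = 25_000    # assumed purchase price, USD
SERVICE_YEARS = 5       # assumed useful life
WORKER_COST = 40_000    # assumed fully loaded annual labor cost, USD

cheap_money = annual_payment(ROBOT_PRICE, 0.023, SERVICE_YEARS)   # 2.3% bond money
normal_money = annual_payment(ROBOT_PRICE, 0.08, SERVICE_YEARS)   # "normal" rates

print(f"robot per year at 2.3%: ${cheap_money:,.0f}")
print(f"robot per year at 8.0%: ${normal_money:,.0f}")
print(f"worker per year:        ${WORKER_COST:,}")
```

    Under these assumed numbers the robot beats the worker at either rate, but cheap money shaves hundreds of dollars a year off the ‘buy’ side, tilting marginal cases toward automation — which is the argument being made here.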

    Read more answers →

    See all the posts in Robohub’s Jobs Focus →


    [Photo credit: Petr Kratochvil.]

    by   -   April 14, 2013

    Robots do kill jobs, but they’re crappy jobs, so good riddance. If you’ve ever taken a job because you were desperate for the money, but regretted it immediately after you got it, then you know what I mean.

    This Robotics By Invitation contribution is part of Robohub’s Jobs Focus.

    The anxiety occurs when robots have anthropomorphic similarities that people wrongly associate with human ambition. When a (semi-)humanoid takes over an entire menial job that used to be done by a person, there’s an instinct to blame the machine, not the corporation optimizing its bottom line. Optimizing tasks to reduce costs is a good thing. It’s just a shame we haven’t kept up with the social reforms needed so that the people who had those jobs before can find better jobs now.

    So the short answer is robot-brained corporations kill jobs.  Robots are just the anthropomorphic patsies that get blamed.

    Still, now I have to go and stare worriedly at my toaster.

    Read more answers →

    See all the posts in Robohub’s Jobs Focus →