In the constantly changing landscape of today’s global digital workspace, AI’s presence grows in almost every industry. Retail giants like Amazon and Alibaba use machine learning algorithms to add value to the customer experience. Machine learning is also prevalent in the new world of service robotics as robots transition from blind, dumb and caged to mobile and perceptive.
Competition is particularly intense between the US and China, even though other countries and global corporations have large AI programs as well. The competition is real, fierce and dramatic. Talent is hard to find and costly; it’s a complex field that few fully understand, so the talent pool is limited. Grabs of key players and companies headline the news every few days: “Apple hires away Google’s chief of search and AI.” “Amazon acquires AI cybersecurity startup.” “IBM invests millions into MIT AI research lab.” “Oracle acquires Zenedge.” “Ford acquires auto tech startup Argo AI.” “Baidu hires three world-renowned artificial intelligence scientists.”
The media, partly because of the subject’s complexity and partly from lack of knowledge, frighten people with scare headlines about misuse and autonomous weaponry. They exaggerate the competition into a hotly contested war for mastery of the field. It’s not really a “war,” but it is dramatic, and it’s playing out right now on many levels: immigration law, intellectual property transgressions, trade war fears, labor cost and availability challenges, and unfair competitive practices, alongside technological breakthroughs and lower costs that enable experimentation and testing.
Two recent trends have sparked widespread use of machine learning: the availability of massive amounts of training data, and powerful, efficient parallel computing. GPUs are parallel processors and are used to train deep neural networks; they do so in less time, and with far less datacenter infrastructure, than non-parallel supercomputers.
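The reason GPUs fit this workload can be seen in miniature: the gradient of a training loss over a batch is a sum of independent per-example terms, so the work maps naturally onto thousands of cores running at once. The one-parameter model below is a purely illustrative sketch (hypothetical data, no particular framework), showing the per-example independence that parallel hardware exploits:

```python
# Illustrative sketch (not Nvidia-specific): why neural-network training
# maps well onto parallel hardware. The gradient over a batch is the sum
# of per-example gradients, so examples can be processed independently.
# Hypothetical model: a one-parameter linear fit y = w * x.

def example_gradient(w, x, y):
    """Gradient of the squared error (w*x - y)**2 with respect to w."""
    return 2 * (w * x - y) * x

def batch_gradient(w, batch):
    # Each term is independent of the others: this is the sum a GPU
    # computes in parallel rather than one example at a time.
    return sum(example_gradient(w, x, y) for x, y in batch)

batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # data drawn from y = 2x
w = 0.0
for _ in range(100):  # plain gradient descent on the average gradient
    w -= 0.01 * batch_gradient(w, batch) / len(batch)

print(round(w, 2))  # prints 2.0, the true slope
```

On real hardware, the same independence lets a GPU evaluate thousands of such per-example terms simultaneously, which is why training time drops so sharply compared with serial processing.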
Service and mobile robots often need all their computing power onboard, unlike stationary robots whose control systems sit in separate nearby cabinets. Sometimes onboard computing involves multiple processors; other times it requires supercomputer-class performance of the kind chip makers now deliver through parallel processing. Nvidia’s Jetson chip, Isaac lab and toolset are an example.
The recent Nvidia GPU Technology Conference, held in San Jose last month, highlighted Nvidia’s goal to capture the robotics AI market. They’ve set up an SDK and lab to help robotics companies capture, and learn from, the vast amounts of data their robots process as they go about mobility and vision tasks.
Nvidia’s Jetson GPU, SDK, toolset and simulation platform are designed to help roboticists build and test robotics applications while simultaneously managing the various onboard processes such as perception, navigation and manipulation. As a demonstration of the breadth of its toolset’s capabilities, Nvidia had a delivery robot carting objects around the show floor.
Nvidia is offering libraries, an SDK, APIs, an open-source deep learning accelerator, and other tools to encourage robot makers to incorporate Nvidia chips into their products. Nvidia sees this as a future source of revenue; right now it is mostly research and experimentation.
In a recent CB Insights graphic categorizing the 2018 AI 100, 12 companies were highlighted in the robotics and auto technology sectors. Note from the Venn diagram that not all AI companies are involved with robotics (in fact, most aren’t: there were 2,000+ startups in the pool from which the 100 were chosen). The same is true for robotics.
Here are four use cases of robot companies using AI chips in their products:
Twelve years ago, as a national long-term strategic goal, China began crafting 5-year plans with specific goals: to encourage the use of robots in manufacturing, enhancing quality and reducing the need for unskilled labor, and to establish robot manufacturing in-country, reducing reliance on foreign suppliers. After three successive well-funded and fully incentivized 5-year robotics plans, the transformation is plain to see: robot and component manufacturers have grown from fewer than 10 to more than 700, and companies using robots in their manufacturing and material handling processes have grown similarly.
[NOTE: During the same period, America implemented various manufacturing initiatives involving robotics; however, none was comparably funded or, more importantly, continuously funded over time.]
Recently China turned its focus to artificial intelligence. Specifically, it has set out a three-pronged plan: catch up with the current leaders by 2020; achieve parity in autonomous vehicles, image recognition and, perhaps, simultaneous translation by 2025; and lead the world in AI and machine learning by 2030.
Western companies doing business in China have been plagued by intellectual property theft, copying and reverse engineering, and heavy-handed partnerships and joint ventures in which IP must be handed over to the Chinese venture. Steve Dickinson, a lawyer with Harris | Bricken, a Seattle law firm whose slogan is “Tough Markets; Bold Lawyers,” wrote:
“With respect to appropriating the technology and then selling it back into the developed market from which it came: that is of course the Chinese strategy. It is the strategy of businesses in every developing country. The U.S. followed this approach during the entire 19th and early 20th centuries. Japan and Korea and Taiwan did it with great success in the post WWII era. That is how technical progress is made.”
“It is clear that appropriating foreign AI technology is the goal of every Chinese company operating in this sector [robotics, e-commerce, logistics and manufacturing]. For that reason, all foreign entities that work with Chinese companies in any way must be aware of the significant risk and must take the steps required to protect themselves.”
What is really clear is that where data is available in large quantities, as in China, and where speed is the norm and privacy is nil, as in China, AI techniques such as machine and deep learning can thrive and achieve remarkable results at breakneck speed. That is what is happening right now in China.
Growth in the service robotics sector is still more promise than reality, and there is a pressing need to deliver on those promises. We have seen tremendous progress in processors, sensors, cameras and communications, but so far the integration is lacking. One roboticist characterized the integration of all that data as the need for a “reality sensor,” i.e., a higher-level indicator of what is being seen or processed. If the sensors pick up a series of pixels interpreted as a person, and the processing determines that person’s motion to be intersecting with your robot’s, it would be helpful to know whether it’s a pedestrian, a policeman, a fireman, a sanitation worker, a construction worker, a surveyor, etc. That information would help refine the prediction and your actions. It would add reality to image processing and visual perception.
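The “reality sensor” idea above can be sketched in a few lines: once pixels are classified as a person, a higher-level category refines how the robot predicts that person’s motion. Everything in this sketch is a hypothetical illustration, not a real perception stack; the categories, speeds and predictability scores are invented for the example:

```python
# Hedged sketch of the "reality sensor" described above: a higher-level
# category attached to a detected person refines the motion prediction.
# All categories, speed priors and predictability scores below are
# hypothetical illustrations, not values from any real system.

PRIORS = {
    # category: (typical speed in m/s, path predictability from 0 to 1)
    "pedestrian":          (1.4, 0.8),
    "construction worker": (0.6, 0.4),  # may stop, turn or step back
    "police officer":      (1.2, 0.5),  # may halt or redirect traffic
}

def time_to_intersection(distance_m, category):
    """Conservative estimate of seconds until the detected person could
    reach our path. Less predictable categories get a larger safety
    factor on assumed speed, so the robot reacts earlier."""
    speed, predictability = PRIORS.get(category, (1.4, 0.5))
    margin = 1.0 + (1.0 - predictability)  # 1.0 (steady) .. 2.0 (erratic)
    return distance_m / (speed * margin)

print(round(time_to_intersection(7.0, "pedestrian"), 2))           # prints 4.17
print(round(time_to_intersection(7.0, "construction worker"), 2))  # prints 7.29
```

The point is not the arithmetic but the structure: a category-level label feeding back into the motion model is exactly the kind of higher-level context that would turn raw pixel detections into something closer to “reality.”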
Even as the balance of development shifts from hardware toward software, there are still many challenges to overcome. Henrik Christensen, director of the Institute for Contextual Robotics at the University of California San Diego, cited a few of those challenges:
One often forgets the science involved in robotics and embedded AI, and the many challenges that remain until we have a functional, fully capable, fully interactive service robot.