Robohub.org
Podcast episode 181: bStem with Todd Hylton

01 May 2015





In this episode, Audrow Nash interviews Todd Hylton, Senior Vice President at Brain Corporation, about neuromorphic computers. They discuss the robotics development board bStem, which approximates a neuromorphic computer, as well as the eyeRover: a small balancing robot that demonstrates how the bStem can be used in mobile robots.

youtu.be/uGcUB5MAVsY

Todd Hylton 

As Senior Vice President of Brain Corporation, Dr. Todd Hylton leads the development of business and technical strategies within the company. A scientist and co-founder of a small semiconductor equipment manufacturer, Hylton brings 25 years of experience in the semiconductor, optical communications, data storage and defense industries, alongside a broad technical and entrepreneurial background in research and development, small business, marketing and government programs.



Transcript

Audrow Nash: Hi, welcome to the Robots Podcast. Can you introduce yourself?

Todd Hylton: I’m the senior VP of Strategy at Brain Corporation. Brain Corporation is a technology startup located in San Diego, California working on computing hardware and software technology for robotics.

Audrow Nash: Can you tell me the goal and motivation behind the company?

Todd Hylton: The company’s mission is to provide computing technology – hardware, software and cloud-based or web-based services – for people who want to build robots, so that robots can become more a part of everyday life than they currently are.

I often ask people, “How many robots did you see today?” And usually the answer is “none” unless they have a [robot] vacuum cleaner at home. Our mission is to make it possible for many more robots to be created by providing some of the foundational technologies that are, frankly, quite challenging to produce if you’re building a robot from scratch.

Audrow Nash: To begin, can you give me an overview of neuromorphic computing and its advantages over using a CPU?

Todd Hylton: This is a topic near and dear to my heart, and certainly part of what Brain Corporation does.

Before working at Brain Corporation, I was at DARPA, where I sponsored a project in neuromorphic computing called SyNAPSE. The goal for neuromorphic computing in general was to see if it was possible to build a different kind of computing architecture, one in which computing was massively distributed and memory was embedded very near the computing elements. It draws inspiration from biology: very simple processing units (which you might think of as neurons) send messages to other processing units (via things you might think of as spikes), and the connections between them are the memory around the processors (what you might think of as synapses).
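
To make the neuron/spike/synapse picture concrete, here is a minimal leaky integrate-and-fire sketch in Python (using NumPy). It only illustrates the architecture described above – simple units exchanging spike messages, with a weight matrix playing the role of synaptic memory next to the compute – and is not code from the SyNAPSE project or from Brain Corporation.

```python
import numpy as np

# Minimal leaky integrate-and-fire network: each unit integrates weighted
# spikes from its neighbors, leaks toward rest, and fires when it crosses
# a threshold. The weight matrix plays the role of the synapses, i.e. the
# memory sitting right next to the processing elements.

rng = np.random.default_rng(0)
n = 100                                  # number of "neurons"
weights = rng.normal(0, 0.3, (n, n))     # synaptic weights (the memory)
potential = np.zeros(n)                  # membrane potential per neuron
threshold, leak = 1.0, 0.9

spikes = rng.random(n) < 0.05            # some initial random activity
for step in range(50):
    # Each neuron sums the spikes it receives, scaled by its synapses,
    # and its potential decays (leaks) a little every step.
    potential = leak * potential + weights @ spikes.astype(float)
    spikes = potential > threshold       # fire where threshold is crossed
    potential[spikes] = 0.0              # reset the neurons that fired
    print(f"step {step:2d}: {spikes.sum()} neurons spiked")
```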

The goal for that project was to see if it was possible to build a large-scale neuromorphic computer and also to push boundaries in memory technology. You need a lot of memory on board to make a good neuromorphic computer, and [we wanted] to push our understanding of what we would be able to teach these chips to do, and how we would teach them by having synaptic modifications and neuronal dynamics built into the chips.

Though Brain Corporation is not building neuromorphic chips, many of the learning algorithms that we write for our robots have ideas like neurons and synapses underneath them, so we would very much like to be able to buy these neuromorphic chips when they become available, because we think they will be a great enabler for our robotics packages.

To come back to your original question about CPU versus neuromorphic processor, CPUs are great for general-purpose computing. You can give them gigantic lists of instructions, they’re pretty simple to use, and that’s why we like them. But if you need to do a whole bunch of things in parallel – if you need to coordinate many different things in time and space – they’re not very efficient. Too much energy is wasted moving stuff back and forth in the chip.
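
A toy illustration of that point, assuming nothing about Brain Corporation’s software: brightening an image one pixel at a time in a serial loop versus as a single array-wide operation. The second form maps naturally onto parallel hardware such as a GPU, DSP, or neuromorphic fabric; the first is stuck in one instruction stream.

```python
import time
import numpy as np

image = np.random.rand(480, 640)

# Serial: one element at a time, a long list of instructions.
t0 = time.perf_counter()
out_serial = np.empty_like(image)
for i in range(image.shape[0]):
    for j in range(image.shape[1]):
        out_serial[i, j] = min(image[i, j] * 1.2, 1.0)
t1 = time.perf_counter()

# Parallel-friendly: the whole frame expressed as one operation.
out_parallel = np.minimum(image * 1.2, 1.0)
t2 = time.perf_counter()

print(f"serial loop: {t1 - t0:.3f} s")
print(f"vectorized:  {t2 - t1:.3f} s")
assert np.allclose(out_serial, out_parallel)
```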

Audrow Nash: An example of that is vision processing … ?

Todd Hylton: Yes, a good example of that is vision processing, which goes right back to robotics. One of the key things that robots need, and mostly don’t have, are vision systems. There are different levels of vision system, and some are quite simple, but we envision, at some point in the not too distant future, a vision system that actually begins to learn the dynamics and the statistics of its environment in a spontaneous way. And that really needs a different kind of computer to do it well.
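
As a rough sketch of what “learning the statistics of its environment” could mean (a toy stand-in, not Brain Corporation’s vision system): keep a running mean and variance of incoming frames and flag pixels that deviate strongly from what has been seen so far.

```python
import numpy as np

alpha = 0.05                      # how quickly the model adapts
mean = None
var = None

def update_model(frame):
    """Fold one grayscale frame into the running statistics."""
    global mean, var
    if mean is None:
        mean = frame.astype(float)
        var = np.ones_like(mean)
        return np.zeros_like(mean, dtype=bool)
    diff = frame - mean
    mean = mean + alpha * diff
    var = (1 - alpha) * (var + alpha * diff * diff)
    # Pixels more than 3 standard deviations from the learned mean are
    # "surprising" relative to the environment seen so far.
    return np.abs(frame - mean) > 3 * np.sqrt(var)

# Feed it a stream of synthetic frames.
rng = np.random.default_rng(1)
for t in range(100):
    frame = rng.normal(0.5, 0.1, (120, 160))
    surprising = update_model(frame)
    if t % 20 == 0:
        print(f"frame {t}: {surprising.sum()} surprising pixels")
```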

Audrow Nash: You guys have created bStem. Can you tell me a bit about it?

Todd Hylton: bStem does not include a neuromorphic processor – those neuromorphic processors are still in their very early phases of being built and the tools needed to make them are not commercially available yet. But bStem is the closest thing we can get right now.

We wanted a processor that was low power and low cost, but still with a lot of computational capacity, because you don’t have a lot of power and you need a lot of computation if you’ve got a mobile robot. bStem’s basic technology is derived from the mobile phone industry. Chips that are available for mobile phones provide lots of different computing capabilities: GPUs, CPUs, DSPs, various codecs for vision – all of these things are part of what we need. So we have taken state-of-the-art cell phone chipsets made by Qualcomm and repurposed them for robotics on this little board we call bStem. It also comes with sensors, motor drivers, and breakout boards for the various motors that a robot may have.

Audrow Nash: Are there similar development boards out there?

Todd Hylton: I don’t know of any similar development boards that use state-of-the-art mobile phone technology for robotics. But there certainly are development boards that use mobile phone technology. If you want to build a phone or tablet you can get one of these boards. Unfortunately they’re a long way from what a roboticist really wants. With bStem, we closed the gap between those development boards built for phones and development boards built for robots. If you get a bStem, it’s going to look and feel just like what you would expect it to be as a roboticist and not as a mobile phone developer.

Audrow Nash: What kind of reactions have you gotten from people who are using bStem?

Todd Hylton: We’ve given bStem to a handful of developers. The basic reaction is, “Wow I can’t believe it. I’ve got a whole Ubuntu package on this tiny little board, it takes about two watts to run it, I can develop my whole robotic controller on this board, I can do Gmail on it if I feel like it, plus it’s got a whole bunch of other tools and goodies that make it easy for me to write the controllers that I want.”

Audrow Nash: Can you tell me a bit about eyeRover?

Todd Hylton: As I said in the beginning, we aspire to build the computing technology for people who want to build robots. We don’t plan to build and sell robots, but we built the eyeRover because we had to integrate the whole technology stack: the computer hardware, the software, the sensors, all put onto a robot, and we wanted to be able to train the robot, so it has a learning system on it as well. It basically forced us to build the whole technology piece so that we could begin to show our potential partners what it is we can do.

Audrow Nash: Can you describe some of the sensors and actuators that eyeRover has?

Todd Hylton: eyeRover is a two-wheel balancing robot, sort of a Segway-type robot. It’s got two cameras and a relatively simple vision system with which it can see its environment. It has a bStem board, onboard IMUs, and magnetometers (which are used in some of the algorithms).
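
Brain Corporation has not published eyeRover’s controller, but a generic sketch of what a two-wheel balancing loop looks like may help: read the pitch from the IMU, run a PID controller, and drive the wheels. The helper functions and gains below are hypothetical placeholders, not the bStem API.

```python
import time

# Generic Segway-style balancing loop. read_imu_pitch() and
# set_wheel_speeds() stand in for whatever driver the board exposes;
# the gains are made up and would need tuning on real hardware.

KP, KI, KD = 40.0, 0.5, 1.2      # illustrative PID gains
DT = 0.01                         # 100 Hz control loop

def balance_loop(read_imu_pitch, set_wheel_speeds):
    integral = 0.0
    prev_error = 0.0
    while True:
        error = 0.0 - read_imu_pitch()          # want pitch = 0 (upright)
        integral += error * DT
        derivative = (error - prev_error) / DT
        prev_error = error

        command = KP * error + KI * integral + KD * derivative
        command = max(-1.0, min(1.0, command))  # clamp to motor range
        set_wheel_speeds(command, command)      # drive both wheels
        time.sleep(DT)
```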

There are a lot of things on bStem that it doesn’t use, because we didn’t need them for this particular robot, but other roboticists would probably like them. bStem also includes a user interface, which you can use both to remote-control the robot and to train the robot in a behavior. That remote controller looks like a game pad/iPad combination. It’s sort of like a video game controller for the robot.

Audrow Nash: It looks to me like a PSP, the older kind.

Todd Hylton: Yeah, yeah.

Audrow Nash: Can you talk about eyeRover’s modes of learning and how you’ve been able to train him?

Todd Hylton: All of the learning modes on eyeRover are what we call ‘supervised learning’. Supervised learning is when you essentially show the robot what it is supposed to do; it learns to associate its sensory input with whatever it is you told it to do. For example, if the robot is supposed to turn right whenever it sees a green block on the left, I just show it that behavior a few times and it makes that association. It’s a well-known way of doing training.
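
A minimal sketch of that kind of supervised association, using the green-block example (Brain Corporation’s actual algorithm is not published, so this is only an illustrative stand-in built on scikit-learn): log observation/command pairs during the demonstration, fit a simple classifier, and replay it at run time.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Demonstration data: horizontal offset of a green block in the image
# (negative = left) and the command the operator gave
# (1 = turn right, 0 = go straight).
observations = np.array([[-0.8], [-0.6], [-0.5], [0.1], [0.4], [0.7]])
commands     = np.array([  1,      1,      1,     0,     0,     0   ])

policy = KNeighborsClassifier(n_neighbors=3)
policy.fit(observations, commands)

# At run time, the robot maps what it sees to what it was shown to do.
new_observation = np.array([[-0.7]])     # block appears on the left
action = policy.predict(new_observation)[0]
print("action:", "turn right" if action == 1 else "go straight")
```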

We’ve built many different supervised learning algorithms, and the ones that we do on the eyeRover we’re doing primarily because we wanted to show navigation behaviors; we basically built our own special supervised algorithm that we thought was particularly effective and also didn’t require gigantic computational resources.

In the future, as we build out the learning piece – which we call the Brain Operating System, that’s the software stack that allows the robot to learn – it will also have what people generally think of as ‘unsupervised learning,’ where by exploring its environment the robot begins to learn the structure of its world. It will likely have some reinforcement learning as well, which is what I kind of think of as an impoverished supervised learning where instead of telling the robot exactly what it’s supposed to do, you just tell it “that was good” or “that was bad.”
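
A toy sketch of that “good/bad” reinforcement idea, not Brain OS code: keep a preference score per action, sample actions with a softmax over the scores, and nudge a score up or down whenever the trainer gives feedback.

```python
import numpy as np

rng = np.random.default_rng(2)
actions = ["forward", "turn_left", "turn_right"]
preferences = np.zeros(len(actions))
learning_rate = 0.5

def choose_action():
    """Sample an action with probability proportional to exp(preference)."""
    probs = np.exp(preferences) / np.exp(preferences).sum()
    return rng.choice(len(actions), p=probs)

def give_feedback(action_index, reward):
    """reward = +1 for 'that was good', -1 for 'that was bad'."""
    preferences[action_index] += learning_rate * reward

# Simulated training: the trainer likes 'forward' and dislikes the rest.
for _ in range(200):
    a = choose_action()
    give_feedback(a, +1 if actions[a] == "forward" else -1)

print(dict(zip(actions, np.round(preferences, 2))))
```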

Audrow Nash: Can you talk about some of your examples of actually training eyeRover?

Todd Hylton: We’ve trained it to do simple navigation around objects that may be on the floor – around your desk, around your chair, around your trash can – do some loops and figure eights, go from point A to point B. The way we train it is to show it the behavior by remote control a few times – a few loops around the path it’s supposed to follow – and then we let it go. And if it has correctly learned it, it does the same behavior. You can also train it incrementally, where you train it, you let it go, you see if it’s doing what you want, and you just give small corrections when it makes mistakes. Over time it does better in those areas where it was making mistakes. For the kinds of things that we’re doing now, it usually only takes a couple of minutes to train a path like the ones I just described.

We’ve also given it some simple gestures and it can learn to come to you or go away from you. On other robots we’ve trained them to play fetch and we can show them an object that they’re supposed to find: put the object out somewhere, it’ll go get the object and bring it back to a base, and it’s all trained.

The cool thing about that is that it’s hard to write a piece of code that would just do that generically. You can’t foresee every environment, even if they’re pretty simple, and you can’t really code it. So one of the key things we think robots need in order to be part of everyday life is the ability to adapt, or to be trained, in the environments in which they’re going to perform, because no coder can anticipate every single situation a robot is going to see.

Audrow Nash: Will this bring the development of robots and their behaviors to the masses … because you can train the robot to behave as you want?

Todd Hylton: Exactly.

Audrow Nash: Is that the goal of Brain Corp?

Todd Hylton: That’s one of the goals, and I think it’ll go in stages. There will be some pre-training by the people who build the robots, so that the thing doesn’t have to learn from scratch when you buy it from Best Buy and stick it in your home. That way, when you get it in the home, it will be much more capable and much more useful, [especially] if you can give it additional training.

One of the challenges for us, though, is to come up with user interfaces so that people who aren’t roboticists or technologists can train a robot. There’s a big stack of things that we worry about, such as what can people actually do, how can we actually control the robot, how can we make it clear, how do you communicate the task to the robot … the AI pieces that try to put together what the robot senses and what it’s being told to do … all the computing hardware and then a whole cloud infrastructure so that you can get updates, you can upload brains, you can upload what the robot’s done so we can diagnose it if something bad has happened and it’s not performing the way it should be. We are doing all of those things.

Audrow Nash: Now that you’re using bStem, what are some of the other applications that you anticipate seeing from the user community?

Todd Hylton: That’s what we’re just now beginning to learn in detail. We were in hibernation for many years getting the technology together, and it’s only in the last month or so that we’ve actually been telling people. We didn’t really want to start saying much until we had a platform on which we could demonstrate it, which is why the eyeRover exists.

What I see in the robotics industry in general – like at this show; we’re at RoboBusiness now – is mostly very small companies in narrow niches. [Of course this is a] silly exaggeration, but it seems that most of the companies bring gears and motors and sheet metal and plastic and a few chips in through a loading dock, and a robot comes out the other end. Basically, they have to do the whole technology piece, which is really challenging.

What we’re doing took a gigantic amount of resources to actually accomplish. Our business proposition is that it doesn’t make sense for every company to do that; we’ll do it for you. Pretty much all the robots that I’ve seen at this meeting are simple robots that roll around on four wheels and see stuff; there are some that have grippers, some humanoid robots, or combinations of those things. I think all of those robots could benefit from the technologies that we’re developing.

Another thing that we would like to see happen in the robotics industry is for it to be much easier for new companies to form around building robots. There’s a huge economic barrier to getting going, because the computing systems that you need mostly don’t exist and you have to build them from scratch. The people who really know robotics, or who have deep domain expertise in whatever problem they’re trying to solve – whether it’s drilling holes or sweeping floors or cleaning windows – shouldn’t have to know the entire software stack on a modern system on a chip. It’s crazy.

If we can get these technology pieces together and available – and that’s our plan of course – then I think there will be an explosion of new robotics companies. Two guys in a garage out of engineering school can build a robot, rapidly prototype it, show that it works, take it to an investor and validate that they can do what they say they want to do with very little investment. That will enable an explosion of robotic applications and different niches, and that’s the world as we see it in the future. Sort of what we have now, except it would be vastly larger.

And of course there will be some big consumer plays, where everybody’s got the robot that cleans the table, or whatever the killer application is, and we’re still talking to people about that.

Audrow Nash: In developing this technology, something like neuromorphic computing, what are some of the major challenges you’ve encountered, and what lessons have you learned from them?

Todd Hylton: The major challenge now and for the foreseeable future will be getting sufficient computational capacity on board small mobile robots. One of the ways we are working to mitigate that is to shift part of the computation off into the cloud. That turns out to be a very complicated problem, too. It’s not as simple as saying, “There are big computers in the cloud and we’ll just do it all there,” because there are all sorts of issues around moving data and latency, so it may make sense for some applications but not for others.
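
A back-of-the-envelope version of that local-versus-cloud trade-off: offloading only pays when the round trip (upload, remote compute, download) both beats running the job on the robot and fits the task’s latency budget. The numbers below are illustrative assumptions, not measurements.

```python
def should_offload(payload_mb, local_s, remote_s,
                   uplink_mbps=10.0, rtt_s=0.05, deadline_s=0.2):
    """Return True if sending the job to the cloud is worthwhile."""
    transfer_s = (payload_mb * 8) / uplink_mbps   # upload time
    cloud_total = rtt_s + transfer_s + remote_s
    return cloud_total < local_s and cloud_total < deadline_s

# A small state update is cheap to ship; a raw camera frame under a
# tight control deadline is not.
print(should_offload(payload_mb=0.01, local_s=0.50, remote_s=0.02))  # True
print(should_offload(payload_mb=2.0,  local_s=0.15, remote_s=0.02))  # False
```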

From my perspective, the biggest technical challenge is the computing capacity. If the neuromorphic computing technology matures rapidly, it would alleviate a great fraction of it. Most of the computation that we do now, even on eyeRover, is vision – vision algorithms of one kind or another – and that’s a real sweet spot for neuromorphic computers.

I think the other challenge for us is that the industry structure needs to change, and I’m not sure how long that will take. As a business, you’ve got only so much capital to work with, and you’ve got to become self-sustaining and profitable before you run out of money. That depends on the ability of the industry to adopt what we’re doing, even if they maybe don’t think about it the way we do yet. So that’s certainly a risk for us.

In terms of lessons learned, I would say it’s really challenging to take a state-of-the-art system on a chip built for a mobile phone and make it useful on a robotics platform. When we first started, we thought it was going to be a whole lot easier than it was. A lot of people talk about how mobile phone technology is going to enable robotics, and it will, but actually getting there from where we are now was a huge investment.

An additional lesson we learned is about the learning algorithms: there’s not one way to do them, there are many ways. There is no general-purpose learning algorithm out there – at least not yet. So you have to do some experimentation to figure out what the best ones are. I like to tell people that the work is the process of eliminating all the bad, complicated ideas in favor of the simple ones that work. Sometimes, though, when you find a simple solution you kick yourself and wonder, “Why didn’t I think of that a long time ago?” but that’s just the way it is. The goal is to find a simple solution, and that takes a lot of work. Brain Corporation has some very capable people working for it, but we’re just a small group of people. There are all kinds of people who can write these kinds of algorithms, and there’s no reason we shouldn’t make it possible for them to do that.

So part of our strategy going forward, hopefully sometime next year, is to make our Brain OS system available with APIs, so that if you are into AI, or neural nets, or machine learning, you could start writing your own learning algorithms. You already have the whole robotic platform, all the infrastructure and so forth, and you can focus on that piece, because that’s a lot of work too. If I can multiplex that out into the world then I think there will be a much faster spread of the technology, and I think it’s a much better business strategy for us as well.
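
The Brain OS APIs were not public at the time of this interview, so the following is purely hypothetical: a sketch of how a pluggable learning algorithm might slot into a platform-owned control loop, with the platform handling sensors, actuators, and the cloud side.

```python
from abc import ABC, abstractmethod

class LearningAlgorithm(ABC):
    """Contract a third-party learning module might implement."""

    @abstractmethod
    def act(self, observation):
        """Map the current sensory observation to an action."""

    @abstractmethod
    def learn(self, observation, action, feedback):
        """Update internal state from a demonstration or reward signal."""

class DoNothing(LearningAlgorithm):
    """Trivial example plug-in: always stands still, never learns."""
    def act(self, observation):
        return {"left_wheel": 0.0, "right_wheel": 0.0}
    def learn(self, observation, action, feedback):
        pass

def control_step(read_sensors, drive_motors, algorithm):
    """One tick of a platform-owned loop that hosts the plug-in."""
    observation = read_sensors()          # platform reads the sensors
    action = algorithm.act(observation)   # plug-in decides
    drive_motors(action)                  # platform drives the motors
```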

I think the other lesson I’ve learned is that it’s very challenging to try to build a new technology, build a new business model, and enter a new market at the same time. It forces us to make some educated guesses when it’s still quite unknown what the result will be. It takes a real leap of faith to say, “We’re going to build an eyeRover,” or “We’re going to spend a year building this little plastic robot that runs around so you can train it,” because if I don’t have it, I can’t show anybody what I’ve got. And then you say, “What if the eyeRover is the wrong sort of thing? What if nobody likes it?” or “What’s the application?” and so forth. You have a constant chicken-and-egg problem, but that’s also what makes it so exciting.

I have all these great people who want to work on the hard technology problems and the business problems, and fortunately there’s an appetite for it now, so investors are interested and that enables us to do it.

Audrow Nash: For those that have less expertise, what would be some ways that they can get involved, learn more and eventually contribute to this technology?

Todd Hylton: Currently the eyeRover is in a closed beta program – we’ve just got a handful of people using it so we can get feedback on what we’ve got and shake out the bugs that we’ve missed. But sometime next year it’ll be possible to buy that computing system from us – the boards and the software and so forth. The eyeRover is mostly 3D printed, so we’ll just make those files and the parts list available, and you can buy the board and build your own. We’ll probably put out a couple more robot designs that are 3D printed like that. If people are enthusiastic about it and have access to some resources like a 3D printer (and of course you can order 3D-printed parts online), then that’s one easy way to get going. You still need some degree of expertise with computers to use the board and change things on it. But it’s a Linux operating system; you basically turn it on, you plug the robot into your monitor and keyboard, and the screens come up. It looks just like your desktop, so it’s not nearly as intimidating as it used to be.

Audrow Nash: For Brain Corp, what is your future direction and what are some of your future goals?

Todd Hylton: It’s pretty clear on the technical side what we’re doing for the next year, and it’s pretty ambitious. We’ve got lots of developments going on: new hardware, new learning algorithms, APIs for developers. On the business side we are just breaking out of our box, so we are talking to a lot of people, and we need to refine our business model. We’re a technology vendor, essentially, but there are different ways to do that. How do you capture it in such a way that you provide the best value to the customer, but you also make money on it? In order to do what we really want, we need to someday be a really big tech company. So we’ve got to figure out a business model that makes it possible for that to happen.

Audrow Nash: Wrapping up, what do you think is the future of robotics?

Todd Hylton: Robotics is always the technology that’s just around the corner and never quite gets here. That says to me that it’s hard. You can ask yourself whether now is the time or not, but there are certain things that suggest that maybe now is the time, and of course if I didn’t believe it I wouldn’t be doing what I’m doing. On the one hand there is the explosion in low cost computing hardware – it’s gotten really cheap thanks to Moore’s Law. And that’s going to continue for a while so we’re going to get better and cheaper computers, and maybe the neuromorphic stuff will come along, too, and that will be a big tech enabler.

There has been a huge amount of work in neural nets, AI, and machine learning over the past three decades. It really hasn’t had much of an impact on robotics yet, but a lot of people know how to do that now. There are a lot of tools out there, and people are trained in it so you can hire them. That’s a definite tech enabler for robotics, too, that in the past hasn’t been as prominent as it is now.

On the economics side are the big tech companies, the Intels and Qualcomms and Googles of the world. They’re looking for the next big thing and they can’t avoid robotics. Everybody says we’ve got to have robots to take care of old people, and I will be one soon enough. And it’s true. We really do need them. There is going to be a huge demand if we can get the technology and the industry structured appropriately in time. That’s why I’m bullish on it and I think it is a good time for robotics. Another important thing is that we’ve done a lot of work studying the brain lately, and although we are never going to put a mouse brain on a robot, knowing something about how brains work is really important for giving us inspiration for how to do it in a different kind of technology, like a computing technology. So that’s also why I think it’s a great time.

Audrow Nash: Thank you.

Todd Hylton: Thank you.

All audio interviews are transcribed and edited for clarity with great care; however, we cannot assume responsibility for their accuracy.

 





Audrow Nash is a Software Engineer at Open Robotics and the host of the Sense Think Act Podcast.




