Origin Story of the OAK-D with Brandon Gilles

01 July 2022


Brandon Gilles, Founder and CEO of Luxonis, tells us his story about how Luxonis designed one of the most versatile perception platforms on the market.

Brandon took the lessons learned from his time at Ubiquiti, which transformed networking with network-on-a-chip architectures, and applied that mastery of embedded hardware and software to the OAK-D camera and the broader OAK line of products.

To refer to the OAK-D as a stereovision camera tells only part of the story. Aside from depth sensing, the OAK-D leverages the Intel Myriad X to perform perception computations directly on the camera in a highly power-efficient architecture.

Customers can also instantly leverage a wide array of open-source computer vision and AI packages that are pre-calibrated to the optics system.

Additionally, by leveraging a system-on-a-module design, the Luxonis team easily churns out a multitude of variations of the hardware platform to fit the wide variety of customer use cases. Tune in for more.

Brandon Gilles

Brandon Gilles is the Founder and CEO of Luxonis, maker of the OAK-D line of cameras. Brandon comes from a background in Electrical and RF Engineering. He spent his early career as a UniFi Lead at Ubiquiti, where his team helped bring Ubiquiti’s highly performant and power-efficient UniFi products to market.



Abate: [00:00:00] Welcome to Robohub. I’m Abate, co-founder of Fluid Dev, and today I have with me Brandon Gilles, CEO of Luxonis, maker of the OAK-D line of cameras. Super excited to have you on here.

Brandon Gilles: Yeah. Thanks for having me.

Abate: Awesome. So before we dive into Luxonis too deeply, tell us a little bit about your background. What was your journey like in your career and your life?

Brandon Gilles: It’s a great question. Elon Musk is probably, retroactively, my hero in terms of doing engineering things. So, I did an electrical engineering undergrad and master’s. I really just wanted to learn how the world works, and specifically how things like modern human existence were made, and how to further that craft of just being able to build all the amazing things that can be built in the world.

And so I just wanted to learn engineering, which was a naive, probably childish view of the possibilities of what you can cram into a human brain. Going into college, I was like, what do you mean you have to do only one of them? You have to do electrical or mechanical or civil, or go into physics or something like that.

Physics is probably the closest to learning them all, but electrical seemed like the one where I could then secretly do all of them, because it felt like it touched nearly everything. At least, I went to the University of Colorado, which was heavy on teaching software engineering and firmware engineering as part of the electrical engineering program.

And that touched nearly everything. So I had a subdiscipline there, but I already viewed that I had made a compromise by having to go into electrical engineering. And then once I got into electrical engineering, they were like, well, now you need to subdiscipline again.

Like, are you going to do radio-frequency or analog IC design? And I was like, what’s that? And they were like, well, you need to pick one; you can’t just be an electrical engineer. Largely with the help of my advisor, I was able to say no to that, and so I did about everything I possibly could in terms of trying to learn all the things you can do as an electrical engineer.

So I did aerospace engineering, did wireless charging. One of my mentors got Time’s Invention of the Year in 2007 for wireless charging, and I was graced with the opportunity to work under him. So aerospace, wireless charging, nitty-gritty power electronics, radio-frequency electronics; I even took that analog IC design course I talked about and did the radio-frequency equivalent of it.

I just tried to do as much as I could in electrical engineering. And through my career, I viewed it the same way: I just wanted to be able to touch anything and everything. I remember, when I was explaining why I chose electrical engineering, I was like, well, if I want to work for a Formula One team at some point (one of the engineers here actually competed in the Indy Autonomous Challenge, which kind of fits; it’s Indy, not Formula), I feel like electrical engineering gives me the highest probability that I’d actually be able to be involved with that.

With all the things I’m interested in. So that’s my background: electrical engineering, but just all over the place. And I saw an opportunity to get into AI and computer vision after one of my mentors hard switched from networking equipment, so switches, routers, Wi-Fi access points, outdoor long-distance stuff.

He told me AI was going to be the biggest opportunity of his career. And I had no idea what AI was. And so I switched industries again to get into computer vision and AI.

Abate: Yeah, it’s very interesting, and I think this is the path that a lot of people in robotics take as well. I was faced with the same crossroads and decided to do mechanical engineering, because that felt like it gets your foot in a lot of doors. Then I graduated and realized there was more I wanted to do than what was taught in school, which is where robotics came in, robotics being the field where you have your foot in every corner of the engineering space.

So that’s definitely what called out to me, and I think to a lot of other roboticists out there.

Brandon Gilles: Yeah, absolutely. It’s extremely multidisciplinary, and that’s why robotics is super cool. I think it was Kat at Open Robotics; I asked her why she was into robotics, why she does what she does, what got her into it. She was like, well, it’s just the coolest job you can have. There’s just not a cooler job than robotics.

And I was like, that’s a good point. And part of the answer is that it is so multidisciplinary. You’ve got computer vision, you’ve got physics, you’ve got route planning, you’ve got mechanical engineering, you’ve got mechatronics, you’ve got board [00:05:00] design, you’ve got power design, you’ve got systems engineering, and in some cases you also have aerospace.

She was sending some robotics system up into space.

Abate: Yeah. So, you know, you graduated with your electrical engineering degree and then you went off and worked on switches and networking equipment. I know that you did some work at Ubiquiti. And then you had this mentor who told you about machine learning and AI. What was that experience like?

Brandon Gilles: Yeah, I was working at Ubiquiti, huge fan of the company, still a huge fan of the company. My whole career path was enabled by Ubiquiti and the fine folks there. Robert Pera, the owner, I owe a huge thanks to. And then John Sanford, who had worked with Robert Pera for a long time, was another one of those mentors, and Ben Moore was another.

John Sanford was the CTO there, and things were going great. There’s an expression that Robert, the owner, taught me, which is “winning cures all.” In these companies where you have infighting, or one person hates someone else, if you can just fix the problem of not winning, then people will all just be happy. Once you’re winning, all those problems go away.

When you’re not winning, that’s when all those problems come up. And we were winning, and winning really big; we had hit the winning-cures-all threshold and then pole vaulted way past it. And then John Sanford, the CTO, resigned, and I was like, what does this mean?

And so I really interviewed him on it. Ultimately he flew out to Colorado, because I asked him so many questions, just so we could sit down together for a day and discuss it. And the TL;DR was he was leaving because he viewed AI as the biggest opportunity of his whole career, and he didn’t have a little career.

He had founded multiple companies that had gone to hundred-million-dollar-plus valuations and sales, and he had personally done all of that. He had mentored someone who became the youngest billionaire in the world, directly helping to scale that company to a multi-billion-dollar company.

And those were just the things I knew about. He had this huge impact on all sorts of design things worldwide, and his tools were used by all sorts of engineering companies behind the scenes. So him saying that AI was the biggest opportunity of his career really landed hard on me, and hence why he volunteered to fly out to meet with me. The only thing I knew about AI before that was that it was useless. My roommate in college, Albert Wu, was taking a course in AI in 2004, and he came over and I asked him, AI, what’s that about? And he’s like, it’s useless.

And I was like, really? He’s like, yeah, I’m programming Lisp, you can’t do anything, and this is just research. I don’t know if he used the term “AI winter,” he probably didn’t, but it really solidified the idea that we were in an AI winter. And so that was my last mental model: AI was useless.

And then John Sanford resigned and we had that whole conversation.

Abate: And what year was that?

Brandon Gilles: That was in, I think, 2016. Late 2016. And so that kind of burst that bubble. He told me about deep neural networks and machine learning and all these advances in computation being possible.

And one of the things that John had spearheaded is he actually used clusters of hundreds of computers and, what is it, genetic algorithms. So it’s evolutionary antenna design, effectively, where it self-experiments. He was already headed in the direction of AI, and that’s what pulled him into this.
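Genetic algorithms like the antenna-design work described here follow a simple loop: score a population, keep the fittest, recombine and mutate them, and repeat. A minimal Python sketch of that loop (the bit-string genome and the `sum` fitness function are toy stand-ins, not anything from the actual antenna tooling):

```python
import random

random.seed(0)  # make this toy run reproducible

def evolve(fitness, genome_len=8, pop_size=30, generations=60, mutation_rate=0.1):
    """Toy genetic algorithm: evolve bit-string genomes toward higher fitness."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)       # single-point crossover
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in a[:cut] + b[cut:]]      # plus random mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# "Antenna design" stand-in: fitness is just the number of 1-bits in the genome.
best = evolve(fitness=sum)
```

In the real antenna case the genome encodes geometry and the fitness function is an electromagnetic simulation, but the self-experimenting loop is the same shape.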

So he explained all that to me, and I was like, holy cow. And I started researching and digging into it more and more. The whole cell phone boom, the whole app store boom, kind of came and passed while I worked on nitty-gritty RF engineering stuff.

And like five years had passed, and I was like, “that would have been a good idea to get into.” I learned about AI in 2016 and realized 2012 was really the year to get into this. I missed by four-plus years, maybe five. But anyway, I had all my wheels spinning and my mind turning on all the potential here.

And that was really the seed for all of this, and the core reason that I didn’t continue working at Ubiquiti, because I loved working there.

Abate: Yeah, you definitely always feel in the moment like you’re a little bit late to the show, that there are already a lot of players in here. And it’s only really in retrospect, years later, that you realize it was still [00:10:00] a good idea to just jump in head first back in 2016.

Brandon Gilles: Yeah. And specifically, so I didn’t jump into this, but what had happened is, in cloud, starting in 2012, all these companies laid the groundwork and were acquired to form Siri and Cortana and Alexa, and all of those were raw cloud-based, right? And all those services still are fundamentally cloud-based, except for the wake words, effectively.

So cloud, it just felt like, whoa, missed that whole boat. But edge was still relatively new; maybe I was a couple years late. And then there was embedded. That was the other thing in college: embedded systems was a core focus of mine, so much so that I was the teaching assistant.

That’s how I paid for grad school, as the teaching assistant for the embedded systems design class. Embedded was near and dear to my heart; a lot of things I did, whether RF or space or what have you, all involved some embedded system. And it seemed like there was largely only one player, covering only one niche, which was OpenMV with Kwabena, who’s well known in the industry, and his niche is embedded AI and CV.

I think he’s the go-to platform; Arduino has partnered with OpenMV. So I saw, okay, cloud, I’m really late: everyone’s already sold their companies to Apple and Google and Microsoft and so forth. Edge, it seems like there’s an opportunity. And that’s what I initially pursued.

And then with embedded, actually being able to have an embedded product that does all that, a little depth camera or system-on-module you can now put in some tiny standalone thing, it felt like the market was actually wide open. So I started in edge and then moved more into purely embedded, where it was really early in the market.

And actually the concern was, is it too early? So it kind of flipped on its head. So I would caution, and actually focus on, you know, the most important thing is team, but timing is really important too. And I would say, on that maybe-four-years-late point: I have since seen companies go nearly purely into cloud in that time, starting about the same time Luxonis did, and just totally dominate the market, like hundred-million-dollar market cap companies.

So I think my initial read probably wasn’t wrong, but I was a little terrified to step into something where potentially we’d be competing with folks that have a four-year advantage, if that makes sense.

Abate: Yeah, it’s definitely very intimidating. And so with Luxonis, you’re taking this to the edge: machine learning, computer vision, and all of these things on device. Can you walk us through what your company is offering, and how it stands out from the legacy offerings already in the market?

Brandon Gilles: Yeah, that’s a great question. The story behind founding the company is that I saw there were all these use cases, if you could use this on the edge or if you could embed it. And the first thing I went after, what I actually intended to found as a company: I love looking at things as basis functions, like in math, the basis functions on which you can build all sorts of things.

In technology, new basis functions arise, and then you can build new things because you have those basis functions. And so the thing that I sought to build, and hopefully this isn’t too circuitous of an answer, but it flows into our product offering, is: I’ve always liked laser tag, ever since I was a kid, and then I grew up to be an electrical engineer.

I was like, oh cool, the new basis functions that caused laser tag to exist were laser diodes, right, and photo sensors and so forth, and electrical engineers said, I can make a game out of this. And I viewed a new set of basis functions in all this edge AI, computer vision, spatial sensing, high-resolution simultaneous localization and mapping, and so forth.

There’s a new set of basis functions. Some clever person in the 1980s saw laser diodes and realized that’s a basis function to make a cool game. I saw those things, spatial sensing, AI, and so forth, as maybe a new basis function to make a real-life action sport, like video game playing.

So imagine Halo, but the best Halo player is really good not just because he’s smart and clever, but because he can sprint faster than other people. And that’s actually what I personally started out wanting to build: a real-life laser tag with virtual reality.

So you’re in a physical space with physical walls, and they’re augmented in real time, both you and the other players. You’re playing physical people; you’re sprinting around. [00:15:00] So I was working on edge spatial AI stuff and trying to recruit game developers to make this whole virtual experience, so you’d have this very social, very athletic, new sport, effectively.

That was virtual reality. And what ended up happening is, when I was trying to recruit top tech talent around here in Colorado, when I met up with folks, there was tragic news about kind of a stereotypical Colorado thing, which is that we ride bikes everywhere.

We like to bike commute, free exercise and so forth. Four folks in my circle, it turned out, had been hit by distracted drivers while they were just riding their bicycles. Not bad people, just people who looked down at their phone at the wrong time. And my business partner had hit a street sign once doing the same thing.

He’s just lucky it was a street sign, not a person, and he keeps his mirror all mangled for that reason. When I found out about that… so one was killed just by a mirror: someone drifted out of their lane enough to clip the person and killed them. It was the founder of a hackerspace near me.

One got a traumatic brain injury, and two were bedridden for months with broken backs, femurs, and shattered hips. I kind of felt like my modern version of laser tag was really dumb after that. So I hard pivoted the business. But if you think about it, it was already kind of robotic perception, what you’d need for a robotic perception system, because you need to know where things are and what they’re doing.

It has strong corollaries with machine guarding, but it was more edge-based. So I hard pivoted myself. I actually had two co-founders at the time, and I was like, let’s just hard pivot, and they were like, no. So they stayed in their direction, and I started a new business, Luxonis. It was all about seeing if we could solve that problem, which brought us down this technical direction.

You know, we talked about cloud, and then edge, which is where I was working, because on that laser tag system you could have the equivalent of four MacBooks on you. You play for five or ten minutes; you could have a MacBook on your chest, a MacBook on your back, the equivalent of one on your head, maybe additional processing, and arm guards and stuff.

Right, so it was very edge: you can put MacBooks on it. But this safety solution, trying to protect people, both the driver who accidentally clips and kills someone because they’re text messaging and the person on the bike who gets killed, required it to be an embedded system with all this capability: spatial sensing; high-resolution, high-frame-rate, multi-sensor depth sensing, so you can know where a vehicle is in physical space and what its trajectory is; and AI.

AI so you know it’s a vehicle and not just another gaggle of bikers or something that poses no risk, and then CV, because you need to tie it all together. So it took what I was already working on, which was very similar, knowing what things are and where they are in the physical world in real time so you can augment the world, and moved it from edge, where it’s a lot easier, to an embedded system, where it’s a lot harder.

And I was curious if we were at that point yet. So I went to a bunch of conferences; I actually got to talk to the CTO of Waymo at one. I was that dude who obsessively goes up to the stage first to try to talk to him. And everyone was like, yeah, I think that’s probably possible now, I think you can do that.

Maybe size, weight, and power is going to be a concern. Movidius had just come out, which was this network-on-chip architecture. It was the first chipset in the world that allowed you to take this four-MacBook-level thing and put it in embedded systems. So it could be an…

Abate: What is network-on-chip, exactly? Let’s unpack that term a bit.

Brandon Gilles: Yeah, that’s a great question. In the networking world, network-on-chip is the terminology, since I was already coming from networking. What happened is the whole industry went from being CPU-based, where you have like a thousand-watt TDP system, total dissipated power, TDP.

And you’d just go with a faster processor to solve your routing or switching or Wi-Fi problems; it’s like the host of the Wi-Fi. And some chip architects looked at it and said, well, you’re sure doing a lot of the same functions. What if we actually just baked those into silicon? For all those specific functions, instead of having a really fast CPU, you have all these disparate hardware blocks that perform the functions you would be running on a CPU.

And you just have a little CPU that just coordinates those. And so ubiquity, that was like the, the core technical insight that allowed ubiquity to do so well is Ubiquiti is software company primarily that made it, so these, network on chip architectures that [00:20:00] took, say a total dissipated power of a thousand Watts for given performance down to five watts.

The challenge with network-on-chips is that instead of one CPU, where you’ve got to learn the instruction set for that one CPU, there are 38 architectures. So you have to have a software team capable of learning those 38 architectures, because they’re all different chip architectures, literally designed from the ground up for a specific task.

So you have to learn those and get them coordinated. The advantage is, if you can solve that software problem, you go from a thousand watts, with comparatively high latency and high cost, down to watts and low cost. That’s why UniFi access points and EdgeRouters and all those were able to vastly outperform these custom-built CPU systems.

Because they were network-on-chip. And the reason network-on-chip has traditionally fallen apart in the industry is that lack of software. That’s the core problem. Software is the hard part, because you’re having to write across all these disparate architectures, and you usually have these really high-speed caches connecting the disparate hardware architectures so you can build these pipelines.

In networking, those are functions like routing, packet filtering, deep packet inspection, access point functions, TDMA, and all that. Having come from that world and seen it just dominate the industry, the same applies to the computer vision world. Movidius was one of several that were early to see that, hey, just like packet switching, routing, and access points have dedicated functions that are always running, computer vision is actually even better suited for that, because you have things you just know you’re always going to want: warp and de-warp, feature extraction, vectorized processing, neural inference acceleration, and all of these things that go together on robotic perception systems.
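To make the contrast concrete, here is a toy Python model of the idea. It is purely illustrative: the stage names are invented, and real network-on-chip blocks are fixed-function silicon, not Python functions. The point is the shape: dedicated blocks each do one thing, and a small coordinator only sequences them into a pipeline.

```python
# Each "block" below is a stand-in for a fixed-function hardware unit.
def dewarp(frame):
    # Pretend lens de-warp: normalize pixel values against the frame minimum.
    lo = min(frame)
    return [px - lo for px in frame]

def extract_features(frame):
    # Pretend feature extractor: keep only the non-zero responses.
    return [px for px in frame if px > 0]

def infer(features):
    # Pretend neural-inference block: threshold the summed response.
    return "object" if sum(features) > 10 else "background"

class Coordinator:
    """The small CPU in a network-on-chip: it only sequences the blocks."""
    def __init__(self, *stages):
        self.stages = stages

    def run(self, frame):
        data = frame
        for stage in self.stages:
            data = stage(data)   # hand each block's output to the next block
        return data

pipeline = Coordinator(dewarp, extract_features, infer)
result = pipeline.run([3, 7, 3, 12])   # a tiny fake "frame"
```

The power win comes from the blocks being silicon rather than instructions; the software problem described here is exactly the coordination layer this `Coordinator` sketches, except repeated across dozens of dissimilar instruction sets.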

So Movidius was the first, maybe not to see that, but the first to execute well on it in the computer vision space. There were other startups around the world doing this; Movidius was a startup that was then acquired by Intel. But a lot of them ended up in this Sophie’s choice area where it’s like, okay, we’ve got our AI engine working, and now USB 3 doesn’t work.

They’re like, we fixed USB 3, and now feature extraction doesn’t work. And the key with these chips is that basis function thing: you need it to run as an embedded system, so it can be standalone and perform these functions and offload your robotic perception.

You need high resolution and high frame rate. You need spatial sensing for robotics. You need AI, and you need the computer vision. And all of these other competitors have these Sophie’s choice stories where you delete one, and you’re like, well, it’s kind of useless without AI, right? Or on the computer vision side: wait, your video encoder doesn’t work?

So that’s why we chose Movidius: they were the first to execute on all of the core things we viewed as needed to solve this safety problem, which is fundamentally a robotic vision problem, because it has all the things a robot needs. In fact, the solution to that safety problem is just a robot.

It’s a little robot that tells when you’re at risk and can honk a car horn, or vibrate your seat post, or make a notification, or flash super bright LEDs that you otherwise wouldn’t be able to flash all the time because you’d run out of battery in five minutes. So it’s a robotic actuation problem, specifically.

And we saw that this chipset existed, but there wasn’t a platform for it yet. It’s really tricky to build platforms for these network-on-chip architectures, and we had seen in tech history a lot of network-on-chip architectures fail because no software platform was adequately written for them.

And so it’s a really long answer and I apologize, but the, the, the core of what we do is then the software that, that makes it. So you can take advantage of going from like this thousand watt TDP system to a complete robotic perception thing where, where you can just define the pipeline that you want to run.

So, an open-source example that a hobbyist in France built using this pipeline: he uses our IoT series, which runs completely standalone, it’s this one. And it runs pipelines of depth processing and AI and computer vision so that it’ll find him wherever he is in his house, based on a person detector.

Once it finds him, it runs skeletal pose, all on camera, so it can figure out where his hands are, even when they’re far away, where a hand detector normally wouldn’t be able to pick them up. Then he uses the guide of where the wrist is to feed that area into a palmar-and-dorsal detector, which is kind of a short-range detector.

And [00:25:00] because he’s using that approach, he can see it up to, I think, eight meters or something, so really far away. From there, he does full skeletal hand pose, and since we have a 12-megapixel camera on the standard models, he actually gets really high resolution of the hand, so he can do a full 3D hand pose. From there, he passes it into American Sign Language character recognition.

So now he has where his hands are and what American Sign Language character they’re showing, basic stuff like 1, 2, 3, 4, 5, or a thumbs up or what have you, anywhere in his house. Now he just never has to have a remote for anything, for his lights. It’s that same sort of robotic perception you’d use for machine guarding.
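The cascade he describes, wide-area person detection guiding pose, pose guiding the hand crop, and the crop feeding sign recognition, can be sketched as plain control flow. Everything below is a hypothetical stand-in (stub detectors and a fake frame), not the real models or the DepthAI API; it only shows how each stage narrows the search for the next:

```python
def detect_person(frame):
    # Stand-in wide-area person detector: returns a bounding box or None.
    return frame.get("person")

def skeletal_pose(frame):
    # Stand-in whole-body pose model: returns landmarks, including the wrist.
    return frame["pose"]

def crop_around(point, size=128):
    # Cut a small region of interest around a landmark.
    return {"center": point, "size": size}

def hand_pose(roi):
    # Stand-in for the short-range palmar/dorsal and 3D hand-pose models.
    return "open_palm"

def asl_character(hand):
    # Stand-in ASL character classifier.
    return {"open_palm": "5"}.get(hand)

def run_cascade(frame):
    person = detect_person(frame)
    if person is None:
        return None                      # nobody in view, stop early
    pose = skeletal_pose(frame)
    roi = crop_around(pose["wrist"])     # the wrist guides the hand crop
    return asl_character(hand_pose(roi))

# A fake "frame" standing in for an image plus upstream model outputs.
frame = {"person": (10, 10, 80, 200), "pose": {"wrist": (42, 97)}}
sign = run_cascade(frame)
```

The design point is the one in the interview: each cheap, coarse stage restricts where the next, more expensive stage has to look, which is what lets a fixed 12-megapixel sensor resolve hands at several meters.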

And that’s the core of what we build. We build the hardware, of course, so folks can just buy a camera and bolt it to something. We’ve got USB 3 at 10 gigabits per second; we’ve got Power over Ethernet, IP67 sealed, with M12 X-coded connectors and hardware sync output.

So we build all the hardware layers, we abstract there, and we have system-on-modules so folks can quickly customize. A lot of this, I think all of it actually, has open-source reference designs. So if you like this but need a different field of view, or a different number of cameras, or a different form factor, it’s built on a system-on-module.

So you can go build your own custom thing. But most importantly, the firmware, the software, AI training and simulation, and then cloud deployment, management, and insight are where we add the most value, so folks don’t have to go reinvent that wheel when they’re building a robotic system. Because we did; we saw that there was no platform like this if you needed all of those.

And so we saw a huge opportunity to allow folks in all of these disparate robotics and automation industries to not have to redo all this work. And we love building platforms.

Abate: Yeah.

Brandon Gilles: We saw it as a huge opportunity.

Abate: Yeah.

You can see that when you’re deciding to build a robotic platform, and you have multiple different pieces and sensors and all of these things that you’re trying to pull together, and then you write your own software packages for each, what you end up with at the end of the day is something that consumes a lot of battery power.

And that right there can be a stopper for a lot of robotics projects that you want to make commercial. So seeing something that goes from a thousand watts down to, you said, five watts, that’s now something even a USB port can power. That definitely is something that enables robotics. So, you mentioned a lot of different product offerings that your company is selling.

Why? What was the reasoning behind going with multiple different hardware platforms? And what are the main sellers among these product offerings?

Brandon Gilles: Yeah, that’s a great question. We were pretty new to the market, and the whole market’s new, right? Ten years ago, a lot of the robotics problems that are now just standard engineering problems were kind of science fiction. And so everyone’s discovering a lot of things.

And we’re all kind of discovering together, like, hey, there are all these robotic perception tasks that we keep having to solve in all of our disparate industries, whether you’re working on a tennis-court-cleaning robot, or a warehousing robot, or a grocery store robot, or a fish-counting robot.

So there’s just a lot of learning. And we believe that our customers are the best folks to design our products. So we’ve architected everything to be able to iterate fast, and to not spend a bunch of time thinking we’re geniuses who can make the best product for the market, but instead figure out how to just build products, see what fits and what doesn’t, and decide how we move forward and what we double down on.

So before we actually had anything done, we reached out to all the smart people we could and asked them what they need and what their pain points are. The number-one-voted thing, by people who weren’t paying for anything but just throwing out an opinion, was this thing, which is actually a HAT for a Raspberry Pi.

And this was by far the winner; maybe 90% of people said, that's what you should build, that's your killer product. We made that. But before we made it, we got all sorts of other feedback. This is what I thought was going to be the killer product, which was to integrate a Raspberry Pi Compute Module in the back and have all of the things I talked about.

So you literally just provide power and it boots up doing all the things, right? Depth sensing, object detection. You just plug a monitor or a little touchscreen into it. I thought this thing was going to be the hit. And then Kwabena at OpenMV, he was an official advisor, he was like, [00:30:00] nah, your OAK-D is going to be a hit. It wasn't named OAK-D yet, but he described exactly this.

Don't listen to everyone else, just build this. And so we got that feedback. Most of the market, 90%, said to build the Pi HAT. I was convinced the Raspberry Pi Compute Module one was the thing. Kwabena, who was right, said build the OAK-D.

Abate: And the OAK-D, just describe what that is.

Brandon Gilles: Yeah, so the OAK-D was: why don't you have a triple camera that's just USB powered?

So it gives you depth perception and 12 megapixel color. And all of these would have the same core functionality: 12 megapixel color and depth perception. It's just the interfacing and form factor that differ. The Pi HAT one just plugs onto a Pi, and so it gives all this robotic perception directly as a HAT to a Pi, with these

flexible floppy flat cables, as I like to call them, so you'd modularly place the cameras. This one is all integrated into just the one thing. And the OAK-D, originally it was just a board with a USB-powered interface, so it's just a USB cable going to it. And so we had all this disparate pull, where it was hard to tell who was right.

Kwabena seemed like a super smart guy, and I was inclined to believe him. 90% of the market was saying to build the HAT. And my conviction was that the compute module version was the thing that mattered. And that, in combination with one of our first customers, made us realize that the most important thing would be to just be able to iterate and build things cheaply.

So we actually decided not to build any of those as our first product, and to build a system on module instead. Because we said, well, this is probably going to be a problem for robotics generally, and it's already a problem for us: what is the right form factor? Everyone's saying different things. So we built the system on module, and that let us make the Pi HAT in four hours.

It was four hours of design work based on the system on module. The OAK-D design was only maybe a day or two, because all the complexity is on the system on module. And then the compute module version was the most complex, because we actually had to design a whole Raspberry Pi into it, so that was about a week.

And so what that allows us to do is spend the core effort on the system on module, and then we can explore the trade space really efficiently. We don't have to make a big bet on who's actually right. It turns out that if we were just going to bet, we should've just asked Kwabena and done what he said.

Abate: Just to dive in on that a little bit: when 90% of your customers are asking for something, and then you have a feeling, and one of your advisors has a feeling, that they're wrong, how do you go against that amount of data? How do you go against what everybody else is saying,

and not just jump in and build a million Raspberry Pi HATs?

Brandon Gilles: Yeah. Well, we didn't go against it, largely. I love starting with the why on things: why do folks want things? And one of the areas where I think we got lucky is that we viewed it as, okay, what the market really wants isn't any one of these; what the market wants is flexibility.

Clearly there was a lot of disparate demand. And we also got lucky there, because one of our customers was just super smart. We were presenting this to them, and they wanted a fourth thing, which, out of respect for their privacy, I won't say what it is. And so they came back to us and they're like, well, clearly you should just make a system on module, right?

Like, if you're getting all these disparate needs, and we need a system on module, it sounds like you could build all those products off that system on module. And then even if the four we're thinking about right now aren't the hit, you'll be able to explore other products very quickly and easily. Which we did.

So then we made the OAK-D variant that's all included, with an Ethernet interface in here. This is water sealed; it's IP67. And it uses that same system on module, so we were able to make it really quickly. And then we also made some IoT versions, which I was talking about, that a gentleman in France used.

So we actually didn't go against the market. We just used the confusion we were getting from the market as a sign of how we should architect things: so that we could move nimbly at low cost, with the help of an ecosystem of smart people who took the data we had and told us the smart thing to do.

Abate: Is this something that a lot of other companies are also using, to build multiple different hardware platforms? And are there any negative trade-offs that come from this approach, as opposed to one singular, fully integrated product?

Brandon Gilles: Yeah, that's a great question. To jump to the second part of it: so [00:35:00] we use the system on module approach, and we made the OAK-D, which actually has the system on module right in the back of it. And we made the Pi HAT, where the system on module literally, if I can do it live, clips on right here.

So this is a system on module, and then we made this Raspberry Pi Compute Module version that has the system on module behind that black heat sink. And what we saw is that no one wanted these. We don't end-of-life anything, so there are actually a couple of customers who still buy these, and we'll support them forever.

And the system on module makes that easy. The HAT, some people want it and they like it, but pretty much everyone wanted the OAK-D. And so we made our Series 2 OAK-D, which actually doesn't use the system on module, and as a result it's a bit smaller. So there is a trade there on flexibility, though we could also have made it smaller with the system on module.

Abate: Not just that, but also cheaper, right?

Brandon Gilles: Yeah, it's less expensive and more reliable to produce, because it's a simpler product. You know, the system on module is still really beneficial when folks are integrating into a more complex product. The more complex the product, the more you want a modular design, because you might have some other single-board computer in there; we have a lot of folks who use this as the front end of a perception system to, say, a Jetson Nano or a Xavier.

And so if they mess up their baseboard, or the yield isn't right, they want to be able to pop the Xavier module off and pop our module off in production and test, and use them on a different piece of hardware. But when it's just a simpler device, there isn't a huge advantage to a system on module, because our yield is like a hundred percent now.

So that's the trade, when it's just a simple central camera. And what we do now is do all our first designs of a new product using the system on module. And then if that looks good and the market likes it, we'll make a chip-down design that we sell at volume. What that serves is people who just want a smaller, less expensive, more thermally efficient design.

They're just buying a standard product; that's chip-down. And then folks who want to integrate into their more complex system will generally use the design files of the open-source version based on the system on module. So that's how the ecosystem works now. And then, to your question on trades, we have a whole slew of customers.

So one half of the customers buy standard products, like the OAK-D-PRO-POE, and bolt them to a robot; thousands to tens of thousands tends to be the volume. And those can happen fast, because you have robots, and you're replacing existing sensors, or you're doing a whole new build of robots and using these.

Then we have a class of custom products that get built, and that's its own whole side of the business. Those take a lot longer, I call it pi years, to actually be built. And those are built from the ground up around our system on module. And this is clutch, because it allows them to de-risk their design. Generally those also have other things in them.

And that's where that modularity is really beneficial at production time.

Abate: Yeah. De-risk is an excellent word, because I think one of the greatest things about buying this product is that you're buying a piece of hardware, but on top of that hardware, you're getting access to a large library of different software packages, for things like gesture detection, hand detection.

And maybe you can dive in a little bit more into what all of those offerings are.

Brandon Gilles: Yeah. So, like we talked about in terms of the functionality of the device, the thing that was missing in the market was being able to embed it: small, low power, fast boot, performant, high resolution, high frame rate,

multi-sensor spatial sensing, onboard AI and CV. And that's the core of everything that we're focused on, because we view that as what robotics needs, right? When you're building a robotic system, you end up needing all of those, all the time. There are other industries that also need those, like automated sports filming,

which I think just comes down to what I call a trapped robot. Maybe you're not physically actuating something, because you're just panning across multiple image sensors, but you're replacing what you could otherwise architect as a full humanoid robot with a camera.

Right. So that's the core of it: all that robotic perception. But there are layers, and I view it as five layers of abstraction. One is hardware, like finished camera products or the system on module, so you get a leg up; you don't have to build all that. Then the next is firmware, and that's where a ton of our work goes, [00:40:00] making it so that you have this high-performance system

that's still abstracted up to the software layer. There, as a robotics engineer, instead of having to deal with that network-on-chip, which is really painful engineering, or having to deal with a really high thermal output system because it's less efficient than network-on-chip, we have a node-and-graph pipeline builder system that allows you to just describe, like I talked about with gesture control, the graph of robotic perception that you want to do.
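The real DepthAI API is more involved than this, but the node-and-graph idea can be sketched in plain Python. This is a toy illustration only, not the actual Luxonis API; all node names and stages here are invented:

```python
# Toy node-and-graph pipeline: describe the graph first, then push data
# through it. This only illustrates the "describe, then execute" idea;
# the real DepthAI pipeline builder differs.

class Node:
    def __init__(self, name, fn):
        self.name, self.fn, self.outputs = name, fn, []

    def link(self, other):
        """Feed this node's output into `other`; returns `other` for chaining."""
        self.outputs.append(other)
        return other

def run(source_node, frame):
    """Push one frame through the graph, collecting results at leaf nodes."""
    results = {}
    def visit(node, data):
        out = node.fn(data)
        if not node.outputs:          # leaf node: record its result
            results[node.name] = out
        for nxt in node.outputs:
            visit(nxt, out)
    visit(source_node, frame)
    return results

# Hypothetical stages, mirroring a gesture-style perception graph:
# camera -> detector -> crop of the detected region.
cam = Node("camera", lambda f: f)
detect = Node("detector", lambda f: {"bbox": (10, 10, 50, 50), "frame": f})
crop = Node("crop", lambda d: d["bbox"])

cam.link(detect).link(crop)
print(run(cam, "frame0"))   # {'crop': (10, 10, 50, 50)}
```

The point of the describe-then-run split is that the whole graph can be handed to the device up front and executed on-camera, rather than shuttling every intermediate result back to the host.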

And those things fight against each other, right? Abstraction while still being performant. So that's why we spend a bunch of time there. And then on top of that are the examples. We have things for machine guarding: telling how far someone is from a dangerous machine, to protect the driver of a machine from hurting someone, or to protect someone who might be walking toward the woodchipper, right,

or walking into the stream of some dangerous material in an industrial setting, and so forth; to tell where they are, where their hands are. There are a lot of examples for that. We literally have one where, since we didn't want to risk anyone's hands following an example,

we set a Coca-Cola or a wine bottle as dangerous, and whenever your hand gets in physical proximity to it, in full 3D-space proximity, it triggers a warning. I think the warning that's printed is "it's not 5:00 PM yet". But we have these across all sorts of industries, whether it's machine guarding or following; we're going to have more examples even with ROS, like robotic navigation, that whole stack running in full ground-vehicle autonomy.
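The machine-guarding check described above boils down to a 3D distance test between a tracked hand and a flagged object, both located by aligning detections with the depth map. A minimal sketch, with made-up coordinates and threshold:

```python
import math

# Hypothetical machine-guarding check: warn when a tracked hand gets
# within a threshold distance (in metres) of a "dangerous" object.
# Both 3D points would come from aligning detections with stereo depth.

def too_close(hand_xyz, object_xyz, threshold_m=0.3):
    """True if the hand is within threshold_m of the object in 3D space."""
    return math.dist(hand_xyz, object_xyz) < threshold_m

# Hand about 10 cm from the wine bottle -> trigger the warning.
if too_close((0.05, 0.00, 0.50), (0.05, 0.10, 0.50)):
    print("it's not 5:00 PM yet")
```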

And I'm spacing; there are so many. I think we have 250 different AI architectures that are converted, and somewhere around a hundred different examples that span all sorts of industries. Whether it's lossless zooming, which is that trapped robot case where you've discovered where the action is, you run the image sensor at 12 megapixels,

and then you zoom in and get a two-megapixel output following the action in a sport. Or, similarly, you're trying to find a feature on a product in automated QA or robotics: you're looking at the full 12 megapixels, you find the feature, AI-guided, and then you crop out of the 12 megapixels to get that information.
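The lossless-zoom crop is just a mapping from a detector's normalized bounding box to pixel coordinates on the full sensor frame, with no rescaling. A sketch, assuming a 4056 x 3040 (roughly 12 MP) sensor; the resolution and bounding box here are illustrative:

```python
# Hypothetical lossless-zoom crop: a detector returns a bounding box in
# normalized [0, 1] coordinates; map it to pixel coordinates on the full
# 12 MP frame and crop without rescaling.

SENSOR_W, SENSOR_H = 4056, 3040   # assumed ~12 MP sensor resolution

def to_pixel_rect(norm_bbox):
    """(xmin, ymin, xmax, ymax) in [0,1] -> integer pixel rectangle."""
    xmin, ymin, xmax, ymax = norm_bbox
    return (int(xmin * SENSOR_W), int(ymin * SENSOR_H),
            int(xmax * SENSOR_W), int(ymax * SENSOR_H))

# A detection covering the centre of the frame.
rect = to_pixel_rect((0.30, 0.30, 0.70, 0.70))
w, h = rect[2] - rect[0], rect[3] - rect[1]
print(rect, w * h)   # the crop is ~2 MP taken losslessly from the 12 MP frame
```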

And then you do, say, OCR off of it. We have an OCR example doing exactly that, and one for license plates. So there's this whole suite of examples that you can base your thing off of; you're like, that's pretty close to the features that I'm looking for. And then above that, we have open-source retraining and training notebooks that you can use to train for your specific application.

And then, as you get more serious with training, we plug in very cleanly with Roboflow, who we recommend for dataset management. So when you move beyond a prototype that maybe just used our open-source scripts to train, and you're starting to put your model into production, you say, okay, I need to figure out what is in my dataset, how to balance it out,

and what other data to collect to really get my model to peak performance. So that's kind of the AI side. And then we help with simulation. We have plugins for Unity, so you can simulate things, which can be extremely useful when you're architecting a robotic perception system, because you can just ask: what if I put a camera here, or here?

And how does this neural network work on this data? You can generate a million images to train your AI model, so while you're still architecting your neural model or experimenting with your pipeline, you don't have to go pay, you know, $4 million to label a million images. You can just do it overnight in Unity and then get metrics for the whole pipeline's performance.

So that's where the Unity plugin plays in. And then the next layer above that, which isn't out yet, the fifth layer, is cloud insights and management of all of these. So, there's a ton of interest in strawberry picking, for example, as a robotic problem. And strawberry picking, I like to pick on it, pun intended, because it's very visual: what it's doing, and what things can go wrong.

So first you want to run an object detector, right? Where's the strawberry? And then from there, you want to run an image classifier, or generally multiple image classifiers.

They'll give you information like: how ripe is it? Does it have mildew? Does it have some other defect? Is it the result of over- or under-watering, over- or under-nutrients, or something lacking in the soil? And then based on that, you want to make a decision: do I want to pick it? [00:45:00] Generally the answer's yes, I want to pick it, but maybe it's just not ripe enough.

And then, once you've decided you want to pick it, you want to pull out, say, a semantic map of the strawberry. That's another thing that would run on camera, so that you can soft-grip it. And then from there, you need to align that with depth, so you know where it is exactly in physical space, and where its edges are in physical space.
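The staged pipeline just described, detect, classify, decide, then align a grasp target with depth, can be sketched as a chain of plain functions. Every detector and classifier output below is a hard-coded stand-in for a real neural-network result:

```python
# Sketch of the on-camera strawberry pipeline: each stage is a plain
# function, and all outputs are invented stand-ins for real model results.

def detect(frame):                      # stage 1: where are the strawberries?
    return [{"bbox": (0.4, 0.4, 0.6, 0.6)}]

def classify(berry):                    # stage 2: ripeness / defect classifiers
    berry.update(ripeness=0.9, mildew=False)
    return berry

def decide(berry):                      # stage 3: pick it or leave it?
    return berry["ripeness"] > 0.7 and not berry["mildew"]

def grasp_target(berry, depth_map):     # stages 4-5: semantic map + depth align
    # In the real pipeline a segmentation mask is aligned with stereo depth;
    # here we just attach a fake 3D point.
    berry["xyz_m"] = (0.1, 0.0, 0.5)
    return berry

def process(frame, depth_map):
    picks = []
    for berry in detect(frame):
        berry = classify(berry)
        if decide(berry):
            picks.append(grasp_target(berry, depth_map))
    return picks   # a few bytes per berry, versus gigabits of raw video in

print(process("frame", "depth"))
```

The output of `process` is the tiny "what should the arm do" summary Brandon describes next, which is why the data rate leaving the camera collapses so dramatically.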

The interesting thing about that robotic perception pipeline is that you go from 7.5 gigabits per second of data coming into, say, an OAK-D Pro POE, just from the sensors, and that perception pipeline, running entirely on the camera, takes that and produces two kilobytes per second: where are all the strawberries, what do I do with the strawberries,

and, if they're ripe enough, how do I zero-cost sort them by ripeness? Because you can pick the strawberry, and a huge business value in strawberry picking is: if it's very ripe, put it in a container of all very ripe, and that goes from the farm to a farm-to-table restaurant.

So those are going to be perfectly ripe, right as they're being eaten that night at dinner. If they're not quite that ripe, put them in a different container; you're sorting as you're picking, so it's practically zero cost, and that gets shipped to Boston to go to a store shelf.

And it ripens on the way. So: 7.5 gigabits per second down to two kilobytes per second of what the robotic arm should do, all on camera. That's amazing, and it's really, really useful. But look at it from a scale perspective. We're all about making this easy for robotics engineers, robotic perception engineers; we view perception as the hard part of robotics, the really hard part. You know, Johnny 5 in Short Circuit was pretty cool mechatronics and robotic motion.

If you think about all the stages: you've got object detection, a bunch of image classifiers, depth sensing, semantic depth. Oh, and an edge filter as well, to get fine edges, because the semantic segmentation might not be perfect, and with edges you can get much better results. That's how Apple does their bokeh effect, for example: AI with edge filtering, and depth-aware edge filtering.

So you run all that and you get this two kilobytes per second. But when things go wrong, what the hell is going wrong, right? You have all these different stages that could be failing. And so the fifth layer, our cloud monitoring, deployment, and A/B testing, is all about having programmatic hooks. Because if something goes wrong and you need to record 7.5 gigabits per second of data to figure it out... the end goal is to have a hundred thousand of these strawberry pickers out there, right?

7.5 gigabits per second, times a hundred thousand strawberry pickers, times 20 cameras per strawberry picker, is suddenly all of the internet's data, right? It's just totally intractable. So the goal of RobotHub is to let you programmatically set, at different stages, insights and data recording of what is going wrong.

So then, say the depth confidence gets below a threshold, or the ripeness confidence gets below a lower threshold. On camera, you can have video encoding happening all the time, and you just decide to no longer throw it away. You get lossless JPEG, or MJPEG, or H.265 or H.264.

And then you can decide with RobotHub: when these conditions happen, the ripeness isn't right, or the disparity depth doesn't look right, or any of those things in that robotics vision pipeline, then you record. And that saves you tremendously. The encoding alone saves you a lot, because it takes 7.5 gigabits per second down to something like 75 megabits per second.

Right, which is huge. But then the capability to record only when something's going wrong, based on these thresholds, and to choose to save to disk or send it to the cloud, directly to Roboflow or, pun intended, myriad other options, is just so incredibly useful. As we see these customers go from a prototype of 1, to 10, to 100, and then to hundreds of thousands, we see the biggest problem being that these are really complex vision pipelines, which means when things go wrong, they're confusing, because there are so many stages.
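The "always encoding, only keep on trigger" idea above can be sketched as a ring buffer of encoded frames that is flushed to storage only when a confidence drops below its threshold. The thresholds, buffer length, and frame contents here are invented for illustration:

```python
from collections import deque

# Sketch of triggered recording: keep the last few encoded frames in a
# ring buffer, persist them only when some confidence crosses a threshold.

class TriggeredRecorder:
    def __init__(self, buffer_frames=90):          # e.g. ~3 s at 30 fps
        self.buffer = deque(maxlen=buffer_frames)  # old frames fall off
        self.saved = []

    def on_frame(self, encoded_frame, depth_conf, ripeness_conf):
        self.buffer.append(encoded_frame)
        if depth_conf < 0.5 or ripeness_conf < 0.6:
            # Trigger: stop throwing the buffer away, persist it
            # (to disk, or off to the cloud).
            self.saved.extend(self.buffer)
            self.buffer.clear()

rec = TriggeredRecorder(buffer_frames=3)
rec.on_frame("f0", 0.9, 0.9)   # fine, stays in the ring buffer
rec.on_frame("f1", 0.9, 0.9)
rec.on_frame("f2", 0.4, 0.9)   # depth confidence dropped -> flush
print(rec.saved)               # ['f0', 'f1', 'f2']
```

Because only encoded frames around the failure are kept, the steady-state upload cost is near zero even though the cameras never stop watching.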

And so having that insight into what's happening, the engineering insight, is extremely valuable. But then there's also the business-value insight. I talked about pulling off things like under- or over-watering, or mildew. When you're the company making a strawberry-picking robot, having a dashboard that shows the farmer, hey, you're watering too much here,

or, hey, you have mildew on this whole section of the crop, is extremely useful. We must think alike, because this is RobotHub, and I'm on the Robohub podcast talking about RobotHub. So that's what we named it. And we view everything as a robot.

There are flying robots and swimming robots and running robots and driving robots, and then trapped robots: [00:50:00] robots that have to solve all the perception problems but are generally replacing some mechanical automation with pure observation. Autonomous checkout is a perfect example of that.

Things no longer have to be moved by a robot that scans them, right? It just lets you check out autonomously. So RobotHub allows you to collect all that ground-truth data, ship it off to, say, Roboflow, and then retrain models. And it also allows you to do A/B testing.

Because you've got this pipeline of, say, 11 neural networks and all these computer vision functions. You change one thing, you want to deploy it only to Ohio in the morning, and have it run in Ohio in the morning to see if that actually solves the problem there. And then you can start to trickle the A/B test out.

So that's the thing we've always wanted to build, but it takes a while: first building the hardware, then the firmware, then the software, then the AI and simulation. And then in April we're releasing the first alpha version of RobotHub that does all that.


Abate: Yeah.

To give an anecdote from my own experience as well: the first startup I joined out of college was actually in this autonomous sports filming industry. We built one of these cameras. We did it with an Nvidia Jetson and multiple cameras, stitching and doing all of that on board, then uploading three 4K camera streams to the cloud and doing all of the magic up there. And one of the best decisions we made was to take all of that work, do it locally on device, and just optimize the algorithms. So now you're sending a fraction of the data that you used to.

And this unlocks some massive things, especially in mobile hardware products, like being able to upload over LTE in an affordable way. When you go from several gigabits per second down to the megabyte or kilobyte per second range, that's where you start unlocking value and being able to scale massively.

So I think that's, to me, the most exciting thing about the advancement and evolution of edge computing.


Brandon Gilles: Yeah, absolutely. And even more so than, say, the sports filming example. Because in sports filming, maybe you're filming a game; if you're really overzealous about it, you'll have five cameras per game, but for a lot of the market one is enough, or two is enough. But in a lot of these robotics automation problems, at a given site you have 2,000 cameras or 10,000 cameras.

And then you're talking about hundreds or thousands of sites, ultimately, as these roll out. Also, in the filming example, a lot of times you want a live stream, right? In sports filming, you want a live stream to be going, so you get business value out of a compressed video going somewhere.

So you're okay with that cost. But in a lot of these robotics cases, ideally you want a situation where no data ever has to leave the platform, right? So the value add is even higher, because in the ideal end case, you know, with the geopolitical situation that's happening now, none of us is paying attention to the robots anymore.

Something awful and horrible is happening, and the robotic strawberry pickers, like WALL-E, are just still out there picking strawberries. And that works because there are so many of them. So yeah, in robotics and in so many industries, being able to do this on the edge unlocks new applications; in robotics it's just absolutely critical.

It's another order of magnitude, or multiple orders of magnitude, higher value to have all of this embedded into the camera, to unlock all these new robotics applications.

Abate: Yeah, absolutely. And, you know, one thing that I've always been curious about with Luxonis: the software, the firmware that you guys write is a massive value and a big selling point of the product, because you can just buy it, plug it in, and do all the things that you want to do.

And maybe you want to make it a little bit better, or whatever it is, for your specific product, but you can instantly test. Now, as your customer base grows, say you've got four strawberry-picking companies using your platform, is there a type of network effect that happens, where maybe there are contributions to open-source software being written

that are going to be publicly available for everyone who buys a product? So that after five years, the platform is better because of the larger customer base?

Brandon Gilles: Yeah, absolutely. And we're already seeing that a [00:55:00] ton across industries. It's really, really advantageous, especially in new markets like this. The way I look at it is, there are all these disparate vacuums, right? Here's this vacuum of a whole huge industry,

and there are these tiny little startups bouncing around in the vacuum, right? And in each of these disparate markets, improvements in robustness and testing and deployment end up helping across all sorts of other verticals. So folks who are in filming, for example, have done IQ tuning, and it's on our docs.

IQ is image quality tuning. So there's an alternate image quality tuning on our website that folks can use as a result, and everyone benefits from just the added robustness. So that's the goal, and that's a lot of the reason we have the business model that we do. I stole this from Ubiquiti. For folks who are very familiar with Ubiquiti and their investor calls: they're a publicly traded company,

and I started out as an investor at Ubiquiti, then loved it so much I wanted to work there, and did. On the investor calls, the owner would say, you know, we're a software company that monetizes on hardware. That worked really well in the networking space, because you were selling to engineers, to technical folks who wanted to buy something for, say, $70; our OAK-D Lite on Kickstarter was $74,

and just get the whole software experience without having to ask, do I have to pay 80 grand a year to figure out whether this thing's useful? So we have that exact same model: you buy the hardware. It's that model applied to this field. In Wi-Fi networking, you never really needed to build a custom product.

You could cover all the needs of Wi-Fi and networking just by building standard products, and that's all you sell. That's what Ubiquiti and UniFi did. In robotics, you can cover a lot of the market with standard products, but when you get to these really scaled applications, maybe three cameras doesn't make sense anymore.

You need nine. Or maybe you need two cameras, and they need to be 2.3 megapixel because of the specifics, and so forth. So you end up in a situation where you need to customize. That's why our business bifurcates between standard products and the system on module, so you can customize. But core to it is: since we monetize on selling hardware, when we build something like this whole complex design, it's open source, MIT licensed. And the MIT license, for those listening who don't know, is kind of like Joseph Redmon's "do what the F you want" license. It literally means you can take the code, put it in closed source or open source or whatever you want; it doesn't matter, just run with the code. And so we literally just bake our hardware in as one of the components of the design, right?

Whether it's the system on module or just the camera. And that modality allows folks to buy this and not just have all the software for free, but have all the software be open source, MIT licensed. Which, as an engineer working for any company, huge or not, is so nice. Because what it means is an engineer can buy this on a Friday,

take the whole code base, like the whole depth code base, integrate it into an existing, huge, monolithic code base that's all proprietary, show up to work on Monday, and when someone in a meeting says, wow, I'd like that, but we'll never be able to integrate it into our code base, they can say: it's all integrated.

It's already working with our whole software system. And the reason they can do that is that it's MIT open source. And for folks who literally just take it, there's still value that comes back, because they'll integrate it and then file a GitHub issue about what crashes in some corner case that no one ever thought about.

And then someone in another industry benefits from it. But in a lot of cases, folks who see that MIT open source are like, this is so nice, and will literally contribute fixes back to the code base as well. I think Diab bold is our number one open source contributor.

He probably does like five a day, of pretty major things that he's found. It's just his nature; I think he's a very detail-oriented programmer. So yes, that's the goal. And this enables the whole mission of the platform, which is that robotics engineers don't have to reinvent the wheel. And as this platform becomes the de facto standard, it becomes so much more of a no-brainer, because it's been so ruggedized across so many different use cases.

Abate: Yeah. Do you have any projects that you're [01:00:00] excited about?

Brandon Gilles: Yeah, we have a ton of them. Our whole Series 2 OAK is soft-launching now, and we were wondering about doing another Kickstarter. We've done two Kickstarters so far. The first one was all the OAK models: OAK-D and OAK-1, OAK-D-IoT-75, OAK-D-POE, OAK-1-POE. Talk about exploring the market,

right? I made the horrible call of doing a Kickstarter that was five products. But it did well; we raised $1.5 million. And one of the things we learned from it is that there are a lot of folks who don't need such high-end depth resolution. A lot of folks just want to know, roughly, where is the hand?

They don't need to, like, precisely map a room. So we made the OAK-D Lite, which was our lowest-end version; we sold it for $74 on Kickstarter. And in parallel to working on that, so that's like a Series 1 product, we were working on our Series 2, which is like a better version of OAK-D and so forth.

Um, and so this adds what was entirely missing in the OAK-D ecosystem. Not sure if you'll be able to see it, but there's a laser dot projector, and then also an IR LED. So what this gives is night vision, night computer vision. So you can do no-light, or super high-contrast light, where it's really bright in one area and otherwise dark.

And the other thing enabled by this is that the laser dot projector gives you night depth. So RealSense, for example, gives you night depth, which is useful, but a lot of customers have a hard time navigating with only night depth and not night computer vision, because with depth information, great, you can not run into things.

But if you don't have feature extraction and tracking and so forth, you can't do localization and mapping, which means, like, you have no idea where the hell you are. And so in high-contrast environments, or, what is it called, like the kidnapped robot problem, the robot just has to wait for human help when it runs into that situation.

So that solves this problem: active stereo depth for night depth and no-light, no-ambient-light depth, and then blanket IR illumination. And those are interleavable, so you can do them on even and odd frames. So you get depth information and feature tracking.
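A rough sketch of that interleaving idea, assuming a simple per-frame counter (the function name and scheme here are illustrative only, not the actual DepthAI API):

```python
def illumination_for_frame(frame_index: int) -> str:
    """Alternate illumination per frame: even frames fire the laser dot
    projector (adds texture for active stereo depth), odd frames fire the
    blanket IR flood LED (lights the scene for feature tracking in the dark)."""
    return "dot_projector" if frame_index % 2 == 0 else "ir_flood"

# Even frames feed the stereo-depth pipeline; odd frames feed feature
# extraction / tracking, so one sensor pair serves both at half frame rate.
modes = [illumination_for_frame(i) for i in range(4)]
print(modes)  # ['dot_projector', 'ir_flood', 'dot_projector', 'ir_flood']
```

The point of the alternation is that each downstream consumer (depth, or SLAM feature tracking) only ever sees frames lit the way it needs.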

Um, so these are coming up. It's actually eight different permutations. So there's USB, and this is M12 X-coded Power over Ethernet. And these come either active or passive; that's one permutation that you can order. And also standard field of view, which is like 70 degrees horizontal, 85 degrees diagonal, or wide field of view, which is 127 degrees horizontal, 150 degrees diagonal.

And so between those permutations, active or passive, standard field of view or wide field of view, and USB or Ethernet, it's actually eight, eight products. And we found that folks really just want all of those.
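The eight products fall out of three independent two-way choices; a quick illustrative sketch (the option labels are paraphrased from the conversation, not official SKU names):

```python
from itertools import product

# Three independent options, two choices each, as described in the interview:
interfaces = ["USB", "PoE"]                     # connection type
stereo = ["active", "passive"]                  # with or without IR dot projector
fov = ["standard (~70 deg H)", "wide (~127 deg H)"]  # field of view

permutations = list(product(interfaces, stereo, fov))
for combo in permutations:
    print(" / ".join(combo))
print(len(permutations))  # 2 * 2 * 2 = 8 products
```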

Folks who are outdoors want passive, because it performs best; IR illumination really doesn't mean anything in a lot of cases outdoors, except for some cases in agriculture where IR is wanted, because they're pointed down and there's, like, a really bright leaf and then a super-shaded leaf underneath, and IR laser dot projection and IR blanket illumination help.

Uh, and then indoors, IR illumination is wanted. And in some cases folks want a really wide field of view so you can do SLAM mapping; in other cases folks want the narrow field of view, because they're looking at a product on a production line for, like, QA inspection and so forth.

Um, so those are the, those are the ones that are soft launching right now. And it's actually internally modular too, so you can replace the cameras; they have this modular thing. And so that's another thing: with our Series 2 we support factory configurability options. So, like, if you want all of them to be global shutter, or you want all of them to be 12-megapixel or 13-megapixel, you can do that as, like, a factory order.

And even though these are soft launching now, we have them in our beta store, and we've actually already had several customers place orders of like 50. We got an order today, actually, for 70 of this one as a custom order, all global shutter. So that's, that's an exciting one. And then, in addition to that launch, those are all, like, available.

Actually, you can just order those on our website and our beta store. So we do this, like, soaking stage, and then RobotHub launches in April, which I think will be huge. That's what takes us from, like, you know, having to download a GitHub repository and, like, [01:05:00] you know, tippy-tapping on the keyboard to get things running, to just being like, ooh, a follow-me example? Yes, please.

Or, like, a control-all-my-lights example? Yes, please. Where folks can just demonstrate capabilities to themselves, to their boss, to their investors really quickly, to show that, you know, this isn't just science fiction. And then they have the full source code of that and the capability to deploy it across thousands or hundreds of thousands of devices, so that they can just modify it as needed and get all the insights out of it, all from a working example.

So that's probably the most exciting one. And then, so I talked about our Series 2; we're generally working multiple series into the future. So later this year we also have our Series 3, where you take all of this, which does all the things that I talked about, and Series 3 also does all of that, but faster and better.

And that, that will largely come out, like, end of 2022 to early 2023. And it also adds a quad-core, 64-bit, 1.5-gigahertz Linux system in there. And what that allows is, for robotics applications that are simple enough where that's enough of a host, you can literally just build the whole robot off of it.

Um, just the whole thing, right? All the actuation, all the perception, and so on. And then conversely, for robotics applications that have a lot of complexity, say strawberry picking, you can then offload just a tremendous amount of perception to the camera, because you've got more AI power. You've got faster depth sensing.

You've got all these things, and you have a quad-core Linux system running Yocto. And so that's exciting for both sides: where it becomes the whole robot, or where folks are like, man, we really love all this, but it sure would be nice, because, you know, we've got all this OpenCV code that runs in Linux.

Like, we'd, we'd love to just be able to run all that Linux stuff on the camera as well. So that will then be coming out.

Uh, that's just Series 3, Series 3 OAK. So it'll have all of the same permutations that you see here. It's based on, well, we just aligned our naming with Movidius; that happened to work out.

So, so Gen 1 OAK, or Series 1 OAK, and Series 2 OAK are all Gen 2 Movidius-based, and then Series 3 OAK is Gen 3 Movidius-based. So yeah, that's into the year. And the cool thing about that: it has a Linux host built in. So RobotHub will just tie directly into it, with no other hardware being needed.

Whereas when, when you're running this, there would be some Linux system somewhere that RobotHub would talk to, and that Linux system talks to the camera, whether it's, you know, over Ethernet or over USB. With Series 3, it can all be direct to the camera if you want.

Abate: Awesome. Thank you so much for coming on the show and talking with us today.

Brandon Gilles: Yeah, absolutely.



Abate De Mey Podcast Leader and Robotics Founder
