Robohub.org | Podcast | Episode 347

Event Cameras – An Evolution in Visual Data Capture with Davide Scaramuzza

by Abate De Mey
08 March 2022





Over the past decade, camera technology has made gradual and significant improvements thanks to the mobile phone industry. This has accelerated multiple industries, including robotics. Today, Davide Scaramuzza discusses a step change in camera innovation that has the potential to dramatically accelerate vision-based robotics applications.

Davide Scaramuzza deep dives on event cameras, which operate fundamentally differently from traditional cameras. Instead of sampling every pixel on an imaging sensor at a fixed frequency, the “pixels” on an event camera all operate independently, and each responds to changes in illumination. This technology unlocks a multitude of benefits, including extremely high-speed imaging, removal of the concept of “framerate”, removal of data corruption caused by having the sun in the sensor's field of view, reduced data throughput, and low power consumption. Tune in for more.

Davide Scaramuzza

Davide Scaramuzza is a Professor of Robotics and Perception at both the Department of Informatics (University of Zurich) and the Department of Neuroinformatics (joint between the University of Zurich and ETH Zurich), where he directs the Robotics and Perception Group. His research lies at the intersection of robotics, computer vision, and machine learning, using standard cameras and event cameras, and aims to enable autonomous, agile navigation of micro drones in search-and-rescue applications.


Transcript

Abate De Mey: Hey, welcome to Robohub.

Davide Scaramuzza: Hi, thank you.

Abate De Mey: So firstly, I’d like to give a little bit of background about why I reached out and invited you to the show today. Over the past few months, I’ve been working a lot with my team at Fluid Dev, where we’ve been building a platform helping robotics companies scale.

And while we were working with one of the companies on that platform, we were digging into a lot of open source VSLAM algorithms, and we just kept running into your name as we were doing research and reading up on this: your name and your team at the University of Zurich. So I’m super excited to have you on today, and I’d love to learn a little bit more about yourself and what your team is doing.

Davide Scaramuzza: Thank you. It’s my honor to be here with you.

Abate De Mey: Awesome. Yeah. So could you tell me a little bit about yourself and your background?

Davide Scaramuzza: So, yeah, I am a professor of robotics and perception at the University of Zurich, where I lead the Robotics and Perception Group, which is actually now 10 years old. We are about 15 researchers, and we do research at the intersection of robotics, computer vision, learning, and control. Our main goal is basically to understand how we can make robots understand the environment in order to navigate autonomously from A to B.

And our main robotic platform is actually drones, quadcopters, because they are super agile and they can do things much faster than their ground robot counterparts. One main characteristic of our lab is that we use only cameras as the main sensor modality, plus inertial measurement units (IMUs).

We use either standard cameras or event cameras, or a combination of both.

Abate De Mey: Yeah. And so you’ve been with this team for quite a while. What was your journey like when you started over there? How long ago was that? And how did it transform into what it is today?

Davide Scaramuzza: So, yeah, when I started I was just an assistant professor. I had no PhD students, so I wrote a lot of proposals, and that’s how I was then able to hire so many people. At the moment there are 10 PhD students and three postdocs. We started initially with drone navigation.

And then a few years later, we started working on event cameras, because we realized that if you want to be faster than humans in perceiving and reacting to changes in the environment, you actually need to use a very fast sensor. This is something we must think about if we eventually want robots to replace humans in repetitive tasks, which is what is happening, for example, in assembly lines, where robotic arms have already replaced humans.

So robots are useful in repetitive tasks, but they are only useful if they are more efficient, that is, if they are really able to accomplish the task more efficiently. That means you need to be able not only to reason faster, but also to perceive faster. And that’s why we started working on event cameras: they perceive much faster than standard cameras.

Abate De Mey: Yeah. So what exactly are event cameras?

Davide Scaramuzza: So an event camera is a camera. First of all, it has pixels, but what distinguishes an event camera from a standard camera is the fact that these pixels are all independent of each other. Each pixel has a microchip behind it that basically allows the pixel to monitor the scene, and whenever that pixel detects a change of intensity,

caused by movement or by blinking patterns, then that pixel triggers an event. An event manifests itself basically as a binary signal: it can be a positive event, if it’s a positive change of intensity, or a negative event, if it’s a negative change of intensity. So what you get out of an event camera is basically not an image.

You don’t get frames; you get per-pixel intensity changes at the time they occur. To be more precise, if you move your hand in front of an event camera, you wouldn’t see images like RGB or grayscale images; you would rather see only the edges of the arm, because only the edges trigger changes of intensity.

Right. And now the interesting thing is that these events occur continuously in time, and so an event camera doesn’t sample these changes at a fixed time interval like a standard camera, but rather continuously in time. So you have a resolution of microseconds.

Abate De Mey: So when you say continuously, you mean it’s just a very high frame rate, to the point where it looks like it’s happening continuously? Something with a much higher frame rate?

Davide Scaramuzza: No, no, that’s the thing: there are no frames. You don’t get images at all, but you get basically a stream of events, where each event contains the position of the pixel that spiked, the timestamp at microsecond resolution, and the sign of the change of intensity, positive or negative.

So that means, for example, let me try to explain it in a different way: if you have a fan rotating in front of an event camera, you don’t get frames at a high frame rate. Not at all. You rather get a spiral of events in space and time. Exactly that: a spiral of events in space and time. We call this the space-time visualization of events, because you have the time dimension that you don’t get with standard cameras. Standard cameras sample the scene at a fixed time instant, so the time is the same for all the pixels when the camera captures a frame, while here the time is different for each pixel.
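To make the data format Scaramuzza describes concrete, here is a minimal sketch in Python of what a single event and the resulting space-time stream might look like. The field names and types are illustrative assumptions; actual vendor SDKs use their own structures.

```python
from typing import List, NamedTuple

class Event(NamedTuple):
    """One asynchronous brightness-change event."""
    x: int         # pixel column
    y: int         # pixel row
    t_us: int      # timestamp, microsecond resolution
    polarity: int  # +1 for a positive intensity change, -1 for a negative one

# The sensor output is not a sequence of frames but a time-ordered stream of
# such events; a rotating fan traces a "spiral" of (x, y, t) points.
stream: List[Event] = [
    Event(x=120, y=64, t_us=1_000, polarity=+1),
    Event(x=121, y=64, t_us=1_012, polarity=-1),
    # ... millions of events per second under fast motion
]
```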

Abate De Mey: Yes. And so also, if you were to interpret this data visually, how would it look compared to a standard camera?

Davide Scaramuzza: So it will look exactly like a motion-activated edge detector. You will see edges if you represent the events in a frame-like fashion. That is another way to represent these events: you just accumulate the events over a small time window of, say, one millisecond, and then you visualize them as a frame.

And in this case you will actually see edges, but you must remember that the raw information is actually a space-time volume of events. So it’s not flat.
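As a rough illustration of that frame-like visualization, one can sum event polarities per pixel over a short window. This is only a sketch built on the hypothetical Event stream above, not the lab's own visualization code.

```python
import numpy as np

def accumulate_events(events, width, height, t_start_us, window_us=1_000):
    """Accumulate event polarities per pixel over a short time window.

    Pixels that saw no intensity change stay at zero, so the result looks
    like a motion-activated edge map rather than a grayscale photograph.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    t_end_us = t_start_us + window_us
    for ev in events:
        if t_start_us <= ev.t_us < t_end_us:
            frame[ev.y, ev.x] += ev.polarity
    return frame

# Example: render one millisecond of the stream as an edge-like image.
# edge_map = accumulate_events(stream, width=640, height=480, t_start_us=0)
```

Note that this flattens away the time dimension: the raw data remains a space-time volume of events.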

Abate De Mey: Yeah. So what are some of the other benefits you get when you compare this to a standard camera, let’s say for applications like, you know, doing VSLAM on a drone that’s traveling very quickly?

Davide Scaramuzza: So the applications for robotics range from state estimation that doesn’t break, no matter the motion. For example, three or four years ago we showed, in a paper called Ultimate SLAM, that we could use an event camera to unlock SLAM, so simultaneous localization and mapping, in scenarios where standard cameras fail.

And the scenario we actually considered was that of a camera being spun like a lasso, like a cowboy, by the USB cable of the camera. So we were spinning the camera, and the camera was recording the scene. Now you can imagine that the frames recorded by a standard camera would be completely blurred, and the pictures would also be washed out because of the drastic changes of illumination.

Instead, the output of the event camera is sharp. And so we demonstrated that, thanks to the high temporal resolution of the event camera, we were able to detect features. Of course, these were a different type of features, not standard corners, because now you have to re-invent corner detectors for event cameras.

We were able to track these corners over time, fuse this information with the inertial measurement unit, and then recover the trajectory of the lasso with an accuracy that would not be possible with a standard camera. So we showed that if you use an event camera, you can boost the performance

by at least 85% in scenarios that are inaccessible to standard cameras. And we’re talking about high speed, but also high dynamic range. High dynamic range is another advantage of event cameras: event cameras have a dynamic range that is orders of magnitude superior to standard cameras.

So you can see very well in low light, as well as when you, for example, exit a tunnel. We demonstrated this with another paper at CVPR, where basically we showed that if you are using an event camera when you exit a tunnel, you can actually reconstruct the events into standard grayscale images, or even color images

if you use a color event camera, where you can see very clearly the sky and all the other objects around you, like other cars, in conditions that would be very challenging for standard cameras, for example when you have the sunlight in the field of view of the camera, or when you exit from a tunnel.

And then another robotic application that we worked on was for drones.

Actually, we have two for event cameras. We applied Ultimate SLAM, this super-fast state estimation algorithm, to a drone that experiences a rotor failure. You know, autonomous drones are now becoming widespread, especially in Switzerland, which was the first country to approve autonomous navigation of drones beyond visual line of sight.

We have had two crashes out of 5,000 autonomous flights, and one of these crashes was actually caused by the failure of a motor. So we can expect this to become more and more frequent as the number of drones flying over our heads increases over the next decades. So we thought of an algorithm that could possibly use the remaining three rotors in order to continue stable flight.

This has already been demonstrated by D’Andrea and his group, and also at TU Delft, but they were using position information coming from GPS or from a motion capture system. What we wanted to do is try to use only onboard cameras. So we tried first with a standard camera.

We realized that we were actually able to estimate the motion of the drone reliably during the spinning, because if a propeller fails, basically what happens is that the drone starts spinning on itself, and this high rotational motion typically causes motion blur.

But apparently, in bright daylight the motion blur is actually not significant, so it’s manageable. And so with a standard SLAM pipeline, like SVO, we were able to estimate the motion and therefore stabilize the drone despite this very fast rotational movement.

Abate De Mey: And this is with a standard camera, or with...

Davide Scaramuzza: This we managed with a standard camera, in bright light conditions.

Then what we did is that we started to dim the light, and we realized that when the light intensity fell below 50 lux, which is basically like artificial light conditions, like indoors, the images from the standard camera were too blurred to be able to detect and track features. In this case, we were only able to sustain flight using the event camera, and with the event camera

we were able to continue to stabilize the drone down to an illumination as low as 10 lux, which is close to full moonlight. So that’s quite significant. And finally, the last thing I wanted to point out: another application of event cameras to drones has been dodging quickly moving objects.

For example, we have a paper and a video in Science Robotics where basically a student is throwing an object, like a ball or other objects, at the drone while the drone is already moving toward the object, and the drone eventually dodges this fast-moving object. And we used an event camera because we showed that with the event camera we were able to detect the incoming object

with only 3.5 milliseconds of latency, while with standard cameras you would need at least 30 milliseconds, because you need to acquire two frames and then run the whole image processing pipeline to detect the position and the velocity of the incoming object.

Abate De Mey: Yeah. So within that 3.5 milliseconds, you said, that’s including an algorithm that’s able to also detect that, oh, this is an object and it’s coming at me?

Davide Scaramuzza: That’s correct.

Abate De Mey: Okay. Um, so I mean, you know, one of the advantages of a standard camera is that you could use it for your computer vision algorithms, your machine learning, et cetera.

Um, but you could also then have a person look at it and intuitively understand all of the data that’s coming off of it; that’s, you know, the big advantage of cameras. So if you were to, say, use an event camera on your drone, would there be an intuitive way that you, as an operator, could also view that output and have it really make sense?

Davide Scaramuzza: So, directly, no. There is no way that you can directly recognize a person from the raw footage recorded by an event camera. However, we showed in another paper published at CVPR that you can train a neural network to reconstruct visually correct images from raw events.

Basically, we have a recurrent neural network that was trained in simulation only, because we have a very accurate event camera simulator. In simulation, it was trained to reconstruct grayscale images, and we were comparing the reconstructed images with ground truth, which we possess in simulation. And what we found is that this also works in practice with any

sort of event camera, you know, from the different event camera companies, and also the different models from each company. So we were actually quite impressed by the fact that it works with any event camera. That means that event cameras don’t really preserve your privacy: they can actually be used, after processing, to reveal the identity of people.

But to go back to your original question, I would say that event cameras should not be used alone as the only camera; they should always be combined with standard cameras, because an event camera is a high-pass filter. A standard camera can record footage also when there is no motion. Of course you may ask, okay, "but what is interesting when there is no motion?", but this actually comes in very

handy in autonomous cars, because when you stop at a traffic light and you want to wait, you know, the point is that stationary information is also important for scene understanding. An event camera cannot detect anything if nothing is moving; as soon as you start moving, then you get information.

That’s why the best approach is to combine it with a standard camera in order to get this additional information.

Abate De Mey: Yeah. So, I mean, you mentioned autonomous cars. Are there any places in industry where these are being actively deployed? How accessible is this to, say, startups in robotics that are looking to improve their...

Davide Scaramuzza: We are working with a top-tier company to investigate the use of event cameras for automotive applications, and we are working on HDR imaging, so trying to render images with much better quality than you can with standard cameras, especially when you have the sunlight in the field of view. We are also looking at pedestrian detection and tracking at the moment.

If you look at standard cameras, like Mobileye, they take around 30 milliseconds to detect pedestrians and other vehicles, and also to estimate their speed, the relative motion with respect to your car. With event cameras, we speculate that this latency should drop below 10 milliseconds.

Because you still want to be very, very reliable, with the same accuracy in detecting all these other vehicles and pedestrians. So that’s the type of thing we are investigating. It can also be used for in-car monitoring, for example to monitor activity within the car: blinking

eyes, or, for example, gesture recognition within the car. These are things that are being explored by other automotive companies, not by us. Another thing that is actually very important about event cameras is the fact that they need a much smaller memory footprint than standard cameras.

This is a work that we published at CVPR last year, and it was about video frame interpolation. We combined a standard high-resolution RGB camera, a FLIR camera, so very good quality, with a high-resolution event camera. But still, of course, the resolution of the event camera is smaller than that of standard cameras.

The maximum you can get at the moment is 1080 pixels. And so we combined them together, so basically the output of this new sensor was a stream of frames at some interval, plus events in the blind time between consecutive frames. So you have a lot of information. And then what we did is that we used the events in the blind time between two frames to reconstruct arbitrary frames

at any arbitrary time, by using basically the information of the events just before the time at which we wanted to generate the frame and the events just after the reconstructed frame. So we take two frames, we look at the events left and right, and then we reconstruct images in between, and by doing so we were able to upsample the video by up to 50 times.

Up to 50 times. We called this paper TimeLens. And we showed that, for example, we were able to generate slow-motion video with impressive quality, for example in scenes containing balloons filled with water being smashed on the floor, or balloons filled with air being popped. Other things that we showed were,

for example, fire, or other things moving super fast, like people running or spinning objects. And we were able to show that you could actually get this without high-cost equipment like high-speed cameras.
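The published TimeLens pipeline uses learned warping and synthesis networks, but the event-selection idea described above (take the two surrounding frames and the events on either side of the target time) can be sketched as follows. The helper below is purely illustrative, with hypothetical names, and is not the paper's code.

```python
def split_events_for_interpolation(events, t_frame0_us, t_frame1_us, t_target_us):
    """Partition the events in the blind time between two frames around the
    timestamp at which a new frame should be synthesized."""
    assert t_frame0_us <= t_target_us <= t_frame1_us
    before = [ev for ev in events if t_frame0_us <= ev.t_us < t_target_us]
    after = [ev for ev in events if t_target_us <= ev.t_us <= t_frame1_us]
    # A learned model would combine `before`, `after`, and the two RGB frames
    # to render the intermediate frame at t_target_us; repeating this for many
    # target timestamps gives the up-to-50x temporal upsampling mentioned above.
    return before, after
```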

Abate De Mey: Yeah.

Davide Scaramuzza: And then what we also showed is that using an event camera, you can record slow-motion video with a 40 times smaller memory footprint than you would need with a standard RGB camera. If I remember correctly, we showed that with the Huawei P40 Pro phone, which at the moment I think has the best phone camera, if you record video at up to eight kilohertz, it has a footprint of 16 gigabytes per second of video.

Abate De Mey: Yeah. So that’s like 8,000 frames per second. Um, I mean, the resolution, if I remember right, I don’t know if the video is 64 megapixels...

Davide Scaramuzza: Well, we limited the resolution for that experiment to the same resolution as the event camera, because we wanted to make a fair comparison. So for the same resolution as the event camera, basically we get 16 gigabytes per second of slow-motion video, and with the event camera we were able to reduce this to four gigabytes per second of video.

Okay. So a 40 times improvement. And not only that: we also showed that while with a standard high-speed camera or the Huawei phone you can only record a very short phenomenon, for a maximum of 125 milliseconds, thanks to the event camera we were able to record for much longer. We’re talking about minutes, or even hours, depending on the dynamics of the scene.

So this means that also for automotive, we could possibly significantly reduce, you know, the memory storage that we need for our training algorithms and so on. So now we are focusing more and more on deep learning with event cameras.

Abate De Mey: Yeah. I mean, you know, that’s definitely a very massive thing. We’ve seen before where SSDs that are being written to again and again for video, even in the autonomous car world, have been failing due to old age. And then, just to get an idea of how much data is required to record 1080p video:

that’s 1920 by 1080 pixels; for an event camera, that would just be one binary value for every pixel, right?

Davide Scaramuzza: Yes, but not only. Actually, you need around 40 bits per event. So yes, you need basically 20 bits for the position, then you need another 20 bits for the time resolution, plus one bit for the sign of the intensity change. So that’s around 40 bits,

because 20 bits is for the timestamp at microsecond resolution. Now, though, there are new algorithms coming from the company Prophesee, which also makes event cameras, that compress the time information by only sending the increment of time since the last event, and by doing so they were able to drastically reduce the bandwidth by another 50%.

And this is already available with the newest sensors.
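As a back-of-the-envelope check on the numbers quoted above: roughly 20 bits of pixel address, about 20 bits of absolute microsecond timestamp, and 1 bit of polarity give the roughly 40 bits per event, while storing only the time increment since the previous event shrinks the time field. The bit widths below are illustrative assumptions, not any vendor's actual format.

```python
def event_bits(pos_bits=20, time_bits=20, polarity_bits=1):
    """Size of one event in bits for a given field layout."""
    return pos_bits + time_bits + polarity_bits

# Absolute microsecond timestamps: roughly the 40-bit figure from the interview.
raw = event_bits(time_bits=20)        # 41 bits
# Hypothetical delta-timestamp layout: only the increment since the previous
# event is stored, which needs far fewer bits when events arrive densely.
compressed = event_bits(time_bits=8)  # 29 bits

print(f"absolute timestamps: {raw} bits/event")
print(f"delta timestamps:    {compressed} bits/event")
# The real savings depend on the event rate and on how the pixel address is
# packed; the ~50% bandwidth reduction cited for the newest sensors comes from
# the vendor's own encoding, not from this toy layout.
```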

Abate De Mey: Yeah. So, you know, this is almost like an evolution in encoding as well, at least for certain applications that have both of these sensors available. And then, I think right now, you know, I looked up the price of event cameras, and they’re still quite expensive and not available from many manufacturers.

Um, do you have an idea of roughly how much they cost, and if there’s, you know, any sort of vision into the future for how their price comes down with adoption?

Davide Scaramuzza: At the moment, the cost is between three and five thousand dollars, depending on whether you buy them in low or high resolution, and with or without the academic discount. And these are prices I’m telling you from firsthand user experience. About the price, I mean, what these companies are saying very explicitly is that as soon as a killer application is found, they will start mass production,

and then the cost of the sensor would certainly go below $5. However, before doing that, you need to reach, you know, mass production, and I would say that we are experiencing what happened with depth sensors. You know, depth sensors, depth cameras, were available already from the nineties.

I remember during my PhD with Roland Siegwart, we had the SwissRanger, which was one of the first depth sensors, made by a Swiss startup, and at the time it cost $10,000. And that was in 2005. Now you can find them in every iPhone. But, you know, almost 20 years have passed.

So event cameras only reached an acceptable resolution, that is, megapixel resolution, two years ago, in 2020; before that, they were actually at a resolution of around 100 by 100 pixels. So I would say, now that we have the resolution, people are starting to buy them and to gain experience with them.

And at the same time, companies are also starting to investigate what their use cases could possibly be. So it will take time. I cannot say how much time it will take, because I’m not a futurologist, but I think that eventually they will be used in something. Now, other areas where I believe they will also find a lot of applications are, for example, activity recognition.

And I’m aware already that in China they are being used quite a lot for monitoring, for example. There is a company in Zurich called SynSense that pairs event cameras with neuromorphic chips running spiking neural networks. So the camera plus the chip doing the neural network inference for face recognition, all of it consumes about one milliwatt.

And you only need to change the batteries every few years. So you can install these cameras in, you know, some shops, or in your house, and forget about changing the battery for a few years. So that’s quite amazing. So we are talking about, basically, you know, edge computing and always-on devices.

Okay, so this is also another interesting application. Then, of course, speaking a little about defense, there is also a DARPA program running for event cameras, called the FENCE program, that is trying to build a new event camera with even higher resolution, much higher dynamic range, and much higher temporal resolution. And we can imagine what the possible applications could be for defense: fast tracking of targets and so on, for rockets as well.

Then, for computational photography, I already mentioned slow-motion video, but also deblurring: there has been work done by other colleagues where they show that you can, for example, unblur a blurry video using information from an event camera. To be honest, there are so many applications. There has also been synthetic aperture imaging,

to see through clutter, I think two years ago at ICCV. So there is a lot coming out. I’m actually always super excited to look at the proceedings of conferences to see what imagination and creativity people are unlocking to use event cameras.

Abate De Mey: Yeah. Yeah. And, you know, I can imagine also uses in low-light situations. Um, you know, I know your team does a lot of work with search and rescue for drones, where you get into a lot of these unlit or dark situations where it would be super helpful. Is there a good way to gauge, say, distance to an object using one of these cameras, or maybe in combination with a traditional camera?

Davide Scaramuzza: Yes, we’ve done it in different ways. Of course, the easiest way would be to use a single event camera plus an IMU, and we can do it, so monocular visual-inertial odometry. But you need to move in order to estimate the depth. You can, of course, also estimate depth using a monocular event camera plus deep learning,

and we also showed in a paper two months ago that you can combine two event cameras together in a stereo configuration and then triangulate points. This we also did, and many people did it. You can also have a hybrid stereo setup, where one camera is an RGB camera and the other one is an event camera.

So you can actually get in this case, both the, you know, the, the, the photometric information, as well as low latency of the event camera, but actually what we started doing last year Uh, in collaboration with Sony Zurich is actually to combine an event camera with a laser point projector.

And basically what we have assembled is a very fast active depth sensor: basically, you know, we have a moving dot that scans the scene from left to right, and then we have the event camera, which can actually track this dot at impressive speed. And now you get a super fast depth camera.

And we showed that we would actually need less than 60 milliseconds for each scan. Actually, we are limited by the speed of the laser point projector, because, you know, we didn’t buy a very expensive laser point projector, but this shows that it’s actually possible to shrink the acquisition time of these laser-based depth sensors.

So I think this is quite new; we just published it at 3DV a few months ago, and we are super excited about this. Sony is also super excited. It could have significant applications in phones and in indoor robotics. I’m saying indoors because typically, you know, when you have a laser you are limited by the external light, or you have to emit a lot of power,

of course, if you want to make it work outdoors. Another thing that we are actually very excited about in terms of active vision, so with lasers, is event-driven LIDAR. So again, in collaboration with Sony, what we showed is that if you use LIDAR for automotive, they illuminate the scene uniformly,

regardless of the scene content. So also when the scene is stationary, and that actually causes a huge amount of power consumption. Now, we know event cameras only react to moving things, and we evaluated that in a typical automotive scenario, a car driving down an urban canyon:

only 10% of the pixels are excited. And this is because an event camera has a threshold: basically, every time the intensity change goes over the threshold, an event is triggered. So you can tune the threshold in order to get more or fewer events, of course.
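The threshold he describes is commonly modelled, for example in event camera simulators, as a contrast threshold on log intensity: a pixel fires an event whenever the log brightness has changed by more than a constant C since its last event. Below is a minimal per-pixel sketch of that idealized model, not any particular sensor's circuit.

```python
import math

class EventPixel:
    """Idealized event-camera pixel: emits +1/-1 events when the change in
    log intensity since the last event exceeds the contrast threshold C."""

    def __init__(self, initial_intensity: float, contrast_threshold: float = 0.2):
        self.log_ref = math.log(initial_intensity)  # intensity assumed > 0
        self.C = contrast_threshold                 # larger C means fewer events

    def update(self, intensity: float):
        """Feed a new intensity sample; return the list of triggered events."""
        events = []
        log_i = math.log(intensity)
        # A large change can cross the threshold several times in one step.
        while abs(log_i - self.log_ref) >= self.C:
            polarity = 1 if log_i > self.log_ref else -1
            events.append(polarity)
            self.log_ref += polarity * self.C       # move reference toward the new level
        return events

# Example: a static pixel produces nothing; a brightening one fires +1 events.
# pixel = EventPixel(initial_intensity=100.0)
# pixel.update(100.0)  -> []
# pixel.update(150.0)  -> [1, 1]   (log(1.5) is about 0.405, two crossings of C=0.2)
```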

Abate De Mey: So just to understand: let’s say there’s a car driving down the street and it’s got an event camera on its hood. You know, everything you would imagine is moving, except for maybe things on the horizon or whatever, but you’re able to set the threshold so that you can adjust what is considered motion and what is not.

Davide Scaramuzza: That is correct. And we can subtract the ego-motion from the absolute motion. This can be done; we have already done it. We have a framework called contrast maximization with which we can subtract the ego-motion, so that you get only the things that are really moving. And then we can guide the laser to give us depth information only in correspondence with those regions.

Of course, we are very conservative in this approach. So we don’t say, give me the depth for this specific pixel. What we say is that there is a region of interest, typically a rectangle, and then we basically ask the LIDAR to only give us information in specific sparse, rectangular regions within the image.

So that’s something that we just published. It’s a preliminary result; I mean, there is a lot to improve there, but we are curious to see how the community will react to it.

Abate De Mey: Yeah. Yeah. I mean, you know, just listening to you speak, there are so many projects happening, so much research going on and articles being written. What are, you know, the high-level goals for your team, in terms of what research you want to accomplish and what changes you want to bring to robotics?

Um, and then how can people keep up with it?

Davide Scaramuzza: Okay. So we also work a lot on drones. We work like 50% on drones and 50% on event cameras. At the moment, I’m very excited about drone racing. I don’t know if you want to talk about this now or later, but to stick to event cameras, I’m really interested in understanding where event cameras could possibly help, in any application scenario in robotics and computer vision. And so all the ones I mentioned so far are the ones I’m very excited about.

And if people want to start working on event cameras, we maintain a list of resources on event cameras. First of all, we organize, every two years now, a regular workshop at CVPR or ICRA; we alternate the years. We have done three workshops so far, and you can find them on our event camera webpage.

You can find all the links from that same page. We also link a list of event camera resources, which contains all the papers published on event cameras in the last 10 years. We have over 1,000 papers, which is actually not a lot if you think about it. Then we also list all the event camera companies, we list all the open-source algorithms, and we organize the algorithms by application, from SLAM to optical flow to scene understanding. There is also

a lot to do. So I would say to the novices who want to jump into event cameras: first of all, you don’t need to buy an event camera. There are plenty of datasets, all listed on our webpage and publicly available, so just start with that. We also have a tutorial paper, a survey paper, on event cameras

that explains how event cameras work. We also have courses, because event cameras are part of my lectures on computer vision and robotics at the University of Zurich and ETH Zurich. Also, my former postdoc, Guillermo Gallego, runs a full course on event cameras over several weeks. So if you really want to follow a course, there are a lot of resources, all linked from our webpage.

Abate De Mey: Awesome, awesome. Well, thank you so much for speaking with us today. It’s been a pleasure.

Davide Scaramuzza: My pleasure.





Abate De Mey Podcast Leader and Robotics Founder




