 

Robocar news from CES 2016

by Brad Templeton
13 January 2016




I’m back from CES 2016 with a raft of robocar news, some of which was reported before the show. Almost everybody in the robocar space had something to say — even if it was only to have something to say! I have many more photos with coverage in my CES 2016 Photo Gallery.

Ford makes strong commitment

Ford’s CEO talks like he gets it. Ford did not have too much to show — they announced they will be moving to Velodyne’s new lower cost 32-laser puck-sized LIDAR for their research, and boosting their research fleet to 30 vehicles. They plan for full-auto operation in limited regions fairly soon.

Ford is also making its own effort at one-way car sharing (similar to Daimler’s Car2Go and BMW’s DriveNow), called GoDrive, which pushes Ford more firmly into the idea of selling rides rather than cars. The car companies are clearly coming to believe this sooner than I expected, and the reason is very clearly the success of Uber. (As I have said, it’s a mistake to think of Uber as competition for the taxi companies. Uber is competition for the car companies.)

Ford is also doing an interesting “car swap” product. While details are scant, it seems the service will let you swap your Ford for somebody else’s different Ford. For example, if somebody has an F-150 or Transit Van and knows they won’t use the cargo features on a particular day, you could drive over with your ordinary sedan and temporarily swap it for their truck — presumably with a small amount of money flowing to the more popular vehicle. Useful idea.

The big announcement that didn’t happen was the much-rumoured alliance between Ford and Google. Ford did not overtly deny it, but suggested they had enough partners at present. The alliance would be a good idea, but either the rumours were wrong, or they are waiting for another event (such as the upcoming Detroit Auto Show) to talk about it.

Faraday Future, where art thou?

The Faraday Future concept electric racecar is cool, but seems to be completely at odds with everything we had heard about what FF was up to. Photo credit: Brad Templeton

The big disappointment of the event was the silly concept racecar by Faraday Future. Oh, sure, it’s a cool electric racecar, but it has absolutely nothing to do with what we’ve heard about this company: namely, that they are building a consumer electric car-on-demand service with autonomous delivery. Everybody wondered if they had booked the space and then did not have their real demo ready on time. Their plans will stay secret for a while, it seems, though recent hires such as Jan Becker (former head of the autonomous driving lab at Bosch) suggest they are definitely going autonomous.

Mapping heats up

Google’s car drives by having super-detailed maps of all the roads, and that’s the correct approach in my opinion. Google is unlikely to hand out its maps, so both Here/Navteq (now owned by a consortium of auto companies in Germany) and TomTom are working to produce similar maps to license to non-Google robocar teams. They are taking fairly different approaches, which will be the subject of a future article.

One interesting twist is that these companies plan to partner with big automakers, not just giving them map data but expecting data in return. That means each company will have a giant fleet of cars constantly scanning the road and immediately reporting any differences between the map and the territory. With proper scale, they should get reports on changes to the road literally within minutes of the changes occurring. The first car to encounter a change will still need to be able to handle it — possibly by pulling over and/or asking the human passenger to help — but this will be a very rare event.
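To make the loop concrete, here is a minimal sketch of how such fleet-sourced map updating could work. Everything here is hypothetical (the class names, the map_service interface, the feature representation are all invented); it illustrates the idea rather than any vendor’s actual system.

```python
# Hypothetical sketch of a fleet map-update loop; names and interfaces are
# invented for illustration, not taken from Here, TomTom or any real system.
from dataclasses import dataclass


@dataclass
class MapTile:
    tile_id: str
    version: int
    features: frozenset  # e.g. hashed descriptors of lane lines, signs, curbs


def diff_against_map(observed_features, tile):
    """Return everything that differs between what the car sees and what the map says."""
    return set(observed_features) ^ set(tile.features)


def on_new_scan(sensors, map_service):
    tile = map_service.fetch(sensors.current_tile_id())
    observed = sensors.extract_features()
    changes = diff_against_map(observed, tile)
    if changes:
        # The first car to hit a change still handles it locally (slow down,
        # pull over, or ask the passenger), then reports it so the rest of
        # the fleet gets an updated tile within minutes.
        map_service.report_change(tile.tile_id, tile.version, changes)
```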

MobilEye has announced a similar plan, and its cameras are already in a large fraction of advanced cars on the road today. MobilEye’s primary focus is on vision (rather than LIDAR), but it will have lots of sources of data. Tesla has also been uploading data from its cars, though it does not (as far as I know) make such extensive use of detailed maps, even though it relies on maps generally.

Google is the world’s number one mapping company, and thanks to the hundreds of millions who drive with Android phones, it has tremendous access to data on the speed patterns of cars. But here is one area that Google might lose out on: the partners of Here and TomTom may have a lot more cars scanning the road than Google will have for some time, even if the scans are not as advanced.

Of course, Apple also has a mapping division, and plans to enter this space. And the “Navigation Data Standard” consortium is trying to build a standard format for advanced map data so robocar systems can easily switch between map vendors. Lots of competition is good for the public.

NVIDIA makes a huge push to be the platform

NVIDIA’s software suite was shown off. Here we see not the localization but their computer vision system tagging cars and trying to identify them. Photo credit: Brad Templeton

NVIDIA, which makes the graphics chips in most of your computers, has increased their already large push in this area. GPUs are today’s supercomputers, and NVIDIA is promoting their new Drive PX2 board, which features multiple GPUs and other processors to effectively create a supercomputer for use in cars. All this computing power then gets applied to computer vision and machine learning, particularly deep convolutional neural networks, to try to do the many tasks needed in a self-driving car: localization, sensor fusion, perception and motion planning. NVIDIA’s bet is that general purpose super-high-powered computing is more useful for this task than the specialized vision processing hardware that MobilEye makes, and that it can probably all be made to work from cameras, without LIDAR. (Though to be fair, NVIDIA is not as anti-LIDAR as Elon Musk or MobilEye, saying that their supercomputers and software will still be valuable combined with LIDAR.)

One of the NVIDIA demos of pedestrian detection combined a Quanergy 8-plane LIDAR and their camera systems. In the demo, they used water jets to simulate rain, showing that in wet conditions it is the vision that fails and the LIDAR that keeps detecting the pedestrians. Quanergy’s LIDAR is able to distinguish between signal returns from raindrops (which are more dispersed) and those coming off of solid objects.
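As a rough illustration of that last point: a LIDAR pipeline can reject rain by looking at how strong and how sharply peaked each return is, since raindrops give weak, smeared returns while solid objects give strong, tight ones. Quanergy has not published its method, so the sketch below shows only the general idea, with made-up thresholds and field layout.

```python
# Illustrative only: a naive rain filter for LIDAR returns. The thresholds
# and return format are invented; this is not Quanergy's actual algorithm.

def looks_like_solid_target(intensity, pulse_width_ns,
                            min_intensity=0.2, max_pulse_width_ns=8.0):
    """Hard targets give strong, sharply peaked pulses; rain gives weak, smeared ones."""
    return intensity >= min_intensity and pulse_width_ns <= max_pulse_width_ns


def filter_rain(returns):
    """Each return is (x, y, z, intensity, pulse_width_ns); keep only solid-looking hits."""
    return [r for r in returns if looks_like_solid_target(r[3], r[4])]


# Example: two returns off a pedestrian and one weak, smeared return off rain.
points = [(1.0, 2.0, 0.9, 0.8, 4.0), (1.1, 2.0, 1.2, 0.7, 5.0), (0.5, 1.0, 0.4, 0.05, 20.0)]
print(filter_rain(points))  # the rain return is dropped
```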

NVIDIA isn’t the only chip company with an interest, but it is the most advanced. NXP and Qualcomm both had tech on display (mostly along the theme of the “connected car”), as did Intel and a few others, such as OS company QNX (a unit of Blackberry founded by classmates of mine in Canada). Other chip companies have been making efforts towards the ADAS market (such as Intel, TI and CEVA) and the huge success of MobilEye there will draw in more companies seeking to be suppliers for robocars.

New entrants — ZF and IAV

Just about every player out there has something in the space now. German Tier One supplier ZF was showing both existing parts (such as electric steering motors and ADAS tools) and complete systems for OEMs not wanting to do the work themselves.

The silliest demo in the parking lots came from a partnership of IAV, Microsoft and some others. In this demo, the car would drive a test course towards a light where a pedestrian was waiting, hidden behind an SUV. The pedestrian’s Microsoft wristband was sending her GPS coordinates up to a server, which then sent them down to a traffic light. The traffic light was then able to use DSRC (V2V/V2I) to tell the car about the pedestrian, as well as the green/red state of the light. Upon learning this, the car would slow down as it passed the parked SUV.

It seems reasonable at first, but even the demonstrators agreed that it’s not practical for cars to always slow down every time a pedestrian is close to the street. In addition, the odds that people will be constantly uploading their location from wristbands or phones — even if GPS were accurate enough for this — will be pretty low for years to come.

Here is an Engadget article on the demo.
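For the curious, the data flow in that demo amounts to three hops. The sketch below is my own reconstruction; the message formats and field names are entirely hypothetical, since none of the partners published a protocol.

```python
# Hypothetical sketch of the wristband -> server -> traffic light -> car flow
# in the IAV/Microsoft demo. Message formats are invented for illustration.
import json
import time


def wristband_report(lat, lon):
    """The pedestrian's wearable uploads its GPS fix to a cloud server."""
    return json.dumps({"type": "ped_position", "lat": lat, "lon": lon, "ts": time.time()})


def roadside_unit_broadcast(ped_msg, signal_state):
    """The traffic light relays the pedestrian's position and its own phase over DSRC."""
    ped = json.loads(ped_msg)
    return json.dumps({"type": "v2i_alert", "signal": signal_state,
                       "pedestrian": {"lat": ped["lat"], "lon": ped["lon"]}})


def car_handle_dsrc(msg, current_speed_mps):
    """The car slows near the reported pedestrian even though its sensors can't see her."""
    alert = json.loads(msg)
    return min(current_speed_mps, 5.0) if "pedestrian" in alert else current_speed_mps


# The car approaching the parked SUV at 13 m/s slows to a crawl:
alert = roadside_unit_broadcast(wristband_report(36.123, -115.167), "green")
print(car_handle_dsrc(alert, 13.0))  # 5.0
```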

Toyota’s billion dollar bet on AI

Two wacky Toyota concept vehicles. I could not figure out the small one: it banks, but it does not seem to have a place for a person even though it is not meant for cargo. Photo credit: Brad Templeton

The strange concept cars seen here in the Toyota booth were not the real Toyota story at CES. Far more interesting is the creation of a new AI research lab inside Toyota with a $1 billion budget, run by Gill Pratt, who also ran the DARPA Robotics Challenge I reported on last year. That’s a huge budget, and will fund some pure research as well as research aimed at driving.

New Quanergy LIDAR announced

For full disclosure, I am on the advisory board of Quanergy and own stock, but I was pleased to see the announcement (ahead of schedule) of Quanergy’s first solid state LIDAR, an 8-plane phased-array LIDAR with a 120 degree field of view. While 8 planes is on the low end for self-driving, at this low price several of these LIDARs can be combined. The LIDAR will also help boost smaller robots, indoors and out. Those who dismissed LIDAR as overly expensive have, as I predicted, made a mistake.

Quanergy has partnered with Delphi to market the LIDAR to automakers.

New Velodyne and Valeo LIDARs

Valeo showed off the low cost 4 plane LIDAR they are making based on IBEO’s original product. It’s $250. Photo credit: Brad Templeton

Quanergy wasn’t the only LIDAR being promoted. Velodyne, which makes the expensive but powerful 64-laser LIDAR used in many research cars (including Google’s before they built their own) has a new unit that is the size of a large puck with 32 lasers. It’s going to be used in Ford’s new research cars, among others. Cost was not named, but will be under $10,000 — the current cost of Velodyne’s 16-laser LIDAR. It is claimed to be solid state, but no details are available on how it works.

Valeo is now in production on the $250 4-laser LIDAR based on designs from IBEO. They have branded it SCALA, and while it will primarily be used in the ADAS market, they are hoping for even more.

While not related to robocars, Valeo also showed off their Sightstream product, which replaces side-view mirrors with cameras and screens. Today, the regulations demand side-view mirrors — they are even on Google’s steering-wheel-free 3rd generation car — but they cause a lot of drag and get hit in accidents, so vendors have wanted to replace them with cameras for a while. Cameras are superior in many ways, not the least of which is that they don’t have blind spots.

Nissan promises 10 models with autonomy

Nissan was not at CES, but their CEO held a press conference at Nissan’s research lab in Sunnyvale to divert some attention. He promised Nissan will put autonomous drive features into 10 different Nissan models, without naming them. By about 2018 you can expect autopilot functions (like you see in Tesla and others), and by 2020 you can expect some greater level of autonomy — possibly the standby supervision approach incorrectly called Level 3 by NHTSA/SAE.

Nissan has been a leader in the Japanese market, but Toyota’s new AI efforts may push it ahead. Honda, Mazda and Subaru continue to show very little in the way of effort.

Nissan has stated that it does not want to embrace car-sharing like Uber, Lyft and robotaxis, but instead wants to focus on cars aimed at individual owners. I judge that an error, but we’ll see.

Tesla car summon

You have to hand it to Elon Musk for bravado, claiming that in two years he will have fully capable unmanned-level autonomy. The other day he said:

“Within two years you’ll be able to summon your car from across the country,” Musk said on Sunday in a teleconference with the media, adding that “I might be slightly optimistic on that.”

 

He went on to say that you might be in New York and summon your Tesla from Los Angeles. I’m a big fan of this concept (I call it the “whistlecar”) but he is indeed a bit optimistic, if for no other reason than that it almost surely won’t be legal for an unmanned car to cross the country in two years.

Reaching the safety levels for unmanned operation on a wide array of streets in 2018 is also a big challenge, especially without a LIDAR. More on that later.

Audi says little

Audi’s self-driving efforts have been quite significant, but at CES they only reaffirmed their commitment to self-driving systems without adding anything concrete. Audi, which is part of VW, is reeling from the emissions scandal. Will customers trust them and their software?

 

BMW shows a concept

BMW’s concept car with 3 driving modes: manual, assisted and autonomous. Not that it could actually do that — it was just there to look cool. Photo credit: Brad Templeton

BMW has long been one of the leaders among big car companies, in spite of the irony of them building the “ultimate non-driving machine.” This year they mostly went the imaginary concept route, displaying a futuristic car and claiming it had “3 modes” — manual, assisted and self-driving. But the car didn’t actually do those things and the only real demo was some self-parking outside. (Last year they had the BMW i3 cars available for test drive, and the unmanned cars rolled up to you for your test drive on a special course.)

Concept cars like this (meant only to express possible future ideas) are common at car shows, but at CES they seem bizarrely out of place. To the electronics crowd, they make no sense — they are vapourware at best, and pointless at worst. Why show a product you have no intention of building? The CES audience accepts (and likes) seeing pre-release products, but wants to know what BMW is actually doing.

For more coverage, including non-car coverage, check out my CES 2016 Photo Gallery.

 

This post originally appeared on robocars.com.





Brad Templeton of Robocars.com is an EFF board member, Singularity U faculty member, a self-driving car consultant, and an entrepreneur.




