Warner Brothers and Intel experiment with in-robocar entertainment. Is that a good idea?


by Brad Templeton
07 December 2017




Intel and Warner made a splash at the LA Auto Show, announcing that Warner will develop entertainment for viewing while riding in robotaxis. It's not just movies to watch; their hope is to produce something more like an amusement park ride that keeps you engaged on your journey.

Like most partnership announcements around robocars, this one is mainly there for PR, since they haven't built anything yet. The idea is part genuinely interesting, part hype.

I’ll start with the negative. I think people will carry their entertainment with them in their pockets, and not want it from their cars. Why would I want a different music system with a different interface when my own music and videos are already curated by me and stored in my phone? All I really want is a speaker and screen to display them on.

This is becoming very clear on planes, where I prefer to watch movies I have pre-downloaded on my phone rather than what is on the bigger screen of the in-flight entertainment system. There are several reasons for that:

  • The UIs on most in-flight systems suck really, really badly. I mean it's amazing how bad most of them are. (Turns out there is a reason for that.) Cars will probably do it better, but the history is not promising.
  • Your personal device is usually newer, with more advanced technology, because you replace it every two years. You have curated its content and you know its interface.
  • On airplanes in particular, they believe rules force them to pause your experience so they can announce that duty-free sales are now open in three languages, plus 20 or more other interruptions, only a couple of which an experienced flyer actually needs to hear.

So Warner is wise in putting its focus on doing something you can't do with your personal gear, such as a VR experience or immersive screens around the car. There is a unique opportunity to tune the VR experience to the actual motions of the car. In traffic, you can only adapt the experience to whatever motions the drive requires. On the open road, you might actually be able to program a trip that deliberately slows, speeds up or turns when nobody else is around, to make a cool VR experience.
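To make that concrete, here is a minimal sketch of what motion-matched VR could look like, assuming a hypothetical planner that publishes the car's upcoming trajectory and a hypothetical VR renderer. The names (TrajectoryPoint, VRScene, sync_scene_to_plan) are invented for illustration; nothing here is a real Intel or Warner API.

```python
# Hypothetical sketch of motion-matched VR: feed the car's planned trajectory
# into the virtual scene so the accelerations the rider feels line up with
# what they see. All classes and method names are invented for illustration.

from dataclasses import dataclass

@dataclass
class TrajectoryPoint:
    t: float         # seconds from now
    speed: float     # m/s along the road
    yaw_rate: float  # rad/s, positive = turning left

class VRScene:
    """Stand-in for a VR renderer that can script the virtual vehicle."""
    def thrust(self, accel: float) -> None:
        print(f"virtual thrust {accel:+.2f} m/s^2")
    def bank(self, angle: float) -> None:
        print(f"bank virtual craft {angle:+.2f} rad")

def sync_scene_to_plan(scene: VRScene, plan: list[TrajectoryPoint]) -> None:
    """Turn planned longitudinal and lateral accelerations into matching
    virtual motion, so what is felt agrees with what is seen."""
    for prev, cur in zip(plan, plan[1:]):
        dt = cur.t - prev.t
        along = (cur.speed - prev.speed) / dt   # longitudinal acceleration
        lateral = cur.speed * cur.yaw_rate      # centripetal acceleration in the turn
        scene.thrust(along)                     # speeding up reads as a boost
        scene.bank(0.1 * lateral)               # a real turn reads as banking

# Example: the car plans to accelerate, then sweep through a left curve.
plan = [TrajectoryPoint(0.0, 10.0, 0.0),
        TrajectoryPoint(1.0, 12.0, 0.0),
        TrajectoryPoint(2.0, 12.0, 0.3)]
sync_scene_to_plan(VRScene(), plan)
```

The point of deriving the virtual motion from the planned trajectory, rather than from measured motion alone, is that the scene can lead the physical sensation by a fraction of a second instead of lagging it.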

While that might be nice, it would be mostly a gimmick, more like a ride you try once. I don't think people will want to go everywhere in the Batmobile. As such, it will be more of a niche, or a marketing trick.


More interesting is the ability to reduce carsickness with audio-visual techniques. Some people get pretty queasy if they look down for a long time at a book or laptop; others are less bothered. A phone held in the hand seems easier for most people to use than something heavier, perhaps because it moves with the motion of the car. For many years I have proposed that cars communicate their upcoming plans with subtle audio or visual cues, so that people know when they are about to turn or slow down. Experiments on this are now being reported, and it will be interesting to see the results.
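As a rough sketch of that cueing idea, assuming a hypothetical planner hook that fires a few seconds before each maneuver (the event names and cue table below are invented, not any shipping system):

```python
# Hypothetical sketch of subtle advance cues: the car announces an upcoming
# maneuver a couple of seconds ahead with a soft tone or a glow on the
# matching side of the cabin. Event names, cues and the callback are invented.

CUES = {
    "turn_left":  ("soft chime from the left speaker",  "dim glow on the left pillar"),
    "turn_right": ("soft chime from the right speaker", "dim glow on the right pillar"),
    "slow_down":  ("low descending tone",               "brief pulse of cabin lighting"),
}

def on_planned_maneuver(kind: str, seconds_ahead: float) -> None:
    """Hypothetical hook the planner calls ~2-3 s before executing a maneuver."""
    audio, visual = CUES.get(kind, ("", ""))
    if audio:
        print(f"{seconds_ahead:.1f}s before '{kind}': play {audio}; show {visual}")

# Example: warn the rider that the car is about to slow and then turn left.
on_planned_maneuver("slow_down", 3.0)
on_planned_maneuver("turn_left", 2.0)
```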

If you ride a subway, bus or commuter train today, the scene is always the same: a row of people, all staring at their phones.

Advertising

Some commenters have speculated that another goal here may be to present advertising to hapless taxi passengers. After all, ride a New York cab (and many others) and you will see an annoying video loop playing; each time, you have to go through the menus to mute the volume. With street-hailed taxis you can't shop around, so they can get away with this: what are you going to do, get out of the cab and wait for the next one?

I hope that with mobile-phone hail, competition prevents this sort of attempt to monetize the time of the customer. I definitely want my peace and quiet, and the revenue from the advertising — typically well under a dollar an hour — can’t possibly offset that for me.




Brad Templeton (Robocars.com) is an EFF board member, Singularity U faculty, a self-driving car consultant, and an entrepreneur.




