Today, I want to look at some implications of Tesla’s Master Plan, Part Deux, which caused some buzz this week. The one part of the plan that I have trouble with is the idea of combining solar generation with battery storage.
Stories about racist Twitter accounts and crashing self-driving cars can make us think that artificial intelligence (AI) is a work in progress. But while these headline-grabbing mistakes reveal the frontiers of AI, versions of this technology are already invisibly embedded in many systems that we use every day.
A variety of robots and tools are being created, each tackling a specific piece of the climate change crisis. Together, the engineers who develop them and the bots doing the work can make a difference and lessen the impact of human activity on the planet.
Smartphone ride-hail apps like Uber and Lyft are now reporting great success with actual ride-sharing under the names UberPool, Lyft Line and Lyft Carpool. In addition, a whole new raft of apps that enable planned and semi-planned carpooling is emerging and making changes.
Ever since Elon Musk’s recent admission that he’s a simulationist, several people have asked me what I think of the proposition that we are living inside a simulation. My view is very firmly that the Universe we are experiencing right now is real. Here are my reasons.
Brad Templeton describes Tesla’s Autopilot as a ‘distant cousin of a real robocar’ that primarily uses a Mobileye EyeQ3 camera combined with radars and ultrasonic sensors. Unlike most robocars, the Tesla has no lidar and does not use a map to help it understand the road and environment.
At Starship, we announced our first pilot projects for robotic delivery, which will begin operating this summer. We’ll be working with the London food-delivery startup Pronto, the German parcel company Hermes, the Metro Group of retailers, and the restaurant food-delivery service Just Eat to trial on-your-schedule delivery of packages, groceries and meals to people’s homes.
In this article, Brad Templeton provides a rundown of different approaches to validating self-driving and driver-assist systems, and recommends that Tesla and others add countermeasures that detect drivers who are not watching the road and permanently disable their Autopilot if they show a pattern of inattention.
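To make that recommendation concrete, here is a minimal sketch of how a strike-based countermeasure might work. The thresholds, the `gaze_off_road_seconds` signal and the disable behavior are all hypothetical illustrations, not Tesla’s actual implementation.

```python
# Hypothetical sketch of a strike-based driver-attention countermeasure.
# The gaze signal, thresholds and disable behavior are illustrative only.

WARN_AFTER_SECONDS = 5      # warn if eyes are off the road this long
STRIKES_TO_DISABLE = 3      # a pattern of inattention disables Autopilot

class AttentionMonitor:
    def __init__(self):
        self.strikes = 0
        self.autopilot_enabled = True

    def update(self, gaze_off_road_seconds: float) -> str:
        """Called periodically with how long the driver has looked away."""
        if not self.autopilot_enabled:
            return "autopilot disabled"
        if gaze_off_road_seconds >= WARN_AFTER_SECONDS:
            self.strikes += 1
            if self.strikes >= STRIKES_TO_DISABLE:
                self.autopilot_enabled = False
                return "pattern of inattention: autopilot permanently disabled"
            return "warning: watch the road"
        return "ok"

monitor = AttentionMonitor()
print(monitor.update(6.0))   # first warning
print(monitor.update(7.5))   # second warning
print(monitor.update(8.0))   # third strike: disabled
```

The point of the pattern-based approach is that a single glance away draws only a warning, while repeated inattention removes the feature entirely.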
A Tesla blog post describes the first fatality involving a self-drive system. A Tesla was driving on Autopilot down a divided highway. For unknown reasons, a truck crossed the highway (something may have been wrong for that to happen). A white truck body against a bright sky is not something the camera system in the Tesla perceives well, and a truck crossing perpendicular to you on the highway is also an unusual situation.
What does that car of the future look like? There is no one answer; in this world, the car that is sent to pick you up can be tailored to your trip. The more people traveling, the bigger the car. If your trip does not involve a highway, it may not need to be a car capable of highway travel.
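As an illustration of the idea, here is a toy sketch of how a fleet dispatcher might pick a vehicle class for a trip. The vehicle classes and rules are invented for the example, not drawn from any real service.

```python
# Toy dispatcher: choose a vehicle class from party size and route type.
# The vehicle classes and selection rules are invented for illustration.

def pick_vehicle(passengers: int, uses_highway: bool) -> str:
    if passengers == 1 and not uses_highway:
        return "one-seat city pod"      # small, low-speed, no highway capability
    if passengers <= 2:
        return "two-seat commuter"
    if passengers <= 4:
        return "four-seat sedan"
    return "shuttle van"

print(pick_vehicle(1, uses_highway=False))  # one-seat city pod
print(pick_vehicle(3, uses_highway=True))   # four-seat sedan
```

The design choice being illustrated is simply that in a fleet of hired vehicles, the vehicle can match the trip rather than every car being sized for the largest possible trip.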
Reports from Tesla suggest they are gathering massive amounts of driving data from logs in their cars — 780 million miles of driving, and as much as 100 million miles in autopilot mode. This contrasts with the 1.6 million miles of test operations at Google. Huge numbers, but what do they mean now, and in the future?
Brad Templeton argues that government regulation of robocar safety at this early stage, rather than 10-20 years after deployment, could significantly slow the development of safety technologies for cars, since regulations and standards generally codify existing practice and conventional wisdom. Instead, he offers another solution.