You could also offer only steering and leave both throttle and brakes to the driver. Or you could do mild braking — the kind needed to maintain speed behind a car you’re following — but depend on the driver for harder braking, which would also require them to take control of the wheel.
You can also do it in reverse — the driver steers but the computer handles the speed. That’s essentially adaptive cruise control, of course, and people like that, but again, with ACC your hands are on the wheel and you are unlikely to stop paying attention to the road. You will get a bit more relaxed, though.
Supervised autopilot with “countermeasures”
Some cars with autopilots or even lane keeping take steps to make sure the driver is paying attention. One common approach is to make sure they touch the steering wheel every so often. Tesla does not require regular touches, as some cars do, but does in certain situations ask for a touch or contact. In some cases, though, drivers who want to over-trust the system have found tricks to defeat the touch countermeasure.
A more advanced countermeasure is a camera looking at the driver’s eyes. This camera can notice if the driver takes his eyes off the road for too long and start beeping, or even slow the car down and eventually pull off the road.
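As a rough illustration of how such an escalation could be tuned, here is a minimal sketch in Python; the thresholds and action names are my own illustrative assumptions, not any manufacturer’s actual values:

```python
# Minimal sketch of an escalating driver-monitoring policy.
# Thresholds and action names are illustrative assumptions only.

EYES_OFF_WARN_S = 2.0    # start beeping
EYES_OFF_SLOW_S = 6.0    # begin slowing the car
EYES_OFF_STOP_S = 15.0   # pull off the road / come to a controlled stop

def monitoring_action(seconds_eyes_off: float) -> str:
    """Map continuous eyes-off-road time to an escalating response."""
    if seconds_eyes_off >= EYES_OFF_STOP_S:
        return "pull_over"
    if seconds_eyes_off >= EYES_OFF_SLOW_S:
        return "slow_down"
    if seconds_eyes_off >= EYES_OFF_WARN_S:
        return "beep"
    return "ok"

if __name__ == "__main__":
    for t in (1.0, 3.0, 8.0, 20.0):
        print(f"{t:>5.1f}s eyes off road -> {monitoring_action(t)}")
```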
Of course, those who want to over-trust the system hate the countermeasures and find they diminish the value of the product. And most people who over-trust do get away with it, if the system is good, which reinforces the over-trust.
Disable the autopilot if it is not used diligently
One approach not yet tried in privately owned cars would be countermeasures which punish the driver who does not properly supervise. If a countermeasure alert goes off too often, for example, it could mean that the autopilot function is disabled on that car for some period of time, or forever. People who like the function would stay more diligent, afraid of losing it. They would have to be warned that this was possible.
Drivers who lose the function may or may not get a partial refund if they paid for it. Because of this risk, you would not want people who borrowed a car to be able to use the function without you present, which might mean having a special key used only by the owner which enables the autopilot, and another key to give to family, kids or friends.
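A sketch of what such a policy could look like is below; the alert limit, suspension length and owner-key handling are all assumptions made up for illustration:

```python
from datetime import datetime, timedelta

# Illustrative policy only -- limits, windows and key handling are assumptions.
ALERT_LIMIT = 3                    # alerts tolerated per rolling window
WINDOW = timedelta(days=30)
FIRST_SUSPENSION = timedelta(days=14)

class AutopilotAccess:
    def __init__(self, owner_key_id):
        self.owner_key_id = owner_key_id
        self.alerts = []               # timestamps of countermeasure alerts
        self.disabled_until = None     # end of a temporary suspension, if any
        self.permanently_disabled = False

    def record_alert(self, when):
        """Count an alert; suspend on the first breach, disable forever on a repeat."""
        self.alerts.append(when)
        recent = [t for t in self.alerts if when - t <= WINDOW]
        if len(recent) > ALERT_LIMIT:
            if self.disabled_until is not None:
                self.permanently_disabled = True
            else:
                self.disabled_until = when + FIRST_SUSPENSION

    def may_engage(self, key_id, now):
        """Only the owner's key can engage, and only while not suspended."""
        if key_id != self.owner_key_id or self.permanently_disabled:
            return False
        return self.disabled_until is None or now >= self.disabled_until
```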
Of course, for many, this would diminish the value of the autopilot because they were hoping not to have to be so diligent with it. For now, it may be wisest to diminish the value for them.
Deliberate “fake failures” countermeasure
Cars could even do a special countermeasure, namely a brief deliberate (but safe) mistake made by the system. If the driver corrects it, all is good, but if they don’t, the system corrects it and sounds the alert and disables autopilot for a while — and eventually permanently.
The mistakes must be safe, which is challenging. One easy example would be drifting out of the lane when there is nobody nearby in the adjacent lane. The car would correct itself when it reached the line (in an extra-safe mode) or after getting a few inches into the unoccupied adjacent lane.
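Here is one way the drift test’s control flow might look, sketched in Python; the vehicle interface methods and the numbers used are hypothetical stand-ins, not any real system’s API:

```python
# Sketch of the deliberate-drift attention test. The `vehicle` object and its
# methods are hypothetical stand-ins; distances and timing are illustrative.

MAX_INTRUSION_M = 0.10   # never stray more than ~4 inches past the lane line
POLL_S = 0.05            # how often to check for a driver response

def run_drift_test(vehicle):
    """Run one attention test; return True if the driver passed."""
    if not vehicle.adjacent_lane_clear():
        return True                          # only ever test when it is clearly safe
    vehicle.begin_gentle_drift()
    while vehicle.lateral_intrusion_m() < MAX_INTRUSION_M:
        if vehicle.driver_applied_steering_torque():
            vehicle.resume_lane_keeping()
            return True                      # driver noticed and corrected: pass
        vehicle.wait(POLL_S)
    vehicle.resume_lane_keeping()            # system corrects itself
    vehicle.raise_countermeasure_alert()     # driver failed the test
    return False
```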
A test like this would keep drivers trained to watch for steering errors, but you also want them watching for braking and speed errors. This presents some risk, as you must get “too close” to a car ahead of you. To do that safely, the normal following distance would need to be expanded, and the car would every so often crowd to a closer, but still safe, distance. It would also be sure it could brake (perhaps harder than is comfortable) if the car in front were to change speed.
The problem is that the legal definition of too close — namely a 2-second headway — is a much wider gap than most drivers leave, and if you leave a 2-second gap, other drivers will regularly move into it if traffic gets thick. Most ACCs let you set the gap from 1 to 2.5 seconds, sometimes even less. You might let the user set a headway, and train them to know what counts as “too close” and demands an intervention.
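The arithmetic behind those headway numbers is simple; a quick Python check (the 65 mph speed is chosen arbitrarily for illustration) shows why a 2-second gap feels so large:

```python
def headway_gap_m(speed_mph: float, headway_s: float) -> float:
    """Distance in metres corresponding to a time headway at a given speed."""
    return speed_mph * 0.44704 * headway_s   # mph -> m/s, then times seconds

if __name__ == "__main__":
    for headway in (1.0, 2.0, 2.5):
        print(f"{headway:.1f} s at 65 mph ≈ {headway_gap_m(65, headway):.0f} m")
    # 1.0 s ≈ 29 m, 2.0 s ≈ 58 m, 2.5 s ≈ 73 m -- the "legal" 2-second gap is
    # roughly twice what many drivers actually leave.
```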
To do this well, the car might need better sensors than a mere autopilot demands. For example, the Tesla’s rear radars are perhaps not good enough to reliably spot a car overtaking you at very high speed, and so a fully automated lane change is not perfectly safe. However, in time the sensors will be good enough for that. The combination of radars and blindspot-tracking ultrasonics is probably good enough to know it’s safe to nudge a few inches into the other lane to test the driver.
This method can also be used to test the quality of professional safety drivers, who should be taken off the job and retrained if they fail it.
Rather than fake failures, you could just have a heads-up display which displays “intervene now” in the windshield for a second or two. If you don’t take over — beep — you failed. That’s a safe way to do this, but might well be gamed.
Not letting everybody try out the autopilot
A further alternative would be to consider that systems in Tesla’s class — which is to say, driver assist systems which can’t do everything but do so much that they will lull some people into misusing them — should perhaps be offered only to people who can demonstrate they will use them responsibly, perhaps by undergoing some training or a test, and of course losing the function if they set off the countermeasure alarm too often.
Make customers responsible
Tesla used some countermeasures, but mostly took the path of stressing to the customer that their system was not a self-drive system and drivers should supervise it. Generally, letting people be free to take their own risks is the norm in free societies. After all, you can crash a car in 1,000 ways if you don’t pay attention, and people are and should be responsible for their own behaviour.
Except, of course, for the issue of putting others on the road at risk. But is that an absolute? Clearly not, since lots of car features enable risky behaviour on the part of drivers which puts others at risk. Any high-performance sports car (including Tesla’s “Ludicrous” mode) or the ability to drive at 200 mph presents potential risks for other road users as well as the driver. The same is true of distractions in the car, such as radios, phone interfaces and more. Still, we want to hold risk to others to a higher, though not absolute, standard.
Just watch
You can get some value from having a system which requires the driver to drive completely manually, but compares what the self-drive system would have done to that, and identifies any important differences. This is easy to do and as safe as other manual driving — actually safer because you can implement very good ADAS in such a system — but the hard part is pulling out useful information.
The self-drive system will differ from a human driver quite a lot. You need automated tools to pull out the truly important differences and have them considered and re-run in a simulator by a test engineer. To make this workable, you need good tools that only bother the test engineer with worthwhile situations. If the driver brakes or swerves suddenly and the self-drive system did not want to, that’s probably useful information.
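A sketch of the kind of filter that decides which moments are worth an engineer’s time might look like this; the thresholds and record fields are assumptions for illustration, not anyone’s real pipeline:

```python
from dataclasses import dataclass

# Illustrative shadow-mode filter: thresholds and field names are assumptions.
BRAKE_DISAGREE_MPS2 = 2.0    # human braked notably harder than the shadow planner wanted
STEER_DISAGREE_DEG = 10.0    # steering commands differ by a large angle

@dataclass
class Frame:
    timestamp_s: float
    human_brake_mps2: float      # deceleration the human commanded
    planned_brake_mps2: float    # deceleration the shadow planner would have commanded
    human_steer_deg: float
    planned_steer_deg: float

def worth_reviewing(f: Frame) -> bool:
    """True if this moment should be queued for re-simulation and engineer review."""
    hard_brake_disagreement = (f.human_brake_mps2 - f.planned_brake_mps2) >= BRAKE_DISAGREE_MPS2
    steering_disagreement = abs(f.human_steer_deg - f.planned_steer_deg) >= STEER_DISAGREE_DEG
    return hard_brake_disagreement or steering_disagreement

def interesting_frames(frames):
    """Keep only the disagreements worth a test engineer's attention."""
    return [f for f in frames if worth_reviewing(f)]
```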
This approach requires a car with a full suite of self-drive sensors, as well as possibly a set of all-around cameras for the test engineer to study. The problem is that this is expensive, and since it’s not giving the car owner much benefit, the owner is unlikely to pay for it. So you could deploy it to a subset of your cars if you have a lot of cars on the road.
Watching is always valuable, no matter what, and also provides training data for neural networks and updates to map data, so you want to do it in as many cars as you can.
Real beta tests
Tesla would probably assert they were not “testing” their autopilot using customers. They, and all the other makers of similar products, would say that they believed their products to be as safe as other such functions, as long as they were properly supervised by an alert driver. Their safety record to date broadly backs this up.
Some smaller startups may actually desire to release tools before they are ready, actively recruiting customers as true beta testers. This is particularly true for small companies who have thought about doing retrofit of existing cars. Larger companies want to test the specific configuration that will go on the road, and thus don’t want to do retrofit — with a retrofit, a customer car will go out on the road in a configuration that’s never been tried before.
This is the riskiest strategy, and as such the beta testing customers must be held to a high standard with good countermeasures. In fact, perhaps they should be vetted before being recruited.
Testing, validation and training
While I’ve used the word testing above, there are many things you gain from on-road operation. Tesla would say they were not testing their system — they tested it before deployment. It is intended only for supervised operation and was judged safe in that situation, like cruise controls or lane keeping. On the other hand, lots of supervised operation does validate a system and also helps you learn things about it which could be improved or changed, as well as things about the road. Most importantly, it informs you of new situations you did not anticipate.
That is the value of this approach, with diligent supervisors. You can keep improving an autopilot system and watch how well it does. Some day you will find that it doesn’t need many interventions anymore. Once it gets to 200,000 miles or more per intervention, you have well surpassed the average human driver. Now you can claim to have a self-driving system, and perhaps tell users they can stop supervising. This is the evolutionary approach car makers intend, while the non-car companies (like Google, Tesla and others) seek to go directly to the full self-driving car.
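The bookkeeping behind that milestone is just fleet miles divided by interventions; a toy calculation (the fleet figures are invented) makes the scale concrete:

```python
# Toy miles-per-intervention tracker. The 200,000-mile goal comes from the text;
# the fleet figures below are invented for illustration.
TARGET_MILES_PER_INTERVENTION = 200_000

def miles_per_intervention(fleet_miles: float, interventions: int) -> float:
    """Average miles driven between safety-relevant interventions."""
    return float("inf") if interventions == 0 else fleet_miles / interventions

if __name__ == "__main__":
    mpi = miles_per_intervention(fleet_miles=5_000_000, interventions=180)
    print(f"{mpi:,.0f} miles per intervention")          # about 27,778 -- far from the goal
    print("goal reached:", mpi >= TARGET_MILES_PER_INTERVENTION)
```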
The approaches are so different that the car companies made sure there was a carve-out in many of the laws that were passed to regulate self-driving cars. Cars that need supervision, that can’t operate without it, are exempted from the self-driving car laws. They are viewed by the law as just plain old cars with driver assist.
Conclusion
I think that for Tesla and the other companies marketing high-end driver assist autopilots that require supervision, the following course might be best:
- Have a sensor package able to reliably detect that it is safe to wander a short distance into adjacent lanes.
- Have sufficient countermeasures to assure the driver is paying attention, such as an eye tracking camera.
- Consider having a “strange event” countermeasure where the vehicle drifts a bit out of its lane (when safe) and if the driver fails to correct it, the car does and signals a countermeasure alert.
- Too many countermeasure alerts and the driver assist function is disabled long term, with or without a refund, as indicated in the contract.