Pete Kelly, Managing Director
23 July 2019
Engaging with Autonomous Vehicle disengagement
Developers of Autonomous Vehicles (AVs) testing in California regularly report the number of disengagements that take place in their vehicles. A disengagement is a situation in which the autonomous system stops working, or the safety driver judges it necessary to take control of the vehicle to correct a potentially unsafe action or inaction by the AV. We noted the potential to use disengagements for validation purposes in a recent blog post.
Progress looks good so far (see chart below), with ever-increasing distances driven between disengagements reported by leading developers.
[Chart: distance driven between disengagements, by developer. Source: California DMV]
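The headline figure behind charts like this is simply distance driven divided by the number of disengagements reported. A minimal sketch of that calculation, using made-up figures rather than real DMV data, makes clear how little the metric captures on its own:

```python
# Miles driven per disengagement: the headline metric derived from the
# California DMV reports. The figures below are illustrative only.
reports = [
    {"developer": "A", "miles": 1_200_000, "disengagements": 100},
    {"developer": "B", "miles": 50_000, "disengagements": 500},
]

def miles_per_disengagement(report):
    # A higher value suggests fewer interventions per mile driven, but
    # says nothing about how challenging the test conditions were or
    # what each developer counts as a disengagement.
    return report["miles"] / report["disengagements"]

for r in reports:
    print(r["developer"], miles_per_disengagement(r))
```

On these invented numbers, developer A reports 12,000 miles per disengagement and developer B reports 100, yet the comparison is meaningless without knowing where and how each fleet was tested.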
It has been suggested that this might be a successful model for measuring AV performance as part of validation and regulation, as AVs move towards a wider real-world deployment.
There are two important reasons why this probably will not work.
The first is that the definition of a disengagement is not set to an agreed standard: the AV developers themselves decide what numbers to report. Did the self-driving system fail completely? Was the AV driving too close to another vehicle for the safety driver’s comfort? Was the AV unable to decide on a course of action in a complex situation? Was an imminent collision only avoided because of human intervention? Before disengagements can be used in validation, the reporting rules will need to be defined far more clearly.
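To illustrate what clearer reporting rules might look like, the questions above can be read as a rough taxonomy of disengagement causes. The sketch below is purely hypothetical: the category names and record fields are our own, not part of any agreed standard.

```python
from dataclasses import dataclass
from enum import Enum

class Cause(Enum):
    # Categories drawn from the questions in the text above; a real
    # reporting standard would need an agreed, exhaustive taxonomy.
    SYSTEM_FAILURE = "self-driving system failed completely"
    UNSAFE_PROXIMITY = "driving too close for the safety driver's comfort"
    PLANNER_INDECISION = "unable to decide in a complex situation"
    COLLISION_AVOIDANCE = "human intervention avoided an imminent collision"

@dataclass
class Disengagement:
    # One structured record per event, so that reports can be
    # compared like for like across developers.
    cause: Cause
    miles_since_last: float
    description: str
```

Under a scheme like this, a regulator could compare disengagement rates per category rather than a single undifferentiated count.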
The second is that a high disengagement rate during testing or validation is not necessarily a bad thing. A high number of disengagements could occur because the AV developer is subjecting its vehicle to particularly challenging situations, exploring new and unlikely edge cases, perhaps even beyond what might be expected from a human-driven vehicle. Conversely, low reported numbers of disengagements could signify a low-risk test environment, perhaps for an AV that finds complex real-world conditions challenging. Measurement of disengagements may be internally consistent within each organisation, but not across all of them.
Some general level of validation will be required for AVs, but disengagements alone are not up to this task. Clear standards for what constitutes a disengagement would need to be agreed, together with much fuller disclosure of the circumstances of every disengagement.