Pete Kelly, Managing Director
19 July 2019
Allowing for Autonomous Vehicles: how can they proliferate if there is no clear path for release?
Autonomous Vehicles (AVs)* are being tested in numerous locations in the US, with permission to do so granted by a number of states. Does this mean the way is now clear for widespread launch? The short but clear answer is no.
Permission to test is a world away from a working model for validation, safety certification and a clear set of regulations. And it appears that, even after some years of grappling with these issues, the way in which AVs will be legally offered to the market is still uncertain.
Under today’s regulations for human-driven vehicles, manufacturers must certify that their vehicles can meet expected safety standards, for example braking safely within a reasonable stopping distance. The technology for meeting these requirements is not always specified because it does not need to be. OEMs are free to use whatever solutions (disc, drum, ABS etc.) are needed to meet the standard. Particularly successful technologies can later become a requirement and ABS is now, of course, mandated on new vehicles in many locations.
AVs could be subject to similar measures, but there is a key difference: in an AV, the vehicle is also the driver. Under today's model these roles are assessed separately: the conventional vehicle must be certified for safe operation, while the human driver must demonstrate competence through a test. For an AV, both responsibilities fall on the same system.
The problem is that there is, as yet, no agreement or clear way forward on which characteristics of AVs should be measured. Should they have crash risk assessments, with a defined limit on incidents per million miles of operation? If so, how will this be measured? (Side note: disengagements of the kind reported in California would not be effective for this – we will cover this subject in a future blog post.) Should specific technological functions – such as sensor performance, or driving decision-making confidence – have maximum permitted error rates? Who decides what those rates will be, and at what level? Might specific technologies be mandated for safety purposes, as they are in other industries such as aviation and rail? Will there be a minimum requirement for failsafe procedures (even supercomputers will inevitably crash)?
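To make the measurement question concrete, here is a minimal sketch of how an incidents-per-million-miles metric might be computed. The function name and the fleet figures are hypothetical, chosen purely for illustration; real data of this kind does not yet exist in any agreed form.

```python
def incidents_per_million_miles(incidents: int, miles_driven: float) -> float:
    """A rate metric of the kind a regulator might define:
    incidents normalised per million miles of autonomous operation."""
    if miles_driven <= 0:
        raise ValueError("miles_driven must be positive")
    return incidents * 1_000_000 / miles_driven

# Hypothetical figures (illustrative only, not real fleet data):
rate = incidents_per_million_miles(incidents=3, miles_driven=2_500_000)
print(rate)  # 1.2 incidents per million miles
```

Even this trivial calculation hints at the difficulty: with rare incidents and limited test mileage, the estimate is statistically fragile, which is part of why defining and measuring such a threshold is harder than it first appears.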
The more this is examined, the more complex it becomes.
In the US, which leads AV development more generally, the political timetable for passing (currently stalled) AV legislation implies that it could be several years before laws and regulations are explicitly framed, and even then only if the issues above are adequately addressed or circumvented.
For some time, regulators have pointed out that, without significant progress in oversight of AV usage, regulation could become a barrier to the widespread adoption of AVs. These warnings look increasingly credible.
*Here we discuss only vehicles at SAE Level 4 and above.