Are we personally liable for accidents caused by our autonomous vehicles? What happens when a machine has to decide who will be injured and who will not in an unavoidable accident? How can drivers overcome their scepticism about autonomous driving and shake off possible fears?
Increasing trust in autonomous systems through transparency and knowledge
A survey we carried out on driver acceptance in Germany, the USA and China revealed that consumers are very open to new technologies. However, as automation increases, driver confidence in the technology decreases. “The overwhelming majority of those surveyed would like the option of taking over control in critical situations,” explains Frank Schierge. “And this despite the fact that 99 percent of all accidents are caused by human error, and not even one percent by technical problems.”
According to the expert, the more knowledge, the greater the willingness to give control to electronic systems. “This is all the more true when convenience is noticeably increased. Drivers are, however, clear in making distinctions: sophisticated driver assistance systems increase safety and comfort, but AI takes decision-making away from them.”
Positive perception of vehicles that clearly signal limitations
Work is therefore required on creating trust. “Road tests revealed to us that drivers react very positively when they see that the switch from autonomous to manual mode works. They want to be reliably informed by an acoustic signal or symbol when the system switches off. Trusting the system also means knowing its limitations. Acceptance is then all the greater.”
The fact is that today, driver assistance systems already intervene when an accident is imminent or unavoidable – a classic example is autonomous emergency braking. When sensor and camera systems detect that a forward collision might occur, the system ultimately makes the decision to brake without consulting the driver. “Other systems will be making more far-reaching decisions,” stresses Frank Schierge. “We then need to ask ourselves whether we will still want to retain the ability to intervene, even though human reactions are worse in most situations. Who do I trust more – humans or machines?”
Increasing driver willingness to purchase
Schierge and his team also identified a major “test drive effect” in field trials: “Drivers wouldn’t choose many systems when configuring their vehicles. But when they experience the systems in practice, they’re often very impressed.” Points that should be highly interesting to manufacturers: if a system can be deactivated by the driver, reacts in a logical way, its limitations and functionality are transparent, and the driver can experience it in practice, then the willingness to purchase it increases.
From other tests carried out by TÜV Rheinland, Frank Schierge sees a clear requirement: “Operation of the assistance systems must become easier. For example, in a rental car the renter needs to know which systems are installed and activated and what feedback they give.” The same applies when switching between cars with different equipment. Habituation to a system can sometimes cancel out its safety gains: those used to a blind spot assistant may stop looking over their shoulder, and those used to backing up with parking sensors may be in for a nasty surprise in a vehicle without them.
“For both assistive technologies and the handover of driving tasks, research is still required into how they can gain acceptance,” emphasizes Frank Schierge, adding: “Gradual implementation will make it easier to get used to them.”
TÜV Rheinland plays a key role in automated system inspections and approvals
We pursue research into highly developed driver assistance systems and automated driving with the same intensity as our responsibility for approving them and introducing them into the public space. Our inspection and certification services make us a key link between automobile manufacturers, suppliers and developers on the one hand, and approval authorities on the other. This is because “driving systems … need official licensing and monitoring” – as clearly stated by the Ethics Commission.
We would like to know what you think about the ethical issues around automated systems
- Vehicles drive more reliably, they have no emotions, their behavior can be programmed to be more predictable, and they react within milliseconds. Do you still want to be able to overrule an autonomous system?
- How should AI systems decide in inevitable accident dilemmas? Should such responsibility lie with the vehicle manufacturer, the programmer or with another authority?
- How should rules on humans re-taking control be defined and what time frame do you think is appropriate for a handover?
- Does autonomous emergency braking already pose ethical questions? After all, it affects road users behind us whose vehicles lack autonomous emergency braking and who are then more likely to be involved in an accident …
Tell us what you think!
Smart Mobility Team