Are we personally liable for accidents caused by our autonomous vehicles? What happens when a machine has to decide who will be injured and who not in an unavoidable accident? How can drivers lose their scepticism about autonomous driving and shake off possible fears?

This is the second part of the article; you can find the first part on our blog as well.

Increasing trust in autonomous systems through transparency and knowledge

A survey we carried out on driver acceptance in Germany, the USA and China revealed that consumers are very open to new technologies. However, as automation increases, driver confidence in the technology decreases. “The overwhelming majority of those surveyed would like the option of taking over control in critical situations,” explains Frank Schierge. “And this despite 99 percent of all accidents being caused by human error, and not even one percent by technical problems.”

According to the expert, the more knowledge, the greater the willingness to give control to electronic systems. “This is all the more true when convenience is noticeably increased. Drivers are, however, clear in making distinctions: sophisticated driver assistance systems increase safety and comfort, but AI takes decision-making away from them.”

Positive perception of vehicles that clearly signal limitations

Work is therefore required on creating trust. “Road tests revealed to us that drivers react very positively when they see that the switch from autonomous to manual mode works. They want to be reliably informed by an acoustic signal or symbol when the system switches off. Trusting the system also means knowing its limitations. Acceptance is then all the greater.”

The fact is that today, driver assistance systems already intervene when an accident is imminent or unavoidable – a classic example is autonomous emergency braking. When sensor and camera systems detect that a forward collision might occur, the system ultimately makes the decision to brake without consulting the driver. “Other systems will be making more far-reaching decisions,” stresses Frank Schierge. “We then need to ask ourselves whether we will still want to intervene even though human reactions are worse in most situations. Who do I trust more – humans or machines?”
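The decision logic described above can be sketched as a simple time-to-collision check. This is a minimal illustration only – the function name and the threshold values are assumptions for the sketch, not any manufacturer's actual parameters:

```python
def aeb_decision(distance_m, closing_speed_mps, ttc_warn_s=2.6, ttc_brake_s=1.4):
    """Minimal autonomous-emergency-braking sketch.

    Computes time-to-collision (TTC) from sensor inputs and returns
    the system's action. Threshold values are illustrative assumptions.
    """
    if closing_speed_mps <= 0:
        # Not closing in on the obstacle: nothing to do.
        return "no_action"
    ttc = distance_m / closing_speed_mps  # seconds until impact at current speeds
    if ttc < ttc_brake_s:
        # The system brakes without consulting the driver.
        return "brake"
    if ttc < ttc_warn_s:
        # First stage: acoustic/visual warning so the driver can react.
        return "warn"
    return "no_action"

print(aeb_decision(40.0, 10.0))  # TTC = 4.0 s -> no_action
print(aeb_decision(20.0, 10.0))  # TTC = 2.0 s -> warn
print(aeb_decision(10.0, 10.0))  # TTC = 1.0 s -> brake
```

The staged structure (warn first, brake only when a collision is otherwise unavoidable) mirrors the point made in the article: the driver is informed, but in the final fraction of a second the machine decides.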


Increasing driver willingness to purchase

Schierge and his team also identified a major “test drive effect” in field trials: “Drivers wouldn’t choose many systems when configuring their vehicles. But when they experience the systems in practice, they’re often very impressed.” Points that should be highly interesting to manufacturers: if a system can be deactivated by the driver, reacts in a logical way, its limitations and functionality are transparent, and the driver can experience it in practice, then the willingness to purchase it increases.


From other tests carried out by TÜV Rheinland, Frank Schierge sees a clear requirement: “Operation of the assistance systems must become easier. For example, in a rental car the renter needs to know which systems are installed and activated and what feedback they give.” The same applies when switching between cars with different equipment. Habituation to a system can sometimes cancel out its safety gains. Those used to the blind spot assist may stop looking over their shoulder; those used to backing up with parking sensors may have a nasty surprise in a vehicle without them.

“For both assistive technology and the handover of driving tasks, research is still needed on how to gain acceptance,” emphasizes Frank Schierge, adding: “Gradual implementation will make it easier to get used to them.”

TÜV Rheinland plays a key role in automated system inspections and approvals

We give the same intensity to research into highly developed driver assistance systems and automated driving as we do to our responsibility in approving and introducing them into the public space. Our inspection and certification services make us a key link between automobile manufacturers, suppliers and developers on the one hand, and approval authorities on the other. This is because “Driving systems … need official licensing and monitoring” – as clearly stated by the Ethics Commission.

We would like to know what you think about the ethical issues around automated systems


  • Automated vehicles drive more reliably: they have no emotions, their behavior can be programmed to be more predictable, and they react within milliseconds. Do you still want to be able to overrule an autonomous system?
  • How should AI systems decide in inevitable accident dilemmas? Should such responsibility lie with the vehicle manufacturer, the programmer or with another authority?
  • How should rules on humans re-taking control be defined and what time frame do you think is appropriate for a handover?
  • Does autonomous emergency braking already pose ethical questions? After all, it affects road users behind us who do not have autonomous emergency braking and are then more likely to be involved in an accident.

Tell us what you think!

We look forward to your opinion in the comments.
Author of the article
Smart Mobility Team


Editorial Team

The Smart Mobility Team is an editorial team that deals with all topics related to the mobility of the future.


Comments

1 Comment

  1. Bernd Nürnberger

    How about treating autonomous vehicles much the same as human drivers? Same rules, rewards, and penalties. Society seems to be fine with assuming most of us drive as well as we can. For a century, better driver training, better vehicles, a better legal framework, and better medical support have helped bring down accidents, injuries, and traffic deaths.

    Did we see broad discussion of ethical issues around human drivers? If not, how is such discussion helpful for autonomous algorithms that are assumed or proven to cause fewer accidents? Not really relevant in my view. When used repeatedly without valid evidence, it resembles a weapon of mass distraction.

    My take: Let the autonomous vehicles prove their mettle daily in real-life traffic, like everyone else. Zero risk does not exist anywhere. Ask an actuary (insurance mathematician). As ordinary people, we know a little about how to minimize risks so we can enjoy the opportunities. Let’s try this.

    1. The autonomous vehicle shall pass the driver’s test in a car with dual controls, with the qualified driving instructor in the right seat. As usual, the TÜV inspector in the back seat keeps score. Spoken route instructions may not be recognized, so allow route programming. Pass-fail criteria are the same. If the driving instructor has to touch any pedal or the wheel, the candidate has failed the test. Back to driving school; try again later.

    2. If the autonomy software passes the test, congratulations. Issue a “Driver’s License” (permit) normal for the vehicle class. The permit is valid for the specific software version and sensor configuration, including defined learning and limits, just like a beginner driver. New software version, new test.

    3. Insurance premiums are high for beginner drivers, and the same should apply to autonomous vehicles. If accident-free over time, the insurance may grant the usual no-claims discount.

    Additional considerations: How can we simplify and standardize the way autonomous systems inform the driver about capabilities and limitations? Should this be part of the system license, and required to be carried in the vehicle the way an airworthiness certificate is required in an airplane? How about security against remote control and hacking? Should over-the-air updates of driving software be prohibited – just as you are not supposed to speak to the driver while the bus is in motion? No driving under the influence? And if sensor data conflict or the software acts abnormally, can the system detect this, get the human driver to take control, or stop safely?


