
Can "safe" technology actually make us less safe?

If you haven't noticed the race to produce the world's first self-driving car, you have obviously been distracted by other things. Outside of launching rockets to resupply the ISS, few efforts have been speeding up as quickly as the drive to build autonomous vehicles that replace human drivers.

But what does it really mean to be 'autonomous'?

While the Tesla system was cleared after a fatal incident, the National Highway Traffic Safety Administration had some clear warnings for automakers, including a recommendation to rename the self-driving capabilities. Investigators concluded the system performed as intended, but found that its actual capabilities fell well short of what the name 'Autopilot' implies.

This raises an important consideration in designing and deploying automated capabilities that still need to work closely with their human operators: observability. If a person is required to share responsibility for the complex tasks involved in driving with an intelligent machine, that person needs the ability to clearly communicate and coordinate with the technology about who is doing which task, what each party intends to do next, and how handoffs of control will be handled. We wouldn't expect two human beings to jointly drive a car without a clear understanding of, and proper communication about, these issues, so why don't we expect the same of a person and a computer? A minimal sketch of what that might look like follows below.
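To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical names, not drawn from any actual vehicle software) of an observable handoff: the automation announces which task it is handling, states what it intends to do next, and asks for a confirmed transfer of control instead of silently disengaging.

from dataclasses import dataclass, field
from enum import Enum, auto


class ControlState(Enum):
    AUTOMATION_DRIVING = auto()
    HANDOFF_REQUESTED = auto()   # automation has asked the human to take over
    HUMAN_DRIVING = auto()


@dataclass
class SharedControl:
    """Toy model of an observable human/automation handoff.

    Every transition is announced so both parties can see who is doing
    what and what happens next; nothing changes hands silently.
    """
    state: ControlState = ControlState.AUTOMATION_DRIVING
    log: list = field(default_factory=list)

    def announce(self, message: str) -> None:
        self.log.append(message)
        print(message)

    def request_handoff(self, reason: str) -> None:
        # The automation signals its limits and its intent instead of just dropping out.
        self.state = ControlState.HANDOFF_REQUESTED
        self.announce(f"AUTOMATION: {reason}. Please take the wheel; "
                      "lane-keeping stays active until you confirm.")

    def human_confirms(self) -> None:
        # Control only transfers once the human explicitly acknowledges it.
        if self.state is ControlState.HANDOFF_REQUESTED:
            self.state = ControlState.HUMAN_DRIVING
            self.announce("HUMAN: I have the wheel.")


if __name__ == "__main__":
    car = SharedControl()
    car.request_handoff("Lane markings ahead are unreadable")
    car.human_confirms()

The point of the sketch is not the code itself but the design choice it illustrates: the transfer of control is an explicit, two-way exchange that both the person and the machine can observe, rather than something either party has to infer after the fact.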

In the race to claim the first truly autonomous vehicle, automakers need to consider these shared cognitive challenges and develop strategies to keep drivers, passengers, and pedestrians safe. Until they do, we need to recognize the tradeoff we make when we replace humans, who are largely capable of successful performance across a wide range of conditions, with computers that can only handle relatively simple ones.
