Are we fit to drive?

# Going beyond our biological limits

On May 7, 2016, a man died when his car drove under a tractor trailer at 119 km/h on Route 27 in Florida, the windshield hitting the underside of the trailer. The accident became headline news because the car was a Tesla Model S and this was the first road death caused by the malfunctioning of a ‘self-driving’ car. It seems that neither car nor driver spotted the white trailer against the bright sunshine, and there was no attempt to slow the vehicle down. Having clocked up over 208 million kilometres without a fatal accident, compared with one fatality per 150 million kilometres amongst all vehicles in the United States, Tesla argued that its autopilot (the brand name for its semi-autonomous driving system) was still safer than the human equivalent. This raises the question: which is better at detecting and responding to an emergency, man or machine?

Evolution has not left us particularly well prepared for travelling at high speed. Humans are adapted for walking, which we can do with fifty percent better oxygen efficiency than running (Cunningham, Schilling, Anders, & Carrier, 2010). The world’s fastest man, Usain Bolt, clocked a maximum speed of 44 km/h when he set the one hundred metre record, whilst elite distance runners average around 25 km/h. These are the exceptions; most of the time we walk, and the speed at which we safely navigate our environment is around a metre a second. This is about one tenth of Usain Bolt’s top speed, and twenty times less than what is required to drive a car on the motorway.

With the exception of when someone gives you some ‘auditory feedback’ on your driving skills, the only sensory system we can rely on whilst driving is vision. Calculating the distance and velocity of other cars on the motorway pushes our depth perception beyond its limits. At close range we can calculate distance precisely by comparing the positional difference between the images captured by each eye, known as binocular vision. Beyond about five metres, however, there is very little difference between the images, so we use cues such as relative size, texture and shadow to add depth to the two-dimensional image that hits our retinas. Darkness takes these cues away from us, and all we have to go on whilst driving at night is the distance between brake lights, assuming they are a standard distance apart. The flaw in this strategy is that it makes smaller cars seem further away, which can cause your heart to skip a beat if you come up behind a small car travelling slowly on a pitch-black motorway.
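The geometry of this brake-light cue is simple enough to sketch. The widths and distances below are illustrative assumptions of my own, not measurements:

```python
import math

# If the brain judges distance from the angle the two tail lights subtend,
# while assuming every car has a 'standard' width, a narrower car at the
# same true distance will appear further away.

def apparent_distance(true_width_m, true_distance_m, assumed_width_m=1.6):
    # Angle subtended by the tail lights at the observer's eye:
    angle = 2 * math.atan(true_width_m / (2 * true_distance_m))
    # Invert the same geometry using the assumed width instead:
    return assumed_width_m / (2 * math.tan(angle / 2))

# A 1.2 m-wide microcar 30 m ahead, judged as if it were 1.6 m wide,
# appears to be about 40 m away, i.e. 10 m further than it really is.
print(round(apparent_distance(1.2, 30), 1))
```

The same inversion with the correct width returns the true distance, which is why the illusion only bites when the car ahead is unusually narrow.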

It was not until I experienced Tesla autopilot for myself that I realised just how poor I am at judging relative distance and speed on the motorway. In addition to cameras, Teslas are equipped with a forward-facing radar with a range of over one hundred and fifty metres. It is quite spooky how autopilot starts to decelerate in response to a slow-moving vehicle ahead, several seconds before you would notice it yourself. Alongside the radar used for distance perception, ultrasonic sensors monitor the space immediately around the car, detecting objects to your sides. The most recent Tesla models have six cameras around the car in total, giving a simultaneous view in all directions. It is not only like having eyes in the back of your head, it is like having them on the sides too. These are used to determine whether it is safe to change lanes, and should a driver not spot you in their blind spot, autopilot will move away, provided it is safe to do so. Autopilot’s vision processing software uses a brain-mimicking, neural-network approach to match objects to a pre-set list. Whilst this software can only identify a limited number of objects compared to a real brain, the number of sensors and the ability to monitor them simultaneously gives autopilot a major advantage over human drivers when it comes to identifying danger front, side and back.

Once an emergency has been detected, the most important thing is to react as quickly as possible. Typical human reaction times to visual stimuli are around two hundred and fifty milliseconds, but it also takes time to assess potential danger and initiate a response. This response time needs to be added to the perception time to give the total reaction time. Green (2000) investigated perception-response times to motoring emergencies by reviewing data from 27 previous studies. He found that it takes about three quarters of a second to detect an emergency event and a further three quarters of a second to start braking. When driving at one hundred kilometres an hour you travel over forty metres in these one and a half seconds, which is more than the emergency braking distance of most modern cars.
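Green’s figures can be turned into a back-of-the-envelope stopping calculation. This is a rough sketch; the 9 m/s² deceleration is an assumed figure for a modern car braking hard on dry tarmac, not a measured one:

```python
# Distance covered during the perception-response time, plus the
# distance covered while actually braking.

def stopping_distances(speed_kmh, reaction_s=1.5, decel=9.0):
    v = speed_kmh / 3.6                # km/h to m/s
    reaction_m = v * reaction_s        # travelled before the brakes bite
    braking_m = v ** 2 / (2 * decel)   # travelled while decelerating
    return reaction_m, braking_m

reaction, braking = stopping_distances(100)
# At 100 km/h: roughly 42 m of 'thinking' distance followed by roughly
# 43 m of braking distance -- about 85 m in total before standstill.
```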

Whilst Olympic sprinters regularly hit the blocks in under one hundred and fifty milliseconds (Hanratty, 2017), Formula One drivers’ response times are surprisingly average. What sets them apart are their finely tuned motor skills. When Top Gear presenter Richard Hammond learned to drive a Formula One car, he was advised that he needed to stop leaving such a ‘large gap’ between switching from the accelerator to the brake (BBC Top Gear, 2007). It turned out that, for a Formula One driver, a ‘large gap’ is actually only half a second. It is not possible to initiate such rapid responses consciously; for this they must be using a brain area known as the cerebellum. From the Latin for ‘little brain’, the cerebellum is responsible for automatic, rapid, precise movements that require accurate timing. Its position at the bottom of the brain, at the top of the spinal cord, puts it in pole position to respond to sensory information from the body with motor neuron patterns that have been previously rehearsed. The more you practise, the more patterns get encoded into your cerebellum and the faster your intuitive motor responses get. Performing an emergency stop is not something a regular driver needs to do often, and changing between pedals is quite an unnatural movement that requires conscious thought. We could stop a car a lot quicker if there were a more intuitive mechanism, such as yelling or an emergency stop button. Self-driving cars have a considerable advantage when it comes to perception-response times. Once an emergency has been detected, an autonomous emergency braking (AEB) system can respond almost instantaneously.

A study conducted by the Australian Centre for Automotive Safety Research made the conservative assumption that an AEB system detects and responds to an emergency in two hundred milliseconds (Doecke, Anderson, Mackenzie, & Ponte, 2012). The researchers analysed data from 103 car accidents and calculated the probable outcome had AEB been available. They found that around 65 percent of the accidents could have been avoided, or the impact speed reduced to less than ten kilometres per hour. Clearly this would have prevented a lot of driver injuries, but the study did not assess the benefits for cyclists or pedestrians. This is particularly pertinent in the Netherlands, where in 2017, for the first time, the number of cyclists killed in accidents exceeded the number of motorists killed.
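To see why the 200-millisecond assumption matters so much, consider a hypothetical scenario (the obstacle distance and deceleration here are my own illustrative assumptions, not figures from the study): an obstacle appears 45 metres ahead of a car travelling at 100 km/h.

```python
import math

# Impact speed for a given response time, assuming a constant
# 9 m/s^2 deceleration once the brakes are applied.

def impact_speed_kmh(speed_kmh, obstacle_m, response_s, decel=9.0):
    v = speed_kmh / 3.6
    braking_room = obstacle_m - v * response_s  # metres left when braking starts
    if braking_room <= 0:
        return speed_kmh                        # never brakes before impact
    v_sq = v ** 2 - 2 * decel * braking_room
    return math.sqrt(v_sq) * 3.6 if v_sq > 0 else 0.0

human = impact_speed_kmh(100, 45, 1.5)  # ~1.5 s perception-response time
aeb = impact_speed_kmh(100, 45, 0.2)    # the study's AEB assumption
# The human driver hits at roughly 96 km/h; the AEB-equipped car has
# shed most of its speed and hits at roughly 28 km/h.
```

Nearly all of the difference comes from the extra 1.3 seconds of braking room, which is exactly the mechanism the study quantifies.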

In the three months I have been living here, I have already seen one bike wiped out by a car on a roundabout cycle lane. As a cyclist and a motorist, I must confess that a lot of the time I have no idea whose right of way it is in Amsterdam. Generally, I just watch what other people do and follow them. Whilst an Amsterdammer may think I am being foolish, using social inference in the absence of knowledge is actually quite an advanced cognitive skill, and one currently well beyond the capability of Artificial Intelligence (AI). For example, autopilot does not recognise particular brands of cars that tend to be driven more aggressively, and cannot tell that the driver in front is driving erratically because they are using their phone. Worse still, if a driver ahead moves out of the way to avoid danger, autopilot will actually accelerate towards the danger they were avoiding if there is nothing obviously blocking the lane. The reason younger drivers tend to have more accidents is their lack of intuition, and in this regard autopilot is still a ‘learner driver’ that needs diligent adult supervision.

Another cause for concern with allowing fully autonomous cars on European roads is that most of the software is being developed and tested for the American market, where the roads are wide and there are few cyclists. European manufacturers are leading the way when it comes to autonomous cyclist protection, with Volvo announcing a dedicated cyclist detection system over five years ago. More manufacturers have followed suit, and earlier this year the European New Car Assessment Programme introduced a test for cyclist AEB, funded by the Dutch government (Euro NCAP, 2018). The first car to pass this test was the new, all-electric Nissan Leaf. Being almost silent, electric cars are particularly dangerous to cyclists and pedestrians, who cannot hear them coming. Euro NCAP hopes that these systems will lead to a reduction in cyclist deaths similar to that seen for motorists in the five years since certification for AEB systems was introduced.

When a self-driving car makes a mistake, it is global news. Human attention failure, however, is very common. It seems that the driver of the Model S that collided with the truck was watching a Harry Potter film on a portable DVD player at the time of the accident. Autopilot is still ‘in beta’ and drivers are required to pay attention at all times. The driver’s lack of attention would seem to be as much to blame as autopilot’s failure to pick out a white trailer against the bright sun. Whilst autonomous vehicles have an advantage in terms of perception-response times, only a human driver can intuitively sense dangerous driving conditions. Man with machine is much better equipped for high-speed driving than man or machine alone.

Whilst car companies seem committed to producing cars with greater autonomy in the future, there are many legal and ethical problems to be resolved in addition to the technical ones. For example, should a car kill its driver to save two pedestrians, or an adult to save a child? Autonomous motorway driving, however, is a much more straightforward use case, and there is growing evidence that it is safer. This is because high-speed driving challenges not only our perception and motor skills but also our capacity to maintain concentration on a single task for a long period. I sold my Tesla to fund my studies. Whilst I am enjoying learning the psychology behind all of this, I do miss my bionic co-pilot.

Paul Cook

Reference List

BBC Top Gear (2007). Renault R25 Formula One Car. Series 10, episode 8. Retrieved 13 December 2018.

Cunningham, C. B., Schilling, N., Anders, C., & Carrier, D. R. (2010). The influence of foot posture on the cost of transport in humans. Journal of Experimental Biology, 213(5), 790–797.

Doecke, S. D., Anderson, R. W. G., Mackenzie, J. R. R., & Ponte, G. (2012). The potential of autonomous emergency braking systems to mitigate passenger vehicle crashes. Paper presented at the Australasian Road Safety Research, Policing and Education Conference, Wellington, New Zealand.

Euro NCAP (2018). Press release. Retrieved 13 December 2018.

Green, M. (2000). "How long does it take to stop?" Methodological analysis of driver perception-brake times. Transportation Human Factors, 2(3), 195–216.

Hanratty, M. (2017, August 13). Bolt’s last 100m race. Retrieved 3 December 2018.

This article first appeared in Spiegeloog, January 2019. Spiegeloog is the UvA psychology department magazine.
