
2023 Driver Education Round 2 – Epistemic Duties of Drivers

Name: Destiny Green
From: Tucson, Arizona

Epistemic Duties of Drivers

Millions of Americans engage with the public traffic system every day. Many are drivers themselves, while others, as passengers or pedestrians, are directly affected by drivers. Approximately 32,700 people die every year in motor vehicle crashes, and about 6,500 people are injured in crashes every day [III]. Humans are imperfect reasoners: they make judgments based on limited evidence and sometimes act on those unsubstantiated judgments. While driving, humans have both physical and cognitive blind spots. These blind spots cannot be eliminated and ought to be taken into account when making judgments as a driver. Doing so will help drivers avoid what the National Highway Traffic Safety Administration calls human errors, which are responsible for 94% of motor vehicle accidents.

We drive to get from position x1 to position x2. While driving, we aim to avoid being hit by objects external to the vehicle we control and to avoid hitting external objects with our vehicle. Combining the function of driving itself with the goals of the act of driving yields a succinct [goal] for drivers: As a driver, one’s [goal] is to safely move from position x1 to position x2. Now, in order to achieve this [goal], certain cognitive limitations must be acknowledged and overcome. The ways in which we may overcome our cognitive limitations will entail our epistemic duties as drivers. Those epistemic duties will lead to a higher rate of success in achieving the practical [goal] of driving.

A discussion of the abilities and limitations of human cognition will help us identify the tools that drivers have to use and the weaknesses they will need to compensate for in order to achieve their [goal]. The fields of philosophy and psychology both have vast literatures regarding human cognition and its strengths and weaknesses. One significant piece, written by the psychologist Daniel Kahneman, is Thinking, Fast and Slow. In this book, Kahneman describes the human mind in terms of two distinct, yet highly interconnected, systems, which he dubs ‘System 1’ and ‘System 2’ [S1 and S2 from here on out].

Under Kahneman’s description, S1 is the ever-surveying part of our minds that perceives everything, including things we are not conscious of experiencing. The operations of S1 are automatic, and “continually construct a coherent interpretation of what is going on in our world at any instant” [TFS 11]. To do so, S1 draws on the immediate information it gets from the environment and on information that is readily available to the mind [words heard a few minutes before, a recent class discussion, concepts such as basic arithmetic, etc]. S1 is eager to make sense of the world it perceives and links events and perceptions into causal connections in whatever way best makes sense of the evidence available to it.

S2, on the other hand, has access to the further depths of memory and can weigh multiple considerations, thinking about different ways things could be linked to one another rather than settling for the single best explanation – but this takes considerable effort and energy, and so it is a limited faculty. Because of this, S2 “usually adopts the suggestions of [S1] with little or no modification” [TFS 20]. When something seems off about the environment, or an error seems about to be made, S2’s controlled operations are activated, bringing more focus to the situation that caused such a ‘bump in the road’ and required S2 to step in.

Kahneman claims that S1 usually does a good job of modeling familiar situations and making short-term predictions, and that our initial reactions to challenges, produced by S1, are generally appropriate. Yet there are times when conflict arises between an automatic reaction and what we actually intend to do. Some drivers have experienced icy roads and the struggle of following rehearsed instructions that contradict what they naturally do: “Steer into the skid, and whatever you do, do not touch the brakes!” [TFS 22]. Indeed, overcoming the impulses of S1 is one of the tasks of S2. Another is urging us to seek out information we do not already have, as when a driver glances out their side windows to ensure nothing is in their blind spots.

Loss aversion is “built into the automatic evaluations of System 1” [TFS 281]. Kahneman describes several experiments showing that when faced with two options, one a loss that is certain and the other a loss that is merely probable, individuals tend to choose the probable loss, in hopes of avoiding any loss at all.

This can be seen at any intersection that allows left turns but lacks a left-turn arrow. Say a driver is waiting to make a left turn at such an intersection. They must wait for a gap before they can safely make their turn. As with most intersections of this sort, the flow of traffic is steady, leaving no room for our driver to turn before the light turns yellow. At this point, our driver has two choices: stay put and wait for the next cycle [a sure loss] or take their chances and make the turn [a possible gain, a possible major loss]. A few things will influence the decision. If the driver has a clear view of the flow of traffic and sees that the oncoming cars begin slowing when the light turns yellow, they have good reason to trust that they can turn safely. In reality, however, it is often rather difficult to tell whether cars are actually slowing down, and this is why there is always a chance of a loss when making risky decisions such as this one. Most of us end up taking the chance turn rather than waiting at the intersection for hours until all traffic clears and we know for ‘sure’ that the coast is clear.
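To make the structure of this gamble concrete, here is a minimal sketch, assuming a crude prospect-theory-style valuation in which losses weigh more than gains. Every number in it – the loss-aversion multiplier, the minutes of delay, the probability and cost of a collision – is an illustrative assumption of mine, not something measured or taken from Kahneman.

# A rough sketch of the left-turn gamble described above. All values are
# illustrative assumptions: losses are weighed about twice as heavily as
# gains, delays are measured in minutes, and a collision is crudely priced
# as a very large delay.

LOSS_AVERSION = 2.0  # assumed weight on losses relative to gains

def subjective_value(outcome_minutes):
    """Value an outcome, amplifying losses the way loss aversion would."""
    if outcome_minutes >= 0:
        return outcome_minutes
    return LOSS_AVERSION * outcome_minutes

def weighted_value(prospects):
    """Probability-weighted subjective value of a risky option."""
    return sum(p * subjective_value(x) for p, x in prospects)

# Option A: wait for the next cycle of lights, a sure loss of two minutes.
wait = [(1.0, -2.0)]

# Option B: attempt the turn, usually saving a minute, but with a small
# assumed chance of a collision priced here as a 500-minute loss.
turn = [(0.98, 1.0), (0.02, -500.0)]

print("wait:", weighted_value(wait))  # -4.0 with these numbers
print("turn:", weighted_value(turn))  # about -19.0 with these numbers

With these made-up numbers, the sure small loss of waiting comes out ahead once the rare large loss is priced in, which is one way of seeing why the cautious option can be the better bet even when S1 is tempted by the gamble.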

Most of us have won the bets we’ve placed on those left-hand turns, because we waited until the probability of making the turn safely outweighed the probability of another vehicle running through the intersection at the same moment. This maneuver would probably not be so successful if we did not give such hefty weight to rare events. Kahneman explains that “people overestimate the probabilities of unlikely events… and overweight unlikely events in their decisions” [TFS 310]. If a driver waiting to make a left turn cannot clearly see the line of oncoming traffic behind the left-turners on the opposite side of the intersection, they might try to judge the probability that more vehicles are coming their way. Since S1 only has access to what is immediately available to it, not seeing cars for a few seconds might lead a driver’s S1 to conclude that it is safe to make the turn.

Luckily, most drivers’ S2 is activated while they are anticipating the best time to make a left turn. Usually, a driver’s S2 will urge them to accept that they cannot make the call and to err on the side of caution, judging the probability of oncoming traffic to be higher than it really is. Depending on factors such as whether or not they [believe they] will be getting a green arrow, a driver might still decide to take their chances and make the turn if they feel that being stuck for another cycle of lights is a big enough loss, even when they believe the probability of oncoming traffic is very high.

The German psychologist Gerd Gigerenzer, however, would probably not be a fan of this talk of using probabilities in our immediate driving judgments. Even if we had access to real-time statistics about the flow of traffic on any given road, such as how many cars move through a given intersection at specific times of day and at what rate, “Good judgment under uncertainty is more than mechanically applying a formula… to a real-world problem” [Gig. 106]. For the Austrian scientist Richard von Mises, the term ‘probability,’ when it refers to a single event, “has no meaning at all for us” [Mises 11]. Gigerenzer expresses this tension with two short thought experiments.

(i) You wish to buy a new car. Today you must choose between two alternatives: to purchase either a Volvo or a Saab. You use only one criterion for that choice, the car’s life expectancy. You have information from Consumer Reports that in a sample of several hundred cars, the Volvo has the better record. Just yesterday a neighbor told you that his new Volvo broke down. Which car do you buy?

(ii) You live in a jungle. Today you must choose between two alternatives: to let your child swim in the river or to let it climb trees instead. You use only one criterion for that choice, your child’s life expectancy. You have information that in the last 100 years there was only one accident in the river, in which a child was eaten by a crocodile, whereas a dozen children have been killed by falling from trees. Just yesterday your neighbor told you that her child was eaten by a crocodile. Where do you send your child? [Gig. 106].

Problem (i) was originally written by Lee Ross and Richard Nisbett and evokes the intuitive response of choosing the Volvo. The new bit of information from the neighbor carries little weight against the overall record of the class of Volvos versus the class of Saabs. Gigerenzer took this problem and simply altered the context to create problem (ii), which evokes the intuitive response of sending one’s child to the trees. The neighbor’s new information in this situation changes more than the probability of a child dying in the river; it tells the parent that something in the ‘river environment’ has changed.
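To see how similarly a content-blind formula treats the two problems, here is a minimal sketch of my own, not Gigerenzer’s: the neighbor’s report is simply folded into the historical record as one more observation. The Volvo numbers are hypothetical; the river figures come from the problem itself.

# A content-blind sketch: add the neighbor's single report to the historical
# record as one more observation. Both problems then reduce to the same
# arithmetic, which is exactly the point: the formula cannot see that the
# crocodile report may signal a changed river environment.

def updated_rate(prior_events, prior_observations):
    """Naive frequency after adding the neighbor's one bad outcome."""
    return (prior_events + 1) / (prior_observations + 1)

# (i) Hypothetical Volvo record: say 20 breakdowns in a sample of 400 cars.
volvo_rate = updated_rate(20, 400)

# (ii) River record from the problem: one fatal accident in 100 years.
river_rate = updated_rate(1, 100)

# In both cases a single extra data point barely moves the estimate,
# yet our intuitions treat the two neighbors' reports very differently.
print(round(volvo_rate, 3), round(river_rate, 3))  # roughly 0.052 and 0.02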

Abstracting from content, they are exactly the same problem, yet they produce different intuitive answers. Gigerenzer claims that “the structure of natural environments is usually richer than what any standard formula could offer” [Gig. 108]. In the real world, there is too much variation between situations and surrounding circumstances for humans to follow a standard formula when facing a real-world problem. Even for drivers, there is no step-by-step manual for ensuring a safe trip; no formula usable by any layperson can account for every possible factor in a given situation. Gigerenzer urges that “… the content of problems is of central importance for understanding judgment – it embodies implicit knowledge about the structure of the environment” [Gig. 107]. Drivers have a general understanding of how traffic usually functions and what to expect while driving. Thoughts of pedestrians are not always at the forefront of a driver’s mind, but the possibility of a pedestrian crossing the road the driver wishes to turn onto will be taken into account – by their S2, if the possibility is anticipated before they decide to accelerate, or by their S1, if the pedestrian is not seen until the driver has already begun accelerating.

Based on our discussion thus far, it seems evident that “natural environments often have surplus structure, a structure that goes beyond prior probabilities and likelihoods” [Gig. 107]. On top of making real-time judgments on their own, drivers must also engage, and often work, with other drivers. In The Enigma of Reason, the cognitive scientists Hugo Mercier and Dan Sperber establish that “in interactions where reasons play a role, the people interacting may have converging or diverging interests” [EoR 334]. Drivers often have both converging and diverging interests with one another. Every driver has the same [goal] of safely moving from position x1 to position x2. But because every driver’s start and end positions are different, and because each driver is autonomous [choosing what speed to maintain or change, what position to keep relative to other objects on the road, when and where to move, and so on], conflicts in the execution of that shared [goal] easily and often arise. It is up to the drivers whose immediate interests conflict to figure out the best course of action while maintaining their shared [goal].

The traffic environment creates a unique situation in which reasoners cannot formally communicate but must rely on other cues from one another. The shareability theory proposed by psychologist Jennifer Freyd suggests that “internal (e.g. perceptual, emotional, imagistic) information often is qualitatively different from external (e.g. spoken, written) information, and that such internal information is often not particularly shareable” [Freyd], and yet we are still able to communicate the gist of what we internally experience. Humans are also particularly good at reading body language and can often determine what another person is feeling based solely on external cues. In the context of driving, an alert driver can often tell that another driver is attempting to change lanes by noticing them speeding up or slowing down in search of an opening in a different lane. Mercier and Sperber include a discussion of ‘mindreading’ in their book: “We open, maintain, and update ‘mental files’ on all the people we know [including people we only know of and fictional characters]. All kinds of information goes into these files, information also about what is in their mental files” [EoR 97].

The concept of mindreading is illustrated well by the Caterpillar Experiment, also discussed by Mercier and Sperber. In that experiment, infants were shown a scene of a caterpillar watching a piece of cheese being placed behind a screen on its left and an apple behind a screen on its right. The caterpillar preferred the cheese and went straight for the left screen every time. After several repetitions, the scene was changed. In the crucial test scene, the caterpillar was at first absent, and the infant alone saw the cheese and apple switch places. The caterpillar was brought back only after the food items had been hidden. The infants paid attention for longer when the caterpillar went for the screen on the right, where the cheese was now truly hidden. From this evidence it was inferred that the infants must have updated their “… own representation of the location of the cheese, but not their metarepresentation of the caterpillar’s representation” [EoR 99].

Drivers hold metarepresentations about other drivers by making assumptions about what those drivers know or will do. “Humans exploit… a modularized competence informed by social conventions that guide the kind of situation they happen to be in, while physically close to strangers because of goals that are parallel but not shared” [EoR 100]. The general knowledge of how traffic usually functions informs each driver’s S1. Once a driver has a few years of experience on the road, they naturally develop a set of heuristics and a stronger sense of how to ‘read’ other drivers. Kahneman tells us that the “technical definition of a heuristic is a simple procedure that helps find adequate, though often imperfect, answers to difficult questions” [TFS 91].

While there is much controversy surrounding heuristics, namely that they lead to impermissible biases [i.e. systematic errors], there are a few for which Kahneman provides strong evidence and which fit rather well into the scheme of driving. One particularly fitting and telling example is the representativeness heuristic, which operates when an individual judges the probability of an event by how closely it resembles situations they have experienced before; in doing so, the individual compares the new situation to a mental stereotype of similar situations. A driver who sees the car in front of them slow to about ten miles per hour under the speed limit will recall previous occasions when drivers ahead of them did the same thing and will likely conclude that the driver in front is probably about to turn, since most of the times the driver has witnessed this sort of behavior, a turn followed shortly after.
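As a rough illustration of how such a stereotype-based judgment might work, the following sketch estimates the chance of a turn from a handful of hypothetical remembered episodes; nothing in it comes from Kahneman beyond the general idea of answering a hard question with a simpler frequency question.

# A sketch of the frequency judgment the representativeness heuristic might
# produce. The remembered episodes are hypothetical: each pair records whether
# the car ahead slowed well below the limit and whether it then turned
# shortly afterward.

remembered_episodes = [
    (True, True), (True, True), (True, False), (True, True),
    (False, False), (False, True), (False, False), (False, False),
]

def estimated_turn_probability(episodes, slowed=True):
    """Share of recalled 'slowing' episodes that ended in a turn."""
    outcomes = [turned for s, turned in episodes if s == slowed]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

print(estimated_turn_probability(remembered_episodes))  # 0.75 with these memories

The estimate is only as good as the sample of situations the driver happens to recall, which is why the heuristic gives adequate yet imperfect answers.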

Oftentimes a driver’s assumption of this sort will turn out to be right, but plenty of times it will turn out to be wrong. Incorrect assumptions become dangerous when a driver takes a risk based on one – such as making a risky left turn when the flow of oncoming traffic is obscured from view. Since this concept has come up a number of times by this point, we can pull an epistemic duty from it: Be aware of your assumptions. And because an individual driver can never know in real time whether their assumption is correct, another duty can be drawn: It is better to err on the side of caution than to take unsubstantiated risks. Akin to the famous slogan ‘you can’t win if you don’t play’: you can’t lose if you don’t take a gamble.

Mindreading and making assumptions about other drivers are unavoidable on the road and can actually prove highly beneficial. One tactic of defensive driving is to assume that other drivers are unaware of your presence. This ensures that the driver will not make further assumptions that would inflate their judgment of the probability that it is safe to perform maneuver x or y, and it forces them to base their judgments on stronger, more substantial evidence [such as the relative speeds of their own vehicle and others, relative positions, the timing of traffic lights, etc]. This limitation on assumptions also keeps a driver’s S2 on guard for cues that other drivers are about to make a risky maneuver, leaving enough time to react to their uncertain actions.

From all we have discussed, it is clear that drivers cannot train themselves to predict exactly what will happen on the road. They can, however, better prepare themselves to react to the actions of other drivers. An enumeration of the causes of accidents will be useful here. The National Highway Traffic Safety Administration has reported that recognition errors are the cause of 41% of accidents. These recognition errors include inattention, distraction, and inadequate surveillance. Our limited capacity for attention causes us to miss even surprising things when our S2 is preoccupied with demanding tasks. This is well illustrated by the Invisible Gorilla Experiment conducted by psychologists Christopher Chabris and Daniel Simons.

Chabris and Simons recorded a scene in which two teams of players passed basketballs among themselves, one team wearing black shirts and the other white. Volunteers were asked to count the passes made by the white-shirted team. About halfway through the video, “a student wearing a full-body gorilla suit walked into the scene, stopped in the middle of the players, faced the camera, thumped her chest, and then walked off” [IG 9]. Short interviews after the viewing revealed that about half of the subjects did not notice the gorilla. This failure to notice unexpected changes in the environment occurs when our S2 is focused on one or more things and has no spare capacity to attend to unexpected objects. Chabris and Simons conclude that “the problem is not with limitations on motor control, but with limitations on attentional resources and awareness” [IG 24]. This leads us to another epistemic duty for drivers: Limit, if not eliminate, any distractions that may take your attention away from the road and your observation of traffic.

Decision errors are the next biggest culprit, accounting for 33% of accidents; they include speeding, making incorrect assumptions about other drivers, and performing illegal maneuvers. We have already discussed the danger of making unwarranted assumptions about other drivers, which yielded a duty to refrain from taking chances based on such assumptions.

Beyond decision errors, aggressive driving, which includes speeding, is a factor in 56% of fatal accidents. The psychologist and well-known marital relations expert John Gottman “observed that the long-term success of a relationship depends far more on avoiding the negative than on seeking the positive” [TFS 288]. This applies quite well to the short relationships we hold with other drivers on the road. We have also discussed the conflicting interests of drivers, which are difficult to remedy in an environment lacking direct lines of communication. If all drivers keep in mind that every driver has the same [goal] of reaching their destination safely, then drivers might work together more often instead of trying to gain on or impede one another’s efforts. From this, we can draw a final epistemic duty for drivers: Go with the flow of traffic. There is no reason to attempt to overtake drivers who are going the speed limit. Nor is there reason to drive either well under the speed limit, giving other drivers a reason to overtake you, or dangerously over it, increasing the risk of an error being made.

Humans are highly adaptable creatures, equipped to learn the structure of a given environment and to take advantage of the regularities found within it. In the world of drivers, there is a general and normal flow of traffic. What makes the traffic environment unique among the environments humans find themselves in is that multiple autonomous individuals are in control of it. While there is a general and normal flow within traffic environments, there is also much room for the interests of different drivers to conflict and for human errors to be made. Individual cognitive abilities and limitations can both help and hinder the avoidance of these dangers. Such abilities and limitations have been discussed throughout this paper and have helped us extract some epistemic duties that drivers may follow in order to more successfully avoid the dangers of human conflict and error on the road. To recall and conclude, these epistemic duties are:

  1. Be aware of your assumptions

  2. It is better to err on the side of caution than to take unsubstantiated risks

  3. Limit, if not eliminate, any distractions that may take your attention away from the road and observations of traffic

  4. Go with the flow of traffic

Practicing these epistemic duties while driving will help drivers overcome their cognitive limitations and achieve the practical [goal]. Driving is a major part of modern American society, and of many other parts of the world. It is also one of the most dangerous environments a human can find themselves in and must not be navigated in an unwary fashion. In such a dangerous environment, where information arrives quickly and incompletely, one must remain alert and vigilant.

References

[EoR] Mercier, Hugo, and Dan Sperber. The Enigma of Reason. Harvard University Press, 2017.

[Freyd] Freyd, J.J. (2005). What is Shareability? Retrieved May 7, 2023, http://pages.uoregon.edu/dynamic/jjf/defineshareability.html.

[Gig.] Gigerenzer, Gerd. “How to Make Cognitive Illusions Disappear: Beyond ‘Heuristics and Biases.’” European Review of Social Psychology, vol. 2, 1991, pp. 83–115.

[IG] Chabris, Christopher, and Daniel Simons. The Invisible Gorilla. Crown, 2010.

[III] Insurance Information Institute, “Other Insurance Topics.” Retrieved 2023, www.iii.org/insurance-basics/other-insurance-topics/safety.

National Highway Traffic Safety Administration [NHTSA]. “Traffic Safety Facts.” U.S. Department of Transportation, Feb. 2015.

[Mises] Mises, R. von (1957). Probability, Statistics, and Truth. London: Allen and Unwin (original work published in 1928).

Ross, L., & Nisbett, R. E. (1991). The Person and the Situation: Perspectives of Social Psychology. McGraw-Hill Book Company.

[TFS] Kahneman, Daniel. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2013.