Sunday, May 13, 2018

Minding Minds: Motivated reasoning and the limits of reason and persuasion


Have you ever had an argument with someone about an issue you cared deeply about, and you just knew you were right? But the other person kept citing statistics, studies, and factual claims that felt suspect to you, and you couldn't refute them on the spot. So you went and studied the issue. And you discovered you were right all along! The statistics they cited didn't account for all the relevant factors, the studies they cited were either biased or too weak to support their claims, and the factual claims had been disproven in many places. How could your debating opponent have been so wrong? Maybe they were so invested in their side of the argument that they were willing to believe cranks, read only the few things that supported their side, and accept less rigorous work that confirmed their pre-existing beliefs. Or maybe you're the one so invested that you ended up believing false things?

I know I have been in the above situation, and I assume most of you have as well. This is a classic case of motivated reasoning, in which our desired outcome for a situation shapes how we reason and evaluate evidence. As Ziva Kunda argues in her foundational paper on the subject, "motivation may affect reasoning through reliance on a biased set of cognitive processes: strategies for accessing, constructing, and evaluating beliefs" (480). She continues:
I propose that people motivated to arrive at a particular conclusion attempt to be rational and to construct a justification of their desired conclusion that would persuade a dispassionate observer. [...] In other words, they maintain an "illusion of objectivity" (Pyszczynski & Greenberg, 1987; cf. Kruglanski, 1980). To this end, they search memory for those beliefs and rules that could support their desired conclusion. They may also creatively combine accessed knowledge to construct new beliefs that could logically support the desired conclusion. [...] The objectivity of this justification construction process is illusory because people do not realize that the process is biased by their goals, that they are accessing only a subset of their relevant knowledge, that they would probably access different beliefs and rules in the presence of different directional goals, and that they might even be capable of justifying opposite conclusions on different occasions. (482-483)
Motivated reasoning helps us understand why people are not convinced by the overwhelming evidence for human-caused global warming, evolution, or the safety and effectiveness of vaccines. It also helps us understand a variety of issues in political science, such as the tendency of people to support their candidate even more strongly when exposed to negative information about them. If vegetarianism is, as Bill Martin has argued, an already won argument, perhaps motivated reasoning can help us understand why we still have such trouble winning it.

As I argued in my last post, one way we can understand so many of the bad and factually incorrect arguments about eating other animals and the environment is motivated reasoning. Motivated reasoning is one way we resolve what is called cognitive dissonance, the problem of holding two contradictory elements in our lives at once. Let me give you an example from a classic 1967 study. In it, participants listened to recordings that presented information about cigarettes and cancer. But the recordings had a lot of static, which could be reduced by pushing an "anti-static" button. Smokers tended to let the static play over the parts linking cigarettes to cancer, and decreased the static when the recording said smoking was not linked to cancer. Non-smokers usually did the opposite. We have many of the elements of motivated reasoning and cognitive dissonance here. Say you smoke, and you want to keep smoking, but you also don't want to be at a higher risk of ill health. So you do two things: you tune out information explaining why smoking is bad for you, and you seek out information explaining why smoking is not so bad for you. But even this doesn't fully capture how motivated reasoning changes your perceptions.

In my last post, I included a graph showing how different foods affect the environment. In a discussion of my post on social media, I saw someone say she was pleased that milk didn't cause much environmental harm--that it was comparable to the plants we eat--and that she could get it humanely from her local farm. If you go back, that's clearly not what the graph says. What occurred is what Kahan et al. call motivated numeracy. In their study, participants were given information about a new skin-rash treatment. The information was a little complicated, and not everyone was able to understand it, but people with high levels of numeracy were able to follow it. However, when given information in the same format about the relationship between gun ownership and violence, those same people were often unable to interpret the information correctly if it went against their pre-existing political beliefs. That is to say, liberal Democrats had trouble processing information showing that gun ownership decreased violence, and conservative Republicans had trouble processing information showing that gun ownership increased violence. Numeracy didn't protect people from these false readings of the data; indeed, the higher the numeracy, the more likely the person was to make a mistake with the gun data. Motivated reasoning doesn't just guide what information you remember or seek out; it shapes your very ability to process information. Let's take an example from our (the pro-animal) side. The documentary Cowspiracy claims that at least 51% of all greenhouse gas (GHG) emissions come from animal agriculture. It gets this number from an article by Goodland and Anhang, whose figures have been attacked by a writer for the Union of Concerned Scientists and by several academics. Goodland and Anhang have responded. But if you don't want to do a lot of homework, I will break it down for you. Most studies put the figure at about 15-20% of GHG emissions from animal agriculture.
There are a lot of fights about which numbers are appropriate to include and how accurate certain counterfactuals are. If you want, as Kunda puts it, to maintain an illusion of objectivity, then Goodland and Anhang will allow animal advocates that illusion while also claiming that stopping animal agriculture is the single most important environmental issue (as opposed to simply among the most important environmental issues). But how often do you believe a single analysis that produces significantly different numbers from most of the other people working in the field? Normally this is not evidence we would find completely credible in situations where we have no stake in the outcome. Goodland and Anhang may be right, and I am persuaded by many of their particular arguments, but our tendency to believe them is almost certainly shaped by our desire for them to be right. This leads us to the second worrying conclusion from research on motivated reasoning.
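To see why a Kahan-style task trips up even numerate readers, it helps to work through the arithmetic. The sketch below uses hypothetical counts (not Kahan's actual stimuli) chosen so that the raw counts point one way while the rates point the other; the trap is that the intuitive read compares counts instead of rates.

```python
# A sketch of the reasoning a Kahan-style 2x2 task requires.
# The counts below are hypothetical, chosen only to illustrate the trap:
# the treated group has more "improved" cases in raw numbers, but a
# *lower* improvement rate than the untreated group.

def improvement_rate(improved, worsened):
    """Fraction of a group whose condition improved."""
    return improved / (improved + worsened)

# Hypothetical 2x2 results for a skin-rash treatment:
treated_improved, treated_worsened = 223, 75
untreated_improved, untreated_worsened = 107, 21

treated_rate = improvement_rate(treated_improved, treated_worsened)        # ~0.75
untreated_rate = improvement_rate(untreated_improved, untreated_worsened)  # ~0.84

# The intuitive System 1 read: "223 > 107, so the treatment works."
# The correct System 2 read compares rates, not raw counts:
if treated_rate > untreated_rate:
    print("Data suggest the treatment helped.")
else:
    print("Data suggest the treatment did not help.")
# prints "Data suggest the treatment did not help."
```

The point of the study design is that getting this right requires the slow, effortful rate comparison; motivated reasoning shows up when people who can do this comparison for skin rashes fail to do it for politically charged data.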

In psychology there is a theory, going back in some form to William James, known as dual-process theory. Essentially, we all think in two ways: one mostly unconscious and emotional, the other conscious and careful. These two modes are often known as implicit and explicit thinking, or System 1 and System 2 thinking (I'm drawing this story from Kahneman's enjoyable Thinking, Fast and Slow). When it comes to motivated reasoning, the usual understanding is that our System 1 thinking--our fast, emotional, heuristic thinking--is impeding our rational and slower System 2 thinking. A possible solution to motivated reasoning might be, then, to get people to engage in more System 2 thinking. But increasingly that doesn't seem workable. Remember from the Kahan study above that people with higher levels of numeracy were more likely to get the gun ownership problem wrong. This follows previous work from Kahan indicating that "the experimental component of the study demonstrated that the disposition to engage in conscious and effortful System 2 information processing—as measured by the Cognitive Reflection Test (CRT)—actually magnifies the impact of motivated reasoning." In other words, the more careful we are, the smarter we are, the more rational we are, the better our motivated reasoning. We are just better able to research things that support our biases, or better able to think of reasons why we are right. And this has implications for persuasion in animal rights.

As Kahan explains in his numeracy study, the problem is not just that people don't have the correct information, or that they actively avoid it (though both might be true); it is also that they actively distort the information they are given. This is part of the identity-protective cognition thesis. We engage cognitive processes that seek to protect our identities and our sense of our own goodness and correctness. This is why Kant, in the Groundwork, was so afraid of utilitarianism: he believed we could use it to justify any action as moral. So the eating of animals, which is central to so many people's identities, is something we could expect to see a lot of identity protection around. This might explain why people eating out are more likely to choose vegetarian food if it is not labeled as such and set apart in a separate vegetarian section of the menu. Furthermore, one of the key qualities people cite to explain why some animals may be eaten and others may not is intelligence. But as Bastian et al. have demonstrated, people routinely undervalue the minds of animals they eat, even while they are willing to grant complex minds to animals they don't eat. Think here of how Americans believe in the hyper-intelligence of dogs, but routinely dismiss the minds of pigs, and especially cows. Expanding on this work, Piazza and Loughnan conducted a study testing people's perceptions of minds in a fictional alien animal species, a real species we don't eat, and pigs. The study gave participants information about the fictional alien species. When the species was described in ways that showed clear intelligence, people felt it shouldn't be eaten. When it was described in ways that did not show clear intelligence, meat eaters believed it should be eaten. The researchers then presented the same information showing clear intelligence about the alien species, tapirs, and pigs to the participants.
Meat eaters felt that the alien species and the tapirs were clearly intelligent, but discredited the minds of the pigs even with the information in front of them. The problem here is clear: presenting accurate information and engaging people in rational argumentation is not likely to change many minds or actions, because people are trying to protect their identities.

So, if rational argumentation doesn't work, what are we supposed to do? Assuming I get my act together, that will be the third part of this blog series, where I plan to take up Kwame Anthony Appiah on honor worlds and Cristina Bicchieri on norms to explain how moral revolutions and social change can occur.