
The Socratic problem for fallacy theory

How do you explain that someone is being irrational? What does it even mean to be irrational? What does it mean to explain irrationality? After all, “it seemed right at the time” is a perpetual phenomenological condition. This is the problem Aristotle tried to account for in his discussion of akrasia (weakness of will; incontinence) in book VII of the Nicomachean Ethics: how can someone know that they should Phi, intend to Phi, but then fail to Phi? You can’t explain this by referring to reasons, because the reasons, at least the motivating ones, are inoperative in some important sense. Fans of The Philosopher know that he struggled mightily with this problem after rejecting the Socratic claim that akrasia is just ignorance. In a lot of ways he ends up embracing that view, though in doing so he seems to identify a different shade of the problem: there are different kinds of reasons.

Something akin to this problem haunts argumentation theory. For it seems obvious that people commit fallacies all of the time. This is to say, on one account, that they see premises as supporting a conclusion when they don't. One problem for fallacy theory is that the premises do seem, to the arguer, to support the conclusion, so fallacies aren't really irrational. This is the Socratic problem for fallacy theory: there are no fallacies, because no one ever seems irrational to themselves.

One answer to the Socratic problem for fallacy theory is the Aristotelian distinction between kinds of reasons. And of course when we say reasons we also mean, just like Aristotle, explanations (which is what the Greek seems to mean anyway). So we can explain someone's holding p in a way that doesn't entail that holding p was rational (or justified, which is similar but different).

Lots of things might count as accounts of irrationality; one common one is bias. This has the handy virtue of locating the skewing of someone's reasoning in some kind of psychological tendency to mess up some key element of the reasoning process in a way that's undetectable to them. So confirmation bias, for example, standardly consists in noticing only the evidence that appears to confirm your desired outcome.

Since you cannot will yourself to believe some particular conclusion, this works out great: you can look at evidence that might produce the belief you want (or avoid looking at evidence that would produce one you don't). Of course, you can't be completely aware of this going on (thus: bias). This is the kind of phenomenon Aristotle was trying to represent.

This is one very cursory account of the relation between what people mean by irrationality in argumentation and what others mean by it. There is, by the way, a lot of confusion about what it means to teach this stuff: to teach about it, to teach how to avoid it, etc. More on that here. I recommend that article for anyone interested in teaching critical thinking.

Having said all of this, there is interesting research on bias (sadly outside of my wheelhouse) going on in psychology and elsewhere. Here is one example. A sample:

However, over the course of my research, I’ve come to question all of these assumptions. As I began exploring the literature on confirmation bias in more depth, I first realised that there is not just one thing referred to by ‘confirmation bias’, but a whole host of different tendencies, often overlapping but not well connected. I realised that this is because of course a ‘confirmation bias’ can arise at different stages of reasoning: in how we seek out new information, in how we decide what questions to ask, in how we interpret and evaluate information, and in how we actually update our beliefs. I realised that the term ‘confirmation bias’ was much more poorly defined and less well understood than I’d thought, and that the findings often used to justify it were disparate, disconnected, and not always that robust.

The questions about bias lead to other ones about open-mindedness:

All of this investigation led me to seriously question the assumptions that I had started with: that confirmation bias was pervasive, ubiquitous, and problematic, and that more open-mindedness was always better. Some of this can be explained as terminological confusion: as I scrutinised the terms I’d been using unquestioningly, I realised that different interpretations led to different conclusions. I have attempted to clarify some of the terminological confusion that arises around these issues: distinguishing between different things we might mean when we say a ‘confirmation bias’ exists (from bias as simply an inclination in one direction, to a systematic deviation from normative standards), and distinguishing between ‘open-mindedness’ as a descriptive, normative, or prescriptive concept. However, some substantive issues remained, leading me to conclusions I would not have expected myself to be sympathetic to a few years ago: that the extent to which our prior beliefs influence reasoning may well be adaptive across a range of scenarios given the various goals we are pursuing, and that it may not always be better to be ‘more open-minded’. It’s easy to say that people should be more willing to consider alternatives and less influenced by what they believe, but much harder to say how one does this. Being a total ‘blank slate’ with no assumptions or preconceptions is not a desirable or realistic starting point, and temporarily ‘setting aside’ one’s beliefs and assumptions whenever it would be useful to consider alternatives is incredibly cognitively demanding, if possible to do at all. There are tradeoffs we have to make, between the benefits of certainty and assumptions, and the benefits of having an ‘open mind’, that I had not acknowledged before.

What is interesting is how questions about one kind of account (the bias one, which is explanatory) lead back to the kind of questions it was in a sense meant to solve (the normative kind). But perhaps this distinction is mistaken.

The argumentative theory

The argumentative theory of reasoning maintains that reasoning is for arguing: actually, for winning arguments (but not in the philosophy way). Here's the idea (from here):

Reasoning was not designed to pursue the truth. Reasoning was designed by evolution to help us win arguments. That's why they call it The Argumentative Theory of Reasoning. So, as they put it, "The evidence reviewed here shows not only that reasoning falls quite short of reliably delivering rational beliefs and rational decisions. It may even be, in a variety of cases, detrimental to rationality. Reasoning can lead to poor outcomes, not because humans are bad at it, but because they systematically strive for arguments that justify their beliefs or their actions. This explains the confirmation bias, motivated reasoning, and reason-based choice, among other things."

Here’s an interview with Hugo Mercier I stumbled across that gives a shorter and less formal version of the idea. A sample:

And the beauty of this theory is that not only is it more evolutionarily plausible, but it also accounts for a wide range of data in psychology. Maybe the most salient of phenomena that the argumentative theory explains is the confirmation bias.

Psychologists have shown that people have a very, very strong, robust confirmation bias. What this means is that when they have an idea, and they start to reason about that idea, they are going to mostly find arguments for their own idea. They’re going to come up with reasons why they’re right, they’re going to come up with justifications for their decisions. They’re not going to challenge themselves. 

But maybe these people are terrible at reasoning.  Ok, joking (sort of). The interview is well worth reading. There’s even a little video.

The confidence man

Nate Silver, nerdy statistician at 538.com, correctly predicted the outcome of the recent election (with the exception, by the way, of one Senate race in North Dakota).  Mitt Romney and Paul Ryan, "numbers" guys by their own descriptions, did not.  An article at CBS.com (my first time there too!) had this to say:

Romney and his campaign had gone into the evening confident they had a good path to victory, for emotional and intellectual reasons. The huge and enthusiastic crowds in swing state after swing state in recent weeks – not only for Romney but also for Paul Ryan – bolstered what they believed intellectually: that Obama would not get the kind of turnout he had in 2008.

They thought intensity and enthusiasm were on their side this time – poll after poll showed Republicans were more motivated to vote than Democrats – and that would translate into votes for Romney. 

As a result, they believed the public/media polls were skewed – they thought those polls oversampled Democrats and didn't reflect Republican enthusiasm. They based their own internal polls on turnout levels more favorable to Romney. That was a grave miscalculation, as they would see on election night.

Those assumptions drove their campaign strategy: their internal polling showed them leading in key states, so they decided to make a play for a broad victory: go to places like Pennsylvania while also playing it safe in the last two weeks.

What is interesting about this account is that the Romney campaign found a way to convince itself of the power of confidence, motivation, and enthusiasm over simple numbers.  But that is why we have numbers: because those things are meaningless.

Here was Romney's approach to the economy (from, by the way, the same tape where he made the "47 percent" comment):

If it looks like I'm going to win, the markets will be happy. If it looks like the president's going to win, the markets should not be terribly happy. It depends of course which markets you're talking about, which types of commodities and so forth, but my own view is that if we win on November 6th, there will be a great deal of optimism about the future of this country. We'll see capital come back and we'll see — without actually doing anything — we'll actually get a boost in the economy.

I'm glad he did not win, for his losing has been so instructive.