The Socratic problem for fallacy theory

How do you explain that someone is being irrational? What does it even mean to be irrational? What does it mean to explain irrationality? After all, “it seemed right at the time” is a perpetual phenomenological condition; this is the problem Aristotle tried to account for in his discussion of akrasia (weakness of will; incontinence) in Book VII of the Nicomachean Ethics: how can someone know that they should phi, intend to phi, and then fail to phi? You can’t explain this by referring to reasons, because the reasons, at least the motivating ones, are inoperative in some important sense. Fans of The Philosopher know that he struggled mightily with this problem after rejecting the Socratic claim that akrasia is just ignorance. In a lot of ways he ends up embracing that view, though in doing so he seems to identify a different shade of the problem: there are different kinds of reasons.

Something akin to this problem haunts argumentation theory. It seems obvious that people commit fallacies all of the time. This is to say, on one account, that they see premises as supporting a conclusion when they don’t. One problem for fallacy theory is that the premises do seem, to the person committing the fallacy, to support the conclusion, so fallacies aren’t really irrational. This is the Socratic problem for fallacy theory: there are no fallacies, because no one ever seems irrational to themselves.

One answer to the Socratic problem for fallacy theory is the Aristotelian distinction between kinds of reasons. And of course when we say reasons we also mean, just as Aristotle did, explanations (which is what the Greek seems to mean anyway). So we can explain someone’s holding p in a way that doesn’t entail that holding p was rational (or justified, which is similar but different).

Lots of things might count as accounts of irrationality; one common one is bias. This has the handy virtue of locating the skewing of someone’s reasoning in some kind of psychological tendency to mess up some key element of the reasoning process in a way that’s undetectable to them. So confirmation bias, for example, standardly consists in noticing only the evidence that appears to confirm your desired conclusion.

Since you cannot will yourself to believe some particular conclusion, this works out great: you can instead seek out evidence that might produce the belief you want, or avoid evidence that would produce one you don’t. Of course, you can’t be fully aware of this going on (thus: bias). This is what Aristotle was trying to represent.
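To make that mechanism a little more concrete, here is a minimal, purely illustrative sketch in Python (my own construction; the hypotheses, numbers, and names are invented, not drawn from the research discussed below). One agent updates on all the evidence about whether a coin favours heads; the other, preferring the “favours heads” hypothesis, only notices the flips that confirm it. Each individual update is a perfectly ordinary Bayesian step, yet the selective agent ends up far from the agent who looked at everything.

```python
# Illustrative sketch only: 'bias' as a systematic deviation from what
# updating on all the evidence would give. All numbers are invented.
import random

# Likelihood of heads under each hypothesis about the coin.
P_HEADS = {"favours_heads": 0.7, "fair": 0.5}

def update(prior, flip):
    """One Bayesian update of P(favours_heads) given a single flip, 'H' or 'T'."""
    like_fav = P_HEADS["favours_heads"] if flip == "H" else 1 - P_HEADS["favours_heads"]
    like_fair = P_HEADS["fair"] if flip == "H" else 1 - P_HEADS["fair"]
    numerator = prior * like_fav
    return numerator / (numerator + (1 - prior) * like_fair)

random.seed(0)
# The coin is actually fair.
flips = ["H" if random.random() < 0.5 else "T" for _ in range(200)]

normative = biased = 0.5  # both agents start undecided
for flip in flips:
    normative = update(normative, flip)  # attends to every flip
    if flip == "H":                      # only 'notices' confirming flips
        biased = update(biased, flip)

print(f"P(favours heads), all evidence considered: {normative:.3f}")
print(f"P(favours heads), only heads noticed:      {biased:.3f}")
```

Nothing inside the loop is irrational step by step; the deviation comes entirely from which evidence gets noticed in the first place, which is roughly the undetectability point above.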

This is one very cursory account of the relation between what people mean by irrationality in argumentation and what others mean by it. There is, by the way, a lot of confusion about what it means to teach this stuff: to teach about it, to teach students to avoid it, and so on. More on that here. I recommend that article for anyone interested in teaching critical thinking.

Having said all of this, there is interesting research (outside of my wheelhouse, sadly) on bias going on in psychology and elsewhere. Here is one example. A sample paragraph:

However, over the course of my research, I’ve come to question all of these assumptions. As I began exploring the literature on confirmation bias in more depth, I first realised that there is not just one thing referred to by ‘confirmation bias’, but a whole host of different tendencies, often overlapping but not well connected. I realised that this is because of course a ‘confirmation bias’ can arise at different stages of reasoning: in how we seek out new information, in how we decide what questions to ask, in how we interpret and evaluate information, and in how we actually update our beliefs. I realised that the term ‘confirmation bias’ was much more poorly defined and less well understood than I’d thought, and that the findings often used to justify it were disparate, disconnected, and not always that robust.

The questions about bias lead to other ones about open-mindedness:

All of this investigation led me to seriously question the assumptions that I had started with: that confirmation bias was pervasive, ubiquitous, and problematic, and that more open-mindedness was always better. Some of this can be explained as terminological confusion: as I scrutinised the terms I’d been using unquestioningly, I realised that different interpretations led to different conclusions. I have attempted to clarify some of the terminological confusion that arises around these issues: distinguishing between different things we might mean when we say a ‘confirmation bias’ exists (from bias as simply an inclination in one direction, to a systematic deviation from normative standards), and distinguishing between ‘open-mindedness’ as a descriptive, normative, or prescriptive concept.

However, some substantive issues remained, leading me to conclusions I would not have expected myself to be sympathetic to a few years ago: that the extent to which our prior beliefs influence reasoning may well be adaptive across a range of scenarios given the various goals we are pursuing, and that it may not always be better to be ‘more open-minded’. It’s easy to say that people should be more willing to consider alternatives and less influenced by what they believe, but much harder to say how one does this. Being a total ‘blank slate’ with no assumptions or preconceptions is not a desirable or realistic starting point, and temporarily ‘setting aside’ one’s beliefs and assumptions whenever it would be useful to consider alternatives is incredibly cognitively demanding, if possible to do at all. There are tradeoffs we have to make, between the benefits of certainty and assumptions, and the benefits of having an ‘open mind’, that I had not acknowledged before.

What is interesting is how questions about one kind of account (the bias one, which is explanatory) lead back to the kind of questions they were in a sense meant to resolve (the normative kind). But perhaps this distinction is mistaken.