Magic words

Some of what argumentation theorists do is produce a metalanguage of argument. They make up names for stuff: stuff you shouldn’t do (hollow man), stuff you should sometimes do (iron man). It’s partially a normative study, so the metalanguage is normative. As the Owl of Minerva Problem points out, however, there’s an inherent challenge here: the metalanguage for argument warps our argumentative performances. It’s a new thing to keep track of, and it alters the way we interact. So the thing the metalanguage was meant to solve isn’t solved; instead, the metalanguage gets absorbed into the very problem it was trying to solve. Interestingly, this is also the case for the Owl of Minerva Problem itself.

Here is a variation on the Owl of Minerva Problem. Recall that the Owl of Minerva is retrospective, and productive of new normative terms. In some cases, once these terms get introduced, they are so powerful that they can never be used. This is to say that once a term becomes associated with a certain kind of extreme failure, it becomes magical. It’s a normative term with actual descriptive power. Take “racism.” Though there are significant disagreements about what the issue really is (ask a philosopher of race), there are no (significant) disagreements that it is bad, very bad. The same is true (with some perverse exceptions) of Nazism. No one wants to be a Nazi, even people who literally hold Nazi views. This video pretty much sums this up:

A more recent version of this featured three police officers caught on tape discussing their desire to engage in racially motivated homicide and start a race war with genocidal objectives. In their own defense, the officers said they weren’t racist:

Later, according to the investigation, Piner told Moore that he feels a civil war is coming and that he is ready. Piner said he was going to buy a new assault rifle, and soon “we are just going to go out and start slaughtering them (expletive)” Blacks. “I can’t wait. God, I can’t wait.” Moore responded that he wouldn’t do that.

Piner then told Moore that he felt a civil war was needed to “wipe them off the (expletive) map. That’ll put them back about four or five generations.” Moore told Piner he was “crazy,” and the recording stopped a short time later.

According to police, the officers admitted it was their voices on the video and didn’t deny any of the content. While the officers denied that they were racists, they blamed their comments on the stress on law enforcement in light of the protests over the death of George Floyd. Floyd, a Black man, died last month after a Minneapolis police officer put his knee on Floyd’s neck for several minutes.

I’d be happy to hear if someone has already identified this phenomenon and given it a funny name. It’s something like the Harry Potter Problem, where one invokes fallacy names in place of (hopefully constructive) criticism and discussion. But in this case the invocation of the magic word necessarily backfires; it casts a kind of reverse spell. So one discovers a new, powerful, and descriptive normative concept, but its very power means its real targets will never accept it.

The Socratic problem for fallacy theory

How do you explain that someone is being irrational? What does it even mean to be irrational? What does it mean to explain irrationality? After all, “it seemed right at the time” is a perpetual phenomenological condition–this is the problem Aristotle tried to account for in his discussion of akrasia (weakness of will; incontinence) in Book VII of the Nicomachean Ethics: how can someone know that they should Phi, intend to Phi, but then fail to Phi? You can’t explain this by referring to reasons, because the reasons, at least the motivating ones, are inoperative in some important sense. Fans of The Philosopher know that he struggled mightily with this problem after rejecting the Socratic claim that akrasia is just ignorance. In a lot of ways he ends up embracing that view, though in doing so he seems to identify a different shade of the problem: there are different kinds of reasons.

Something akin to this problem haunts argumentation theory. It seems obvious that people commit fallacies all of the time. This is to say, on one account, that they see premises as supporting a conclusion when they don’t. One problem for fallacy theory is that the premises do seem, to them, to support the conclusion, so the fallacy isn’t really irrational from their point of view. This is the Socratic problem for fallacy theory: there are no fallacies, because no one ever seems irrational to themselves.

One response to the Socratic problem for fallacy theory is the Aristotelian distinction between kinds of reasons. And of course when we say reasons we also mean, just like Aristotle, explanations (which is what the Greek seems to mean anyway). So we can explain someone’s holding p in a way that doesn’t entail that holding p was rational (or justified, which is similar but different).

Lots of things might count as accounts of irrationality; one common one is bias. This has the handy virtue of locating the skewing of someone’s reasoning in some kind of psychological tendency to mess up some key element of the reasoning process in a way that’s undetectable to them. So confirmation bias, for example, standardly consists in noticing only the evidence that appears to confirm your desired outcome.

Since you cannot simply will yourself to believe some particular conclusion, this works out great: you can instead look at the evidence that might produce the belief you want, and avoid the evidence that would produce the one you don’t. Of course, you can’t be fully aware of this going on (thus–bias). This is what Aristotle was trying to represent.

This is one very cursory account of the relation between what some people mean by irrationality in argumentation and what others mean by it. There is, by the way, a lot of confusion about what it means to teach this stuff–to teach about it, to teach how to avoid it, etc. More on that here. I recommend that article for anyone interested in teaching critical thinking.

Having said all of this, there is interesting research (outside of my wheelhouse, sadly) on bias going on in psychology and elsewhere. Here is one example. A sample passage:

However, over the course of my research, I’ve come to question all of these assumptions. As I began exploring the literature on confirmation bias in more depth, I first realised that there is not just one thing referred to by ‘confirmation bias’, but a whole host of different tendencies, often overlapping but not well connected. I realised that this is because of course a ‘confirmation bias’ can arise at different stages of reasoning: in how we seek out new information, in how we decide what questions to ask, in how we interpret and evaluate information, and in how we actually update our beliefs. I realised that the term ‘confirmation bias’ was much more poorly defined and less well understood than I’d thought, and that the findings often used to justify it were disparate, disconnected, and not always that robust.

The questions about bias lead to other ones about open-mindedness:

All of this investigation led me to seriously question the assumptions that I had started with: that confirmation bias was pervasive, ubiquitous, and problematic, and that more open-mindedness was always better. Some of this can be explained as terminological confusion: as I scrutinised the terms I’d been using unquestioningly, I realised that different interpretations led to different conclusions. I have attempted to clarify some of the terminological confusion that arises around these issues: distinguishing between different things we might mean when we say a ‘confirmation bias’ exists (from bias as simply an inclination in one direction, to a systematic deviation from normative standards), and distinguishing between ‘open-mindedness’ as a descriptive, normative, or prescriptive concept. However, some substantive issues remained, leading me to conclusions I would not have expected myself to be sympathetic to a few years ago: that the extent to which our prior beliefs influence reasoning may well be adaptive across a range of scenarios given the various goals we are pursuing, and that it may not always be better to be ‘more open-minded’. It’s easy to say that people should be more willing to consider alternatives and less influenced by what they believe, but much harder to say how one does this. Being a total ‘blank slate’ with no assumptions or preconceptions is not a desirable or realistic starting point, and temporarily ‘setting aside’ one’s beliefs and assumptions whenever it would be useful to consider alternatives is incredibly cognitively demanding, if possible to do at all. There are tradeoffs we have to make, between the benefits of certainty and assumptions, and the benefits of having an ‘open mind’, that I had not acknowledged before.

What is interesting is how questions about one kind of account (the bias one, which is explanatory) lead back to questions of the kind they were in a sense meant to resolve (the normative kind). But perhaps this distinction is mistaken.