Tag Archives: fallacy theory

The Socratic problem for fallacy theory

How do you explain that someone is being irrational? What does it even mean to be irrational? What does it mean to explain irrationality? After all, "it seemed right at the time" is a perpetual phenomenological condition. This is the problem Aristotle tried to account for in his discussion of akrasia (weakness of will; incontinence) in Book VII of the Nicomachean Ethics: how can someone know that they should phi, intend to phi, but then fail to phi? You can't explain this by referring to reasons, because the reasons, at least the motivating ones, are inoperative in some important sense. Fans of The Philosopher know that he struggled mightily with this problem after rejecting the Socratic claim that akrasia is just ignorance. In a lot of ways he ends up embracing that view, though in doing so he seems to identify a different shade of the problem: there are different kinds of reasons.

Something akin to this problem haunts argumentation theory. It seems obvious that people commit fallacies all of the time. That is to say, on one account, they see premises as supporting a conclusion when they don't. One problem for fallacy theory is that the premises do seem, to the arguer, to support the conclusion, so fallacies aren't really irrational. This is the Socratic problem for fallacy theory: there are no fallacies because no one ever seems irrational to themselves.

One answer to the Socratic problem for fallacy theory is the Aristotelian distinction between kinds of reasons. And of course when we say reasons we also mean, just as Aristotle did, explanations (which is what the Greek seems to mean anyway). So we can explain someone's holding p in a way that doesn't entail that holding p was rational (or justified, which is similar but different).

Lots of things might count as accounts of irrationality; one common one is bias. This has the handy virtue of locating the skewing of someone's reason in some kind of psychological tendency to mess up some key element of the reasoning process in a way that's undetectable to them. So confirmation bias, for example, standardly consists in noticing only the evidence that appears to confirm your desired outcome.

Since you cannot will yourself to believe some particular conclusion, this works out great: you can seek out the evidence that might produce the desired belief and avoid the evidence that would undermine it. Of course, you can't be completely aware of this going on (thus: bias). This is what Aristotle was trying to represent.
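
To make that mechanism concrete, here is a minimal, purely illustrative sketch (my own toy numbers, not anything from this post or the psychological literature) of how attending only to confirming evidence skews belief even when every individual update is perfectly ordinary Bayesian updating:

```python
import random

random.seed(0)

# A coin that in fact lands heads only 40% of the time.
TRUE_P_HEADS = 0.4

def update(prior, heads, p_h_if_biased=0.8, p_h_if_fair=0.4):
    """One Bayesian update, on a single flip, of the hypothesis
    'the coin is biased toward heads'."""
    like_biased = p_h_if_biased if heads else 1 - p_h_if_biased
    like_fair = p_h_if_fair if heads else 1 - p_h_if_fair
    joint = prior * like_biased
    return joint / (joint + (1 - prior) * like_fair)

flips = [random.random() < TRUE_P_HEADS for _ in range(200)]

even_handed = biased = 0.5   # both reasoners start out agnostic
for heads in flips:
    even_handed = update(even_handed, heads)
    if heads:                # the biased reasoner only "notices" confirming flips
        biased = update(biased, heads)

print(f"even-handed credence that the coin favors heads: {even_handed:.3f}")
print(f"confirmation-biased credence:                    {biased:.3f}")
```

On the same 200 flips, the even-handed credence collapses toward zero while the selective one climbs toward one. Each individual step is unobjectionable, so the skew is invisible from the inside, which is the point.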

This is one very cursory account of the relation between what people mean by irrationality in argumentation and what others mean by it. There is, by the way, a lot of confusion about what it means to teach this stuff–to teach about it, to teach to avoid it, etc. More on that here. I recommend that article for anyone interested in teaching critical thinking.

Having said all of this, there is interesting research (outside of my wheelhouse, sadly) on bias going on in psychology and elsewhere. Here is one example. A sample paragraph:

However, over the course of my research, I’ve come to question all of these assumptions. As I begun exploring the literature on confirmation bias in more depth, I first realised that there is not just one thing referred to by ‘confirmation bias’, but a whole host of different tendencies, often overlapping but not well connected. I realised that this is because of course a ‘confirmation bias’ can arise at different stages of reasoning: in how we seek out new information, in how we decide what questions to ask, in how we interpret and evaluate information, and in how we actually update our beliefs. I realised that the term ‘confirmation bias’ was much more poorly defined and less well understood than I’d thought, and that the findings often used to justify it were disparate, disconnected, and not always that robust.

The questions about bias lead to other ones about open-mindedness:

All of this investigation led me to seriously question the assumptions that I had started with: that confirmation bias was pervasive, ubiquitous, and problematic, and that more open-mindedness was always better. Some of this can be explained as terminological confusion: as I scrutinised the terms I’d been using unquestioningly, I realised that different interpretations led to different conclusions. I have attempted to clarify some of the terminological confusion that arises around these issues: distinguishing between different things we might mean when we say a ‘confirmation bias’ exists (from bias as simply an inclination in one direction, to a systematic deviation from normative standards), and distinguishing between ‘open-mindedness’ as a descriptive, normative, or prescriptive concept. However, some substantive issues remained, leading me to conclusions I would not have expected myself to be sympathetic to a few years ago: that the extent to which our prior beliefs influence reasoning may well be adaptive across a range of scenarios given the various goals we are pursuing, and that it may not always be better to be ‘more open-minded’. It’s easy to say that people should be more willing to consider alternatives and less influenced by what they believe, but much harder to say how one does this. Being a total ‘blank slate’ with no assumptions or preconceptions is not a desirable or realistic starting point, and temporarily ‘setting aside’ one’s beliefs and assumptions whenever it would be useful to consider alternatives is incredibly cognitively demanding, if possible to do at all. There are tradeoffs we have to make, between the benefits of certainty and assumptions, and the benefits of having an ‘open mind’, that I had not acknowledged before.

What is interesting is how questions about one kind of account (the bias account, which is explanatory) lead back to the questions it was in a sense meant to solve (the normative ones). But perhaps this distinction is mistaken.

Argumentative clutter

A while back, not that long ago actually, you couldn't escape memes about Marie Kondo, the Japanese de-cluttering expert and reality TV personality. The most famous one was to ask, about any object you have lying around your house: does it spark joy? If it doesn't, then you get rid of it.

Over at Philosophy15, run by our own Scott Aikin and Robert Talisse, there's another version of the "Owl of Minerva Problem." Here's the video (it's a two-parter; this is part I):

A common Stoic-type argument against extra stuff (Scott can confirm this; it's one that I unsuccessfully employ all of the time) is that stuff just creates the need for more stuff. There's a version of this in Boethius's Consolation.

Interestingly, this works for arguments as well, though there is no Marie Kondo here to help you. The better you get at arguments, the more argument furniture, rugs, tchotchkes you gather in the form of argument vocabulary, fallacy names, etc. In a sense, gathering this stuff is what it means, in the minds of many at least, to be good at arguing. The problem is that it gets subsumed into arguments such that you then have to gather more of it–more second (third?) order vocabulary, and so forth, to manage the misemployment of fallacy vocabulary, for instance.

One quick example of that: the Harry Potter Problem, as I call it, is the employment of fallacy names (expecto ad hominem!) in place of ordinary-language critique of arguments. The Harry Potter Problem only arises because we have a second-order vocabulary.

Anyway, back to the main point: you can get rid of stuff and lead a simpler life. This is not an option with arguments, even though the cause of the problem is pretty much the same. We're stuck with the clutter. The only solution is more clutter.


Self straw manning


This is a continuation of Scott’s post from yesterday, where he observed that you can perform a kind of self straw man. You say something vague, knowing that you’re going to be “misinterpreted” and then you complain that you have been misinterpreted.

This kind of move–and I’ll give a slightly more subtle version of this in a moment–nicely illustrates the Owl of Minerva Problem for fallacy theory. The Owl of Minerva problem, as Scott and Robert Talisse describe it over at 3 Quarks Daily, runs like this:

But the Owl of Minerva Problem raises distinctive trouble for our politics, especially when politics is driven by argument and discourse. Here is why: once we have a critical concept, say, of a fallacy, we can deploy it in criticizing arguments. We may use it to correct an interlocutor. But once our interlocutors have that concept, that knowledge changes their behavior. They can use the concept not only to criticize our arguments, but it will change the way they argue, too. Moreover, it will also become another thing about which we argue. And so, when our concepts for describing and evaluating human argumentative behavior is used amidst those humans, it changes their behavior. They adopt it, adapt to it. They, because of the vocabulary, are moving targets, and the vocabulary becomes either otiose or abused very quickly.

The introduction of a metavocabulary will change the way we argue and it will, inevitably, become a thing we argue about.  The theoretical question is whether there is any distinction between the levels of meta-argumentation. The practical question is whether there is anything we can do about the seemingly inexorable journey to meta-argumentation. I have a theory on this but I’ll save that for another time.

Now for self straw manning. This is a slightly more subtle version of yesterday's example. Here's the text (a bit longish, sorry) from a recent profile of Sam Harris by Nathan J. Robinson.

A number of critics labeled Harris “racist” or “Islamophobic” for his commentary on Muslims, charges that enraged him. First, he said, Islam is not a race, but a set of ideas. And second, while a phobia is an irrational fear, his belief about the dangers of Islam was perfectly rational, based on an understanding of its theological doctrines. The criticisms did not lead him to rethink the way he spoke about Islam,[4] but convinced him that ignorant Western leftists were using silly terms like “Islamophobia” to avoid facing the harsh truth that, contra “tolerance” rhetoric, Islam is not an “otherwise peaceful religion that has been ‘hijacked’ by extremists” but a religion that is “fundamentalist” and warlike at its core.[5]

Each time Harris said something about Islam that created outrage, he had a defense prepared. When he wondered why anybody would want any more “fucking Muslims,” he was merely playing “Devil’s advocate.” When he said that airport security should profile “Muslims, or anyone who looks like he or she could conceivably be Muslim, and we should be honest about it,” he was simply demanding acknowledgment that a 22-year old Syrian man was objectively more likely to engage in terrorism than a 90-year-old Iowan grandmother. (Harris also said that he wasn’t advocating that only Muslims should be profiled, and that people with his own demographic characteristics should also be given extra scrutiny.) And when he suggested that if an avowedly suicidal Islamist government achieved long-range nuclear weapons capability, “the only thing likely to ensure our survival may be a nuclear first strike of our own,” he was simply referring to a hypothetical situation and not in any way suggesting nuking the cities of actually-existing Muslims.[6]

It’s not necessary to use “Islamophobia” or the r-word in order to conclude that Harris was doing something both disturbing and irrational here. As James Croft of Patheos noted, Harris would follow a common pattern when talking about Islam: (1) Say something that sounds deeply extreme and bigoted. (2) Carefully build in a qualification that makes it possible to deny that the statement is literally bigoted. (3) When audiences react with predictable horror, point to the qualification in order to insist the audience must be stupid and irrational. How can you be upset with him for merely playing Devil’s Advocate? How can you be upset with him for advocating profiling, when he also said that he himself should be profiled? How can you object, unless your “tolerance” is downright pathological, to the idea that it would be legitimate to destroy a country that was bent on destroying yours?

Sam Harris is certainly a divisive figure. I'd also venture to guess that he is smart enough to know his audience, some of whom (such as Robinson, quoted above) strongly disagree with him. He might be expected, therefore, for the purposes of having a productive debate, to make his commitments absolutely clear. This would involve, one would hope, avoiding bombastic utterances bound to provoke strong reactions or misinterpretations.

But, crucially, arguments are not always about convincing new people to adhere to your view; sometimes they're about strengthening the attitudes of your followers. For that purpose, a tactic like the self straw man seems ideal. You get an opponent (cleverly, in this case) to embody the very stereotype of the unreasonable, ideology-driven mismanager of fallacy vocabulary by setting up a straw man of your own view for them. They're drawn to the straw man but not to your qualifications, and so the trap closes.

With consistency a great soul has simply nothing to do

There is no question that President Trump has done a 180 on military intervention in the Middle East. You can see the tweet record here.

It is reasonable, I think, to call this hypocrisy or inconsistency. That’s why we have those terms. They’re shorthand for saying, “you have changed your view without signaling any reasons for having done so.” Part of what this evaluation points out, in other words, is that it’s time for reasons. After all, there’s been a change, and we normally expect there to be something to justify the change.

So this is a discussion we ought to have, and "hypocrite" and "inconsistent" are terms we need to use. But that's just me. Here's Josh Marshall from TPM:

Donald Trump has said all manner of contradictory things about Syria and unilateral airstrikes. He said Obama shouldn’t attack in 2013 and insisted he needed congressional authorization to do so. Now he is contradicting both points. But whether or not Trump is hypocritical is not a terribly important point at the moment. Whether he’s changed his position isn’t that important. But the rapidity and totality with which he’s done so is important. There are compelling arguments on both sides of the intervention question. But impulsive, reactive, unconsidered actions seldom generate happy results.

Another way to put this is that while I agree it's silly for now to focus on calling Trump a hypocrite, the man's mercurial and inconstant nature makes his manner of coming to the decision as important as the decision itself. That tells us whether he'll have the same worldview tomorrow, whether this is part of any larger plan. There are arguments for intervention and restraint. But given what we know of Trump, it is highly uncertain that this is part of either approach. It may simply be blowing some stuff up.

Which is another way of saying that his hypocrisy raises questions. This is why we have meta-linguistic terminology. And the important thing about the metalanguage is that it makes our analytical work easier. We don't need to build new theories every time we encounter a problem.

James Brown’s hair

One reason we started this blog so many years ago was to create a repository of examples of bad arguments. There were, we thought, so many. There are, we still think, so many.

Since then, we’ve expanded our focus to theoretical questions about argumentation. One such question is whether there are actually any fallacious arguments at all. Part of this question concerns the usefulness of a meta-language of argument evaluation. Argument has a tendency to eat everything around it, which means evaluations of arguments will be included in the argument itself. To use a sports analogy, penalties are not separate from the game, they’re part of the strategy of the game. The use of fallacies, then, is just another layer of argument strategy and practice.

That's not the usual argument, I think, against employing a meta-language of fallacy evaluation. Rather, the discussion often hinges on whether such moves can be precisely identified, or on whether it's practically useful to point them out. These, like the first, are both excellent considerations.

On the other hand, there's a heuristic usefulness to a set of meta-terms for argument evaluation. For one thing, it's nice to have an organized mind about these things. For another, people tend to make the same moves over and over. Consider this one from Bill O'Reilly last week:

https://www.youtube.com/watch?v=KWkanjdiMSc

In case you can’t watch, a brief summary (courtesy of CNN):

During an appearance on “Fox & Friends,” O’Reilly reacted to a clip of Rep. Maxine Waters (D-CA) delivering a speech on the floor of the House of Representatives.

"I didn't hear a word she said," O'Reilly said of Waters. "I was looking at the James Brown wig."

“If we have a picture of James Brown — it’s the same wig,” he added.

The classical version of the ad hominem goes like this: some speaker is disqualified on grounds not relevant to their competence, accuracy, etc. This seems like a pretty textbook example.

This brings me to another reason people have for skepticism about the usefulness of fallacy theory: fallacies, such as the one above, are so rare that it’s just not useful to spend time theorizing about them.

I don’t think so.

 

Fallacy theory and democracy

Instead of writing something myself today, I thought I'd post a link to this interesting piece by Scott Aikin and Robert Talisse on Democracy and the Owl of Minerva Problem. A critical paragraph:

We argue in our natural languages, and so often when we argue, we argue over economies, animals, environments, poverty, and so on. But arguments are structured collections of statements that are alleged to manifest certain kinds of logical relations; consequently, they, too, can be the subject of scrutiny and disagreement. And often in order to evaluate a claim about, say, poverty, we need to attend specifically to the argument alleged to support it. In order to discuss arguments, as arguments, we must develop a language about the argumentative use of language. That is, we must develop a metalanguage. The objective in developing a metalanguage about argument is to enable us to talk about a given argument’s quality without taking a side in the debate over the truth of its conclusion.

The critical idea is that our theory about deliberative debate always follows the debate itself. This explains our ill-preparedness for what these debates offer. See: 2016.

We’re Back

Sorry for the long hiatus: work and some WordPress issues. Anyway, we'll be back to posting occasionally.

Here’s a paper worth reading: “The Fake, the Flimsy, and the Fallacious: Demarcating Arguments in Real Life”  by Boudry, Paglieri, and Pigliucci. Here’s the key argument:

We outline a destructive dilemma we refer to as the Fallacy Fork: on the one hand, if fallacies are construed as demonstrably invalid form of reasoning, then they have very limited applicability in real life (few actual instances). On the other hand, if our definitions of fallacies are sophisticated enough to capture real-life complexities, they can no longer be held up as an effective tool for discriminating good and bad forms of reasoning.

In addition to other questions (which I’ll maybe discuss later), I wonder very strongly about the empirical verifiability of the first horn.

Applied epistemology

Interesting read over at the Leiter Reports (by guest blogger Peter Ludlow).  A taste:

Yesterday some friends on Facebook were kicking around the question of whether there is such a thing as applied epistemology and if so what it covers.  There are plenty of candidates, but there is one notion of applied epistemology that I’ve been pushing for a while – the idea that groups engage in strategies to undermine the epistemic position of their adversaries.

In the military context this is part of irregular warfare (IW) and it often employs elements of PSYOPS (psychological operations).  Applied epistemology should help us develop strategies for armoring ourselves against these PSYOPS.   I wrote a brief essay on the idea here. What most people don’t realize is that PSYOPS aren’t just deployed in the battlefield, but they are currently being deployed in our day-to-day lives, and I don’t just mean via advertising and public relations.

This very much seems like a job for fallacy theory, broadly speaking.  Here’s an example from the article referred to above:

One of the key observations by Waltz is that an epistemic attack on an organization does not necessarily need to induce false belief into the organization; it can sometimes be just as effective to induce uncertainty about information which is in point of fact reliable. When false belief does exist in an organization (as it surely does in every organization and group) the goal might then be to induce confidence in the veracity of these false beliefs. In other words, epistemic attack is not just about getting a group to believe what is false, it is about getting the group to have diminished credence in what is true and increased credence in what is false.

One obvious mechanism for this goal is the time-honored art of sophistry.

Thanks Phil Mayo for the pointer!