Tag Archives: fallacy theory

Self straw manning


This is a continuation of Scott’s post from yesterday, where he observed that you can perform a kind of self straw man. You say something vague, knowing that you’re going to be “misinterpreted,” and then you complain that you have been misinterpreted.

This kind of move–and I’ll give a slightly more subtle version of this in a moment–nicely illustrates the Owl of Minerva Problem for fallacy theory. The Owl of Minerva problem, as Scott and Robert Talisse describe it over at 3 Quarks Daily, runs like this:

But the Owl of Minerva Problem raises distinctive trouble for our politics, especially when politics is driven by argument and discourse. Here is why: once we have a critical concept, say, of a fallacy, we can deploy it in criticizing arguments. We may use it to correct an interlocutor. But once our interlocutors have that concept, that knowledge changes their behavior. They can use the concept not only to criticize our arguments, but it will change the way they argue, too. Moreover, it will also become another thing about which we argue. And so, when our concepts for describing and evaluating human argumentative behavior is used amidst those humans, it changes their behavior. They adopt it, adapt to it. They, because of the vocabulary, are moving targets, and the vocabulary becomes either otiose or abused very quickly.

The introduction of a metavocabulary will change the way we argue and it will, inevitably, become a thing we argue about.  The theoretical question is whether there is any distinction between the levels of meta-argumentation. The practical question is whether there is anything we can do about the seemingly inexorable journey to meta-argumentation. I have a theory on this but I’ll save that for another time.

Now for self straw manning.  This is a slightly more subtle version of yesterday’s example. Here’s the text (a bit longish, sorry) from a recent profile of Sam Harris by Nathan J. Robinson.

A number of critics labeled Harris “racist” or “Islamophobic” for his commentary on Muslims, charges that enraged him. First, he said, Islam is not a race, but a set of ideas. And second, while a phobia is an irrational fear, his belief about the dangers of Islam was perfectly rational, based on an understanding of its theological doctrines. The criticisms did not lead him to rethink the way he spoke about Islam,[4] but convinced him that ignorant Western leftists were using silly terms like “Islamophobia” to avoid facing the harsh truth that, contra “tolerance” rhetoric, Islam is not an “otherwise peaceful religion that has been ‘hijacked’ by extremists” but a religion that is “fundamentalist” and warlike at its core.[5]

Each time Harris said something about Islam that created outrage, he had a defense prepared. When he wondered why anybody would want any more “fucking Muslims,” he was merely playing “Devil’s advocate.” When he said that airport security should profile “Muslims, or anyone who looks like he or she could conceivably be Muslim, and we should be honest about it,” he was simply demanding acknowledgment that a 22-year old Syrian man was objectively more likely to engage in terrorism than a 90-year-old Iowan grandmother. (Harris also said that he wasn’t advocating that only Muslims should be profiled, and that people with his own demographic characteristics should also be given extra scrutiny.) And when he suggested that if an avowedly suicidal Islamist government achieved long-range nuclear weapons capability, “the only thing likely to ensure our survival may be a nuclear first strike of our own,” he was simply referring to a hypothetical situation and not in any way suggesting nuking the cities of actually-existing Muslims.[6]

It’s not necessary to use “Islamophobia” or the r-word in order to conclude that Harris was doing something both disturbing and irrational here. As James Croft of Patheos noted, Harris would follow a common pattern when talking about Islam: (1) Say something that sounds deeply extreme and bigoted. (2) Carefully build in a qualification that makes it possible to deny that the statement is literally bigoted. (3) When audiences react with predictable horror, point to the qualification in order to insist the audience must be stupid and irrational. How can you be upset with him for merely playing Devil’s Advocate? How can you be upset with him for advocating profiling, when he also said that he himself should be profiled? How can you object, unless your “tolerance” is downright pathological, to the idea that it would be legitimate to destroy a country that was bent on destroying yours?

Sam Harris is certainly a divisive figure. I’d also venture to guess that he is smart enough to know his audience, some of whom (such as Robinson here above) strongly disagree with him. He might be expected, therefore, for the purposes of having a productive debate, to make his commitments absolutely clear. This would involve, one would hope, avoiding bombastic utterances bound to provoke strong reactions or misinterpretations.

But, crucially, arguments are not always about convincing new people to adopt your view; sometimes they are about strengthening the attitudes of your followers. For that purpose, the self straw man is an ideal tactic. You get an opponent (cleverly, in this case) to embody the very stereotype of the unreasonable, ideology-driven mismanager of fallacy vocabulary by setting up a straw man of your own view for them. They’re drawn to that but not to your qualifications, and so the trap closes.

With consistency a great soul has simply nothing to do

There is no question that President Trump has done a 180 on military intervention in the Middle East. You can see the tweet record here.

It is reasonable, I think, to call this hypocrisy or inconsistency. That’s why we have those terms. They’re shorthand for saying, “you have changed your view without signaling any reasons for having done so.” Part of what this evaluation points out, in other words, is that it’s time for reasons. After all, there’s been a change, and we normally expect there to be something to justify the change.

So this is a discussion we ought to have and “hypocrite” or “inconsistent” are terms we need to use.  But that’s just me. Here’s Josh Marshall from TPM.

Donald Trump has said all manner of contradictory things about Syria and unilateral airstrikes. He said Obama shouldn’t attack in 2013 and insisted he needed congressional authorization to do so. Now he is contradicting both points. But whether or not Trump is hypocritical is not a terribly important point at the moment. Whether he’s changed his position isn’t that important. But the rapidity and totality with which he’s done so is important. There are compelling arguments on both sides of the intervention question. But impulsive, reactive, unconsidered actions seldom generate happy results.

Another way to put this is that while I agree it’s silly for now to focus on calling Trump a hypocrite, the man’s mercurial and inconstant nature makes his manner of coming to the decision as important as the decision itself. That tells us whether he’ll have the same worldview tomorrow, whether this is part of any larger plan. There are arguments for intervention and restraint. But given what we know of Trump, it is highly uncertain that this is part of either approach. It may simply be blowing some stuff up.

Which is another way of saying his hypocrisy raises questions. This is why we have meta-linguistic terminology. And the important thing about the metalanguage is that it makes our analytical work easier. We don’t need to build new theories every time we encounter a problem.

James Brown’s hair

One reason we started this blog so many years ago was to create a repository of examples of bad arguments. There were, we thought, so many. There are, we still think, so many.

Since then, we’ve expanded our focus to theoretical questions about argumentation. One such question is whether there are actually any fallacious arguments at all. Part of this question concerns the usefulness of a meta-language of argument evaluation. Argument has a tendency to eat everything around it, which means evaluations of arguments will be included in the argument itself. To use a sports analogy, penalties are not separate from the game; they’re part of the strategy of the game. The use of fallacies, then, is just another layer of argument strategy and practice.

That’s not the usual argument, I think, against employing a meta-language of fallacy evaluation. Often, rather, the discussion hinges on whether such moves can be precisely identified, or whether it’s practically useful to point them out. These, like the first, are both excellent considerations.

On the other hand, there’s a heuristic usefulness to a set of meta-terms for argument evaluation. For one, it’s nice to have an organized mind about these things.  Second, people tend to make the same moves over and over. Consider this one from Bill O’Reilly last week:

In case you can’t watch, a brief summary (courtesy of CNN):

During an appearance on “Fox & Friends,” O’Reilly reacted to a clip of Rep. Maxine Waters (D-CA) delivering a speech on the floor of the House of Representatives.

“I didn’t hear a word she said,” O’Reilly said of Waters. “I was looking at the James Brown wig.”

“If we have a picture of James Brown — it’s the same wig,” he added.

The classical version of the ad hominem goes like this: some speaker is disqualified on grounds not relevant to their competence, accuracy, etc. This seems like a pretty textbook example.

This brings me to another reason people have for skepticism about the usefulness of fallacy theory: fallacies, such as the one above, are so rare that it’s just not useful to spend time theorizing about them.

I don’t think so.


Fallacy theory and democracy

Instead of writing something myself today, I thought I’d post a link to this interesting piece by Scott Aikin and Robert Talisse on Democracy and the Owl of Minerva Problem. A critical graph:

We argue in our natural languages, and so often when we argue, we argue over economies, animals, environments, poverty, and so on. But arguments are structured collections of statements that are alleged to manifest certain kinds of logical relations; consequently, they, too, can be the subject of scrutiny and disagreement. And often in order to evaluate a claim about, say, poverty, we need to attend specifically to the argument alleged to support it. In order to discuss arguments, as arguments, we must develop a language about the argumentative use of language. That is, we must develop a metalanguage. The objective in developing a metalanguage about argument is to enable us to talk about a given argument’s quality without taking a side in the debate over the truth of its conclusion.

The critical idea is that our theory about deliberative debate always follows the debate itself. This explains our ill-preparedness for what these debates offer. See: 2016.

We’re Back

Sorry for the long hiatus–work and some WordPress issues. Anyway, we’ll be back to posting occasionally.

Here’s a paper worth reading: “The Fake, the Flimsy, and the Fallacious: Demarcating Arguments in Real Life”  by Boudry, Paglieri, and Pigliucci. Here’s the key argument:

We outline a destructive dilemma we refer to as the Fallacy Fork: on the one hand, if fallacies are construed as demonstrably invalid form of reasoning, then they have very limited applicability in real life (few actual instances). On the other hand, if our definitions of fallacies are sophisticated enough to capture real-life complexities, they can no longer be held up as an effective tool for discriminating good and bad forms of reasoning.

In addition to other questions (which I may discuss later), I have serious doubts about the empirical verifiability of the first horn.

Applied epistemology

Interesting read over at the Leiter Reports (by guest blogger Peter Ludlow).  A taste:

Yesterday some friends on Facebook were kicking around the question of whether there is such a thing as applied epistemology and if so what it covers.  There are plenty of candidates, but there is one notion of applied epistemology that I’ve been pushing for a while – the idea that groups engage in strategies to undermine the epistemic position of their adversaries.

In the military context this is part of irregular warfare (IW) and it often employs elements of PSYOPS (psychological operations).  Applied epistemology should help us develop strategies for armoring ourselves against these PSYOPS.   I wrote a brief essay on the idea here. What most people don’t realize is that PSYOPS aren’t just deployed in the battlefield, but they are currently being deployed in our day-to-day lives, and I don’t just mean via advertising and public relations.

This very much seems like a job for fallacy theory, broadly speaking.  Here’s an example from the article referred to above:

One of the key observations by Waltz is that an epistemic attack on an organization does not necessarily need to induce false belief into the organization; it can sometimes be just as effective to induce uncertainty about information which is in point of fact reliable. When false belief does exist in an organization (as it surely does in every organization and group) the goal might then be to induce confidence in the veracity of these false beliefs. In other words, epistemic attack is not just about getting a group to believe what is false, it is about getting the group to have diminished credence in what is true and increased credence in what is false.

One obvious mechanism for this goal is the time-honored art of sophistry.

Thanks Phil Mayo for the pointer!