All posts by John Casey


The cure is worse than the disease


The intuition that political polarization is caused by lack of access to dissenting views has much to recommend it. First of all, if you don’t know what these views are, you can’t learn about them. Second, if you only know strongly dialectical or distorted (straw man) versions of them, you’re unlikely to find your opponents to be reasonable people with plausible views. The obvious antidote would seem to be to sit and listen to dissenting voices in their own words. Let’s call this view “the horse’s mouth.” Looking into the horse’s mouth will have a moderating effect; for people are eminently reasonable, so if you just listen to them in their own reasonable words you’ll be compelled to admit as much (and so abandon your polarized, straw man versions of their views).

Now comes science to spoil everyone’s intuitions. Some political scientists have tested whether this kind of exposure decreases polarization. The long and the short of it is that it doesn’t, and it may even increase it (substantially so for the Republicans in the study; the effect on Democrats was not statistically significant). From their paper:

Social media sites are often blamed for exacerbating political polarization by creating “echo chambers” that prevent people from being exposed to information that contradicts their preexisting beliefs. We conducted a field experiment that offered a large group of Democrats and Republicans financial compensation to follow bots that retweeted messages by elected officials and opinion leaders with opposing political views. Republican participants expressed substantially more conservative views after following a liberal Twitter bot, whereas Democrats’ attitudes became slightly more liberal after following a conservative Twitter bot—although this effect was not statistically significant. Despite several limitations, this study has important implications for the emerging field of computational social science and ongoing efforts to reduce political polarization online.

This is disappointing in part because things were looking good for the horse’s-mouth view. It had recently been shown that another representationalist paradox–the backfire effect–failed to replicate. In the original “backfire effect” study, Brendan Nyhan and Jason Reifler found that attempts to correct mistaken information would backfire in certain circumstances. The idea, in other words, is that exposure to facts is not sufficient for correction and may in fact make one retrench.

Naturally, we should be cautious with such results, as the authors themselves warn:

Although our findings should not be generalized beyond party-identified Americans who use Twitter frequently, we note that recent studies indicate this population has an outsized influence on the trajectory of public discussion—particularly as the media itself has come to rely upon Twitter as a source of news and a window into public opinion (47).

In closing, I might venture a hypothesis for why people didn’t moderate their views. Prominent politicians on Twitter, from what I’ve observed, produce content for a partisan audience. Often that partisan audience is already polarized, and it isn’t particularly well informed. Content that appeals to it, viewed by an outside observer, might only tend to confirm the observer’s worst views about the other side. If you see a bunch of tweets urging you to “lock her up,” you can hardly be blamed for thinking their authors idiots.

Self straw manning


This is a continuation of Scott’s post from yesterday, where he observed that you can perform a kind of straw man on yourself: you say something vague, knowing that you’re going to be “misinterpreted,” and then you complain that you have been misinterpreted.

This kind of move–and I’ll give a slightly more subtle version of this in a moment–nicely illustrates the Owl of Minerva Problem for fallacy theory. The Owl of Minerva problem, as Scott and Robert Talisse describe it over at 3 Quarks Daily, runs like this:

But the Owl of Minerva Problem raises distinctive trouble for our politics, especially when politics is driven by argument and discourse. Here is why: once we have a critical concept, say, of a fallacy, we can deploy it in criticizing arguments. We may use it to correct an interlocutor. But once our interlocutors have that concept, that knowledge changes their behavior. They can use the concept not only to criticize our arguments, but it will change the way they argue, too. Moreover, it will also become another thing about which we argue. And so, when our concepts for describing and evaluating human argumentative behavior is used amidst those humans, it changes their behavior. They adopt it, adapt to it. They, because of the vocabulary, are moving targets, and the vocabulary becomes either otiose or abused very quickly.

The introduction of a metavocabulary will change the way we argue, and it will, inevitably, become a thing we argue about. The theoretical question is whether there is any distinction between the levels of meta-argumentation. The practical question is whether there is anything we can do about the seemingly inexorable journey to meta-argumentation. I have a theory on this, but I’ll save it for another time.

Now for self straw manning. This is a slightly more subtle version of yesterday’s example. Here’s the text (a bit longish, sorry) from a recent profile of Sam Harris by Nathan J. Robinson.

A number of critics labeled Harris “racist” or “Islamophobic” for his commentary on Muslims, charges that enraged him. First, he said, Islam is not a race, but a set of ideas. And second, while a phobia is an irrational fear, his belief about the dangers of Islam was perfectly rational, based on an understanding of its theological doctrines. The criticisms did not lead him to rethink the way he spoke about Islam,[4] but convinced him that ignorant Western leftists were using silly terms like “Islamophobia” to avoid facing the harsh truth that, contra “tolerance” rhetoric, Islam is not an “otherwise peaceful religion that has been ‘hijacked’ by extremists” but a religion that is “fundamentalist” and warlike at its core.[5]

Each time Harris said something about Islam that created outrage, he had a defense prepared. When he wondered why anybody would want any more “fucking Muslims,” he was merely playing “Devil’s advocate.” When he said that airport security should profile “Muslims, or anyone who looks like he or she could conceivably be Muslim, and we should be honest about it,” he was simply demanding acknowledgment that a 22-year old Syrian man was objectively more likely to engage in terrorism than a 90-year-old Iowan grandmother. (Harris also said that he wasn’t advocating that only Muslims should be profiled, and that people with his own demographic characteristics should also be given extra scrutiny.) And when he suggested that if an avowedly suicidal Islamist government achieved long-range nuclear weapons capability, “the only thing likely to ensure our survival may be a nuclear first strike of our own,” he was simply referring to a hypothetical situation and not in any way suggesting nuking the cities of actually-existing Muslims.[6]

It’s not necessary to use “Islamophobia” or the r-word in order to conclude that Harris was doing something both disturbing and irrational here. As James Croft of Patheos noted, Harris would follow a common pattern when talking about Islam: (1) Say something that sounds deeply extreme and bigoted. (2) Carefully build in a qualification that makes it possible to deny that the statement is literally bigoted. (3) When audiences react with predictable horror, point to the qualification in order to insist the audience must be stupid and irrational. How can you be upset with him for merely playing Devil’s Advocate? How can you be upset with him for advocating profiling, when he also said that he himself should be profiled? How can you object, unless your “tolerance” is downright pathological, to the idea that it would be legitimate to destroy a country that was bent on destroying yours?

Sam Harris is certainly a divisive figure. I’d also venture to guess that he is smart enough to know his audience, some of whom (such as Robinson here above) strongly disagree with him. He might be expected, therefore, for the purposes of having a productive debate, to make his commitments absolutely clear. This would involve, one would hope, avoiding bombastic utterances bound to provoke strong reactions or misinterpretations.

But, crucially, arguments are not always about convincing new people to adopt your view; sometimes they serve to strengthen the attitudes of your followers. For that purpose, it seems to me, just such a tactic as the self-straw man is ideal. You get an opponent (cleverly, in this case) to embody the very stereotype of the unreasonable, ideology-driven mismanager of fallacy vocabulary by setting up a straw man of your own view for them. They’re drawn to that but not to your qualifications, and so the trap closes.

Poe’s law and hoaxes

Some of you may be familiar by now with the second in a series of hoaxes perpetrated by Peter Boghossian* (Portland State University’s Philosophy Department), James Lindsay, and Helen Pluckrose (the editor of Areo, the online journal that published the hoaxes’ findings). The first of these hoaxes, by Boghossian and Lindsay, got a fraudulent (what that means we’ll have to discuss) article into a very weak pay-to-play journal. They then drew dark conclusions from that fact about the future of scholarship. You can read a very sound rebuttal of their work by CUNY’s Massimo Pigliucci here. TL;DR: the hoax was, if anything, a hoax on many credulous members of the so-called skeptical movement, who thought that placing a crap article in a crap journal meant something.

The latest version of the hoax improves significantly on the methodology of the first one–it avoided, from what I can tell, the pay-to-play journals and, importantly, it produced a larger number of fraudulent (I’m still not sure this is the right term) articles. In all, the trio wrote 20 and managed to get seven accepted. They even managed to get one of these articles accepted by Hypatia (which has had its own problems recently).

A couple of minor criticisms before I move on to the main point of this post. Other than Hypatia, the journals involved are hardly top-tier (e.g., the Journal of Poetry Therapy?). I’m also puzzled that they call this stuff “humanities” (in the introduction to the project and elsewhere). Other than the Hypatia piece, most of it is what humanities people such as myself would call “social science.” While on the surface this might appear to be a minor terminological issue, there’s a big difference when you get down to it: people may think the hoaxers have shown something about history, philosophy, and literature when only two of the twenty articles had that focus.

If you’re interested in reading more criticisms, this piece in Buzzfeed does a pretty good job of summarizing the main complaints.

As an argumentative matter, I think this is a lot of wasted effort. Are there absolutely crappy papers that make it through the publishing process? Absolutely. I bet anyone who reads this stuff could point you to some. Sorting it out, however, is just what one does in academia: this article was bad, let me refute it; this article was so bad we’re going to ignore it. Those are criticisms. And cumulatively, over time, these criticisms yield results of a kind–results far better than producing bad work narrowly tailored to pass muster at gullible journals.

If they’ve shown anything conclusively here, it’s that you can produce shoddy work insincerely. Some of the work they produced was accepted only after revisions. Doing those revisions meant insincerely adapting their work to some kind of standard. Whether that standard is a good one is what people dispute (and why, ultimately, there’s a ranking of journals and so forth). But, speaking of insincerity, you can accidentally stumble into a good point. Consider this bit from one of the hoax papers:

Thesis: When a man privately masturbates while fantasizing about a woman who has not given him permission to do so, or while fantasizing about her in ways she hasn’t consented to, he has committed “metasexual” violence against her, even if she never finds out. “Metasexual” violence is described as a kind of nonphysical sexual violence that causes depersonalization of the woman by sexually objectifying her and making her a kind of mental prop used to facilitate male orgasm.

Purpose: To see if the definition of sexual violence can be expanded into thought crimes.

This was from a paper that was rejected. Oddly, they’ve stumbled into a sort of virtue-theory argument here. Certain activities are wrong not only because they actively harm another person, but also because they turn their perpetrator into the kind of person who would do, or at least enjoy, that kind of bad thing. It’s bad, but for primarily self-regarding reasons. Stated this way it’s not great (remember, the paper was rejected), but in all the effort to pull off a clever hoax, they actually run over the line into something plausible. The fact, however, that they can’t see the line is evidence that their failure to grasp the meaning of the term “humanities” was more than a mere oversight.

So there’s one problem with hoaxing: you might accidentally make the matter hinge on sincerity. Again, the fact that people write insincere papers is not particularly surprising. Demonstrating this fact is certainly not worth the effort they put into it.

Another feature of the hoax–its baseline logical feature–comes out of Poe’s Law, the eponymous internet law that says a view is absurd to the extent that it’s impossible to create believable satire of it without saying explicitly: this is satire. As it happens, Scott discussed this here (also, follow the references at the end for more). There, the thought was that there are always weak adherents of a view ready to turn the satire into reportage.

So it is in this case. It’s no secret that really crappy, politically motivated, or downright unethical work exists in academia. It’s also not surprising that if you try to satirize some of that work, some people will not recognize it as satire and will take it as genuine. But the more direct route to that thesis is just to look at the work itself. Such work exists, of course; its existence was the premise of the entire hoax.

A somewhat sad coda to this was the tweet thread of the graduate student who refereed one of the papers. He spent hours crafting feedback for what he thought was an earnest but inexperienced scholar. Journals such as these are where earnest scholars go to continue the discussion and their professional development. So the net effect of the hoax is that the next apparently earnest but inexperienced scholar might in fact be an insincere person looking to waste your time.

*Not to be confused with philosopher Paul Boghossian (NYU) who is now dealing with mistaken requests for interviews.

Why we argue

The second edition of Why We Argue (and How We Should) by Robert Talisse and our own Scott Aikin is now out. You can get it here or (what’s better) at your local bookstore.

Devoted readers of this site will recognize some of the ideas, but (and perhaps I’m biased) all will appreciate its lively approach to the topic of disagreement and informal logic. Its primary virtue is that it’s a self-aware discussion of informal reasoning: it recognizes that everyone is already familiar with the metalanguage of argument, and that this familiarity is among its biggest challenges. Along these lines, the new edition has material on deep disagreement, the Owl of Minerva Problem, and online arguing.

It will be worth your time.

The fake straw man

Typically, a straw man argument is some kind of misrepresentation (by selection, by distortion, or by invention) in order to conclude that some alternative position is stronger by comparison. We often think that last part–that some alternative position is stronger–is the key move. You use a straw man to go somewhere else with the argument.

So, for instance, “the Affordable Care Act (ACA or Obamacare) is communism,” distorts the ACA in favor of a more sensible, non-communist version.

This morning I was struck by an account of a strategic use of distortion that skips the last, crucial step in straw manning: the sensible alternative. Here it is:

For context, this self-retweet is meant to characterize President Trump’s approach to revisions (rather, alleged revisions) to the North American Free Trade Agreement (NAFTA). The argument runs: NAFTA is bad (for exaggerated reasons); engage in a lengthy back-and-forth; NAFTA is fixed (when it’s in fact the same).

You can see from the example that the distortion is almost entirely self-enclosed. In the first stage, it presents a distorted account of the current reality. So far, that’s very straw manny. But rather than offering an allegedly more sensible alternative, it offers a second distortion, which takes us back to a non-distorted version of the status quo.

This version–I don’t know what to call it–retains all of the puffery of the standard versions: look at how dumb my opponents are! And it doubles that puffery by turning the exchange entirely into a show about how awesome you are. You’re not as awesome, after all, if you have to share the credit with someone else.

Perhaps the more precise account is this: you distort an interlocutor’s position so that you can occupy the non-distorted version. So, the alternative position is strong enough as it is. The only problem is who is occupying it–not you. You have to steal it. To do that, you have to trick your opponent into leaving it.

There are some natural advantages to this. It’s easier to occupy an already constructed position than to make up a new one. Just ask the Great Horned Owl. There’s got to be a real estate version of this scam. The closest I can find is blockbusting, where unscrupulous agents scared white people out of their homes in order to resell those homes at much higher prices to black families.

Rudy *follow up*

Yesterday it seemed to me that Rudy Giuliani was not doing the standard relativist argument, but, more ominously for our democratic institutions, was (incoherently) challenging the adversarial process for settling questions of fact.

The initial relativism of factual claims is the condition of the adversarial legal system. Someone says x, someone says y, and an impartial judge listens to their arguments and makes a factual determination. The presupposition is that both cannot be correct.

Giuliani seemed to be arguing that because of the disagreement over the factual claims at issue in this case, no resolution is possible, and so any result at all is bound to be unjust to one of the parties. Naturally, this view itself favors one of the parties (conveniently, his).

Today, however, it seems he’s just about ready to go full relativist, but only with regard to certain questions.

Consider the following exchange from an interview with Fox News:

MacCallum: What did you mean by that?

Giuliani: Oh, very simple. I’m talking about in this particular situation, one person says the Flynn conversation took place. The other person says the Flynn conversation didn’t take place. What’s the truth? You tell me how you figure out the truth. 

MacCallum: Well either it did or it didn’t.

Giuliani: It’s like the tree falling in the forest. Did anybody hear it? I mean how do we know what the truth is?  

MacCallum: You’re talking about whether or not the president asked James Comey to go easy on Michael Flynn. And James Comey says he did, and the president says he didn’t.

Giuliani: That’s right and they will possibly charge him with perjury should he give that answer. That’s why I’m saying in situations like this, to prosecutors, the truth is relative and it’s not absolute like some philosophical concept.

Unlike the tree case, there were observers to whatever conversation we’re talking about–namely, the participants. So points off Giuliani for not noticing that.

Now again it seems to me that the law does have a system for handling cases of, what to call them, extreme factual disagreement–a trial. If it is the case, as Giuliani alleges, that nothing can be known for certain, then there’s a default setting (to the defense).

So Giuliani again balks at embracing full-throated relativism. He’s only a relativist when it’s convenient.


Pontification on moral theology

In a conversation with NBC’s Chuck Todd on Meet the Press, Donald Trump’s personal lawyer, Rudy Giuliani, remarked, puzzlingly, that “truth isn’t truth.” Here’s Politico’s reconstruction of the exchange:

“When you tell me that, you know, he should testify because he’s going to tell the truth and he shouldn’t worry, well that’s so silly because it’s somebody’s version of the truth. Not the truth,” Giuliani told Chuck Todd on NBC’s “Meet the Press” on Sunday morning.

“Truth is truth,” Todd responded.

“No, no, it isn’t truth,” Giuliani said. “Truth isn’t truth. The President of the United States says, “I didn’t …”

A startled Todd answered: “Truth isn’t truth?”

Giuliani: “No, no, no.”

Todd said: “This is going to become a bad meme.”

This has occasioned lessons in metaphysics from former FBI chief, James Comey:

Not that these guys need any iron-manning, but it seems to me that this (like Kellyanne Conway’s “alternative facts”) is a pretty banal claim inartfully stated. Even the Politico reconstruction makes this obvious: Giuliani’s worry is that Mueller will be working with a different set of alleged facts, so there might be disagreement that looks bad for Trump. I think it’s hard to disagree with this view.

There’s a better version of the objection, I think (and I haven’t seen it yet, but I’m guessing someone somewhere has said this).

A slightly more uncharitable version of Giuliani’s utterance might go like this: Giuliani (and Conway before him) means to undermine our processes of finding the truth. Part of the process for discovering the truth in our adversarial legal system is an interview such as the one Mueller wants to hold. It is of course true that Mueller has (probably) collected, at this stage, a set of claims he thinks are true. But, as far as I know (and I am not a lawyer), Mueller is an investigator, not a judge and jury. He likely also knows this. The problem, then, with Giuliani’s claim is that it rejects the adversarial process on the ground that there will be disagreement over which claims are true–which is, after all, the very point of the adversarial process.

Late update. Here’s Giuliani’s Twitter clarification:

The view seems to be that if you have contradictory statements–he said, she said–then no process is adequate to discover the truth. Take note, criminal defendants!

Dissensus profundus

To have a meaningful, or at least productive, disagreement, you should be able to identify what it is you disagree about. Once, for example, I had a disagreement with a neighbor over whether some species of vine was invasive (it was, and she was right). It was easy in that case to point to the source of our disagreement: a factual claim about Boston ivy (irrelevant side note: there are no climbing ivy species native to Chicago). Crucially, it was also easy to point to a source for confidence in such claims about plants: a plant manual (or something like that).

Sometimes, however, it’s easy to point to what you disagree about but not easy to find a solution, because you disagree about what would count as a solution. This is a deep disagreement (check out this project on the topic). You disagree so fundamentally that you disagree about disagreeing.

On this topic, today I learned, courtesy of Dr. Sara J. Uckelman’s Medieval Logic and Semantics blog, a Latin phrase for this situation:

Contra negantem principia non est disputandum

Or: “against someone who denies principles there can’t be a debate.”

Well, in some cases, according to Duns Scotus, there is one thing you can do:

Et ideo negantes talia manifesta indigent poena vel scientia vel sensu, quia secundum Avicennam primo Metaphysicae : Negantes primum principium sunt vapulandi vel exponendi igni, quousque concedant quod non est idem comburi et non comburi, vapulari et non vapulari.

And thus those who deny such manifest things need punishment or knowledge or sense because, according to Avicenna (Metaphysics I): those denying a first principle ought to be beaten or burned until they concede that being burned is not the same as not being burned and being beaten is not the same as not being beaten.

There you might have a valid case of ad baculum, though I don’t recommend this as a general principle.

Too much argument

In a recent TEDx talk in Nashville, Robert Talisse (Vanderbilt) argues that to save democracy, we need to do less of it. Here’s the video:

There’s such a strong connection between argument and democracy that I think what we’re being asked to do less of is not so much democracy as argument. We should argue less in order to argue better.