All posts by John Casey

The Straw We

Lately, with all of the goings-on, I’ve run across a few examples of what we might call the “Straw We.” Here’s one (not a particularly ideal example):

The idea is that this is false: many people indeed would have thought, and did think, exactly this.

As far as I can tell, Steven Pinker first identified the “Straw We” in a review of a book by Malcolm Gladwell (we discussed it here briefly, some years back). The idea is that someone represents the view held by us, most of us, or some subset of us, as weaker, less nuanced, less sophisticated, etc. than it actually is, for the purpose, I imagine, of highlighting the ingeniousness of some discovery or other.

Here’s how Pinker put it:

The banalities come from a gimmick that can be called the Straw We. First Gladwell disarmingly includes himself and the reader in a dubious consensus — for example, that “we” believe that jailing an executive will end corporate malfeasance, or that geniuses are invariably self-made prodigies or that eliminating a risk can make a system 100 percent safe. He then knocks it down with an ambiguous observation, such as that “risks are not easily manageable, accidents are not easily preventable.” As a generic statement, this is true but trite: of course many things can go wrong in a complex system, and of course people sometimes trade off safety for cost and convenience (we don’t drive to work wearing crash helmets in Mack trucks at 10 miles per hour). But as a more substantive claim that accident investigations are meaningless “rituals of reassurance” with no effect on safety, or that people have a “fundamental tendency to compensate for lower risks in one area by taking greater risks in another,” it is demonstrably false.

At first glance, the Straw We might seem to be a narrative device meant to enhance the originality or insight of what responds to it. I can even think of several pastiches of it (mostly from the Simpsons). This technique might serve somewhat innocently to increase the uptake of whatever it is one is trying to say (or maybe sell) by way of heightening the contrast. You sharpen the contrast in order to make the lesson clearer.

The Straw We’s pernicious use mirrors the pernicious use of the regular straw argument strategy: it misrepresents the dialectical terrain: things aren’t really so shallow or silly.  But the Straw We draws attention to a particular feature of straw arguments, noticed by someone whose name I don’t have handy (but will fill in later). In a straw argument, the misrepresentation of an opponent enhances the stature of the one doing the misrepresenting (and at the cost of the ones who are misrepresented) to an onlooking audience. Someone in other words looks a fool so that someone else can claim insight. The same account might be given of the Iron Man or Iron Argument: the enhancement of the argument (and consequent impugning of critics) enhances the stature of the one doing the enhancing.

The Straw We, then, highlights the performative aspects of argument. You can rarely just assert your authority on some matter (trust me, I’ve tried) and expect to be believed. You have to perform it. An easy way to perform it for an onlooking audience, and so establish your own awesomeness, is to find some dumb thing to beat up on: Look at me, I’m winning.

Hypocrisy that isn’t

Many years back, Megan McArdle wrote that people who want higher taxes can just tax themselves and donate the excess. Their failure to do this reveals their hypocrisy, so they should STFU.

That’s silly.

There was another example of this over the weekend. Tomi Lahren, a conservative internet personality, decries Obamacare to beat the band. It so turns out that the 24-year-old benefits from a central provision of the law (that children can remain on their parents’ health insurance until the age of 26). To some (and, to judge by internet trends, many) this is some kind of rank hypocrisy:

Tomi Lahren says the Affordable Care Act is a disaster that’s in a death spiral, but that hasn’t stopped her from benefitting from it.

During an interview with comedian Chelsea Handler at Politicon, the controversial 24-year-old conservative commentator trashed Obamacare for 10 minutes before admitting to the crowd that she was, in fact, still on her parents’ health insurance plan.

“Okay, so do you have a health care plan or no?” Handler asked.

“Well, luckily I’m 24, so I am still on my parents’,” Lahren said.

The irony was not lost on the audience, which promptly started to boo, laugh and chant, “Thanks, Obama!”

A law as complex as Obamacare (it was many pages long, remember) affects pretty much everyone. This makes it nearly impossible for anyone to avoid benefiting from (or suffering under) some of its provisions. The fact that Lahren benefits from a provision of it does not make her a hypocrite any more than the non-volunteering taxpayer of Megan McArdle’s imagination is one.

Indeed, as we’ve probably pointed out before, the practical impossibility of avoiding something is often a condition of laws. If this fact–the fact that you’re subject to a law you disagree with–were disqualifying, it’d be very hard to have disagreements in a democracy at all.

For more on tu quoque arguments, see this post by Scott.

Virtue signaling

The term “virtue signaling” has been around for a few years now, though there’s some dispute about its origin. This guy claims he invented it. But that seems to be false; the term has been in circulation for much longer than that.

In any case, in its barest sense, signaling is a kind of implicature. I signal one thing by doing another. Virtue signaling borrows from this somewhat imperfectly. Instead of signaling my virtue by doing some other kind of thing, I signal virtue by making arguments or statements about matters of virtue. It’s not, in other words, the doing of one thing (taking out my recycling, for instance) to signal another (I love the planet). Rather, it’s the arguing or the saying itself that is the signaling.

Here’s how the pretend inventor puts it:

I coined the phrase in an article here in The Spectator (18 April) in which I described the way in which many people say or write things to indicate that they are virtuous. Sometimes it is quite subtle. By saying that they hate the Daily Mail or Ukip, they are really telling you that they are admirably non-racist, left-wing or open-minded. One of the crucial aspects of virtue signalling is that it does not require actually doing anything virtuous. It does not involve delivering lunches to elderly neighbours or staying together with a spouse for the sake of the children. It takes no effort or sacrifice at all.

I’m all for neologisms in the service of argument analysis. Scott and I have even coined a few of them. This one, however, seems confusing (see above), unnecessary, and (like qualunquismo) self-refuting.

As for the unnecessary part, there’s already a handy term (or maybe two) for what virtue-signaling means to single out: ad hominem circumstantial (on some accounts). The one who employs the VS charge, in other words, means to claim that a person is making a certain claim not for epistemic reasons (because they think it’s true) but rather to signal belonging in a group (I’m on the side of the angels). I don’t mean to say that this is inherently wrong; people, after all, make inauthentic pronouncements all of the time. But it’s certainly a difficult charge to make. You have to make a further claim of inconsistency to show that the person does not actually believe what they say (this would be another version of the VS). This is kind of hard. Often, in any case, it’s not relevant; thus the great risk that leveling the VS charge just amounts to committing the ad hominem circumstantial fallacy: you ignore the reasons and go for the alleged motives.

Now for the self-refuting part. If, by making some pronouncement regarding some moral claim M, I virtue signal, then does it not follow that the VS charge is itself subject, or potentially subject, to the same charge? It is, at bottom, a claim about what kinds of claims are proper to make in certain contexts. By leveling the charge, I’m signaling that certain kinds of claims are improper in certain circumstances–that, in other words, it’s virtuous not to make them.

Hamilton

I posted a while back about the ad baculum argument. Roughly, “you hold p, but p is false, and if you don’t agree I will punch you in the nose.” The typical account is that the nose-punching is irrelevant to the truth of p, so the ad baculum in this version is a fallacy. Put another way: the punching, shooting, or threatening are not epistemic reasons for p, though they may be pragmatic ones. Whether pragmatic reasons can bring about belief is another question.

A standard objection to the existence of the ad baculum is that this just never really happens this way, and there are all sorts of more subtle things at work–such as consequences of one’s commitments, tests of hypocrisy, and so on. I think there’s a lot of truth to this (though I think there’s much to be said for the ad baculum). One bit of evidence in favor of the deflationary view is that it’s just hard to come up with plausible sounding examples. They all sound so contrived.

Until now. Here is Blake Farenthold, Republican Congressman from Texas (via TPM):

“The fact that the Senate does not have the courage to do some things that every Republican in the Senate promised to do is just absolutely repugnant to me. … Some of the people that are opposed to this, they’re some female senators from the Northeast,” he said, likely referring to Sen. Susan Collins, a Republican from Maine, who has been vocal about her opposition to each of the Senate’s health plans from the start. She said over the weekend that she’s opposed to the delayed repeal bill.

Sens. Lisa Murkowski (R-AK) and Shelley Moore Capito (R-WV) have also been clear about their opposition to various versions of the Republican health care plan.

Farenthold suggested if it were a man from his state blocking the repeal bill, he might ask him to “step outside and settle this Aaron Burr style,” he said, referencing the famed gun duel between the former vice president and Alexander Hamilton, a former secretary of the Treasury who had longstanding political differences. The gun fight ended in Hamilton’s death.

Man or woman, that wouldn’t settle it really–well, other than to subtract one vote either way (ok, two votes, because duels are illegal).

Qualunquismo

One sense of the difficult-to-translate Italian term “qualunquismo” (average-Joe-ism might be a start) is a distrust of politics. Underneath this notion is the idea that what animates politics, namely disagreement, is driven mostly by self-interested people. Most people, average people, or what Italians call l’uomo qualunque, know that these disagreements are pernicious.

Strolling through Twitter this morning, I stumbled upon a repost of an article from NPR Illinois comparing then-candidate Trump to current Illinois Governor Bruce Rauner. The basis of the comparison is not the bombast, but rather the outsiderist pitch: I’m not in government, I’m average, I distrust it like you, it’s a swamp of special interests, etc.

Much has been made of Trump’s appeal among voters who tend toward authoritarianism. But that’s not Rauner. Instead, political science offers a better explanation of the appeal of the governor’s pitch: stealth democracy. The idea was outlined by John Hibbing and Elizabeth Theiss-Morse in their 2002 book Stealth Democracy: Americans’ Beliefs about How Government Should Work. It goes like this: people are angry, but not because they don’t like the policy outcomes of our political system. Rather, they don’t like the process. The three main components of the idea have to do with misunderstanding how much people agree on a public agenda, a disdain for self-interested policymakers and intense dislike of the arguments and mess inherent in democratic governance. Seen through the framework of stealth democracy, Rauner is a most typical American.

“People tend to see their own attitudes as typical, so they overestimate the degree to which others share their opinions,” Hibbing and Theiss-Morse write. Last week, Rauner said Illinoisans needed to make their voices heard in the Capitol: “We need democracy to get restored in Illinois, and we need the people to put pressure on members of Speaker Madigan’s caucus to do the right thing.” Of course, thousands of people are doing just that. But among the Democratic supermajorities in the House and Senate, they’re being pressured to do a “right thing” that is not what Rauner has in mind. Where Democrats would balance the budget with a combination of tax hikes and spending cuts, Rauner says he would balance the budget with a combination of tax hikes and spending cuts only after passing business-friendly legislation and weakening collective bargaining.

When the governor makes this case, which he’s done again and again, Rauner is playing on the Stealth Democracy idea that most voters don’t understand why politicians are always fighting. Hibbing and Theiss-Morse write that because most people are not interested in getting informed on more than a few issues — if that — they can’t see what all the fuss is about: “When it is apparent that the political arena is filled with intense policy disagreement, people conclude that the reason must be illegitimate — namely, the influence of special interests.”
….
“People’s tendency to see the policy world in such a detached, generic and simplistic form explains why Ross Perot’s claim during his presidential campaigns in 1992 and 1996 that he would ‘just fix it’ resonated so deeply with the people,” Hibbing and Theiss-Morse explain. Remember Rauner’s campaign slogan? “Shake up Springfield. Bring back Illinois.” And Trump’s? “Make America great again.” They could slogan-swap without missing a beat. Stealth Democracy tells us that that since most Americans think everyone else agrees with them on what’s best for the nation, and that achieving those results ought to be as simple as putting a bill up and voting for it, we should not be surprised when people see no need for debate and compromise.

This thesis of Stealth Democracy seems to be that people are essentially qualunquisti. Underlying the qualunquista thesis is a fundamental intolerance of disagreement, especially motivated, partisan disagreement.

In the end, qualunquismo is something of a meta position. It’s a position about positions whereby the taking of a position is inherently suspect; or, alternatively, the existence of disagreement is ipso facto a sign of something amiss. This is a very attractive view to hold when you don’t have any knowledge of what people are disagreeing about. Normally, or at least to some, the existence of a disagreement is a sign that views about that matter diverge. The existence of divergent views of which one is unaware is then strong evidence that there’s something important one knows nothing about.

Taking the qualunquistic approach saves one the trouble of thinking oneself ignorant of something important or consequential. It also rewards one with the feeling of having seen through the disagreement about the subject one knows nothing about: it’s all just partisan bluster or corrupt machine politics. In Illinois, we might call this Madiganism, after Michael Madigan, the Democratic Speaker of the House everyone seems to blame for the fact that our state went two years without a budget. He appears infrequently in public, which makes this easy.

It seems obvious to me that qualunquismo is self-refuting. Not having a view is itself a view, for the same reason that advocating a “color-blind” society is itself a (silly) form of racial politics.

The trick is that qualunquismo has a built-in defense: it’s almost impossible to explain to qualunquisti why they’re wrong, since they don’t trust disagreement in the first place.

Too soon?

[Cartoon by John Darkow, Columbia Daily Tribune, Missouri]

You very often cannot control the basic circumstances of argument, especially public argument. A public argument, let’s say, is one you have in public, with, um, the public, about matters that concern the public (I suppose this could be anything). You can try to bring about a public argument on your own by inviting those around you, or the people who read your blog, or maybe someone in some comment thread. But you’re more likely to be at the mercy of events. I think this is the point behind trending topics on Twitter. You have to jump on board, because by yourself you can’t start a trend (unless you’re somebody famous). You have to take advantage of the opportunities as they present themselves.

This may run counter, however, to certain social norms. One such norm is not to speak ill of the dead or dying, or not to take advantage of misfortune to “score political points,” or the more general comedic injunction to avoid making jokes “too soon.”

As an epistemic matter, however, arguments require you to put evidence before your audience. This means you must spring upon them where they are and when they are there.

The injunction against taking advantage or forcing unkind thoughts runs counter to the imperative to present your case when the opportunity arises. My case in some circumstance might involve alighting upon some uncomfortable aspect of a public official at some weak point in their life, or using someone’s misfortune as an example.

This struck me the other day in the wake of someone’s misfortune. Oddly, I can’t bring myself to name it.

Stress test

To my mind, argumentation studies doesn’t pay enough attention to the psycho-economics (and the just plain economics) of argumentation. How much, for example, does it cost you to engage (or not engage) in an argument with someone? How much do you have invested in your beliefs? What will it cost you in time,  money, and shame to change them? There’s a cost to everything.

One of the costs that comes with believing (or maybe just being) is stress. Yesterday there was an op-ed on point in the NYT by Lisa Feldman Barrett of Northeastern (Boston). She writes:

But scientifically speaking, it’s not that simple. Words can have a powerful effect on your nervous system. Certain types of adversity, even those involving no physical contact, can make you sick, alter your brain — even kill neurons — and shorten your life.

Your body’s immune system includes little proteins called proinflammatory cytokines that cause inflammation when you’re physically injured. Under certain conditions, however, these cytokines themselves can cause physical illness. What are those conditions? One of them is chronic stress.

Your body also contains little packets of genetic material that sit on the ends of your chromosomes. They’re called telomeres. Each time your cells divide, their telomeres get a little shorter, and when they become too short, you die. This is normal aging. But guess what else shrinks your telomeres? Chronic stress.

If words can cause stress, and if prolonged stress can cause physical harm, then it seems that speech — at least certain types of speech — can be a form of violence. But which types?

That last question is a critical one. Barrett’s answer seems to depend on the duration of the stress caused by the speech:

That’s also true of a political climate in which groups of people endlessly hurl hateful words at one another, and of rampant bullying in school or on social media. A culture of constant, casual brutality is toxic to the body, and we suffer for it.

Here’s the payoff:

That’s why it’s reasonable, scientifically speaking, not to allow a provocateur and hatemonger like Milo Yiannopoulos to speak at your school. He is part of something noxious, a campaign of abuse. There is nothing to be gained from debating him, for debate is not what he is offering.

Well, there’s the problem. In the first place, to Milo’s many adoring fans, he’s not abusing anyone. If anything, he’s got to put up with your abuse (as they frequently allege). Besides, they might claim they get a rush of pleasure from the truth he speaks and that the discomfort people feel is the pain of cognitive dissonance. Second, there’s an easy way to avoid Milo’s noxious message: don’t go to his talk.

I’m sympathetic to the idea that there’s a psychological cost to unwelcome ideas. I’m also sympathetic to taking that into account as we offer them. But it’s difficult to see how these two things justify banning Milo. That his beliefs impose a high cost on hearers doesn’t seem sufficient reason to ban, or even to avoid, them. I’ll leave it to the reader as an exercise to come up with counterexamples to Barrett’s view.

Handel with care

Karen Handel, now a member of the US Congress from Georgia, sat for an interview in which she was pressed for answers about gay marriage and gay adoption. Here’s a video.

It’s a little long (well ok it’s five minutes). The interesting remark, for me at least today, comes at the end. Asked (at about 4:55) why she thinks gay parents are not as legitimate as heterosexual parents, she responds:

Because I don’t.

That’s a puzzling answer. In the first place, she certainly has a reason. She has even, earlier in the Q&A, given it: Christianity demands it. Second, does anyone, or rather can anyone, hold a view for no reason at all? Is “I just don’t” ever an answer to such a question?

I just don’t think so.

This is just not the nature of beliefs. Try it yourself: you hold your beliefs for reasons. You don’t, of course, have to articulate those reasons, but they’re always there. Hers, I imagine, is just too alienating, or too silly, or (more likely) too question-begging to state.

Hillbilly resistance

There is now a cottage industry that produces essays with the following form: the reason Trump got elected is that liberal snobs have long looked down their noses at regular folks, and the regular folks were just plumb tired of it, so they voted for Trump despite his evident shortcomings. I read the first of these in the Chronicle of Higher Education or Inside Higher Ed within days of the election. They have followed in a steady trickle.

Here’s a variation from the other day, by someone in Philadelphia:

A lot of people out there are tired of being called stupid, whether directly to their faces or indirectly with the raised eyebrow of the highbrow. I almost think they can deal with being called racist, sexist or homophobic (which some are, some aren’t and who cares anyway, since liberals are exactly the same,) but cannot deal with being ridiculed for their allegedly inferior intellects.

When people do that, they just galvanize the Hillbilly Resistance to reject any notion that the press is in danger, that Trump is a beast, that Ivanka is a Stepford daughter, that Melania lives in a tower and lets down her hair on weekends, and that we are in danger of another revolution.

I have two comments. Before those, a confession. I hate being called stupid. I hate it because, to be honest, I fear that it may be true. When someone’s accusation is particularly well phrased, it costs me a lot of time (and maybe some money if I have to buy books or something) to consider the question. Back to my comments.

First, these people are snowflakes, apparently. They so bristle at the thought of having their beliefs questioned that they behave irrationally. I can’t think of much that’s more insulting than that claim.

Second, if someone knows a way you can disagree with someone without there being the very real implication that one of you is mistaken and has therefore failed in some kind of cognitive obligation (i.e., is stupid), then I’m all ears.  Your answer may make me feel bad because I currently think there isn’t one.

In closing, the implication that people with whom you disagree are deficient is not something that has suddenly just appeared, by the way:

[Image: “Liberalism is a mental disorder”]

Body slam!

An interesting example of ad baculum (appeal to force) reasoning came up last night. A candidate for Congress in Montana body-slammed a reporter for asking a question about the CBO score of the AHCA.  This got me thinking about the ad baculum.

The textbook ad baculum argument is something of a puzzle. Here’s what we might call a fairly standard version:

The fallacy of appeal to force occurs whenever an arguer poses a conclusion to another person and tells that person either implicitly or explicitly that some harm will come to him or her if he or she does not accept the conclusion. (Hurley Concise Introduction to Logic 2008, p. 116).

As the text goes on to explain, the fallacy works by blinding the listener to the weakness of an argument with the threat of sanction. Other texts of this type make similar claims (see the Hurley-esque Baronett 2013 or here at the Fallacy Files).

On the other hand, some research-based approaches do not seem to include it (e.g., Groarke and Tindale Good Reasoning Matters! don’t mention it at all).  Walton, in contrast, includes a discussion of “fear or threat” arguments, though he stresses the ways they are passable (and considers the relevance question “outrageous”) (see Walton Fundamentals of Critical Argumentation 2006, p. 288).

Like Walton, I’ve long struggled with whether this is anything. You can’t force anyone to believe anything. Your forcing, or threats of forcing, will likely have the opposite effect. You will reinforce their belief or raise their suspicions. Beliefs just don’t work like this.

One common suggestion is that such moves aren’t really arguments, so they’re not really fallacies. It’s been used on me (and Scott) before to discount some of our dialectical examples. It would go like this: my threats to punch you if you keep asking about the CBO score aren’t “argumentative” in any real sense. They’re just threats to get you to engage in some action or other. They are threats, in other words, to get you to do something, not to conclude something.

I’m loath to give up on threats and violence as common distortions of dialectical exchanges. They happen too often, I think, for us to ignore them. If our model of fallaciousness can’t capture them, then we need to rethink it. I therefore have two suggestions. The first is this: the aim of the ad baculum is indeed an action–the action is “acceptance.” You are going to “accept” (rather than believe) that some proposition is true. You are going to include it in your practical reasoning. If I threaten you into accepting some proposition as true, then you will act as if it is. Whether you believe it in your heart of hearts is irrelevant.

The second suggestion: my threats are not aimed at your believing; they’re aimed at your doing, and at the believing of others. If I can get you to stop blabbing on about the CBO score, even though you think it’s important, I can shield that evidence from others and thereby control (however indirectly) their believing. You can control believing, after all, in this indirect way.