
The cure is worse than the disease


The intuition that political polarization is caused by lack of access to dissenting views has much to recommend it. First, if you don’t know what those views are, you can’t learn about them. Second, if you know only strongly dialectical or distorted (straw-man) versions of them, you’re unlikely to find your opponents to be reasonable people with plausible views. The obvious antidote would seem to be to sit and listen to dissenting voices in their own words. Let’s call this the “horse’s mouth” view: looking into the horse’s mouth will have a moderating effect, for people are eminently reasonable, so if you just listen to them in their own reasonable words, you’ll be compelled to admit as much (and so abandon your polarized, straw-man versions of their views).

Now comes science to spoil everyone’s intuitions. Some political scientists have tested whether this decreases polarization. The long and the short of it is that it doesn’t, and it may even increase it: Republicans’ views became significantly more conservative after exposure, while Democrats’ became slightly more liberal (though this latter result was within the margin of error). From their paper:

Social media sites are often blamed for exacerbating political polarization by creating “echo chambers” that prevent people from being exposed to information that contradicts their preexisting beliefs. We conducted a field experiment that offered a large group of Democrats and Republicans financial compensation to follow bots that retweeted messages by elected officials and opinion leaders with opposing political views. Republican participants expressed substantially more conservative views after following a liberal Twitter bot, whereas Democrats’ attitudes became slightly more liberal after following a conservative Twitter bot—although this effect was not statistically significant. Despite several limitations, this study has important implications for the emerging field of computational social science and ongoing efforts to reduce political polarization online.

This is disappointing in part because things had been looking good for the horse’s mouth view, since another representationalist paradox, the backfire effect, was recently shown to have failed to replicate. In the original backfire-effect study, Brendan Nyhan and Jason Reifler found that attempts to correct mistaken information would backfire in certain circumstances. The idea, in other words, is that exposure to facts is not sufficient for correction and may in fact make one retrench.

Naturally, we should be cautious with such results, as the authors themselves warn:

Although our findings should not be generalized beyond party-identified Americans who use Twitter frequently, we note that recent studies indicate this population has an outsized influence on the trajectory of public discussion—particularly as the media itself has come to rely upon Twitter as a source of news and a window into public opinion (47).

In closing, I might venture a hypothesis for why people didn’t moderate their views. Prominent politicians on Twitter, from what I’ve observed, produce content for a partisan audience. Often that partisan audience is already polarized, and it isn’t particularly well informed. Content that appeals to that audience, viewed by an outside observer, might only tend to confirm the observer’s worst views about them. If you see a bunch of tweets urging you to “lock her up,” you can hardly be blamed for thinking their authors idiots.

Believing is seeing

Nice little piece by Brendan Nyhan at the New York Times’ “The Upshot” about how ideology and factual beliefs collide.  Here’s a taste:

Mr. Kahan’s study suggests that more people know what scientists think about high-profile scientific controversies than polls suggest; they just aren’t willing to endorse the consensus when it contradicts their political or religious views. This finding helps us understand why my colleagues and I have found that factual and scientific evidence is often ineffective at reducing misperceptions and can even backfire on issues like weapons of mass destruction, health care reform and vaccines. With science as with politics, identity often trumps the facts.

So what should we do? One implication of Mr. Kahan’s study and other research in this field is that we need to try to break the association between identity and factual beliefs on high-profile issues – for instance, by making clear that you can believe in human-induced climate change and still be a conservative Republican like former Representative Bob Inglis or an evangelical Christian like the climate scientist Katharine Hayhoe.

….

The deeper problem is that citizens participate in public life precisely because they believe the issues at stake relate to their values and ideals, especially when political parties and other identity-based groups get involved – an outcome that is inevitable on high-profile issues. Those groups can help to mobilize the public and represent their interests, but they also help to produce the factual divisions that are one of the most toxic byproducts of our polarized era. Unfortunately, knowing what scientists think is ultimately no substitute for actually believing it.

All of this seems right to me. The last point is especially interesting. It reminds me (somewhat tangentially) of a paper (by Marcin Lewiński and Mark Aakhus) on polylogical reasoning I saw at ISSA last week. Though perhaps not the point of the research (I’m only vaguely familiar with it), the problem is that we have fora for dialogues (or di-logues), but none for the poly-logues that more satisfactorily represent the actual dialectical terrain. This produces ideological alliances such as the GOP one, where you’re pretty much forced to take positions on factual issues in order to belong to the club. I imagine the Democratic position then forms in contrast (or t’other way round). If you want to be in the game, you have to be on a team. Well, it’s a stupid game.

When argument doesn’t work, try argument

Fig 1: arguing badly by going for the jugular

Courtesy of a former student, here’s an interesting read from Pacific Standard about the effect of counterarguments and contrary information on people’s attitudes toward their own beliefs. TL;DR: counter-information makes people more likely to persist in their false beliefs:

Research by Nyhan and Reifler on what they’ve termed the “backfire effect” also suggests that the more a piece of information lowers self-worth, the less likely it is to have the desired impact. Specifically, they have found that when people are presented with corrective information that runs counter to their ideology, those who most strongly identify with the ideology will intensify their incorrect beliefs.

When conservatives read that the CBO claimed the Bush tax cuts did not increase government revenue, for example, they became more likely to believe that the tax cuts had indeed increased revenue.

In another study by Nyhan, Reifler, and Peter Ubel, politically knowledgeable Sarah Palin supporters became more likely to believe that death panels were real when they were presented with information demonstrating that death panels were a myth. The researchers’ favored explanation is that the information is so threatening it causes people to create counterarguments, even to the point that they overcompensate and become more convinced of their original view. The overall story is the same as in the self-affirmation research: When information presents a greater threat, it’s less likely to have an impact.

This naturally raises the question: are we doomed? Part of the problem, I think, is that people generally argue very badly. This is part of the point of Scott and Rob’s book, Why We Argue. See here for a post from the other day. Take a look, for instance, at the following claim:

This plays out over and over in politics. The arguments that are most threatening to opponents are viewed as the strongest and cited most often. Liberals are baby-killers while conservatives won’t let women control their own body. Gun control is against the constitution, but a lack of gun control leads to innocent deaths. Each argument is game-set-match for those already partial to it, but too threatening to those who aren’t. We argue like boxers wildly throwing powerful haymakers that have no chance of landing. What if instead we threw carefully planned jabs that were weaker but stood a good chance of connecting?

I don’t have any issues with this advice. Indeed, I don’t think it shows that argument of the basic logical variety we endorse here doesn’t work. On the contrary, argument works really well; this is just how you do it.

To rephrase the author’s advice: you’ve been arguing badly all along. Constantly going for the knockout argument is a bad strategy primarily because it’s bad argumentation. Such moves are very likely to distort the views of the person you’re trying to convince and, in so doing, alienate them. What’s better is the slow accumulation of evidence and the careful demonstration of the truth or acceptability of your beliefs.

He tweeted me so unfairly

I always (I think) name names here, because it's hard to cite someone's arguments without naming them by name. But sometimes, I've noticed, one does hear the expression, "I won't name names here." I ran across an instance of this at the Washington Monthly today. One fellow, Brendan Nyhan, is all upset over having been referred to (though not named) with identifiable phrases he thinks were taken out of context. Here is what he is complaining about (it's a post by Nate Silver, everyone's favorite numbers nerd):

The jobs numbers are awful, but they’ve also provided fodder for some poor political punditry. 

I won’t name names, since the people in question are normally thoughtful writers. But you can already find an article keyed off the news with the headline “How a one-term president is made.” And a political scientist in my Twitter feed wrote of how numbers like these will have Mitt Romney “measuring the drapes” in the White House.

I do not mean to suggest that the unemployment numbers are unimportant as a news story. To the contrary, recent polls find that four times as many people list jobs rather than the budget deficit as a top priority, even though the latter issue has gotten more press attention lately.

But if you’re going to write about the jobs numbers as a horse race story, you ought to do it right, and that means keeping an eye on the big picture.

Following up on this post from yesterday, this seems like a somewhat polite use of the "some say" trope. You decline to identify your opponents not because they don't exist; you avoid doing so in order to be nice. The thought, perhaps, is this: let's hope no one inquires, but that the guilty party gets the point. This seems reasonable, as the point of the criticism is friendly correction rather than triumphalist douchebaggery.

This strategy does not work, however, when the accused publicly complains about being strawmanned. On this score, the criticism in question was directed at a tweet. Two things: one, don't tweet easily misunderstood, condensed arguments (which require, as Nyhan maintains in his own defense, reference to your vast body of un-tweeted work) and expect to be tweeted fairly; and two, criticizing tweets is almost nutpicking, because tweets are usually dumb.