Tag Archives: Bias

The Socratic problem for fallacy theory

How do you explain that someone is being irrational? What does it even mean to be irrational? What does it mean to explain irrationality? After all, "it seemed right at the time" is a perpetual phenomenological condition. This is the problem Aristotle tried to account for in his discussion of akrasia (weakness of will; incontinence) in Book VII of the Nicomachean Ethics: how can someone know that they should Phi, intend to Phi, but then fail to Phi? You can't explain this by referring to reasons, because the reasons, at least the motivating ones, are inoperative in some important sense. Fans of The Philosopher know that he struggled mightily with this problem after rejecting the Socratic claim that akrasia is just ignorance. In a lot of ways he ends up embracing that view, though in doing so he seems to identify a different shade of the problem: there are different kinds of reasons.

Something akin to this problem haunts argumentation theory, for it seems obvious that people commit fallacies all the time. This is to say, on one account, that they see premises as supporting a conclusion when they don't. One problem for fallacy theory is that the premises do seem, to the arguer, to support the conclusion, so fallacies aren't really irrational. This is the Socratic problem for fallacy theory: there are no fallacies, because no one ever seems to be irrational to themselves.

One response to the Socratic problem for fallacy theory is the Aristotelian distinction between kinds of reasons. And of course when we say reasons we also mean, just as Aristotle did, explanations (which is what the Greek seems to mean anyway). So we can explain someone's holding p in a way that doesn't entail that holding p was rational (or justified, which is similar but different).

Lots of things might count as accounts of irrationality; one common one is bias. This has the handy virtue of locating the skewing of someone's reasoning in some kind of psychological tendency to mess up some key element of the reasoning process in a way that's undetectable to them. Confirmation bias, for example, standardly consists in noticing only the evidence that appears to confirm your desired conclusion.

Since you cannot will yourself to believe some particular conclusion, this works out great: you can look at the evidence that might produce the belief, and avoid looking at the evidence that would prevent it. Of course, you can't be completely aware of this going on (thus, bias). This is what Aristotle was trying to represent.

This is one very cursory account of the relation between what people mean by irrationality in argumentation and what others mean by it. There is, by the way, a lot of confusion about what it means to teach this stuff: to teach about it, to teach how to avoid it, and so on. More on that here. I recommend that article for anyone interested in teaching critical thinking.

Having said all of this, there is interesting research on bias going on in psychology and elsewhere (sadly outside of my wheelhouse). Here is one example. A sample paragraph:

However, over the course of my research, I've come to question all of these assumptions. As I began exploring the literature on confirmation bias in more depth, I first realised that there is not just one thing referred to by 'confirmation bias', but a whole host of different tendencies, often overlapping but not well connected. I realised that this is because of course a 'confirmation bias' can arise at different stages of reasoning: in how we seek out new information, in how we decide what questions to ask, in how we interpret and evaluate information, and in how we actually update our beliefs. I realised that the term 'confirmation bias' was much more poorly defined and less well understood than I'd thought, and that the findings often used to justify it were disparate, disconnected, and not always that robust.

The questions about bias lead to other ones about open-mindedness:

All of this investigation led me to seriously question the assumptions that I had started with: that confirmation bias was pervasive, ubiquitous, and problematic, and that more open-mindedness was always better.

Some of this can be explained as terminological confusion: as I scrutinised the terms I'd been using unquestioningly, I realised that different interpretations led to different conclusions. I have attempted to clarify some of the terminological confusion that arises around these issues: distinguishing between different things we might mean when we say a 'confirmation bias' exists (from bias as simply an inclination in one direction, to a systematic deviation from normative standards), and distinguishing between 'open-mindedness' as a descriptive, normative, or prescriptive concept.

However, some substantive issues remained, leading me to conclusions I would not have expected myself to be sympathetic to a few years ago: that the extent to which our prior beliefs influence reasoning may well be adaptive across a range of scenarios given the various goals we are pursuing, and that it may not always be better to be 'more open-minded'. It's easy to say that people should be more willing to consider alternatives and less influenced by what they believe, but much harder to say how one does this. Being a total 'blank slate' with no assumptions or preconceptions is not a desirable or realistic starting point, and temporarily 'setting aside' one's beliefs and assumptions whenever it would be useful to consider alternatives is incredibly cognitively demanding, if possible to do at all. There are tradeoffs we have to make, between the benefits of certainty and assumptions, and the benefits of having an 'open mind', that I had not acknowledged before.

What is interesting is how questions about one kind of account (the bias one, which is explanatory) lead back to the questions they were in a sense meant to solve (the normative ones). But perhaps this distinction is mistaken.

Closed-minded conservatives excel at detecting liberal bias

Inside Higher Ed just ran a story titled "Eye of the Beholder," which reports on an article showing a strong correlation between being a conservative who is resistant to changing one's mind and perceiving liberal bias in the classroom. A similar thing happens with closed-minded liberals: they have the habit of seeing conservative bias.

The study found that students — even in the same classrooms — didn't perceive bias in the same ways (or at all), and those who perceived bias were those who were resistant to changing any of their views. The finding extended to some who identified themselves as being far on the left and resistant to change, and who believed that they had some biased conservative professors. But among both left-leaning and right-leaning students who didn't score high on resistance to new ideas, there was little perception of bias.

In short, if you're dogmatic about your views, you're likely the one to report having a biased professor. (Sidebar: my experience is perfectly consistent with this, as pretty much every person who's ever accused me of classroom bias has been either a blinkered conservative or a raging Marxist. That said, this, apparently, is true of me.)

What explains this variation in perception of bias in the classroom?  The lead researcher, Darren Linvill of Clemson University, proposes:

…[T]here may be elite colleges and universities where students arrive as freshmen used to having their views challenged by teachers, and that might still be "an ideal." But he said that the reality he sees from his research is that this is a foreign concept to many entering college students today.

That's it: challenging a view is a case of bias. In a way, yes, it is. It is biased against dogmatism. (A question: is this a case of self-serving bias, as the dogmatic students tend to do poorly in discussion and blame it on professor bias instead of their own lack of preparation? Is it self-serving to offer that as an explanation?)

Team players

I pick on "conservative" columnists a lot here.  I've noted elsewhere (click here) why this is so.  Now I am not the only one making this observation. From County Fair:

Last week, I noted that the numerical advantage conservatives have on the nation's op-ed pages doesn't tell the whole story:

There's a huge qualitative difference between the conservatives given newspaper columns and their progressive counterparts as well. The conservatives tend to be more partisan, more aggressive, and more reliable advocates for their "team."

The Washington Post employs as a columnist Bill Kristol, a hyperpartisan neocon Republican strategist who has been a key player in GOP efforts to block health care reform and start unnecessary wars. Who is supposed to be Kristol's counterpart? Richard Cohen, who opposes affirmative action, supports torture, and attacked liberals who opposed Kristol's war in Iraq?

Now, here's what you see if you turn to the op-ed page of today's Washington Post:

Former Bush speechwriter and current Post columnist Michael Gerson on "The Democrats' Assault on the CIA."

Conservative Post columnist Kathleen Parker on chaos in the GOP.

Former Bush aide Ed Gillespie, misleading readers about his party's historical reaction to Supreme Court nominees by Democratic presidents.

Centrist Post columnist David Ignatius on President Obama's approach to Israel.

Liberal Post columnist Ruth Marcus writing about her new puppy.

So that's three conservatives, including two former Bush aides, a centrist, and a progressive. One conservative attacking Democrats, one conservative misleading readers about the Supreme Court and attacking Democrats, one conservative noting disarray in the GOP, and a liberal writing about her dog.

I invite those who hunger for balance on this page to produce the party-line liberal columnists in national newspapers.

Lay bare the structure

Discussions of bias seem to follow a similar pattern. Aside from the groundless hurling of the "you're biased" accusation, someone will quickly claim that "bias" is inevitable and that we all have our own unjustified biases, so why bother. Here is yet another way, the Stanley Fish way, to deal with questions of bias:

I agree that it is important to have a position on such questions of truth, but the classroom is not the place to work that position out; the classroom is, however, the place to consider the efforts of men and women to work it out in the course of centuries. Steven Brence may or may not be right when he announces that an “untenable” Hobbesian notion of individualism is responsible for “much of contemporary conservative thought.” But “untenable” is not a judgment he should render, although he should make an historical argument about conservative thought’s indebtedness to Hobbes. Save “untenable” for the soapbox.

Sarah asks, what good does academic conversation “do us if it does not put us in a better position to assess current theories and thoughts?” It depends what you mean by assess. If you mean analyze, lay bare the structure of, trace the antecedents of, then well and good. But if it means pronouncing on the great issues of the day — yes we should export democracy to the rest of the world or no we shouldn’t — then what she calls assessing I call preaching.

Sarah touches on what is perhaps the most urgent question one could put to the enterprise of liberal education. What, after all, justifies it? The demand for justification, as I have said in other places, always comes from those outside the enterprise. Those inside the enterprise should resist it, because to justify something is to diminish it by implying that its value lies elsewhere. If the question What justifies what you do? won't go away, the best answer to give is "nothing."

Now I hate to be the guy who draws the facile conclusion, but isn't "laying bare the structure of" a kind of "pronouncing on"? I mean, if I say, "this argument has the structure (and, say, content) of an equivocation," aren't I pronouncing on it? Or should I not teach logic, because it's biased?