Sunday, January 10, 2016

Rationalization

Schwitzgebel and Ellis have an interesting discussion of rationalization, in which they ask the question, "Would it be epistemically bad if moral and philosophical thinking were, to a substantial extent, highly biased post-hoc rationalization?" After giving three possible reasons for thinking that No is the correct answer, they offer four reasons for thinking that Yes is, claiming that the costs of rationalization outweigh the benefits. The four are:

(A) Rationalization leads to overconfidence.
(B) Rationalization impedes peer critique.
(C) Rationalization undermines self-critique.
(D) Rationalization disrupts the cooperative enterprise of dialogue.

None of these seem particularly strong. A general problem with taking rationalization to have a major effect on inquiry, individual or communal, is that we are almost never in a position to know whether an argument is a rationalization or not. Nothing about the argument works any differently; the only difference is the cause of its being put forward. It takes causal analysis, and, what is more, causal analysis of motives, to assess whether something is a rationalization. In most forms of inquiry, in most kinds of dialogue, in most kinds of peer interaction, we simply don't have enough information to know; the question of whether it's a rationalization or not will be invisible in those contexts. Why would one think that inquiry, dialogue, and peer critique are so fragile that subtle differences in motivation gum up the enterprise? And subtle they often are; we often have difficulty determining in our own case whether we are rationalizing or not. We still have to do the same kind of causal analysis on ourselves, and, while we have more information about ourselves than about others, the experience of having difficulty sorting it out is a common one.

It's unclear why they take rationalization to be a particularly significant cause of overconfidence, for instance. The argument is that "If one favors conclusion P and systematically pursues and evaluates evidence concerning P in a highly biased manner, it's likely (though not inevitable) that one will end up more confident in the truth of P than is epistemically warranted." But what's missing is a reason to think that rationalization is any more likely to be "highly biased" in this way than any other kind of reasoning, particularly given that we often have difficulty distinguishing rationalization from other kinds of reasoning. To be sure, the question is specifically about "highly biased post-hoc rationalization", but why would one fret about the post-hoc part if one already knew that it was highly biased? Why think rationalization is the problem when you are already postulating severe biases?

I've talked before about what I call convalidation of rationalization, in which what is originally a rationalization becomes, over time, our real reason for holding something. Rationalization is one source of real reasons. Schwitzgebel and Ellis seem not to countenance such a possibility, because (B), (C), and (D) all seem to require that a rationalization is permanently a rationalization. Motives in reasoning, however, can change. What is more, (B) and (D) both seem to make significant assumptions. If one held the view that a major purpose of peer critique and dialogue is (for instance) to understand possible reasons, or to find public reasons, or to develop shared arguments -- arguments and reasons that both parties can use regardless of how central they actually take them to be -- would it really make any sense to say that rationalization impedes or disrupts this? Why assume that peer critique should always and everywhere examine the "real basis" for an argument? If I destroy someone's argument by showing that it is incoherent, and they just hunt around for a new argument, why does that even matter? I'll destroy that one, too, or, if I fail to do so, the discussion will at least have been upgraded to one in which we're not dealing with obviously incoherent arguments.

There are, of course, goals we might have in mind that would be interfered with by rationalization -- persuasion being the most obvious case. But there are good independent reasons, going back to Plato, for denying that rational dialogue and interaction should be primarily driven by persuasion. Other goals that would be messed with by rationalization all seem to be cooperative -- that is, goals for which we'd both already have made a commitment that implicitly requires rejecting rationalization. They don't seem generalizable.

The strongest of the four reasons given is (C). But we're not always able to determine what our real reasons for believing are -- there are plenty of cases where the evidence will be ambiguous, even to ourselves, as to whether an argument is the real reason why we believe something. What do you do if you are not sure? It seems that you would just have to explore various arguments. Schwitzgebel and Ellis offer sober assessment of evidence as a contrast to rationalization. But if you already believe something, how do you distinguish sober assessment of evidence that confirms your belief from rationalization? I see no reason to think we can do so consistently. The argument given seems to assume that all rationalization is deliberate; but this is surely not so, and it does not follow from their description of rationalization. Likewise, it seems to assume that we have extraordinary introspective clarity about these things; but in reality the only cases in which we can immediately tell that we are rationalizing are those in which we are deliberately lying.

It's also not clear how rationalization would itself impair self-critique. Surely one of the things self-critique is supposed to do, when it is possible, is uncover rationalization? The particular argument that Schwitzgebel and Ellis give does not cover all self-critique, just an "important type"; but however important that type is, there's no obvious reason why all reasoning, or even all philosophical reasoning and argument, needs to conform to it.

Rather amusingly, I'm always suspicious that arguments that rationalization is epistemically bad are really themselves rationalizations. The real reason most of us have a problem with rationalization is that there are lots of cases where it is morally bad -- I don't think it's true that most cases of rationalization are morally bad, but there are certainly some very morally bad situations that can arise through it. And if it were true that "moral and philosophical thinking were, to a substantial extent, highly biased post-hoc rationalization," it would at least be reasonable to worry about the intellectual integrity and courage of the people engaging in that moral and philosophical thinking, or to hope that there are things in place to compensate for the potential bad effects. But is it epistemically bad? Aren't we just trying to beef up our sense that rationalization is morally bad by finding ways it could be epistemically bad, too, in the way that people try to beef up their moral conclusions by saying that the bad thing is also unhealthy? There are likely some goals that are interfered with by some kinds of rationalization. But there are reasonable goals that can be interfered with by much more reputable kinds of reasoning than rationalization. And as I note above, it makes very little sense to suggest that our processes of inquiry, dialogue, and critique are so fragile that they can't handle or compensate for small, often undetectable, differences in motivation.