Wednesday, September 2, 2015

On Disagreement, Part 5: Objections and Conclusion

"Honest disagreement is often a sign of progress"
- Mahatma Gandhi

So far, I’ve discussed three cases (which I’ll call clocks, dress, and cardiology) that illustrate what I consider to be a more widely applicable principle (p): that the right thing to do upon discovering that an epistemic peer disagrees with one after full disclosure is to suspend belief. Today, I’d like to begin examining two lines of criticism of – or disagreement with – this proposition, and I'll address them in reverse order. The first, which I'll save for the end of the post, is that my proposition is self-defeating. The second, which I'll deal with first, comes in the form of several other cases where epistemic peers disagree but apparently, “nearly everyone supposes that it is perfectly acceptable for one to (nevertheless) hold fast - i.e. to fail to adjust credences to bring them closer to the credences of another”(1). So let's begin by examining one of several such counterexamples described by the philosopher Graham Oppy, which he calls elementary math:

“Two people who have been colleagues for the past decade are drinking coffee at a café while trying to determine how many other people from their department will be attending an upcoming conference. One, reasoning aloud, says: ‘Well, John and Monima are going on Wednesday, and Karen and Jakob are going on Thursday, and, since 2 + 2 = 4, there will be four other people from our department at the conference’. In response, the other says: ‘But 2+2 does not equal 4’. Prior to the disagreement, neither party to the conversation has any reason to suppose that the other is evidentially or cognitively deficient in any way; and, we may suppose, each knows that none of the speech acts involved is insincere. Moreover, we may suppose, the one is feeling perfectly fine: the one has no reason to think that she is depressed, or delusional, or drugged, or drunk, and so forth. In this case, it seems plausible to suppose that the one should conclude that something has just now gone evidentially or cognitively awry with the second, and that the one should not even slightly adjust the credence she gives to the claim that 2 + 2 = 4.”

Well, of course it seems plausible to us, readers whose independent opinions agree with the one that 2 + 2 = 4, that something has gone cognitively awry with the second! Our opinions serve as additional evidence that shifts the 0.5 probability that each disagreeing party is correct. Watch how rewording the alleged counterexample illustrates this reason for its failure:

You and your colleague of the past decade are drinking coffee at a deserted café while trying to determine how many other people from your department will be attending an upcoming conference. You, reasoning aloud, say: ‘Well, John and Monima are going on Wednesday, and Karen and Jakob are going on Thursday, and, since 2 + 2 = 4, there will be four other people from our department at the conference’. In response, your colleague says: ‘But 2+2 does not equal 4’. Prior to the disagreement, neither you nor your colleague has any reason to suppose that the other is evidentially or cognitively deficient in any way; and, we may suppose, you each know that none of the speech acts involved is insincere. Moreover, we may suppose, you are both feeling perfectly fine, with no reason to think that you or the other is depressed, or delusional, or drugged, or drunk, and so forth.

I hope that you agree that it no longer seems plausible to suppose that you should conclude that something has just now gone evidentially or cognitively awry with your colleague, and that you should not even slightly adjust the credence you give to the claim that 2 + 2 = 4. One of you has a problem, but as bizarre and implausible as this scenario seems, if we are to really consider it a relevant counterexample to the principle I have proposed, then neither you nor your colleague can have any reason independent of the disagreement itself to think that the other is the one with that problem. Without a reason independent of your disagreement to justify maintaining your belief, no matter how confident each of you is in being correct, it seems clear to me that you must both suspend belief until further evidence (which is, in this example, easy to acquire) sorts it out. Why? Because, as I have discussed earlier, from an epistemic perspective, neither of you has any reason to think that you are more likely to be correct than the other, so the probability at that point in time that either of you is correct – again, from each of your epistemic perspectives – is 0.5. Since assent to a belief that seems no more likely to be true than false is irrational, suspension of belief is required. Oppy provides several other examples of disagreements concerning “cognitively basic judgments” (those immediately grounded in memory, perception, or elementary mathematics), but I think that they all fail for similar reasons. Essentially, if you found dress convincing of my proposition (it, too, involves a cognitively basic judgment), then Oppy’s other similar counterexamples should seem pretty unconvincing.
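For readers who like to see the arithmetic behind that 0.5, here is a minimal sketch of the symmetry argument. The reliability numbers and the helper function are my own illustrative assumptions, not anything drawn from Oppy or from the cases above: when two people judged equally reliable disagree about a yes/no matter, Bayes' rule leaves each with a 0.5 probability of being the one who is right, and only an independent reason to think one party more reliable moves that number.

```python
# Toy Bayesian sketch (illustrative assumptions only): the probability that I am
# the one who judged correctly, given that my peer and I disagree on a
# binary question and each of us has some independent track-record reliability.

def prob_i_am_right(my_reliability: float, peer_reliability: float) -> float:
    """Posterior probability that I judged correctly, given that exactly one
    of us did (i.e., we disagree)."""
    # "I'm right" requires my success and my peer's failure, and vice versa.
    i_right = my_reliability * (1 - peer_reliability)
    peer_right = peer_reliability * (1 - my_reliability)
    return i_right / (i_right + peer_right)

print(prob_i_am_right(0.9, 0.9))  # equal reliability (true peers) -> 0.5
print(prob_i_am_right(0.9, 0.8))  # an independent reason to favor myself -> ~0.69
```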

Imagine that you wake up and all of your memories and all of your intuitions tell you that 2 + 2 does not equal 4, while every other person on Earth disagrees. You hold your belief as confidently, sincerely, and intuitively as everyone else. Every time you take two oranges and put them in a box with two other oranges, you count the total number of oranges and you never get 4, while everybody else around you always does. This would be very strange indeed, but no matter how confident you feel that you know better, I hope you can see that you simply cannot insist that you are right; you must suspend your belief or risk suffering from a delusion. On the other hand, everybody else can draw epistemic confidence from the otherwise perfect agreement that 2 + 2 does equal 4. Agreement really does matter, since it helps identify the criteria we can use to determine what's normal and what isn't. Here's another example: if you think that killing others for your own pleasure is fine and dandy, and you can't see any problem with that, you're not a lone champion of an obscure moral truth; you're a psychopath.
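The flip side - the epistemic confidence the agreeing majority may draw - can be sketched the same way. Again, this is my own toy model with assumed reliabilities, not a claim about any real population: if observers judge independently and each is individually fairly reliable, the probability that the lone dissenter is the one who got it right collapses as the number of people agreeing against them grows.

```python
# Toy model (assumed reliability of 0.8 per observer, independent judgments):
# how likely is it that a single dissenter is right and everyone else is wrong?

def lone_dissenter_right(reliability: float, n_agreeing: int) -> float:
    """Probability that the lone dissenter is correct on a binary question,
    assuming the dissenter and each of the n_agreeing others judges
    independently with the same reliability."""
    dissenter_right = reliability * (1 - reliability) ** n_agreeing
    majority_right = (1 - reliability) * reliability ** n_agreeing
    return dissenter_right / (dissenter_right + majority_right)

for n in (1, 2, 5, 10):
    print(n, lone_dissenter_right(0.8, n))
# one-on-one the split is 0.5; against ten agreeing observers it is ~0.000004
```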

Alright: I've saved the best for last. The final challenge posed to my proposition is that there appear to exist not just my own epistemic peers, but my own epistemic superiors who disagree with (p), including Dr. Oppy and Dr. Alvin Plantinga, among others. (There are, of course, other philosophers who agree with (p) (or something like it), but the fact that there are those who disagree is the very problem.) Unless I have some way of saving (p) from self-referential defeat, even I must suspend my own belief in it.

But I do have a way of saving (p) from this challenge, at least for now. I have argued logically for (p), and the mere disagreement of epistemic peers or superiors is not enough for me to dismiss it. Recall that (p) requires disagreement despite full disclosure, and full disclosure requires that those who disagree explain which of my premises they reject and why. If, after such a process, I am left with no reason independent of our disagreement to think that (p) is correct, then it seems that I will have to become agnostic regarding (p), because that is precisely what (p) requires of me. That hasn't yet happened.

One of my interlocutors on this subject rejected the notion that when n epistemic peers disagree after full disclosure, the epistemic probability that any one of them is correct is 1/n, for that is precisely what the disagreement calls into question. I am sympathetic to this criticism, and I interpret it as indicating that it is sometimes very difficult to determine whether those disagreeing really are epistemic peers. However, there are times when this really isn't difficult at all, such as when large groups of people disagree about, for example, cognitively basic judgments, as was the case in dress. Cardiology is a good example of a similar case involving a cognitively non-basic judgment. Relevant epistemic differences will tend to even out among large groups. If you saw the stripes on the dress as white and gold, and you knew that your spouse saw them as blue and black, you might wonder whether there was something wrong with your spouse visuo-neurologically; but when you realize that there are literally thousands of people agreeing with you, and thousands of others agreeing with your spouse, it becomes much clearer that it is something about the situation - about the picture itself - and not about either of the two individuals or camps that is preventing rational assent to a belief about the stripe colors. But this is just to say that the more reason one has to think that the person disagreeing really is an epistemic peer, the more one must reduce one's confidence in one's belief. Interestingly enough, it seems that applying Bayesian math to disagreements among perfectly rational cognizers leads to just this conclusion.
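To make that last point concrete, here is one more hedged sketch. It is a toy model of my own construction (the reliability values are assumptions): one simply averages over the two hypotheses - "the dissenter is a genuine peer" and "the dissenter is not" - and the stronger the reasons for peerhood, the closer the post-disagreement credence falls toward the 0.5 that full peerhood demands.

```python
# Toy model (assumed reliabilities): credence in my original belief after one
# dissent, averaged over whether the dissenter is a genuine epistemic peer.

def credence_after_disagreement(p_peer: float,
                                my_reliability: float = 0.95,
                                non_peer_reliability: float = 0.6) -> float:
    """Credence in my belief after a single disagreement, mixing the
    peer and non-peer hypotheses by their probability."""
    def credence_given(their_reliability: float) -> float:
        # Same symmetry calculation as before: I'm right iff they're wrong.
        mine = my_reliability * (1 - their_reliability)
        theirs = their_reliability * (1 - my_reliability)
        return mine / (mine + theirs)
    return (p_peer * credence_given(my_reliability)
            + (1 - p_peer) * credence_given(non_peer_reliability))

for q in (0.0, 0.5, 0.9, 1.0):
    print(q, round(credence_after_disagreement(q), 3))
# 0.0 -> 0.927, 0.5 -> 0.713, 0.9 -> 0.543, 1.0 -> 0.5
```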

So at least for now, and at least on those occasions where one seems compelled to conclude that the disagreement really is among epistemic peers, (p) still stands. If you know where I might encounter objections to the premises leading to (p), please link or provide references in the comments below. Or even better, explain what they are in your own words. I'm keen to hear all about your disagreement . . .

(1) Oppy, G. "Disagreement." International Journal for Philosophy of Religion 68 (2010): 183–199.
