Two separate discussions (one by Marc Moffet at "Close Range", the other by John Turri at "Fake Barns") have made me think about some issues also raised by Fantl and McGrath in their "Evidence, Pragmatics, and Justification", Phil Review 111:1, January 2002.
Consider some proposition p and action A that one is contemplating performing. Furthermore, suppose that success in A'ing depends on the truth of p, and that one knows this. Now it seems that many accept the following claim:
[RP] Given that success in A'ing depends on the truth of p, whether one is rationally permitted to A depends on whether one has the level of confidence in p required by the situation in which one is contemplating performing A.
Presumably, the point of [RP] is that there are certain actions, the consequences of which would be so grave that one must make very sure of the conditions under which one performs the action. Thus, we don't simply allow surgeons to sew up a patient as soon as they think they're done operating; rather, we require them to account for all sponges and instruments used in the operation to ensure that the patient doesn't have any unpleasant post-op surprises. The suggestion is that such considerations motivate the acceptance of [RP]. I don't think they do.
Suppose that we could have hypnotists present at every operating table who could convince surgeons, to a level approaching absolute certainty, that they had removed all sponges, etc., from inside their patients. Despite the surgeons' extreme level of confidence, the hypnotists wouldn't, presumably, have thereby done anything to make us more comfortable about the surgeons' sewing up their patients. So, the suggestion goes, when we require "confidence" in [RP] we mean "rational confidence" -- confidence that tracks the evidence one has.
I have two problems with this -- the first relatively minor, the second more serious. The first is that people's confidence levels don't generally track the evidence they have. This has at least two important consequences for the present discussion. The first, obviously, is that it seems problematic to require -- as [RP] does -- that people attain certain levels of rational confidence before performing actions. The second is that, if we are not the sort of people whose confidence tracks our evidence, it becomes very difficult to trust our intuitions about cases involving people whose confidence levels do track their evidence -- people who are thus quite different from us.
The second, more significant, problem is this. If what one requires is not mere confidence but rational confidence, then, as I suggested above with the hypnotist example, the "confidence" component of "rational confidence" seems irrelevant. I suspect that what one really demands in asserting [RP] -- say, of doctors performing crucial surgical procedures -- isn't a higher level of confidence at all. Instead, one demands a higher level of RELIABILITY. That's why, e.g., the central element of reforms introduced to curb surgical and other hospital mistakes involves INSTITUTIONAL safeguards -- a nurse charged with counting sponges and other surgical instruments, computers that track prescription dosages, etc. -- safeguards that are not directly connected to the confidence levels of individual practitioners. As the last forty years or so of epistemology have made increasingly obvious, reliability and confidence in one's reliability can -- and often do -- come apart. And where they do, I'd opt for increasing reliability over increasing confidence every time.
This suggests that [RP] would need to be modified to involve situationally relative changes in required reliability -- as opposed to rational confidence -- in determining rational permissibility. Whether this has any implications for the relation between pragmatic considerations and knowledge must wait for another post.