Two central questions in epistemology are: What should rational agents believe? How should they revise their beliefs in light of new information? Bayesianism offers a unified answer to both. Roughly, the view consists of two claims:
Probabilistic Coherence. Rational agents have degrees of belief (credences) that can be represented by a real-valued probability function over an algebra of propositions.
Conditionalization. Rational agents update their credences by conditionalizing on their total evidence.
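In standard notation (a sketch only; formulations vary across presentations), the two norms can be stated as follows, writing $c$ for the agent's credence function:

```latex
% Probabilistic Coherence: credences obey the probability axioms.
% For a credence function c over an algebra of propositions:
\begin{align}
  &c(A) \geq 0, \qquad c(\top) = 1, \\
  &c(A \vee B) = c(A) + c(B) \quad \text{whenever } A \wedge B \text{ is a contradiction.}
\end{align}
% Conditionalization: upon learning evidence E (with c(E) > 0),
% the new credence in any A is the old credence in A conditional on E:
\begin{equation}
  c_{\mathrm{new}}(A) = c(A \mid E) = \frac{c(A \wedge E)}{c(E)}.
\end{equation}
```

Question 1 below concerns whether the additivity axiom should be strengthened from the finite form stated here to countable additivity.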
The view appears simple and powerful. It explains, for instance, why it seems irrational to be more confident that there are aliens on Mars than that there are aliens at all, or to change one's mind about the existence of Martians upon learning that snow is white, having judged the two propositions unconnected ex ante. The former violates the axioms of probability, since the first proposition entails the second; the latter violates the rule of conditionalization.
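To make the diagnosis concrete (a sketch in standard notation, with hypothetical proposition letters): let $M$ be "there are aliens on Mars", $A$ "there are aliens", and $S$ "snow is white".

```latex
% Since M entails A, M is equivalent to (M and A), so additivity
% yields monotonicity, and c(M) > c(A) is incoherent:
\begin{equation}
  c(M) = c(M \wedge A) \leq c(A).
\end{equation}
% If the agent antecedently judges S irrelevant to M, i.e. c(M | S) = c(M),
% then conditionalizing on S leaves the credence in M unchanged:
\begin{equation}
  c_{\mathrm{new}}(M) = c(M \mid S) = c(M).
\end{equation}
```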
This dissertation examines the limits of Bayesianism from within its own theoretical confines. I look at three independent but related questions:
1. Should rational agents have degrees of belief that are countably additive?
2. Should rational agents refrain from having definite degrees of belief conditional on certain propositions?
3. How should rational agents plan to update in response to new information in general?
I show that, in each case, the naive Bayesian answer conflicts with intuitively plausible principles that many Bayesians endorse on independent grounds. The upshot is not purely negative: I argue that these conflicts shed interesting light on general epistemological questions, including the role of infinitary idealization, the nature of evidence, and the value of rationality.