I concluded my previous post with a reference to two important probability puzzles: one is called the lottery paradox; the other, the paradox of the preface. What are these logical problems? Why are they important? And can they be solved?
Let’s roll up our metaphorical sleeves and start with the lottery paradox. Imagine a fair and honest lottery or raffle consisting of 100 raffle tickets. One of the tickets is guaranteed to win, since the lottery is a fair and honest one, but you don’t know which one. All you “know” ahead of time is that each ticket has a mere 1% chance of being the winning ticket. This scenario poses a logical paradox because it is rational for you to hold beliefs that, taken together, are contradictory: (1) you believe that one of the tickets will win, yet (2) for each individual ticket, you rationally believe that it will lose, since any given ticket has only a 1% chance of winning. String those 100 individual beliefs together, however, and they entail that no ticket will win at all — which contradicts (1).
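A quick numerical sketch (my own illustration, not part of the original puzzle) shows where the tension comes from. The ticket numbers below are the same as in the example; the independence calculation is a deliberate simplification to highlight why the individual beliefs cannot simply be conjoined:

```python
n = 100
p_lose_each = 1 - 1 / n        # 0.99: rational to believe any given ticket loses

# If the 100 "ticket i loses" events were independent, they could all be
# true at once with probability:
p_all_lose_if_independent = p_lose_each ** n   # roughly 0.37 -- far from zero

# But in a fair lottery the outcomes are mutually exclusive and exhaustive:
# exactly one ticket wins, so the true probability that all 100 lose is 0.
p_all_lose_actual = 0.0

print(p_lose_each, p_all_lose_if_independent, p_all_lose_actual)
```

In other words, each individual belief is 99% safe, yet their conjunction is guaranteed to be false — that mismatch between individual and joint probability is the engine of the paradox.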
Next up, let’s consider the preface paradox, or what I like to call the “reverse lottery” problem. Have you ever noticed how scholarly works often contain a statement up front along the following lines: “… any errors that remain are mine alone …”? Alas, once again we have a contradiction! On the one hand, an honest author of a scholarly work must generally believe each and every assertion in his academic book to be true and correct, especially if he did the research and checked his work for accuracy; in any case, no self-respecting author would knowingly make a false assertion. But at the same time, he must also believe that at least one of the assertions in his book might very well be mistaken. After all, nobody’s perfect; we all make mistakes!
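The same kind of numerical sketch works here, in reverse. The figures below are my own hypothetical ones (the original example doesn’t specify how many assertions the book contains, and treating the assertions as independent is a simplifying assumption):

```python
# Preface paradox in numbers: an author who is 99% confident in each
# of 500 individual assertions in her book.
n_claims = 500
p_each_true = 0.99

# Assuming (for simplicity) that the claims are independent, the
# probability that every single assertion is correct:
p_book_error_free = p_each_true ** n_claims    # well under 1%

# So it is rational to believe each claim individually, yet also
# rational to believe the book contains at least one error:
p_at_least_one_error = 1 - p_book_error_free   # over 99%

print(p_book_error_free, p_at_least_one_error)
```

This is why I call it the “reverse lottery”: in the lottery, high confidence in each “this ticket loses” belief conflicts with the certainty that one ticket wins; in the preface, high confidence in each assertion conflicts with the near-certainty that at least one assertion is wrong.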
So, why are these probability puzzles worth worrying about? Simply put, because both the lottery and preface paradoxes potentially undermine the logical foundations of the subjective approach to probability (an approach that good Bayesians like yours truly are supposed to espouse), much like a lethal pincer movement in which two divisions of a powerful army simultaneously attack both flanks of an enemy formation. Or in the eloquent words of one scholar (philosopher Richard Foley; see generally here): “The lottery seems to show that no rational degree of [belief] less than 1.0 can be sufficient for rational belief, while the preface seems to show that a rational degree of [belief] greater than 0.5 is not even necessary for rational belief.”
Is it possible to solve either paradox? As it happens, many workarounds have been proposed, but each of these solutions has significant drawbacks. (See, e.g., Foley’s 1992 paper above or Huber’s excellent survey of the degrees of belief literature, which I reviewed in my previous post.) For my part, instead of going into the technical details of these purported solutions, for now it suffices to say that these paradoxes are “pseudo problems” — i.e. mere philosophical wordplay — and that both can easily be dissolved with a little creativity and common sense. Not every assertion in an academic book is well researched, for example, and even when an assertion is, new evidence may later emerge that falsifies it. Likewise, the lottery paradox goes away if you buy all 100 tickets.
Rest assured, I will present a more general and formal solution to the lottery and preface paradoxes in a future post; in the meantime, I will return to the task at hand and conclude my series of “micro reviews” with the recent works of Robert Darnton (The Revolutionary Temper) and Paul Sagar (Reconsidering Adam Smith).


