Alternative Title: Review of Robert Sanger, “Gettier in a Court of Law” (Concluding Post)
Alternative Title #2: The Irrelevance of Gettier Problems (with Apologies to Linda Zagzebski)
I want to conclude my review of Sanger’s Gettier paper with the following observation: truth is not a binary value (i.e., a belief is not simply true or false); rather, truth comes in degrees. In short, truth is often probabilistic.
This unorthodox idea is often referred to as “degrees of belief” or “credences” and is associated with Bayesian epistemology, a/k/a “subjective probability,” a topic that I have explored in some of my previous posts; see, for example, here and here. (This approach to truth can be traced back to Frank Ramsey and Bruno de Finetti; more recently, one of its leading contemporary exponents was Richard Jeffrey; see here.) Stated simply, a credence or degree of belief formally represents the strength with which we believe the truth of a given proposition: the higher one’s degree of belief in a proposition, the higher one’s confidence in its truth. In other words, beliefs vary in strength; they come in shades of grey rather than being binary, all-or-nothing affairs. My degree of belief in a given conspiracy theory, for example, may take any value from 0 to 1.
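To make this concrete, here is a minimal sketch in Python of how a set of credences might be represented. The propositions and numbers below are entirely hypothetical, chosen only to illustrate the idea that belief comes in degrees:

```python
# Toy representation of credences (degrees of belief).
# The propositions and numbers are hypothetical, for illustration only.

credences = {
    "Oswald acted alone": 0.6,
    "There is a sheep in the field": 0.95,
    "A given conspiracy theory is true": 0.02,
}

for proposition, credence in credences.items():
    # A credence is a real number in [0, 1] ...
    assert 0.0 <= credence <= 1.0
    # ... and coherence (the probability axioms) requires that the credence
    # in a proposition's negation be the complement of the credence in the
    # proposition itself.
    print(f"{proposition}: {credence:.2f} (negation: {1 - credence:.2f})")
```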
Before proceeding, I want to pose two further questions about the idea of degrees of belief. The first is definitional: what is the difference between a plain and simple “belief” and a Bayesian “degree of belief”? In particular, is there some threshold or cut-off point (say, .9 or .95 or .99) above which a degree of belief counts as a full-fledged belief? The second question is practical: when we engage in human reasoning, are our degrees of belief “infinitely precise real numbers” (exact numerical values ranging from 0 to 1) or “something less precise” (say, high, medium, and low)? In other words, can a degree of belief really be expressed in precise numerical terms, and if so, how? (This second question is especially delicate because, if it turns out that we cannot assign a precise numerical value to a degree of belief, how can we transpose the axioms of probability into the Bayesian “subjective probability” framework? See also the entry for “Imprecise Probabilities” in The Stanford Encyclopedia of Philosophy.)
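For illustration only, both questions can be restated in code. The threshold of .95 and the interval below are assumptions of mine, not anything Sanger or the Bayesian literature settles; the idea that full belief is just credence above some cut-off is one standard proposal (sometimes called the Lockean thesis), and an interval is one standard way of modeling an imprecise credence:

```python
# Sketch of the two questions, under assumed (hypothetical) values.

# Question 1 (definitional): is full belief just credence above some
# threshold? Here we arbitrarily pick 0.95 as the cut-off.
BELIEF_THRESHOLD = 0.95

def counts_as_belief(credence: float) -> bool:
    """Treat a degree of belief as a full-fledged belief iff it clears
    the (assumed) threshold."""
    return credence >= BELIEF_THRESHOLD

# Question 2 (practical): if credences are not infinitely precise real
# numbers, one common move is to model them as intervals instead.
imprecise_credence = (0.6, 0.8)  # "somewhere between 0.6 and 0.8"

print(counts_as_belief(0.99))  # True: clears the cut-off
print(counts_as_belief(0.90))  # False: high confidence, but not "belief"
print(f"Imprecise credence: {imprecise_credence[0]} to {imprecise_credence[1]}")
```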
Putting aside these subsidiary questions (for now), the main question is this: what do “degrees of belief” have to do with the things we have been talking about in our previous posts in this series, namely Gettier problems and photographic evidence, such as the picture of the sheep-dog in Sanger’s case of the negligent shepherd or the Zapruder film of the JFK assassination? In brief, two key conclusions flow from the idea of subjective probability or degrees of belief. One is that Gettier problems are often an irrelevant sideshow. Why? Because we cannot always determine whether a belief is true or false, except in the simplest or most trivial cases, like the number of coins in someone’s pocket (Gettier’s example) or the presence of a sheep in a field (Sanger’s example). In difficult cases, like whether Lee Harvey Oswald acted alone, or acted at all, in JFK’s murder, our beliefs are probabilistic.
The second conclusion is this: even if Errol Morris is correct to conclude that “photographs [and film clips] are neither true nor false” or “have no truth-value,” my Bayesian reply is, “So what?” Truth is often probabilistic anyway, not a binary or all-or-nothing value. Whether a photograph or film clip weakens or even falsifies our beliefs (shows that they might be, or in fact are, false, as in Sanger’s sheep-dog example), or whether such evidence supports our pre-existing beliefs (whether through confirmation bias or because our beliefs are indeed likely to be true), the main point is that a photograph or film clip can move our degrees of belief up or down.
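To see how a piece of photographic evidence can push a degree of belief up or down, here is a minimal sketch of Bayes’ rule applied to Sanger’s sheep-dog example. All the numbers are invented for illustration; the point is only that the posterior rises when the evidence is more likely if the belief is true, and falls when the reverse holds:

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return the posterior degree of belief in hypothesis H after
    observing evidence E, via Bayes' rule:
        P(H|E) = P(E|H) * P(H) / [P(E|H) * P(H) + P(E|~H) * P(~H)]
    """
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator

# Hypothetical prior credence of 0.7 that "the animal in the field is a sheep".

# A photograph we would expect if it were a sheep, but not if it were a dog,
# pushes the credence up:
print(bayes_update(0.7, p_e_given_h=0.9, p_e_given_not_h=0.2))  # ~0.91

# A close-up revealing a dog in sheep's clothing is far more likely if the
# "sheep" belief is false, so the credence drops:
print(bayes_update(0.7, p_e_given_h=0.05, p_e_given_not_h=0.9))  # ~0.11
```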
