Kenny Easwaran, a philosopher at Texas A&M, recently published in the journal Noûs this beautiful paper on Bayesian probabilities (hat tip: Brian Leiter). Among other things, Easwaran’s paper contains the best and most succinct explanation of the “paradox of the preface” we’ve ever read. Here it is (edited by us for clarity):
Dr. Truthlove … has just written an extensively researched book, and she believes every claim in the body of the book. However, she is also aware of the history of other books on the same subject, and knows that every single one of them has turned out to contain some false claims, despite the best efforts of their authors. Thus, one of the claims she makes, in the preface of the book, is to the effect that the body of this book too, like all the others, surely contains at least one false claim. She believes that too. She notices a problem. At least one of her beliefs is false. Either some claim from the body of the book (all of which she believes) is false, or else the claim from the preface (which she also believes) is. So she knows that she’s doing something that she hates–believing a false claim. At the same time, she notices a benefit. At least one of her beliefs is true! Either the claim from the preface is true, or all of the claims in the body of the book are true.
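The paradox has a simple Bayesian arithmetic behind it, which we can sketch in a few lines of Python. (This is our own toy illustration, not Easwaran’s model; the number of claims, the per-claim credence, and the independence assumption are all hypothetical.)

```python
# Toy illustration of the preface paradox: high credence in each
# individual claim is compatible with near-certainty that at least
# one claim is false.

n_claims = 1000   # hypothetical number of claims in the book's body
p_each = 0.99     # hypothetical credence in each individual claim

# Assuming (unrealistically) that the claims are independent, the
# credence that the entire body is error-free is p_each ** n_claims.
p_all_true = p_each ** n_claims

# The preface claim ("this book surely contains at least one false
# claim") is just the complement, and it is nearly certain.
p_preface = 1 - p_all_true

print(f"Credence that all {n_claims} claims are true: {p_all_true:.6f}")
print(f"Credence in the preface claim: {p_preface:.6f}")
```

Even with 99% confidence in every single claim, Dr. Truthlove’s rational credence that the whole book is error-free works out to well under one in a thousand, so believing the preface claim is exactly what a good Bayesian should do.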
We shall have more things to say about this original paper in the days ahead …
Via kottke, we found this 20-page government-issued, World War II era guidebook called the Simple Sabotage Field Manual. University administrators and business managers take note, here is tip #3:
Organizations and Conferences: When possible, refer all matters to committees, for “further study and consideration.” Attempt to make the committees as large and bureaucratic as possible. Hold conferences when there is more critical work to be done.
Use at your own risk
In our previous posts, we presented Brett Frischmann’s novel idea of a Reverse Turing Test, i.e. the idea of testing the ability of humans to think like a machine or a computer. But how would we create such a test? For his part, Frischmann proposes four criteria (pictured below, via John Danaher) for creating a Reverse Turing Test. Here, we consider Frischmann’s fourth factor: rationality. (His first two criteria–mathematical computation and random number generation–do not appear to carry any moral significance, while his third criterion–common sense or folk wisdom–seems better suited for Alan Turing’s original test rather than a reverse one.)
By rationality, Frischmann means instrumental or means-ends rationality. Consider the rational actor/utility-maximization model in economics (homo economicus) or the assumption of hyper-rationality in traditional (i.e. non-evolutionary) game theory: “I know that you know that I know …” Many human decisions, however, are emotive or irrational in nature, such as falling in love, overeating, suicide, etc. Given this disparity between machine-like rationality and human-like emotions, we should in principle be able to create a Reverse Turing Test to measure how rational or machine-like a person is. The more instrumental and less emotional a person is, the closer he or she would be to passing Frischmann’s hypothetical Reverse Turing Test.
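One could imagine scoring such a test by checking how often a person’s choices coincide with the expected-utility-maximizing option. The sketch below is purely our own illustration (not Frischmann’s actual test); the options, outcomes, and scoring rule are all hypothetical.

```python
# Toy sketch of a "machine-likeness" score: the fraction of a
# chooser's decisions that pick the option with the highest
# expected utility.

def expected_utility(option):
    """Expected utility of an option given its (utility, probability) outcomes."""
    return sum(u * p for u, p in option)

def machine_likeness(choices):
    """choices: list of (options, picked_index) pairs, where options is a
    list of options and picked_index is the one the chooser selected.
    Returns the share of choices that maximize expected utility."""
    rational = 0
    for options, picked in choices:
        utilities = [expected_utility(o) for o in options]
        if utilities[picked] == max(utilities):
            rational += 1
    return rational / len(choices)

# A purely instrumental chooser always picks the max-EU option ...
machine_choices = [([[(10, 1.0)], [(5, 1.0)]], 0),
                   ([[(1, 0.5), (0, 0.5)], [(2, 0.5), (0, 0.5)]], 1)]
# ... while a more emotive chooser sometimes does not.
human_choices = [([[(10, 1.0)], [(5, 1.0)]], 1),
                 ([[(1, 0.5), (0, 0.5)], [(2, 0.5), (0, 0.5)]], 1)]

print(machine_likeness(machine_choices))  # 1.0
print(machine_likeness(human_choices))    # 0.5
```

On this toy scoring rule, a perfectly instrumental agent scores 1.0, and the closer a human’s score gets to 1.0, the closer he or she would be to “passing” the rationality prong of the test.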
Does the rationality component of the Reverse Turing Test have any ethical implications? John Danaher thinks so: “This Reverse Turing Test has some ethical and political significance. The biases and heuristics that define human reasoning are often essential to what we deem morally and socially acceptable conduct. Resolute utility maximisers [have] few friends in the world of ethical theory. Thus, to say that a human is too machine-like in their rationality might be to pass ethical judgment on their character and behavior.” (See his 21 July blog post.) We, however, are not so sure what the ethical implications of Frischmann’s rationality criterion are. John Rawls’s famous “original position” thought-experiment, for example, is premised on the rational actor model, and theories of consequentialism (such as rule-utilitarianism) form a major tributary in the infinite river of moral philosophy. In other words, to the extent machines are far less emotional and more instrumentally rational than humans, might machines potentially have a greater ethical capacity than humans?
Credit: John Danaher
In our previous post, we mentioned John Danaher’s excellent review of Brett Frischmann’s 2014 paper exploring the possibility of a Reverse Turing Test. One of the insightful contributions Frischmann makes to this voluminous literature is his idea of a Turing Line, or the fuzzy line that separates humans from machines. According to Frischmann, this line serves two essential functions: (1) it differentiates humans from machines (and machines from humans, we would add), and (2) it demarcates a “finish line” or goal. In other words, for a machine to pass Turing’s original test, it must be able to cross this imaginary line by deceiving us into believing that it is human. Most of the literature in this area focuses on the human side of the line: will a machine ever be capable of crossing this boundary? Frischmann, however, focuses on the machine side of the line. (In the words of Danaher: “Instead of thinking about the properties or attributes that are distinctively human, [Frischmann is] thinking about the properties and attributes that are distinctly machine-like.”) In particular, Frischmann poses a different and far more original question: will a human ever be able to deceive another person (or another machine) into believing that he or she is a machine? But what does it mean to “think like a machine”? We shall discuss that difficult question in our next post …
Credit: Brett Frischmann (via John Danaher)