In our previous posts, we presented Brett Frischmann's novel idea of a Reverse Turing Test, i.e. the idea of testing the ability of humans to think like a machine or a computer. But how would we create such a test? For his part, Frischmann proposes four criteria (pictured below, via John Danaher) for creating a Reverse Turing Test. Here, we consider Frischmann's fourth factor: rationality. ((His first two criteria (mathematical computation and random number generation) do not appear to carry any moral significance, while his third criterion (common sense or folk wisdom) seems better suited for Alan Turing's original test rather than a reverse one.))
By rationality, Frischmann means instrumental or means-ends rationality. Consider the rational actor/utility-maximization model in economics (homo economicus) or the assumption of hyper-rationality in traditional (i.e. non-evolutionary) game theory: "I know that you know that I know …" Many human decisions, however, are emotive or irrational in nature, such as falling in love, overeating, suicide, etc. Given this disparity between machine-like rationality and human-like emotions, we should in principle be able to create a Reverse Turing Test to measure how rational or machine-like a person is. The more instrumental and less emotional a person is, the closer he or she would be to passing Frischmann's hypothetical Reverse Turing Test.
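The rational actor model invoked above can be made concrete with a short sketch: a purely instrumental agent ranks its options solely by expected utility and picks the highest, with no emotional pull toward or away from any option. The options and payoff numbers below are hypothetical illustrations of the model, not part of Frischmann's proposed test.

```python
# A minimal sketch of homo economicus: an agent that ranks options
# solely by expected utility. Options and numbers are hypothetical.

def expected_utility(outcomes):
    """Expected utility of an option given (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def rational_choice(options):
    """Pick the option whose expected utility is highest."""
    return max(options, key=lambda name: expected_utility(options[name]))

# Each option maps to a list of (probability, utility) outcomes.
options = {
    "safe_bond":   [(1.0, 3.0)],                # certain, modest payoff: EU = 3.0
    "risky_stock": [(0.5, 10.0), (0.5, -2.0)],  # a gamble: EU = 4.0
}

print(rational_choice(options))  # prints "risky_stock"
```

A human chooser might refuse the gamble out of fear of the loss; the instrumental agent takes it whenever the expected utility is higher, which is exactly the machine-like disposition a Reverse Turing Test on rationality would probe.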
Does the rationality component of the Reverse Turing Test have any ethical implications? John Danaher thinks so: "This Reverse Turing Test has some ethical and political significance. The biases and heuristics that define human reasoning are often essential to what we deem morally and socially acceptable conduct. Resolute utility maximisers have few friends in the world of ethical theory. Thus, to say that a human is too machine-like in their rationality might be to pass ethical judgment on their character and behavior." (See his 21 July blog post.) We, however, are not so sure what the ethical implications of Frischmann's rationality criterion are. John Rawls's famous "original position" thought-experiment, for example, is premised on the rational actor model, and theories of consequentialism (such as rule-utilitarianism) form a major tributary in the infinite river of moral philosophy. In other words, to the extent machines are far less emotional and more instrumentally rational than humans, might machines potentially have a greater ethical capacity than humans?