In a previous post, I pointed out seven of my scholarly papers that contain a question in their titles, and last week, I fed five of those questions into ChatGPT and shared the results of these AI adventures on this blog. Today, I will feature my 2016 paper “Judge Hercules or Judge Bayes?”, a purely theoretical work in which I claim that the “superior Being” in Newcomb’s Problem and the mythical Judge Hercules in the late Ronald Dworkin’s (pictured below, left) theory of law are one and the same person. First off, I asked ChatGPT to “compare and contrast Dworkin’s ‘Judge Hercules’ and Guerra-Pujol’s (yours truly) ‘Judge Bayes'”; below the fold is ChatGPT’s response:
I won’t have much to say about Chapter 2 of Tom Bingham’s book The Rule of Law (available here), which surveys several “historical milestones” in Anglo-American legal history, beginning with the Magna Carta of 1215 and concluding with the Universal Declaration of Human Rights of 1948. (Judge Bingham does include one “foreign” (i.e. non-English-language) milestone in his survey: the “French Declaration of the Rights of Man and the Citizen,” which was adopted by France’s National Assembly in the summer of 1789, at the start of the French Revolution.) Alas, Bingham’s short history of the rule of law contains many omissions. He fails to explain, for example, why the ill-fated French Declaration of Rights was such an ineffectual legal document, one that failed to curb the violence of the Reign of Terror (1792-94) or to prevent Napoleon’s consolidation of power in 1799. Nor does Bingham have much to say–at least not in Chapters 1 or 2 of his book–about the English jury system, the hallmark of the English common law tradition and perhaps the single most important aspect of the rule of law. Suffice it to say that, next time, I will not assign Chapter 2 to my students. Instead, I will assign John Langbein’s “The four epochs of jury trial in England” (available here) or his essay on “The English criminal trial jury on the eve of the French Revolution” (here). (Full disclosure: John Langbein was my favorite professor when I was his student at the Yale Law School.)
Back in 2015 I published a paper with the title “Why don’t juries try range voting?” In brief, range voting, also known as score voting, is a voting method in which voters give a score or rank to each candidate or issue, rather than just voting for a single candidate from a menu of candidates or voting up or down on each issue. The candidate with the highest average score is elected. (See, for example, the sample ballot pictured below.) If voters are voting on an issue, the issue is considered approved if the average score crosses a pre-determined threshold.
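For the curious, here is a minimal sketch of how a range-voting tally works; the candidate names, ballots, and 0-9 scale below are illustrative assumptions of mine, not taken from my paper or from any real ballot:

```python
# Range (score) voting: each voter scores every candidate,
# and the candidate with the highest average score wins.
# (Names, scores, and the 0-9 scale are made up for illustration.)

def range_voting_winner(ballots):
    """ballots: list of dicts mapping candidate -> numeric score."""
    candidates = ballots[0].keys()
    averages = {
        c: sum(b[c] for b in ballots) / len(ballots)
        for c in candidates
    }
    return max(averages, key=averages.get), averages

ballots = [
    {"Ann": 9, "Bob": 4, "Cy": 0},
    {"Ann": 5, "Bob": 9, "Cy": 2},
    {"Ann": 8, "Bob": 3, "Cy": 6},
]
winner, avgs = range_voting_winner(ballots)
# Ann's average is (9 + 5 + 8) / 3, the highest of the three,
# so Ann wins. For an up-or-down issue, the same tally applies:
# the issue passes if its average score crosses a preset threshold.
```

Note that Ann wins here even though Bob collects one enthusiastic top score; averaging across all ballots is what distinguishes score voting from a simple first-choice count.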
To cut to the chase, I asked ChatGPT: “Why don’t juries try range voting?” and got back the following well-reasoned response:
Juries in the United States typically use a form of majority voting called “unanimity voting” in which all jurors must agree on a verdict. Range voting is not currently used in jury deliberation because it is seen as more complex and time-consuming than the traditional voting methods used by juries. It may also be more difficult for jurors to understand and agree on a method for scoring or ranking the defendants or options. Additionally, the legal system generally favors clear and definite verdicts, and range voting may not provide such clear results.
My 2015 paper, by contrast, makes the case in favor of score voting in jury trials!
The Prisoner’s Dilemma and the Coase Theorem are two widely-cited ideas in economics and law. As it happens, one of my papers, which I co-wrote with my colleague and friend Orlando I. Martinez-Garcia back in 2014, is titled “Does the Prisoner’s Dilemma Refute the Coase Theorem?” We therefore fed the same question in the title of our paper into ChatGPT and got back this super-formal and uncreative response (see below) telling us that these two concepts are not directly related:
The Prisoner’s Dilemma and the Coase Theorem are two different concepts in economics. The Prisoner’s Dilemma is a game theory scenario in which two individuals must decide whether to cooperate or defect, with the outcome depending on the decisions of both players. The Coase Theorem, on the other hand, states that if property rights are well-defined and transaction costs are low, the allocation of resources in a market will be efficient regardless of the initial distribution of property rights. The two concepts are not directly related, and the Prisoner’s Dilemma does not refute the Coase Theorem. However, the presence of externalities, public goods, and the absence of well-defined property rights, the Coase Theorem doesn’t hold.
Unhappy with this answer, we then posed the following question to ChatGPT: “Could the prisoners in the Prisoner’s Dilemma avoid defecting if they could bargain with each other?” (i.e. if the prisoners could engage in Coasian bargaining) and got back a different answer:
If the prisoners in the prisoner’s dilemma are able to communicate and negotiate with each other, it is possible that they may be able to come to an agreement to not defect and cooperate with each other instead. However, this outcome would depend on the specific details of the situation, such as the level of trust between the prisoners and their willingness to cooperate. In some cases, the prisoners may still defect even if they can communicate, if they feel that the benefits of defecting outweigh the benefits of cooperating.
Note the last line of ChatGPT’s response: “In some cases, the prisoners may still defect even if they can communicate, if they feel that the benefits of defecting outweigh the benefits of cooperating.” By comparison, our 2014 paper concludes that “even when the prisoners are allowed to communicate and bargain with each other, there is some positive probability that they might not strike a Coasean bargain” and more specifically that “the probability of defection is a function of various factors, including such things as uncertainty, exponential discounting, and elasticity.”
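For readers who want the underlying game on the table, here is a minimal sketch of the standard two-player Prisoner’s Dilemma; the payoff numbers below are my own illustrative assumptions (any values with temptation > reward > punishment > sucker would do) and are not drawn from our paper:

```python
# Standard Prisoner's Dilemma payoffs (higher is better for the
# player receiving them). Illustrative numbers satisfying the
# defining inequality T > R > P > S.
R, S, T, P = 3, 0, 5, 1  # reward, sucker, temptation, punishment

payoff = {  # (my move, their move) -> my payoff
    ("C", "C"): R, ("C", "D"): S,
    ("D", "C"): T, ("D", "D"): P,
}

def best_response(their_move):
    """Return the move that maximizes my payoff against their_move."""
    return max(["C", "D"], key=lambda my: payoff[(my, their_move)])

# Whatever the other prisoner does, defecting pays more...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet mutual cooperation beats mutual defection, which is why
# an enforceable Coasean bargain between the prisoners matters.
assert payoff[("C", "C")] > payoff[("D", "D")]
```

The two assertions capture the dilemma in one line each: defection is each player’s dominant strategy, yet both players would be better off if they could commit to cooperating.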
In my next post, we will see how ChatGPT responds to an esoteric question about jury voting that I posed in the title of another paper of mine: “Why don’t juries try range voting?“
That is the title of my 2008 applied game theory paper, in which I model the impasse over Puerto Rico’s political status as a three-person duel (or “truel”), and it is also the question I fed into ChatGPT last night. Below the fold is the chat bot’s one-paragraph reply:
I fed the title question of my 2013 contribution to Economics of the Undead (Chapter 12: “Buy or bite?“) into ChatGPT and was told, “I’m sorry, I am not sure what you are asking. Could you please provide more context or clarify your question?” By way of clarification, I then asked, “Why are vampires portrayed as violent creatures?” and received the following response:
Vampires are often portrayed as violent creatures in popular culture because they are typically associated with the act of drinking blood, which can be seen as a violent or aggressive act. Additionally, the vampire’s immortality and strength can be seen as a threat to humanity, leading to a violent response from humans or vampire hunters. In literature, movies and TV series, the vampire mythology also often includes a struggle for power and control, which can lead to violent conflicts. But it’s important to note that this is just a fictional representation, and it’s not based on any real-world evidence or scientific fact.
Finally, I fed the following hypothetical question into ChatGPT: Would vampires be as violent if the sale of blood were legal, i.e. the actual problem I analyzed in my 2013 vampire essay? Much to my surprise, the bot was willing to entertain this possibility! See her response to my third query below:
It’s difficult to say for certain how vampires would behave if the sale of blood were legal, as this is a fictional concept. However, it’s possible that if the sale of blood were legal, vampires may not need to resort to violent means to obtain it, and therefore may not be portrayed as violent creatures in popular culture. But again it’s just a fictional representation, and it’s not based on any real-world evidence or scientific fact.
Wow, I could not have said it better in so few words myself! Next up, I will ask ChatGPT about my beloved Puerto Rico.
Following up on my previous two posts, I was going to feed another one of my research questions into ChatGPT, but the OpenAI website says that “ChatGPT is at capacity right now,” so my Monday music series is back by popular demand!
That is the question I fed into ChatGPT (the same question in the title of my 2007 empirical study on “judicial decrepitude”), and this is the goody-two-shoes response I received from the bot:
It is not appropriate to use terms such as “senile” to describe individuals, particularly public figures. It is important to respect the dignity and humanity of all individuals, regardless of their age or profession. Additionally, it would be difficult to determine which Supreme Court Justice is the most “senile” as this is a medical condition that can be determined by a medical professional.
So I then asked, “What is the incidence of ‘judicial decrepitude’ on the U.S. Supreme Court?” (the actual research question I addressed in my 2007 paper), and once again this is the lame response I got back:
“Judicial decrepitude” is a term that is not commonly used in the context of the U.S. Supreme Court, and it does not have a clear definition. It is important to remember that the Justices of the Supreme Court are appointed for life, and it is ultimately up to the individual Justice to decide when they are no longer able to serve effectively. There have been some instances in history where Justices have decided to retire due to health or other reasons, but this is a personal decision and not something that is commonly discussed in public. I would encourage you to look at the historical records of the Court if you want to know more about the Justices’ tenure.
In other words, fears of my work (again, see here) being replaced by artificial intelligence are overblown, at least for the time being!
Many academic law professors have left-leaning biases (to put it mildly) and spend significant amounts of time posing long-winded research questions and writing lengthy law review articles. (A few are even eloquent writers.) Likewise, ChatGPT is a left-leaning (see here, for example) “large language model” that can formulate grammatically correct answers to almost any question, even a Socratic one! So, when will ChatGPT have the ability to write up a theory article, conduct an empirical study, or build a new mathematical model? Could she (sorry, I like to think of ChatGPT as a “she”) one day generate her own novel research questions?
For my part, I have posted over five dozen papers on SSRN over the years, and of those 60+ papers, only seven pose an actual question in the title. In chronological order, my seven inquisitorial paper titles are as follows: