Many academic law professors have left-leaning biases (to put it mildly) and spend significant amounts of time posing long-winded research questions and writing lengthy law review articles. (A few are even eloquent writers.) Likewise, ChatGPT is a left-leaning (see here, for example) “large language model” that can formulate grammatically correct answers to almost any question, even a Socratic one! So, when will ChatGPT be able to write a theory article, conduct an empirical study, or build a new mathematical model? Could she (sorry, I like to think of ChatGPT as a “she”) one day generate her own novel research questions?
For my part, I have posted more than five dozen papers on SSRN over the years, but of those 60+ papers, only seven pose an actual question in the title. In chronological order, my seven inquisitorial paper titles are as follows:
- An empirical analysis of judicial decrepitude at the U.S. Supreme Court: “The Most Senile Justice?” (2007)
- An applied game theory paper about Puerto Rico’s political status: “Is a Post-Colonial Puerto Rico Possible?” (2008)
- A book chapter about vampires and the law: “Buy or Bite?” (2013)
- A survey of the Prisoner’s Dilemma game and its relation to Coasian bargaining: “Does the Prisoner’s Dilemma Refute the Coase Theorem?” (2014)
- A normative paper about jury voting: “Why Don’t Juries Try ‘Range Voting’?” (2015)
- A purely theoretical paper about Newcomb’s paradox and the prediction theory of law: “Judge Hercules or Judge Bayes?” (2015/16)
- Two reviews of books about Adam Smith’s moral philosophy: “Do Grasshoppers Dream of Impartial Spectators?” (2021/22)
What would happen if we fed these questions into ChatGPT? Let’s find out! Starting on Monday 1/23, I will post ChatGPT’s natural-language responses to my seven scholarly queries.
