I am reblogging my pop quiz from last week because, based on the evidence in several recent reports (see here, here, and here), it looks like the object shot down by the U.S. Air Force on 10 February was a small metallic balloon, the kind that can be purchased at Party City for under $7.00, launched by a club of high-altitude-balloon hobbyists that calls itself the “Northern Illinois Bottlecap Balloon Brigade”! Question: Should I update my priors in favor of answer choice #3 (research), or should I add a new category (hobby balloons) to my quiz? Either way, that still leaves two other unidentified flying objects.
As I see it, a good Bayesian should assign an equal probability to each of these outcomes and then begin updating these priors as new evidence becomes available, right?
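For readers who want to see what this looks like mechanically, here is a minimal sketch of the updating process I have in mind: start with equal priors over the quiz's answer categories and apply Bayes' rule as evidence comes in. The category names and likelihood numbers below are hypothetical placeholders of my own, not figures from any report.

```python
# Equal priors over the quiz's answer categories (hypothetical labels).
categories = ["surveillance", "weather", "research", "hobby"]
priors = {c: 1 / len(categories) for c in categories}

# Hypothetical likelihoods P(evidence | category) for one new piece of
# evidence, e.g. "the object resembles a cheap mylar party balloon".
likelihood = {"surveillance": 0.05, "weather": 0.10,
              "research": 0.30, "hobby": 0.55}

# Bayes' rule: posterior is proportional to prior times likelihood,
# then normalize so the posteriors sum to one.
unnormalized = {c: priors[c] * likelihood[c] for c in categories}
total = sum(unnormalized.values())
posterior = {c: p / total for c, p in unnormalized.items()}
```

With equal priors, the first update simply re-weights the categories in proportion to their likelihoods; the "hobby" category ends up on top only because I assumed the evidence favors it.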
Next week, I will review Chapters 6 and 7 of Rule of Law by Tom Bingham as well as write another blog post or two about one of the greatest legal scholars and judges of all time, Richard A. Posner. In the meantime, I am going to take this Presidents Day weekend off to attend to some other matters.
By F. E. Guerra-Pujol, University of Central Florida
Note: This essay is my contribution (sans the Christina Aguilera video) to the next issue of Faculty Focus.
Have you heard about the powerful new chatbots, those cheating genies that Microsoft and Google have recently unleashed? One of these genies is GPT-3, a/k/a ChatGPT, a powerful artificial intelligence (AI) computer program created by the “OpenAI” research lab and launched to the public in late 2022. Among other things, ChatGPT can answer questions, write up a research article, translate a text, generate a blog post, narrate a story, or compose a poem. This program is so powerful that it can even program other computers!
To conjure this genie out of her bottle and make her carry out any one of these tasks, all you need to do is type your command directly into the chatbot. As a result, many students will no doubt be tempted to use these new AI programs to complete their academic assignments. And because these genies are so enticing, and because they will only get more sophisticated and powerful (GPT-4 is scheduled to be released later this year), instructors will have but two options, broadly speaking. One is to resist and fight back against our Big Tech overlords. The other is to “lean in” and join the revolution!
For my part, I believe resistance is futile. This semester alone, for example, I have over 800 enrolled students spread out across five sections of my large business law and ethics survey course. To make matters worse, Big Tech has hundreds of billions of dollars in resources, while I am a mere college professor with a small handful of teaching assistants and a couple of liberal arts degrees. As the lyrics of “Right Hand Man” from the Hamilton musical put it: “we are outgunned, outmanned, outnumbered, and outplanned.”
Therefore, instead of trying to fight the inevitable, instead of tilting at these Big Tech windmills, I have decided to allow this dangerous Trojan horse into my courses and accept these AI gifts. Now, moving forward, when I post a discussion question, research problem, or ethical dilemma on Canvas, I will give my students the option of looking up the answer on ChatGPT first, or on Bard, Google’s version of the genie. In exchange, my students will have to post a screenshot of the AI’s answer, cross-check it for accuracy, and make substantive revisions. How would they improve the AI’s answer? What stylistic changes or additions or substitutions would they make? This way, my students will get to work with this exciting new technology firsthand, and at the same time, they will have an opportunity to develop their critical thinking skills.
Thus far, I have reviewed the first four chapters of Thomas Bingham’s book Rule of Law, pointing out some holes and flaws in Bingham’s work, such as the problem of vague or conflicting laws (see here) and the problem of hard cases (here). Today, I will point out a common but egregious logical fallacy that appears in Chapter 5, which is titled “Equality before the law”. To the point, Judge Bingham begins this chapter with the following legal formula: “The laws of the land should apply equally to all, save to the extent that objective differences justify differentiation.” The more general idea that Bingham is trying to defend is that like cases should be treated alike. Alas, I hate to be that guy, but without some external criterion (or set of criteria) for deciding when two cases are sufficiently similar or for deciding when it’s okay to treat people differently, the principle of equality, standing alone, does absolutely no substantive work in the real world. Why not? Because all laws, by definition, make classifications! As a result, as Peter Westen explained many years ago in his classic Harvard Law Review article “The Empty Idea of Equality”, we will always need some external theory, principle, or rule (i.e., a theory, principle, or rule separate from the idea of equality itself) to figure out which classifications are justified and which are not. Stated formally, Bingham’s formula is tautological, for what he is really saying is that the law must apply to everyone equally except when it doesn’t have to.
I have now posted on SSRN the first two parts of my ongoing, four-part “Adam Smith in Paris: 1766” series of papers. In summary, it was during this fateful year (1766) that the Scottish philosopher attended the celebrated salons of the leading ladies of Paris and dined with such lights as Diderot and d’Alembert, co-editors of the famed Encyclopédie. Most importantly, it was in the French capital that Smith met and exchanged ideas with the leading political economists of Europe. Part 1, which is subtitled “First Impressions“, surveys the French reception of Adam Smith’s 1759 treatise on moral sentiments on the eve of Smith’s Paris sojourn, pinpoints the date of Smith’s arrival in Paris during the winter of 1766, and describes his possible initial impressions of the City of Light, impressions that may have informed his later work. Part 2 of my Smith in Paris series, which is titled “Adam Smith’s Paris through the Eyes of Horace Walpole“, draws on the Paris travel journal of Smith’s British contemporary Horace Walpole, as well as other historical sources, to retrace the first eight weeks of Smith’s stay in the French capital (mid-February to mid-April 1766). FYI: I am currently researching the last two parts of this series and hope to begin writing up my findings this summer.
Reading of Voltaire’s tragedy of the Orphan of China in the salon of Marie Thérèse Rodet Geoffrin in 1755, by Anicet Charles Gabriel Lemonnier, c. 1812. Oil on canvas. Château de Malmaison, Rueil-Malmaison, France.
For Valentine’s Day, I am once again sharing my first-ever Adam Smith paper, where I describe the strict regulation of people’s private lives in the Scotland of Smith’s youth and further explore the evidence for and against several rumors regarding the Scottish scholar’s love life. The paper is “Adam Smith in Love” in Econ Journal Watch, 18(1), pp. 127-155. Enjoy!
Postscript: My Super Bowl contest was a success. In summary, four of my students–out of a total of 31 participants–correctly predicted that the Kansas City Chiefs would win the championship game by three points. Overall, my “wisdom of crowds” test predicted a close contest, since 15 students picked the Chiefs to win by an average of 7 points, while 16 students thought the Eagles would win by an average of 7.1 points (see my previous Super Bowl LVII post below).
On Friday 2/10, I invited the students in my business law survey course to predict which team would win the Super Bowl (the Kansas City Chiefs or the Philadelphia Eagles?) and by how many points. Then, on Sunday morning 2/12, I tallied up the results. In summary, a total of 31 students made predictions about the outcome of the championship game: 15 students predicted the Chiefs would win by an average of 7 points, while 16 students predicted the Eagles would win by an average of 7.1 points. These data are pictured in the table below:
For my part, I originally conducted this survey because I wanted to test the so-called “Wisdom of Crowds” hypothesis (i.e., I was going to base my own prediction on the aggregate of the student predictions), but how should I aggregate these data? By a narrow margin (one vote out of the 31 votes cast), more students overall think the Eagles will win, and both groups are equally confident that (on average) their team will win by a touchdown.
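One natural way to aggregate these data is to encode each pick as a signed margin (positive for a Chiefs win, negative for an Eagles win) and take the crowd's average. The headline counts and average margins below come from my tally; since I am not reproducing the individual ballots here, each student's pick is a stand-in set equal to their group's average margin.

```python
# 15 students picked the Chiefs (by 7 points on average);
# 16 students picked the Eagles (by 7.1 points on average).
# Positive margin = Chiefs win, negative = Eagles win.
chiefs_picks = [7.0] * 15
eagles_picks = [-7.1] * 16

signed_margins = chiefs_picks + eagles_picks
crowd_margin = sum(signed_margins) / len(signed_margins)
# A crowd average near zero means that, taken as a whole, the class
# expects a very close game, even though each camp individually
# expects its team to win by about a touchdown.
```

On these numbers the aggregate works out to a fraction of a point in the Eagles' favor, which is consistent with the crowd predicting a close contest.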