According to this recent report in MIT Technology Review, researchers from Harvard and MIT have developed a new method for spotting text that has been generated by AI. Their method, called the “Giant Language Model Test Room” (GLTR), exploits the fact that AI text generators rely on statistical patterns in text rather than on the actual meaning of words and sentences. “In other words, the tool can tell if the words you’re reading seem too predictable to have been written by a human hand.” OK, but is there an AI for identifying an AI that can detect text generated by an AI?
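For what it’s worth, here is a minimal sketch of the underlying idea (my own toy illustration, not the authors’ GLTR code): rank each token of a passage under a language model’s next-token distribution and see how often the real token lands near the top. It assumes the Hugging Face transformers package and the small GPT-2 checkpoint.

```python
# Toy illustration of the GLTR idea, not the authors' tool: score each token
# of a passage by its rank under GPT-2's next-token distribution. Text whose
# tokens are almost always among the model's top guesses looks "too
# predictable." Assumes `pip install torch transformers`.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits          # shape: (1, seq_len, vocab_size)
    ranks = []
    for pos in range(ids.shape[1] - 1):
        actual_next = ids[0, pos + 1]
        # Position of the actual next token in the model's sorted predictions
        # at this point (0 = the model's single most likely continuation).
        order = torch.argsort(logits[0, pos], descending=True)
        ranks.append((order == actual_next).nonzero().item())
    return ranks

ranks = token_ranks("The quick brown fox jumps over the lazy dog.")
print(sum(r < 10 for r in ranks) / len(ranks))  # fraction of tokens in GPT-2's top 10
```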

Is the “detectability” of AI-written texts due to too-slavish adherence to the mean without considering the standard deviation in how humans write text? In other words, if an AI-text-generating algorithm turned up its “unpredictability” knob, would it be less detectable by GLTR?
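To make that “knob” concrete, here is a hedged sketch of temperature scaling, one common way a generator trades predictability for randomness (the numbers below are invented, not from the paper): higher temperature flattens the next-token distribution, so sampled words sit lower in the model’s ranking and would presumably look less machine-like to a rank-based test.

```python
# Hedged sketch of an "unpredictability knob": temperature scaling of a
# next-token distribution. The logits below are invented for illustration.
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # softmax, numerically stable
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = np.array([5.0, 3.0, 1.0, 0.5])     # toy scores for four candidate words
for t in (0.7, 1.0, 1.5):
    picks = [sample_with_temperature(logits, t, rng) for _ in range(10_000)]
    # Higher t spreads the picks over more (less predictable) words.
    print(t, np.bincount(picks, minlength=len(logits)) / len(picks))
```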
Also, I wonder what the false-positive rate is for GLTR, i.e., how many human-written texts does it incorrectly classify as AI-generated? That wasn’t clear to me from the arXiv paper.
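Just to pin down the quantity I mean, with invented counts (nothing from the paper):

```python
# False-positive rate with made-up numbers, purely to define the quantity:
# out of 100 human-written passages, suppose the detector flags 5 as
# machine-generated.
false_positives = 5    # human texts wrongly flagged as AI-generated
true_negatives = 95    # human texts correctly passed
fpr = false_positives / (false_positives + true_negatives)
print(f"false-positive rate: {fpr:.0%}")   # -> 5%
```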
Excellent questions!
By the way, the example of the recursive function you cited might be better called the “Gum Up the Works Function”: it essentially just increases the “nesting level” of the call stack by x without doing anything productive. Servers commonly have configuration settings that throw an error if the nesting level exceeds some value. Could some human mental incapacities involve such do-nothing recursive circuitry?
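In case it helps, here is a hedged sketch of such a do-nothing function in Python (my own toy version, taking “nesting level” to mean call-stack depth), which enforces exactly the kind of configured ceiling you describe via sys.getrecursionlimit():

```python
# Toy "Gum Up the Works" function: recursing x levels deep accomplishes
# nothing except adding x frames to the call stack. Python enforces a
# configurable ceiling (sys.getrecursionlimit(), typically 1000) and raises
# RecursionError beyond it, much like a server rejecting over-nested requests.
import sys

def gum_up_the_works(x):
    if x <= 0:
        return
    gum_up_the_works(x - 1)      # the only "work": one more stack frame

gum_up_the_works(100)            # harmless: well under the limit
print(sys.getrecursionlimit())   # the configured ceiling, usually 1000

try:
    gum_up_the_works(10_000)     # blows past the ceiling
except RecursionError as err:
    print("nesting level too deep:", err)
```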
It’s nesting functions all the way down …