Note: This is the seventh blog post in a multi-part series on conspiracy theories
Is there a fruitful way of studying conspiracy theories, one that is not ad hoc and does not pre-judge conspiracy believers as somehow mentally defective? Thus far, I have summarized the ideas of Ross Douthat and Franz Neumann in my last few blog posts and have found their approaches to conspiracy theories wanting. In brief, I disagree with Douthat’s quixotic attempt to subject conspiracy theories to rational analysis, and I equally dislike the ad hominem nature of Neumann’s psychological focus. But my critiques of Douthat and Neumann pose a deeper theoretical dilemma: if conspiracy theories are generally impossible to falsify, how can we possibly refrain from offering psychological diagnoses of conspiracy theorists?
I have given this deep conspiracy-theory dilemma a lot of thought. Instead of subjecting conspiracy theories to rational methods (Douthat’s Razor) or describing conspiracy believers as deranged, irrational, or mentally ill (Neumann’s Trap), I see several possible ways of avoiding the fallacies to which Douthat, Neumann, and so many others have fallen victim. Specifically, we could apply Richard Dawkins’ original “meme’s-eye” view of cultural evolution to conspiracy theories, or, in the alternative, we could frame conspiracy theories as a form of Foucauldian “discourse” or as a Wittgensteinian “language-game”: a separate linguistic domain, as pervasive and ineradicable as religion, but with its own logic and rules. I will follow these intriguing approaches, and see where they take us, in my next few blog posts …
Note: This is the sixth blog post in a multi-part series
Why do so many people fall for far-fetched conspiracy theories? In my previous post, I introduced Franz Neumann’s theory of successful conspiracy theories, which appears in his classic essay “Anxiety and Politics.” Among other things, Neumann identifies the “intensification of anxiety through manipulation” as one of the reasons why people believe in conspiracies. Following Neumann’s lead, contemporary researchers tend to emphasize psychological explanations of conspiracy beliefs. One study (Goertzel, 1994), for example, concludes that “belief in conspiracies [is] correlated with anomia,” while another study (Oliver & Wood, 2014) concludes that “the likelihood of supporting conspiracy theories is strongly predicted by a willingness to believe in other unseen, intentional forces and an attraction to Manichean narratives.” Similarly, another study (van Prooijen & Douglas, 2017) examines the link between “societal crisis situations” and “belief in conspiracy theories” and blames “fear, uncertainty, and the feeling of being out of control” for “increasing the likelihood of perceiving conspiracies in social situations.” Yet another study, a comprehensive survey of the literature (Douglas, et al., 2020), concludes that conspiracy beliefs are due to “a range of psychological, political, and social factors.”
What all of these studies of conspiracy theories have in common, beginning with Neumann, is their focus on human psychology. For Neumann, for example, “anxiety” is what allegedly makes people more likely to buy into a conspiracy theory, like the infamous Stab-in-the-Back Myth during the Weimar Republic era. In Neumann’s own words (footnotes omitted):
“Germany of 1930–33 was the land of alienations and anxiety. The facts are familiar: defeat, shame, unfinished revolution, inflation, depression, non-identification with the existing political parties, non-functioning of the political system …. The inability to understand why man should be so hard pressed stimulated anxiety, which was made into nearly neurotic anxiety by the National Socialist policy of terror and its propaganda of anti-semitism.”
Although this explanation certainly sounds plausible, why did anxiety rear its ugly head in Germany and not in other countries? Why were Germans more anxious than, say, Americans, Spaniards, or Frenchmen? The underlying problem with these psychological explanations, including Neumann’s, is that they fall into what I like to call “the ad hominem trap.” (This fallacy occurs when, instead of addressing the merits of someone’s argument or position, we attack the person’s appearance, moral character, or some other irrelevant personal attribute, such as their mental faculties. See the cartoon below for an illustration of this fallacy.)
Simply put, following Neumann’s lead, contemporary researchers often resort to finding some psychological fault or mental defect as the underlying source of conspiracy thinking. Ironically, however, blaming people’s mental states for holding fringe beliefs is itself a textbook example of the ad hominem fallacy.
Before going any further, it is worth asking why so many eminent scholars commit this egregious and embarrassing fallacy whenever they turn their attention to conspiracy theories. Why do so many research studies fall into this facile and tempting trap, questioning the intelligence or rationality of people who believe in conspiracy theories? Perhaps it is the result of researchers’ general inability to cast aside their own personal or normative views about conspiracy theories, or, in the words of one scholar (Streicher, 2020, p. 281), “the academic treatment of [conspiracy theories] has frequently been characterized by the preconceived notion of conspiracy theories as morally ‘wrong’ ….” Or perhaps their falling into this fallacious trap is due to simple sociological factors. After all, most scholars have PhDs or other advanced academic degrees, so how can anyone blame them for “looking down” on conspiracy theorists from their Ivory Tower perches, for seeing such gullible dupes as mentally unhinged simpletons or irrational ignoramuses?
I, however, reject such ad hominem arguments out of hand. Instead of falling into the ad hominem trap, what if we were to take a more sympathetic view of conspiracy theorists and conspiracy believers? Specifically, regardless of one’s mental state, what is it about conspiracy theories that many people find so appealing? I will sketch several possible answers in my next few posts …
WORKS CITED
Goertzel, Ted. 1994. Belief in conspiracy theories. Political Psychology, 15(4), pp. 731-742.
Oliver, J. Eric, and Wood, Thomas J. 2014. Conspiracy theories and the paranoid style(s) of mass opinion. American Journal of Political Science, 58(4), pp. 952-966.
Streicher, Alois. 2020. Truth under attack, or the construction of conspiratorial discourses after the Smolensk plane crash. In “Truth” and Fiction: Conspiracy Theories in Eastern European Culture and Literature, Peter Deutschmann, et al., eds., pp. 279-299. Bielefeld, Germany and London: transcript Verlag.
van Prooijen, Jan-Willem, and Douglas, Karen M. 2017. Conspiracy theories as part of history: the role of societal crisis situations. Memory Studies, 10(3), pp. 323-333.
Alternative Title: Why do we like to believe in conspiracy theories?
Note: This is the fifth blog post in a multi-part series
When are conspiracy theories real? Last week, in my previous set of blog posts, I presented (and rebutted) Ross Douthat’s four-part test for deciding which conspiracy theories to believe in or keep an open mind about. This week, I will ask a different question: why are people so gullible as to believe in so many far-fetched conspiracy theories in the first place?
It turns out that scholars and researchers from many different fields–including law, political science, philosophy, psychology, and sociology–have been fascinated by this very question and have attempted to answer it through a wide variety of theoretical lenses. But as far as I am concerned, the best place to start is still with Franz Neumann (1900-1954), who is pictured below, and his classic essay “Anxiety and Politics,” which was published posthumously in 1957 in a book edited by the great Herbert Marcuse: The Democratic and Authoritarian State: Essays in Political and Legal Theory, pp. 270-300 (Glencoe, Illinois: The Free Press). I begin with Neumann because of his background and intellectual pedigree. Also, as a German, he must have been intimately familiar with the Stab-in-the-Back Myth and how it was exploited by the Nazis to win votes. (For your reference, here is his Wikipedia page.)
In his essay “Anxiety and Politics,” Neumann identifies three features that all shadowy conspiracy theories or alternate realities have in common: “intensification of anxiety through manipulation, identification, [and] false concreteness.” The first of these elements–anxiety–refers to the psychological aspect of alternate realities: who is most likely to fall for a conspiracy theory? The last two elements–identification and false concreteness–refer to the content or internal logic of any given conspiracy theory: the identity of the conspirators and their nefarious goals. In my next post, I will use the World War I Stab-in-the-Back Myth to illustrate each one of these three features.
Note: This is the fourth blog post in a multi-part series.
When are conspiracies real? We have been reviewing Ross Douthat’s recent New York Times column on this question, “A Better Way to Think about Conspiracies,” which contains a multi-part test for deciding which alleged conspiracies to keep an open mind about. Aside from simplicity and stochastic selectivity, Douthat also tells us to “avoid theories that seem tailored to fit a predetermined conclusion.” This key criterion appears to be inspired by Sir Karl Popper’s famous falsifiability principle. (Professor Popper, pictured below, was an influential 20th-century philosopher of science who introduced the concept of falsifiability in his 1934 book Logik der Forschung, which was further revised and translated into English in 1959 as The Logic of Scientific Discovery.)
In brief, the Popperian concept of falsifiability refers to “testability” or “refutability”: the capacity for some proposition, statement, theory, or hypothesis to be tested and proven wrong, i.e., contradicted by evidence or “falsified.” As a result, conspiracy theories that are designed to confirm our pre-determined conclusions don’t deserve our respect; theories need to be falsifiable before they can be taken seriously. At first glance, Douthat’s version of this falsifiability criterion–which was originally used by Karl Popper to separate science from pseudo-science–would appear to be an especially useful technique for distinguishing “legitimate” or plausible conspiracy theories from imagined or invented ones. But upon closer inspection, one can make a psychological or “Kuhnian” objection to Popper’s falsifiability principle in the context of conspiracy theories. Such theories are more like religious beliefs: they are often a product of people’s deep-seated intuitions and implicit assumptions about the world, and those intuitions, beliefs, and assumptions are generally impossible to test or “falsify”!
As it happens, Douthat himself concedes in his NY Times essay that “to be a devout Christian or a believing Jew or Muslim is to be a bit like a conspiracy theorist, in the sense that you believe that there is an invisible reality that secular knowledge can’t recognize ….” In other words, religious beliefs, like many conspiracy theories, are usually the product of one’s private intuitions, not rational deliberations. These intuitions often reflect one’s most deeply held beliefs and thus cannot be tested or falsified in any meaningful sense. To return to my favorite historical example, consider the German “Stab-in-the-Back” Myth from the Weimar Republic era (1919 to 1933). Not only is this conspiracy theory relatively simple and selective; it also solves a major mystery: why did Germany lose WWI? Alas, this myth is not amenable to rational analysis because it is unfalsifiable. If you really believe that the German Army was stabbed in the back by internal enemies, no amount of new evidence will be able to refute this narrative of Imperial Germany’s defeat.
Either way, whether we are in the domain of religion or the domain of politics, the main problem with Douthat’s analysis of conspiracy theories is that all such theories are, by definition, tailored to fit a pre-determined conclusion; that is what makes them conspiracy theories! As a result, conspiracy theories and alternate realities are impossible to test or otherwise falsify. Like Freudian psychoanalysts or Marxist critics of capitalism, conspiracy theorists will always update their priors in favor of their pre-existing beliefs whenever they are presented with new or additional information. Put another way, no amount of evidence will be able to convince a “true believer” that a particular conspiracy theory or alternate reality is contrived. Perhaps, then, we should take a different approach. Instead of asking which conspiracy theories we should keep an open mind about, what if we were to ask a different question? Specifically, what if we asked: why are conspiracy theories so popular in the first place? I will proceed to address this question in my next set of blog posts starting on Monday, March 22.
Note: This is the third blog post in a four-part series.
When are conspiracies real? We have been reviewing a recent New York Times column on this question, “A Better Way to Think about Conspiracies,” in which Ross Douthat formulates an elaborate four-part test for deciding which alleged conspiracies to keep an open mind about. Two of Douthat’s rules of thumb can be combined into a single global criterion: stochastic selectivity. Specifically, Douthat concludes his essay with the following two guidelines: (1) we should consider taking conspiracy theories more seriously only when “the mainstream narrative has holes,” and (2) just because one particular fringe theory or myth might be true doesn’t mean all of them are.
Alas, Douthat’s stochastic selectivity criterion is neither here nor there. Why? For starters, because even so-called “mainstream” or consensus narratives will always have gaps or holes in them. A narrative is just a story, and by definition all stories are necessarily incomplete. Furthermore, even a story with a single hole or gap might be called into question, depending on the size or nature of that gap. The German stab-in-the-back myth of the Weimar Republic era (1919 to 1933), for example, fills a gap in the story of Imperial Germany’s defeat in the First World War. After all, how could one of the best-trained and best-equipped military forces in the world, an invincible army that was said to be “undefeated on the battlefield,” lose the war? Although the mainstream view today is that Imperial Germany had lost the war by late 1918 because her army was out of reserves and was overwhelmed by the entrance of the United States into the war, there are still significant holes in this story, especially from the perspective of a demoralized post-war German public. After all, the United States’ first major offensive in WWI did not occur until the Battle of Cantigny in mid-1918, and in any case, the German public at that time had no way of knowing the true number of Germany’s reserves, as that number was classified information.
That said, to the extent that two or more imagined conspiracies are stochastically independent, point #2 appears to be logically sound, since the probability of all such conspiracies being true is the product of their individual probabilities. But (wait for it!) what happens if we are considering overlapping conspiracies, i.e., conspiracies with similar goals or with the same subset of members? Stated formally, what happens when the conspiracies or secret plots under consideration are dependent events instead of independent ones? (Two events are said to be “independent” if knowing that one event has occurred doesn’t change the probability of the other event’s occurrence.) By way of historical example, given the anti-Semitic origins of many interwar European conspiracy theories, many people in Weimar Germany who fell for “The Protocols of the Elders of Zion” hoax might have been more likely to believe in the stab-in-the-back betrayal myth as well (see the toy sketch at the end of this post). Either way, whether we classify two or more conspiracies as dependent or independent events, there is a much bigger problem with Douthat’s approach to conspiracy thinking. I shall identify this fatal flaw in my next post.
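As promised, here is a toy sketch, in Python, of why (in)dependence matters; every number in it is hypothetical, chosen purely to illustrate the arithmetic:

```python
# Hypothetical priors for two conspiracy theories, A and B
# (invented numbers, purely for illustration).
p_a = 0.10                            # P(A is true)
p_b = 0.10                            # P(B is true)

# If A and B are independent, the probability that BOTH are true
# is simply the product of their individual probabilities:
p_both_independent = p_a * p_b        # 0.01

# If they are dependent -- say, overlapping goals or members --
# we must use a conditional probability instead:
p_b_given_a = 0.50                    # hypothetical: P(B true | A true)
p_both_dependent = p_a * p_b_given_a  # 0.05, five times higher

print(f"independent: {p_both_independent:.2f}, dependent: {p_both_dependent:.2f}")
```

Under dependence, in other words, the joint probability can be many times higher than the naive product, which is precisely why point #2 stands on shakier ground once the conspiracies under consideration overlap.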
Note: This is the second blog post in a four-part series.
As I mentioned in my previous post, Ross Douthat’s recent NY Times column on conspiracy thinking, “A Better Way to Think about Conspiracies,” formulates a four-part test for deciding which alleged conspiracies to keep an open mind about, or in Douthat’s own words, “a tool kit for discriminating among different fringe ideas.” Among other things, Douthat recommends: “Prefer simple theories to baroque ones.” This first criterion can thus be restated in Occam’s Razor terms as follows: prefer simpler conspiracy theories to more complex ones. Let’s call this principle “Ross’s Razor.”
In brief, Ross’s Razor tells us that when we are presented with competing explanations of the same event (e.g., Germany’s defeat in World War I; Trump’s loss in 2020 despite winning in Florida and Ohio), we should select the simplest explanation, the explanation with the fewest assumptions. As an aside, this preference for simplicity, though attributed to William of Ockham (1287?–1347), a Franciscan theologian and scholastic philosopher (see image below), may, in fact, go as far back as Aristotle’s treatise Physics, which states, “Nature operates in the shortest way possible.” As a further aside, whether we define simplicity in terms of the number of background assumptions or in terms of how nature operates, I personally prefer to frame the simplicity/parsimony criterion in probabilistic terms, since one of the main rationales for parsimony is probabilistic: the idea that the simplest explanation is the most likely to be correct.
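To make that probabilistic rationale concrete, here is a toy sketch in Python (the 90% figure is hypothetical, not drawn from any real data): if a theory rests on several independent background assumptions, its overall prior probability shrinks as the assumptions multiply.

```python
# Toy illustration of the probabilistic case for parsimony:
# assume each independent background assumption of a theory
# holds with probability 0.9 (a hypothetical figure).
p_assumption = 0.9

for k in (1, 3, 10):
    # Probability that all k independent assumptions hold at once.
    print(f"{k} assumption(s) -> prior {p_assumption ** k:.3f}")
# 1 assumption(s) -> prior 0.900
# 3 assumption(s) -> prior 0.729
# 10 assumption(s) -> prior 0.349
```

On this view, a “baroque” theory is improbable not because complexity is inelegant but because every additional assumption is one more opportunity to be wrong.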
Either way, however, what does “simpler” mean in the domain of alternate realities or conspiracy theories? Does simplicity refer to the number of conspirators? The goal of the conspiracy? The number of steps necessary for the conspiracy to succeed? Worse yet, however we answer the foregoing questions, one of the supreme ironies of many conspiracy theories is that they pass Douthat’s parsimony test with flying colors, precisely because the truth is often ambiguous and messy! By way of illustration, consider the German “Stab-in-the-Back” Myth that I mentioned in my previous post. In many ways, this particular conspiracy theory provides a far simpler and more parsimonious explanation of Germany’s defeat in World War I than the truth does.
Yes, the German Army was low on reserves, and yes, the United States changed the course of the war after the Battle of Cantigny (28 May 1918), but how could the German public have known these things at the time? Also, even if the number of German reserves and the number of U.S. troops were publicly available information, what could be simpler than to believe that Germany was stabbed in the back by a visible group of traitors, the “November Criminals” who signed the armistice in November of 1918? Simply put (pun intended), it is this tempting yet misleading simplicity that is one of the main attractions of so many fringe conspiracy theories! That said, I will consider the remaining three factors in Douthat’s four-part test in my next few blog posts.
Happy St. Patrick’s Day! Ross Douthat, an influential columnist for The New York Times, recently wrote a fascinating essay titled “A Better Way to Think about Conspiracies.” As it happens, I have always been puzzled by one of the most famous conspiracy theories of all time, the “stab-in-the-back” myth that was popular in Germany during the ill-fated Weimar Republic era (1919 to 1933). How did the Imperial German Army–an army that was said to be “undefeated on the battlefield”–end up losing the First World War (WWI)? According to one popular conspiracy theory at the time, Germany lost WWI because she was “stabbed in the back” by a wide variety of left-wing politicians and intellectuals, who were collectively referred to as “the November Criminals” for agreeing to Germany’s surrender on 11 November 1918. In reality, however, Germany had lost the war because her army lacked sufficient reserves and because the USA had entered the war in full force in mid-1918. So, how did this dangerous myth persist for so long and win over so many hearts and minds?
Now, fast forward to the JFK assassination or, even more recently, to 2020. Did Lee Harvey Oswald act alone? Were the 2020 elections stolen from President Trump? If the JFK plot or Trump’s election fraud claims are just crazy conspiracy theories, why do so many people still believe in them? In short, where do we draw the line between plausible conspiracy theories and far-fetched ones? Here is why Douthat’s conspiracy theory essay is worth reading: he formulates a four-part test for deciding which alleged conspiracies to keep an open mind about, “a tool kit for discriminating among different fringe ideas.” In brief, Douthat’s conspiracy theory test consists of the following four criteria:
“Prefer simple theories to baroque ones.”
“Avoid theories that seem tailored to fit a predetermined conclusion.”
“Take fringe theories more seriously when the mainstream narrative has holes.”
“Just because you start to believe in one fringe theory, you don’t have to believe them all.”
Alas, Douthat’s four-part test is woefully inadequate for several reasons, which I shall discuss in detail in my next few posts. For now, it suffices to say that both the German “stab-in-the-back” myth as well as Trump’s stolen election story–indeed, most of the conspiracy theories mentioned in the chart below–would most likely pass Douthat’s four-part test with flying colors.
I shall close this series of Bayesian blog posts with a confession. Ex ante, before I began building my Bayesian model of the litigation process, I had taken a dim view of the legal game. Given the complexity and ambiguity of substantive as well as procedural rules, the indeterminate nature of most legal standards, and the high levels of strategic behavior by both litigants and judges, I expected my Bayesian model to confirm my negative view of the legal process. Ironically, however, the results of my Bayesian model of the litigation game were very surprising. In essence, my model shows that, regardless of the operative rules of procedure and substantive legal doctrine, a guilty verdict is nevertheless a highly reliable indicator of a defendant’s actual guilt. Specifically, my model demonstrates that when a defendant is found guilty of committing a wrongful act (civil or criminal), there is a high posterior probability that the defendant actually committed such a wrongful act, even when the underlying process of adjudication is random and even when the moving parties are risk-loving or less-than-virtuous!
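To make this claim concrete in Bayes’ rule terms (using the sensitivity/specificity framing of the posts below; the notation here is a simplified sketch, not the model’s full machinery): if s is the sensitivity of the litigation game, t its specificity, and p the moving party’s prior probability that the defendant committed the wrongful act, then the posterior probability of guilt given a guilty verdict is

$$P(\text{guilty} \mid \text{guilty verdict}) = \frac{s \cdot p}{s \cdot p + (1 - t)(1 - p)}$$

The higher the prior p (that is, the more carefully the moving party screens cases before filing suit), the higher this posterior, even when s and t are unimpressive.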
Note: Because of two other research projects I am currently working on–both of which must be completed by April 16–as well as my regular spring semester teaching duties, I will be suspending my series of Bayesian blog posts for the time being. (After April 16, I will resume this series by showing how Bayesian methods can solve the blue bus problem and other evidentiary paradoxes.) In the meantime, I will switch gears, so to speak, and blog about my two ongoing research projects in the days ahead …
Note: This is my fourteenth blog post in a month-long series on the basics of Bayesian probability and its application to law.
Happy Monday! Let’s now suppose that litigation is still a crapshoot but that plaintiffs and prosecutors are risk-loving or ‘less-than-virtuous’; that is, let’s assume that the moving parties are more willing to gamble than their virtuous colleagues. Specifically, I will assume that the litigation game is 50% sensitive and 50% specific and that plaintiffs and prosecutors are willing to play the litigation game even when they are only 60% certain that the named defendant has committed a wrongful act. Although these assumptions may not be plausible, this permutation of my model may nevertheless provide an instructive counterfactual or hypothetical illustration of my Bayesian approach to litigation.
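Before proceeding, here is a minimal sketch, in Python, of what these assumptions imply under Bayes’ rule (my own illustrative calculation, not the model’s actual code):

```python
def posterior_guilt(prior, sensitivity, specificity):
    """P(defendant is guilty | guilty verdict), via Bayes' rule.

    sensitivity = P(guilty verdict | guilty defendant)
    specificity = P(not-guilty verdict | innocent defendant)
    """
    true_positive = sensitivity * prior
    false_positive = (1 - specificity) * (1 - prior)
    return true_positive / (true_positive + false_positive)

# Assumptions stated above: a coin-toss litigation game (50% sensitive,
# 50% specific) and risk-loving moving parties who file suit at a mere
# 60% prior probability of wrongdoing.
print(f"{posterior_guilt(prior=0.60, sensitivity=0.50, specificity=0.50):.2f}")  # 0.60
```

Note what the arithmetic shows under these assumptions: a purely random adjudication process adds no information of its own, so the posterior simply collapses to the 60% prior; whatever reliability the verdict retains is driven entirely by the moving party’s screening of cases before filing.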
Note: This is my thirteenth blog post in a month-long series on the basics of Bayesian probability and its application to law.
Suppose that litigation is a crapshoot (to quote my mentor and favorite law school professor John Langbein); that is, what if litigation outcomes are only 50% sensitive and 50% specific? In other words, what if litigation games are completely random? Under this scenario, the process of adjudication is no better than a coin toss. Although this assumption may appear fanciful, as I explained in a previous post (see “Bayes 10“), the randomness of adjudication might be a function of the complexity or ambiguity of the applicable legal doctrines (e.g., assumption of risk) or procedural rules (e.g., res judicata). Simply put (pun intended), the more complex or ambiguous the applicable law is, the more random or arbitrary the outcome of litigation will be.