Match fixing has occurred in soccer leagues around the world, so why should the NFL (or college football, for that matter) be any different? In fact, according to this devious report by Brian “The-Fix-Is-In” Tuohy, it is much easier to “fix” or tamper with a North American football game than you might think. From a potential fixer’s point of view, the main problem is finding somewhere to place one’s bet, or in the words of Mr Tuohy:
I’ve been told by more than one gambling insider that you’d have to be a complete idiot to bet on a fixed game in Las Vegas. Thanks to state regulations and the sports books’ corporate atmosphere, all bets are monitored. Wager over $10,000 and a Vegas sports book will kindly ask for your ID and social security number—that is if they’ll even accept your bet (sports books do have limits, especially if you’re unknown to them and/or wagering on an unpopular game). And to make any fix worth its while, the betting would certainly need to exceed five figures.
By the way, what’s to prevent umpires and referees from secretly betting (through third parties) on the games they are calling, or is such corruption less likely the more umpires or referees are assigned to call the same match? Also, what if players on both teams of the same match are engaged in match fixing? How frequent is such two-sided corruption or “double fixing” in sports? Do such double fixes cancel each other out in the aggregate?
That is the premise of our latest paper “Visualizing Probabilistic Proof.” (We’ve blogged about this paper before, but the latest draft of our paper is available on SSRN here and will be published in an upcoming volume of The Washington University Jurisprudence Review. By the way, our paper is full of diagrams and is “only” 37 pages long, which is short by law review standards!) In brief, in our probabilistic proof paper, we try to solve the Blue Bus Problem, a hypothetical puzzle often presented by law professors when they teach Evidence. Unlike most treatments of the Blue Bus Problem, however, our solution uses Bayesian methods. In a future post, we will talk about a related problem known as the Gatecrasher Paradox.
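To give a flavor of the Bayesian approach, here is a minimal sketch of the Blue Bus Problem using the classic classroom numbers (an 80% market share for the Blue Bus Company and a hypothetical witness who is right 80% of the time); these figures and this toy function are our illustration, not the setup used in the paper itself:

```python
# Hedged illustration of Bayesian updating in the Blue Bus Problem.
# All numbers are the standard classroom figures, chosen for illustration only.

def posterior_blue(prior_blue, witness_accuracy):
    """P(bus was blue | witness says 'blue'), via Bayes' rule."""
    p_say_blue_if_blue = witness_accuracy        # true-positive rate
    p_say_blue_if_not = 1 - witness_accuracy     # false-positive rate
    numerator = p_say_blue_if_blue * prior_blue
    denominator = numerator + p_say_blue_if_not * (1 - prior_blue)
    return numerator / denominator

# Base rate alone: 80% of the buses in town belong to the Blue Bus Co.
# A witness who is right 80% of the time testifies the bus was blue.
print(posterior_blue(0.8, 0.8))  # 0.64 / 0.68, roughly 0.941
```

The point of the sketch is that the posterior (about 94%) differs from both the bare base rate (80%) and the witness’s reliability (80%), which is exactly the kind of updating that naive treatments of the problem miss.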
Hat tip to LargeCoinPurse via Reddit.
In their paper “Motive attribution asymmetry for love vs. hate drives intractable conflict,” Adam Waytz, Liane Young and Jeremy Ginges appear to extend the logic of the Coase Theorem into the domain of politics. Specifically, Waytz, Young, and Ginges study the problem of “motive attribution asymmetry,” i.e., the belief that people who disagree with you have motives that are bad. Among other things, they found that “offering Democrats and Republicans financial incentives for accuracy in evaluating the opposing party can mitigate this bias …” This is an exciting and promising research agenda, but if we can pay people to be less biased, couldn’t we also pay people to be more biased? Consider, in addition, this hypothetical scenario, courtesy of Sandeep Baliga (emphasis by us):
Via our friends at Digg, we saw that someone posted a variant of this question on Quora: “What are the chances of survival of individual chess pieces on average?” Oliver Brennan, a chess aficionado and computer programmer, posted this answer:
Pocket Calculator, Meet the “PhotoMath” App: Complements or Substitutes?
Deborah Mayo recently reblogged and commented on Nathan Schachtman’s blog post titled “Courts Can and Must Acknowledge Multiple Comparisons in Statistical Analyses.” (Mr Schachtman is not only a lawyer; he is also a lecturer at Columbia Law School and an expert on scientific evidence.) Moreover, because Mr Schachtman’s blog post and Dr Mayo’s comments on Schachtman’s post touch on an area we care about — the role of probability theory in law — we are re-reblogging both items below and shall comment on them in a future post.
Update (22 October 2014): We read Schachtman’s post on the multiple testing problem in law (see below), and we also read Dr Mayo’s comments on his post (ditto), and we were left scratching our heads. We’re not sure where they disagree. After all, isn’t it true that “data trolling” is bad science? And, if so, shouldn’t trial judges retain the discretion to exclude expert testimony based on multiple comparisons?
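As a back-of-the-envelope illustration of why multiple comparisons matter (our own sketch, not drawn from either post): if a study runs many independent tests at the conventional 5% significance level, the chance of at least one spurious “significant” finding grows rapidly with the number of tests.

```python
# Family-wise error rate under k independent tests at level alpha:
# the probability of at least one false positive is 1 - (1 - alpha)**k.

def familywise_error_rate(alpha, k):
    return 1 - (1 - alpha) ** k

alpha = 0.05
for k in (1, 10, 20, 100):
    print(k, round(familywise_error_rate(alpha, k), 3))
# With 20 tests the rate is already about 0.64; with 100 it is near 0.99.

# The simple Bonferroni fix: test each comparison at alpha / k instead.
def bonferroni_alpha(alpha, k):
    return alpha / k
```

So an expert who reports the one “significant” p-value out of dozens of comparisons, without any such correction, is reporting something close to a coin flip, which is presumably the methodological failure the Zoloft MDL court had in mind.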
Originally posted on Error Statistics Philosophy:
The following is from Nathan Schachtman’s legal blog, with various comments and added emphases (by me). He will try to reply to comments/queries.
Nathan Schachtman, Esq., PC * October 14th, 2014
In excluding the proffered testimony of Dr. Anick Bérard, a Canadian perinatal epidemiologist in the Université de Montréal, the Zoloft MDL trial court discussed several methodological shortcomings and failures, including Bérard’s reliance upon claims of statistical significance from studies that conducted dozens and hundreds of multiple comparisons.[i] The Zoloft MDL court was not the first court to recognize the problem of over-interpreting the putative statistical significance of results that were one among many statistical tests in a single study. The court was, however, among a fairly small group of judges who have shown the needed statistical acumen in looking beyond the reported p-value or…