Thus far, we have identified several common forms of “data fraud,” including cherry picking, data dredging, and the false cause fallacy. Yet all of these forms of data fraud may be mere symptoms of a larger problem: publication bias. Just as TV and print media compete to report on the most salient or salacious events that will grab their viewers’ or readers’ attention (“If it bleeds, it leads”), scientific journals compete to publish studies with the most exciting, novel, or “sexy” findings. The problem with this fetish for novelty is that it generates a scholarly market failure: the overproduction of sexy studies. In the words of the good folks at Geckoboard (a UK-based consulting firm), “For every study that shows statistically significant results, there may have been many similar tests that were inconclusive…. Not knowing how many ‘boring’ studies were filed away impacts our ability to judge the validity of the results we read about. When a company claims a certain activity had a major positive impact on growth, other companies may have tried the same thing without success, so they don’t talk about it.” That is why both the news media and the most prestigious scholarly journals often end up presenting such a distorted picture of reality.
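The selection effect Geckoboard describes is easy to see in a small simulation. The sketch below (the sample sizes, study counts, and significance threshold are illustrative assumptions, not figures from the text) runs many studies of an effect that is truly zero, “publishes” only the statistically significant ones, and compares the published results to the full file drawer:

```python
import random
import statistics

random.seed(42)

def run_study(n=30, true_effect=0.0):
    """Simulate one study: draw n observations around a true effect and
    return the observed mean effect plus an approximate two-sided p-value."""
    data = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = statistics.mean(data)
    se = statistics.stdev(data) / n ** 0.5
    z = mean / se
    # Normal approximation to the two-sided p-value.
    p = 2 * (1 - statistics.NormalDist().cdf(abs(z)))
    return mean, p

all_effects, published_effects = [], []
for _ in range(5000):
    effect, p = run_study()          # the true effect is zero in every study
    all_effects.append(effect)
    if p < 0.05:                     # only "exciting" results get published
        published_effects.append(effect)

print(f"studies run:       {len(all_effects)}")
print(f"studies published: {len(published_effects)}")
print(f"mean |effect|, all studies:       "
      f"{statistics.mean(abs(e) for e in all_effects):.3f}")
print(f"mean |effect|, published studies: "
      f"{statistics.mean(abs(e) for e in published_effects):.3f}")
```

Even though every study measures a nonexistent effect, roughly 5% of them clear the significance bar by chance, and those published results show effect sizes several times larger than the average across all studies. A reader who sees only the journals never learns how many “boring” studies were filed away.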

Credit: Franco et al.