Since summer, we've been slowly and carefully reading (off and on) Dr Deborah Mayo's excellent tome "Error and the Growth of Experimental Knowledge." It's a tough read; we're only up to page 192, less than halfway through, though we are hoping to pick up our pace over Christmas. Nevertheless, we wanted to share our initial impressions of the main idea of Mayo's book: the intriguing and important notion of a "severe test" in the realm of hypothesis-testing in science.

In brief, Mayo's most original contribution to the statistics literature, as we understand it, is the idea that a good test of any given scientific hypothesis must be a "severe" one, i.e., a test designed in such a way that only a true hypothesis could pass. (Disclaimer: this is our simplification of Mayo's notion of severity, but for the purposes of this brief blog post, it suffices for now.)

Along the way, however, Mayo insists on taking several jabs at "the Bayesian way" (her term, not ours). Yet we detect a delicious irony in her work: there is no real inconsistency between her notion of severity and the Bayesian approach. If anything, the two approaches to hypothesis-testing are complementary, not in conflict. Why? Because the more "severe" a test is (in Mayo's sense of severity), the higher the posterior probability we can assign to the truth of the hypothesis being tested.

This (tentative) conclusion sums up for now our initial impressions of Mayo's book on error statistics. As good Bayesians, we will keep an open mind and update our own philosophical priors as we continue reading more of her work …
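To make our tentative conclusion concrete, here is a minimal sketch (our own illustration, not anything from Mayo's book) of the Bayesian arithmetic behind it. If we model the severity of a test as a low probability that a *false* hypothesis nevertheless passes it, then by Bayes' theorem, passing a more severe test pushes the posterior probability of the hypothesis higher. The specific numbers below are made up for illustration only:

```python
# Sketch: why passing a more "severe" test raises the posterior P(H | pass).
# Here "severity" is modeled (our simplification) as a small P(pass | not H):
# a severe test is one a false hypothesis is unlikely to survive.

def posterior_given_pass(prior, p_pass_if_true, p_pass_if_false):
    """Bayes' theorem: P(H | pass) =
    P(pass | H) P(H) / [P(pass | H) P(H) + P(pass | not H) P(not H)]."""
    numerator = p_pass_if_true * prior
    denominator = numerator + p_pass_if_false * (1 - prior)
    return numerator / denominator

prior = 0.5            # agnostic prior on the hypothesis H (illustrative)
p_pass_if_true = 0.95  # a true H almost always passes the test

# Increasingly severe tests: a false H passes less and less often.
for p_pass_if_false in (0.50, 0.10, 0.01):
    post = posterior_given_pass(prior, p_pass_if_true, p_pass_if_false)
    print(f"P(pass | not H) = {p_pass_if_false:.2f}  ->  P(H | pass) = {post:.3f}")
```

Running this, the posterior climbs from roughly 0.66 (a lax test) to about 0.99 (a severe one), which is the complementarity we have in mind: Mayo's severity and Bayesian updating pull in the same direction.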