Type I vs. Type II errors

We’re almost done reading Deborah G. Mayo’s statistical magnum opus Error and the Growth of Experimental Knowledge (University of Chicago Press, 1996), a must-read for anyone interested in the philosophy of statistics. Her defense of conventional or “Neyman-Pearson” statistical methods in Chapter 11 is valiant, but at one point Professor Mayo writes (p. 403, emphasis in original):

Consider two smoke detectors. The first is not very sensitive, rarely going off unless the house is fully ablaze. The second is very sensitive: merely burning toast nearly always triggers it. That the first (less sensitive) alarm goes off is a stronger indication of the presence of a fire than the second alarm’s going off.

It turns out that this particular example is an important one. Why? Because it illustrates Mayo’s larger point about how well-designed or “severe” statistical tests are supposed to minimize Type I errors, i.e. the problem of rejecting a hypothesis even when that hypothesis is true. In two words, our friendly reply to the fire alarm example is “not necessarily”: that the first, less sensitive alarm goes off is not necessarily a stronger indication of a fire than the second, more sensitive alarm going off.
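One way to make the “not necessarily” concrete is the likelihood ratio P(alarm sounds | fire) / P(alarm sounds | no fire), which measures how strongly an alarm’s sounding indicates a real fire. The sketch below uses purely hypothetical detection and false-alarm rates (they are our assumptions, not Mayo’s): whether the less sensitive alarm is the stronger indicator depends on its false-alarm rate, not on its sensitivity alone.

```python
# Hypothetical error rates for two smoke detectors (illustrative only).
# The evidential strength of an alarm sounding is its likelihood ratio:
#   LR = P(sounds | fire) / P(sounds | no fire)

def likelihood_ratio(sensitivity, false_alarm_rate):
    """Likelihood ratio of the alarm sounding, fire vs. no fire."""
    return sensitivity / false_alarm_rate

# Scenario 1: insensitive alarm A almost never false-alarms, so Mayo's
# intuition holds -- A's sounding is the stronger evidence of fire.
lr_A = likelihood_ratio(sensitivity=0.50, false_alarm_rate=0.01)   # about 50
lr_B = likelihood_ratio(sensitivity=0.99, false_alarm_rate=0.50)   # about 2

# Scenario 2: A is insensitive *and* prone to false alarms (say humidity
# sets it off), while B is very sensitive but rarely false-alarms.
# Now the *more* sensitive alarm is the stronger indicator of fire.
lr_A2 = likelihood_ratio(sensitivity=0.50, false_alarm_rate=0.30)  # about 1.7
lr_B2 = likelihood_ratio(sensitivity=0.99, false_alarm_rate=0.10)  # about 9.9

print(lr_A, lr_B)    # scenario 1: A's alarm is stronger evidence
print(lr_A2, lr_B2)  # scenario 2: B's alarm is stronger evidence
```

In scenario 1 Mayo’s conclusion goes through; in scenario 2 it is reversed, even though A is still the less sensitive alarm. Sensitivity alone does not settle the evidential question.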

Let’s call the first alarm A and the second alarm B, and take the hypothesis being tested to be H = no fire. Since A is not very sensitive, there is a positive probability that it will stay silent even when there is a real fire. In statistical terms, A will end up making a lot of Type II errors: it will fail to reject the hypothesis H = no fire even when H is false, i.e. even when there really is a fire. B, by contrast, will often sound even when there isn’t a real fire, i.e. B will make a lot of Type I errors, or false alarms. In colloquial terms, A is too conservative (sounding only if the house is really ablaze), while B is too jumpy (sounding at the slightest hint of smoke). Now, if safety is your paramount goal, which alarm would you rather have, A or B? That is, in the context of fire prevention, which type of error is the lesser evil?
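The trade-off can be sketched with a quick Monte Carlo simulation. Under the usual convention where the null hypothesis is “no fire”, a Type I error is a false alarm and a Type II error is a missed fire; the sensitivities, false-alarm rates, and fire probability below are hypothetical numbers chosen for illustration.

```python
import random

random.seed(0)  # make the sketch reproducible

# Hypothetical characteristics (not from Mayo's text):
# P(sounds | fire) and P(sounds | no fire) for each alarm.
SENS_A, FPR_A = 0.50, 0.01   # insensitive A: misses half of real fires
SENS_B, FPR_B = 0.99, 0.50   # sensitive B: burning toast sets it off

def error_rates(sens, fpr, p_fire=0.05, n=100_000):
    """Estimate the Type I rate (false alarms) and Type II rate (missed fires)."""
    type_i = type_ii = fires = no_fires = 0
    for _ in range(n):
        fire = random.random() < p_fire
        sounds = random.random() < (sens if fire else fpr)
        if fire:
            fires += 1
            if not sounds:
                type_ii += 1   # fire, but the alarm stays silent
        else:
            no_fires += 1
            if sounds:
                type_i += 1    # no fire, but the alarm sounds
    return type_i / no_fires, type_ii / fires

fpr_a, miss_a = error_rates(SENS_A, FPR_A)
fpr_b, miss_b = error_rates(SENS_B, FPR_B)
print(f"A: Type I rate {fpr_a:.2f}, Type II rate {miss_a:.2f}")  # low, high
print(f"B: Type I rate {fpr_b:.2f}, Type II rate {miss_b:.2f}")  # high, low
```

Under these assumed rates, A keeps Type I errors near zero at the cost of missing roughly half of all real fires, while B catches nearly every fire at the cost of constant false alarms: exactly the dilemma posed above.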

By the way, we will present and discuss other standard statistical problems, like the naval shell example and “the lady tasting tea” case, from a Bayesian perspective in future blog posts.

About F. E. Guerra-Pujol

When I’m not blogging, I am a business law professor at the University of Central Florida.
This entry was posted in Bayesian Reasoning, Probability.

