On Friday 2/10, I invited the students in my business law survey course to predict which team would win the Super Bowl (the Kansas City Chiefs or the Philadelphia Eagles) and by how many points. Then, on Sunday morning 2/12, I tallied the results. In total, 31 students made predictions about the outcome of the championship game: 15 students predicted the Chiefs would win, by an average of 7 points, while 16 students predicted the Eagles would win, by an average of 7.1 points. These data are pictured in the table below:

For my part, I originally conducted this survey because I wanted to test the so-called “Wisdom of Crowds” hypothesis (i.e., I was going to base my own prediction on the aggregate of student predictions), but how should I aggregate these data? By a narrow margin (one vote out of the 31 cast), more students think the Eagles will win, and both groups are equally confident that, on average, their team will win by about a touchdown.
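One simple way to aggregate, sketched below: treat each group's average as a signed point spread and take a vote-weighted average. (The individual students' margins aren't given, so this sketch uses only the group counts and averages from the post; the sign convention is my own.)

```python
# Vote counts and average predicted margins from the class survey
chiefs_votes, chiefs_avg = 15, 7.0
eagles_votes, eagles_avg = 16, 7.1

# Signed margin: positive = Eagles win, negative = Chiefs win.
# Vote-weighted average of the two groups' predictions:
crowd_margin = (eagles_votes * eagles_avg - chiefs_votes * chiefs_avg) / (
    chiefs_votes + eagles_votes
)
print(round(crowd_margin, 2))  # about 0.28 points toward the Eagles
```

On this aggregation, the crowd's net forecast is nearly a dead heat, leaning Eagles by well under a point, which matches the "close game" reading below.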
Is it possible that if you extended the survey to a larger number of students and faculty members, the results would be more accurate?
Keeping the central limit theorem in mind, I would surmise that a campus-wide survey would provide a more accurate prediction.
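The intuition in that comment can be sketched with a small simulation: if individual guesses are independent and noisy around the true margin, the crowd average's error shrinks roughly like 1/√n as the crowd grows. The "true" margin and noise level below are made-up numbers for illustration only.

```python
import random

random.seed(1)
true_margin = 3.0   # hypothetical true point spread (illustrative)
noise = 7.0         # hypothetical spread of individual guesses (illustrative)

def crowd_error(n, trials=2000):
    """Average absolute error of the mean guess of a crowd of n guessers."""
    total = 0.0
    for _ in range(trials):
        mean_guess = sum(random.gauss(true_margin, noise) for _ in range(n)) / n
        total += abs(mean_guess - true_margin)
    return total / trials

print(crowd_error(31))    # a class-sized crowd
print(crowd_error(500))   # a campus-sized crowd: noticeably smaller error
```

A campus-sized crowd of ~500 would cut the expected error of the average roughly fourfold relative to a class of 31, assuming the extra guessers are independent and not systematically biased.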
Love this idea!!!!
Postscript: Four of my students correctly predicted that the Kansas City Chiefs would win by three points. Overall, my “wisdom of crowds” test predicted a close game, since a total of 15 students picked the Chiefs to win by an average of 7 points, while 16 students picked the Eagles to win by an average of 7.1 points.