Michelle N. Meyer and Christopher Chabris, professors at Union Graduate College in Schenectady, New York, recently published a thought-provoking essay in the Sunday Times on the ethics of nonconsensual “A/B testing” like the Facebook mood contagion experiment, the inconclusive results of which were published in a scientific journal last year. In brief, Professors Meyer and Chabris begin their short essay by posing a rhetorical question: “Can it ever be ethical for companies or governments to experiment on their employees, customers or citizens without their consent?” Most people (ourselves included) would answer “no”–without consent, such experiments cannot be ethical–but the authors challenge this reasoning, arguing persuasively that it is based on a “moral illusion.” Professors Meyer and Chabris argue:
Companies — and other powerful actors, including lawmakers, educators and doctors — “experiment” on us without our consent every time they implement a new policy, practice or product without knowing its consequences. When Facebook started, it created a radical new way for people to share emotionally laden information, with unknown effects on their moods. And when OkCupid started, it advised users to go on dates based on an algorithm without knowing whether it worked. Why does one “experiment” (i.e., introducing a new product) fail to raise ethical concerns, whereas a true scientific experiment (i.e., introducing a variation of the product to determine the comparative safety or efficacy of the original) sets off ethical alarms?
We are especially intrigued by the second question posed in Meyer & Chabris’s essay (the final question in the passage quoted above). Any thoughts?

Addendum: For what it’s worth, here is our take on this issue: we think the problem with A/B testing (versus just “imposing” a single option) is that one still has a choice when a new product or website is introduced–e.g. I don’t have to sign up for Facebook if I don’t want to–whereas I have no choice if I am being tested without my consent, as in the Facebook mood contagion experiment. This issue ultimately comes down to one’s view of ethics: if you are a “consequentialist,” you are more likely to approve of nonconsensual A/B testing like the Facebook experiment; if, however, you take a “duty-” or “principles-based” view of ethics, you are less likely to approve. In any case, to the extent this is an ethical issue, don’t we have to concede there are no “right answers” up front, instead of pretending that there are?