This is the next-to-last post of a multi-part series.
Jack Balkin concludes his social media regulation paper by taking a closer look at “intermediary liability” — i.e. the idea that social media platforms should be legally liable for unlawful content posted by end-users.
In brief, Professor Balkin’s position is that the government should start operating more like a social media mob boss: it should offer complete intermediary immunity to social media companies in exchange for substantial concessions from them. (“Nice social media platform you got here; it would be a pity if anything happened to it.”) On this point, Prof Balkin states (p. 93) that the law “should use intermediary immunity as a lever to get social media companies to accept fiduciary obligations toward their users” and to get these firms “to invest in increasing the number of [content] moderators they employ as well as providing more due process for end users.” Alas, Prof Balkin does not bother to tell us what the “optimal” number of content moderators is or how much due process users should be entitled to when their posts are taken down. (Can you blame him? After all, Balkin is a “serious academic”, not a fussy bureaucrat, so he simply can’t be bothered with the niggling details of his call for social media regulation.)
Additionally, Balkin toys with the more promising idea of “distributor liability” (p. 94): extending the existing costly and cumbersome “notice and take down” system for online copyright infringement, which is pictured below, to all social media content across the board. (The U.S. Congress created the current decentralized system of online copyright enforcement in 1998 when it enacted the Digital Millennium Copyright Act, or “DMCA”. For more details, see here, for example.) In summary, under this decentralized system of distributor liability, a company is generally immune from legal liability unless it receives a take-down notice that specific content on its platform is unlawful; once it receives such a notice, however, the company must remove the flagged content within a particular period of time, or else it would itself be potentially liable for that content.
For my part, I wonder whether extending this “notice and take down” system to social media platforms would really be a panacea for all the supposed social media ills that Balkin has been complaining about in his paper, but I will go ahead and give Prof Balkin the benefit of the doubt on this one. Why? Because if we are going to regulate the Internet, the “notice and take down” system is “the lesser evil” (the least bad choice among a bevy of bad regulatory alternatives), one with three advantages. First, the “notice and take down” system would represent a modest, incremental change to the existing laissez faire landscape. Second, there is already a well-developed body of “notice and take down” case law under the DMCA. And third, Balkin’s modest proposal offers a decentralized alternative to centralized Internet regulation. (It does not require a government agency to figure out what the optimal number of content moderators is, for example.)
Note: I will conclude my review of Balkin’s social media regulation paper in my next post.