Recently some Twitter users have asserted that they are being defamed by Twitter "block bots."
It's easy to block people manually on Twitter, unless you want to block a whole lot of them. Various cultural and political conflicts online have led some users to develop blockbots, which are lists to which you can subscribe (to oversimplify the process) to mass-block everyone on the list. Some lists are created by methodology (like automatically blocking people who follow certain Twitter users affiliated with "GamerGate") and some, like BlockBot, are curated by individuals who choose who goes on the list and why.
Some folks don't like how they are characterized by these lists. BlockBot targets complain of being characterized by mostly anonymous and unaccountable strangers as "racists" or "transphobes" or "rape apologists." Some argue that merely being on a blocklist means they are being characterized as a harasser or affiliated with harassers.
Dissenters may have a point that the lists are unfair or unprincipled. I wouldn't use one myself.
But they almost certainly aren't libel.
Fact, Not Opinion, Is Potentially Defamatory
You've probably heard this expressed multiple different ways: opinion can't be defamatory. Satire isn't defamation. Insults aren't defamation. Jokes are not defamation.
All of these rules derive from the same core idea: only statements that can reasonably be interpreted as asserting false statements of fact can be defamation.
So, for instance, a parody advertisement depicting Jerry Falwell as having drunken sex with his mother in an outhouse was protected speech because it could not "reasonably be understood as describing actual facts about [Falwell] or actual events in which [he] participated." "Rhetorical hyperbole" and "vigorous epithets" are not defamatory because they can't be understood as asserting specific facts.
Moreover, the question is not whether some hypothetical ignoramus would construe a statement to be factual. The question is whether a reader familiar with the context and the speaker and target would conclude that the statement is factual. So when WorldNetDaily sued Esquire over a parody about a silly birther book being withdrawn and pulped, the D.C. Circuit pointed out that it was analyzing the statement from the point of view of its target audience:
To determine whether Esquire’s statements could reasonably be understood as stating or implying actual facts about Farah and Corsi and, if so, whether those statements were verifiable and were reasonably capable of defamatory meaning, the “publication must be taken as a whole, and in the sense in which it would be understood by the readers to whom it was addressed.” Afro-American Publ’g Co. v. Jaffe, 366 F.2d 649, 655 (D.C. Cir. 1966) (en banc). “[T]he First Amendment demands” that the court assess the disputed statements “in their proper context.” Weyrich, 235 F.3d at 625. Context is critical because “it is in part the settings of the speech in question that makes their . . . nature apparent, and which helps determine the way in which the intended audience will receive them.” Moldea II, 22 F.3d at 314. “Context” includes not only the immediate context of the disputed statements, but also the type of publication, the genre of writing, and the publication’s history of similar works. See Letter Carriers, 418 U.S. at 284–86; Moldea II, 22 F.3d at 314–15.
So: if your complaint is "someone could stumble upon the BlockBot, see that I am described as a 'sexist cis-normative pro-frakking shitlord,' and draw conclusions about me," then you are applying the wrong test: the right test is whether someone who knows about the BlockBot and its context would understand the BlockBot to be making specific statements of fact, as opposed to elbow-throwing ideological eructations.1
Furthermore, the place of publication is important to the analysis. A growing and increasingly uniform body of law suggests that statements on the internet are less likely to be taken as literally true than statements elsewhere. This shouldn't surprise anyone who has been on the internet.
Next, the type of epithet figures into the fact vs. opinion analysis. Some sorts of accusations — such as that of racism — are so inherently subjective that they are more likely to be interpreted as opinion than fact.
Finally, the language surrounding the blockbots' labels makes them less likely to be interpreted as statements of fact. When a writer uses vivid and figurative language, insults, or other less-than-professional terms, courts lean towards classifying the statements as opinions rather than facts. The very purple prose complained of suggests that the labels are opinions, not statements of fact.
If you are thinking "well, who knows how a jury would decide," note that the question of whether a challenged statement is fact or opinion is a question of law for the court, not a question of fact for the jury.
Taken together, these doctrines make it extremely unlikely that it is defamatory to be put on a blocklist or characterized offensively on such a list. Such characterizations would be seen by their intended audience — and thereby by the courts — as partisan political rhetoric not premised on any specific facts and not susceptible to any specific factual analysis. Arguments to the contrary appear to be either based on the law of foreign jurisdictions or not based on specific legal principles.
Caveat Number One: I speak here of the rule of law, not the rule of feels. I understand many people feel as though BlockBot designations are defamatory. So they have that going for them, which is nice.
Caveat Number Two: I speak here of the laws of the United States. I do not opine about whether Blockbot designations may be defamatory under the laws of the United Kingdom. Although the U.K. has nominally reformed its libel laws recently, it remains a place where 15-year-olds are arrested for being dicks on Twitter, where adults are arrested for peaceful symbolic protests, and where old atheists are threatened with arrest for mild trash-talking of organized religion. The U.K. has less a system of jurisprudence designed to protect free expression and more an elaborate legal platform for mood swings. Fortunately any U.K. speech-related judgment will likely be unenforceable in the United States under the SPEECH Act.2
1. Another good example is Secrist v. Harkin. There a staffer sued for defamation based on statements in Tom Harkin's political press release about him. The court explained:
The literary context of the press release also supports a finding that the challenged statements are Candidate Harkin's opinion, rather than "fact." The "literary context" factor includes the type of forum or "social context" in which the statement was made, the category of publication, its style of writing, and the intended audience. Janklow, 788 F.2d at 1302-03. The forum or social context in this case is, as we have said, a political campaign, in which one would expect to hear a great deal of opinion concerning the performance of the incumbent Senator. The category of publication is a press release from the Senator's challenger. Suffice it to say that a campaign press release is not a research monograph; such a release is at least as likely to signal political opinion as a newspaper editorial or political cartoon.
2. Perhaps, like me, you find it odd that people who say they oppose thin-skinnedness and support free speech are resorting to government help from a censorious system to protect themselves from mean words.