“There’s one way to love you, but a thousand ways to kill you. I’m not gonna rest until your body is a mess, soaked in blood and dying from all the cuts.” 1 Anthony Elonis claimed that this statement was not a threat, but an original lyric, which he penned and posted to Facebook as a form of self-help therapy after a painful divorce.2 His ex-wife disagreed, and so did federal prosecutors. Elonis was charged and convicted of the transmission in interstate commerce of “any threat . . . to injure the person of another.”3 The jury was instructed to convict if a reasonable person would foresee that Elonis’ statements would be interpreted as a threat.4
On appeal, Elonis argued that he could not be convicted without a showing that he meant to make a threat, rather than art.5 The Third Circuit disagreed and upheld his conviction,6 affirming that the District Court’s objective intent standard was sufficient to support a conviction.7 The Supreme Court reversed, holding that the objective intent standard amounted to criminal negligence, and that § 875(c) requires something more.
Writing for the majority, Chief Justice Roberts reasoned that while the text of § 875(c) did not specify a mens rea requirement, the bedrock common law principle that “wrongdoing must be conscious to be criminal” required the court to impute a standard higher than negligence.8 But as Justice Alito complained in his concurrence, the majority’s opinion stops there, remanding to the Court of Appeals without providing guidance about what standard to apply.9 On remand, the Third Circuit reinstated Elonis’ conviction for harmless error, holding that the trial record would have supported a conviction under either a recklessness or knowledge intent standard.10
While the uncertainty surrounding the proper standard for § 875(c) did not save Elonis, it lingers. Potential litigants need to know how their online behavior will be evaluated, and uncertainty in the regulation of speech creates an undesirable chilling effect. As social media users become increasingly bellicose and unrestrained, the issue will surely find its way before a court again. This comment will propose a practical solution, informed by a realistic view of online culture, to the residual uncertainty surrounding the proper standard of intent for online threats.
This comment proceeds in three parts. It begins by explaining the facts and reasoning of Elonis v. United States. Next, it introduces First Amendment “true threat” jurisprudence and highlights the features of online communication and culture that bear on the recklessness-knowledge intent debate. Finally, it proposes a recklessness standard as the most pragmatic solution to the open question of intent in online threats.
Elonis v. United States
After a painful divorce, Anthony Elonis took to his Facebook page for what he referred to as therapy.11 Elonis stated that he found comfort in posting aggressive, gruesome, and often terrifyingly specific screeds in loose verse. One such post described how Elonis would smother his ex-wife with a pillow. Another set out a detailed invitation to destroy her kitchen with a mortar shell. Others cavalierly dismissed a restraining order.12
At trial, Elonis claimed his posts were rap lyrics styled after the violent verses of popular artists like Eminem.13 Because he left several of them in comments on his ex-wife’s sister’s Facebook updates, Elonis’ ex-wife read many of his posts, which made her fear for her life.14
After FBI agents visited his home, Elonis returned to Facebook with another series of angry posts. The subjects of these posts included local law enforcement, an FBI agent, a kindergarten class, and employees of the amusement park where he worked. In one post, he suggested that he might detonate a suicide vest during the next police visit.15 Agents soon returned to arrest him, and Elonis was charged and convicted of five counts of transmitting threats in interstate commerce.16
The text of § 875(c) does not indicate a mens rea requirement.17 The District Court applied an objective general intent standard, and the jury was instructed to convict Elonis “if a reasonable person would foresee that his statements would be interpreted as a threat.” As the government argued in its closing statement, under this standard, “it doesn’t matter what [Elonis] thinks.”18 The jury convicted Elonis, and the conviction was affirmed by the Third Circuit.19
Before the Supreme Court, Elonis argued that conviction under § 875(c) required Elonis to intend to threaten his wife with the posts. Under this intent standard, he claimed that the government could not prove its case, because Elonis had intended his posts as art, and had not known that anyone would see them as anything else.20 The government urged the Court to affirm the objective standard under which Elonis was convicted.
The Court agreed with Elonis that negligence was not the appropriate standard, and remanded the case.21 The Court observed that criminal statutes are generally interpreted to assume intent on the part of the actor, even where the law does not explicitly say so.22 The jury instructions at trial were therefore defective, because they could have led to conviction even where Elonis was unaware of any wrongdoing.23 The Chief Justice’s majority opinion clarified only that some subjective intent was required; the negligence standard that the jury was instructed to apply was not enough. But Elonis had argued below that he could not be convicted because he did not know his lyrics were a threat, a position that would also exclude a recklessness standard. The Chief Justice stopped short of deciding whether such a standard would satisfy § 875(c).
Justice Alito concurred in the outcome, but argued that the Court should not leave this question for another day, and should instead affirmatively adopt a recklessness standard for § 875(c) threats.24 He agreed that the general presumption of an intent requirement should apply to the federal law against interstate threats, and that the negligence standard the jury instructions provided for was inadequate.25
Justice Alito’s concurrence criticized the majority’s refusal to decide the recklessness issue as a failure to completely decide the case, and noted that as a First Amendment matter “Elonis argued that recklessness is not enough, and the Government argued that it more than suffices.”26 As Justice Alito points out, the matter must be clarified for the attorneys and judges who must apply this important criminal statute.
True threats, online communities, and the risk of decontextualization
It is well-settled that the First Amendment protects provocative language unless it falls within one of a few “certain well-defined and narrowly limited classes of speech,”27 one of which is “true threats.”28
Threatening statements can cause the listener to fear for his safety, and can lead to social disruption. Serious statements of intent to harm another can also provoke the listener or a passerby, and lead to a violent confrontation. In the interest of public safety, true threats fall beyond the protection of the First Amendment.29 Justice Alito was right to press the Court to bring some finality to the issue of intent in criminal threats; how courts should decide whether a threat is “true” is the central issue of this case.
Whether a statement represents a true threat, as opposed to protected vitriol, depends on the factual context surrounding it.30 Further, the Court noted that the state must prove that the defendant had knowledge of the facts that make his conduct fit the definition of the offense.31 That is, while the state need not prove that the defendant actually knew that his statements fit the legal definition of a “true threat,” it must show that the defendant was aware of the specific contents of his speech, and of the contextual factors that place his statements within that legal definition. This becomes increasingly difficult to prove as the analysis of context and audience grows more complex in the digital age.
As of 2016, 79% of all American adults with regular Internet access used Facebook.32 Other familiar names such as Instagram, Pinterest, LinkedIn, and Twitter have large user bases of up to a third of adult Internet users.33 These platforms share a few relevant features: they encourage users to form online communities, allow users to re-post and circulate other users’ updates, and give users some ability to restrict recirculation and contain the reach of their posts through privacy settings. These features can inform the legal analysis in this case, because they give speakers notice of, and some control over, the context attached to their online statements.
Widespread use of social media has made it as easy to send a message to one friend across town as to broadcast one to hundreds of millions of people around the world. Groups of people who would never before have found each other can now interact and form communities online.34 Over time, online communities progress just like any other; they develop their own turns of phrase, inside jokes, and social mores.35 Some communities develop norms of language that outsiders might find disturbing or offensive.36
In some online contexts, the common parlance is shockingly violent and yet nonthreatening. A 2015 Southern Poverty Law Center report described how right-wing extremists and white supremacists have coalesced on Reddit.37 Like Eminem’s lyrics, some of their language is shocking. But the law must not lose sight of the bedrock principle that “the State has no power to ban speech on the basis of its content.”38 Only true threats may be criminalized, and speech can turn to violent language for reasons other than expressing a true threat.
One example of violent language unrelated to threats comes from “signaling theory.” Advancing an interdisciplinary theory derived from biology and economics, signaling theorists posit that speakers infuse their persona, delivery, or message with violence or incriminating statements to make a lie or facade costlier to maintain, thereby improving their credibility.39 This is the logic of the lottery winner who purchases a Maserati or other attention-grabbing status symbol to display wealth.
Online communities are treasure troves for signaling theorists. Because the cost of deception online is low, signaling behavior is particularly common as a means of demonstrating legitimate community membership and status.40 Social media users who post about inflammatory topics, such as sexually lewd or criminal behavior, may do so as a means of signaling their identities and membership within a community, online or otherwise.41 In this context, in-group listeners can parse factual claims from signaling language. Indeed, the ability to make such distinctions is the entire point; only members can master the use of in-group signals, and imposters will be easily identified and excluded.42
Looking behind the words to find context, therefore, is at least as important when courts interpret online communication as it is for in-person speech. Courts must consider the norms of the community to which such a communication is addressed when evaluating threats. But online communications often travel beyond a user’s list of followers, and they are often decontextualized in the process.43 This highlights new justifications for the recklessness standard which Justice Alito missed: it would clarify expectations, and reinforce the responsibility of online speakers to direct their statements appropriately.
Applying the recklessness standard
As the Court of Appeals held on remand, Elonis’ conduct was wrongful under either standard.44 The record at trial established that when he posted his statements to his public Facebook feed, he knew that some recipients would see them as serious threats.45 He knew, for instance, that his coworkers had expressed concern about similar, earlier posts, and that his ex-wife had sought a restraining order after being frightened by other posts.46
Elonis could have taken steps to limit his audience and avoid harm, and applying a recklessness standard would incentivize doing so. To illustrate, consider a different set of facts. Suppose that, rather than posting his violent lyrics to his own public Facebook wall, Elonis posted them to an Eminem fan group page, where it had become common for members to post their own lyrics, under which other users would leave comments and critiques. Elonis’ ex-wife is no longer friends with him on Facebook, but they share many mutual friends. One of them comments on Elonis’ lyrics, causing them to appear on his ex-wife’s News Feed.
Elonis’ behavior here is closer to the edge than it was in the actual case. The fan group, filled with other, similarly violent lyrics, casts his own words in a less threatening light. The factual context of the Eminem fan group would more strongly support his defense that he did not know he was saying something that anyone would interpret as a threat. On the other hand, the fact that Elonis posted his bloodthirsty desires to a group may not put his ex-wife at ease if she nonetheless encountered them. Posting his lyrics to the group therefore may not mitigate the harm that § 875(c) was adopted to avoid: the fear of harm felt by Elonis’ ex-wife.
Consider a further variation. This time, Elonis posts his raps to an Eminem fan group that is closed to nonmembers; to access posts within the group, new members must expressly consent to reading violent rap lyrics in the style of Eminem. Because the post was made to a closed group, Facebook’s privacy system only allows it to be re-posted to other group members.
In this variation, Elonis posts in a protected forum where the technical design of Facebook’s privacy settings will confine his words to a linguistic context where they will not be taken as true threats. This same thought experiment could be played out on various platforms; Twitter, Reddit, Instagram, and most others have some means of segmenting, hiding, or otherwise controlling who can and cannot see certain posts.
This thought experiment is meant to illustrate another reason to adopt a recklessness standard: it creates the right incentives online. While our commitment to free speech should extend to protect the organic interactions of online communities, the risk of decontextualization presents a serious problem. The law may chill good speech if courts are too restrictive, or it may expose large groups of people to criminal speech if courts are too permissive. Adopting a recklessness standard for § 875(c) would be a step toward a solution. A speaker who “consciously disregard[s]”47 privacy settings and the option to use closed groups, and then posts violent language likely to show up on his ex-wife’s News Feed, creates a “substantial risk”48 that she will feel threatened. Such behavior would be punishable under a recklessness standard.
Going further and requiring the government to show knowledge intent, on the other hand, would allow too many people to go unpunished when they casually make statements about violence which they know will likely be seen by people who will react with fear. Wrongful speech would go unpunished so long as defendants could claim that it was never “practically certain” that any specific post would be re-posted beyond the appropriate context, even if they took no steps to keep that from happening.
Elonis’ conviction was rightly upheld. It makes sense because he had a clear opportunity to make therapeutic art without creating the harm that § 875(c) punishes. He instead chose to broadcast statements that he knew would cause fear in many who would receive them. He could have ensured that his art would remain abstract by posting it from an anonymous handle, or by posting it to a closed group. Instead, he merely included tongue-in-cheek disclaimers. No reasonable juror would believe that Elonis did not know that at least some people would take his posts seriously.
Users are on notice and in control of their own privacy settings, and of the forums in which they post. They should consider these facts when posting language that may be appropriate for a niche community, but would be a threat when read by an outsider. Inquiry into the context of social media communication must therefore include the question of what privacy settings, platforms, and groups were in play when the defendant posted the offending language. A recklessness standard would allow prosecutors to reach posts like Elonis’, which he knew his wife was likely to see and be shocked by, while still stopping short of drawing bright lines that chill protected speech.
In 1996, Judge Frank Easterbrook critiqued the emerging field of cyberlaw with a parable. The University of Chicago has never had a law school course called “The Law of the Horse,” he began.49 “Lots of cases deal with sales of horses; others deal with people kicked by horses; still more deal with the licensing and racing of horses, or with the care veterinarians give to horses, or with prizes at horse shows. Any effort to collect these strands into a course on ‘The Law of the Horse’ is doomed to be shallow and to miss unifying principles.”50
Some legally significant scenarios are only possible because of new technologies, but not all of them require novel legal solutions. So it is with the problem of online threats. The new lessons of social media are important to the best interpretation of § 875(c), but the conclusion they point to is timeless. Our principles require rules that allow people to interact freely, even in ways that make outsiders squeamish, while protecting individuals from threats that put them in reasonable fear of harm. Social media users must consider their entire audience when making potentially inflammatory posts, including those who would rather not hear them. Applying a recklessness standard to § 875(c) accomplishes that.