In the immediate aftermath of the recent tragedy in Las Vegas, false information spread rapidly across the internet about the perpetrator and the method of the attack.1 Online sources quickly attributed blame to a man later determined to be innocent; Google News spread misinformation and lent credence to unverified rumors.2 Websites such as Facebook and Google, which rely upon algorithms to sort through news content, have struggled to filter correct information from “fake news.”3 Whether this is simply the growing pains of building an internet news model or a failure of machine learning to discern fact from fiction remains to be seen, but the fate of these algorithms will impact the future of artificial intelligence and political discourse.

Defining what information is reliable and what information cannot be trusted becomes an initial barrier for technological solutions, as well as potential regulatory responses. One method of classifying fake news breaks the topic down into four categories based on the author’s intent to deceive and financial motivation: hoaxes, satire, propaganda/trolling, and humor.4 An alternative method might also include commentary or opinion pieces distributed by biased individuals, giving narrow and potentially misleading perspectives through credible channels.5

Questionable online information integrity can even be seen manifesting in the product reviews of Hillary Clinton’s recent book, What Happened, which collected 1,600 positive and negative Amazon reviews within hours of its release.6 Though determining whether a review is genuine or fake is ultimately impossible for an algorithm, Amazon knew which reviewers had purchased the item and quickly deleted 900 suspicious reviews.7 Amazon benefits from its pool of customer data in sifting through reviews, but Google and Facebook are frequently faced with filtering new websites, authors, and forms of media presenting breaking news on events they know little about.8
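The purchase-verification signal described above can be sketched in a few lines. Everything here is hypothetical; Amazon’s actual pipeline is not public, and this merely illustrates why holding transaction data makes the filtering problem tractable.

```python
# Minimal sketch of purchase-verified review filtering, assuming a review
# record carries a reviewer id and we can look up what each reviewer bought.
# All names here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Review:
    reviewer_id: str
    item_id: str
    text: str

def filter_unverified(reviews: list, purchases: dict) -> list:
    """Keep only reviews whose author actually bought the reviewed item.

    `purchases` maps a reviewer_id to the set of item_ids that reviewer
    has purchased; reviewers with no purchase history are filtered out.
    """
    return [r for r in reviews
            if r.item_id in purchases.get(r.reviewer_id, set())]
```

A platform without this transaction data, as the paragraph above notes, has no such ground truth to lean on and must infer credibility from the content itself.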

To address fake news and misinformation, computer algorithms are built to dissect the English language and compare articles to one another. The Fake News Challenge publicly challenged volunteers in 2016 to develop code that would compare articles and determine which ones were written on the same subject and whether those articles agreed or disagreed with each other.9 The goal of the Fake News Challenge was to aggregate articles with the same topic by computer algorithm and use people to pass judgment on the veracity of their topics.10 The top three algorithms submitted to the challenge relied upon deep learning to solve the challenge—the same tool used by Google and Facebook.11

Technology-based solutions, such as those sought by the Fake News Challenge, will undoubtedly be part of any potential solution to the proliferation of fake news due to the limitations of the legal system. The Constitution’s First Amendment stands as a barrier to state-enforced speech restrictions, and particularly protects the freedom of the press.12 While defamation removes speech from protections under the First Amendment, a finding of libel against a public figure requires a finding of actual malice by the publisher, such as knowingly distributing lies or speaking with a reckless disregard to the truth.13

One proposed solution could involve Congress granting a private right to have libelous statements removed from the internet by hosting services, or creating an obligation to not publish libelous statements.14 Another solution, although unlikely in the opinion of former Federal Trade Commission (FTC) Bureau of Consumer Protection Director David Vladeck, would have the FTC find that fake news impacts interstate commerce, bringing it within the agency’s statutory obligation to police “unfair or deceptive acts or practices in or affecting commerce.”15 Unfortunately, these potential solutions each present challenges to successful implementation and are unlikely to find political footing that would provide a substantial and lasting solution to the modern problem of fake news.

Blunt legal solutions alone run the risk of being overbroad and stifling important public debate.16 In crafting legislation that navigates around the First Amendment, Congress may strip valuable protections that have enabled internet innovation.17 In the absence of legal solutions, the technology-enabled rise of fake news will have to be combated by technological solutions that address the concerns of the public and preserve information integrity in an era of online media.