Election meddling, state-sponsored disinformation campaigns, and the potential manipulation of platform users are provoking intense reactions to technology around the world. The outrage following news reports that the data of millions of people were used without their knowledge to train sophisticated targeting tools that may have manipulated voters suggests that consumers’ expectations of how their data are collected and used do not correspond with the reality of the business models of many data-driven platforms. The allegations, if true, underscore the power of increasingly sophisticated predictive technology and the limitations of the United States’ largely self-regulatory approach to consumer data rights, privacy, and security. They also raise the possibility that regulators, policymakers, consumers, and even the platforms themselves may be significantly underestimating the risks of data-fueled analytics and automated technology.

In the last year, new risks have emerged, including the use of technology to manipulate perceptions and emotions; to rapidly disseminate “fake news”; and to deceive people through AI-driven systems that can create “deepfakes” (fake video and audio in which a person’s face is substituted). Technology is also being deployed to undermine democratic processes. For example, millions of fake comments were filed with the Federal Communications Commission during its proceeding to revise the Open Internet rules, and bad actors used social network platforms to conduct widespread propaganda and disinformation campaigns.
The Federal Trade Commission (FTC), the nation’s primary consumer data protection agency, is at the center of the debate over whether the United States’ approach to consumer protection is adequate for the digital age.