Disinformation: what it is, why it’s pervasive, and proposed regulations

At the CADE Tech Policy Workshop, experts Renee DiResta and Guillaume Chaslot spoke on disinformation, including the dynamics that cause it to go viral and attempts to address it.
Author: Rachel Thomas

Published: February 26, 2020

The next two videos from the University of San Francisco Center for Applied Data Ethics Tech Policy Workshop are available! Read more below, or watch them now:

- (Dis)Information & Regulation
- The Toxic Potential of YouTube’s Feedback Loop

Renee DiResta and Guillaume Chaslot are experts on disinformation who spoke at the CADE Tech Policy Workshop.

(Dis)Information & Regulation

Renee DiResta shares a framework for evaluating disinformation campaigns, explains the dynamics of why and how disinformation and propaganda spread, and surveys proposed regulatory approaches to these issues. She covers proposals around ads, antitrust, and privacy, and how these proposed laws would affect the balance between privacy, security, and free expression. Disinformation is an ecosystem-level problem, not a software-feature-level problem, so policymaking needs to be agile and to address the broader ecosystem.

Renée DiResta is the technical research manager at the Stanford Internet Observatory. She investigates the spread of malign narratives across social networks and assists policymakers in devising responses to the problem. Renée has studied influence operations and computational propaganda in the context of pseudoscience conspiracies, terrorist activity, and state-sponsored information warfare, and has advised Congress, the State Department, and other academic, civil society, and business organizations on the topic. At the behest of the Senate Select Committee on Intelligence, she led one of the two research teams that produced comprehensive assessments of the Internet Research Agency’s and the GRU’s influence operations targeting the U.S. from 2014 to 2018.

Watch her talk here:

Read more about Renee’s work in these selected articles and essays:

The Toxic Potential of YouTube’s Feedback Loop

Systemic factors contribute to the proliferation and amplification of conspiracy theories on platforms such as YouTube. The emphasis on engagement metrics, the cheap cost of experimentation, and the potential for rewards all incentivize propagandists to game the recommendation system. Flagging and removing harmful content is much slower than the speed at which videos go viral. The situation is even worse for languages other than English, in which tech platforms tend to invest fewer resources. For example, major concerns were raised in France about YouTube promoting pedophilia in 2016 and 2017, yet YouTube failed to take action until 2019, when the issue became news in the USA after a high-profile New York Times article and major American companies pulled their ads.
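To make the feedback-loop dynamic concrete, here is a minimal toy simulation in Python. It is only a sketch under invented assumptions (the video catalogue, engagement rates, and score-update rule are all hypothetical, not YouTube's actual system): content with a higher per-view engagement rate gets recommended more often, which earns it more engagement, which raises its ranking further.

```python
import random

# Hypothetical catalogue: each video has an invented per-view
# engagement probability. Sensational content often engages more per view.
videos = {
    "measured_news":   {"engagement_rate": 0.05, "score": 1.0},
    "how_to_tutorial": {"engagement_rate": 0.08, "score": 1.0},
    "conspiracy_clip": {"engagement_rate": 0.20, "score": 1.0},
}

def recommend():
    """Pick a video with probability proportional to its learned score."""
    total = sum(v["score"] for v in videos.values())
    r = random.uniform(0, total)
    for name, v in videos.items():
        r -= v["score"]
        if r <= 0:
            return name
    return name  # fallback for floating-point edge cases

random.seed(0)
for _ in range(10_000):
    name = recommend()
    # An engaged view nudges the score up, making future
    # recommendations of the same video more likely.
    if random.random() < videos[name]["engagement_rate"]:
        videos[name]["score"] += 0.01

for name, v in sorted(videos.items(), key=lambda kv: -kv[1]["score"]):
    print(f"{name}: score={v['score']:.2f}")
```

Even with modest differences in engagement rates, the self-reinforcing update concentrates recommendations on the most engaging item, here the conspiracy clip, which is the dynamic Chaslot describes.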

Guillaume Chaslot earned his PhD in AI working on computer players for the game of Go, worked at Google on YouTube’s recommendation system several years ago, and has since run the non-profit AlgoTransparency, which quantitatively tracks the ways YouTube recommends conspiracy theories. His work has been covered in the Washington Post, The Guardian, the Wall Street Journal, and more. Watch his talk here:

Read more about Guillaume’s work in these selected articles and essays:

Learn More About the CADE Tech Policy Workshop

Special thanks to Nalini Bharatula for her help with this post.