Samantha Bradshaw is a scholar of new technology, security, and democracy. She is an Assistant Professor at American University’s School of International Service, and an Associated Faculty member at the Center for Security, Innovation and New Technology (CSINT). Samantha also holds fellowships with the International Strategy Forum (ISF) as a North America Fellow and the Center for International Governance Innovation (CIGI).

Researching Disinformation, Social Media, and Democracy

Samantha is a leading expert on new technologies and democracy. Her research examines the producers and drivers of disinformation, and how emerging technologies (artificial intelligence, automation, and big data analytics) both enhance and constrain the spread of disinformation online. Working at the forefront of theoretical and methodological approaches for studying the complex relationship between social media and democracy, Samantha’s research has advanced academic debate, public understanding, and policy discussions about the impact of emerging technologies on political expression and privacy.

Recent Publications


An Investigation of Social Media Labeling Decisions Preceding the 2020 U.S. Election

PLOS ONE, 2023.


Since it is difficult to determine whether social media content moderators have assessed particular content, it is hard to evaluate the consistency of their decisions within platforms. We study a dataset of 1,035 posts on Facebook and Twitter to investigate this question. The posts in our sample made 78 misleading claims related to the U.S. 2020 presidential election. The platforms labeled some (but not all) of these posts as misleading. For 69% of the misleading claims, Facebook consistently labeled each post that included one of those claims—either always or never adding a label. It inconsistently labeled the remaining 31% of misleading claims. The findings for Twitter are nearly identical: 70% of the claims were labeled consistently, and 30% inconsistently. We investigated these inconsistencies and found most of the platforms’ decisions were arbitrary. However, in about a third of the cases we found plausible reasons that could explain the inconsistent labeling. Our strongest finding is that Twitter was more likely to label posts from verified users, and less likely to label identical content from non-verified users. This study demonstrates how academic–industry collaborations can provide insights into typically opaque content moderation practices.


Look Who’s Watching: Platform Labels and User Engagement on State-backed Media

American Behavioral Scientist, 2023.

Recently, social media platforms have introduced several measures to counter misleading information. Among these measures are “state-media labels,” which help users identify and evaluate the credibility of state-backed news. YouTube was the first platform to introduce labels that provide information about state-backed news channels. While previous work has examined the effectiveness of information labels in controlled lab settings, few studies have examined how state-media labels affect users’ perceptions of content from state-backed outlets. This article proposes new methodological and theoretical approaches to investigate the effect of state-media labels on users’ engagement with content. Drawing on a content analysis of 8,071 YouTube comments posted before and after the labeling of five state-funded channels (Al Jazeera English [AJE], China Global Television Network, Russia Today [RT], TRT World, and Voice of America [VOA] News), this article analyzes the effect that YouTube’s labels had on users’ engagement with state-backed media content.


Playing Both Sides: Russian State-Backed Media Coverage of the #BlackLivesMatter Movement

International Journal of Press/Politics, 2022.

Russian influence operations on social media have received significant attention since the 2016 US presidential election. Scholarship has largely focused on the covert strategies of the Russia-based Internet Research Agency and the overt strategies of Russia's largest international broadcaster, RT (Russia Today). Since 2017, however, several new media outlets linked to the Russian state have emerged, and less research has examined these channels and how they may support contemporary influence operations. We conduct a qualitative content analysis of 2,014 Facebook posts about the #BlackLivesMatter (BLM) protests in the United States over the summer of 2020 to comparatively examine the overt propaganda strategies of six Russian-linked news organizations: RT, Ruptly, Soapbox, In The NOW, Sputnik, and Redfish.


The Gender Dimensions of Foreign Influence Operations

International Journal of Communication, vol 15, 2021.


Drawing on a qualitative analysis of 7,506 tweets by state-sponsored accounts from Russia’s GRU and the Internet Research Agency (IRA), Iran, and Venezuela, this article examines the gender dimensions of foreign influence operations. This comparative look at the online political communication of women’s rights by foreign state actors highlights distinct blueprints for foreign influence operations while enriching the literature about the unique challenges women face online.


Combating Information Manipulation: A Playbook for Elections and Beyond

NDI, IRI, and the Stanford Internet Observatory, 2021.


Efforts to undermine election-related information integrity are a growing threat to democracies around the world. These efforts serve to delegitimize elections by reducing faith in elected governments, polarizing societies, and strengthening authoritarians. In many countries, civil society actors, journalists, governments, election management bodies, and other democratic actors are on the frontlines of these battles. Yet they face significant challenges in preparing for and responding to new digital threats as they occur before, during, and after elections. To counter these threats, the International Republican Institute, the National Democratic Institute, and the Stanford Internet Observatory collaborated to create Combating Information Manipulation: A Playbook for Elections and Beyond.


Industrialized Disinformation: 2020 Global Inventory of Organized Social Media Manipulation

Computational Propaganda Project, Working Paper 1. 2021.


The manipulation of public opinion over social media remains a critical threat to democracy. Over the past four years, we have monitored the global organization of social media manipulation by governments and political parties, and the various private companies and other organizations they work with to spread disinformation. Our 2020 report highlights the recent trends of computational propaganda across 81 countries and the evolving tools, capacities, strategies, and resources used to manipulate public opinion around the globe.


Press & Media Engagement

I speak regularly with journalists working on issues related to social media, elections, privacy and surveillance, freedom of speech, and democracy. My research and writing have been featured in numerous local and global outlets, including The New York Times, The Washington Post, CNN, The Globe and Mail, and Reuters.


Public Speaking & Events

I have given lectures and keynotes at organizations around the world, including international organizations such as UNESCO and NATO; universities including Harvard, MIT, and Cambridge; and other NGOs, think tanks, and research institutions. You can view a list of my past speaking engagements and access my PowerPoint presentations from previous events.