
Samantha Bradshaw is a scholar of technology, security, and democracy. She is the Director of the Center for Security, Innovation & New Technology (CSINT) at American University (AU) and an Assistant Professor at AU’s School of International Service. Her research has been published in leading academic journals and featured in global media outlets such as the New York Times, the Washington Post, CNN, and Bloomberg. Samantha regularly speaks on expert panels, delivers keynote addresses, and advises governments and international organizations such as UNESCO and NATO.

Recent Publications
Between 2010 and 2022, 80 countries enacted new legislation or amended existing laws in an attempt to curb the spread of misinformation online.
In a given month, more than 100 million people open Pokémon Go—the app that allows users to superimpose the world’s most profitable media franchise onto reality using only their smartphone.
While social media disinformation has received significant academic and policy attention, more consequential forms of intentional manipulation target the underlying digital infrastructures upon which society depends.
Disinformation spread via digital technologies is accelerating and exacerbating violence globally. There is an urgent need to understand how coordinated disinformation campaigns rely on identity-based disinformation that weaponizes racism, sexism, and xenophobia to incite violence against individuals and marginalized communities, stifle social movements, and silence the press.
For almost a decade, the study of misinformation has been a priority for policy circles, political elites, academic institutions, non-profit organizations, and the media.
During the 2022 Russian invasion of Ukraine, Russia was accused of weaponizing its state-backed media outlets to promote a pro-Russian version of the war. Consequently, Russian state-backed media faced a series of new sanctions from Western governments and technology companies. While some studies have sought to identify disinformation about the war, less research has focused on understanding how these stories come together as narratives, particularly in non-English language contexts. Grounded in strategic narrative theory, we analyze Russian state-backed media coverage of the Ukraine war across 12 languages.
Since it is difficult to determine whether social media content moderators have assessed particular content, it is hard to evaluate the consistency of their decisions within platforms. We study a dataset of 1,035 posts on Facebook and Twitter to investigate this question. The posts in our sample made 78 misleading claims related to the U.S. 2020 presidential election. These posts were identified by the Election Integrity Partnership, a coalition of civil society groups, and sent to the relevant platforms, where employees confirmed receipt. The platforms labeled some (but not all) of these posts as misleading. For 69% of the misleading claims, Facebook consistently labeled each post that included one of those claims—either always or never adding a label. It inconsistently labeled the remaining 31% of misleading claims. The findings for Twitter are nearly identical: 70% of the claims were labeled consistently, and 30% inconsistently.
Recently, social media platforms have introduced several measures to counter misleading information. Among these measures are “state-media labels,” which help users identify and evaluate the credibility of state-backed news. YouTube was the first platform to introduce labels that provide information about state-backed news channels. While previous work has examined the effectiveness of information labels in controlled lab settings, few studies have examined how state-media labels affect users’ perceptions of content from state-backed outlets. This article proposes new methodological and theoretical approaches to investigate the effect of state-media labels on users’ engagement with content. Drawing on a content analysis of 8,071 YouTube comments posted before and after the labeling of five state-funded channels (Al Jazeera English [AJE], China Global Television Network, Russia Today [RT], TRT World, and Voice of America [VOA] News), this article analyses the effect that YouTube’s labels had on users’ engagement with state-backed media content.
Russian influence operations on social media have received significant attention following the 2016 US presidential elections. Scholarship has largely focused on the covert strategies of the Russia-based Internet Research Agency and the overt strategies of Russia's largest international broadcaster, RT (Russia Today). Since 2017, however, a number of new media providers linked to the Russian state have emerged, and less research has examined these channels and how they may support contemporary influence operations.
Drawing on a qualitative analysis of 7,506 tweets by state-sponsored accounts from Russia’s GRU and the Internet Research Agency (IRA), Iran, and Venezuela, this article examines the gender dimensions of foreign influence operations. By examining the political communication of feminism and women’s rights, we find, first, that foreign state actors co-opted intersectional critiques and countermovement narratives about feminism and female empowerment to demobilize civil society activists, spread pro-government propaganda, and generate virality around divisive political topics.
Previous research has described how highly personalised paid advertising on social media platforms can be used to influence voter preferences and undermine the integrity of elections. However, less work has examined how search engine optimisation (SEO) strategies are used to target audiences with disinformation or political propaganda. This paper examines 29 junk news domains and their SEO keyword strategies between January 2016 and March 2019. I find that SEO, rather than paid advertising, is the most important strategy for generating discoverability via Google Search.
Social media is an important source of news and information in the United States. But during the 2016 US presidential election, social media platforms emerged as a breeding ground for influence campaigns, conspiracy, and alternative media. Anecdotally, the nature of political news and information evolved over time, but political communication researchers have yet to develop a comprehensive, grounded, internally consistent typology of the types of sources shared. Rather than chasing a definition of what is popularly known as “fake news,” we produce a grounded typology of what users actually shared and apply rigorous coding and content analysis to define the phenomenon.
Press & Media Engagement

I speak regularly with journalists working on issues related to social media, elections, privacy & surveillance, freedom of speech, and democracy. My research and writing have been featured in numerous local and global outlets, including the New York Times, the Washington Post, CNN, the Globe and Mail, and Reuters.
Public Speaking & Events
Technology and the next frontier in human rights. Hertie School.
I have given lectures and keynotes around the world, at international organizations such as UNESCO and NATO, universities including Harvard, MIT, and Cambridge, and other NGOs, think tanks, and research institutions. You can view a list of my past speaking engagements and access my PowerPoint presentations from previous events.