Research

Published Papers

Understanding Political Communication and Political Communicators on Twitch
Accepted at PLOS ONE. [Paper]

As new technologies rapidly reshape patterns of political communication, platforms like Twitch are transforming how people consume political information. This entertainment-oriented live streaming platform allows us to observe the impact of technologies such as “live-streaming” and “streaming-chat” on political communication. Despite its entertainment focus, Twitch hosts a variety of political actors, including politicians and pundits. This study explores Twitch politics by addressing three main questions: 1) Who are the political Twitch streamers? 2) What content is covered in political streams? 3) How do audiences of political streams interact with each other? To identify political streamers, I leveraged the Twitch API and supervised machine-learning techniques, identifying 574 political streamers. I used topic modeling to analyze the content of political streams, revealing seven broad categories of political topics and a unique pattern of communication involving context-specific “emotes.” Additionally, I created user-reference networks to examine interaction patterns, finding that a small number of users dominate the communication network. This research contributes to our understanding of how new social media technologies influence political communication, particularly among younger audiences.
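As a toy illustration of the user-reference network construction mentioned above (a simplification for exposition, not the paper's actual pipeline), weighted reference edges can be extracted from chat messages by treating @-mentions as directed references between users:

```python
import re
from collections import Counter

def reference_edges(messages):
    """Build weighted user-reference edges from chat messages.
    `messages` is a list of (author, text) pairs; a reference is an
    @-mention of another user. Self-mentions are ignored. Returns a
    Counter mapping (source, target) pairs to reference counts."""
    edges = Counter()
    for author, text in messages:
        for target in re.findall(r"@(\w+)", text):
            if target != author:
                edges[(author, target)] += 1
    return edges
```

Degree concentration in the resulting edge list is one simple way to see whether a small number of users dominate the communication network.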

The consistency principle: Crisis perceptions, partisanship and public support for democratic norms in comparative perspective
European Journal of Political Research. 2024. [Paper]
With Amanda Driscoll, Jay Krehbiel, and Michael J. Nelson

A growing body of research theorizes that partisanship can undermine democracy as citizens prioritize their political interests over abstract norms and values. We argue that crises might counteract intense partisanship by giving citizens clarity on the threats posed by rule of law violations. Examining the differential application of a law – a breach of democratic norms – we draw on an experiment embedded in representative surveys of Germany, the United States, Hungary and Poland to examine citizens’ sense of appropriate punishment for elites’ violation of a municipal mask-wearing ordinance. We find evidence of partisan bias in citizens’ willingness to support punishment in all four countries. But, in the two consolidated democracies, we find that concern about the Covid-19 crisis diminishes partisan biases in punishment preferences: citizens who are most concerned about the crisis also show the most consistency in their willingness to hold copartisans to account.

Spatial modeling of dyadic geopolitical interactions between moving actors
Political Science Research and Methods. 2023. [Paper]
With Howard Liu and Bruce A. Desmarais

Political actors often interact spatially and move around. However, with a few exceptions, existing political research has analyzed spatial dependence among actors with fixed geographic locations. Focusing on fixed geographic units prevents us from probing dependencies in spatial interaction between spatially dynamic actors, which are common in some areas of political science, such as sub-national conflict studies. In this note, we propose a method to account for spatial dependence in dyadic interactions between moving actors. Our method uses the spatiotemporal histories of dyadic interactions to project locations of future interactions—projected actor locations (PALs). PALs can, in turn, be used to model the likelihood of future dyadic interactions. In a replication and extension of a recent study of subnational conflict, we find that using PALs improves the predictive performance of the model and indicates that there is a clear relationship between actors’ past conflict locations and the likelihood of future conflicts.
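The core intuition behind PALs — that an actor's likely future interaction sites can be projected from a temporally discounted history of its past interaction locations — can be sketched as follows. The exponential decay weighting below is an illustrative simplification, not the authors' estimator:

```python
import math

def projected_location(history, current_time, decay=0.5):
    """Project an actor's likely future interaction location as a
    temporally discounted average of its past interaction coordinates.
    `history` is a list of (time, lat, lon) events; more recent events
    receive exponentially larger weights, so the projection tracks where
    the actor has been interacting lately."""
    weights = [math.exp(-decay * (current_time - t)) for t, _, _ in history]
    total = sum(weights)
    lat = sum(w * la for w, (_, la, _) in zip(weights, history)) / total
    lon = sum(w * lo for w, (_, _, lo) in zip(weights, history)) / total
    return lat, lon
```

A dyad's projected locations could then enter a model of future interaction likelihood, e.g., via the distance between the two actors' projections.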

The Effects of an Informational Intervention on Attention to Anti-Vaccination Content on YouTube
Proceedings of the International AAAI Conference on Web and Social Media. 2020. [Paper]
With Omer F. Yalcin, Samuel E. Bestvater, Kevin Munger, Burt L. Monroe, and Bruce A. Desmarais

The spread of misinformation related to health, especially vaccination, is a potential contributor to myriad public health problems. This misinformation is frequently spread through social media. Recently, social media companies have intervened in the dissemination of misinformation regarding vaccinations. In the current study we focus on YouTube. Recognizing the extent of the problem, YouTube implemented an informational modification that affected many videos related to vaccination beginning in February 2019. We collect original data and analyze the effects of this intervention on video viewership. We find that this informational intervention reduced traffic to the affected videos, both overall and in comparison to a carefully matched set of control videos that did not receive the informational modification.

Working Papers

The Political Influence of Non-Politicized Friends: How do social networks affect the spread of protest information on social media?
Under Review

How do social networks influence the spread of protest information on social media? This article argues that the political characteristics of accounts sharing protest information affect how that information is interpreted and spread by other Twitter users. Specifically, I suggest that whether Twitter accounts are perceived as overtly political or nonpolitical can shape how users respond to signals about political protests. I hypothesize that nonpolitical accounts may exert more influence in spreading protest messages than political accounts, as they are seen as less biased or more trustworthy. To test this theory, I conducted an online experiment using vignettes that simulate the Twitter environment. Participants were exposed to protest-related Tweets and were asked whether they would retweet or like them, with some accounts presenting political traits in their profiles and others appearing nonpolitical. Contrary to my expectations, the results did not reveal a statistically significant difference in participants’ responses between political and nonpolitical profiles. However, the study revealed unexpected patterns, including the role of education in shaping retweet behavior differently across political groups and the influence of context-specific factors, such as protest types and images, on user engagement. These findings suggest that individual characteristics and content features may interact in complex ways, warranting further exploration.

Comparison of the Credibility of News Shared on Four Different Platforms during the 2022 Midterm Election: Twitter, Facebook, Instagram, and Reddit
With Ozgur Can Seckin, Kaicheng Yang, and Filippo Menczer

Social media platforms have become primary sources for accessing and consuming political news, aligning with the ongoing digital transformation of the media landscape. While this transformation has facilitated easier access to information, concerns regarding the over-sharing of news from low-credibility sources and partisan-driven news sharing behaviors have emerged as significant issues for both the scientific community and policymakers. Despite various studies on this topic, there remains surprisingly little understanding of how users’ political news sharing behavior differs among different social media platforms. In this article, we compare the patterns of news sharing during a major political event, the United States 2022 midterm election, across three distinct social media platforms: Twitter, Meta (encompassing Facebook and Instagram), and Reddit. We leverage large-scale data collected during the election cycle. Our findings indicate differences in the news sources shared on each platform, both in terms of source credibility and partisanship. News sources shared on Reddit have higher credibility and are relatively left-leaning compared to those on Twitter and Meta. The study also reveals consistent patterns across all three platforms, indicating that right-leaning URLs tend to be associated with lower credibility, in line with existing literature. However, notable differences among the platforms emerge even when comparing URLs with similar partisan leanings. These findings underscore the importance of conducting multi-platform research on this topic, which can enhance our understanding of the overall news-sharing environment of social media.

The Persistence of Contrarianism on Twitter: Mapping users’ sharing habits for the Ukraine war, COVID-19 vaccination, and the 2020 Midterm Elections
With David Axelrod and John Paolillo

Empirical studies of online disinformation emphasize matters of public concern such as the COVID-19 pandemic, foreign election interference, and the Russo-Ukrainian war, largely in studies that treat the topics separately. Comparatively fewer studies attempt to relate such disparate topics and address the extent to which they share behaviors. In this study, we compare three samples of Twitter data on COVID-19 vaccination, the Ukraine war and the 2020 midterm elections, to ascertain how distinct ideological stances of users across the three samples might be related. Our results indicate the emergence of a contrast between institutionally-oriented stances and a broad contrarian stance. The contrarian position is most clearly defined by retweeting conspiratorial content targeting public health, democratic institutions and US foreign policy. We confirm the existence of ideologically coherent cross-subject stances among Twitter users, but in a manner not squarely aligned with right-left political orientations.

Work in progress

Twitching, Fast and Slow: A Field Experiment in Political Streams
With Chloe Ahn, Drew Dimmery, and Kevin Munger

Online livestreaming has become a prominent hub for online conversations with a variety of content, including politics. Unlike asynchronous political expressions in online spaces, livestreaming chats are more vulnerable to becoming environments where extreme speech is pervasive. To mitigate the spread of toxic speech online, we conducted a field experiment where we intervened in the streaming chat by entering the livestream, leaving comments, and measuring subsequent changes in the discussion. A confederate joined each stream and began commenting in a style that was randomized based on two factors: whether the comments were polite or less polite, and whether the comments were ideologically congruent or incongruent with the leaning of the stream. Our results indicate that sharing a polite or ideologically incongruent comment during a political livestream can reduce the toxicity of speech from other commenters and increase the substantiveness of comments shared. However, these benefits came with the trade-off of decreasing users’ engagement within the chats. While like-minded, less-restrictive expressions online promote engagement with other users, they may also incentivize extreme speech norms that lack substantive reflections on the discussed topics. Our findings have broader implications for studying the relatively new media affordance of livestreaming and important trade-offs in online platform design.

Quantifying the effects of time delay in illegal content takedown
With Bao Tran Truong, Samuel Groesch, Enrico Verdolotti, Silvia Giordano, Natascha Just, Florian Saurwein, and Filippo Menczer

Social media platforms implement content moderation to manage illegal content, which includes copyright violations and unlawful material dissemination. Various regulations mandate different “takedown deadlines” for such content, resulting in inconsistent enforcement and effectiveness across platforms. This study examines the impact of time delays in illegal content moderation, highlighting how the different deadlines imposed by regulations influence the social media ecosystem. Building on research that explores the effectiveness of moderation measures for misinformation and harmful content, we utilize an agent-based model to simulate illegal content removal. The findings reveal that the adverse impact of delayed removal grows with the length of the delay, but plateaus beyond a certain threshold, so that further lengthening the delay adds little additional harm. The study offers insights into optimal content moderation strategies and their implications for policymakers aiming to mitigate the risks associated with illegal content on social media platforms.
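The plateau effect can be illustrated with a much-simplified toy model (not the study's agent-based model): a piece of content accumulates exposures until it is either taken down or dies naturally, so extending the takedown deadline past the content's typical lifetime adds no further exposure. All parameters below are illustrative assumptions:

```python
import random

def simulate_exposure(delay, lifetime=10, spread_rate=2, trials=200, seed=42):
    """Toy simulation of delayed takedown. Each trial, one piece of
    illegal content gains `spread_rate` exposures per step until it is
    removed after `delay` steps or dies naturally after a random number
    of steps up to `lifetime`. Returns mean total exposures per piece."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        natural_life = rng.randint(1, lifetime)
        active_steps = min(delay, natural_life)  # removal or natural death
        totals.append(active_steps * spread_rate)
    return sum(totals) / trials
```

Sweeping `delay` from short to long shows exposures rising with the deadline and then flattening once the deadline exceeds the content's natural lifetime, mirroring the plateau described above.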

Large language models can infer political inclination from non-political discourse
With Byunghwee Lee, Yong-Yeol Ahn, Filippo Menczer, Jisun An, and Haewoon Kwak

This research explores the capability of large language models (LLMs) to infer political orientation from non-political text and examines how these inferences can be used to predict an individual’s stance on various political issues. Drawing from debate history on Debate.org, we investigate whether LLMs can accurately classify users as Democrat or Republican based on their discourse in non-political contexts. Additionally, we assess the effectiveness of different topics in facilitating these classifications and explore the models’ ability to predict stances on political issues using inferred political affiliations. Our findings contribute to the understanding of how LLMs may be used for micro-targeting, raising important ethical considerations about privacy and the potential misuse of inferred personal information in digital spaces.
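A minimal sketch of the zero-shot setup, where the prompt wording, topic framing, and label parsing are illustrative assumptions rather than the study's actual instructions:

```python
def build_inference_prompt(posts):
    """Assemble a zero-shot prompt asking a model to infer political
    affiliation from non-political posts only. The instruction wording
    here is illustrative, not the study's prompt."""
    joined = "\n".join(f"- {p}" for p in posts)
    return (
        "The following forum posts are all on non-political topics "
        "(e.g., movies, sports, technology).\n"
        f"{joined}\n"
        "Based only on writing style and expressed preferences, is the "
        "author more likely a Democrat or a Republican? Answer with one word."
    )

def parse_label(response):
    """Map a free-text model response to one of the two study labels,
    or None if no label can be recovered."""
    text = response.strip().lower()
    if "democrat" in text:
        return "Democrat"
    if "republican" in text:
        return "Republican"
    return None
```

The prompt would be sent to an LLM of choice, and the parsed labels compared against users' self-reported affiliations to measure classification accuracy.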