Research

Published Papers

Understanding political communication and political communicators on Twitch
PLOS ONE. 2024. [Paper]

As new technologies rapidly reshape patterns of political communication, platforms like Twitch are transforming how people consume political information. This entertainment-oriented live streaming platform allows us to observe the impact of technologies such as “live-streaming” and “streaming-chat” on political communication. Despite its entertainment focus, Twitch hosts a variety of political actors, including politicians and pundits. This study explores Twitch politics by addressing three main questions: 1) Who are the political Twitch streamers? 2) What content is covered in political streams? 3) How do audiences of political streams interact with each other? Using the Twitch API and supervised machine-learning techniques, I identified 574 political streamers. I used topic modeling to analyze the content of political streams, revealing seven broad categories of political topics and a unique pattern of communication involving context-specific “emotes.” Additionally, I created user-reference networks to examine interaction patterns, finding that a small number of users dominate the communication network. This research contributes to our understanding of how new social media technologies influence political communication, particularly among younger audiences.

The consistency principle: Crisis perceptions, partisanship and public support for democratic norms in comparative perspective
European Journal of Political Research. 2024. [Paper]
With Amanda Driscoll, Jay Krehbiel, and Michael J. Nelson

A growing body of research theorizes that partisanship can undermine democracy as citizens prioritize their political interests over abstract norms and values. We argue that crises might counteract intense partisanship by giving citizens clarity on the threats posed by rule of law violations. Examining the differential application of a law – a breach of democratic norms – we draw on an experiment embedded in representative surveys of Germany, the United States, Hungary and Poland to examine citizens’ sense of appropriate punishment for elites’ violation of a municipal mask-wearing ordinance. We find evidence of partisan bias in citizens’ willingness to support punishment in all four countries. But, in the two consolidated democracies, we find that concern about the COVID-19 crisis diminishes partisan biases in punishment preferences: citizens who are most concerned about the crisis also show the most consistency in their willingness to hold copartisans to account.

Spatial modeling of dyadic geopolitical interactions between moving actors
Political Science Research and Methods. 2023. [Paper]
With Howard Liu and Bruce A. Desmarais

Political actors often interact spatially and move around. However, with a few exceptions, existing political research has analyzed spatial dependence among actors with fixed geographic locations. Focusing on fixed geographic units prevents us from probing dependencies in spatial interaction between spatially dynamic actors, which are common in some areas of political science, such as subnational conflict studies. In this note, we propose a method to account for spatial dependence in dyadic interactions between moving actors. Our method uses the spatiotemporal histories of dyadic interactions to project locations of future interactions—projected actor locations (PALs). PALs can, in turn, be used to model the likelihood of future dyadic interactions. In a replication and extension of a recent study of subnational conflict, we find that using PALs improves the predictive performance of the model and indicates that there is a clear relationship between actors’ past conflict locations and the likelihood of future conflicts.
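The core PAL idea — projecting a dyad's likely next interaction site from its spatiotemporal history — can be illustrated with a minimal sketch. The exponential recency-weighting below is my own simplifying assumption for illustration, not the paper's exact construction:

```python
from math import exp

def projected_location(history, decay=0.5):
    """Project a dyad's next interaction location as a recency-weighted
    average of its past interaction coordinates.

    history: list of (time, lat, lon) tuples, most recent last.
    decay:   how quickly older interactions lose influence.
    """
    if not history:
        raise ValueError("no past interactions to project from")
    t_latest = history[-1][0]
    weight_sum = lat_sum = lon_sum = 0.0
    for t, lat, lon in history:
        w = exp(-decay * (t_latest - t))  # more recent -> higher weight
        weight_sum += w
        lat_sum += w * lat
        lon_sum += w * lon
    return lat_sum / weight_sum, lon_sum / weight_sum
```

With `decay=0` this reduces to a plain centroid of past sites; larger values pull the projection toward the most recent conflict locations, which is the intuition behind using past interaction histories to anticipate future ones.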

The effects of an informational intervention on attention to anti-vaccination content on YouTube
Proceedings of the International AAAI Conference on Web and Social Media. 2020. [Paper]
With Omer F. Yalcin, Samuel E. Bestvater, Kevin Munger, Burt L. Monroe, and Bruce A. Desmarais

The spread of misinformation related to health, especially vaccination, is a potential contributor to myriad public health problems. This misinformation is frequently spread through social media. Recently, social media companies have intervened in the dissemination of misinformation regarding vaccinations. In the current study we focus on YouTube. Recognizing the extent of the problem, YouTube implemented an informational modification that affected many videos related to vaccination beginning in February 2019. We collect original data and analyze the effects of this intervention on video viewership. We find that this informational intervention reduced traffic to the affected videos, both overall and in comparison to a carefully matched set of control videos that did not receive the informational modification.

Working Papers

The political influence of non-politicized friends: How do social networks affect the spread of protest information in social media?
Under Review

How do social networks influence the spread of protest information on social media? This article argues that the political characteristics of accounts sharing protest information affect how that information is interpreted and spread by other Twitter users. Specifically, I suggest that whether Twitter accounts are perceived as overtly political or nonpolitical can shape how users respond to signals about political protests. I hypothesize that nonpolitical accounts may exert more influence in spreading protest messages than political accounts, as they are seen as less biased or more trustworthy. To test this theory, I conducted an online experiment using vignettes that simulate the Twitter environment. Participants were exposed to protest-related tweets and asked whether they would retweet or like them; some accounts displayed political traits in their profiles while others appeared nonpolitical. Contrary to my expectations, the results did not reveal a statistically significant difference in participants’ responses between political and nonpolitical profiles. However, the study revealed unexpected patterns, including the role of education in shaping retweet behavior differently across political groups and the influence of context-specific factors, such as protest types and images, on user engagement. These findings suggest that individual characteristics and content features may interact in complex ways, warranting further exploration.

Delayed takedown of illegal content on social media makes moderation futile
With Bao Tran Truong, Erfan Samieyan Sahneh, Gianluca Nogara, Enrico Verdolotti, Florian Saurwein-Scherer, Natascha Just, Luca Luceri, Silvia Giordano, and Filippo Menczer

Social media platforms face legal and regulatory pressures to moderate illegal content through takedown procedures. However, the effectiveness of content moderation varies widely across platforms due to differences in takedown deadlines imposed by various regulations. This study models the relationship between the timeliness of content removal and the persistence of illegal material on social media. By simulating illegal content diffusion using empirical data from sources like the DSA Transparency Database and Facebook NetzDG reports, we demonstrate that while rapid takedown (within hours) significantly reduces illegal content prevalence and exposure, longer delays (beyond 23 days) render moderation efforts futile. Our findings stress the need for regulatory frameworks with enforceable, short deadlines, such as those outlined in German law, to ensure meaningful content removal. These insights provide critical recommendations for policymakers aiming to enhance online safety and improve moderation strategies.
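The saturation logic behind this finding can be shown with a back-of-the-envelope toy model (a hypothetical sub-critical cascade of my own, not the study's empirical simulation from DSA and NetzDG data):

```python
def total_exposure(spread_rate, takedown_delay_hours):
    """Toy diffusion: each hour the currently active copies of an illegal
    item generate `spread_rate` new views, until moderators remove it.

    With spread_rate < 1 the cascade dies out on its own, so cumulative
    exposure approaches a plateau of 1 / (1 - spread_rate) views.
    """
    active = 1.0   # the original post
    exposed = 1.0  # cumulative views so far
    for _ in range(takedown_delay_hours):
        active *= spread_rate
        exposed += active
    return exposed
```

With `spread_rate=0.8`, removal after 2 hours caps exposure at 2.44 views per seed, while waiting hundreds of hours lets exposure reach the plateau of 5 — at which point acting at all no longer changes anything. This is the mechanism behind the paper's claim that takedowns past a certain delay become futile.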

Large language models can infer political partisanship from general discourse
With Byunghwee Lee, Yong-Yeol Ahn, Filippo Menczer, Jisun An, and Haewoon Kwak

Political campaigns increasingly use online micro-targeting, fueled by social media data, raising concerns about its potential for manipulation. The advent of large language models (LLMs) has heightened these concerns by making political micro-targeting more accessible. This study explores a privacy risk associated with LLMs: inferring users’ political orientation from their texts, including those written in general contexts. We evaluate the ability of LLMs to infer political affiliation using two major online platforms: Debate.org, a general debate site, and Reddit, a diverse collection of online communities. Our results show that LLMs achieve high accuracy in inferring users’ political partisanship, even from texts that are not explicitly political, as demonstrated by experiments with GPT-4o and Llama-3.1-8B. Furthermore, incorporating confidence scores during inference enhances accuracy when aggregating predictions at the user level. Notably, political partisanship labels inferred from general-context texts can reliably predict users’ stances on specific political issues, underscoring the potential for more targeted and precise political micro-targeting.
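The user-level aggregation step can be sketched as confidence-weighted voting over per-post LLM predictions. The function name and voting scheme below are illustrative assumptions, not necessarily the paper's exact procedure:

```python
def aggregate_partisanship(post_predictions):
    """Combine per-post (label, confidence) pairs produced by an LLM
    into a single user-level label by summing confidence per label.
    """
    scores = {}
    for label, confidence in post_predictions:
        scores[label] = scores.get(label, 0.0) + confidence
    # the label with the highest total confidence wins
    return max(scores, key=scores.get)
```

For example, a user whose posts are classified ("Democrat", 0.9), ("Republican", 0.6), ("Democrat", 0.5) is labeled "Democrat" even though one post pointed the other way — weighting by confidence lets strong signals outvote weak ones.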

Comparison of the credibility of news shared on four platforms during the 2022 midterm elections: Twitter, Facebook, Instagram, and Reddit
With Ozgur Can Seckin

Social media platforms have become primary sources for accessing and consuming political news, aligning with the ongoing digital transformation of the media landscape. While this transformation has facilitated easier access to information, concerns regarding the over-sharing of news from low-credibility sources and partisan-driven news sharing behaviors have emerged as significant issues for both the scientific community and policymakers. Despite various studies on this topic, there remains surprisingly little understanding of how users’ political news sharing behavior differs among different social media platforms. In this article, we compare the patterns of news sharing during a major political event, the United States 2022 midterm election, across three distinct social media platforms: Twitter, Meta (encompassing Facebook and Instagram), and Reddit. We leverage large-scale data collected during the election cycle. Our findings indicate differences in the news sources shared on each platform, in terms of both credibility and partisanship. News sources shared on Reddit have higher credibility and are relatively left-leaning compared to those on Twitter and Meta. The study also reveals consistent patterns across all three platforms, indicating that right-leaning URLs tend to be associated with lower credibility, in line with existing literature. However, notable differences among the platforms emerge even when comparing URLs with similar partisan leanings. These findings underscore the importance of conducting multi-platform research on this topic, which can enhance our understanding of the overall news-sharing environment of social media.

The persistence of contrarianism on Twitter: Mapping users’ sharing habits for the Ukraine war, COVID-19 vaccination, and the 2020 Midterm elections
With David Axelrod and John Paolillo

Empirical studies of online disinformation emphasize matters of public concern such as the COVID-19 pandemic, foreign election interference, and the Russo-Ukraine war, largely in studies that treat the topics separately. Comparatively fewer studies attempt to relate such disparate topics and address the extent to which they share behaviors. In this study, we compare three samples of Twitter data on COVID-19 vaccination, the Ukraine war and the 2020 midterm elections, to ascertain how distinct ideological stances of users across the three samples might be related. Our results indicate the emergence of a contrast between institutionally-oriented stances and a broad contrarian stance. The contrarian position is most clearly defined by retweeting conspiratorial content targeting public health, democratic institutions and US foreign policy. We confirm the existence of ideologically coherent cross-subject stances among Twitter users, but in a manner not squarely aligned with right-left political orientations.

Work in progress

Twitching, fast and slow: A field experiment in political streams
With Chloe Ahn, Drew Dimmery, and Kevin Munger

Online livestreaming has become a prominent hub for online conversations with a variety of content, including politics. Unlike asynchronous political expressions in online spaces, livestreaming chats are more vulnerable to becoming environments where extreme speech is pervasive. To mitigate the spread of toxic speech online, we conducted a field experiment in which we intervened in the streaming chat by entering the livestream, leaving comments, and measuring subsequent changes in the discussion. A confederate joined each stream and began commenting in a style that was randomized based on two factors: whether the comments were polite or less polite, and whether the comments were ideologically congruent or incongruent with the leaning of the stream. Our results indicate that sharing a polite or ideologically incongruent comment during a political livestream can reduce the toxicity of speech from other commenters and increase the substantiveness of comments shared. However, these benefits came at the cost of reduced user engagement within the chats. While like-minded, less-restrictive expressions online promote engagement with other users, they may also incentivize extreme speech norms that lack substantive reflection on the discussed topics. Our findings have broader implications for studying the relatively new media affordance of livestreaming and important trade-offs in online platform design.