Cody Buntain

Assistant Professor of Information

University of Maryland

Areas of Expertise: Algorithmic Transparency, Privacy, and Mis/disinformation

Cody Buntain is an assistant professor in the College of Information with an affiliate appointment in the University of Maryland Institute for Advanced Computer Studies. His work examines how people use online information spaces during crises and political unrest, with a focus on information quality and how we can make better, more informative online spaces.

Featured Publications

  • Alizadeh, M., Shapiro, J. N., Buntain, C., & Tucker, J. A. (2020). Content-Based Features Predict Social Media Influence Operations. Science Advances, 6(30), eabb5824.

    Abstract: We study how easy it is to distinguish influence operations from organic social media activity by assessing the performance of a platform-agnostic machine learning approach. Our method uses public activity to detect content that is part of coordinated influence operations based on human-interpretable features derived solely from content. We test this method on publicly available Twitter data on Chinese, Russian, and Venezuelan troll activity targeting the United States, as well as the Reddit dataset of Russian influence efforts. To assess how well content-based features distinguish these influence operations from random samples of general and political American users, we train and test classifiers on a monthly basis for each campaign across five prediction tasks. Content-based features perform well across period, country, platform, and prediction task. Industrialized production of influence campaign content leaves a distinctive signal in user-generated content that allows tracking of campaigns from month to month and across different accounts.

    Full Paper
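
    A minimal sketch of the content-feature idea described above, assuming toy data, invented feature choices, and scikit-learn standing in for the authors' actual pipeline:

      # Hand-crafted, human-interpretable features per post; a standard
      # classifier then separates campaign content from organic activity.
      # In the paper this evaluation is repeated month by month per campaign.
      import re
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      def content_features(text):
          """Simple, interpretable per-post features (illustrative choices)."""
          words = text.split()
          return [
              len(words),                                  # post length
              sum(w.startswith("#") for w in words),       # hashtag count
              sum(w.startswith("@") for w in words),       # mention count
              len(re.findall(r"https?://", text)),         # URL count
              sum(c.isupper() for c in text) / max(len(text), 1),  # caps ratio
          ]

      # Toy examples: 1 = influence-operation post, 0 = organic post.
      posts = [
          ("BREAKING!!! #election rigged, share now http://example.com", 1),
          ("Had a great coffee with friends this morning", 0),
          ("#Vote #Truth watch this http://example.com http://example.org", 1),
          ("Anyone know a good plumber near College Park?", 0),
      ]
      X = np.array([content_features(text) for text, _ in posts])
      y = np.array([label for _, label in posts])

      clf = RandomForestClassifier(n_estimators=100, random_state=0)
      print(cross_val_score(clf, X, y, cv=2))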

  • Buntain, C. (2022). An Imperative to Assess Socio-Technical Impact of Algorithms in Online Spaces and Wresting Responsibility from Technology Companies. 2022 IEEE 8th International Conference on Collaboration and Internet Computing (CIC), 45–54.

    Abstract: Today’s information ecosystem has a plethora of tools to support decision-making and cognitive offloading. As this ecosystem has evolved and grown, however, we increasingly rely on recommendation systems, algorithmic curation, and other technologies to make decisions on our behalf and help manage an otherwise-overwhelming information environment. This cognitive offloading comes at a price, as these technologies make decisions that govern the information sources to which we are exposed, the celebrities we are likely to follow, the content likely to become popular, the order of information shown to us, and even the state of our emotions. Despite the potential effects of this tradeoff, the true socio-technical impact of this reliance and imbuing these technologies with so much influence over the information space remains both an open and a controversial question. This paper outlines different forms of this question, wherein I summarize the controversies surrounding these technologies and the mixed evidence from academic research on them. I then describe the barriers impeding resolution of these concerns and how the current trajectories of online social platforms and the technology companies that own them are unlikely to provide answers to these issues without external intervention. I close by describing a "socio-technical safety triad" for online social spaces, where the responsibilities and incentives for reducing societal harm are devolved from corporations and spread across academic, governmental, and corporate stakeholders.

    Full Paper

  • Buntain, C., Bonneau, R., Nagler, J., & Tucker, J. A. (2021). YouTube Recommendations and Effects on Sharing Across Online Social Platforms. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 11:1-11:26.

    Abstract: In January 2019, YouTube announced its platform would exclude potentially harmful content from video recommendations while allowing such videos to remain on the platform. While this action is intended to reduce YouTube's role in propagating such content, continued availability of these videos via hyperlinks in other online spaces leaves an open question of whether such actions actually impact sharing of these videos in the broader information space. This question is particularly important as other online platforms deploy similar suppressive actions that stop short of deletion despite limited understanding of such actions' impacts. To assess this impact, we apply interrupted time series models to measure whether sharing of potentially harmful YouTube videos in Twitter and Reddit changed significantly in the eight months around YouTube's announcement. We evaluate video sharing across three curated sets of anti-social content: a set of conspiracy videos that have been shown to experience reduced recommendations in YouTube, a larger set of videos posted by conspiracy-oriented channels, and a set of videos posted by alternative influence network (AIN) channels. As a control, we also evaluate these effects on a dataset of videos from mainstream news channels. Results show conspiracy-labeled and AIN videos that have evidence of YouTube's de-recommendation do experience a significant decreasing trend in sharing on both Twitter and Reddit. At the same time, however, videos from conspiracy-oriented channels actually experience a significant increase in sharing on Reddit following YouTube's intervention, suggesting these actions may have unintended consequences in pushing less overtly harmful conspiratorial content. Mainstream news sharing likewise sees increases in trend on both platforms, suggesting YouTube's suppression of particular content types has a targeted effect. In summary, while this work finds evidence that reducing exposure to anti-social videos within YouTube potentially reduces sharing on other platforms, increases in the level of conspiracy-channel sharing raise concerns about how producers -- and consumers -- of harmful content are responding to YouTube's changes. Transparency from YouTube and other platforms implementing similar strategies is needed to evaluate these effects further.

    Full Paper
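
    A minimal interrupted-time-series sketch in the spirit of the analysis above, assuming simulated weekly share counts and a simple statsmodels regression; the paper's models and data are more involved:

      # Regress shares on time, a post-intervention indicator, and time since
      # the intervention: 'post' captures the immediate level change and
      # 't_since' the change in trend after YouTube's announcement.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      weeks = np.arange(34)          # roughly eight months of weekly points
      intervention_week = 17         # announcement date, in week units

      df = pd.DataFrame({
          "t": weeks,
          "post": (weeks >= intervention_week).astype(int),
      })
      df["t_since"] = np.where(df["post"] == 1, df["t"] - intervention_week, 0)
      # Simulated sharing counts with a downward trend break after the change.
      df["shares"] = 200 + 1.5 * df["t"] - 3.0 * df["t_since"] + rng.normal(0, 5, len(df))

      model = smf.ols("shares ~ t + post + t_since", data=df).fit()
      print(model.params)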

  • Buntain, C., Bonneau, R., Nagler, J., & Tucker, J. A. (2023). Measuring the Ideology of Audiences for Web Links and Domains Using Differentially Private Engagement Data. Proceedings of the International AAAI Conference on Web and Social Media, 17, 72–83.

    Abstract: This paper demonstrates the use of differentially private hyperlink-level engagement data for measuring ideologies of audiences for web domains, individual links, or aggregations thereof. We examine a simple metric for measuring this ideological position and assess the conditions under which the metric is robust to injected, privacy-preserving noise. This assessment provides insights into and constraints on the level of activity one should observe when applying this metric to privacy-protected data. Grounding this work is a massive dataset of social media engagement activity where privacy-preserving noise has been injected into the activity data, provided by Facebook and the Social Science One (SS1) consortium. Using this dataset, we validate our ideology measures by comparing to similar, published work on sharing-based, homophily- and content-oriented measures, where we show consistently high correlation (>0.87). We then apply this metric to individual links from several popular news domains and demonstrate how one can assess link-level distributions of ideological audiences. We further show this estimator is robust to selection of engagement types besides sharing, where domain-level audience-ideology assessments based on views and likes show no significant difference compared to sharing-based estimates. Estimates of partisanship, however, suggest the viewing audience is more moderate than the audiences who share and like these domains. Beyond providing thresholds on sufficient activity for measuring audience ideology and comparing three types of engagement, this analysis provides a blueprint for ensuring robustness of future work to differential privacy protections.

    Full Paper
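
    A minimal sketch of the audience-ideology metric described above, assuming simulated users, illustrative ideology scales, and Laplace noise standing in for the privacy-preserving noise in the SS1 data:

      # A domain's score is the engagement-weighted average ideology of the
      # users who engaged with it; the question is how far injected noise on
      # the engagement counts moves that estimate.
      import numpy as np

      rng = np.random.default_rng(42)
      n_users = 5000
      user_ideology = rng.normal(0.2, 1.0, n_users)         # -left ... +right
      engagements = rng.poisson(3, n_users).astype(float)   # engagement counts

      def audience_ideology(ideology, counts):
          return np.average(ideology, weights=counts + 1e-9)

      clean = audience_ideology(user_ideology, engagements)

      # Differentially private release: noise added to each engagement count.
      noisy_counts = np.clip(engagements + rng.laplace(0, 5, n_users), 0, None)
      noisy = audience_ideology(user_ideology, noisy_counts)

      # With enough underlying activity the two estimates converge, which is
      # the robustness condition the paper characterizes.
      print(f"clean estimate: {clean:.3f}, noisy estimate: {noisy:.3f}")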

  • Buntain, C., Innes, M., Mitts, T., & Shapiro, J. (2023). Cross-Platform Reactions to the Post-January 6 Deplatforming. Journal of Quantitative Description: Digital Media, 3.

    Abstract: We study changes in social media usage following the ‘Great Deplatforming’ in the aftermath of the 6 January 2021 attack on the US Capitol. Following the attack, several major platforms banned thousands of accounts, ostensibly to limit misinformation about voter fraud and suppress calls for violence. At the same time, alternative platforms like Gab, BitChute, and Parler welcomed these deplatformed individuals. We identify three key patterns: First, in studying the platforms that emerged among users seeking alternative spaces, we see high frequencies of users bridging these communities announcing their intent to join non-mainstream platforms to their audiences on mainstream platforms. Second, focusing on platforms that were created to be alternative, anti-censorship spaces, deplatforming preceded a sustained increase in engagement with Gab across Twitter, Reddit, and Google search, while Parler saw a steep decline in engagement. Third, examining the language in these spaces, toxic discourse increased briefly on Reddit and Twitter but returned to normal after the deplatforming, while Gab became more toxic. These results suggest that while deplatforming may precede a reduction in targeted discussions within a specific platform, it can incentivize users to seek alternative platforms where these discussions are less regulated and often more extreme.

    Full Paper
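
    A minimal sketch of one piece of the analysis above, a before/after comparison of per-post toxicity scores around the deplatforming date, assuming simulated scores for a single platform and a simple two-sample test:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      days = rng.integers(-60, 60, 2000)   # days relative to the bans
      # Simulated toxicity in [0, 1], with a small bump after the intervention.
      toxicity = np.clip(rng.normal(0.25 + 0.05 * (days >= 0), 0.1), 0, 1)

      before = toxicity[days < 0]
      after = toxicity[days >= 0]
      t_stat, p_value = stats.ttest_ind(after, before, equal_var=False)
      print(f"mean before={before.mean():.3f}, after={after.mean():.3f}, p={p_value:.3g}")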

  • Golovchenko, Y., Buntain, C., Eady, G., Brown, M. A., & Tucker, J. A. (2020). Cross-Platform State Propaganda: Russian Trolls on Twitter and YouTube During the 2016 US Presidential Election (SSRN Scholarly Paper 3552886).

    Abstract: This paper investigates online propaganda strategies of the Internet Research Agency (IRA)—Russian “trolls”—during the 2016 U.S. presidential election. We assess claims that the IRA sought either to (1) support Donald Trump or (2) sow discord among the U.S. public by analyzing hyperlinks contained in 108,781 IRA tweets. Our results show that although IRA accounts promoted links to both sides of the ideological spectrum, “conservative” trolls were more active than “liberal” ones. The IRA also shared content across social media platforms, particularly YouTube—the second-most linked destination among IRA tweets. Although overall news content shared by trolls leaned moderate to conservative, we find troll accounts on both sides of the ideological spectrum, and these accounts maintain their political alignment. Links to YouTube videos were decidedly conservative, however. While mixed, this evidence is consistent with the IRA’s supporting the Republican campaign, but the IRA’s strategy was multifaceted, with an ideological division of labor among accounts. We contextualize these results as consistent with a pre-propaganda strategy. This work demonstrates the need to view political communication in the context of the broader media ecology, as governments exploit the interconnected information ecosystem to pursue covert propaganda strategies.

    Full Paper
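
    A minimal sketch of the hyperlink-based approach described above, assuming invented domain ideology scores and toy tweets rather than the paper's data:

      # Map each shared domain to an ideology score, then average per account
      # to see whether a troll account leans "conservative" or "liberal".
      from urllib.parse import urlparse
      from collections import defaultdict

      domain_ideology = {                 # hypothetical scores: -1 left, +1 right
          "leftnews.example": -0.8,
          "centrist.example": 0.0,
          "rightnews.example": 0.8,
          "youtube.com": 0.3,
      }

      tweets = [
          ("troll_A", "check this out https://rightnews.example/story1"),
          ("troll_A", "must watch https://youtube.com/watch?v=abc"),
          ("troll_B", "important read https://leftnews.example/op-ed"),
      ]

      link_scores = defaultdict(list)
      for account, text in tweets:
          for token in text.split():
              if token.startswith("http"):
                  domain = urlparse(token).netloc
                  if domain in domain_ideology:
                      link_scores[account].append(domain_ideology[domain])

      for account, scores in link_scores.items():
          print(account, sum(scores) / len(scores))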
