Hot off the press! Budak, Garrett, and Sude (in press) in Communication Methods and Measures!

Budak, C., Garrett, R. K., & Sude, D. (in press). Better crowdcoding: Strategies for promoting accuracy in crowdsourced content analysis. Communication Methods and Measures.

In this work, we evaluate different instruction strategies to improve the quality of
crowdcoding for the concept of civility. We test the effectiveness of training,
codebooks, and their combination through 2×2 experiments conducted on two
different populations—students and Amazon Mechanical Turk workers. In
addition, we perform simulations to evaluate the trade-off between cost and
performance associated with different instructional strategies and the number of
human coders. We find that training improves crowdcoding quality, while
codebooks do not. We further show that relying on several human coders and
applying majority rule to their assessments significantly improves performance.
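The benefit of majority rule among several coders can be illustrated with a quick simulation. This is a hypothetical sketch of the general logic (each coder labels an item correctly with some independent probability), not the paper's actual simulation procedure; the function names and parameter values are my own.

```python
import random
from collections import Counter

def majority_vote(labels):
    """Return the most common label among the coders' judgments."""
    return Counter(labels).most_common(1)[0][0]

def simulate_accuracy(p_correct, n_coders, n_items=10_000, seed=42):
    """Estimate accuracy of majority-rule aggregation when each coder
    independently labels an item correctly with probability p_correct.
    Uses an odd n_coders so ties cannot occur."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_items):
        # True label is 'civil'; each coder errs independently.
        votes = ['civil' if rng.random() < p_correct else 'uncivil'
                 for _ in range(n_coders)]
        if majority_vote(votes) == 'civil':
            correct += 1
    return correct / n_items

for n in (1, 3, 5, 7):
    print(n, round(simulate_accuracy(0.7, n), 3))
```

With individually 70%-accurate coders, majority accuracy climbs to roughly 0.78 with three coders, 0.84 with five, and 0.87 with seven. This is the Condorcet jury theorem logic: as long as each coder is better than chance and errors are independent, adding coders and taking the majority improves aggregate accuracy.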

Machine Learning Applications: Sunday Thoughts

A useful resource for building up intuitions about machine learning (in this case, supervised machine learning)….

People in my field will try to categorize social media posts as civil {uncivil} or low {high} in deliberative quality using these techniques. The goal, there, is to train machines that can, with a high but not perfect degree of accuracy, sort through hundreds of thousands of posts. This can not only help generate a pool of ecologically valid stimuli for use in experiments but also help people to understand emergent crises.

Organizations working on CVE (countering violent extremism) will identify individuals who are being radicalized using similar techniques. They can then try introducing “alternative content” (content that challenges extremist narratives). According to a representative’s talk at New America’s 2020 Future Security Forum, these potential extremists do click on that alternative content. Note: these “low effort” tech solutions are not perfect. Some people are flagged for more effortful counter- and deradicalization efforts.

These efforts succeed in part with the cooperation of social media companies. They can also be pursued independently (for good or for ill: imagine extremists using machine learning to identify good targets for their messages).

However, my own view is that introducing new content is not enough. Content may serve as an exemplar – impacting perceptions of public opinion (within groups) as well as of normative behaviors. It may also, of course, serve as a source of {mis}{dis}information. Each of these functions can have a downside that is hard for the individual to correct.

Perceptions of public opinion and norms based on exemplars may in fact be quite inaccurate (as when participants in Sude et al. 2019 shifted their perceptions of public opinion after reading a single article). Exemplification is largely an automatic process: people are not necessarily aware of it, and it can persist even in the presence of base-rate information. When people “like” the revised perception of public opinion (e.g., when it implies that their opinions are popular), this can also lead to a motivation to embrace and defend these revised estimates, even in the face of counter-evidence.

A social media company, aware of this impact, could consciously gather evidence about the “true” base rates (on its site and, based on polling data, nationally). The least it could do is provide base rates that partially counter the exemplification effect. Alternatively, it could prompt users to “anchor” these base rates on specific groups, both by presenting a finer-grained profile of users who resemble that exemplar (individuating them) and by punctuating the social media experience with survey questions like: “What percentage of Republicans in your social network on Twitter do you think share this poster’s views?” accompanied by “What percentage of Republicans nationally do you think share this poster’s views?” The goal would be to highlight the potential differences between the “answers” to these questions. It is more likely that the exemplar provides meaningful information about your social network than about the national public.

Similarly, with regard to asking people to evaluate source and content quality before endorsing or sharing, putting the onus on the individual to be an “A student” is impractical. Affordances that facilitate external citations and links, and thus a stronger web of evidence, shift the burden away from the individual and toward a social media company, which is better able to handle it. Importantly, the social media company could also categorize different types of sources, provide a transparent justification for why “mainstream” sources are more likely to be accurate (e.g., the legal and institutional processes that ensure higher accuracy), and then provide a more objective rubric by which individuals could evaluate alternative or non-institutional sources of information. This might actually be a good way for individuals who are doing really high-quality work to get recognized: being scored consistently well on the more objective rubric could garner you a badge. Importantly, these rubrics would have to require concrete evidence from the text being evaluated (screenshots, for example).

From an economic perspective, of course, social media companies need to provide plenty of emotional rewards for putting the effort in. Making it easier to get an A is not enough; the A has to light up your heart (or the “addiction pathways” in your brain). Now that I’ve cleared my head of the ideas rustling around, happy Sunday!

Publicity for “Self-expression just a click away”

This article generated interest from outlets ranging from science news sites to lifestyle magazines to career-building sites.

Hot off the Press! Self-expression just a click away: Source interactivity impacts on confirmation bias and political attitudes.

Sude, D. J., Pearson, G. D. H., & Knobloch-Westerwick, S. (in press). Self-expression just a click away: Source interactivity impacts on confirmation bias and political attitudes. Computers in Human Behavior.

Abstract: Information is now commonly consumed online, often displayed in conjunction with self-expression affordances (i.e., likes, votes) that create a sense of “self as source.” Sundar et al.’s (2015) theory of interactive media effects (TIME) conceptualizes such affordances as source interactivity (SI). An experiment examined medium effects of SI as well as message effects on attitudes. It tracked selective exposure to attitude-consistent vs. –discrepant political messages, to capture confirmation bias, and manipulated SI presence (affordance to up-vote or down-vote articles present or absent) as within-subjects factors. SI use and attitude change were captured. SI reduced selective exposure to attitude-consistent content. However, use of SI affected attitude reinforcement independently as well. Hence, users shaped their own attitudes both by selectively reading articles and expressing their views through SI. Directions for theory development are offered.

Event: School of Communication 2020 Peer Mentor Award

At this year’s School of Communication “Comm Day,” I was delighted to receive the 2020 Peer Mentor Award. Nominated and voted upon by my peers, this award recognizes the informal mentoring that occurs within a department. It was nice to know that the long talks about statistical analyses, research ideas, and navigating intradepartmental relationships were helpful!

Informal support – whether one is functioning as teacher, sympathetic shoulder, or cheerleader – helps a department function, and I was one among many supportive graduate colleagues in the School.

Event: Introducing Dr. Sude

On May 5th I had the pleasure of having my dissertation officially accepted by The Ohio State University’s graduate school.

My engaged and engaging committee was composed of Dr. Silvia Knobloch-Westerwick, Dr. Kelly Garrett, Dr. Jason Coronel, and Dr. Gerald Kosicki, of The Ohio State University.

Its title is: More Than Partisans: Factors that Promote and Constrain Partisan Selective Exposure with Implications for Political Polarization

The April defense was over Zoom 🙂

Hot off the press! Peers versus Pros: Confirmation bias in selective exposure to user-generated versus professional media messages and its consequences

Westerwick, A., Sude, D.J., Robinson, M., & Knobloch-Westerwick, S. (2020). Peers versus pros: Confirmation bias in selective exposure to user-generated versus professional media messages and its consequences. Mass Communication and Society, 23, 510-536.

For a free eprint click here.

For a preprint version click westerwick, sude, robinson, & knobloch-westerwick (accepted).

Abstract: Political information is now commonly consumed embedded in user-generated content and social media. Hence, peer users (as opposed to professional journalists) have become frequently encountered sources of such information. This experiment tested competing hypotheses on whether exposure to attitude-consistent versus -discrepant political messages (confirmation bias) depends on association with peer versus professional sources, through observational data and multi-level modeling. Results showed the confirmation bias was differentiated, as attitude importance fostered it only in the peer sources condition: When consuming user-generated posts on political issues, users showed a greater confirmation bias the more importance they attached to a specific political issue. Furthermore, exposure generally affected attitudes in line with message stance, as attitude-consistent exposure reinforced attitudes, while attitude-discrepant exposure weakened them (still detectable a day after exposure). Attitude impacts were mediated by opinion climate perceptions.

Hot off the press! Toeing the Party Lie

For a brief summary, please see the Publications section.

Garrett, R.K., Sude, D.J., & Riva, R. (2020). Toeing the party lie: Ostracism promotes endorsement of partisan falsehoods. Political Communication, 37, 157-172.

For a free eprint click.

For a preprint click Garrett et al – Toeing the party lie (prepress).


Research suggests that ostracism could promote endorsement of partisan falsehoods. Socially excluded individuals are uniquely attentive to distinctions between in-groups and out-groups, and act in ways intended to promote group belonging, potentially including a greater willingness to accept claims made by other group members. We test this assertion with a 2 (ostracism) X 2 (anonymity) X 2 (topic) mixed factorial design using the Ostracism Online paradigm with a demographically diverse online sample of Americans (N = 413). Results suggest that when ostracized, both Democrats and Republicans are more likely to endorse party-line falsehoods about the 2016 U.S. Presidential election. These effects are contingent on several individual-level differences, including strength of ideological commitment, cognitive reflection, and faith in intuition for facts. These patterns failed to replicate with fracking, a politically charged science topic.

Event: ICA 2019

Had the joy of giving two presentations at the 2019 International Communication Association conference in DC. Even the Tuesday session (end of conference) was pleasantly packed.

The first presentation focused on a finding that the gender of the author of a political opinion piece was more influential in shaping whether people selected and spent time reading that piece than the stance of its political content! In other words, our cross-partisan identities sometimes matter more than our partisan ones and can foster “reading across party lines.” This work was the product of a collaboration with Dr. Westerwick and Dr. Knobloch-Westerwick, as well as the lab’s talented undergraduate programmers.

My second talk was on belief polarization in response to social exclusion (in collaboration with Dr. Garrett and Dr. Riva). We looked (with a national panel survey) at whether Democrats and Republicans who had just been socially excluded would be more resistant to a political fact-check message. The Democrat-targeted message was about Russian tampering with vote counts; the Republican-targeted message was about vote fraud. After exclusion, weaker partisans were just as inaccurate as strong partisans. This shows both that a need to affiliate can drive belief polarization and that even everyday social exclusion can have important impacts in a world where news is increasingly consumed on social media.
Lots of good questions at the end of both talks!

Hot off the press! “Pick and choose” opinion climate: How browsing of political messages shapes public opinion perceptions and attitudes

Sude, D., Knobloch-Westerwick, S., Robinson, M., & Westerwick, A. (2019). “Pick and choose” opinion climate: How browsing of political messages shapes public opinion perceptions and attitudes. Communication Monographs, 4, 457-478.

For a free eprint click.

For the preprint click Sude et al. – Pick and Choose Opinion Climate (2019).

High-choice media environments allow people to cocoon themselves with like-minded messages (confirmation bias), which could shape both individual attitudes and perceived prevalence of opinions. This study builds on motivated cognition and spiral of silence theory to disentangle how browsing political messages (both selective exposure as viewing full articles and incidental exposure as encountering leads only) shapes perceived public opinion and subsequently attitudes. Participants (N = 115) browsed online articles on controversial topics; related attitudes and public opinion perceptions were captured before and after. Multi-level modeling demonstrated a confirmation bias. Both selective and incidental exposure affected attitudes per message stance, with stronger impacts for selective exposure. Opinion climate perceptions mediated selective exposure impacts on attitudes.