Event: Introducing Dr. Sude

On May 5th I had the pleasure of having my dissertation officially accepted by The Ohio State University’s graduate school.

My engaged and engaging committee was composed of Dr. Silvia Knobloch-Westerwick, Dr. Kelly Garrett, Dr. Jason Coronel, and Dr. Gerald Kosicki, all of The Ohio State University.

Its title is: More Than Partisans: Factors that Promote and Constrain Partisan Selective Exposure with Implications for Political Polarization

The April defense was over Zoom 🙂

Hot off the press! Peers versus Pros: Confirmation bias in selective exposure to user-generated versus professional media messages and its consequences

Westerwick, A., Sude, D.J., Robinson, M., & Knobloch-Westerwick, S. (2020). Peers versus pros: Confirmation bias in selective exposure to user-generated versus professional media messages and its consequences. Mass Communication and Society, 23, 510-536. https://www.tandfonline.com/doi/full/10.1080/15205436.2020.1721542

For a free eprint click here.

For a preprint version click westerwick, sude, robinson, & knobloch-westerwick (accepted).

Abstract: Political information is now commonly consumed embedded in user-generated content and social media. Hence, peer users (as opposed to professional journalists) have become frequently encountered sources of such information. This experiment tested competing hypotheses on whether exposure to attitude-consistent versus -discrepant political messages (confirmation bias) depends on association with peer versus professional sources, through observational data and multi-level modeling. Results showed the confirmation bias was differentiated, as attitude importance fostered it only in the peer sources condition: When consuming user-generated posts on political issues, users showed a greater confirmation bias the more importance they attached to a specific political issue. Furthermore, exposure generally affected attitudes in line with message stance, as attitude-consistent exposure reinforced attitudes, while attitude-discrepant exposure weakened them (still detectable a day after exposure). Attitude impacts were mediated by opinion climate perceptions.

Hot off the press! Toeing the Party Lie

For a brief summary, please see the Publications section.

Garrett, R.K., Sude, D.J., & Riva, P. (2020). Toeing the party lie: Ostracism promotes endorsement of partisan falsehoods. Political Communication, 37, 157-172. https://doi.org/10.1080/10584609.2019.1666943

For a free eprint click here.

For a preprint click Garrett et al – Toeing the party lie (prepress).

Abstract:

Research suggests that ostracism could promote endorsement of partisan falsehoods. Socially excluded individuals are uniquely attentive to distinctions between in-groups and out-groups, and act in ways intended to promote group belonging, potentially including a greater willingness to accept claims made by other group members. We test this assertion with a 2 (ostracism) X 2 (anonymity) X 2 (topic) mixed factorial design using the Ostracism Online paradigm with a demographically diverse online sample of Americans (N = 413). Results suggest that when ostracized, both Democrats and Republicans are more likely to endorse party-line falsehoods about the 2016 U.S. Presidential election. These effects are contingent on several individual-level differences, including strength of ideological commitment, cognitive reflection, and faith in intuition for facts. These patterns failed to replicate with fracking, a politically charged science topic.

Event: ICA 2019

Had the joy of giving two presentations at the 2019 International Communication Association conference in DC. Even the Tuesday session (end of conference) was pleasantly packed.

The first presentation focused on a finding that the gender of the author of a political opinion piece was more influential in shaping whether people selected and spent time reading that piece than the stance of its political content! In other words, our cross-partisan identities sometimes matter more than our partisan ones and can foster “reading across party lines.” The work was the product of a collaboration with Dr. Westerwick and Dr. Knobloch-Westerwick, as well as the lab’s talented undergraduate programmers.

My second talk was on belief polarization in response to social exclusion (in collaboration with Dr. Garrett and Dr. Riva). We looked (with a national panel survey) at whether Democrats and Republicans who had just been socially excluded would be more resistant to a political fact-check message. The Democrat-targeted message was about Russian tampering with vote counts; the Republican-targeted message was about voter fraud. After exclusion, weaker partisans were just as inaccurate as strong partisans. This shows both that a need to affiliate can drive belief polarization and that even everyday social exclusion can have important impacts in a world where news is increasingly consumed on social media.
Lots of good questions at the end of both talks!

Hot off the press! “Pick and choose” opinion climate: How browsing of political messages shapes public opinion perceptions and attitudes

Sude, D., Knobloch-Westerwick, S., Robinson, M., & Westerwick, A. (2019). “Pick and choose” opinion climate: How browsing of political messages shapes public opinion perceptions and attitudes. Communication Monographs, 86, 457-478. https://doi.org/10.1080/03637751.2019.1612528

For a free eprint click here.

For the preprint click Sude et al. – Pick and Choose Opinion Climate (2019).

Abstract: High-choice media environments allow people to cocoon themselves with like-minded messages (confirmation bias), which could shape both individual attitudes and perceived prevalence of opinions. This study builds on motivated cognition and spiral of silence theory to disentangle how browsing political messages (both selective exposure as viewing full articles and incidental exposure as encountering leads only) shapes perceived public opinion and subsequently attitudes. Participants (N = 115) browsed online articles on controversial topics; related attitudes and public opinion perceptions were captured before and after. Multi-level modeling demonstrated a confirmation bias. Both selective and incidental exposure affected attitudes per message stance, with stronger impacts for selective exposure. Opinion climate perceptions mediated selective exposure impacts on attitudes.

Communicating Science: Statistical Thinking Can Organize Qualitative Analysis

With my co-presenter, I took our lab on a whirlwind tour of complex regression models (serial mediation, parallel mediation, multi-level models, and multi-level mediation). If you can imagine it, and you have meaningful quantitative data, there’s a model for you! (Even if you only use SPSS, there are macros – PROCESS and MLMED – for you.)
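
To make that concrete, here is a minimal sketch of a multi-level model in Python’s statsmodels – simulated data and made-up variable names, not anything from our lab – for the “observations clustered within people” case:

```python
# Toy multi-level (mixed) model: repeated observations nested within people.
# All data are simulated and the variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_people, n_obs = 50, 8
person = np.repeat(np.arange(n_people), n_obs)

# Each person gets their own baseline (the clustering a flat regression ignores).
person_baseline = rng.normal(0, 1, n_people)[person]
x = rng.normal(0, 1, n_people * n_obs)
y = 0.4 * x + person_baseline + rng.normal(0, 1, n_people * n_obs)
df = pd.DataFrame({"person": person, "x": x, "y": y})

# Random intercept per person; the fixed effect of x is estimated across everyone.
model = smf.mixedlm("y ~ x", data=df, groups=df["person"]).fit()
print(model.summary())
```

The same idea scales to students within schools or respondents within media markets; only the grouping variable changes.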

When I was driving back from visiting my parents in Raleigh, one of the things I really looked forward to in Columbus was my multi-level modeling class (finding patterns in data when your observations are clustered within an individual, country, media market, school, etc.). I was excited to be acquiring new tools – new ways of tackling meaningful questions, systematically. Knowledge is power (limited power, sometimes, but power nonetheless).

Statistical tools are not only powerful for measuring complex social situations, but can be powerful for thinking about them as well. I like to joke that stereotypes reflect really simple statistical thinking (mean differences). Intersectionality starts to take different levels of variables into account (regression). Privilege demands thinking in terms of clustering – different people with different traits in different situations (i.e., multi-level regression).

How many articles have you read on topics like privilege that had no guiding framework for thinking about the influence of group-membership, individual traits, overlapping groups, and categories of situations? They tend to fumble. They try to simplify with analogies, but often that simplicity feels artificial. Multi-level regression provides a heuristic framework – a way of organizing how we tackle that complexity. Even if we lack the data for a conclusive analysis – multi-level modeling helps us to articulate our questions, our guesses, and our insights.

It is also something, as the presentation this morning indicated, that can be made accessible in qualitative terms.

Communicating Science: Things to keep in mind when thinking about police shootings . . .

Things to keep in mind when thinking about police shootings:

Police show considerable race-weapon and race-aggression stereotyping on the social-psychologist-designed “shooter task”; other people show even more (Correll and colleagues’ work).

Everyone has to resolve ambiguity – that’s when stereotypes can creep in. Is that a gun or a wallet? Is that person aggressive or scared? High-stress conditions can even lead to sensory distortions. Anyone who has ever been really nervous knows to watch out for this (as opposed to blindly acting on it). For a more everyday example of a sensory distortion: ever misread something while copy editing? Your brain “filled in the gap” with a coherent story, and the typo remained, unseen.

Police departments are not known for cultivating good mental health – but someone with a gun acting out poor mental health is a problem across the board, whether they’re killing themselves or killing someone else.

Citizens have less experience coping with spikes of fear – of adrenaline and cortisol – than police, who have been through training, do.

White citizens are more likely to call cops on black citizens doing “ambiguously criminal” behaviors. Cops, then, are more likely to be monitoring for “suspicious black people.”

Studies of police-driver interactions at traffic stops often see the citizen’s reactions – perfectly at ease versus even politely defensive – leading to more controlling attitudes from police. It would be hard for citizens who have been targeted to ever be perfectly at ease. Heck, even I’ve been harassed by customs agents and police for “looking nervous”.

Mentally ill people are particularly likely to become targets, because any behavior that is not perfectly “safe and predictable” is interpreted as a threat. This is why some departments call in specialists who are better able to assess the situation when mental illness is suspected.

Police, like the rest of us, like their stories – even ones that are more a matter of faith than fact. You can also imagine that departments would vary by how often they actually deal with threat. On the low end, they may, on average, be looking for an opportunity to “suit up”; at the high end, they may live, on average, in a state of chronic stress and fear.

Statistically, we can control for a lot of things – including actual race-based differences in weapons charges and other signs of real versus imagined racial differences in dangerousness. Stereotypes are likely still relevant, even after those things are controlled for. This makes sense, empirically: we apply schemas – ideas about the world – to resolve ambiguity all the time (the typo example). The solution would seem to be better schemas and better methods for gathering information in the moment (both are part of the training that specialized mental health responders receive).

However, statisticians, particularly ones relying on observational data, can cherry-pick which measures they include and which they exclude. They are also often trying to “start a conversation” with others in their field, such that national attention may be secondary. Sometimes “starting a conversation” means generating controversy. Even academics engage in PR, albeit for a limited audience.

So always ask yourself: “Would I expect to see a race-based difference if the neighborhoods, suspects, or the officers were matched on a different set of characteristics?” Statistics don’t provide the final answer, only pieces of the puzzle. To know whether they fit together, you have to look at them closely. In the end, however, most statistics are asking:

“Is there an average difference in Y as we go up one unit on X, constant across (controlling for) levels of these other variables?” Y, for example, could be the likelihood of getting shot; a one-unit change in X could be going from white, coded as 0, to black, coded as 1. That would be a categorical variable. It could be that when people are matched on income, education, etc., a racial difference disappears, remains, or even increases. This is called “controlling for” or “adjusting for” those variables.
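
Here is a minimal sketch of that question in Python’s statsmodels, with simulated data and hypothetical variable names (an illustration of the logic, not an analysis of real shooting data):

```python
# Illustrative only: simulated data and hypothetical variables, not real records.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "race": rng.integers(0, 2, n),      # X: 0 = white, 1 = black
    "income": rng.normal(50, 15, n),    # control, in $1000s
    "education": rng.normal(13, 2, n),  # control, in years
})
# Simulate an outcome that depends on all three predictors.
true_logit = -2 + 0.5 * df["race"] - 0.02 * df["income"] + 0.05 * df["education"]
df["shot"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# The race coefficient is the average difference in log-odds of the outcome as
# X goes from 0 to 1, holding income and education constant.
model = smf.logit("shot ~ race + income + education", data=df).fit()
print(model.summary())
```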

Often, the statistician (and you) could ask: do the odds of getting shot when black versus white depend upon another of those variables, so that when that variable – W, let’s say – is high, the difference in Y (the odds of getting shot) as X goes from 0 (white) to 1 (black) is bigger (or smaller)? So if W is median income, it could be that the gap in the odds of getting shot while black versus white is smaller in areas with high median income, controlling for (holding constant) the percentage of black people living in that area. So many questions can be asked (and partially answered) with statistical models!
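
The moderation version just adds an interaction term. Another self-contained sketch, with the same caveats (simulated data, hypothetical names):

```python
# Illustrative only: does the race gap depend on W, the area's median income?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "race": rng.integers(0, 2, n),       # X: 0 = white, 1 = black
    "income": rng.normal(50, 15, n),     # W: area median income, in $1000s
    "pct_black": rng.uniform(0, 60, n),  # control: % black residents in the area
})
# Simulate a race gap that shrinks as area median income rises.
true_logit = -2 + (1.0 - 0.015 * df["income"]) * df["race"] + 0.01 * df["pct_black"]
df["shot"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# "race * income" expands to race + income + race:income; the race:income
# coefficient is how much the race gap (in log-odds) changes per $1000 of W.
model = smf.logit("shot ~ race * income + pct_black", data=df).fit()
print(model.params[["race", "race:income"]])
```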
 
The mechanics may seem complex, even scary, but understanding what a model is trying to evaluate, and how it works, is not rocket science. Like a jigsaw puzzle, it requires patience, but you can get it done! If you want to try your hand, think about the following and try to break it down into its primary relationships (outcome variables versus focal predictor variables, control variables, and moderators, if any):
http://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0141854

Police shootings involve a rich if terrible tapestry of factors.
 
However, if you’re talking to an everyday person, and they are feeling a sense of concrete personal danger, why not tend and befriend first, discuss and debate second?

Communicating Science: Reading Science News and Empirical Articles – What Not To Do

I recently came across this on my Facebook feed:
https://psmag.com/the-death-of-the-white-working-class-has-been-greatly-exaggerated-1c568d3e6b8c

It is a great example of what happens when you allow a headline to make you curious about methodology as well as topic. Dig deep, and you may uncover misleading, questionable, or just confusing decisions made by researchers and by people reporting on the research. Sometimes this is deliberate, an attempt to influence the policy decisions of people who don’t have time to understand research.

However, in order to get appropriate attention, portions of almost every empirical article you see will be hyped. The introduction, sometimes the abstract, and sometimes the discussion too, will inevitably exaggerate what the researchers actually found. Those sections are designed to say, “Hey, if our interpretation is correct, these results will be really interesting.”

A social scientist reads the abstract and skips right to the manipulations (what the experimenters presented differently to different people) and the measures (what they measured). That’s the concrete, meaty detail. That’s the difference between being told a movie fits in a certain genre and watching the actual movie.

Then, we look at the statistics, acknowledging that these estimates (models designed to uncover an average trend amidst all that variation) are the best the researchers could do with the tools they had. (Let’s assume they were researching in good faith.) We ask questions about what they didn’t report (word limits) and what they might have found but not been able to tell a clear story about (there’s a lot of messiness; even scientists like to read about clarity).

If we’re really good at statistics, we may even look for mistakes. Peer reviewers who act as gatekeepers for academic journals are generally unpaid and overwhelmed. Mistakes happen.

Then we check out the discussion (the “let’s get real” section of the paper). Then we skim the intro for any novel interpretations of the existing literature, or citations we were unfamiliar with.

This is a good approach, not just with academic papers, but with anything. What does the real evidence look like? Are people interpreting it in good faith? Are they making errors you can help correct? What other interpretations could you offer?

Applying Psychological Research at Sooth – Anger and Information Processing

Sooth is [now, was] a social-psychologist-founded company that develops community around the art of giving and receiving good advice. Its iOS app brings users together for anonymous advice-asking and advice-providing. Despite anonymity, and due in part to the educational materials provided to users, advice tends to be very high quality. I encourage you all to check it out!

For my own contribution, see:
http://www.soothspace.com/blog/anger (now defunct, unfortunately! See link at end of this post)

I briefly describe literature studying the effects of anger on information processing, and propose a response to anger that facilitates perspective-taking.

Advice on Anger — Sooth

Applying Psychological Research at Sooth: Advice-Giving

Sooth is a social-psychologist-founded company that develops community around the art of giving and receiving good advice. Its iOS app brings users together for anonymous advice-asking and advice-providing. Despite anonymity, and due in part to the educational materials provided to users, advice tends to be very high quality. I encourage you all to check it out!

For my own contribution, see:
http://www.soothspace.com/blog/giving-good-advice

I describe obstacles to giving good advice – including confirmation bias, the illusion of explanatory depth, and passive dehumanization. I then recommend some research- and experience-supported antidotes.