Key Concepts: Dual-Process Theories and the Tripartite Theory of Mind

There is an interesting chapter (which I link to here) by Stanovich on the “tripartite” mind that got me thinking about research, as well as about being human.

Stanovich distinguishes, first, the Autonomous Set of Systems (TASS) which includes all automated parallel/associative processing. This is the set of systems that creates a “primary” representation of reality. They create the world as we initially perceive it – drawing on schemas as well as perceptual inputs. This is what dual systems theorists call System 1.

Second, he distinguishes the algorithmic mind – which creates secondary, “decoupled” representations. This is most clearly related to working memory capacity – our capacity to sustain representations in memory (and screen out distractions). He distinguishes two general abilities of the algorithmic mind, the second more sophisticated than the first. The first starts with the simplest model of the world that the person can come up with quickly, then adjusts that model serially (one adjustment at a time). In social terms, this could be someone who is angry at their boss and whom we advise to “correct” for different biases associated with anger. Basically, the angry person has one model, then we advise them to adjust it so that – if our advice is helpful – it better resembles the world.

There is a second ability of the algorithmic mind – to simulate different models of the world, and to select between them. This is a more cognitively demanding process. Intuitively, people are more likely to use this ability when they are thinking about the future. Even then, they’ll tend to default to the simpler process: start with a single model and then serially adjust it until they’re more confident in it. This approach will ultimately be more biased, assimilating or contrasting to the first impression. I think we’ve all found ourselves unable to fully break away from that initial model when forecasting future events.

Simulating and comparing multiple models could (should!) also be the process when people are perspective-taking. Rather than starting with a single model or schema (often a cultural schema) and then comparing their target’s behavior to that model (adjusting their impression of the target accordingly), people could start with multiple possible interpretations from different sources and rely on unfolding personal experience to distinguish the superior from the inferior models. Ethnographers and clinical psychologists become very good at this sort of thinking.

Last, in Stanovich’s theoretical model, is the reflective mind. This includes intentional, guiding goals that a) can trigger the need to go beyond the TASS (the autonomous set of systems) and b) guide the algorithmic mind. People, of course, can differ in their tendency to override the TASS or to engage in single-model vs. multi-model thinking. Measures like the Need for Cognition, the Need for Cognitive Closure, the Personal Need for Structure, and Actively Open-minded Thinking scales provide the researcher with information about these differences.

I should, in closing, note that TASS-processing isn’t bad! Ideally, people would grow better at a) recognizing when TASS-processing will fail them and b) being mindful enough of their goals that they can judge when a model 1) is sufficiently detailed and 2) gives appropriate weights to different variables. The TASS, at least in thin-slicing studies that are perhaps too tightly controlled, can be better at both “1” and “2.”

Mini Lectures: Illusion of Explanatory Depth

In research, we must consider our own and others’ biases.

The illusion of explanatory depth, described in the video below, can negatively impact the precision and plausibility of our hypotheses.

It can also help explain participant behaviors.

In qualitative research, we can unintentionally disrupt this illusion in our informants – prompting them to give a less automatic, less “natural” answer to our questions.

Research Methods Intro: Accuracy and Ethnography

In personality psychology, researchers empirically investigate sources of accuracy by using information about both the perceiver – the person whose accuracy we are evaluating – and the target – the person the perceiver is accurate about. Using a round-robin design – where every participant rates and is rated by every other participant – along with peer and self-report ratings of each participant, researchers can examine and quantify the relative predictive power of different factors.

Participants rate themselves and others on different traits. Ratings are more accurate if the perceiver’s ratings of the target match an average of the target’s peer and self-report ratings. Researchers can, for example, simultaneously compare:

  • normativity – the actual prevalence of the trait in a group


  • perceived similarity to the perceiver – the influence of distinctive traits of the perceiver (calculated by adjusting the average of the perceiver’s peer and self-reports for the average self-report for the entire group).


  • distinctiveness – the extent to which the target is higher or lower than average on the trait (calculated by adjusting the average of the target’s peer and self-reports for the average self-report for the entire group) (Human & Biesanz, 2011).


Theoretically, the influence of normative accuracy – the extent to which an individual references others against a norm – should be higher when perceivers and targets share a cultural background. On the one hand, normative accuracy is the product of experience. The more members of a group you meet, the better you estimate average behaviors. On the other hand, cultural norms also shape who we seek to become and how we express ourselves.

Perceived similarity, on the other hand, can bias the perceiver towards seeing her own distinctive traits in others, at least when she likes or in some way identifies with those others. For example, an ethnographer may tend to see informants that he likes as being more similar to him than they actually are, and informants that he dislikes as either contrasted against his perceptions of himself or more similar to his perception of the “average” informant.

When perceptions of normativity are less established, however, the target’s distinctiveness should be less biasing, given that the ethnographer may not know what traits are distinctive and what traits are common. In other words, as the ethnographer’s perception of the actual averages for the group of informants changes, the roles of similarity and distinctiveness may change as well.

One takeaway for the ethnographer, then, is to exercise greater caution and give attention to the influence of presumed normative behaviors, perceived similarity (or lack thereof), and target distinctiveness. However, where a round-robin design is practical, the ethnographer could also apply this observational research to the field. Given a culturally-validated scale, the ethnographer could compare the respective roles of these different influences on person perception across cultures. Other analyses could compare normative accuracy – determined by the actual average ratings for the groups – to stereotypic accuracy – determined by participant ratings of an imaginary “average person.”

These quantified data could be used to contextualize participant observation and in-depth interviews.

Other Considerations – What is Accuracy?:

Accuracy is multi-dimensional. For example, if asked to judge the prevalence of a certain trait in different social groups, a person could have poor absolute accuracy. In that case, they might consistently underestimate or overestimate the prevalence in each group. However, they might still have good relative accuracy – judging the differences between groups well. As in the discussion above, accuracy is a continuous variable and it can increase or decrease over time. Our stereotyping intervention, for example, targeted absolute accuracy for a target social group. It could be expanded to target absolute accuracy for both the target’s social group and for the perceiver’s. Relative accuracy would then take care of itself.

Further, statistical measures of accuracy are blind to process. Other research examines how an observer learns about the group’s average rating on any trait. More research can disentangle the roles of shared social-desirability concerns, self-stereotyping, and other culturally-accessible influences on the self concept. These shared concerns could, for example, lead participants to report being more similar without actually being more similar.

Considering this relative complexity, stereotyping and prejudice interventions have to choose their target:

  • Improving the validity and reliability of the process by which we judge individual targets and target groups?
  • Improving the absolute accuracy of these judgments?
  • Improving the relative accuracy of these judgments?
  • Improving accuracy for certain traits, but not for others? (Accuracy may differ by trait).


Research Methods Intro: Attitude Strength and Attitude Change – Survey Items in Context

Social scientists use the word “attitude” to refer to a positive or negative impression of some specific thing. That thing could be internal – most people have a negative attitude towards physical pain, for example. It could also be external. A stereotype is either a positive or negative attitude towards members of a social category. Prejudice, as I will be talking about it, is a negative attitude towards a person based on their membership in a social category. For a fuller look at stereotyping, click here.

Regarding attitudes, decades of carefully designed experiments (Bohner & Dickel, 2011) ask the following questions:

  1. When we take an attitude in the moment, how much of our positive or negative feeling is driven by our memories, and how much by other thoughts, feelings, and experiences that we are having at the same time?
  2. How complex are our attitudes? How complex is the thing we are reacting to? Do we feel ambivalent about it? Do we pay attention to different features of it in different contexts? Do we have an overall impression or are we still making up our minds?
  3. Do our attitudes reflect our most important, foundational beliefs and perspectives on the world?
  4. Do we think that our attitude is legitimate? Do we accept it and agree with it? Are we proud of it? Or do we think that we’re being unfair or even immoral?
  5. Do we think a lot about our attitude? Do we try to shape it in the moment?
  6. Do we act on our attitude?
  7. What would change our minds?

The purpose of these decades of research has mainly been a) to better understand how attitudes relate to behavior and b) to understand how to change destructive or unwanted attitudes.

We tend to measure attitudes by asking people to agree or disagree with a statement on a bipolar Likert scale. Explicit measures are subject to social desirability concerns and, particularly when doing cross-cultural research, reference group effects. To understand reference group effects as they apply to surveys, imagine a 7-point scale that asks you to do the following:

Please rate your agreement with the following statement –

“It is personally important to me to be nonprejudiced.”

In answering that question, you have to decide whether you “Agree very much” or “Agree” or “Disagree very much.” However, what is it to be nonprejudiced? Personal perceptions of cultural standards necessarily shape your answer.

Despite concerns about reference groups and social desirability, many explicit measures produce reliable – consistent – responses over time. In the absence of experimental manipulations, most people will respond to the same question similarly when they first answer it and when they answer it again a week or several weeks later.

In understanding the attitude, we can also directly ask people about:

  • the personal importance of their attitudes
  • whether they feel they have an attitude that reflects a clear stance
  • whether they feel that their attitude is the correct one to hold.

By asking participants to reflect on their attitudes, we can better predict the stability of the attitude, whether they will act on it, and how they will respond to people who disagree with them.

Attitude extremity (distance from the midpoint on the scale) and attitude accessibility (amount of time it takes to answer the survey question) can be calculated from the original question without necessitating additional questions. Attitude extremity may reflect both passion and automaticity – people may report more extreme attitudes when they have stronger emotional responses. They may also report more extreme attitudes when they are responding on first instinct, without taking the time to have a “sober second thought.”

Attitude accessibility, on the other hand, can reflect both difficulty interpreting the question and actual difficulty or ease deciding how you feel about something.
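As a sketch with hypothetical responses and latencies, both quantities can be derived from a single survey item:

```python
# Illustrative only: deriving attitude extremity and accessibility from one
# 7-point item, as described above. Ratings and latencies are invented.

responses = [  # (rating on a 1-7 scale, response latency in seconds)
    (7, 0.9),
    (4, 3.2),
    (2, 1.5),
]

MIDPOINT = 4

for rating, latency in responses:
    extremity = abs(rating - MIDPOINT)  # distance from the scale midpoint
    accessibility = 1 / latency         # faster answers = more accessible attitude
    print(rating, extremity, round(accessibility, 2))
```

Note that both indices come “for free” from the original question: extremity from the rating itself, accessibility from the recorded response time.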

Because our deliberate answer on a survey may be different from our initial feeling, there are also numerous implicit measures of attitudes. These measures all attempt to gauge one’s degree of negativity or positivity in a way that is unobtrusive and more difficult for the participant to control. Numerous measures have been developed (see Fazio & Olson, 2003 and Gawronski & De Houwer, 2014 for reviews) to varying degrees of success. They tend to take two forms. In one, the participant’s positive or negative feelings facilitate responding to other positive or negative stimuli. In the second, positive or negative feelings are attributed to a neutral stimulus – a Chinese ideograph, for example, when the participant is not familiar with those ideographs. Various semantic versions of implicit measures – looking at the strength of associations between different concepts – have been designed as well.

Explicit and implicit measures, taken together, can improve your ability to predict whether people will act on their attitudes, how they will respond to attitude-relevant information, and how they will interact with people who disagree with them. If there is implicit-explicit discrepancy, for example, participants may try to resolve this discrepancy by being more sensitive to relevant information and processing that information more carefully. Implicit measures may be better predictors of behavior when a participant is a) indecisive (an undecided voter, for example), b) under time pressure (culling a stack of résumés for a job), or c) making complex, multiply-determined judgments (a hiring decision, for example). Explicit measures, on the other hand, may be sufficient or even superior predictors when a) implicit and explicit measures are congruent, or b) the person is making a decision based on a limited number of variables and has the time to process relevant information.

Research Methods Intro: Surveys – Standardizing How We Ask People About Prejudice

Individual survey questions may introduce noise – there’s rarely a perfect question that will be interpreted the same way by all people. Individuals may also tend to fill out any given question in certain ways, liking more moderate or more extreme responses, for example. Both of these factors introduce noise.

However, with enough questions and enough people providing data, you can deal with the first issue by identifying clusters of questions that tap into the same general idea. You can deal with the second issue by administering the survey to many different people and looking at correlations with other surveys and behavioral measures.
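One simple way to check whether a cluster of questions “taps into the same general idea” is an internal-consistency index such as Cronbach’s alpha. Here is a minimal sketch with invented responses; real analyses would typically use factor analysis on much larger samples:

```python
from statistics import variance

# Rows = respondents; columns = three items meant to tap the same idea (1-7).
# All data invented for illustration.
items = [
    [5, 6, 5],
    [2, 3, 2],
    [6, 6, 7],
    [3, 2, 3],
    [4, 5, 4],
]

k = len(items[0])
item_vars = [variance([row[i] for row in items]) for i in range(k)]
total_var = variance([sum(row) for row in items])

# Cronbach's alpha: values near 1 suggest the items measure one construct.
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # → 0.96
```

When alpha is high for one group of respondents but low for another, that is exactly the signal, discussed below, that the same questions may be tapping different ideas for different groups.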

We can also use this process to identify differences between groups – if questions cluster for some groups but not for others – we have to think carefully about why this might be the case, why the same survey questions might tap into different ideas for different groups of people.

We can then compare the survey response to what we learned from our participant observation. Do differences in the language that people use to talk about race show up as actual differences in survey response? Do similarities show up as actual similarities?

There are numerous surveys available to tap into prejudice, stereotyping, and interpersonal motivation. Commonly used scales include the Modern Racism Scale (McConahay, 1986), the Pro- and Anti-Black Attitudes Questionnaire (Katz & Hass, 1988), Dunton and Fazio’s (1995) Motivation to Control Prejudiced Reactions scale, and Plant and Devine’s (1998) Internal and External Motivations to Respond Without Prejudice. These surveys have been extensively studied – predicting a variety of behavioral outcomes, including reactions to African American authors and speakers and in-person interracial interactions.

These surveys, in general, focus on negative attitudes towards African Americans as a group (with the exception of the Pro and Anti-Black Attitudes Questionnaire, which considers positive and negative attitudes).

The Motivation to Control Prejudiced Reactions scale assumes that people have stereotype-driven reactions and seek to control them. The Internal and External Motivations to Respond Without Prejudice scale assumes that these attempts at acting non-prejudiced can be driven by internal (personal) and external (social) reasons.

Internally-driven people, for example, are more likely to act from an egalitarian framework (Johns, Cullum, Smith, & Freng, 2008). This egalitarian framework treats individuals as individuals. Membership in a social category is acknowledged as a potential influence on what an individual becomes. However, for an egalitarian, the individual is best understood as a unique entity, the product of many, interacting, factors.

Other researchers contrast multicultural ideologies – ideologies that acknowledge that belonging to different social categories shapes your experience – with colorblind ideologies – perspectives that judge every individual by the same standard, regardless of what factors lead them to be who they are.

Planning Our Intervention:
As planned, our intervention will focus on those individuals who attribute traits to all members of a group based on observations of representations of the group in the media. We want to see if we can get such a person to shift their perspective. Perhaps quixotically, we want that individual to attribute traits to most members of the group based on empirically established means and standard deviations.

To do so, we would want to know the specific content of their prejudice, whether they feel any personal or social pressures to avoid being prejudiced, and whether colorblind or multicultural ideologies appeal to them. Each of these factors could influence how they interact with our intervention.

For example, someone who is purely externally (but not internally) motivated to respond without prejudice tends to self-report a greater number of, and more extreme, negative attitudes towards African Americans. She tends to demonstrate higher levels of implicit – automatic or unobtrusively measured – prejudice (Devine, Plant, Amodio, Harmon-Jones, & Vance, 2002; Hausmann & Ryan, 2004; Amodio, Devine, & Harmon-Jones, 2008; Plant & Devine, 2009; Schlauch, Lang, Plant, Christensen, & Donohue, 2009). When talking with someone who happens to be African American, she engages in fewer approach-oriented behaviors (she smiles less, asks fewer questions, and makes less eye contact). Further, she is consciously aware of her concern to avoid appearing prejudiced and anticipates being less engaged, even before the interaction (Plant, Devine, & Peruche, 2010). She may also be less likely to pay attention to even a racially-irrelevant message when it is attributed to an African American source (Sude & Rios, 2011, conference presentation).

From this portrait, we can infer that it would be difficult to change how someone who is socially (but not personally) motivated to respond without prejudice thinks about race. To do so, we would want to increase her personal motivation to change her mind. We would also want to refrain from making her anxious and from signaling that she should take an avoidant, disengaged approach to our intervention.

We could appeal to values that are race-irrelevant, such as intelligence or sound reasoning. We could then shape her critical thinking and her recognition of lower-quality thinking in a way that is race-irrelevant. We could then embed information about race within a wider discussion of using statistics to make more nuanced interpretations of the social world.

If she doesn’t value intelligence or sound reasoning, we could instead offer an intervention that directly targets her intergroup anxiety – one that will alleviate anxiety and provide effective strategies for smooth and, at the same time, authentic interracial interaction.

Research Methods Intro: Participant Observation – A Brief Primer

In interacting with Maori individuals, the indigenous people of New Zealand, I found a few techniques particularly helpful. First, being an anthropologist requires a degree of “suspension of disbelief.” You are there to learn other peoples’ stories, stories that will sometimes clash with what you yourself believe. People will be sensitive to your disbelief, so focus on trying to see through the eyes of multiple community members, attending to them and your memories of them more than to your personal reactions.

Of course, sometimes, silence is awkward. When you must take a personal stance, try to make it ecumenical. For example, in a meeting of a smaller Maori health trust, we had just had morning prayer (Pai Marire) and were discussing religious orientations. I am agnostic. I mentioned that when I prayed, I prayed as a calling out, without specifying to what or whom I was calling out, or how often I did so. My response was tailored to show spiritual focus without identifying myself as having distinct, potentially troubling, beliefs. If I had been a firm atheist, as opposed to an agnostic, I could have emphasized that I believed in certain values – and listed a set of generically acceptable values.

Most of the time, however, you should be listening intently, not talking. Pay active attention to their facial expressions and gestures. Let yourself mimic these expressions, subtly. You should also be comparing what they’re saying to what they’ve said and what other people have said. You can draw out a more in-depth response by looking really excited by an idea or asking a clarifying question. Affirm their emotions by your facial expressions or, more rarely, by offering a label (which they might then accept or reject).

Make sure every conversation is about them. The primary logic of the ethnographic process is that subtle, iterative queries and challenges, combined with careful observation over a long period of time, glean insights we cannot find elsewhere. When doing fieldwork, you’re constantly seeking a group’s intersubjectivity: their overlapping impressions of a topic. You want to describe that intersubjectivity and understand how it arises.

You, of course, may contribute to this intersubjectivity. However, if you are approaching a community in order to advocate for change, be honest from the beginning. An anthropologist tells a full story from the perspective of multiple community members. She does not spy. Doing so hurts not only your reputation but every anthropologist’s – I was actually called a “spy” by one gentleman. I nodded in acknowledgment of his concern and then continued listening and asking questions. By the end of my time in New Zealand, I had won his trust, but it would have been more efficient if he had not been biased by the actions of one of my predecessors.

Research Methods Intro: Ethnography – How Do People Actually Talk, Think, and Behave With Regards To Race?

If we have the resources, it is helpful to start with a qualitative, particularly an ethnographic, perspective. This perspective helps us to generate a “thick description” of the phenomenon in question. This process provides inspiration for quantitative work and helps us to interpret quantitative results.

Relevant to our intervention-oriented research question – we can identify areas of contradiction and areas where awareness is lacking in the way that our participants think about race, which can then inform interventions. We can build upon their existing wisdom as well. We can then frame our interventions in a way that is accessible to participants – that shows an understanding of their perspectives.

Showing understanding can allow us to be supportive and affirming even as we challenge them in ways that could produce a general sense of threat. Our intervention depends on challenging, not threatening, our participants.

We could, for example, select two communities, one racially diverse and the other majority white. Then we could conduct a participant observation study – meeting with community members (white and non-white) and spending time with them formally and informally.

Formal contact might take the form of open-ended interviews in which we ask community members about racial attitudes, attitudes towards prejudice, interracial interaction, and discrimination. For my own experience as an anthropologist, click here. We could also sit in on meetings in which community members are discussing related issues, including diversity but also economic or political issues that they may see as relevant to race. Using both individual interviews and a record of public utterances, formal study can juxtapose public and private expression.

Informal contact may be rarer and will depend upon the rapport that you have built in formal interviews. In “hanging out” with community members, you may encounter a very different, more spontaneous, public and private expression. However, spontaneity does not mean that the expression of the attitude is “pure.” By talking with people and asking questions, you inevitably influence what they later express and how they express it, at least to you. You’ve made salient ideas, and ways of expressing ideas, that may have only been inchoate before you began your research. Last, no matter how much rapport you develop, some ideas will not be expressed.

To get at those ideas, we can employ structured interviews, surveys, implicit attitude tests, and behavioral experiments.

Research Methods Intro: Identifying Variables – Thinking About Race

What variables are relevant to my postulated practical question of how to change people’s use of stereotyping, with regard to racial attitudes?

Well, first I brainstorm a set of variables.

Let’s look at an output from that process:

What are the different characteristics of stereotyping that I see around me?:

  1. Thinking of individuals as belonging to the same “group” or being from the same category of people.
  2. Attributing traits to either a) all members of that group or b) most members of that group.
  3. Attributing traits based on a) observation of a single member of the group, b) observations from an initial encounter with multiple members of the group, c) observations based on the totality of encounters with group members, d) observation of representations of the group in the media, e) popular ways of talking about the group, f) traits attributed to the group that help justify group-based inequalities, g) traits that group members have by definition – believing in Jesus and being a practicing evangelical Christian, for example, h) observations based on a statistical average, i) observations based on a statistical average, taking into account the variability around that average.

With two ways of looking at 2 and nine ways of looking at 3, we already have 18 possible definitions of stereotyping, just from one brainstorming session by one person. In choosing to change the way people stereotype, we have to target a particular type of stereotyping and either eliminate it or change it into a different type of stereotyping.
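That count can be checked by enumerating the combinations directly (labels abbreviated from the list above):

```python
from itertools import product

# Two ways of attributing traits (2a-2b) crossed with nine evidential
# bases (3a-3i), as brainstormed above. Labels are paraphrased.
attribution = ["all members", "most members"]
basis = ["single member", "initial encounter", "totality of encounters",
         "media representations", "popular talk", "inequality-justifying traits",
         "definitional traits", "statistical average", "average plus variability"]

definitions = list(product(attribution, basis))
print(len(definitions))  # → 18 possible definitions of stereotyping
```

Each pair in `definitions` is one candidate definition, which is what makes choosing a specific target for the intervention unavoidable.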

For example, I might want to take someone who takes 3-d (observations of representations of a group in the media) and infers 2-a (that all members of a group have that trait), and shift them to 3-i (data-driven observations about the mean and standard deviation for that group) and 2-b (applied to the way they think about most people in that group).

Ok – we have a goal – now what are different ways of measuring these variables before we design our intervention?

Research Methods Intro: Categories of Research Question – Example: Teaching Cutting Edge Thinking About Race

Every research project starts with a research question:

  • Observational: I believe that the world works in a certain way. I want to give evidence in favor of (or against) my observation.
  • Theoretical: This theory predicts that the world will work in this way but it hasn’t yet been tested in this particular context. Let’s do it!
  • Inferential: If the world works in one way, it probably works in a logically related way as well. Let’s see!
  • Incremental: The world has been shown to work in this way by numerous studies – let’s confirm the results of these studies and see if we can flesh out the story a bit.
  • Exploratory: Let’s see how the world works.
  • Practical: Can we get the world to work this way?

Let’s take an example. One of my specialties is the study of stereotyping and prejudice. When I hear people talking about stereotyping-related topics in the media and amongst themselves, I often find myself thinking, “Wow! You’re so busy being half right that you’re having this discussion all wrong!” That thought may be pretentious – but let’s go with it. Further, let’s pretend that I have a practical goal – to get a variety of audiences to embrace what I consider to be cutting-edge thinking and practice with regards to racial stereotypes.

So, I have a practical question which I am answering from an interdisciplinary perspective – is it true that I can get an audience to embrace this cutting-edge thinking?

Before we leap into speculating about potential measures of success or failure and potential tools for reaching our goals, let’s take a step back and consider relevant variables.