Social Influence
POLSCI 240 / PSY 225: Political Psychology
February 17, 2025
Goals for this week
Last week we considered some basic ideas about opinion change:
- learning on our own is hard
- we usually need to rely on others
This week we consider models of social learning and social influence and what implications they have for mass politics
- How do “innovations” (e.g., new technologies, good ideas) spread through a population?
- What are the different kinds of conformity and what different implications do they have?
- How does the need for groups to maintain distinct “identity markers” shape public opinion?
- How can we model “authoritarian” regimes of enforced conformity?
Why do we care?
- Why do ideas become “fashionable”?
- How does misinformation spread and what conditions make spread more likely?
- When and why are extremist ideas and groups likely to emerge?
- What conditions lead to moderation and consensus and what conditions lead to polarization and sharp lines of conflict?
- Why does politics create seemingly arbitrary clusters of opinion and behavior? (e.g., latte liberals, country music and conservatism)
- When do “revolutions” in opinion happen, and why?
Our goal is not to give firm answers to these questions, but to develop concepts and tools to think carefully about them
Components of social influence models
As Smaldino explains, we need to specify (at least) three things to build a model
- A representation of opinions (e.g., binary (yes/no), continuous)
- A mechanism by which people influence each other (e.g., simple contagion, learn only from shared identity)
- A population structure (e.g., random interaction, close neighbors only)
The goal is to understand how different assumptions lead to different dynamics and outcomes in terms of the opinions of a population
- and to compare predictions to data!
Social learning strategies
Content biases
A content bias is a preference for ideas that have particular characteristics, expressed, e.g., in what we choose to adopt or what we remember
Examples:
- Lead to success (have high utility)
- Emotionally evocative
- Simple
- Minimally counter-intuitive
- Related to sex
- Related to group norms, morality
Content biases and social contagion
The spread of innovations (“attractive ideas”) over time tends to follow a sigmoidal (S-shaped) curve - why?
Simplest model
Binary opinions (almost everyone starts at 0); no social influence; everyone is exposed to the same information; each period, each person adopts the new idea with a fixed probability
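A minimal sketch of this baseline in Python (the function name, parameter values, and loop structure are illustrative assumptions, not the lecture's exact model). Because a person's chance of adopting never depends on how many others have already adopted, the curve rises fastest at the start and then flattens as the pool of non-adopters shrinks: no S-shape.

```python
import random

def independent_adoption(n_agents=1000, p_adopt=0.05, n_steps=60, seed=1):
    """Everyone sees the same information; no social influence."""
    random.seed(seed)
    adopted = [False] * n_agents      # everyone starts at 0 (seeding doesn't matter: no one influences anyone)
    shares = []
    for _ in range(n_steps):
        for i in range(n_agents):
            if not adopted[i] and random.random() < p_adopt:
                adopted[i] = True     # adopt the new idea with a fixed probability
        shares.append(sum(adopted) / n_agents)
    return shares

print([round(s, 2) for s in independent_adoption()[::10]])  # fast early growth, then saturation
```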
Add social influence
Binary opinions (almost everyone starts at 0); people interact only with those close by; the probability of person-to-person spread per interaction is fixed
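A hedged sketch of the same setup with social influence added (the agent count, spread probability, and ring structure are my illustrative assumptions). Spread is now slow at first because there are few adopters to catch the idea from, accelerates as adopters accumulate, and slows again once almost everyone has it, which is one answer to why adoption curves look sigmoidal.

```python
import random

def local_contagion(n_agents=200, radius=25, p_spread=0.3, n_steps=60, seed=1):
    """Agents on a ring; each non-adopter talks to one random neighbor within `radius` per period."""
    random.seed(seed)
    offsets = [d for d in range(-radius, radius + 1) if d != 0]   # the circle of interaction
    adopted = [False] * n_agents
    adopted[0] = True                            # a single initial adopter
    shares = []
    for _ in range(n_steps):
        current = list(adopted)                  # synchronous update
        for i in range(n_agents):
            if not current[i]:
                j = (i + random.choice(offsets)) % n_agents   # pick someone close by
                if current[j] and random.random() < p_spread:
                    adopted[i] = True            # person-to-person spread with fixed probability
        shares.append(sum(adopted) / n_agents)
    return shares

print([round(s, 2) for s in local_contagion()[::10]])  # slow start, take-off, saturation
```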
Population structure
Once we assume that information spreads through local interactions, assumptions about population structure become important
Smaller circle of interaction
Larger circle of interaction
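In the toy sketch above, `radius` plays the role of the circle of interaction (an assumption of the sketch, not necessarily the lecture's exact setup): re-running `local_contagion` with a small radius (e.g., 2) versus the wider default shows the larger circle producing a much earlier and steeper take-off, the qualitative contrast the smaller vs. larger circle comparison is meant to convey.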
What makes an idea “attractive”?
The motivating analogy for “attractive ideas” is useful technologies: they make us more successful, efficient, etc.
- In such cases, the “contagion” seems like a good thing!
But contagion can also leverage biases in human psychology that have little to do with truth, efficiency, success - e.g., content that is:
- Emotionally evocative and easy to visualize (e.g., individual cases rather than abstract statistics, such as the NYC subway murders)
- Minimally counter-intuitive (e.g., supernatural entities, conspiracies)
Example
Prestige psychology
Copy successful people
- They accumulate visible signals of success (“prestige”): “followers”, signs of deference, money
- Visible signals can be used as heuristics when skill is hard to determine
Prestige psychology
Copying the prestigious is often a successful strategy, but there are important ways it goes awry, especially in the contemporary world
- It is hard to know which traits of successful people actually produce their success -> over-imitation, spread of irrelevant traits
Similarity and identity
A preference for copying similar others (people who look, sound, and behave like me) is often advantageous: similarity is an (imperfect) signal of shared circumstances and interests, e.g.,
- Endemic diseases
- Tastes in food, music
- Norms and beliefs (e.g., etiquette, religion)
This is less true in our interconnected world - but we retain the same psychology
- We are still obsessed with identity markers
- Drives differentiation even without deep value conflicts
Model of identity-based adoption
Start with our simple contagion model for innovations, but:
- Each agent is visibly a member of one of two groups
- People only listen to members of their own group
- Probability of adoption is increasing in the correlation of the idea with group membership (a simulation sketch follows below)
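A hedged simulation sketch of this model (the seeding, adoption rule, and parameter values are my illustrative assumptions): two equal-sized groups, the idea seeded mostly in group 0, agents who observe only own-group members, and an adoption probability that rises with how much more common the idea is in their group than in the other.

```python
import random

def identity_adoption(n_per_group=100, base_p=0.5, n_steps=200, seed=2):
    """Two visible groups; agents learn only from members of their own group."""
    random.seed(seed)
    adopted = {0: [True] * 5 + [False] * (n_per_group - 5),   # idea seeded mostly in group 0
               1: [True] * 1 + [False] * (n_per_group - 1)}
    for _ in range(n_steps):
        share = {g: sum(adopted[g]) / n_per_group for g in (0, 1)}
        for g in (0, 1):
            assoc = share[g] - share[1 - g]                   # the idea's association with my group
            p = base_p * max(0.0, 0.5 + assoc)                # adoption prob. rises with that association
            for i in range(n_per_group):
                # observe a random own-group member; possibly adopt if they have the idea
                if not adopted[g][i] and random.random() < share[g] * p:
                    adopted[g][i] = True
    return {g: sum(adopted[g]) / n_per_group for g in (0, 1)}

print(identity_adoption())  # group 0 typically nears full adoption while group 1 stalls at a low share
```

On these toy assumptions, even a valuable idea can fail to cross the group boundary: the more it becomes "their" idea, the less likely the other group is to pick it up.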
Contagion with identity biases
COVID vaccine and partisanship
Identity-based adoption with negative influence
In the last model, people listen only to members of their own group, and once they adopt the valuable idea, they keep it forever
- We now allow people to switch back and forth
- They still listen to their own group
- They still switch with probability increasing in the correlation between positions and the group distinction (see the sketch below)
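A hedged sketch with switching in both directions (the seeding, the small `drift` term, and the switching rule are my illustrative assumptions): the idea starts out more common in group 0, agents still copy only own-group members, and switching toward the position that marks one's own group becomes more likely as the between-group difference grows.

```python
import random

def identity_with_switching(base_p=0.3, drift=0.05, n_steps=300, seed=3):
    """Two groups of 100; positions can now flip in both directions."""
    random.seed(seed)
    n = 100
    position = {0: [1] * 30 + [0] * 70,        # the valuable idea (1) starts more common in group 0
                1: [1] * 5 + [0] * 95}
    for _ in range(n_steps):
        share = {g: sum(position[g]) / n for g in (0, 1)}
        assoc = share[0] - share[1]            # how strongly position 1 marks group 0
        pull = base_p * abs(assoc)             # identity pressure grows with the group difference
        new_pos = {g: list(position[g]) for g in (0, 1)}
        for g in (0, 1):
            own_is_1 = (g == 0) == (assoc > 0)             # is position 1 "my" group's marker?
            to_1 = drift + (pull if own_is_1 else 0.0)
            to_0 = drift + (0.0 if own_is_1 else pull)
            for i in range(n):
                if position[g][i] == 0 and random.random() < share[g] * to_1:
                    new_pos[g][i] = 1          # copied position 1 from an own-group member
                elif position[g][i] == 1 and random.random() < (1 - share[g]) * to_0:
                    new_pos[g][i] = 0          # copied position 0 from an own-group member
        position = new_pos
    return {g: sum(position[g]) / n for g in (0, 1)}

print(identity_with_switching())  # group 0 typically converges near 1.0 and group 1 near 0.0
```

With these toy assumptions the groups end up fully sorted on the issue, one way to think about how positions with no intrinsic connection to group interests can become group markers.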
Identity-based adoption with negative influence
Social constraint and the (arbitrary?) nature of belief systems
Values by partisanship
How necessary vs. arbitrary is the left-right divide?
Opinion (or preference) falsification
These are cases of opinion falsification
- Your “public” opinion is different from your “private” opinion
A model of opinion falsification and social change
Imagine a population living under an authoritarian regime that vigorously polices dissent against the regime
- Each person privately opposes the regime but pays some cost for falsifying their opinion (publicly professing support)
- The regime enforces a penalty against any individual who dissents
- The cost of dissent is declining in the number of other dissenters
- The cost of opinion falsification does not depend on how many others dissent (though it can differ from person to person); a minimal formalization follows below
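A minimal formalization of these assumptions (the notation is mine, not from the slides): let $f_i$ be person $i$'s cost of opinion falsification and $c(d)$ the cost of dissent when a share $d$ of the population is already dissenting, with $c$ decreasing in $d$. Person $i$ dissents publicly once falsification becomes the costlier option,
\[
f_i > c(d),
\]
which defines an individual threshold $t_i$, the smallest $d$ at which the inequality holds; differences in $f_i$ (or in how people experience the penalty for dissent) generate the different thresholds in the example below.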
Collective action and common knowledge
In our hypothetical, everyone opposes the regime - does this mean the regime falls? No!
- Consider the following hypothetical population
- Each person is defined by a threshold: the percentage of the population that must already be dissenting before they will dissent publicly
\[
[0,20,20,30,40,50,60,70,80,100] \quad \text{Kuran (1991)}
\]
- It’s not enough that everyone opposes the regime - not even enough that everyone knows (that everyone knows…) that everyone opposes the regime
- What is needed is knowledge of the thresholds for dissent: person 2 can set off a revolution, but they don’t know it!
Changes in costs of falsification and opinion cascades
- A seemingly stable regime can fall quickly and unpredictably
- …or not - whether the cascade continues hinges (in this case) on person 3 (a computational sketch follows below)
\[
[0,10,20,30,40,50,60,70,80,100] \quad \text{Kuran (1991)}
\]
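A hedged computational sketch of the cascade logic (the function name is mine; I assume a person dissents once the dissenting share of the population reaches their threshold). Each round, anyone whose threshold has been reached joins the dissenters, and the process stops when no one else tips. The third profile is my own illustration of the "hinges on person 3" point, raising person 3's threshold from 20 to 30.

```python
def cascade(thresholds):
    """Return the final percentage of dissenters for a list of individual thresholds."""
    n = len(thresholds)
    dissenting = [t == 0 for t in thresholds]          # threshold-0 types dissent immediately
    while True:
        share = 100 * sum(dissenting) / n
        new = [d or t <= share for d, t in zip(dissenting, thresholds)]
        if new == dissenting:                          # no one else is tipped over
            return share
        dissenting = new

print(cascade([0, 20, 20, 30, 40, 50, 60, 70, 80, 100]))  # 10.0 - a lone dissenter; the regime holds
print(cascade([0, 10, 20, 30, 40, 50, 60, 70, 80, 100]))  # 90.0 - person 2's threshold drops and a revolution cascades
print(cascade([0, 10, 30, 30, 40, 50, 60, 70, 80, 100]))  # 20.0 - raise person 3's threshold and the cascade stalls
```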
Predictably unpredictable
When dissent is suppressed, regime change is a highly non-linear process and thus hard to predict
- Yet, in retrospect, it seems understandable and predictable
- The point is not that retrospective explanations are wrong
- The point is that, looking forward, it is nearly impossible to know what event will set off a revolution and when it will happen (if ever)
- This also helps to explain the tendency toward harsh crackdowns on seemingly minor infractions
Social learning strategies
We will focus on:
- Content biases and contagion
- Conformist transmission