Belief Updating & Persuasion

POLSCI 240 / PSY 225: Political Psychology

February 5, 2025

Learning about politics

We have talked a lot about the nature of opinions and how they may be constructed

  • How do they evolve and change in response to a changing world?
  • Started talking about this a bit with Zaller…

Why do we care?

  • Holding representatives accountable
  • Making good policy
  • Promoting a healthy discourse

(At least) two concerns

Goals for this section

Explore rational theories for how people should update their beliefs about politics

Explore (some) ways in which actual people deviate from these models

  • Biases in information search
  • Biases in evidence evaluation

In subsequent weeks, we will continue to explore sources of such biases

Rational theories of choice

Three components:

  1. Optimal information search
  2. Optimal belief updating (given 1.)
  3. Optimal decision making (given 2.)

We have talked about 3. (maximizing expected utility) - what about 1. and 2.?

Selection of sources: a brief detour on persuasion

Under what conditions can someone change your mind? (i.e., persuasion)

  • One person (Speaker) sends a Message in favor of policy \(X\) to another person (Receiver)

  • For persuasion to occur:

    • R must believe that S has information about \(X\)’s value to R that R does not (“S is knowledgeable”)
    • R must believe that S is motivated to reveal that new information (“S is trustworthy”)

Cheap talk

Persuasion is harder than we might think, because speakers have incentives to lie

  • Even if S is believed to be knowledgeable, how can we trust them?

  • Of course they will say we can trust them, but why should we believe that?

  • This is the problem of cheap talk

    • If S shares our interests, then they will be truthful in saying \(X\) is good for us
    • If S does not share our interests, they will lie and say \(X\) is good for us
    • So if S sends the same message no matter what, the message M conveys no information at all
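
One way to see this formally (anticipating the Bayes' rule slides below): if S would send the same message \(M\) regardless of whether \(X\) is true, then \(M\) is equally likely under both states, and the Receiver's posterior equals their prior:

\[ p(M|X) = p(M|\neg X) \;\Rightarrow\; p(X|M) = \frac{p(M|X)\,p(X)}{p(M|X)\,p(X) + p(M|\neg X)\,p(\neg X)} = p(X) \]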

Is (rational) persuasion possible?

Lupia and McCubbins say yes! If and when there are “institutions” or background conditions that create incentives for S to be trustworthy (incentive compatibility)


  • Verification
  • Penalties for lying
  • Observable, costly effort
  • What do these look like in the real world? Are they sufficient?

In the absence of incentive compatibility…

People need some way to determine whom to trust, but this is difficult and ripe for bias

  • Confirmation bias
  • Conformity to majority behavior (coming soon)
  • Conformity to in-group norms (coming soon)

Zaller

In the absence of incentive compatibility…

People need some way to determine whom to trust, but this is difficult and ripe for bias

  • Confirmation bias
  • Conformity to majority behavior (coming soon)
  • Conformity to in-group norms (coming soon)
  • Overgeneralization of trust

Overgeneralization of trust

Politicians are typically highly constrained: well-described by left vs right ideology

  • But citizens are not!
  • Perceptions of shared values on 1 dimension may be (over-)generalized to unrelated dimensions
  • Related: using partisanship or ideology as a cue may not be a good strategy

Johnston et al. (2017)

In the absence of incentive compatibility…

You might say: “just trust the experts!”

  • But how do you determine who is an expert? And don’t they have their own interests?

Optimal stopping rules

How do you decide when to stop gathering information?

A very hard problem, because of the potential for infinite regress

  • Choosing a stopping point is a decision

  • Making an optimal decision requires maximizing expected utility

    • Maximizing expected utility requires forming optimal beliefs
    • Optimal beliefs require optimal information search…😭

It is inevitable that people will need to rely on “heuristics” (rules of thumb)

  • Are their rules of thumb good ones? What kinds of biases creep in?

Moving the goal posts

A confidence threshold is the amount of information you need before you are willing to make a decision (or form an opinion, judgment, etc.)

  • But we often raise or lower this threshold depending on whether new information is inconsistent or consistent with our prior opinions (see the sketch below)
  • More generally, we are often overconfident: we hold stronger beliefs than are warranted by the information on which they are based
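
A toy sketch of the shifting threshold idea (my illustration, with made-up numbers): the same piece of evidence clears the bar when the conclusion fits the prior, but not when it contradicts it.

```python
def accept_conclusion(evidence_strength: float, fits_prior: bool,
                      base_threshold: float = 0.7, shift: float = 0.2) -> bool:
    """Accept a conclusion only if the evidence clears a threshold that moves with the prior."""
    threshold = base_threshold - shift if fits_prior else base_threshold + shift
    return evidence_strength >= threshold

print(accept_conclusion(0.75, fits_prior=True))   # True: the bar was lowered to 0.5
print(accept_conclusion(0.75, fits_prior=False))  # False: the bar was raised to 0.9
```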

In sum

Optimal information gathering is a very hard problem!

  • Need to trust others in politics, but incentive compatibility is rare
  • Full information is not possible; you need to decide when you have enough information and how confident you should be

Ultimately, we rely heavily on heuristics (rules of thumb), which often have systematic biases

  • Belief rigidity

  • Mass conformity

  • Social group polarization

  • Overconfidence

    • And the extent of these problems increases with citizen political engagement!

Optimal belief updating

To help us think it through, we will start with a very simple general situation:

  • A proposition \(X\)
  • A person with a prior (pre-existing) belief about the truth of \(X\): \(p(X)\)
  • A piece of new information (“data”) relevant to \(X\): \(D\)
  • A posterior (updated) belief about \(X\) after the person integrates the new information: \(p(X|D)\)

Rational updating as Bayesian updating

Theories of rational belief state that people should update using Bayes’ Rule

  • one’s posterior belief after viewing information about a claim is equal to their prior belief in that claim, multiplied by the probability of seeing the information if the claim is true, divided by the total probability of seeing the information regardless of what’s true

\[ p(X|D) = p(X) \frac{p(D|X)}{p(D|X)p(X) + p(D|\neg X)p(\neg X)} \]

\(\text{updated belief} = \text{prior belief} \times \frac{\text{prob of data if X is true}}{\text{total prob of data}}\)
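
As a minimal illustration (not from the slides; the function name and arguments are mine), Bayes' rule for a binary proposition can be written as a short Python function:

```python
def bayes_update(prior: float, p_data_given_x: float, p_data_given_not_x: float) -> float:
    """Posterior p(X|D) for a binary proposition X, via Bayes' rule."""
    # Total probability of the data: p(D) = p(D|X)p(X) + p(D|not-X)p(not-X)
    p_data = p_data_given_x * prior + p_data_given_not_x * (1 - prior)
    return prior * p_data_given_x / p_data
```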

Simple example to illustrate

  • There is a disease in a person’s population with a rate of 1 in 10,000 - so the prior probability “person has disease” is \(p(X) = 0.0001\)

  • There is a test for the disease such that:

    • People with the disease test positive 99% of the time: \(p(D = +|X) = 0.99\)
    • People without the disease test positive 5% of the time: \(p(D = +|\neg X) = 0.05\)
  • The person tests positive (\(D = +\)): what should their posterior belief be that they have the disease (\(p(X|D)\))?

\(p(X|+) = 0.0001 \frac{0.99}{(0.99)(0.0001) + (0.05)(0.9999)}\)

\(= 0.0001 \frac{0.99}{0.050094}\)

\(\approx 0.002 = 1/500\)
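
As a quick check, the bayes_update sketch above reproduces this with the slide's numbers:

```python
posterior = bayes_update(prior=0.0001, p_data_given_x=0.99, p_data_given_not_x=0.05)
print(round(posterior, 4))  # 0.002, i.e., roughly 1 in 500
```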

Another way to look at it

Use frequencies instead of probabilities:

  • 1 in 10,000 people have the disease

  • If you test 10,000 people:

    • ~500 people without the disease will test positive (5%)
    • ~1 person with the disease will test positive
  • If you test positive, what is probability you have the disease?

    • 1 out of 500! (approximately)
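
The same arithmetic in frequency terms, as a small sanity-check sketch (not from the slides):

```python
population = 10_000
sick = population / 10_000             # 1 person has the disease
healthy = population - sick            # 9,999 people do not
true_positives = 0.99 * sick           # ~1 positive test among the sick
false_positives = 0.05 * healthy       # ~500 positive tests among the healthy
print(true_positives / (true_positives + false_positives))  # ~0.002, about 1 in 500
```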

Political example


  • \(X\): “The average rate of inflation during the year 2024 was less than 3%”
  • \(p(X) = 0.25\)
  • Person sees MSNBC headline: \(D=\) “2024 Inflation comes in at 2.9%”
  • What should the person’s updated belief be? What is \(p(X|D)\)?
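
The slide leaves the likelihoods open, so any numeric answer requires assumptions. Purely for illustration, suppose the person treats the headline as fairly (but not perfectly) reliable: \(p(D|X) = 0.9\) and \(p(D|\neg X) = 0.1\). These values are mine, not from the slide. With them, the bayes_update sketch above gives:

```python
# Hypothetical likelihoods, chosen only for illustration; the slide does not supply them.
posterior = bayes_update(prior=0.25, p_data_given_x=0.9, p_data_given_not_x=0.1)
print(round(posterior, 2))  # 0.75 under these assumed likelihoods
```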

Reasons we might be concerned…

  • We might be skeptical people (can or do) perform these calculations, even implicitly

  • Even if they try, there may be systematic biases relative to optimality

  • Even if people have priors about \(X\), they usually need to construct the probability of the data - both conditional (\(p(D|X)\)) and unconditional (\(p(D)\))

    • How do they construct these in practice? What kinds of biases might be introduced?

Too cautious in integrating new information

People are less responsive to new information than Bayes’ rule dictates (“cautious Bayesians”, too conservative)

In the Hill (2017) study, people updated in response to new information, but only by about 75% as much as they should have

Hill (2017)
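
One simple way to formalize “updating only about 75% of the way” is to move a fixed fraction of the distance from the prior to the full Bayesian posterior. The sketch below makes that assumption; it is an illustration, not necessarily the model Hill (2017) estimates.

```python
def cautious_update(prior: float, p_d_x: float, p_d_not_x: float,
                    responsiveness: float = 0.75) -> float:
    """Move only `responsiveness` of the way from the prior toward the Bayesian posterior."""
    full_posterior = prior * p_d_x / (p_d_x * prior + p_d_not_x * (1 - prior))
    return prior + responsiveness * (full_posterior - prior)

# With the hypothetical inflation-headline numbers used earlier:
print(round(cautious_update(0.25, 0.9, 0.1), 3))  # 0.625, versus the full Bayesian 0.75
```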

Can or must?

Galef suggests we have two modes of confronting new information:

When that information is consistent with our prior opinion:

  • Can I believe it?

When that information is inconsistent with our prior opinion:

  • Must I believe it?

Motivated skepticism

People use more rigorous standards for evaluating information inconsistent with their preferred beliefs

  • Inconsistent information is evaluated with a fine-tooth comb
  • Consistent information is accepted without much consideration

Guay and Johnston (2022)

What kinds of biases are these?

Cognitive: problems of information processing, judgment - examples:

  • Faulty calculations (e.g., w/ Bayes’ rule)
  • Faulty heuristics (e.g., majority is always right, overgeneralization of trust)
  • Memory (e.g., spreading activation -> recall prior-consistent considerations)

Motivational: people have preferences over beliefs

  • Can be personal (Galef -> “emotional”)

    • comfort, self-esteem, morale
  • Can be social

    • persuasion, image, belonging