Truth and trust: transcript

Rational choices about who we trust

Slide 1

The first Truth and Trust Online conference was held in London on the 4th and 5th of October, 2019. The event brought together computer scientists, journalists and representatives of the big social media platforms to discuss misinformation, disinformation, fact-checking and fake news online.

Conference webpage.

My talk, Rational Choices About Who To Trust, looked at the evidence from psychological science about how we navigate complex informational worlds, and what the implications for endeavours like fact-checking might be. The slides shown here are as presented at the conference, but the text was written after the event, so may not match exactly what I said on the day.

Slide 2

I gave a talk once, to the general public, about the science of advertising. "Do you think advertising works?" I asked for a show of hands, and looked around at a sea of raised hands from the audience.

"Second question: Do you think advertising works on you?".

Nothing.

Now there is a contradiction in our intuitions here. It seems most of us are drawn to thinking that we're immune to advertising, while at the same time recognising the general effectiveness of adverts. Both beliefs can't be true, at least not for everyone.

It seems that I had discovered what has been called, within political science, the third person effect. This is the general belief that other people are more prone to influence than you are, that other people have impressionable minds which are more likely to be swayed by political or religious propaganda, misinformation or hype.

Slide 3

I'm a psychologist, so what interests me is the model of the mind that we are implicitly subscribing to when we hold beliefs like this. I submit that it is something like this slide - that there are rational people and biased people (and obviously in this room we are the rational people, and, outside, they are the biased people).

Slide 4

There's a popular view of the mind called "dual process theory" which you might have heard of. It's the view Nobel laureate Daniel Kahneman sets out in his book Thinking, Fast and Slow. The idea is that we have slow, deliberate, rational processing, which contends with (and is often overtaken by) fast, error-prone, intuitive processing.

The model of the mind portrayed is something like this slide - not rational people and biased people, but all of us a bit rational and a bit biased.

My problem with this view is that it tempts us to make the same division between people - to think that I'm a little less biased, you are a little more. I make rational choices, you are swayed by your biases. Bias is presented as a contaminant, something which could be skimmed off the top of our rational minds, but the model offers no useful guidance as to how.

Slide 5

Fortunately psychology has another account of bias to offer, from what is called Signal Detection Theory. The starting point of signal detection theory is to consider your judgement of something against reality, as shown in this slide. So, say there is a claim in a news story that could be true or false, and in either case you could judge the story to be true or false. This gives us four possible outcomes. There are two correct judgements: judging a true story "true" - a "hit" - and judging a false story "false" - a "correct rejection". And there are also, critically, two kinds of erroneous judgements. If you judge a false story "true" you have made a "false alarm". If you judge a true story "false" you have made a "miss".

A crucial insight from signal detection theory is that these two kinds of errors trade-off against each other. You can avoid false alarms, but only at the cost of more misses; or you can reduce the number of misses you make, but only at the cost of increasing your false alarm rate. Yes, people can vary in the sensitivity of their judgements (the overall accuracy), but we all, independently of our ability, have to pick a position on the continuum between many false alarms (but fewer misses) and many misses (but fewer false alarms).

This position is called, in signal detection theory, your bias. Bias, in this sense, is unavoidable. It isn't something you remove from the decision, it is an integral part of the decision, a component which reflects how you weight the costs of different kinds of errors.
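To make that trade-off concrete, here is a minimal sketch in Python. The numbers are purely illustrative (not from any study): it simulates overlapping "evidence" scores for true and false stories and shows how moving the decision criterion - the bias - trades misses against false alarms while sensitivity stays fixed.

```python
# Illustrative signal detection sketch: shifting the criterion trades
# misses against false alarms; the underlying sensitivity never changes.
import random

random.seed(1)

# Hypothetical evidence scores: true stories tend to score higher,
# but the distributions overlap, so errors are unavoidable.
true_stories = [random.gauss(1.0, 1.0) for _ in range(10_000)]
false_stories = [random.gauss(0.0, 1.0) for _ in range(10_000)]

def judge_true(evidence, criterion):
    """Call a story 'true' if its evidence exceeds the criterion (the bias)."""
    return evidence > criterion

for criterion in (-1.0, 0.5, 2.0):  # lenient, moderate, strict
    hit_rate = sum(judge_true(e, criterion) for e in true_stories) / len(true_stories)
    false_alarm_rate = sum(judge_true(e, criterion) for e in false_stories) / len(false_stories)
    miss_rate = 1 - hit_rate
    print(f"criterion {criterion:+.1f}: hits {hit_rate:.2f}, "
          f"misses {miss_rate:.2f}, false alarms {false_alarm_rate:.2f}")
```

A lenient criterion gives few misses but many false alarms; a strict one reverses the trade-off. That position on the continuum is the bias.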

Slide 6

Let me tell you a story about research I was involved in (PDF, 260KB), where we applied the insights of signal detection theory to the topic of science communication, of communicating risks.

This was a project looking at brownfield land, ex-industrial land, and how the risks from pollution were perceived by people whose houses were built on this ex-industrial land. These people were sincerely invested in knowing the risks from pollution, and were faced with a range of sources of information about those risks. How did they decide who to trust?

Slide 7

We used the magic of asking people. We asked them how much they trusted the different sources of information: scientists, local government, friends and family, property developers etc (shown on the left of this slide). And we also asked them to rate various qualities of these sources: perceived expertise, bias in perceiving any risks (whether they would be over- or under-sensitive), bias in communicating risks (if they knew about them), general openness and shared values. This last item involved asking residents if the different information sources had their "best interests at heart".

Slide 8

Here are some results.

First, trust in scientists was very high. Perhaps not too surprising.

This is the distribution of how scientists were rated by residents on whether they would trust information from them about the risks of pollution on the brownfield land upon which they lived. 1 (on the left) is very low trust, 5 (on the right) is very high trust. The y-axis shows frequency of responses in each category. So you can see that most people rate their trust of scientists at 5, at the top of the rating scale.

Slide 9

Not all sources of information were so trusted. Here are the results for property developers, which maybe won't surprise you if you've met any property developers.

Again, lower trust towards the left, higher towards the right. Property developers are, in general, not trusted to provide information on pollution risks.

Slide 10

But remember that as well as asking about trust, we also asked people to rate each information source on their other characteristics. We used this information to model statistically how perception of these characteristics was related to trust.

This allows us to take the natural variation in trust between sources, and between people, and extract the factors which predict why different people trust different sources.

When we did this we found something interesting: the biggest predictor of trust wasn't expertise. It was shared values. If you want to know who someone will trust to tell them about pollution risks, you need to know who they think has their best interests at heart, not who they believe is most expert in the science of those risks.
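To give a flavour of what "modelling statistically" means here, below is a hedged sketch of this sort of analysis. The data are simulated and the variable names are my own labels; an ordinary least squares regression stands in for whatever model was actually fitted, and the study's real data and specification are not reproduced.

```python
# Hedged sketch: regress trust ratings on perceived source characteristics.
# All data below are synthetic; column names are illustrative labels only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # hypothetical respondent-by-source ratings

ratings = pd.DataFrame({
    "expertise": rng.integers(1, 6, n),
    "shared_values": rng.integers(1, 6, n),
    "perception_bias": rng.integers(1, 6, n),
    "communication_bias": rng.integers(1, 6, n),
    "openness": rng.integers(1, 6, n),
})
# Toy outcome constructed so shared values dominates, to mirror the pattern
# described in the talk (not real survey data).
ratings["trust"] = (0.2 * ratings["expertise"]
                    + 0.6 * ratings["shared_values"]
                    + rng.normal(0, 0.5, n))

model = smf.ols(
    "trust ~ expertise + shared_values + perception_bias"
    " + communication_bias + openness",
    data=ratings,
).fit()
print(model.params.sort_values(ascending=False))  # which factor predicts trust most strongly
```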

Slide 11

The converse of this is that people who don't trust scientists, for example, don't feel this way because they don't think scientists have relevant expertise. Here's a graph of average perception of expertise of scientists for two groups: on the left people who had high trust in scientists, on the right people who had low trust in scientists. Yes, the low trust group rate scientists as less expert, but not by much.

Slide 12

The real difference is in how the two groups rate the scientists on shared values, on whether they have their best interests at heart. Here the difference is stark: people low in trust believe scientists don't have their best interests at heart; people high in trust believe they do.

Slide 13

This pattern was true for all information sources we asked about. This graph shows the importance of each factor (on the y-axis) for each information source (on the x-axis).

The blue line is shared values, and you can see it is the most important factor for predicting trust in all the information sources we asked about.

Expertise is in red. In green and purple are perceptions of bias in perceiving and communicating risk. These are the factors we had predicted, from signal detection theory, would be important in determining whether a source was trusted. These are properties of the decision process that an information source must go through when communicating risks.

But, despite our expectations, these factors are unimportant in predicting trust compared to a basic perception of shared values between the source and the target.

Slide 14

This research was on my mind when people lamented "Why don't people trust the experts?" over topics such as the 2016 EU referendum.

Here are results from a YouGov poll of trust in different types of people to provide information about the decision to leave or remain, divided by remain voters (left column) and leave voters (right column).

I've highlighted in red those types of people with the biggest difference in trust between leave and remain voters: academics, economists, people from international organisations like the UN.

And the research I've just told you about makes a strong prediction: if these groups are not trusted, it isn't because their expertise is doubted, but because of doubts that they have your best interests at heart.

Slide 15

My claim is merely this: every rational reason has a bias, every bias has a reason for it.

The idea that biases are mere errors blinds you to this. All of us have been annoyed by fire alarms which go off when there is no fire, but a moment's reflection - and the aid of the conceptual framework from signal detection theory - shows us that this isn't a bug, it is a feature. Like every decision maker, a fire alarm has to choose a bias, a position between making false alarms - when we all have to get out of the building in the middle of the night even though there is no fire - and misses - when there is a fire but the alarm doesn't go off and we burn to death in our beds.

The false alarms are the price you pay for avoiding misses at all costs.

Slide 16

A popular theme in online discussions is the cataloguing of logical fallacies. A famous one of these is the argument from authority: trying to argue that something is so just because someone important says so. (The converse is the ad hominem fallacy, arguing that something isn't so because of criticisms of the person saying it).

So why am I showing you four old white guys?

The point here is, I hope, that whatever your political persuasion, there is someone on this screen that you absolutely can't stand.

And this raises a question. I'm sure nobody in this room needs convincing that argument from authority is, strictly, a logical fallacy, but would you really be equally likely to believe something you were told regardless of which of these four men told you it?

I thought not.

Viewed like this, all the so-called biases and fallacies we commit have their own reasons, their own logics. These make up the texture of the mind, the grab-bag of intuitions, blindspots, quirks and curiosities which mean you can't just cite the evidence and consider your communicative work done.

Slide 17

Recently, with Dr Kate Dommett, I did a small pilot survey of Facebook users, asking similar questions about trust. What predicted how people felt about the platform, about advertisers on the platform, about political parties (that might use advertising) and about civil servants? (We included this last group because we expected them to be a good, high-trust comparison with the other groups).

Slide 18

We expected shared interests to be the strongest predictor, as with the brownfield study, and although it was a good predictor, the best predictor of trust was the perception that the group would keep personal data secure.

Trust in Facebook was relatively high (above that of advertisers and political parties). Other questions we asked showed that people had strong and sometimes contradictory intuitions about personalisation. On average people were in favour of personalisation, but against the collection of the information which would allow personalisation. People also felt very differently about personalisation across different domains (so welcomed it for shopping, but not for dating or political adverts).

We're doing more work in this area, but I wanted to share with you a flavour of how we're trying to look below the surface of who people trust online, and what they say they want on topics like targeted advertising.

Slide 19

Here's another project I was involved in recently, a Wellcome Trust project on attitudes to vaccination. This was a small-scale ethnography carried out by a company called Shift, looking at the online information diets of mothers in London who said they had encountered vaccine-related information online and who were, to some degree, thoughtful or hesitant about vaccinating their own children.

The study threw up some interesting findings, which seem to gel with what I have been saying about looking at the reasons people have for displaying the biases they do.

Slide 20

The health benefits of vaccines are beyond medical doubt, so how should we think about those who do come to doubt their benefits? Do they just lack good information? Do they have different information-processing biases from you and me?

No. This slide shows screenshots of vaccine sceptics talking about the importance of seeking out good quality information about vaccines.

"Please do careful research on this" says one. "I researched this", says another. Another: "I have done at least one hour of research a day for nine months"

People who don't trust vaccines are not passively receiving information, but actively seeking it out.

Slide 21

The people in this study also evaluated evidence quality. This slide shows a Facebook post showing a sick baby (and warning of the risks of not vaccinating). One study participant comments that this information can be trusted, because it is a picture ("literally what can happen"), another that it can't be trusted because it is "graphical not factual".

You'll have your own view. All I am trying to show is that everybody has standards of evidence quality that they apply, and we need to understand these if we want our evidence to be trusted.

Slide 22

Finally, a key finding from this study was that users navigate information socially. Facts don't come at them in isolation, but along social channels - via particular individuals, or via groups they feel an affiliation with. A key vector for the participants in this study was parenting groups, which connected them with other parents, some of whom were in natural parenting groups. These parents shared information they had seen in other groups they belonged to - general health or natural lifestyle groups - which expressed vaccine scepticism.

We talk as if information is merely correct or not, but for our participants that information always arrived with a social wrapper, a wrapper that influenced their decision to trust it or not.

Slide 23

So this and other research leads me to believe that people online, just like people offline, are profoundly engaged in a set of reasonable processes for dealing with information.

They actively seek evidence, evaluate its quality, and judge things according to who says them. Overall they have reasons for accepting or rejecting information, just like you and I. If people come to divergent conclusions, it may be that they started in different places, rather than that they are fundamentally different from us in how they think.

Slide 24

Summing this view up, we have a model of the mind which looks more like this: bias and rationality are different sides of the same thing - you can't remove bias from our thinking like it is some foreign contaminant, it is just the flip side of the reasons we have for dealing with information in the way we do.

Any model which is based on an opposition of bias and rationality encourages us to see some people as being irrational. This is a dead-end. Believing people are "just irrational", or "just stupid", tempts us to abandon attempts to persuade based on evidence and argument. Thinking like this also cuts us off from trying to understand the reasons people have for believing what they believe, which is to everyone's loss.

Slide 25

And so, to conclude.

I have argued that biases in information processing are universal and inevitable. They exist for reasons, often reasons to do with dealing with risk.

The moral for fact-checkers is that if people look like they don't want facts from you, it is probably because they don’t think you are on their side.

This can be an optimistic story though: there is nothing in psychological science, despite what some psychologists tell you, to suggest that our minds are hopelessly stubborn or irrational. Our rationality isn't perfect, but we do respond to reasons, and fact-checkers should focus on providing those reasons, along with clear and consistent signals which foster trust in them as a source of information.

Slide 26
