Is it possible to accurately measure prejudice?

But once you aggregate the results of a much larger population of test takers, or of an individual who took the test multiple times, supporters argue, you can say with some certainty whether that broader group or that individual is implicitly biased, based on the average of all the tests.

Not everyone agrees. This debate, in fact, is heated — unusually so for the academic world. When I reached out to several of the researchers who have criticized the IAT, they told me they wanted no further part in the discussion, instead pointing me to a piece by Jesse Singal at New York magazine for their side. Lai also noted that measures of explicit bias are similarly flawed for individuals but valuable in the aggregate.

But if you have a group in which, for instance, 40 percent admitted to explicit bias and another group in which 80 percent did, you would still expect the 80 percent group to be more biased in their overall behaviors — even if some of the respondents in both groups were dishonest.

The research so far comes down somewhere in the middle of the debate. It seems like the IAT predicts some variance in discriminatory behaviors, but its predictive power to this end seems to be quite small: Depending on the study, the estimate ranges from less than 1 percent to about 5 percent of the variance explained.
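To put those percentages in context: a measure's "variance explained" is the square of its correlation with the outcome, so even the friendlier end of that range corresponds to a weak correlation. Here is a quick illustration; the correlation values are hypothetical, chosen only to bracket the 1-to-5-percent range, not taken from any specific study:

```python
# Variance explained is the square of the correlation coefficient (r ** 2).
# These r values are hypothetical, chosen only to bracket the 1-5% range above.
for r in (0.10, 0.22):
    variance_explained = r ** 2
    print(f"r = {r:.2f} -> {variance_explained:.1%} of variance explained")
```

In other words, even a correlation of 0.22 — respectable by social-science standards — leaves about 95 percent of the variation in behavior unaccounted for.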

This is because any individual behavior is influenced by so many things. Given this, it may be that the IAT is still the best tool we have for measuring subconscious bias. The researchers I spoke to on both sides conceded that there is a large body of scientific evidence for racial bias in the US.

The question, then, is whether the IAT accurately measures that racism and whether, in fact, implicit biases are really the big force behind this racism.

I worry that an obsession by some with implicit bias, given its overall empirical track record, may divert attention and resources away from addressing factors that are far more influential and important in shaping discriminatory behavior, and that create the unjust ethnic disparities we sorely need to do something about. Again, the measures of explicit racism do show steady drops over time.

Many researchers have interpreted this to suggest that a lot of people have simply shifted their racial biases from the conscious to the subconscious — hence the need for an IAT in the first place.

In fact, some recent research has questioned whether targeting implicit bias as a strategy for combating racism can even work. A meta-analysis that Lai co-authored, which is still under peer review and undergoing changes, concluded that implicit bias as measured by the IAT and similar tests is correlated with explicit bias and behavior, and that implicit bias can be successfully mitigated.

Lai suggested that targeting racial bias in general may not be the correct approach. He pointed to an experiment recently run with the Las Vegas Police Department. There, researcher Phillip Atiba Goff was tasked with helping the police find a way to reduce their use of force, which disproportionately targets minority residents.

Goff found that many of these uses of force were the result of foot pursuits. With this finding in hand, the police established a foot pursuit policy: the officer giving chase should not be the first person to put his or her hands on the suspect; instead, coordinated backup should arrive on the scene and take on that role.

The idea is that foot pursuits often end in excessive use of force; after all, they are high-adrenaline chases in which both the officer and the suspect can get really angry really fast.

So by preventing, when possible, chasing officers from putting their hands on the suspect, Goff figured you could limit use of force. The change appeared to work: According to Goff, there was a 23 percent reduction in total use of force and an 11 percent reduction in officer injury over several years, on top of reduced racial disparities. The key, he said, was changing the fundamental situation in which police are chronically engaging with suspects.

This, Lai argued, is the kind of work that researchers need to consider if strategies that target implicit bias or other kinds of racial biases prove unworkable or ineffective.

Those cures are of the snake-oil variety. Go instead for cures that involve redesigning procedures so that implicit bias, which can be assumed to be present in many people, simply does not have a chance to operate. Seen this way, the IAT itself may not amount to much.


It is, however, a difficult question to answer, as prejudice is a slippery concept that is hard to measure effectively. The problems of measuring prejudice, and the impact a poorly chosen and poorly analysed measure can have on the debate, were illustrated by a front-page splash last year in the Guardian newspaper. The most obvious problem with the story was that the data showed no such thing. To understand why, we need to look closely at the wording of the question, and how it matches up to other possible measures.

Although this looks like a straightforward question, it has a fundamental flaw. One person may consider themselves prejudiced because they once laughed at a racist joke, while another may consider themselves unprejudiced, despite holding very negative views about some minority groups, because in their understanding prejudice requires something stronger, such as support for violence against minorities.

The biggest problem this self-diagnosis approach causes is that it can mislead us about changes in attitudes over time, because the standards people apply to diagnose themselves as prejudiced are socially relative, and these standards change. In Britain, the evidence suggests two things: firstly, that younger generations are less prejudiced against minorities than older ones, resulting in a gradual decline in prejudice over time.

Secondly, that the standards people apply to diagnose their own prejudice have become steadily stricter as Britain overall has become less prejudiced. We can show this process at work by examining how self-diagnosis of prejudice matches up with more concrete racial attitudes. Crucially, we can also split this out by generation, comparing those born earlier — who mostly grew up before mass migration to Britain began — with those born later, in a more diverse society.

Younger Britons are employing a stricter standard: a large majority of them accept that they cannot call themselves unprejudiced if they oppose ethnic intermarriage.

This has happened over and over again, contributing to what has become a moment of grim soul-searching among many research psychologists — particularly social psychologists, whose field has experienced an especially worrying string of failed replications.

This, the thinking goes, can reduce experimenter error and bias. And when you use meta-analyses to examine the question of whether IAT scores predict discriminatory behavior accurately enough for the test to be useful in real-world settings, the answer is no: Race IAT scores are weak predictors of discriminatory behavior. We know this because of a protracted meta-analytical back-and-forth that has played out in the pages of the Journal of Personality and Social Psychology, a flagship publication in the field of psychology.

The arguments and subarguments get pretty complicated and technical, but two important points stand out. One is that even the most IAT-friendly numbers, published in a meta-analysis lead-authored by Greenwald, amount to fairly unimpressive correlations: race IAT scores accounted for only about 5 percent of the variance in discriminatory behavior. The Oswald group also claimed — again, convincingly — that the Greenwald team took a questionable approach to handling so-called ironic IAT effects: published findings in which high IAT scores correlated with better behavior toward out-group than in-group members, the theory being that implicitly biased individuals were overcompensating.

If one study finds a correlation between IAT scores and discriminatory behavior against out-group members, and another, similarly sized study finds a similarly sized correlation between IAT scores and discriminatory behavior against in-group members, for meta-analytic purposes those two studies should average out to a correlation of about zero.
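The arithmetic behind that cancellation is simple to see. The sketch below uses a sample-size-weighted average as a rough stand-in for meta-analytic pooling; the two studies and their numbers are invented for illustration, not drawn from the actual meta-analyses:

```python
# Sample-size-weighted average of study correlations, a rough stand-in for
# meta-analytic pooling. The studies and numbers here are invented.
studies = [
    (+0.24, 100),  # higher IAT scores predict worse treatment of the out-group
    (-0.24, 100),  # an "ironic" effect of the same size, in the other direction
]
pooled = sum(r * n for r, n in studies) / sum(n for _, n in studies)
print(f"pooled r = {pooled:+.2f}")  # two equal, opposite effects cancel out
```

This is why how you handle ironic effects matters so much: folding them in as-is drags the pooled estimate toward zero, while excluding or flipping them inflates it.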

The second, more important point to emerge from this years-long meta-analytic melee is that both critics and proponents of the IAT now agree that the statistical evidence is simply too lacking for the test to be used to predict individual behavior. In this sense, the IAT has simply failed to deliver on a promise it has been making since its inception — that it can reveal otherwise hidden propensities to commit acts of racial bias. Over and over, that early coverage — and early statements from Banaji and Greenwald — prematurely implied that there was a connection between IAT scores and real-world outcomes.

But overall, any lay reader would have come away from early coverage of the IAT convinced that the test had vital real-world implications for understanding racism. Banaji and Greenwald contributed to this idea. From their point of view, the IAT fit neatly into preexisting theories of prejudice and intergroup relations they had been working on for an extended period.

They thought they were onto something big, in other words. And researchers have every right to tout their exciting findings to the public, of course. But they also have a responsibility to not get ahead of the available evidence. In reality, what Greenwald and Banaji had found around the turn of the millennium were certain predictable patterns in how quickly different sorts of people responded to different sorts of stimuli.

Majority groups tended to score higher than minority groups on the IAT, for example.

That first paper, by Allen R. McConnell and Jill Leibold, had a major impact when it was published, and has since been cited heavily. But a group of six researchers that included Blanton, Mitchell, and Tetlock eventually uncovered serious methodological problems with it, which they highlighted in an article. Another influential paper, published by the researchers Jeremy Heider and John Skowronski, also reported impressive findings over the course of two studies.

Other IAT studies, too, have been conducted in sloppy and misleading ways, Blanton and his colleagues have found over the years. Many people, particularly members of the public not up on the latest literature, still seem to believe this. But there have always been alternative potential explanations for what the IAT really measures.

Along those same lines, some researchers have suggested that it could be the case that those who empathize with out-group members, and are therefore well aware of the negative treatment and stereotypes they are victimized by, have an easier time forming the quick negative associations with minority groups that the IAT interprets as implicit bias against those groups.

One such study dealt with these questions in a slightly more real-world way: Andreychik and Gill focused on the difference between so-called external and internal explanations for why certain out-groups are disadvantaged.

Past research has shown that those who endorse external explanations for disparate outcomes tend, unsurprisingly, to express more compassion and empathy for the groups in question.

Andreychik and Gill found that for those students who endorsed external explanations for the plight of African-Americans or a novel group, or who were induced to do so, high IAT scores correlated with greater degrees of explicitly reported compassion and empathy for those groups. For those who rejected, or were induced to reject, external explanations, the correlation was exactly reversed: high IAT scores predicted lower empathy and compassion.

And those are just two examples of the many published instances in which the IAT appears to be measuring something other than implicit bias. Consider, for example, what McFarland and Zachary Crouch found: those who were a bit cognitively slower got higher IAT scores, meaning they were told they were more biased than faster test-takers.

Blanton said that he has never seen another psychological instrument in which sheer statistical noise predictably biases the results upward or downward. That is, the test will tell slower test-takers that they are extremely implicitly biased despite their having exhibited almost zero bias in their actual performance. All psychological instruments have their flaws and shortcomings — sometimes maddening ones. So how much of a given IAT score is an actual marker of implicit bias? One researcher, Ulrich Schimmack, recently published a blog post in which he re-analyzed an influential IAT study to try to better understand this issue.

He found that just 25 percent of the variance in IAT scores generated by a single version of the test is explained by actual implicit bias, and that for technical reasons this may well be a significant overestimate. It seems extremely unlikely. If you read the mass-market explanation of the test published in book form — as of this writing, Blindspot is No. Both these uses raise serious questions too, though. Therefore, limited reliability and small-to-moderate effect sizes are not problematic in diagnosing system-level discrimination, for which analyses often involve large samples.

There are examples everywhere. If my depression scale overestimates depressive symptoms in some patients and underestimates them in others, there still might be situations in which I can use a bunch of data generated from it to make certain broad statements about population-level rates of depression. What about the claim that the IAT is useful as an educational tool?
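That individual-versus-aggregate distinction can be made concrete with a small simulation. Suppose each person's observed score is their true level plus heavy measurement noise; all parameters here are invented for illustration. Any single reading is unreliable, but the mean over thousands of readings still lands near the true population average:

```python
import random

random.seed(0)
TRUE_MEAN = 0.30  # hypothetical true population-average level of the trait
NOISE_SD = 0.50   # measurement noise far larger than typical true differences

# One observed score per person: the true level plus independent noise.
observed = [TRUE_MEAN + random.gauss(0, NOISE_SD) for _ in range(10_000)]

sample_mean = sum(observed) / len(observed)
print(f"population estimate: {sample_mean:.2f} (true value {TRUE_MEAN})")
```

The noise averages out across people, which is exactly why a test can be too unreliable to diagnose any one person yet still support broad population-level claims.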

Then you can extend that to things like diversity trainings. It is absolutely the case that in a country as segregated and racially troubled as the United States, efforts to better inform citizens, particularly members of majority groups, about the sometimes subtle nature of bigotry and racial disparities are important.

Surely, at least some of that effect is attributable to implicit bias, and the same goes for many of the other areas in which racial cues have been correlated with discrepant outcomes. So there is nothing wrong with implicit-bias training that covers this sort of research. Nor is there anything wrong with IAT-based trainings that merely explain to people that they may well be carrying around certain associations in their heads that they are unaware of, and that researchers have uncovered patterns in who is more likely to demonstrate which response-time differences.

In situations where one group holds historic or current-day power over the other, for example, members of the in-group do tend to score higher on the IAT than the out-group.


