
Featured Research

Cracking The Code

Implicit Bias and Racial Profiling

My interest in racial profiling was sparked in 1999 when I read an article by a prominent legal scholar arguing that, while the policy was flawed on constitutional grounds, it nevertheless represented a rational policing strategy. Having been steeped in the study of racial stereotyping, I was not ready to accept the assertion that profiling is rational. Racial profiling is stereotype-based policing: judging individuals by traits presumed to be prevalent in their racial, ethnic, gender, or other group. It occurred to me that the very evidence driving the stereotypes that associate Blacks with drug crime (the typically profiled crime) was likely to be skewed by profiling itself. To the extent that police were stopping and searching Blacks at a higher rate, they would be arresting and incarcerating them at higher rates, skewing the very criminal justice statistics that people were pointing to.

Trying to study the effects of profiling, I quickly ran into “the benchmark problem”: like so many policy problems, racial profiling is plagued by a paucity of valid data. So I turned to an alternate empirical strategy: simulation. Running numerous scenarios estimating incarceration rates as a function of criminal offending and police stopping rates for minority and majority groups, I found that even in the absence of higher offending by minorities, profiling produces disproportionate incarceration. Even if minorities are offending at a higher rate, profiling exaggerates the disproportions. The only way to get incarceration rates that are proportional to offending rates (i.e., Blacks and Whites represented in prison in shares commensurate with their offending rates) is to avoid using race as a basis for suspicion. More surprising were the very modest overall gains in criminal captures that resulted from profiling, even when minorities were modeled as offending at much higher rates.
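
The published simulations are considerably richer than this, but the core bookkeeping can be sketched in a few lines of code. Every number below (population shares, offending rates, the size of the search budget, the profiling ratio) is an illustrative placeholder, not a parameter from the paper.

```python
# A minimal sketch of the bookkeeping behind the simulations. All numbers
# (population shares, offending rates, search budget, profiling ratio) are
# illustrative placeholders, not parameters from the published paper.

def simulate(pop_minority=0.20, offend_minority=0.10, offend_majority=0.10,
             total_searches=10_000, profiling_ratio=3.0):
    """profiling_ratio: how many times more likely a minority member is to be
    searched than a majority member (1.0 = race-neutral policing)."""
    pop_majority = 1.0 - pop_minority

    # Split a fixed search budget according to the profiling ratio.
    weight_min = pop_minority * profiling_ratio
    weight_maj = pop_majority
    searches_min = total_searches * weight_min / (weight_min + weight_maj)
    searches_maj = total_searches - searches_min

    # A search yields a capture with probability equal to the group's offending rate.
    captures_min = searches_min * offend_minority
    captures_maj = searches_maj * offend_majority
    total_captures = captures_min + captures_maj

    offend_share_min = (pop_minority * offend_minority) / (
        pop_minority * offend_minority + pop_majority * offend_majority)
    incarc_share_min = captures_min / total_captures
    return total_captures, offend_share_min, incarc_share_min

for ratio in (1.0, 3.0):
    captures, offend_share, incarc_share = simulate(profiling_ratio=ratio)
    print(f"profiling ratio {ratio:.0f}x: captures={captures:.0f}, "
          f"minority share of offending={offend_share:.2f}, "
          f"minority share of incarceration={incarc_share:.2f}")
```

Under these illustrative assumptions, equal offending rates mean profiling leaves total captures unchanged while inflating the minority share of incarcerations well beyond its share of offending; setting offend_minority above offend_majority in the call yields only modest capture gains alongside even larger disproportions.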

The Journal of Policy Analysis & Management published these findings in 2006, including the finding that, once a possible deterrent effect of profiling was modeled, the capture rates looked even more modest. This is because racial profiling is a special case with regard to deterrence. Deterrence theory holds that potential offenders respond to the cost of crime, which is a function of the probability of apprehension and the punishment that would ensue. When the probability of capture increases, the cost goes up and, according to deterrence theory, people commit fewer crimes. Racial profiling, however, does not produce a general increase in the chance of capture, but a group-specific one. Because police departments have finite person-hour resources, when they shift attention to one group, they shift attention away from another. As long as there are potential offenders in the non-profiled group (e.g., Whites), that group’s offending rate should increase. Because, by definition, there are more people in the majority group, this has the potential to yield a net increase in crime. I call this “reverse deterrence.”
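
Reverse deterrence can be sketched the same way, under the further (purely illustrative) assumption that each group’s offending rate responds with constant elasticity to its own per-capita chance of being searched; none of the numbers below comes from the published model.

```python
# A sketch of reverse deterrence, assuming each group's offending rate responds
# with constant elasticity to its own per-capita chance of being searched.
# The functional form, the elasticity, and all numbers are illustrative.

POPULATION = 1_000_000
POP_MIN, POP_MAJ = 0.20 * POPULATION, 0.80 * POPULATION
TOTAL_SEARCHES = 10_000
BASE_OFFENDING = 0.10  # offending rate at the race-neutral search probability
ELASTICITY = 0.5       # sensitivity of offending to the chance of being searched

def offending_rate(search_prob, baseline_prob):
    # Deterrence: more scrutiny, less offending (and vice versa).
    return BASE_OFFENDING * (baseline_prob / search_prob) ** ELASTICITY

def total_offenses(minority_share_of_searches):
    searches_min = TOTAL_SEARCHES * minority_share_of_searches
    searches_maj = TOTAL_SEARCHES - searches_min
    baseline = TOTAL_SEARCHES / POPULATION       # equal per-capita scrutiny
    p_min = searches_min / POP_MIN
    p_maj = searches_maj / POP_MAJ
    return (POP_MIN * offending_rate(p_min, baseline)
            + POP_MAJ * offending_rate(p_maj, baseline))

print("race-neutral searches:", round(total_offenses(0.20)))   # searches proportional to population
print("profiling (3x per capita):", round(total_offenses(0.43)))
```

Because the non-profiled group is four times larger in this sketch, the extra offending it contributes outweighs the offending deterred in the profiled group, so total offending rises. That is the reverse-deterrence result.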

Not long after this paper came out, I was contacted by a new colleague, Amy Hackney at Georgia Southern University, who had come across a method to test reverse deterrence. We designed a study to manipulate whether GSU students thought Black students, White students, or nobody was being profiled by the proctor during a test. In the control group, cheating rates were reassuringly low. But when White students thought Black students were being profiled (singled out), they cheated significantly more. Black students did not cheat more in our “White-profiling” condition; we suspect this is because they do not have a mental schema for Whites being profiled. In this experiment, the net effect of profiling was a higher rate of offending.

The implications of this research are that 1) profiling may not do much to increase criminal incapacitation, even when profiled groups have higher offending rates; and 2) profiling may actually increase the rate of crime. 

Some might ask, “Isn’t it only the criminals who need to worry?” No. The disproportionate incarcerations that arise from racially biased policing have dire collateral effects. Police stops and searches are not benign events, even for the innocent. They are potentially stressful, disruptive, humiliating, stigmatizing, and alienating experiences. The disproportionate incarcerations cause losses in income and wealth that are borne by the targeted communities. After incarceration, criminal records pose lasting barriers to employment. Criminal records disenfranchise voters, and the cumulative effect of profiling-induced disparities undermines the democratic representation of minority groups. Perhaps most disturbing, incarceration has deadly effects on minority communities: GSPP professors Rucker Johnson and Steve Raphael have shown that the incarceration of Black men explains part of the rate of HIV infection among unincarcerated Black women.

Surely, the deleterious effects, combined with the limited (at best) utility and the potential for increased crime through reverse deterrence, render racial profiling a problem worthy of affirmative policy intervention. Many (but not enough) in government agree. The End Racial Profiling Act (ERPA), while far from perfect, would standardize police stop data collection, provide guidance for departments in designing training and monitoring procedures, and fund programs to promote the development of good practices. ERPA has been introduced in every Congress since 2001 but has yet to receive a floor vote, and I don’t expect it to pass any time soon. Most state legislatures have passed laws on racial profiling, but these typically amount to stated bans without enforcement mechanisms. Some mandate data collection, but rarely provide guidance, and there is great variation across jurisdictions and states.

As for the judiciary, I have yet to meet a constitutional scholar who believes that racial profiling is legally permissible. It violates the Fourth Amendment’s protection against unreasonable searches and seizures and the Fourteenth Amendment’s guarantees of due process and equal protection. Yet the Supreme Court has allowed wide latitude to law enforcement in this area, ruling unambiguously that the Court is indifferent to officers’ actual motivations for stopping drivers and pedestrians, as long as they can articulate a legitimate basis (a valid pretext) for a stop. As a criminal defense, racial profiling is generally a non-starter. Civil law is a different matter, and when courts have been convinced (often by the Department of Justice, and based on statistical analysis) that a police department has exhibited a pattern of racial discrimination, they have imposed supervision and required remedial steps. This is happening now in Oakland and New Orleans.

A court order is a first step, but the question of how to actually solve the problem is hardly settled. One challenge stems from the inadequacy of a simple ban on racial profiling. In most departments, profiling is sufficiently taboo that nobody does it overtly, let alone admits to it; a de facto ban already exists. Steps must be taken to build in monitoring, accountability, and incentive systems to track and change officer behavior. Insights from decades of social psychological study of stereotyping should prove useful.

Stereotypes serve a “heuristic” (cognitive shortcut) function — they enable us to make reasonably rapid judgments of individuals when we have incomplete information (which is very often the case). But stereotypes tend to be inaccurate, in direction or in scale. After all, what is the likelihood that you could actually know the exact prevalence of a trait (like criminality) in a group? People also tend to overestimate the similarity of members of other groups (the “they all look alike” phenomenon is very real), causing them to underappreciate the individuality of other-group members. Even if a stereotype were somewhat accurate at the aggregate level, it would tend to produce errors in judgments at the individual level. The very low rates of drug and weapon yields from stops of minorities in numerous studied locales, rates often lower than those for Whites, reveal that the stereotypes are contributing little, if any, diagnostic value to the decisions to stop and search. Another dramatic insight from social psychology is that stereotypes are held in, and activated from, memory without conscious awareness or control. This is what is referred to as “implicit stereotyping.” There is now overwhelming evidence that implicit stereotyping is ubiquitous and that implicit stereotypes influence real behaviors. Consequently, even the most well-intentioned, consciously egalitarian police officer is likely to be influenced by stereotypes associating minorities with crime. Officers will typically not even know that they are doing it. This is another reason why bans will not be effective, and why careful, systematic data collection and analysis is required to monitor for bias.

To this end, Steve Raphael and I, along with our colleagues Phil Goff at UCLA and Amanda Geller at Columbia University, are developing a project to set national standards for data collection on police-civilian encounters. With the standardization of these practices, and with the collection of psychological, demographic, sociological, economic, and policy practice variables on (to start with) at least twenty major police departments, we aim to make progress on the benchmark problem and to help police leaders identify biased policing in individuals, units, precincts, and departments. We expect this will enable us to evaluate the effectiveness of the accountability regimes that departments put in place. How to fully address these problems once they are identified is a longer-term challenge, but my colleagues and I are making some early progress there as well.

One obvious method for reducing biased policing is “de-biasing” training. In fact, most departments already conduct training meant to discourage the use of stereotypes and to increase cultural sensitivity, but there is no evidence that these trainings actually work. We are developing a new, experiential training procedure that builds on our understanding of how implicit stereotyping works. My students and I, along with a police trainer, have been out on weekends photographing Black and White actors we’ve hired to behave with varying degrees of suspiciousness, from not at all suspicious (standing, minding one’s own business) to highly suspicious (dropping a gun behind a bush).

With these visual stimuli, we will develop a procedure to train officers to avoid using race as a basis for suspicion, and to reward them for appropriate judgments based on valid criteria. Finally, I believe the low-hanging fruit in remediating biased policing hangs on the tree of discretion. The Supreme Court has, through its rulings, allowed remarkable leeway to officers in deciding whom to stop, question, and search. This discretion, combined with the inherent ambiguity of suspect identification, opens the door to implicit stereotypes. The very small proportion of those searched who turn out to be in violation of the law speaks to the inefficiency this level of discretion can give rise to. A telling case, relayed by social psychologists’ and policy analysts’ favorite journalist, Malcolm Gladwell, implicates the role of discretion in shoddy law enforcement.

In 1998, when he took over US Customs, Raymond Kelly (now Commissioner of Police for New York City) directed his agents to conduct fewer searches of airline passengers and to rely on a much smaller set of indicators of suspicion, ones more directly associated with smuggling. The result was a 75% decrease in searches and a dramatic increase in the hit rate (contraband found per search). The absolute number of finds was roughly unchanged: they caught almost as many smugglers with a quarter as many searches. Racial and ethnic disparities also declined. They achieved the same effectiveness with far less intrusion on individual and civil liberties.
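
The arithmetic behind that result is worth making explicit. The baseline counts below are placeholders (the actual Customs figures are not given here); only the 75% reduction in searches comes from the account above.

```python
# Back-of-the-envelope arithmetic for the Customs example. The baseline
# search count and hit rate are hypothetical; only the 75% reduction in
# searches comes from the account above.

searches_before, hit_rate_before = 40_000, 0.05   # hypothetical baseline
finds_before = searches_before * hit_rate_before  # 2,000 finds

searches_after = searches_before * 0.25           # 75% fewer searches
# To catch roughly as many smugglers, the hit rate must roughly quadruple.
required_hit_rate = finds_before / searches_after # 0.20

print(f"hit rate must rise from {hit_rate_before:.0%} to about "
      f"{required_hit_rate:.0%} to hold finds near {finds_before:.0f}")
```

In other words, cutting searches by three quarters while holding finds steady requires roughly a fourfold increase in the hit rate, which is consistent with the dramatic improvement described above.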

Ironically, as head of the NYPD, Kelly has presided over a dramatic upscaling of the controversial “stop and frisk” program, which has gone from about 150,000 to about 700,000 pedestrian stops per year over the last decade. Stop and frisk, in combination with racial stereotypes, results in large numbers of young Black and Latino men being incarcerated for petty drug possession offenses. Customs enforcement and city policing pose very different challenges, and New York in particular is a daunting and complex assignment. Stop and frisk does not absolutely necessitate racial profiling, but because it involves a large number of high-discretion stops in public, urban places, it almost invariably does result in racial disproportions. Nevertheless, I will give Kelly the last word, from an earlier time, when he succinctly evaluated racial profiling: “It’s the wrong thing to do, and it’s also ineffective.”