I have spent hundreds of hours in diversity training over the past two decades, from descriptions of federal anti-discrimination laws to academic-style seminars on the perils of implicit bias, microaggressions, or misgendering.
Advocates of this kind of training have their hearts in the right place. We are all familiar with comparisons showing that Black people earn 50% less than white peers and women earn 70 cents for every dollar that a man earns.
However, the most popular tools used to combat disparities in the workplace have produced almost no measurable results.
The average impact of corporate diversity, equity, and inclusion (DEI) training is zero, and some evidence suggests that the impact can turn negative when the training is mandated.
“Statistical snapshots,” which describe how employee outcomes differ by demographic group, are another popular tool. But these numbers cannot prove bias on their own: simple averages often mislead, and, importantly, crafting strategies based on misleading data often does more harm than good.
Some business leaders, in their determination to increase diversity, leap directly from observing raw disparities to removing some information from application forms, another common practice meant to make workplaces more equitable. However, hiding information on applications often leads to worse outcomes for the very people it was intended to help, likely because hiring managers use race itself as a proxy for the information they’re no longer allowed to see.
Our intuition for how to decrease race and gender disparities in the workplace has failed us for decades. It’s time to stop guessing and start using the scientific method. Remember when we thought that the Bubonic Plague was caused by a triple conjunction of Saturn, Jupiter, and Mars in the 40th degree of Aquarius?
Here is a three-step approach that can turn earnest intentions into good science.
Understand disparities
For decades, social scientists have shown that raw gaps in employment outcomes like hiring or wages–the type of data typically provided to C-suite executives–misstate the amount of actual bias in an organization. This data omits many factors that are key to personnel decisions, factors that often vary by group, owing to disparities in society at large. Business leaders can and should work to address inequality in their communities, but they should not mistake society-wide gaps for bias by their own employees.
One of the most important developments in the study of racial inequality has been the quantification of how much pre-market skills explain differences in labor market outcomes between Black and white workers. In 2010, using nationally representative data on thousands of individuals in their 40s, I estimated that Black men earn 39.4% less than white men and Black women earn 13.1% less than white women. Yet accounting for one variable, educational achievement in their teenage years, reduced that difference to 10.9% (a 72% reduction) for men and revealed that Black women earn 12.7% more than white women, on average. Derek Neal, an economist at the University of Chicago, and William Johnson were among the first to make this point, in 1996: “While our results do provide some evidence for current labor market discrimination, skills gaps play such a large role that we believe future research should focus on the obstacles Black children face in acquiring productive skill.”
Recently, I worked with a network of hospitals determined to rid their organization of gender bias. The basic facts were startling: women earned 33% less than men when they were hired, and their wages grew more slowly than men’s once they were on the job. Yet accounting for basic demographic variables known about individuals prior to hiring decreased these differences by 74%. A problem remained, but it was a fraction of what the unadjusted numbers implied.
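For readers who want to see what this kind of adjustment looks like in practice, here is a minimal sketch in Python. The dataset and column names (log_wage, female, years_experience, and so on) are hypothetical stand-ins, not the hospital network’s actual data, and the controls are illustrative only:

```python
# A minimal sketch of adjusting a raw pay gap for pre-hire characteristics.
# The file and column names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("employees.csv")  # hypothetical employee-level dataset

# Raw gap: regress log wages on a group indicator alone.
raw = smf.ols("log_wage ~ female", data=df).fit()

# Adjusted gap: add pre-hire characteristics that legitimately predict pay.
adjusted = smf.ols(
    "log_wage ~ female + years_experience + education_years + certification",
    data=df,
).fit()

# With log wages as the outcome, coefficients approximate percentage gaps.
print(f"Raw gap:      {raw.params['female']:.3f}")
print(f"Adjusted gap: {adjusted.params['female']:.3f}")
```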
Find the root causes of bias
Social scientists tend to categorize bias into one of three flavors: preference, information, and structure. Preference bias is good old-fashioned bigotry: if a company prefers group W over group B, it will hire and promote members of group W at higher rates, even when candidates from the two groups are similarly qualified.
Information bias arises when employers have imperfect information about workers’ potential productivity and use observable proxies, like gender or race, to make inferences (gender stereotypes are a classic example).
Structural bias occurs when companies institute practices, formally or informally, that have a disparate impact on particular groups, even when the underlying practices are themselves group blind. Employee referral programs can fall into this category.
Over the past fifty years, economists and other social scientists have developed brilliant ways of statistically distinguishing between different types of bias. Gary Becker, in his 1992 Nobel Prize acceptance speech, outlined one such statistical procedure, known as the “outcomes test.” It operates by comparing the success rates of decisions across groups and then inferring whether different decision rules were used for different groups. For example, if female CEOs statistically outperform male CEOs, all else equal, that would suggest that a higher standard was applied to women in the selection process. This type of statistical test can be applied to hiring, promotions, and attrition across an organization.
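To make the logic concrete, here is a rough sketch of an outcomes-test comparison for promotion decisions. The file and column names are hypothetical, and a serious application would control for role, tenure, and other observables; this only illustrates the idea:

```python
# A rough outcomes-test comparison on promotion decisions.
# File and column names are hypothetical.
import pandas as pd
from scipy import stats

promoted = pd.read_csv("promotions.csv")  # hypothetical: one row per promoted employee

women = promoted.loc[promoted["female"] == 1, "post_promotion_score"]
men = promoted.loc[promoted["female"] == 0, "post_promotion_score"]

# If promoted women systematically outperform promoted men, that pattern is
# consistent with a higher bar having been applied to women.
t_stat, p_value = stats.ttest_ind(women, men, equal_var=False)
print(f"Mean score, women: {women.mean():.2f}  men: {men.mean():.2f}")
print(f"Welch t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```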
In 2013, collaborators and I developed a similar test to detect information-based bias. Our approach uses the insight that if employers hold information-based biases at the time of hiring but learn more about an employee’s productivity once they are on the job, the returns to tenure within the company should be higher for the group that faced the initial bias. Using a nationally representative dataset of thousands of individuals, we found a significant gap at the time of hiring for Black candidates relative to white peers but that, as predicted, Black candidates experienced a 1.1 percentage point higher return to tenure.
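A stripped-down version of the returns-to-tenure logic can be written as a wage regression with a race-by-tenure interaction. This is an illustrative sketch with hypothetical variable names, not our actual specification:

```python
# A simplified returns-to-tenure regression with a race-by-tenure interaction.
# Variable names are hypothetical and the controls are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("wage_panel.csv")  # hypothetical wage observations

# The coefficient on black:tenure asks whether wages grow faster with tenure
# for Black workers, as information-based models of bias predict.
model = smf.ols(
    "log_wage ~ black + tenure + black:tenure + years_experience + education_years",
    data=panel,
).fit()

print(model.params[["black", "black:tenure"]])
```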
With the aforementioned hospital network, the data pointed to a structural bias in scheduling. Women and men who worked the same number of hours earned exactly the same wage, but men worked more hours due to how the company assigned schedules, not women’s desire to work less.
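A decomposition along the following lines, using hypothetical payroll columns, illustrates how to separate a total pay gap into an hourly-rate gap and an hours gap:

```python
# A back-of-the-envelope decomposition of a pay gap into an hourly-rate gap
# and an hours gap. File and column names are hypothetical.
import pandas as pd

payroll = pd.read_csv("payroll.csv")  # hypothetical payroll extract
payroll["hourly_rate"] = payroll["annual_pay"] / payroll["annual_hours"]

# If hourly rates are equal across groups but hours differ, the pay gap is
# structural (e.g., driven by scheduling), not unequal pay for the same work.
print(payroll.groupby("female")[["annual_pay", "annual_hours", "hourly_rate"]].mean())
```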
This is the key step that is missing in every DEI initiative I have seen in the past 25 years: a rigorous, data-driven assessment of root causes that drives the search for effective solutions. In other aspects of life, we would not fathom prescribing a treatment without knowing the underlying cause. Hiding information on resumes when information bias is present is as effective as using alcohol baths to treat fever.
Evaluate
Once we know where potential biases exist, have determined what caused them, and have curated a set of solutions to test, we must rigorously evaluate what works and what doesn’t. The old cardiac test, where you “feel it in your heart,” is not good enough.
Solutions that yield measurable results can be codified into company policy, while those that don’t should be discarded. In the case of the hospital network, countless hours spent in training and seminars had left results unchanged for years; a small change to the structure of scheduling reduced the gender differences. The solution was hidden in plain data.
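One simple way to formalize that evaluation, sketched below with hypothetical data and column names, is a difference-in-differences regression that asks whether the gender gap narrowed after the intervention relative to before:

```python
# A difference-in-differences sketch for evaluating an intervention.
# Data and column names are hypothetical; 'post' marks observations after the change.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pay_observations.csv")  # hypothetical before/after pay data

# The female:post coefficient measures how the gender gap changed after the
# intervention; if women start out behind, a positive estimate means the gap narrowed.
did = smf.ols("log_pay ~ female + post + female:post", data=df).fit()
print(did.params["female:post"], did.pvalues["female:post"])
```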
This will seem heretical to some, but it barely scratches the surface of what's possible with a data-first approach to diversity, equity, and inclusion. More corporate leaders should be trying to solve diversity challenges in the same way they solve problems in every other aspect of their business: through intelligent use of data, rigorous hypothesis testing, and honest inference about what is working.
Roland Fryer is a professor of economics at Harvard University.