By Robin O. Andreasen
We generally associate flowers with positive qualities, such as beauty and happiness, and insects with negative ones, such as poison and fear. We do this despite the fact that flowers are sometimes poisonous and insects can sometimes be beautiful. These perceptions form the basis of the implicit association test (IAT), developed by psychologists Anthony G. Greenwald, Debbie E. McGhee, and Jordan L.K. Schwartz. The test measures the strength of association between a category or concept, such as race or gender, and evaluative terms (good, bad) or stereotypes (leader, caretaker). It can also expose people’s hidden attitudes about members of certain social groups. For instance, I might explicitly believe that men and women are equally good at science, yet implicitly associate science with men and the liberal arts with women without realizing it. You can discover your own implicit attitudes by taking an IAT (there are many!) on Harvard University’s Project Implicit website.
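The IAT quantifies an association by comparing how quickly respondents sort items when categories are paired one way (e.g., flower + good) versus the other (flower + bad). The published scoring procedure is more involved, but its core quantity, a mean latency difference scaled by response variability, can be sketched as follows. The function name and all latencies below are hypothetical, for illustration only.

```python
from statistics import mean, stdev

def iat_d_score(compatible_ms, incompatible_ms):
    """Simplified IAT effect size, loosely after Greenwald et al.'s
    D measure: the difference in mean response latency between the
    two pairing conditions, divided by the pooled standard deviation
    of all latencies. A positive score means slower responses in the
    'incompatible' pairing, i.e., a stronger implicit association
    with the 'compatible' pairing."""
    diff = mean(incompatible_ms) - mean(compatible_ms)
    pooled_sd = stdev(compatible_ms + incompatible_ms)
    return diff / pooled_sd

# Hypothetical latencies (milliseconds) for one respondent:
fast = [620, 650, 640, 610, 660]   # e.g., flower+good / insect+bad
slow = [790, 820, 760, 840, 800]   # e.g., flower+bad / insect+good

# Prints a positive score: sorting is slower under the
# counter-stereotypical pairing.
print(round(iat_d_score(fast, slow), 2))
```

A score near zero would indicate no measurable preference between the two pairings; larger magnitudes indicate stronger implicit associations.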
Implicit associations are natural. They are part of concept formation, and concepts are useful: they allow us to simplify and organize the vast quantities of information we accumulate while navigating the world. They can also be statistically accurate. For example, it is true that women are underrepresented in the scientific workforce and overrepresented in the humanities. Implicit associations become problematic, however, when they are misapplied or biased by socialization. Socialization can lead us to wrongly associate the role of mathematics professor with men and that of English teacher with women. Even accurate associations can be misapplied. While it is true that roughly 85 percent of the directors of the National Science Foundation (NSF) have been male, assuming that the current director is male would be a mistake.
The role of implicit attitudes and their impact on women and other underrepresented groups in science, technology, engineering, and mathematics (STEM) was the subject of a minisymposium and panel discussion entitled “Implicit Bias, Stereotyping and Prejudice in STEM” at the 2017 SIAM Annual Meeting, held in Pittsburgh, Pa., this July. The panel was organized by Charles R. Doering (University of Michigan), and speakers included Nicholas P. Jewell (University of California, Berkeley), Denise Sekaquaptewa (University of Michigan), and Ron Buckmire (NSF). Jewell discussed the pervasiveness of implicit bias in academic evaluative contexts, such as hiring, promotion, and peer review. Sekaquaptewa examined the effects of bias and stereotype on the experiences of underrepresented groups in STEM, while Buckmire outlined the NSF’s efforts to educate reviewers about the potential for—and impact of—bias in the proposal review process. See the accompanying sidebar for reports from Jewell and Sekaquaptewa.
It is well known that race and gender disparities exist in the STEM workforce. Women have earned roughly 50 percent of all STEM bachelor’s degrees, 45 percent of all STEM master’s degrees, and 40 percent of all STEM Ph.D.s awarded since the early 2000s. Yet they filled only 28 percent of all STEM occupations in 2015. That same year, black and Hispanic scientists, mathematicians, and engineers collectively constituted only 11 percent of that workforce. Further disparities exist as well. Women and other underrepresented groups often receive lower pay, win fewer awards, and advance through the ranks more slowly than their male or white counterparts, even when they possess equal qualifications.
The existence and persistence of group-based disparities in STEM are often explained by a combination of interacting structural and social factors, including implicit bias. Our brains are not perfect. Everyone has implicit biases, even about members of their own group. The problem is that these biases often impose small disadvantages on women and other underrepresented groups while conferring small advantages on men and other dominant groups. These (dis)advantages can accumulate over time, resulting in large-scale inequalities. Rates of pay serve as an example. If women consistently receive even slightly lower raises than equally qualified men, a gender pay gap will eventually emerge. Psychologist Virginia Valian calls this mechanism “accumulation of advantage.” She argues that taken together, these factors go a long way toward explaining the glass ceiling and other group-based career inequalities.
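Valian’s mechanism can be seen with simple compound-growth arithmetic. The figures below (starting salary, raise rates, career length) are entirely hypothetical; the point is only that a half-percentage-point difference in annual raises compounds into a double-digit relative pay gap over a career.

```python
# Hypothetical illustration of "accumulation of advantage":
# two equally qualified employees start at the same salary,
# but one consistently receives a slightly smaller annual raise.

START_SALARY = 60_000.0
YEARS = 30
RAISE_A = 0.030  # 3.0% annual raise (assumed, for illustration)
RAISE_B = 0.025  # 2.5% annual raise (assumed, for illustration)

salary_a = START_SALARY * (1 + RAISE_A) ** YEARS
salary_b = START_SALARY * (1 + RAISE_B) ** YEARS
gap = salary_a - salary_b

print(f"After {YEARS} years: {salary_a:,.0f} vs. {salary_b:,.0f} "
      f"(gap of {gap:,.0f}, or {100 * gap / salary_a:.1f}%)")
```

No single year’s raise decision looks consequential on its own; the inequality emerges only in the aggregate, which is what makes this kind of bias easy to overlook.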
The good news is that something can be done. Although it may not be possible to eliminate implicit biases altogether, they can be reduced and modified. Awareness of implicit bias and its role in evaluation is an important first step. One should be mindful of common cognitive shortcuts that sometimes occur in the evaluation process. Examples include preferring people with qualifications and characteristics similar to one’s own, undervaluing a person’s work or research because it is unfamiliar, and making snap judgments by focusing on a few negatives rather than overall qualifications. Also important is recognition of the contexts in which implicit bias is likely to influence evaluation. Research shows that people are more likely to resort to implicit bias under specific circumstances, including when they lack information, experience time pressure, or are distracted or under stress. Taking measures to ensure that these factors are not at work during the evaluation process is essential.
There are also a number of best practices that can be used to work around implicit attitudes. For instance, when serving on a hiring committee or awards panel, make sure that a variety of candidates are represented. If the pool lacks diversity, take active steps to broaden it and encourage individuals from underrepresented groups to apply. In any type of evaluation process—including hiring, peer review, appraisals, and promotion—it is important for evaluators to establish clear criteria, and ways to weigh their relative importance, prior to evaluation. Using those set criteria, take adequate time to review each candidate and consider their qualifications as a whole. When group decision-making is involved, as when serving on a committee, complete your own assessment before hearing the views of others; committee chairs must be aware of power dynamics and allow everyone to share their views. Keep careful notes during the evaluation process and refer back to the preset criteria.
A number of organizations, such as SIAM, the NSF, the Association for Women in Science, and the Mathematical Association of America, are advocating for the aforementioned measures. Although completely eliminating one’s biases might not be possible, appropriate steps can diminish and alter them, ultimately increasing diversity in STEM. A more diverse workforce taps into a broader talent pool. Perhaps more importantly, diversity in workplace and educational settings can promote broader and more creative thinking, thereby enhancing the science itself.
National Science Foundation & National Center for Science and Engineering Statistics. (2017). Women, Minorities, and Persons with Disabilities in Science and Engineering: 2017 (Special Report NSF 17-310). Arlington, VA. Retrieved from www.nsf.gov/statistics/wmpd/.
MacNell, L., Driscoll, A., & Hunt, A.N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. J. Coll. Barg. Acad., 0(53), 1-13.
Robin O. Andreasen is an associate professor in the Department of Linguistics and Cognitive Science and research director of UD-ADVANCE at the University of Delaware.