SIAM News Blog

Mitigating Bias in Science and Engineering

By Robin O. Andreasen

Stereotyping and Pay Inequities in STEM Fields
Implicit bias influences all areas of academic life, including appointments, promotions, peer review, and leadership, thus affecting individuals and the institutional structures underlying these activities.
Gender pay equity has attracted national attention, particularly in recent years with the failure to enact major pay equity legislation. Despite its progressive reputation, academia has a poor record in this regard. Substantial quantitative and methodological efforts have been made to measure bias in the area of pay equity. Statistical methods allow local academic leaders to examine pay data and assess whether a gender “gap” exists [1]; these methods reveal the general need for across-the-board adjustments when pay inequities are discovered. In addition, there is considerable evidence of what sociologist Robert Hironimus-Wendt termed “gated communities” in academia: the highest-paid academic disciplines tend to contain the highest percentage of male members, who also control entry to the community. Comparable worth, evidenced through comparable pay, is an ideal worth embracing.
      — Nick Jewell, University of California, Berkeley

   References

[1] Billard, L. (2017). Study of Salary Differentials by Gender and Discipline. Statistics and Public Policy, 4(1), 1-14.
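To make the statistical approach concrete, the sketch below fits a linear salary model with a gender indicator while controlling for rank and seniority; a significant gender coefficient would point to an unexplained gap. The variable names, synthetic data, and choice of controls are illustrative assumptions only, not the data or exact methodology of [1].

```python
# Minimal sketch of a regression-based pay-equity check on synthetic data.
# All column names, controls, and numbers are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "gender": rng.choice(["F", "M"], size=n),
    "rank": rng.choice(["assistant", "associate", "full"], size=n),
    "years_in_rank": rng.integers(0, 15, size=n),
})
# Synthetic salaries (in $1,000s): rank and seniority effects plus noise,
# with a small built-in penalty for one group so the demo finds a gap.
base = {"assistant": 80, "associate": 95, "full": 120}
df["salary"] = (
    df["rank"].map(base)
    + 1.5 * df["years_in_rank"]
    + np.where(df["gender"] == "F", -4.0, 0.0)
    + rng.normal(0, 5, size=n)
)

# Regress salary on gender, controlling for rank and years in rank.
# The coefficient on C(gender)[T.M] estimates the residual gender gap.
fit = smf.ols("salary ~ C(gender) + C(rank) + years_in_rank", data=df).fit()
print(fit.summary())
```

An analysis like this is only a starting point; which controls are appropriate, and whether across-the-board adjustments are warranted, remain institutional decisions.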

   Experiences of Women and Minorities
An increasingly large research literature in the social sciences focuses on stereotyping and implicit bias regarding women and underrepresented racial/ethnic minorities (URMs) in STEM.
Research has also shown that stereotyping has a significant impact on its targets. Studies on stereotype threat indicate that members of groups stereotyped as poor performers in math-intensive domains (including white women and URMs) actually perform more poorly on math tests when they become aware of these negative stereotypes. Knowing that others may be stereotyping them causes cognitive distraction; giving a poor math performance would, in turn, seem to confirm those stereotypes. This performance-related concern consumes cognitive resources that could otherwise be devoted to solving test problems [1]. When the relevance of the stereotype is reduced in the testing situation (via experimental manipulations in the lab), the scores of female, male, white, and URM students are not significantly different.
Social scientists have asserted that many STEM environments contain elements that “trigger” stereotyping and implicit bias. STEM settings with very few women or URMs, or with subtle cues such as masculine décor, can activate stereotypes among everyone involved. This promotes implicit favorability toward men and white people in evaluation and triggers the damaging stereotype threat experience among members of negatively stereotyped groups. Thus, we must acknowledge the features of our STEM environments and consider making changes to reduce stereotypic elements and create a more inclusive environment for all.
   — Denise Sekaquaptewa, University of Michigan

References
[1] Steele, C.M. (2010). Whistling Vivaldi: And Other Clues to How Stereotypes Affect Us. New York, NY: Norton. See also http://reducingstereotypethreat.org.

We generally associate flowers with positive qualities, such as beauty and happiness, and insects with negative sensations, such as poison and fear. We do this despite the fact that flowers are sometimes poisonous and insects can sometimes be beautiful. These perceptions make up the basis of the implicit association test (IAT), developed by psychologists Anthony G. Greenwald, Debbie E. McGhee, and Jordan L.K. Schwartz. The test measures strength of association between a category or concept, such as race or gender, and evaluative terms (good, bad) or stereotypes (leader, caretaker). It can also expose people’s hidden attitudes about members of certain social groups. For instance, I might explicitly believe that men and women are equally good at science, but unknowingly implicitly associate science with males and the liberal arts with females. You can discover your own implicit attitudes by taking an IAT (there are many!) on Harvard University’s Project Implicit website.
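To illustrate what “strength of association” means operationally, the sketch below computes a simplified, D-like score from response latencies: sorting is faster when paired categories match an implicit association, and the standardized difference in mean latencies measures that association. The latencies are hypothetical, and the actual IAT scoring algorithm includes steps (error penalties, latency trimming, block structure) omitted here.

```python
# Simplified sketch of the reaction-time logic behind IAT scoring.
# Hypothetical latencies; the published scoring algorithm is more involved.
import numpy as np

def d_like_score(congruent_ms, incongruent_ms):
    """Standardized difference of mean latencies between the two pairings."""
    congruent = np.asarray(congruent_ms, dtype=float)
    incongruent = np.asarray(incongruent_ms, dtype=float)
    pooled_sd = np.std(np.concatenate([congruent, incongruent]), ddof=1)
    return (incongruent.mean() - congruent.mean()) / pooled_sd

# One hypothetical respondent: sorting is faster when "male" and "science"
# share a response key than when "female" and "science" do.
male_science_ms = [620, 655, 600, 710, 640, 675]
female_science_ms = [780, 820, 760, 845, 800, 790]

# A positive score indicates a stronger male-science association.
print(round(d_like_score(male_science_ms, female_science_ms), 2))
```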

Implicit associations are natural. They are part of concept formation, and concepts are useful. They allow us to simplify and organize the mass quantities of information that we accumulate when navigating the world. They can also be statistically accurate. For example, it is true that women are underrepresented in the scientific workforce and overrepresented in the humanities. Implicit associations become problematic, however, when they are misapplied or biased by socialization. Socialization can lead us to wrongly associate the role of mathematics professor with a male and English teacher with a female. Accurate associations can also be misapplied. While it is true that roughly 85 percent of the directors of the National Science Foundation (NSF) have been male, assuming that the current director is male would be a mistake.

The impact of implicit attitudes on women and other underrepresented groups in science, technology, engineering, and mathematics (STEM) was the subject of a minisymposium and panel discussion entitled “Implicit Bias, Stereotyping and Prejudice in STEM” at the 2017 SIAM Annual Meeting, held in Pittsburgh, Pa., this July. The panel was organized by Charles R. Doering (University of Michigan), and speakers included Nicholas P. Jewell (University of California, Berkeley), Denise Sekaquaptewa (University of Michigan), and Ron Buckmire (NSF). Jewell discussed the pervasiveness of implicit bias in academic evaluative contexts, such as hiring, promotion, and peer review. Sekaquaptewa examined the effects of bias and stereotyping on the experiences of underrepresented groups in STEM, while Buckmire outlined the NSF’s efforts to educate reviewers about the potential for, and impact of, bias in the proposal review process. See the accompanying sidebar for reports from Jewell and Sekaquaptewa.

It is well known that race and gender disparities exist in the STEM workforce. Women have earned roughly 50 percent of all STEM bachelor’s degrees, 45 percent of all STEM master’s degrees, and 40 percent of all STEM Ph.D.s awarded since the early 2000s. Yet they filled only 28 percent of all STEM occupations in 2015 [1]. That same year, black and Hispanic scientists, mathematicians, and engineers collectively constituted only 11 percent of that workforce [1]. Further disparities exist as well. Women and other underrepresented groups often receive lower pay, win fewer awards, and advance through the ranks more slowly than their male or white counterparts, even when they possess equal qualifications.

The existence and persistence of group-based disparities in STEM are often explained in terms of a combination of interacting structural and social factors, including implicit bias. Our brains are not perfect. Everyone has implicit biases, even about members of their own group. The problem is that these biases often impose small disadvantages on women and other underrepresented groups while conferring small advantages on men and other dominant groups. These (dis)advantages can accumulate over time, resulting in large-scale inequalities. Rates of pay serve as an example: if women consistently receive even slightly lower raises than equally qualified men, a gender pay gap will eventually emerge. Psychologist Virginia Valian calls this mechanism “accumulation of advantage.” She argues that, taken together, these factors go a long way toward explaining the glass ceiling and other group-based career inequalities.
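A back-of-the-envelope calculation shows how quickly such small differences compound; the starting salary and raise rates below are made-up numbers chosen only to illustrate the mechanism.

```python
# Toy illustration of "accumulation of advantage": a 0.5-point difference
# in average annual raises grows into a sizable pay gap over a career.
# Starting salary and raise rates are illustrative assumptions.
start = 80_000.0
raise_a, raise_b = 0.030, 0.025   # 3.0% vs. 2.5% average annual raise
salary_a = salary_b = start
for year in range(1, 21):
    salary_a *= 1 + raise_a
    salary_b *= 1 + raise_b
    if year % 5 == 0:
        gap = salary_a - salary_b
        print(f"year {year:2d}: gap = ${gap:,.0f} ({gap / salary_a:.1%})")
```

After 20 years, the gap in this toy example exceeds ten thousand dollars per year, even though the annual difference in raises was only half a percentage point.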

The good news is that something can be done. Although it may not be possible to eliminate implicit biases altogether, they can be reduced and modified. Awareness of implicit bias and its role in evaluation is an important first step. One should be mindful of common cognitive shortcuts that sometimes occur in the evaluation process. Examples include preferring people with qualifications and characteristics similar to one’s own, undervaluing a person’s work or research because it is unfamiliar, and making snap judgments by focusing on a few negatives rather than overall qualifications. Also important is recognition of the contexts in which implicit bias is likely to influence evaluation. Research shows that people are more likely to resort to implicit bias under specific circumstances, including when they lack information, experience time pressure, or are distracted or under stress. Taking measures to ensure that these factors are not at work during the evaluation process is essential.

There are also a number of best practices that can be used to work around implicit attitudes. For instance, when serving on a hiring committee or awards panel, make sure that a variety of candidates are represented. If the pool lacks diversity, take active steps to deepen it and encourage individuals from underrepresented groups to apply. In any type of evaluation process—including hiring, peer review, appraisals, and promotion—it is important for evaluators to establish clear criteria and ways to weigh their relative importance prior to evaluation. Using those set criteria, take adequate time to review each candidate and consider his/her qualifications as a whole. When group decision-making is involved, as when serving on a committee, complete your own assessment before hearing the views of others; committee chairs must be aware of power dynamics and allow everyone to share their views. Keep careful notes during the evaluation process and refer back to the preset criteria.

A number of organizations, such as SIAM, the NSF, the Association for Women in Science, and the Mathematical Association of America, are advocating for the aforementioned measures. Although completely eliminating one’s biases might not be possible, appropriate steps can diminish and alter them, thus ultimately increasing diversity in STEM. Having a more diverse workforce taps into a broader talent pool. Perhaps more importantly, diversity in the workplace and educational settings can promote broader and more creative thinking, thereby enhancing the science itself.

References
[1] National Science Foundation & National Center for Science and Engineering Statistics. (2017). Women, Minorities, and Persons with Disabilities in Science and Engineering: 2017 (Special Report NSF 17-310). Arlington, VA. Retrieved from www.nsf.gov/statistics/wmpd/.

Further Reading
MacNell, L., Driscoll, A., & Hunt, A.N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. J. Coll. Barg. Acad., 0(53), 1-13.

Robin O. Andreasen is an associate professor in the Department of Linguistics and Cognitive Science and research director of UD-ADVANCE at the University of Delaware.
