Bias in Peer Review [Webinar, October 6, 2020]

On Tuesday, October 6th, 2020, the HRA Grants Administration Working Group hosted a Zoom webinar on bias in peer review and potential interventions for reducing it. With diversity and equity at the forefront of national discussion, ensuring a fair and equitable peer review process is more important and timely than ever.

Speakers and Webinar Notes:

  • Eileen Melnick – Director of Grants & Awards, Conquer Cancer, the ASCO Foundation; Co-Chair, Grants Administration Working Group
    Eileen discussed Conquer Cancer’s initial steps to reduce bias and some easy-to-implement approaches. Conquer Cancer has planned three actions to reduce bias in peer review but is currently on “Action 1.”
    • Action 1 (lowest touch, easiest to implement) – Create awareness and a culture of inclusivity
      • Implicit bias statement – Conquer Cancer reviewers read this at the beginning of each review meeting
      • Understanding unconscious bias video (2 minutes, from The Royal Society) – Conquer Cancer plays this at the beginning of each review meeting
      • Unconscious bias training module – takes about 10–15 minutes. With the online version, reviewers have to complete it then and there; with the printed version, they can do it on their own schedule. Research shows that this training should be voluntary; it can backfire if it’s required.
      • Conquer Cancer started these interventions with some of its committees and is now working to make them standard for all of its review committees.
    • Action 2 – Create systems to reduce implicit bias and determine a timeline.
      • Increase diversity in scientific review by key stakeholders. Review language and materials (RFPs, reviewer guidelines, instructions for evaluation) for inclusivity. – Is there a common nomenclature? Could something be created for distribution that all can use?
      • Need a balanced committee (gender, race, institution, research area).
    • Action 3 – Create data for ongoing analysis. Baseline data is needed in order to determine the efficacy and impact of any interventions.
      • Right now, ethnicity data is completely optional for applicants.
      • Do we need new systems for collecting data?
  • Molly Carnes, MD, MS, Virginia Valian Professor of Medicine, Psychiatry, and Industrial & Systems Engineering; Director, Center for Women’s Health Research; and Co-director, Women in Science and Engineering Leadership Institute, University of Wisconsin-Madison
    Molly discussed the current thinking in the field, her “Bias Reduction in Scientific Peer Review” (BRISPR) study to date, and the impending pilot intervention program for bias reduction. Her excellent slides are available on the HRA website.
    • Why are we concerned about bias in grant review? Research has shown:
      • Evidence for lower rating and funding levels for Black and Asian vs. White applicants
      • Mixed but compelling evidence for lower rating and funding of female vs. male applicants
      • Grant review has a very high degree of randomness
      • A review of 105 studies on grant peer review concluded that it does not fund the best science, is only a weak predictor of future performance, and is open to cronyism
    • What do reviewers look for in an ideal grant proposal? Innovation, scientific excellence, application to/impact on the disease area, etc.
      • Considerations for impact – Studies have found that women and non-white scholars made more novel research contributions than their white male counterparts, but these contributions led to fewer publications and faculty positions five years post-PhD.
        • Reviewers’ scores on funded research proposals did not correlate with subsequent productivity metrics.
      • Considerations for innovation – The most novel research, as defined by the number of new combinations of scientific terms in a proposal, received the worst reviewers’ scores.
        • Creativity and innovation are more strongly associated with male than female stereotypes.
      • Considerations for scientific excellence of the PI – When grant awards were made on the basis of the research, there was no gender difference in awards; when made on the basis of the researcher, women were less likely to be funded.
        • Investigators receiving an early career award by even the smallest margin were 2.5 times more likely to receive a mid-career award than those who fell just short of winning (the “Matthew Effect”).


    • Interactions with review panels could influence review outcomes
      • Score calibration talk (SCT) occurs when reviewers discuss the score rather than any aspect of the proposal itself.
        • When the review panel chair engaged in SCT, or when SCT invoked laughter, reviewers publicly changed their scores. – Especially important to note for in-person review board meetings, where this is common.
      • Comments, descriptions, or settings can prime group stereotypes.
      • Whether decisions are made by majority or unanimous rule.
    • Reviewers do have explicit scientific biases – they believe certain methods are better than others. These biases are known and cannot be changed.
    • BRISPR addresses implicit biases. These can very much influence a review unfairly.
      • Stereotypes about any group exist and we know them even if we don’t believe them. (E.g. Race, ethnicity, gender, institution, geographic location, even just a name.)
      • It takes more than just good intentions to break bias habits; it requires intentional practice of new behaviors until they become habitual.
      • Knowing common stereotypes creates bias habits even if we don’t believe them. Terms that commonly describe men also commonly describe “scientists” and “leaders”. Same with ethnic stereotypes.
    • BRISPR is based on the only strategy proven effective in helping change behavior in response to bias habits: “motivated self-regulation” or “intuitive override” (i.e. “breaking the bias habit”)
    • BRISPR training session design – relevant to reviewing grant applications
      • (1) First, explain research on stereotypes and implicit bias (as she has done thus far).
      • (2) Then, utilize evidence-based strategies to reduce bias (she did not discuss the specifics of this training just yet)
        • NOTE: If only step one is done, this could potentially reinforce implicit biases and invoke moral licensing. Participants must have awareness of biases and then strategies to reduce them.
      • Considerations in study design:
        • Unit of randomization: The organization’s specific review panel
        • Outcome measures: Survey responses
        • Will use 90 review panels for two experimental groups (active vs. passive training; there are not enough panels for a third control group, although that would be ideal)
          • Aiming for a 60-minute intervention, but it will most likely be 90 minutes
    • The study begins this fall (next week, at the University of Wisconsin–Madison).


  • Jessica Biddinger, Senior Manager of Research Operations, American Heart Association
    Jessica discussed the American Heart Association’s involvement in Molly’s work and how other HRA member organizations might be able to similarly collaborate and implement bias reduction practices in their review cycles.
    • AHA started its bias-reduction work early on, owing to the size and variety of its peer-review committees (different career levels, award amounts, project types, etc.).
    • AHA doesn’t have a dedicated anti-bias training, but it has removed gendered terms from its applications and asks peer-review staff and reviewers to take implicit bias trainings online.
    • AHA is working with Molly toward a comprehensive approach to mitigating bias in peer review, hopefully within the next few years.

Q&A Session and Ending Notes
Maryrose gave a shoutout to HRA Analyzer. As Eileen noted, we can’t determine the efficacy of interventions if we don’t have any baseline data to begin with! Organizations can add career stage, ethnicity, etc., to Analyzer so that reports with this data can be pulled.

Q: How can we distinguish whether an application is rated poorly because of grantsmanship versus bias? Sometimes, poor grantsmanship is a reflection of inadequate training (that can still be due to inequities).
A: Experiments have found that when the same typo appears in an otherwise identical application, it is noticed (and judged negatively) when submitted by a Black applicant but not noticed when submitted by a White applicant.
There is no complete answer to this; it’s a very complex and difficult topic.

Q: Are there ways to get reviewers on board to participate in trainings that add to their review workload?
A: It takes a lot of up-front work to get organizational leadership on board and to establish the training as important.
Promoting a quality product also helps drive participation: this is the only evidence-based intervention shown to be effective. All other methods have not been experimentally validated, so they may be a waste of time.

Final note: It’s not enough to simply acknowledge implicit bias (doing so can sometimes reinforce it or invoke moral licensing); we must implement active ways to combat it and change behavior!