A note on “Science faculty’s subtle gender biases favor male students”

In a recent study, science faculty members were asked to evaluate job applications for a lab manager position. All evaluators received the same application file, except that the applicant’s name was sometimes male and sometimes female. The male applicant was rated as more competent and more hireable, despite the otherwise identical application file.

This proves that there is gender bias in academic science hiring, or so we are supposed to conclude.

My concern is this: Many faculty members want studies to prove that there is gender bias, because it fits their political and ideological beliefs. They are happy when they see a study prove it, and they like to refer to such studies. I know because I follow them on Twitter.

This raises the question: Did the faculty members in the study answer truthfully, or did they “second-guess” the purpose of the study and submit whatever answer would produce their own preferred outcome? They may indeed have thought to themselves: “Although I’m not biased, I am convinced that a lot of my colleagues are, so I had better answer as if I were too, so that attention is drawn to this important problem and progressive measures can be taken.”

Of course the faculty members knew they were being studied, and of course they had no real stake in their replies, unlike when they are doing actual hiring. And if we look at the prompt the faculty evaluators received, the purpose of the study is quite transparent. So they had no incentive to be truthful, but some incentive to ensure the study produced the results they favoured.

Which hypothesis is right, mine or the authors’? We could try to test this by looking at gender differences among the evaluators. If the authors are right, and women face unfair discrimination due to bias and prejudice, one might expect the bias to be stronger among male evaluators, since women who are themselves established scientists might be expected to be open to promising female students. If my hypothesis is the operative one, on the other hand, one might expect the opposite: female evaluators would be even more biased than male ones, since they arguably have a greater stake in “gaming the study” to make sure it shows gender bias. The latter is in fact what happened, though the difference is not great.

Meanwhile, if one looks at real data instead of contrived experiments, “actual hiring shows female Ph.D.s are disproportionately … more likely to be hired” (source, page 5365). We see the same thing in the official data from the American Mathematical Society on hiring and PhDs in the mathematical sciences in the United States. In the latest data, women constitute 31% of PhDs awarded and 32% of positions filled. However, women constitute only 28% of PhD recipients who are U.S. citizens. This is perhaps the more relevant ratio, since, among those who do their doctorate in the U.S., those who are U.S. citizens are surely significantly more inclined to aim for a job in U.S. academia. It therefore seems that hiring institutions have a preference for women, as indeed they often state openly.
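To make the comparison explicit, here is a minimal back-of-the-envelope sketch using only the AMS percentages quoted above; the variable names and the choice of the U.S.-citizen PhD pool as the baseline are my own framing of the argument, not an official AMS calculation.

```python
# Back-of-the-envelope comparison using the AMS percentages quoted above.
# Treating the U.S.-citizen PhD pool as the relevant baseline is the
# assumption argued for in the text, not a figure reported by the AMS.

share_phds_all = 0.31    # women among all PhDs awarded in the mathematical sciences
share_phds_us = 0.28     # women among PhD recipients who are U.S. citizens
share_positions = 0.32   # women among positions filled

# Relative representation: a value above 1 means women are over-represented
# among hires compared with the chosen pool of PhD recipients.
vs_all_phds = share_positions / share_phds_all   # about 1.03
vs_us_phds = share_positions / share_phds_us     # about 1.14

print(f"Hires relative to all PhDs:          {vs_all_phds:.2f}")
print(f"Hires relative to U.S.-citizen PhDs: {vs_us_phds:.2f}")
```

On the broader pool the over-representation is negligible (about 3%), but against the arguably more relevant U.S.-citizen pool it is roughly 14%, which is the gap the paragraph above relies on.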