This paper investigates gender leakage in de-gendered academic letters of recommendation (LoRs) using Transformer-based encoders and LLMs. The authors found that models like DistilBERT, RoBERTa, and Llama 2 can predict applicant gender with up to 68% accuracy even after explicit identifiers are removed, revealing implicit gender cues. Removing these cues reduces classification accuracy but doesn't eliminate gender prediction entirely, highlighting the difficulty of creating truly gender-neutral LoRs.
Even after removing names and pronouns, language models can still guess an applicant's gender from recommendation letters with surprising accuracy, revealing hidden biases lurking in seemingly objective text.
Letters of recommendation (LoRs) can carry patterns of implicitly gendered language that may inadvertently influence downstream decisions, e.g., in hiring and admissions. In this work, we investigate the extent to which Transformer-based encoder models as well as Large Language Models (LLMs) can infer the gender of applicants in academic LoRs submitted to a U.S. medical-residency program after explicit identifiers like names and pronouns are de-gendered. Using three models (DistilBERT, RoBERTa, and Llama 2) to classify the gender of anonymized and de-gendered LoRs, we observed significant gender leakage, with classification accuracy reaching up to 68%. Text-interpretation methods such as TF-IDF and SHAP demonstrate that certain linguistic patterns are strong proxies for gender, e.g., "emotional" and "humanitarian" are commonly associated with LoRs from female applicants. As an experiment in creating truly gender-neutral LoRs, we removed these implicit gender cues and re-trained the classifiers, which reduced accuracy by up to 5.5% and macro $F_1$ score by up to 2.7%; however, applicant gender prediction still remained better than chance. In this case study, our findings highlight that 1) LoRs contain gender-identifying cues that are hard to remove and may activate bias in decision-making, and 2) while our technical framework may be a concrete step toward fairer academic and professional evaluations, future work is needed to interrogate the role that gender plays in LoR review. Taken together, our findings motivate upstream auditing of evaluative text in real-world academic letters of recommendation as a necessary complement to model-level fairness interventions.
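To illustrate the kind of lexical probe the abstract describes, the sketch below fits a simple TF-IDF plus logistic-regression classifier on toy, de-gendered letter snippets and inspects the highest-weight terms as candidate gender proxies. This is not the paper's pipeline: the example letters, the label encoding, and the choice of logistic regression as the probe are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' code): probe de-gendered
# letters for terms that carry gender signal, in the spirit of the TF-IDF
# analysis described above.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy de-gendered letter snippets; a real study would use thousands of LoRs.
letters = [
    "the applicant is caring, compassionate, and a humanitarian team player",
    "the applicant is a brilliant, driven, and exceptional researcher",
]
labels = [1, 0]  # hypothetical encoding: 1 = female applicant, 0 = male applicant

# TF-IDF features over the letter text.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(letters)

# A linear probe: term weights indicate association with each gender label.
probe = LogisticRegression().fit(X, labels)

terms = np.array(vectorizer.get_feature_names_out())
order = np.argsort(probe.coef_[0])
print("terms most associated with label 0:", terms[order[:5]])
print("terms most associated with label 1:", terms[order[-5:]])
```

In practice, the same inspection could be run with SHAP attributions over a fine-tuned Transformer classifier instead of a linear probe; the linear version is shown here only because it is compact and self-contained.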