This paper investigates gender bias in GPT-5's job recommendations for simulated candidate profiles, focusing on under-35 Italian graduates. The study prompts the model to suggest jobs for 24 balanced profiles, varying gender, age, experience, and field. While job titles and industries showed no significant differences, the analysis revealed gendered linguistic patterns in the adjectives used to describe candidates, associating women with emotional traits and men with strategic ones.
GPT-5 subtly perpetuates gender stereotypes in job recommendations, favoring emotional adjectives for women and strategic ones for men, even when candidate profiles are balanced.
In recent years, generative artificial intelligence (GenAI) systems have assumed increasingly crucial roles in selection processes, personnel recruitment, and the analysis of candidates' profiles. However, the employment of large language models (LLMs) risks reproducing, and in some cases amplifying, gender stereotypes and biases already present in the labour market. The objective of this paper is to evaluate and measure this phenomenon, analysing how a state-of-the-art generative model (GPT-5) suggests occupations based on gender and work experience, focusing on under-35 Italian graduates. The model was prompted to suggest jobs to 24 simulated candidate profiles, balanced in terms of gender, age, experience, and professional field. Although no significant differences emerged in job titles and industries, gendered linguistic patterns appeared in the adjectives attributed to female and male candidates, indicating a tendency of the model to associate women with emotional and empathetic traits and men with strategic and analytical ones. The research raises an ethical question regarding the use of these models in sensitive processes, highlighting the need for transparency and fairness in future digital labour markets.
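The abstract does not specify how the 24 balanced profiles were constructed or how adjectives were tallied; the following is a minimal sketch of one way such a design could look. The attribute values, the 2 × 3 × 4 factorisation, and the `count_adjectives` helper are illustrative assumptions, not details taken from the paper, and the model call is stubbed out.

```python
# Hypothetical sketch: building 24 balanced candidate profiles and tallying
# the adjectives a model's recommendations use for each gender.
# Attribute values and the adjective list are assumptions for illustration.
from collections import Counter
from itertools import product

GENDERS = ["female", "male"]                           # 2 levels
EXPERIENCE = ["entry-level", "3-5 years", "8+ years"]  # 3 levels (assumed)
FIELDS = ["engineering", "economics", "humanities", "healthcare"]  # 4 (assumed)

# 2 genders x 3 experience levels x 4 fields = 24 balanced profiles
profiles = [
    {"gender": g, "experience": e, "field": f, "age": 28}
    for g, e, f in product(GENDERS, EXPERIENCE, FIELDS)
]

def count_adjectives(recommendation_text: str) -> Counter:
    """Toy adjective extraction; a real study would use a POS tagger."""
    adjectives = {"empathetic", "caring", "strategic", "analytical", "creative"}
    tokens = (t.strip(".,") for t in recommendation_text.lower().split())
    return Counter(t for t in tokens if t in adjectives)

# Aggregate adjective counts per gender across model responses.
by_gender = {g: Counter() for g in GENDERS}
for profile in profiles:
    # response = query_model(profile)  # placeholder for the GPT-5 API call
    response = "A strategic and analytical role in consulting."  # stub
    by_gender[profile["gender"]] += count_adjectives(response)

for gender, counts in by_gender.items():
    print(gender, counts.most_common())
```

Balancing the factorial design in this way means any gendered difference in adjective frequencies cannot be attributed to differences in experience or field, which is what lets the paper isolate the linguistic pattern it reports.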