The paper introduces SPAGBias, a novel framework for systematically evaluating spatial gender bias in LLMs across 62 urban micro-spaces using explicit, probabilistic, and constructional diagnostic layers. Experiments on six LLMs reveal structured gender-space associations that are embedded and reinforced across the model pipeline and that exceed real-world distributions. Downstream tasks show that these biases cause failures in both normative and descriptive applications, underscoring the practical stakes of spatial gender bias in LLMs.
LLMs don't just reflect gender bias in public vs. private spaces; they encode nuanced, micro-level mappings that substantially exceed real-world distributions, shaping spatial gender narratives in unexpected ways.
Large language models (LLMs) are increasingly being used in urban planning, but gendered space theory highlights how gender hierarchies are embedded in spatial organization, raising the concern that LLMs may reproduce or amplify such biases. We introduce SPAGBias, the first systematic framework for evaluating spatial gender bias in LLMs. It combines a taxonomy of 62 urban micro-spaces, a prompt library, and three diagnostic layers: explicit (forced-choice resampling), probabilistic (token-level asymmetry), and constructional (semantic and narrative role analysis). Testing six representative models, we identify structured gender-space associations that go beyond the public-private divide, forming nuanced micro-level mappings. Story generation reveals how emotion, wording, and social roles jointly shape "spatial gender narratives". We also examine how prompt design, temperature, and model scale influence bias expression. Tracing experiments indicate that these patterns are embedded and reinforced across the model pipeline (pre-training, instruction tuning, and reward modeling), with model associations found to substantially exceed real-world distributions. Downstream experiments further reveal that such biases produce concrete failures in both normative and descriptive application settings. This work connects sociological theory with computational analysis, extending bias research into the spatial domain and uncovering how LLMs encode social gender cognition through language.
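The abstract does not specify the exact probabilistic metric, but a minimal sketch of how a token-level asymmetry layer *could* be scored is shown below. It assumes that, for a spatial prompt such as "The person in the barbershop is ___", gendered continuation probabilities (here called `p_masc` and `p_fem`, both hypothetical names and values) have already been obtained from a model; the log-odds score is a standard choice for this kind of comparison, not necessarily the paper's.

```python
import math

def gender_asymmetry(p_masc: float, p_fem: float) -> float:
    """Log-odds asymmetry between masculine and feminine token
    probabilities for one spatial prompt. Positive values indicate
    a masculine skew, negative a feminine skew, and 0.0 parity."""
    return math.log(p_masc / p_fem)

# Hypothetical probabilities for a single micro-space prompt;
# a real evaluation would average such scores over the prompt library.
score = gender_asymmetry(p_masc=0.08, p_fem=0.02)  # log(4) ≈ 1.386
```

Aggregating this score over many prompts per micro-space, and comparing it against an asymmetry computed from real-world occupancy statistics, is one way the "exceeds real-world distributions" comparison could be operationalized.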