This paper investigates how users justify their robot color selections across different occupational contexts, revealing the influence of implicit social biases. Through qualitative analysis of 4,146 open-ended justifications from 1,038 participants, the study finds that while utilitarian Functionalism dominates (52%), participants adapt these justifications to align with racial and occupational stereotypes. The research also demonstrates that bias often operates unconsciously, with stereotype primes shifting color choices even as participants cite standard reasoning, and that robot shape modulates color interpretation.
Robot color choices are subtly shaped by racial and occupational stereotypes, even when users offer seemingly rational justifications.
As robots increasingly enter the workforce, human-robot interaction (HRI) must address how implicit social biases influence user preferences. This paper investigates how users rationalize their selections of robots varying in skin tone and anthropomorphic features across different occupations. By qualitatively analyzing 4,146 open-ended justifications from 1,038 participants, we map the reasoning frameworks driving robot color selection across four professional contexts. We developed and validated a comprehensive, multidimensional coding scheme via human--AI consensus ($\kappa = 0.73$). Our results demonstrate that while utilitarian \textit{Functionalism} is the dominant justification strategy (52\%), participants systematically adapted these practical rationales to align with established racial and occupational stereotypes. Furthermore, we reveal that bias frequently operates beneath conscious rationalization: exposure to racial stereotype primes significantly shifted participants' color choices, yet their stated justifications continued to invoke standard affective or task-related reasoning, masking the underlying bias. We also found that demographic backgrounds significantly shape justification strategies, and that robot shape strongly modulates color interpretation. Specifically, as robots become highly anthropomorphic, users increasingly retreat from functional reasoning toward \textit{Machine-Centric} de-racialization. Through these empirical results, we provide actionable design implications to help reduce the perpetuation of societal biases in future workforce robots.
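The $\kappa = 0.73$ reported above is Cohen's kappa, which corrects raw agreement between two coders (here, human and AI) for the agreement expected by chance: $\kappa = (p_o - p_e)/(1 - p_e)$. A minimal sketch of the computation is below; the category names and labeled items are hypothetical illustrations, not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two coders labeling six justifications (illustrative categories).
human = ["Functionalism", "Affective", "Functionalism",
         "Machine-Centric", "Affective", "Functionalism"]
ai    = ["Functionalism", "Affective", "Affective",
         "Machine-Centric", "Affective", "Functionalism"]
print(round(cohens_kappa(human, ai), 2))  # → 0.74
```

A value around 0.7 is conventionally read as substantial agreement, which is why the abstract reports $\kappa$ alongside the coding scheme's validation.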