This paper investigates the use of large language models (LLMs) for generating spatial relation labels, comparing them to human-generated labels for the scenes of the Topological Relations Picture Series (TRPS). The authors find that LLM-generated labels align reasonably well with human labels, suggesting LLMs can assist in expanding spatial data sets. They then extend the TRPS with 42 new scenes, demonstrating improved coverage compared to previous extensions.
LLMs can generate spatial relation labels that align with human judgments, offering a scalable path to richer, multilingual spatial data sets.
Variation in spatial categorization across languages is often studied by eliciting human labels for the relations depicted in a set of scenes known as the Topological Relations Picture Series (TRPS). We demonstrate that labels generated by large language models (LLMs) align relatively well with human labels, and show how LLM-generated labels can help to decide which scenes and languages to add to existing spatial data sets. To illustrate our approach we extend the TRPS by adding 42 new scenes, and show that this extension achieves better coverage of the space of possible scenes than two previous extensions of the TRPS. Our results provide a foundation for scaling towards spatial data sets with dozens of languages and hundreds of scenes.
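As a rough illustration of what "alignment between LLM and human labels" can mean, agreement could be measured as the fraction of scenes whose modal (most frequent) human label matches the modal LLM label. This sketch is not the paper's actual metric, and the data format and labels below are invented for the example:

```python
from collections import Counter

def top_label(labels):
    """Return the most frequent label among the annotations for one scene."""
    return Counter(labels).most_common(1)[0][0]

def modal_agreement(human, llm):
    """Fraction of shared scenes whose modal human and LLM labels match.

    `human` and `llm` map scene IDs to lists of labels (a hypothetical
    representation; the paper's actual data format is not specified here).
    """
    shared = human.keys() & llm.keys()
    hits = sum(top_label(human[s]) == top_label(llm[s]) for s in shared)
    return hits / len(shared)

# Toy example with invented labels for three scenes
human = {1: ["on", "on", "above"], 2: ["in", "in"], 3: ["under", "below"]}
llm = {1: ["on", "on"], 2: ["in"], 3: ["below", "below"]}
print(modal_agreement(human, llm))  # scenes 1 and 2 agree, scene 3 does not
```

Real evaluations of label alignment would likely use chance-corrected statistics or distributional comparisons rather than raw modal agreement, but the per-scene structure of the comparison is the same.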