This paper introduces GMRL-BD, a novel algorithm that uses multi-agent reinforcement learning to identify topics where black-box LLMs are likely to generate biased or untrustworthy responses. The method leverages a knowledge graph derived from Wikipedia to guide the RL agents in efficiently exploring the topic space under query constraints. Experiments demonstrate the algorithm's ability to detect untrustworthy boundaries with limited queries, and the authors release a new dataset of LLM biases across various topics.
LLMs are more fragile than we thought: a new algorithm efficiently maps the boundaries of their trustworthiness, revealing specific topics where they're prone to bias.
Large Language Models (LLMs) have shown a high capability in answering questions on a diverse range of topics. However, these models sometimes produce biased, ideologized, or incorrect responses, which limits their applications when it is unclear on which topics their answers can be trusted. In this research, we introduce a novel algorithm, named GMRL-BD, designed to identify the untrustworthy boundaries (in terms of topics) of a given LLM, with only black-box access to the LLM and under specific query constraints. Built on a general Knowledge Graph (KG) derived from Wikipedia, our algorithm employs multiple reinforcement learning agents to efficiently identify topics (nodes in the KG) on which the LLM is likely to generate biased answers. Our experiments demonstrate the efficiency of the algorithm, which can detect the untrustworthy boundary with only a limited number of queries to the LLM. Additionally, we release a new dataset covering popular LLMs, including Llama2, Vicuna, Falcon, Qwen2, Gemma2, and Yi-1.5, with labels indicating the topics on which each LLM is likely to be biased.
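The core loop the abstract describes — RL agents walking a topic graph and spending a fixed query budget to flag topics where the LLM misbehaves — can be sketched as follows. This is a minimal, illustrative sketch only: the toy graph, the `query_llm_bias` oracle, and the epsilon-greedy single-agent walk are all assumptions for demonstration, not the paper's actual GMRL-BD method.

```python
import random

# Toy knowledge graph: topic -> neighboring topics (assumed structure;
# the paper derives its KG from Wikipedia).
KG = {
    "science": ["physics", "politics"],
    "physics": ["science"],
    "politics": ["science", "elections"],
    "elections": ["politics"],
}

# Stand-in for querying the black-box LLM and scoring bias in its answer.
# In the real setting this would prompt the model and score the response.
BIASED_TOPICS = {"politics", "elections"}

def query_llm_bias(topic):
    return 1.0 if topic in BIASED_TOPICS else 0.0

def explore(start, budget, epsilon=0.2, seed=0):
    """Epsilon-greedy walk over the KG under a strict query budget."""
    rng = random.Random(seed)
    q_values = {}    # estimated bias reward per visited topic
    flagged = set()  # topics flagged as untrustworthy
    node, queries = start, 0
    while queries < budget:
        reward = query_llm_bias(node)  # one query spent on this topic
        queries += 1
        q_values[node] = reward
        if reward > 0.5:
            flagged.add(node)
        neighbors = KG[node]
        if rng.random() < epsilon:     # explore a random neighbor
            node = rng.choice(neighbors)
        else:                          # exploit: move toward higher estimates
            node = max(neighbors, key=lambda n: q_values.get(n, 0.5))
    return flagged, queries

flagged, used = explore("science", budget=10)
# With this seed, flagged == {"politics", "elections"} after 10 queries.
```

The paper's multi-agent version would run several such agents in parallel from different start nodes and share their estimates; the point of the sketch is only the budgeted explore/exploit trade-off over graph topics.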