The paper introduces ObfusQAte, a framework to evaluate the robustness of LLMs in factual question answering when presented with obfuscated questions. ObfusQA, the benchmark dataset created using this framework, employs multi-tiered obfuscation levels across named-entity indirection, distractor indirection, and contextual overload. Experiments using ObfusQA reveal that LLMs often fail or hallucinate when faced with nuanced obfuscations, highlighting vulnerabilities in their factual reasoning capabilities.
LLMs, despite their prowess, stumble and hallucinate when questions are subtly obfuscated, revealing a surprising fragility in their factual QA abilities.
The rapid proliferation of Large Language Models (LLMs) has significantly contributed to the development of equitable AI systems capable of factual question answering (QA). However, no known study tests LLMs' robustness when presented with obfuscated versions of questions. To systematically evaluate these limitations, we propose a novel technique, ObfusQAte, and, leveraging it, introduce ObfusQA, a comprehensive, first-of-its-kind framework with multi-tiered obfuscation levels designed to examine LLM capabilities across three distinct dimensions: (i) Named-Entity Indirection, (ii) Distractor Indirection, and (iii) Contextual Overload. By capturing these fine-grained distinctions in language, ObfusQA provides a comprehensive benchmark for evaluating LLM robustness and adaptability. Our study observes that LLMs exhibit a tendency to fail or generate hallucinated responses when confronted with these increasingly nuanced variations. To foster research in this direction, we make ObfusQAte publicly available.
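To make the three dimensions concrete, here is a minimal sketch of how a single factual question might be transformed along each tier. The base question, the paraphrases, and the transformation functions are all illustrative assumptions, not the actual ObfusQAte prompts or pipeline:

```python
# Hypothetical illustration of the three obfuscation dimensions named in the
# abstract; the concrete transformations here are assumptions, not ObfusQAte's.

BASE_QUESTION = "Who wrote Pride and Prejudice?"

def named_entity_indirection(q: str) -> str:
    # Replace the named entity with an indirect description of it.
    return q.replace(
        "Pride and Prejudice",
        "the 1813 novel that opens with a truth universally acknowledged",
    )

def distractor_indirection(q: str) -> str:
    # Prepend plausible but irrelevant entities to misdirect the model.
    return ("Charlotte Bronte wrote Jane Eyre and Mary Shelley wrote "
            "Frankenstein. " + q)

def contextual_overload(q: str) -> str:
    # Bury the question under an excess of tangential context.
    filler = ("Nineteenth-century English fiction spans gothic, romantic, "
              "and realist traditions across many publishing houses. ")
    return filler * 3 + q

for tier, fn in [("Named-Entity Indirection", named_entity_indirection),
                 ("Distractor Indirection", distractor_indirection),
                 ("Contextual Overload", contextual_overload)]:
    print(f"{tier}: {fn(BASE_QUESTION)}")
```

In this reading, each tier preserves the underlying fact being queried while degrading the surface form in a different way, which is what lets the benchmark separate failures of factual knowledge from failures of robustness.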