This paper analyzes the shortcomings of current LLM evaluation benchmarks when applied to ill-defined tasks, characterized by unclear input/output spaces and ambiguous success criteria. Through case studies on Complex Instruction Following and Natural Language to Mermaid Sequence Diagrams, the authors demonstrate that existing metrics conflate distinct failure modes, leading to unstable, non-diagnostic, and unactionable scores. The work highlights the need for more robust and interpretable evaluation designs that address the inherent challenges posed by ill-defined tasks.
LLM benchmarks for ill-defined tasks often produce unstable, non-diagnostic scores that mask distinct failure modes and hinder progress.
Many evaluations of Large Language Models (LLMs) target tasks that are inherently ill-defined, with unclear input and output spaces and ambiguous success criteria. We analyze why existing evaluation benchmarks and metrics fail to provide reliable or diagnostic signals of model capability for such tasks. We examine two case studies: Complex Instruction Following (CIF), where we identify recurring issues including limited coverage of real-world instruction complexity, sensitivity to instruction phrasing, inconsistent and non-comparable metrics, and instability introduced by LLM-based judges; and Natural Language to Mermaid Sequence Diagrams (NL2Mermaid), where we show how multi-faceted evaluation criteria can yield actionable insights beyond aggregate scores. Together, these case studies show that current evaluations frequently conflate distinct failure modes, yielding scores that are unstable, non-diagnostic, and difficult to act upon. Our findings expose fundamental limitations in existing evaluation practices for ill-defined tasks and motivate more robust, interpretable evaluation designs.
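To make the contrast between aggregate and multi-faceted scoring concrete, the sketch below shows one way facet-level evaluation of an NL2Mermaid output might look. It is an illustrative assumption, not the paper's actual criteria: the facet names (`valid_header`, `participant_coverage`, `has_messages`) and the `facet_scores` helper are invented for this example.

```python
import re

def facet_scores(diagram: str, expected_participants: set[str]) -> dict[str, float]:
    """Score a generated Mermaid sequence diagram on separate facets.

    Hypothetical facets for illustration only -- not the paper's criteria.
    """
    lines = [ln.strip() for ln in diagram.strip().splitlines() if ln.strip()]

    # Facet 1: structural validity -- does the output open with the
    # required "sequenceDiagram" header?
    valid_header = 1.0 if lines and lines[0] == "sequenceDiagram" else 0.0

    # Facet 2: semantic coverage -- what fraction of the expected
    # participants appear anywhere in the diagram?
    found = {p for p in expected_participants
             if re.search(rf"\b{re.escape(p)}\b", diagram)}
    participant_coverage = (len(found) / len(expected_participants)
                            if expected_participants else 1.0)

    # Facet 3: content -- does the diagram contain at least one
    # message arrow (->>, -->>, or ->)?
    has_messages = 1.0 if re.search(r"-{1,2}>>?", diagram) else 0.0

    return {
        "valid_header": valid_header,
        "participant_coverage": participant_coverage,
        "has_messages": has_messages,
    }

diagram = """\
sequenceDiagram
    Client->>Server: request
    Server-->>Client: response
"""
print(facet_scores(diagram, {"Client", "Server"}))
# {'valid_header': 1.0, 'participant_coverage': 1.0, 'has_messages': 1.0}
```

Collapsing these three numbers into a single aggregate (e.g., their mean) would hide which facet failed; keeping them separate is what makes the score actionable in the sense the abstract describes.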