FLARE, a novel testing framework, addresses the challenge of testing Multi-Agent LLM Systems (MAS) by extracting specifications and behavioral spaces from agent definitions to build test oracles. It then employs coverage-guided fuzzing to expose failures, analyzing execution logs to determine pass/fail status and generate failure reports. Evaluations on 16 open-source applications show FLARE achieves 96.9% inter-agent and 91.1% intra-agent coverage, outperforming baselines and uncovering 56 previously unknown MAS failures.
LLM-based multi-agent systems are riddled with hidden failure modes that traditional testing misses, but FLARE uncovers them with coverage-guided fuzzing.
Multi-Agent LLM Systems (MAS) have been adopted to automate complex human workflows by breaking down tasks into subtasks. However, due to the non-deterministic behavior of LLM agents and the intricate interactions among agents, MAS applications frequently encounter failures, including infinite loops and failed tool invocations. Traditional software testing techniques are ineffective at detecting such failures because of the lack of LLM agent specifications, the large behavioral space of MAS, and the need for semantics-based correctness judgment. This paper presents FLARE, a novel testing framework tailored for MAS. FLARE takes the source code of a MAS as input and extracts specifications and behavioral spaces from agent definitions. Based on these specifications, FLARE builds test oracles and conducts coverage-guided fuzzing to expose failures. It then analyzes execution logs to judge whether each test has passed and generates failure reports. Our evaluation on 16 diverse open-source applications demonstrates that FLARE achieves 96.9% inter-agent coverage and 91.1% intra-agent coverage, outperforming baselines by 9.5% and 1.0%, respectively. FLARE also uncovers 56 previously unknown failures unique to MAS.
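To make the coverage-guided fuzzing step concrete, here is a minimal sketch of such a loop: mutate inputs, run the system under test, and keep any input that exercises previously unseen inter-agent interactions. All names here (`run_mas`, `mutate`, the three-agent pipeline) are hypothetical stand-ins for illustration, not FLARE's actual API or agent topology.

```python
import random

def run_mas(test_input):
    """Hypothetical MAS executor: returns the set of inter-agent
    transitions (edges) exercised by this input. Simulates a toy
    three-agent pipeline whose behavior depends on the input text."""
    edges = {("planner", "coder")}          # always-taken handoff
    if "tool" in test_input:
        edges.add(("coder", "tool_runner")) # tool invocation path
    if "review" in test_input:
        edges.add(("coder", "reviewer"))    # review path
    return edges

def mutate(seed, rng):
    """Trivial mutation operator: append a random keyword."""
    return seed + " " + rng.choice(["tool", "review", "plan"])

def fuzz(seeds, budget=50, rng=None):
    """Coverage-guided loop: an input joins the corpus only if it
    reaches at least one inter-agent edge not seen before."""
    rng = rng or random.Random(0)
    covered = set()        # inter-agent edges observed so far
    corpus = list(seeds)
    for _ in range(budget):
        candidate = mutate(rng.choice(corpus), rng)
        edges = run_mas(candidate)
        if edges - covered:                 # new coverage -> keep it
            covered |= edges
            corpus.append(candidate)
    return covered, corpus

covered, corpus = fuzz(["summarize this"])
print(len(covered))  # distinct inter-agent edges reached
```

The key design choice, as in conventional coverage-guided fuzzers, is the feedback signal: inputs are retained only when they expand the observed behavioral space, which steers mutation toward under-explored agent interactions rather than random search.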