This paper introduces an automated pipeline leveraging LLMs to detect and diagnose flaky tests in quantum software, addressing the challenge of inconsistent test outcomes due to the probabilistic nature of quantum systems. The pipeline expands an existing dataset of flaky tests using LLMs and cosine similarity, then evaluates several LLMs (OpenAI GPT, Meta LLaMA, Google Gemini, Anthropic Claude) for flakiness classification and root-cause identification. Google Gemini achieves F1-scores of 0.9420 and 0.9643 for flakiness detection and root-cause identification, respectively, demonstrating the potential of LLMs to automate flaky test triage in quantum software.
LLMs can now automatically detect and diagnose flaky tests in quantum software with high accuracy, potentially saving quantum software developers significant time and effort.
Like classical software, quantum software systems rely on automated testing. However, their inherently probabilistic outputs make them susceptible to quantum flakiness -- tests that pass or fail inconsistently without code changes. Such quantum flaky tests can mask real defects and reduce developer productivity, yet systematic tooling for their detection and diagnosis remains limited. This paper presents an automated pipeline to detect flaky-test-related issues and pull requests in quantum software repositories and to support the identification of their root causes. We aim to expand an existing quantum flaky test dataset and evaluate the capability of Large Language Models (LLMs) for flakiness classification and root-cause identification. Building on a prior manual analysis of 14 quantum software repositories, we automate the discovery of additional flaky test cases using LLMs and cosine similarity. We further evaluate a variety of LLMs from the OpenAI GPT, Meta LLaMA, Google Gemini, and Anthropic Claude families for classifying flakiness and identifying root causes from issue descriptions and code context. Classification performance is assessed using standard metrics, including the F1-score. Using our pipeline, we identify 25 previously unknown flaky tests, increasing the original dataset size by 54%. The best-performing model, Google Gemini, achieves an F1-score of 0.9420 for flakiness detection and 0.9643 for root-cause identification, demonstrating that LLMs can provide practical support for triaging flaky-test reports and understanding their underlying causes in quantum software. The expanded dataset and automated pipeline provide reusable artifacts for the quantum software engineering community. Future work will focus on improving detection robustness and exploring automated repair of quantum flaky tests.
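The abstract mentions using cosine similarity to surface candidate flaky-test issues that resemble known examples. The paper does not specify its exact embedding or threshold, so the following is a minimal sketch of the general idea, using simple bag-of-words vectors and a hypothetical similarity threshold; the seed description and candidate issue titles are invented for illustration.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical seed description from a known flaky-test dataset entry.
seed = "test fails intermittently due to random seed in measurement sampling"

# Hypothetical issue titles mined from a repository's tracker.
candidates = [
    "intermittent test failure caused by random measurement sampling",
    "add documentation for the new transpiler pass",
]

# Flag issues whose similarity to the seed exceeds a chosen threshold
# (0.3 here is an illustrative value, not the paper's).
THRESHOLD = 0.3
flagged = [c for c in candidates if cosine_similarity(seed, c) >= THRESHOLD]
```

In the paper's pipeline, flagged candidates would then be passed to an LLM for flakiness classification and root-cause labeling; real systems would typically use learned embeddings rather than raw word counts.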