The authors introduce UAVBench, a new benchmark dataset of 50,000 validated UAV flight scenarios generated using LLMs and structured in JSON format, to address the lack of standardized benchmarks for evaluating LLM-driven autonomous aerial systems. They further extend this with UAVBench_MCQ, a reasoning-oriented benchmark of 50,000 multiple-choice questions designed to assess cognitive and ethical reasoning in UAV operations. Evaluating 32 state-of-the-art LLMs, the study reveals strong performance in perception and policy reasoning but identifies limitations in ethics-aware and resource-constrained decision-making, highlighting areas for future research.
LLMs still struggle with ethical and resource-constrained decisions in UAV flight scenarios, despite strong performance in perception and policy reasoning, as revealed by a new 50,000-scenario benchmark.
Autonomous aerial systems increasingly rely on large language models (LLMs) for mission planning, perception, and decision-making, yet the lack of standardized and physically grounded benchmarks limits systematic evaluation of their reasoning capabilities. To address this gap, we introduce UAVBench, an open benchmark dataset comprising 50,000 validated UAV flight scenarios generated through taxonomy-guided LLM prompting and multi-stage safety validation. Each scenario is encoded in a structured JSON schema that includes mission objectives, vehicle configuration, environmental conditions, and quantitative risk labels, providing a unified representation of UAV operations across diverse domains. Building on this foundation, we present UAVBench_MCQ, a reasoning-oriented extension containing 50,000 multiple-choice questions spanning ten cognitive and ethical reasoning styles, ranging from aerodynamics and navigation to multi-agent coordination and integrated reasoning. This framework enables interpretable and machine-checkable assessment of UAV-specific cognition under realistic operational contexts. We evaluate 32 state-of-the-art LLMs, including GPT-5, ChatGPT-4o, Gemini 2.5 Flash, DeepSeek V3, Qwen3 235B, and ERNIE 4.5 300B, and find strong performance in perception and policy reasoning but persistent challenges in ethics-aware and resource-constrained decision-making. UAVBench establishes a reproducible and physically grounded foundation for benchmarking agentic AI in autonomous aerial systems and advancing next-generation UAV reasoning intelligence. To support open science and reproducibility, we release the UAVBench dataset, the UAVBench_MCQ benchmark, evaluation scripts, and all related materials on GitHub at https://github.com/maferrag/UAVBench.
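To make the scenario representation concrete, the sketch below shows what a single UAVBench-style record might look like as structured JSON. All field names and values here are illustrative assumptions for exposition; they are not taken from the released schema, which should be consulted in the project repository.

```python
import json

# Hypothetical UAVBench-style scenario record. Field names and values are
# assumptions for illustration only, not the actual released schema.
scenario = {
    "scenario_id": "example-0001",
    "mission_objectives": ["survey crop field", "return to launch point"],
    "vehicle_configuration": {
        "airframe": "quadcopter",
        "max_payload_kg": 2.0,
        "battery_capacity_wh": 120.0,
    },
    "environmental_conditions": {
        "wind_speed_mps": 6.5,
        "visibility_km": 8.0,
        "precipitation": "none",
    },
    "risk_labels": {
        "collision_risk": 0.12,
        "battery_depletion_risk": 0.08,
    },
}

# A structured schema like this is machine-checkable: round-trip through
# JSON and verify the top-level sections are present.
decoded = json.loads(json.dumps(scenario))
required = {
    "scenario_id",
    "mission_objectives",
    "vehicle_configuration",
    "environmental_conditions",
    "risk_labels",
}
assert required <= set(decoded)
print(decoded["vehicle_configuration"]["airframe"])  # quadcopter
```

Encoding each scenario as a self-describing JSON object is what allows the multi-stage validation and downstream MCQ generation described in the abstract to be automated and reproducible.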