Search papers, labs, and topics across Lattice.
The paper demonstrates that LLM tool selection, relying solely on textual descriptions, is highly susceptible to manipulation. By strategically editing tool descriptions, the authors achieved a 10x increase in tool usage by GPT-4.1 and Qwen2.5-7B compared to tools with original descriptions. This highlights a critical vulnerability in current tool-calling protocols and the need for more robust tool selection mechanisms.
LLMs can be tricked into using specific tools over others simply by tweaking the tool's description, even if the tool is less suitable.
Large language models (LLMs) can now access a wide range of external tools, thanks to the Model Context Protocol (MCP). This greatly expands their abilities as various agents. However, LLMs rely entirely on the text descriptions of tools to decide which ones to use--a process that is surprisingly fragile. In this work, we expose a vulnerability in prevalent tool/function-calling protocols by investigating a series of edits to tool descriptions, some of which can drastically increase a tool's usage from LLMs when competing with alternatives. Through controlled experiments, we show that tools with properly edited descriptions receive over 10 times more usage from GPT-4.1 and Qwen2.5-7B than tools with original descriptions. We further evaluate how various edits to tool descriptions perform when competing directly with one another and how these trends generalize or differ across a broader set of 17 different models. These phenomena, while giving developers a powerful way to promote their tools, underscore the need for a more reliable foundation for agentic LLMs to select and utilize tools and resources. Our code is publicly available at https://github.com/kazemf78/llm-unreliable-tool-preferences.