UtilityMax Prompting is introduced as a formal framework for specifying LLM tasks with multiple objectives using influence diagrams and utility functions. The LLM is tasked with maximizing expected utility based on the conditional probability distributions within the diagram, forcing explicit reasoning about each objective component. Experiments on the MovieLens 1M dataset using Claude Sonnet 4.6, GPT-5.4, and Gemini 2.5 Pro show consistent improvements in precision and NDCG for multi-objective movie recommendations compared to natural language prompts.
Ditch the ambiguity of natural language prompts: UtilityMax Prompting uses formal math to make LLMs explicitly optimize for multiple objectives, boosting recommendation precision and NDCG.
The success of a Large Language Model (LLM) task depends heavily on its prompt. Most use cases specify prompts in natural language, which is inherently ambiguous when multiple objectives must be satisfied simultaneously. In this paper we introduce UtilityMax Prompting, a framework that specifies tasks using formal mathematical language. We reconstruct the task as an influence diagram in which the LLM's answer is the sole decision variable. A utility function is defined over the conditional probability distributions within the diagram, and the LLM is instructed to find the answer that maximizes expected utility. This constrains the LLM to reason explicitly about each component of the objective, directing its output toward a precise optimization target rather than a subjective natural language interpretation. We validate our approach on the MovieLens 1M dataset across three frontier models (Claude Sonnet 4.6, GPT-5.4, and Gemini 2.5 Pro), demonstrating consistent improvements in precision and Normalized Discounted Cumulative Gain (NDCG) over natural language baselines in a multi-objective movie recommendation task.
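To illustrate the selection rule the abstract describes, here is a minimal sketch of expected-utility maximization over candidate answers. All names, probabilities, and utilities below are hypothetical placeholders; the paper's actual influence diagram, objectives, and utility function are not specified in this summary.

```python
# Sketch of the decision rule: pick the answer a that maximizes
# EU(a) = sum over states s of P(s | a) * U(s, a).
# All values here are illustrative, not from the paper.

def expected_utility(answer, states, p_state_given_answer, utility):
    """Expected utility of one candidate answer."""
    return sum(p_state_given_answer(s, answer) * utility(s, answer)
               for s in states)

def best_answer(candidates, states, p_state_given_answer, utility):
    """Answer with the highest expected utility (the sole decision variable)."""
    return max(candidates,
               key=lambda a: expected_utility(a, states,
                                              p_state_given_answer, utility))

# Toy recommendation example: two candidate movies, two outcome states.
states = ["enjoys", "does_not"]
p = {("Movie A", "enjoys"): 0.7, ("Movie A", "does_not"): 0.3,
     ("Movie B", "enjoys"): 0.4, ("Movie B", "does_not"): 0.6}
u = {"enjoys": 1.0, "does_not": 0.0}

choice = best_answer(
    ["Movie A", "Movie B"], states,
    lambda s, a: p[(a, s)],   # P(state | answer)
    lambda s, a: u[s],        # utility of each outcome
)
# choice == "Movie A"  (EU = 0.7 vs 0.4)
```

In the paper's framing, the LLM itself performs this maximization in-context from the prompt's formal specification, rather than via explicit code; the sketch only makes the optimization target concrete.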