The paper introduces HIPO, a reinforcement learning framework that formulates Hierarchical Instruction Following (HIF) as a Constrained Markov Decision Process to improve system prompt compliance in LLMs. HIPO uses a primal-dual safe reinforcement learning approach to explicitly enforce system prompt compliance as a constraint while maximizing user utility. Experiments across Qwen, Phi, and Llama models show HIPO significantly improves both system compliance and user utility, and mechanistic analysis reveals the model shifts attention toward system tokens.
A new constrained RL framework improves LLMs' compliance with complex, hierarchical instructions by treating system prompts as strict algorithmic boundaries.
Hierarchical Instruction Following (HIF) refers to the problem of prompting large language models with a priority-ordered stack of instructions. Standard methods such as RLHF and DPO typically fail at this problem because they optimize a single objective and never explicitly enforce system prompt compliance. Meanwhile, supervised fine-tuning relies on mimicking filtered, compliant data, which fails to establish the priority asymmetry at the algorithmic level. In this paper, we introduce \textsc{HIPO}, a novel alignment framework that formulates HIF as a Constrained Markov Decision Process. \textsc{HIPO} elevates system prompts from mere input context to strict algorithmic boundaries. Using a primal-dual safe reinforcement learning approach, the algorithm dynamically enforces system prompt compliance as an explicit constraint, maximizing user utility strictly within this feasible region. Extensive evaluations across diverse model architectures (e.g., Qwen, Phi, Llama) demonstrate that \textsc{HIPO} significantly improves both system compliance and user utility. Furthermore, mechanistic analysis reveals that this constrained optimization autonomously drives the model to shift its attention toward long-range system tokens, providing a principled foundation for reliable LLM deployment in complex workflows.
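The primal-dual mechanism underlying this kind of constrained optimization can be illustrated with a minimal sketch. This is not \textsc{HIPO}'s actual training code; the quadratic reward, linear constraint, and all names here are hypothetical stand-ins for "user utility" and "system compliance". The primal player ascends the Lagrangian in the policy parameter while the dual player raises the multiplier whenever the constraint is violated:

```python
# Hedged sketch of primal-dual constrained optimization (illustrative only):
# maximize a reward R(theta) subject to a compliance cost C(theta) <= limit,
# via the Lagrangian L(theta, lam) = R(theta) - lam * (C(theta) - limit).

def primal_dual_step(theta, lam, reward_grad, cost_grad, cost, limit,
                     lr_theta=0.1, lr_lam=0.05):
    # Primal step: gradient ascent on the Lagrangian with respect to theta.
    theta = theta + lr_theta * (reward_grad - lam * cost_grad)
    # Dual step: increase lam in proportion to constraint violation,
    # projected back onto lam >= 0.
    lam = max(0.0, lam + lr_lam * (cost - limit))
    return theta, lam

# Toy 1-D problem: maximize R(theta) = -(theta - 2)^2 ("utility")
# subject to C(theta) = theta <= 1 ("compliance budget").
# The unconstrained optimum is theta = 2; the constrained optimum is theta = 1.
theta, lam = 0.0, 0.0
for _ in range(500):
    reward_grad = -2.0 * (theta - 2.0)  # dR/dtheta
    theta, lam = primal_dual_step(theta, lam, reward_grad,
                                  cost_grad=1.0, cost=theta, limit=1.0)
```

The dynamics first pull theta toward the unconstrained optimum, then the growing multiplier pushes it back onto the constraint boundary, mirroring how the framework maximizes user utility only within the compliant feasible region.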