The paper introduces Financial Instruction Following Evaluation (FIFE), a new benchmark for evaluating language models' ability to follow complex, interdependent instructions in financial analysis. FIFE consists of 88 human-authored prompts and a verification system built on chainable constraints that provides fine-grained reward signals. Evaluating 53 models in a zero-shot setting, the authors find that the best open-weight model outperforms the leading proprietary system, yet even top models fall short of perfect compliance, underscoring the difficulty of financial instruction following.
The best open-weight language model beats proprietary systems on finance instruction following, but *all* models still struggle with the complexity of real-world financial analysis.
Language Models (LMs) struggle with complex, interdependent instructions, particularly in high-stakes domains like finance where precision is critical. We introduce FIFE, a novel, high-difficulty benchmark designed to assess LM instruction-following capabilities for financial analysis tasks. FIFE comprises 88 human-authored prompts and employs a verification system with chainable, verifiable constraints for fine-grained reward signals. We evaluate 53 models (proprietary, open-weight, open-source) in a zero-shot setting. Our key findings reveal a clear performance hierarchy: the top open-weight model (76.1 strict / 79.5 loose) surpasses the leading proprietary system (65.9 strict / 70.5 loose), while the best open-source models lag significantly (45.5 strict / 48.9 loose). However, even top-performing models struggle with FIFE's complex requirements, failing to achieve perfect compliance. We release our dataset and code as an open-source resource to promote research in Reinforcement Learning for the financial domain.
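The abstract describes a verifier built from chainable constraints that yields both strict (all-or-nothing) and loose (per-constraint) scores. As a minimal sketch of how such a verifier could work, the toy Python below chains constraints so that a check whose dependencies failed is also marked failed; the `Constraint` class, the example checks, and the strict/loose scoring rule are all illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch of chainable-constraint verification; names and
# scoring are assumptions, not FIFE's actual code.
@dataclass
class Constraint:
    name: str
    check: Callable[[str], bool]
    depends_on: List[str] = field(default_factory=list)  # must pass first

def verify(response: str, constraints: List[Constraint]) -> Dict:
    """Evaluate constraints in order; a constraint whose dependencies
    failed is itself marked failed, giving a fine-grained signal."""
    results: Dict[str, bool] = {}
    for c in constraints:
        deps_ok = all(results.get(d, False) for d in c.depends_on)
        results[c.name] = deps_ok and c.check(response)
    strict = float(all(results.values()))          # all constraints pass
    loose = sum(results.values()) / len(results)   # fraction passed
    return {"per_constraint": results, "strict": strict, "loose": loose}

# Toy financial-analysis prompt with two chained constraints.
constraints = [
    Constraint("mentions_ticker", lambda r: "AAPL" in r),
    Constraint("gives_verdict",
               lambda r: "buy" in r.lower() or "sell" in r.lower(),
               depends_on=["mentions_ticker"]),
]
print(verify("AAPL looks like a buy given current margins.", constraints))
```

A reward model for RL fine-tuning could then consume the `loose` score directly as a dense reward, which is presumably what "fine-grained reward signals" refers to.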