University of Virginia
LLMs can be steered to respect cultural nuances in specific tasks by training modular adapters and routing between them, significantly improving performance over standard alignment techniques.
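A minimal sketch of what that adapter routing could look like, assuming LoRA-style low-rank adapters keyed by a culture tag; the class names, dimensions, and routing key below are illustrative, not taken from the paper:

```python
# Hypothetical sketch: one shared layer plus per-culture low-rank adapters,
# with a hard router that picks an adapter from a culture tag.
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """LoRA-style adapter: down-project, up-project, added as a residual."""
    def __init__(self, dim: int, rank: int = 4):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)

    def forward(self, x):
        return self.up(self.down(x))

class CultureRoutedLayer(nn.Module):
    """Shared base transform plus a culture-specific adapter chosen by tag."""
    def __init__(self, dim: int, cultures):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.adapters = nn.ModuleDict({c: LowRankAdapter(dim) for c in cultures})

    def forward(self, x, culture: str):
        # Route: add the correction from the adapter matching the culture tag.
        return self.base(x) + self.adapters[culture](x)

layer = CultureRoutedLayer(dim=16, cultures=["arabic", "hindi", "spanish"])
hidden = torch.randn(2, 16)
out = layer(hidden, culture="hindi")  # routed through the matching adapter
```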
LLMs can be tricked into making incorrect causal judgments by semantically similar questions, even with Chain-of-Thought prompting, highlighting a critical gap in true causal reasoning.
LLMs can be coaxed into shorter, more accurate reasoning chains by rewarding tokens that maximize mutual information with the final answer.
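One way to read that reward, sketched below under my own assumptions: each reasoning token is credited with how much it raises the model's probability of the final answer, so uninformative tokens earn nothing and can be pruned. The scorer here is a toy stand-in for log p(answer | question, reasoning prefix), and all names are hypothetical.

```python
# Hypothetical sketch: credit each reasoning token with the gain in
# log p(answer | prefix) it produces, a pointwise estimate of how much
# information that token carries about the final answer.
import math

def answer_logprob(prefix_tokens, answer):
    """Stand-in scorer. In practice: the LLM's log-prob of `answer`
    conditioned on the question plus the reasoning prefix."""
    overlap = len(set(prefix_tokens) & set(answer.lower().split()))
    return math.log(1.0 + overlap)  # toy heuristic, for illustration only

def token_rewards(reasoning_tokens, answer, scorer=answer_logprob):
    """Per-token reward: marginal increase in answer log-prob from that token."""
    rewards = []
    prev = scorer([], answer)
    for t in range(1, len(reasoning_tokens) + 1):
        cur = scorer(reasoning_tokens[:t], answer)
        rewards.append(cur - prev)  # zero reward for tokens that add nothing
        prev = cur
    return rewards

steps = "the tank holds 40 liters so half is 20 liters".split()
print(token_rewards(steps, "20 liters"))
```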
Stop visual grounding errors from snowballing in vision-language models: this method lets models re-consult visual evidence during later reasoning steps.
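A minimal control-loop sketch of that idea, with every model call stubbed out as a plain callable; `think`, `look`, and the stopping check are my placeholders, not the paper's API:

```python
# Hypothetical sketch: interleave reasoning with re-grounding. Whenever a
# reasoning step makes a visual claim, re-query the image before continuing,
# so an early grounding mistake does not propagate through later steps.
from typing import Callable, List

def reason_with_regrounding(
    image,
    question: str,
    think: Callable[[object, List[str]], str],  # next reasoning step from context
    look: Callable[[object, str], str],         # re-inspect the image for a claim
    is_visual_claim: Callable[[str], bool],     # does this step reference the image?
    max_steps: int = 5,
) -> List[str]:
    context = [question]
    for _ in range(max_steps):
        step = think(image, context)
        context.append(step)
        if is_visual_claim(step):
            context.append(look(image, step))   # fresh visual evidence, not memory
        if step.startswith("Answer:"):
            break
    return context

# Toy stand-ins so the loop runs end to end.
trace = reason_with_regrounding(
    image="kitchen.jpg",
    question="How many mugs are on the table?",
    think=lambda img, ctx: "I see two mugs." if len(ctx) == 1 else "Answer: three",
    look=lambda img, claim: "[re-check] three mugs, one is partly hidden",
    is_visual_claim=lambda s: "see" in s,
)
print("\n".join(trace))
```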