University of Science and Technology of China
LVLMs hallucinate less when you intervene *before* generation starts, cleaning up the initial key-value (KV) cache with modality-aware steering vectors.
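A minimal sketch of the idea, assuming a standard transformer KV cache laid out as per-layer (key, value) tensors of shape (batch, heads, seq_len, head_dim); the `steer_kv_cache` helper, the steering vectors, and the `scale` knob are illustrative assumptions, not the paper's implementation.

```python
from typing import List, Tuple
import torch

def steer_kv_cache(
    past_key_values: List[Tuple[torch.Tensor, torch.Tensor]],
    steering_vectors: List[torch.Tensor],
    scale: float = 1.0,
) -> List[Tuple[torch.Tensor, torch.Tensor]]:
    """Shift every cached value state by a layer-specific steering vector.

    Keys are left untouched in this sketch; an implementation could steer them too.
    """
    steered = []
    for (keys, values), direction in zip(past_key_values, steering_vectors):
        # Broadcast the (head_dim,) direction across batch, heads, and positions.
        steered.append((keys, values + scale * direction))
    return steered

# Tiny demo with random tensors standing in for a prefill pass.
num_layers, batch, heads, seq_len, head_dim = 2, 1, 4, 8, 16
cache = [
    (torch.randn(batch, heads, seq_len, head_dim),
     torch.randn(batch, heads, seq_len, head_dim))
    for _ in range(num_layers)
]
directions = [torch.randn(head_dim) for _ in range(num_layers)]

steered_cache = steer_kv_cache(cache, directions, scale=0.1)
print(steered_cache[0][1].shape)  # torch.Size([1, 4, 8, 16])
```

In an actual LVLM, `past_key_values` would come from the prefill pass over the image and prompt tokens, and decoding would then resume from the edited cache rather than the raw one.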
Uncover the hidden hierarchy of LLM components that drive instruction following, revealing that importance in activation space does not always mean the behavior is encoded in the corresponding parameters.
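A rough illustration of the distinction, assuming a GPT-2 model from Hugging Face `transformers` and zero-ablation of one MLP block as a (hypothetical) activation-space importance probe; a large loss increase marks the component as important in activation space, which by itself says nothing about where that behavior is encoded in parameter space.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
tok = AutoTokenizer.from_pretrained("gpt2")

text = "Answer in exactly one word: what is the capital of France? Paris"
batch = tok(text, return_tensors="pt")

def lm_loss() -> float:
    # Language-modeling loss over the prompt, used as a simple behavior score.
    with torch.no_grad():
        return model(**batch, labels=batch["input_ids"]).loss.item()

def loss_with_mlp_ablated(layer: int) -> float:
    # Zero out one MLP block's output via a forward hook (activation-space ablation).
    module = model.transformer.h[layer].mlp
    handle = module.register_forward_hook(lambda mod, inp, out: torch.zeros_like(out))
    try:
        return lm_loss()
    finally:
        handle.remove()

print("baseline loss:           ", lm_loss())
print("layer 5 MLP zero-ablated:", loss_with_mlp_ablated(5))
```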