Reasoning models aren't just verbose; they're actively *harmed* by their own verbosity. Yet a simple self-distillation trick can compress their outputs by up to 59% while boosting accuracy by up to 16 points.
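A minimal sketch of one plausible self-distillation loop for trace compression, assuming the common recipe of fine-tuning a model on its own shortest correct reasoning traces (the function names, the `Trace` type, and the toy model below are illustrative, not taken from the paper):

```python
import random
from dataclasses import dataclass

@dataclass
class Trace:
    reasoning: str
    answer: str

def sample_traces(model, problem: str, n: int = 8) -> list[Trace]:
    """Sample n reasoning traces from the model for one problem."""
    return [model(problem) for _ in range(n)]

def shortest_correct(traces: list[Trace], gold: str) -> Trace | None:
    """Self-distillation target: the shortest trace that still reaches the right answer."""
    correct = [t for t in traces if t.answer == gold]
    return min(correct, key=lambda t: len(t.reasoning)) if correct else None

def build_distillation_set(model, dataset) -> list[tuple[str, str]]:
    """Collect (problem, compressed trace) pairs; the model is then fine-tuned on these."""
    pairs = []
    for problem, gold in dataset:
        target = shortest_correct(sample_traces(model, problem), gold)
        if target is not None:
            pairs.append((problem, target.reasoning + "\n" + target.answer))
    return pairs

# Toy stand-in for a real sampler, so the sketch runs end to end.
def toy_model(problem: str) -> Trace:
    return Trace(reasoning="step. " * random.randint(1, 6), answer="42")

print(build_distillation_set(toy_model, [("What is 6 * 7?", "42")]))
```

The compression comes from the selection step: every training target is already correct, so fine-tuning only shifts the model toward shorter traces, not away from accuracy.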
By surgically removing "hallucination patterns" from a model's hidden state, HulluEdit offers a reference-free, single-pass method to dramatically reduce object hallucinations in LVLMs without sacrificing visual grounding.
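The blurb describes an inference-time hidden-state edit. A minimal sketch of generic activation-editing machinery that such a method could build on, not HulluEdit's published algorithm: estimate a "hallucination direction" as the mean difference between hidden states from hallucinated and grounded outputs, then project that direction out of each hidden state in a single forward pass (no reference model needed).

```python
import numpy as np

def hallucination_direction(h_halluc: np.ndarray, h_grounded: np.ndarray) -> np.ndarray:
    """Mean-difference direction between hallucinated and grounded hidden states.
    Both inputs are (num_samples, hidden_dim) activation matrices."""
    v = h_halluc.mean(axis=0) - h_grounded.mean(axis=0)
    return v / np.linalg.norm(v)  # unit vector

def edit_hidden_states(h: np.ndarray, v: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Remove each hidden state's component along v.
    alpha=1.0 fully projects out the direction; smaller values soften the edit."""
    return h - alpha * (h @ v)[:, None] * v[None, :]

# Toy usage: estimate a direction from synthetic activations, then edit 2 token states.
rng = np.random.default_rng(0)
v = hallucination_direction(rng.normal(1.0, 1.0, (16, 4)), rng.normal(0.0, 1.0, (16, 4)))
h = rng.normal(size=(2, 4))
h_edited = edit_hidden_states(h, v)
assert np.allclose(h_edited @ v, 0.0)  # edited states are orthogonal to the direction
```

Because the edit is a fixed linear projection, it adds no second decoding pass and needs no contrastive reference output, which is what "reference-free, single-pass" implies.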