The Ohio State University
Attention Sink, where Transformers fixate on seemingly irrelevant tokens, is more than a quirk: it is a fundamental challenge that affects training and inference, can even cause hallucinations, and demands a systematic approach to understanding and mitigation.
Text-based speculative decoding falls short for vision-language models, but ViSkip adapts dynamically to vision tokens to achieve state-of-the-art acceleration.
LLMs can reason better if you force them to explore *different* ways of being right, not just be more random.