The Ohio State University, Columbus, OH, USA
Counterintuitively, scaling up LLM decoders in speech recognition doesn't guarantee fairness; audio encoder design matters more, as demonstrated by Whisper's pathological hallucinations on Indian-accented speech and its repetition loops under masking.
Recurrent-depth transformers don't just memorize facts; they learn to *reason* with them, unlocking systematic generalization and depth extrapolation that elude standard transformers.