Achieve near-perfect speech recognition at a ridiculously low 200 bits per second by using reinforcement learning to directly optimize a neural codec for intelligibility.
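A minimal sketch of the idea: a toy codec samples discrete codes (the "bitstream"), and REINFORCE pushes up codes that score well on an intelligibility reward. The module names, the MSE-based reward stand-in, and the 4-bits-per-frame arithmetic are illustrative assumptions; the paper would use something like negative WER from a frozen ASR model.

```python
import torch
import torch.nn as nn

class TinyCodec(nn.Module):
    """Toy codec: encode frames to a categorical code, then decode back.
    A 200 bps budget at ~50 frames/s allows ~4 bits/frame, i.e. a 16-entry
    codebook (illustrative arithmetic, not the paper's actual setup)."""
    def __init__(self, frame_dim=80, codebook=16):
        super().__init__()
        self.enc = nn.Linear(frame_dim, codebook)   # logits over code indices
        self.dec = nn.Embedding(codebook, frame_dim)

    def forward(self, frames):
        dist = torch.distributions.Categorical(logits=self.enc(frames))
        codes = dist.sample()                        # stochastic "transmission"
        return self.dec(codes), dist.log_prob(codes)

def intelligibility_reward(recon, frames):
    # Stand-in for a real reward such as negative WER from a frozen ASR model.
    return -((recon - frames) ** 2).mean(dim=-1)

codec = TinyCodec()
opt = torch.optim.Adam(codec.parameters(), lr=1e-3)
frames = torch.randn(32, 80)                         # dummy batch of features

for _ in range(100):
    recon, logp = codec(frames)
    reward = intelligibility_reward(recon, frames)
    # REINFORCE on the encoder (centered reward weights the code log-probs),
    # plus a reconstruction term so the decoder also trains.
    loss = (-(reward - reward.mean()).detach() * logp).mean() \
           + ((recon - frames) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```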
LLMs get a reasoning boost by treating information extraction not as a one-off task, but as a process that maintains a dynamic cache, persisting and filtering extracted information across multiple steps.
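A minimal sketch of the cache idea, with illustrative class and method names (not the paper's API): facts extracted at each step persist, get deduplicated by relevance, and are filtered before feeding the next reasoning step.

```python
from dataclasses import dataclass, field

@dataclass
class ExtractionCache:
    facts: dict = field(default_factory=dict)   # fact text -> relevance score

    def update(self, new_facts: dict) -> None:
        """Merge newly extracted facts, keeping the highest relevance seen."""
        for fact, score in new_facts.items():
            self.facts[fact] = max(score, self.facts.get(fact, 0.0))

    def filter(self, min_score: float = 0.5, top_k: int = 5) -> list[str]:
        """Return the most relevant persisted facts for the next LLM step."""
        kept = [(f, s) for f, s in self.facts.items() if s >= min_score]
        return [f for f, _ in sorted(kept, key=lambda x: -x[1])[:top_k]]

cache = ExtractionCache()
cache.update({"Paris is the capital of France": 0.9, "Paris has cafés": 0.3})
cache.update({"France is in Europe": 0.7})
print(cache.filter())   # facts carried into the next reasoning step
```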
Achieve 75% input length reduction in LLMs with minimal performance loss by compressing token embeddings directly in the latent space.
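A 75% reduction corresponds to 4:1 compression. A minimal sketch, assuming a learned projection that merges each group of four token embeddings into one (the paper's compressor may differ):

```python
import torch
import torch.nn as nn

class LatentCompressor(nn.Module):
    def __init__(self, d_model=768, ratio=4):
        super().__init__()
        self.ratio = ratio
        # Learned projection mapping each group of `ratio` embeddings
        # to a single embedding the LLM consumes in their place.
        self.proj = nn.Linear(d_model * ratio, d_model)

    def forward(self, emb):                      # emb: (batch, seq, d_model)
        b, s, d = emb.shape
        s_trim = (s // self.ratio) * self.ratio  # drop a ragged tail for clarity
        grouped = emb[:, :s_trim].reshape(b, s_trim // self.ratio, d * self.ratio)
        return self.proj(grouped)                # (batch, seq/4, d_model)

emb = torch.randn(2, 128, 768)                   # e.g. output of an embedding layer
compressed = LatentCompressor()(emb)
print(compressed.shape)                          # torch.Size([2, 32, 768])
```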
Stochastic Trust-Region methods offer stable deep learning optimization and handle hard constraints effectively, all without manual learning-rate tuning.
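The core mechanism, sketched minimally on a toy objective: propose a step no longer than the trust radius, compare actual to predicted decrease, and grow or shrink the radius instead of tuning a learning rate. This is generic (unconstrained) trust-region logic, not the paper's exact algorithm.

```python
import numpy as np

def loss(w):           # toy objective standing in for a training loss
    return ((w - 3.0) ** 2).sum()

def grad(w):           # stochastic gradient: true gradient plus noise
    return 2 * (w - 3.0) + 0.1 * np.random.randn(*w.shape)

w, radius = np.zeros(2), 1.0
for _ in range(50):
    g = grad(w)
    step = -radius * g / (np.linalg.norm(g) + 1e-12)   # Cauchy-like step
    predicted = -g @ step                               # linear-model decrease
    actual = loss(w) - loss(w + step)
    rho = actual / (predicted + 1e-12)                  # agreement ratio
    if rho > 0.1:                                       # model trusted: take step
        w = w + step
    radius = radius * 2.0 if rho > 0.75 else radius * 0.5 if rho < 0.1 else radius
print(w)   # approaches the minimizer [3, 3] with no learning-rate schedule
```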
Fine-grained rubrics unlock significantly better visual reasoning in preference optimization, rivaling GPT-5.4 with a much smaller model.
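A minimal sketch of how fine-grained rubric scores can feed preference optimization (e.g. DPO pairs). The criteria, weights, and example are illustrative assumptions, not the paper's rubric.

```python
RUBRIC = {"grounding": 0.4, "spatial_reasoning": 0.4, "completeness": 0.2}

def rubric_score(per_criterion: dict) -> float:
    """Weighted sum of per-criterion judge scores in [0, 1]."""
    return sum(RUBRIC[c] * per_criterion[c] for c in RUBRIC)

def to_preference_pair(prompt, resp_a, resp_b, scores_a, scores_b):
    """Label the higher-scoring response 'chosen' for preference training."""
    a, b = rubric_score(scores_a), rubric_score(scores_b)
    chosen, rejected = (resp_a, resp_b) if a >= b else (resp_b, resp_a)
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

pair = to_preference_pair(
    "How many red cubes are left of the sphere?",
    "Two red cubes are left of the sphere.",
    "There are some cubes.",
    {"grounding": 0.9, "spatial_reasoning": 0.9, "completeness": 0.8},
    {"grounding": 0.3, "spatial_reasoning": 0.2, "completeness": 0.4},
)
print(pair["chosen"])
```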
Forget modality-specific architectures: this work achieves state-of-the-art camouflaged object detection by learning modality-agnostic prompts for SAM, unlocking efficient adaptation to new modalities.
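A minimal sketch of the adaptation pattern: freeze a large backbone and learn only a small set of modality-agnostic prompt tokens prepended to its input. The encoder below is a generic stand-in, not SAM's real interface.

```python
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    def __init__(self, d_model=256, n_prompts=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.backbone.parameters():
            p.requires_grad = False                    # backbone stays frozen
        # The only trainable parameters: modality-agnostic prompt tokens.
        self.prompts = nn.Parameter(torch.randn(1, n_prompts, d_model) * 0.02)

    def forward(self, patch_tokens):                   # (batch, seq, d_model)
        b = patch_tokens.size(0)
        x = torch.cat([self.prompts.expand(b, -1, -1), patch_tokens], dim=1)
        return self.backbone(x)

model = PromptedEncoder()
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(trainable)   # only the prompt tokens: 8 * 256 = 2048 parameters
```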
Robot manipulation models trained mostly on VR data can perform as well as those trained on real-world data, at 1/20th the cost.
Forget opaque embeddings: Cross-Layer Transcoders reveal how ViT layers contribute to the final representation, pinpointing the critical few that drive performance.
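A minimal sketch of the transcoder objective: a sparse bottleneck trained to map one layer's activations to a later layer's, so feature activity gives an interpretable readout of what that layer contributes. Toy dimensions and random stand-in activations; in practice one transcoder is fit per layer on real ViT activations and the layers are compared.

```python
import torch
import torch.nn as nn

class Transcoder(nn.Module):
    def __init__(self, d_model=64, n_features=256):
        super().__init__()
        self.enc = nn.Linear(d_model, n_features)
        self.dec = nn.Linear(n_features, d_model)

    def forward(self, x):
        feats = torch.relu(self.enc(x))   # sparse feature activations
        return self.dec(feats), feats

tc = Transcoder()
opt = torch.optim.Adam(tc.parameters(), lr=1e-3)
src = torch.randn(512, 64)                # stand-in: layer-l ViT activations
tgt = src @ torch.randn(64, 64) * 0.1     # stand-in: later-layer activations

for _ in range(200):
    recon, feats = tc(src)
    # Reconstruction plus L1 sparsity: the usual transcoder objective.
    loss = ((recon - tgt) ** 2).mean() + 1e-3 * feats.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Feature activation mass serves as a per-layer contribution readout.
print(feats.abs().mean().item())
```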
Polarization cues, often overlooked, unlock significantly more robust monocular depth estimation, especially in scenes with challenging reflective or transparent surfaces.
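The polarization cues come from standard Stokes-parameter formulas over a four-angle polarimetric capture. A minimal sketch that computes degree and angle of linear polarization and stacks them as extra input channels; the depth network itself is omitted, and the channel layout is an assumption.

```python
import numpy as np

def polarization_cues(i0, i45, i90, i135, eps=1e-6):
    """Degree (DoLP) and angle (AoLP) of linear polarization from four
    polarizer-angle images, via the linear Stokes parameters."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)           # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)   # strong on reflective/transparent surfaces
    aolp = 0.5 * np.arctan2(s2, s1)
    return dolp, aolp

h, w = 128, 160
frames = [np.random.rand(h, w) for _ in range(4)]   # dummy polarizer images
dolp, aolp = polarization_cues(*frames)
# Intensity plus polarization cues as a 3-channel input to the depth network.
depth_input = np.stack([frames[0], dolp, aolp], axis=0)
print(depth_input.shape)   # (3, 128, 160)
```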