LLMs are still far from being autonomous scientists, failing to master even simplified, end-to-end physics research workflows.
LVLMs can be made significantly less prone to hallucinations, without any training, by explicitly grounding them in visual evidence and iteratively self-refining their answers based on verified information.
Current reward models for spoken dialogue systems fail to capture crucial paralinguistic cues and natural speech phenomena; this new model closes the gap by operating directly on speech, outperforming existing audio LLMs.
Achieve adaptive, perception-aware image compression without any training by simply steering a pre-trained diffusion model.