By explicitly modeling modality reliability, RSGMamba avoids feature degradation from noisy or misaligned data, achieving state-of-the-art multimodal semantic segmentation.
LLMs choke on long numerical sequences, but a simple separator token trick can boost accuracy by 35% and cut token costs by 16%, without any training.
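The separator trick amounts to a preprocessing step on the input string. Here is a minimal, hypothetical sketch of the idea; the separator token ("|") and group size (3) are assumptions for illustration, not the paper's actual settings.

```python
# Hypothetical sketch: insert separator tokens into a long numerical
# sequence before sending it to an LLM. Grouping digits this way can
# make number boundaries explicit to the tokenizer.
def add_separators(numbers, group_size=3, sep=" | "):
    """Join numbers into space-separated groups, with a separator
    token between groups."""
    groups = [
        " ".join(str(n) for n in numbers[i:i + group_size])
        for i in range(0, len(numbers), group_size)
    ]
    return sep.join(groups)

seq = [3, 1, 4, 1, 5, 9, 2, 6]
print(add_separators(seq))  # prints "3 1 4 | 1 5 9 | 2 6"
```

Because this happens entirely at the prompt level, no model retraining is required.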
LALMs get a noise-canceling superpower with Focus-Then-Listen, a plug-and-play module that boosts performance without expensive retraining.
Environmental sound deepfakes are a rising threat, and this challenge reveals the current state-of-the-art in detecting them, highlighting both the progress and remaining gaps.
LALMs struggle with polyphonic audio, losing significant performance on tasks requiring reasoning about concurrent sound events, as revealed by the new PolyBench benchmark.
Today's best AI agents fail at realistic software engineering tasks, stalling before even reaching 30% completion, revealing the urgent need for better long-horizon planning and human-AI collaboration.