Bitstream-corrupted video restoration remains a significant challenge even after recent advances, as the NTIRE 2026 challenge results reveal.
Autonomous vehicles can drive more safely and reliably by grounding LLM reasoning in a "Commonsense World" that quantifies and leverages the trustworthiness of LLM outputs.
Forget quadratic attention: FEAT achieves state-of-the-art performance on structured data with linear complexity and 40x faster inference.
Lightweight LLMs can now generate high-quality data preparation pipelines for TableQA in a single step, outperforming multi-step approaches while slashing costs and latency.
Attention sinks, long considered essential in autoregressive language models, turn out to be surprisingly prunable in diffusion language models, yielding better efficiency.