Search papers, labs, and topics across Lattice.
Stop wasting compute: large reasoning models (LRMs) can cut reasoning steps by 30% without sacrificing accuracy, using a metacognitive approach to determine when "thinking is enough."
Achieve up to a 77.5% reduction in semantic alert delay, with 98.33% of visual evidence delivered within 0.5 s, by intelligently cascading small and large models and adaptively transmitting data in edge-cloud MLLM systems.