Hallucinations in multimodal reasoning models are linked to high-entropy transition words and can be reduced by decoding with probability-weighted continuous embeddings rather than discrete tokens during these uncertain states.
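
A minimal sketch of that decoding idea, assuming a Hugging Face-style causal LM and a hand-picked entropy threshold (both are illustrative assumptions, not details from the paper): when the next-token distribution is high-entropy, the model is fed the probability-weighted mixture of token embeddings instead of a single discrete token's embedding.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def decode_with_soft_embeddings(model, tokenizer, prompt, max_new_tokens=64,
                                entropy_threshold=3.0):
    """Greedy decoding that, at high-entropy steps, continues from a
    probability-weighted mixture of token embeddings (a "soft" token)
    rather than a single discrete token embedding."""
    embed = model.get_input_embeddings()                       # (V, H) lookup table
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    inputs_embeds = embed(input_ids)                           # (1, T, H)
    generated = []

    for _ in range(max_new_tokens):
        logits = model(inputs_embeds=inputs_embeds).logits[:, -1, :]   # (1, V)
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)

        next_id = probs.argmax(-1)                             # still report a token
        generated.append(next_id.item())
        if next_id.item() == tokenizer.eos_token_id:
            break

        if entropy.item() > entropy_threshold:
            # Uncertain state: feed the expected (continuous) embedding.
            next_embed = probs @ embed.weight                  # (1, H)
        else:
            next_embed = embed(next_id)                        # (1, H)
        inputs_embeds = torch.cat([inputs_embeds, next_embed.unsqueeze(1)], dim=1)

    return tokenizer.decode(generated)
```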
Medical VLMs get a calibration boost without training or labels: LATA sharpens predictions by smoothing over a k-NN graph, shrinking prediction sets and balancing class coverage.
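
A rough, label-free sketch of smoothing predictions over a k-NN graph of test-time features; the function names, hyperparameters (k, alpha, tau), and the prediction-set rule below are assumptions for illustration, not LATA's actual algorithm.

```python
import numpy as np

def knn_graph_smooth(probs, feats, k=10, alpha=0.5, iters=2):
    """Smooth predicted class probabilities over a cosine k-NN graph of
    per-sample features, with no training and no labels."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sims = f @ f.T
    np.fill_diagonal(sims, -np.inf)
    nbrs = np.argsort(-sims, axis=1)[:, :k]                # (N, k) neighbour indices

    smoothed = probs.copy()
    for _ in range(iters):
        neighbour_mean = smoothed[nbrs].mean(axis=1)       # (N, C)
        smoothed = alpha * probs + (1 - alpha) * neighbour_mean
        smoothed /= smoothed.sum(axis=1, keepdims=True)
    return smoothed

def prediction_sets(smoothed, tau=0.9):
    """Greedily add classes, highest probability first, until mass >= tau."""
    order = np.argsort(-smoothed, axis=1)
    csum = np.take_along_axis(smoothed, order, axis=1).cumsum(axis=1)
    sizes = (csum < tau).sum(axis=1) + 1
    return [order[i, :sizes[i]].tolist() for i in range(len(smoothed))]
```

Sharper, better-calibrated scores then translate directly into smaller prediction sets for the same coverage target.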
Turns out, skipping the boring parts of a video (like static backgrounds) makes video models both faster and more accurate, beating state-of-the-art models with less data.
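
A toy illustration of the frame-skipping idea, using a simple pixel-difference heuristic to flag static content; the work's actual selection criterion is presumably learned, so treat the threshold and rule here as assumptions.

```python
import numpy as np

def keep_dynamic_frames(frames, diff_threshold=8.0):
    """Keep a frame only if its mean absolute pixel difference from the
    last *kept* frame exceeds a threshold, dropping near-static stretches."""
    kept = [0]                                   # always keep the first frame
    last = frames[0].astype(np.float32)
    for i in range(1, len(frames)):
        cur = frames[i].astype(np.float32)
        if np.abs(cur - last).mean() > diff_threshold:
            kept.append(i)
            last = cur
    return kept

# usage: kept = keep_dynamic_frames(frames)        # frames: list of (H, W, 3) uint8
#        model_input = [frames[i] for i in kept]   # fewer, more informative frames
```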