Chongqing Ant Consumer Finance Co., Ltd., Nanyang Technological University
VLMs that ace digital document parsing benchmarks still stumble badly when faced with real-world scanned, warped, or photographed documents, revealing a significant "reality gap."
You can cut MLLM hallucinations in remote sensing tasks without any additional training by exploiting the model's own attention mechanisms to focus on relevant image regions.
LLM-powered pentesting agents fail not because of model limitations, but because they cannot estimate task difficulty, leading to wasted effort and premature context exhaustion.