UC Santa Cruz
LVLMs can be made significantly less prone to hallucinations, without any training, by explicitly grounding them in visual evidence and iteratively self-refining their answers based on verified information.
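The grounding-and-refinement loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: every function here (`generate_answer`, `extract_claims`, `verify_against_image`, `revise_answer`) is a hypothetical placeholder standing in for an LVLM call or a visual-verification step.

```python
# Hedged sketch of training-free, grounded self-refinement for an LVLM.
# All function bodies are toy stand-ins for model calls, used only to
# show the control flow: answer -> extract claims -> verify -> revise.

def generate_answer(question, image):
    # Placeholder for the LVLM's initial (possibly hallucinated) answer.
    return "A red car is parked next to two dogs."

def extract_claims(answer):
    # Placeholder: decompose the answer into atomic visual claims.
    claims = []
    if "car" in answer:
        claims.append("a red car is parked")
    if "dogs" in answer:
        claims.append("two dogs are present")
    return claims

def verify_against_image(claim, image):
    # Placeholder: ground each claim in visual evidence,
    # e.g. via an object detector or a targeted VQA check.
    return "car" in claim  # pretend only the car is actually visible

def revise_answer(answer, failed_claims):
    # Placeholder: ask the model to rewrite without unsupported claims.
    for _ in failed_claims:
        answer = answer.replace(" next to two dogs", "")
    return answer

def self_refine(question, image, max_rounds=3):
    """Iteratively keep only claims verified against the image."""
    answer = generate_answer(question, image)
    for _ in range(max_rounds):
        failed = [c for c in extract_claims(answer)
                  if not verify_against_image(c, image)]
        if not failed:
            break  # every remaining claim is grounded; stop refining
        answer = revise_answer(answer, failed)
    return answer

print(self_refine("What is in the image?", image=None))
# → A red car is parked.
```

The key property the loop illustrates is that refinement terminates once every extracted claim passes verification, so no retraining of the model is needed.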