Vision-language-action (VLA) models introduce a fundamentally new risk landscape compared to LLMs or robotics alone, demanding a unified safety perspective that accounts for irreversible physical consequences and multimodal attack surfaces.
Object hallucinations in LVLMs are less a vision problem than a language-prior problem, and can be slashed by dynamically suppressing those priors during decoding.
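One way to make the "suppress priors at decoding time" idea concrete is a contrastive sketch: score the next token once conditioned on the image and once from text alone (the bare language prior), then subtract the text-only logits so tokens pushed purely by the prior lose out. The sketch below is a minimal NumPy illustration under assumed inputs; the function names, the suppression weight `alpha`, and the plausibility cutoff `beta` are illustrative choices, not the specific method of the summarized paper.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def prior_suppressed_step(logits_with_image: np.ndarray,
                          logits_text_only: np.ndarray,
                          alpha: float = 1.0,
                          beta: float = 0.1) -> int:
    """Pick the next token after contrastively downweighting the language prior.

    logits_with_image: next-token logits conditioned on (image, text).
    logits_text_only:  next-token logits conditioned on text alone,
                       i.e. the model's bare language prior.
    alpha: strength of prior suppression (assumed hyperparameter).
    beta:  plausibility cutoff relative to the top full-model probability.
    """
    # Contrastive adjustment: reward tokens whose evidence comes from the
    # image, penalize tokens the text-only prior pushes regardless of vision.
    adjusted = (1.0 + alpha) * logits_with_image - alpha * logits_text_only

    # Plausibility constraint: never pick a token the full multimodal model
    # itself finds implausible, which keeps the subtraction from producing
    # degenerate outputs.
    full_probs = softmax(logits_with_image)
    plausible = full_probs >= beta * full_probs.max()
    adjusted = np.where(plausible, adjusted, -np.inf)
    return int(np.argmax(adjusted))

# Toy usage over a 5-token vocabulary: the text-only prior strongly favors
# token 3 (a stereotypically co-occurring object), while the image-grounded
# pass favors token 1; suppression flips the choice to the grounded token.
with_img = np.array([1.2, 2.5, 0.3, 2.1, -0.5])
text_only = np.array([0.8, 0.4, 0.2, 2.6, -0.7])
print(prior_suppressed_step(with_img, text_only))  # -> 1
```

The plausibility mask is the design choice doing the real work: unconstrained subtraction can promote tokens the full model considers nonsense, so candidates are first restricted to those the multimodal pass already deems reasonably likely.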