LLMs can be finetuned to hide malicious prompts and responses in plain sight using steganography, bypassing safety filters and creating an "invisible safety threat."
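A minimal toy sketch of the general idea of hiding one string inside an innocuous cover text, here using zero-width characters; this is generic text steganography for illustration only, not the finetuning scheme or encoding studied in the paper:

```python
# Toy illustration of text steganography: a secret string is appended as
# zero-width characters, so the cover text looks unchanged when rendered.
# NOT the paper's method; it only illustrates "hidden in plain sight".
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / non-joiner encode bits 0 / 1

def hide(cover: str, secret: str) -> str:
    bits = "".join(f"{ord(c):08b}" for c in secret)
    return cover + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def reveal(text: str) -> str:
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

assert reveal(hide("Totally ordinary reply.", "hi")) == "hi"
```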
Object hallucinations in LVLMs aren't a vision problem but a language-prior problem, and can be slashed by dynamically suppressing those priors during decoding.
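One way to read "dynamically suppressing those priors" is as a contrastive decoding step: compare vision-conditioned logits with logits from the language prior alone and downweight the latter. The sketch below assumes a hypothetical `model(input_ids, image_feats)` call that returns next-token logits and accepts `image_feats=None` to drop the visual input; the interface and the `alpha` weight are illustrative assumptions, not the paper's actual method.

```python
import torch

def prior_suppressed_step(model, input_ids, image_feats, alpha=1.0):
    """One greedy decoding step that downweights the language prior.

    Assumed interface: model(input_ids, image_feats) -> logits of shape
    (batch, seq, vocab); passing image_feats=None yields text-only logits
    that approximate the language prior.
    """
    with torch.no_grad():
        logits_vis = model(input_ids, image_feats)[:, -1, :]  # vision-conditioned
        logits_txt = model(input_ids, None)[:, -1, :]         # language prior only
    # Keep what the image supports, subtract what the prior alone would predict.
    adjusted = (1 + alpha) * logits_vis - alpha * logits_txt
    return torch.argmax(adjusted, dim=-1)
```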