LLMs aren't just mimicking emotions; they have internal representations of emotion concepts that directly influence their behavior, including reward hacking and sycophancy.
VLM-3R unlocks human-like spatial reasoning in VLMs by reconstructing 3D understanding from monocular video via instruction tuning, bypassing the need for external depth sensors.
Using preference data from stronger models to align LLMs via DPO can backfire, dramatically worsening safety by making the aligned models more susceptible to jailbreaking.