LLM defenses can now track complex, multi-turn jailbreaks in fully anonymized traffic with near-zero latency, thanks to a novel asymmetric contrastive learning approach.
Energy-dissipation principles enable robust inference of potential functions in generalized diffusion processes, even from noisy, incomplete data.
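The teaser does not spell out the paper's construction, but the standard setting it gestures at can be stated in a few lines: for an overdamped diffusion driven by a potential $U$, the dynamics dissipate free energy and the potential is recoverable from the equilibrium density (symbols $U$, $\sigma$, $p_\infty$ are generic, not taken from the paper).

```latex
% Generic illustration (assumed, not the paper's method):
% overdamped Langevin dynamics with potential U and noise level \sigma.
\[
  dX_t = -\nabla U(X_t)\,dt + \sqrt{2\sigma^{2}}\,dW_t
\]
% Its stationary (Gibbs) density satisfies
\[
  p_\infty(x) \propto e^{-U(x)/\sigma^{2}},
  \qquad\text{hence}\qquad
  U(x) = -\sigma^{2}\log p_\infty(x) + C,
\]
% i.e. the potential is identifiable from equilibrium samples
% up to an additive constant C.
```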
LLMs get *more* creative at generating molecules when you add *more* constraints, defying the intuition that creativity thrives on freedom.
Forget flattening: VideoStir's spatio-temporal graph retrieval and intent-aware scoring unlock more effective reasoning over long videos.
Prompt highlighting in LLMs gets a serious upgrade: PRISM-$\Delta$ steers models to focus on relevant text spans with better accuracy and fluency, even in long contexts.
Forget expensive retraining: PromptCD unlocks significant improvements in LLM alignment and VLM visual grounding simply by contrasting model responses to cleverly designed prompts at test time.
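The teaser does not specify PromptCD's actual algorithm, but the general idea of steering a model at test time by contrasting prompt variants can be sketched in the spirit of contrastive decoding: shift the next-token logits by the difference between a "desired" and an "undesired" prompt variant. The function name, prompts, and stubbed logits below are illustrative assumptions, not the paper's code.

```python
# Hypothetical sketch of test-time prompt contrasting (not PromptCD's
# published algorithm): steer next-token logits toward the behavior
# elicited by a positive prompt and away from a negative one.
# Real model calls are stubbed with fixed logit vectors.
import numpy as np

def contrast_logits(base, positive, negative, alpha=1.0):
    """Shift base logits along the positive-minus-negative direction."""
    return base + alpha * (positive - negative)

# Stub logits over a 4-token vocabulary, for illustration only.
base = np.array([2.0, 1.0, 0.5, 0.1])   # plain prompt
pos  = np.array([2.5, 0.5, 0.5, 0.1])   # prompt with an alignment instruction
neg  = np.array([1.5, 1.5, 0.5, 0.1])   # prompt inviting the unwanted behavior

steered = contrast_logits(base, pos, neg, alpha=1.0)
probs = np.exp(steered) / np.exp(steered).sum()
print(int(np.argmax(steered)))  # token 0 is boosted
```

No retraining is involved: only extra forward passes per decoding step, which is what makes this family of methods cheap to apply at inference time.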