You can dial up or down how obvious an AI's hallucinations are, giving you control over how easily users catch the errors.
You can now audit CLIP and CLAP models for PII memorization using *only* text queries, sidestepping the need for risky biometric inputs and computationally expensive shadow models.
LLMs can move beyond simple refusals to actively guide vulnerable users towards safe outcomes, achieving state-of-the-art safety and robustness against jailbreaks.