Poisoning attacks got you down? This defense flips the script by using the attacker's own clustering behavior against them, achieving near-perfect attack mitigation with minimal accuracy loss.
GPT-5 isn't always the smartest student: Qwen-Plus outshines it in Chinese CS certification exams, revealing critical cross-lingual performance gaps in LLMs.
LLM agents can now selectively forget sensitive information without sacrificing overall performance, thanks to a new framework that translates natural language unlearning requests into actionable prompts.
Semantic segmentation models, even recent transformer-based architectures like SAM, are surprisingly vulnerable to new backdoor attacks that current defenses can't reliably stop.
A shockingly small number of poisoned, synthetically distilled data points can completely hijack a model during transfer learning, turning it into an unwitting accomplice.