Large audio-language models (LALMs) are shockingly vulnerable to inaudible audio prompts that can make them execute unauthorized actions, even on commercial systems like Mistral AI and Microsoft Azure.
Stop rewriting security rules for every SIEM platform: ARuleCon automates rule conversion with 15% higher fidelity than existing LLM-based approaches.
Audio backdoor attacks leave a tell: their triggers are surprisingly robust to destructive noise but fragile to meaning-preserving changes.
Autonomous AI agents that can independently sustain and extend their operation are closer than we think, but raise thorny security and governance questions we need to address now.
Diffusion language models can now efficiently self-evaluate their output quality by regenerating their own sequences, enabling more reliable uncertainty quantification and flexible-length generation.
Now you can audit black-box LLM APIs for cheating (model substitution, overbilling) with <1% overhead, using verifiable computation.