Safety-aligned LLMs are inadvertently handicapping cyber defenders, refusing to help with critical tasks like malware analysis and system hardening simply because the requests sound too much like attacks.
LLMs can boost novices' performance on complex biosecurity tasks beyond expert-level benchmarks, yet users struggle to fully leverage the models' capabilities.