Safety-aligned LLMs are inadvertently handicapping cyber defenders, refusing to help with critical defensive tasks like malware analysis and system hardening simply because the requests sound too much like attacks.