Even safety-aligned coding LLMs can be weaponized to generate malicious tools that evade existing detection methods, posing a significant threat to LLM agents.