Seemingly harmless instructions can be weaponized to cause real-world harm by exploiting the limited causal reasoning that embodied LLMs apply at the action level.