LLMs are more vulnerable to gradient inversion attacks than previously thought: SOMP recovers meaningful training text even with batch sizes up to 128, where prior attacks fail.