ViTs can achieve robust generalization through adversarial training even when overfitting, mirroring a phenomenon previously observed only in CNNs.
CAT's ability to defend against jailbreaks hinges on the singular values of the LLM's embedding matrix, offering a new handle for improving robustness.
Forget scaling model size: RefineRL shows that incentivizing self-refinement in smaller LLMs lets them punch *way* above their weight, rivaling models 10x larger on competitive programming tasks.