Search papers, labs, and topics across Lattice.
ViTs can achieve robust generalization through adversarial training even when overfitting, mirroring a phenomenon previously observed only in CNNs.
Generate better peptide therapeutics faster: a new deep learning framework predicts peptide-protein interactions with high accuracy and generates novel peptides with enhanced binding affinity.
Training on a subset of data, a common technique for scaling ML, surprisingly introduces new privacy vulnerabilities by leaking information about both the training set and the selection process itself.
CAT's ability to defend against jailbreaks hinges on the singular values of the LLM's embedding matrix, offering a new handle for improving robustness.
LLMs, impressive as they are, can't juggle multiple users' conflicting needs without dropping the ball on privacy, prioritization, and efficiency.