LLMs can leapfrog state-of-the-art scientific algorithms and human-designed solutions, but only if you scale the evaluation loop, not just the model.
Dataset distillation, intended to compress training data while preserving model performance, can in fact leak sensitive information about both the original training data and the model architecture.
Current image watermarks are easily bypassed by pixel-wise reconstruction attacks, raising serious questions about their effectiveness as a defense against misuse of machine-generated images.