Trigger-based defenses offer a false sense of security in federated learning: this new attack shows that backdoors can be implanted without any explicit triggers, achieving 2-50x better performance than trigger-based attacks.
Medical VQA models can be made significantly more robust to adversarial attacks using a novel pre-training approach based on masked autoencoders and variational inference, without requiring additional data or complex procedures.