Label inference attacks in vertical federated learning succeed not because bottom models are good at representing labels, but because of feature-label distribution alignment, opening the door to simple, effective defenses.
Medical VQA models can be made significantly more robust to adversarial attacks through a novel pre-training approach based on masked autoencoders and variational inference, without requiring additional data or complex training procedures.