The paper unifies leading membership inference attacks (MIAs) such as LiRA and RMIA under a single exponential-family log-likelihood ratio framework, demonstrating that they differ primarily in their distributional assumptions and in how many parameters they estimate per data point. It identifies variance estimation as a critical bottleneck, especially with limited shadow models. To address this, the authors propose BaVarIA, a Bayesian variance inference attack built on conjugate normal-inverse-gamma priors, which delivers stable results without extensive hyperparameter tuning and outperforms LiRA and RMIA, particularly in low-resource settings.
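To see where the variance bottleneck comes from, here is a minimal sketch of a LiRA-style per-example Gaussian log-likelihood ratio. This is an illustration, not the paper's implementation: the function names and score inputs are hypothetical, and real attacks work with logit-scaled model confidences from many shadow models.

```python
import math
import statistics

def gaussian_llr(target_score, in_scores, out_scores):
    """LiRA-style score: log-likelihood ratio under per-example Gaussians
    fit to shadow-model scores with the target IN vs. OUT of training."""
    def logpdf(x, xs):
        mu = statistics.fmean(xs)
        # Sample variance is the fragile part: with only a handful of
        # shadow models it is noisy, which destabilizes the ratio.
        var = statistics.variance(xs)
        return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)
    return logpdf(target_score, in_scores) - logpdf(target_score, out_scores)
```

A positive score favors membership. With only two or three shadow models per side, the variance estimate can collapse toward zero or blow up, which is exactly the regime the paper targets.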
Forget choosing between LiRA and RMIA: this paper reveals they're just different points on a spectrum of exponential-family attacks, and introduces BaVarIA, a Bayesian approach that outperforms both, especially when shadow models are scarce.
Membership inference attacks (MIAs) are becoming standard tools for auditing the privacy of machine learning models. The leading attacks -- LiRA (Carlini et al., 2022) and RMIA (Zarifzadeh et al., 2024) -- appear to use distinct scoring strategies, while the recently proposed BASE (Lassila et al., 2025) was shown to be equivalent to RMIA, making it difficult for practitioners to choose among them. We show that all three are instances of a single exponential-family log-likelihood ratio framework, differing only in their distributional assumptions and the number of parameters estimated per data point. This unification reveals a hierarchy (BASE1-4) that connects RMIA and LiRA as endpoints of a spectrum of increasing model complexity. Within this framework, we identify variance estimation as the key bottleneck at small shadow-model budgets and propose BaVarIA, a Bayesian variance inference attack that replaces threshold-based parameter switching with conjugate normal-inverse-gamma priors. BaVarIA yields a Student-t predictive (BaVarIA-t) or a Gaussian with stabilized variance (BaVarIA-n), providing stable performance without additional hyperparameter tuning. Across 12 datasets and 7 shadow-model budgets, BaVarIA matches or improves upon LiRA and RMIA, with the largest gains in the practically important low-shadow-model and offline regimes.
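The abstract's key move, replacing point estimates of the variance with a conjugate normal-inverse-gamma (NIG) posterior whose predictive is a Student-t, can be sketched as follows. This uses the standard textbook NIG update and predictive formulas; the prior hyperparameters and function names are illustrative assumptions, not the paper's settings.

```python
import math

def nig_posterior(xs, mu0=0.0, kappa0=1.0, alpha0=1.0, beta0=1.0):
    """Standard conjugate normal-inverse-gamma update from observations xs."""
    n = len(xs)
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)  # sum of squared deviations
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2
    beta_n = beta0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2 * kappa_n)
    return mu_n, kappa_n, alpha_n, beta_n

def student_t_logpdf(x, nu, mu, scale2):
    """Log-density of a location-scale Student-t with nu degrees of freedom."""
    z2 = (x - mu) ** 2 / scale2
    return (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(nu * math.pi * scale2)
            - (nu + 1) / 2 * math.log1p(z2 / nu))

def bavaria_t_score(target_score, in_scores, out_scores):
    """Log-likelihood ratio under Student-t posterior predictives
    (a sketch of the BaVarIA-t idea: the NIG posterior marginalizes
    the unknown variance instead of plugging in a noisy estimate)."""
    def predictive_logpdf(x, xs):
        mu_n, kappa_n, alpha_n, beta_n = nig_posterior(xs)
        nu = 2 * alpha_n
        scale2 = beta_n * (kappa_n + 1) / (alpha_n * kappa_n)
        return student_t_logpdf(x, nu, mu_n, scale2)
    return (predictive_logpdf(target_score, in_scores)
            - predictive_logpdf(target_score, out_scores))
```

Because the variance is integrated out rather than estimated, the predictive has heavier tails and a floor on its spread set by the prior, which is why this kind of score stays stable when only a few shadow models are available.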