This paper addresses the challenge of measuring statistical dependence in deterministic autoencoders by introducing a stable neural dependence estimator based on a variational (Gaussian) formulation. The method avoids input concatenation and product-of-marginals re-pairing, thus improving stability and reducing computational cost compared to MINE. Empirical results demonstrate the effectiveness of the proposed estimator for quantitative feature analysis, showing sequential convergence of singular values.
Ditch the concatenation: a new neural dependence estimator sidesteps MINE's computational baggage, offering a more stable and efficient way to analyze autoencoder features.
Statistical dependence measures such as mutual information are appealing for analyzing autoencoders, but they can be ill-posed for deterministic, static, noise-free networks. We adopt the variational (Gaussian) formulation that makes dependence among inputs, latents, and reconstructions measurable, and we propose a stable neural dependence estimator based on an orthonormal density-ratio decomposition. Unlike MINE, our method avoids input concatenation and product-of-marginals re-pairing, reducing computational cost and improving stability. We introduce an efficient NMF-like scalar objective and demonstrate empirically that assuming Gaussian noise to form an auxiliary variable enables meaningful dependence measurements and supports quantitative feature analysis, with sequential convergence of singular values.
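For context on what the paper avoids: MINE estimates mutual information with the Donsker-Varadhan bound, feeding a critic network the concatenated pair (x, z) and building product-of-marginals samples by re-pairing (shuffling) the latents against the inputs. The sketch below illustrates that baseline machinery, not the paper's proposed estimator; it uses a fixed hand-picked quadratic critic and synthetic correlated Gaussians in place of a trained network and real autoencoder features.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic stand-in for (input, latent) pairs: correlated Gaussians.
rho = 0.9
x = rng.standard_normal(n)
z = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

def critic(x, z):
    # In MINE this is a trained network T([x; z]) on the concatenated pair;
    # here a fixed quadratic critic keeps the sketch self-contained.
    return 0.5 * x * z

def dv_bound(x, z, rng):
    # Product-of-marginals samples via re-pairing: shuffle z against x.
    z_shuffled = rng.permutation(z)
    # Donsker-Varadhan lower bound: E_joint[T] - log E_marginals[exp(T)].
    return critic(x, z).mean() - np.log(np.exp(critic(x, z_shuffled)).mean())

mi_lower_bound = dv_bound(x, z, rng)
print(mi_lower_bound)
```

Note the two ingredients the abstract contrasts against: the critic consumes concatenated pairs, and a second shuffled pass over the data approximates the product of marginals, both of which add cost and training variance.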