This paper introduces KProxNPLVM, a novel Nonlinear Probabilistic Latent Variable Model (NPLVM) that addresses the approximation error inherent in conventional amortized variational inference by relaxing the objective function with a Wasserstein distance-based proximal operator. The authors theoretically characterize the approximation error of standard NPLVMs and derive a new variational inference strategy from the relaxed optimization problem. Experiments on synthetic and real-world industrial datasets demonstrate that KProxNPLVM improves soft sensor modeling accuracy compared to existing methods.
By relaxing the objective function with a Wasserstein proximal operator, KProxNPLVM sidesteps the approximation errors that plague standard variational inference in Nonlinear Probabilistic Latent Variable Models.
Nonlinear Probabilistic Latent Variable Models (NPLVMs) are a cornerstone of soft sensor modeling due to their capacity for uncertainty quantification. However, conventional NPLVMs are trained with amortized variational inference, in which neural networks parameterize the variational posterior. While this parameterization simplifies implementation, it converts a distributional optimization problem over an infinite-dimensional function space into parameter optimization over a finite-dimensional parameter space, introducing an approximation error gap that degrades soft sensor modeling accuracy. To alleviate this issue, we introduce KProxNPLVM, a novel NPLVM that instead relaxes the learning objective itself. Specifically, we first prove the approximation error induced by the conventional approach. Building on this result, we employ the Wasserstein distance as a proximal operator to relax the learning objective, yielding a new variational inference strategy derived from the relaxed optimization problem. We then provide a rigorous derivation of KProxNPLVM's optimization procedure and prove that the resulting algorithm converges while sidestepping the approximation error. Finally, extensive experiments on synthetic and real-world industrial datasets demonstrate the efficacy of the proposed KProxNPLVM.
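The paper's exact formulation is not reproduced above, but the Wasserstein-proximal idea it describes can be illustrated with a minimal sketch: at each outer step, the variational distribution minimizes the usual divergence objective plus a squared 2-Wasserstein penalty to the previous iterate, so the update stays in distribution space rather than being tied to one fixed amortized parameterization. The sketch below restricts everything to diagonal Gaussians, where both the KL divergence and the squared 2-Wasserstein distance have closed forms; the target posterior, the proximal weight `lam`, and the finite-difference inner solver are all illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical target posterior p = N(mu_p, diag(sig_p^2)) -- illustrative only.
mu_p, sig_p = np.array([1.0, -2.0]), np.array([0.5, 1.5])

def kl_diag_gauss(mu_q, sig_q):
    """Closed-form KL(q || p) between diagonal Gaussians."""
    return np.sum(np.log(sig_p / sig_q)
                  + (sig_q**2 + (mu_q - mu_p)**2) / (2.0 * sig_p**2) - 0.5)

def w2_sq_diag_gauss(mu_a, sig_a, mu_b, sig_b):
    """Closed-form squared 2-Wasserstein distance between diagonal Gaussians."""
    return np.sum((mu_a - mu_b)**2) + np.sum((sig_a - sig_b)**2)

def prox_step(mu_prev, sig_prev, lam=0.5, inner_lr=0.05, inner_iters=200):
    """One Wasserstein proximal-point update:
        argmin_q  KL(q || p) + W2^2(q, q_prev) / (2 * lam),
    solved approximately by gradient descent with finite-difference gradients.
    """
    theta = np.concatenate([mu_prev, np.log(sig_prev)])  # optimize log-sigma

    def objective(th):
        mu, sig = th[:2], np.exp(th[2:])
        return (kl_diag_gauss(mu, sig)
                + w2_sq_diag_gauss(mu, sig, mu_prev, sig_prev) / (2.0 * lam))

    eps = 1e-5
    for _ in range(inner_iters):
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            e = np.zeros_like(theta)
            e[i] = eps
            grad[i] = (objective(theta + e) - objective(theta - e)) / (2 * eps)
        theta -= inner_lr * grad
    return theta[:2], np.exp(theta[2:])

# Outer proximal-point loop: iterates move toward the target posterior
# without committing to a single amortized encoder parameterization.
mu_q, sig_q = np.zeros(2), np.ones(2)
for _ in range(20):
    mu_q, sig_q = prox_step(mu_q, sig_q)
```

Because each proximal subproblem only penalizes movement from the previous iterate, the outer sequence contracts toward the true posterior; this is the sense in which a proximal relaxation can sidestep the gap left by optimizing a fixed finite-dimensional encoder.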