This paper addresses Gaussian process (GP) regression when inputs are measured with error, a setting in which standard models underestimate predictive uncertainty. The authors propose a Wasserstein-type GP regression (\PWAGPs) that represents each noisy input as a probability measure and defines covariance through Wasserstein distances between these measures. The method introduces a deterministic projected Wasserstein ARD (PWA) kernel with closed-form expressions, avoiding latent variables and Monte Carlo approximations and yielding more transparent uncertainty quantification.
By handling input noise directly through Wasserstein distances, \PWAGPs offer a more robust and transparent approach to uncertainty quantification in GP regression compared to latent-input models.
Gaussian process (GP) regression is widely used for uncertainty quantification, yet the standard formulation assumes noise-free covariates. When inputs are measured with error, this errors-in-variables (EIV) setting can lead to optimistically narrow posterior intervals and biased decisions. We study GP regression under input measurement uncertainty by representing each noisy input as a probability measure and defining covariance through Wasserstein distances between these measures. Building on this perspective, we instantiate a deterministic projected Wasserstein ARD (PWA) kernel whose one-dimensional components admit closed-form expressions and whose product structure yields a scalable, positive-definite kernel on distributions. Unlike latent-input GP models, PWA-based GPs (\PWAGPs) handle input noise without introducing unobserved covariates or Monte Carlo projections, making uncertainty quantification more transparent and robust.
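To make the construction concrete, here is a minimal sketch of a kernel of this kind. It assumes each noisy input is summarized per dimension by a Gaussian $\mathcal{N}(m_d, s_d^2)$, and uses the standard closed form for the squared 2-Wasserstein distance between one-dimensional Gaussians, $W_2^2 = (m_1 - m_2)^2 + (s_1 - s_2)^2$. The function names, the exponential form, and the per-dimension product with ARD lengthscales are illustrative assumptions; the paper's exact PWA kernel may differ in its projection and normalization.

```python
import numpy as np

def w2_sq_gauss_1d(m1, s1, m2, s2):
    """Squared 2-Wasserstein distance between 1-D Gaussians
    N(m1, s1^2) and N(m2, s2^2): (m1 - m2)^2 + (s1 - s2)^2 (closed form)."""
    return (m1 - m2) ** 2 + (s1 - s2) ** 2

def pwa_kernel(mu_x, sd_x, mu_y, sd_y, lengthscales, variance=1.0):
    """Hypothetical sketch of a projected-Wasserstein ARD kernel.

    mu_x, sd_x : (d,) arrays of per-dimension input means and noise std devs.
    mu_y, sd_y : (d,) arrays for the second input.
    lengthscales : (d,) ARD lengthscales, one per dimension.

    Product over dimensions of exp(-W2^2 / (2 * ell_d^2)), so each
    one-dimensional factor is available in closed form and the product
    gives the ARD structure described in the abstract.
    """
    mu_x, sd_x = np.asarray(mu_x, float), np.asarray(sd_x, float)
    mu_y, sd_y = np.asarray(mu_y, float), np.asarray(sd_y, float)
    ell = np.asarray(lengthscales, float)
    w2_sq = w2_sq_gauss_1d(mu_x, sd_x, mu_y, sd_y)  # vectorized over dims
    return variance * np.exp(-np.sum(w2_sq / (2.0 * ell ** 2)))
```

Note the behavior this sketch captures: for noise-free inputs (`sd = 0`) it reduces to an ordinary ARD squared-exponential kernel on the means, and two inputs with identical means but different noise levels are no longer treated as identical, which is how input uncertainty enters the covariance.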