This paper presents a unified Bayesian Optimization (BO) framework using Gaussian Processes (GPs) to accelerate the search for stationary points on potential energy surfaces, encompassing minimization, single-point saddle searches, and double-ended saddle searches. The framework leverages derivative observations, inverse-distance kernels, and active learning within a six-step surrogate loop, with specific extensions including Optimal Transport GPs and adaptive trust radius. The authors demonstrate that random Fourier features decouple hyperparameter training from predictions, enabling favorable scaling for high-dimensional systems, and provide Rust code to illustrate the unified BO loop.
A single Bayesian Optimization loop can now handle minimization, single-point saddle searches, and double-ended saddle searches on potential energy surfaces, thanks to a unified framework leveraging Gaussian Processes.
Accelerating the exploration of stationary points on potential energy surfaces by building local surrogates spans decades of effort. Done correctly, surrogates reduce the number of required evaluations by an order of magnitude while preserving the accuracy of the underlying theory. We present a unified Bayesian Optimization view of minimization, single-point saddle searches, and double-ended saddle searches through a common six-step surrogate loop, with the applications differing only in the inner optimization target and acquisition criterion. The framework uses Gaussian process regression with derivative observations, inverse-distance kernels, and active learning. The Optimal Transport GP extensions of farthest-point sampling with Earth mover's distance, MAP regularization via a variance barrier and oscillation detection, and an adaptive trust radius are concrete extensions of the same basic methodology, improving accuracy and efficiency. We also demonstrate that random Fourier features decouple hyperparameter training from predictions, enabling favorable scaling for high-dimensional systems. Accompanying pedagogical Rust code demonstrates that all applications use the exact same Bayesian optimization loop, bridging the gap between theoretical formulation and practical execution.
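To make the shape of such a surrogate loop concrete, here is a minimal Rust sketch of the minimization case on a toy 1-D double-well potential. All names are hypothetical and the components are deliberately simplified: where the paper uses Gaussian process regression with derivative observations, this sketch substitutes a kernel-weighted mean plus a distance-based uncertainty proxy, a lower-confidence-bound acquisition, and a grid search inside a fixed trust radius, purely to illustrate the loop structure.

```rust
/// Toy "expensive" potential: V(x) = (x^2 - 1)^2, minima at x = +/-1.
fn true_energy(x: f64) -> f64 {
    (x * x - 1.0).powi(2)
}

/// Squared-exponential kernel with a fixed length scale.
fn kernel(a: f64, b: f64) -> f64 {
    let ell = 0.5;
    (-(a - b).powi(2) / (2.0 * ell * ell)).exp()
}

/// Surrogate mean: kernel-weighted average of observed energies
/// (a stand-in for the GP posterior mean).
fn surrogate_mean(data: &[(f64, f64)], x: f64) -> f64 {
    let (mut num, mut den) = (0.0, 0.0);
    for &(xi, yi) in data {
        let w = kernel(x, xi);
        num += w * yi;
        den += w;
    }
    num / den
}

/// Uncertainty proxy: grows with distance from the nearest observation
/// (a stand-in for the GP posterior variance).
fn uncertainty(data: &[(f64, f64)], x: f64) -> f64 {
    1.0 - data.iter().map(|&(xi, _)| kernel(x, xi)).fold(0.0_f64, f64::max)
}

/// Acquisition: lower confidence bound, mean minus kappa * uncertainty.
fn acquisition(data: &[(f64, f64)], x: f64) -> f64 {
    let kappa = 1.0;
    surrogate_mean(data, x) - kappa * uncertainty(data, x)
}

/// Inner optimization: minimize the acquisition on a grid inside the
/// trust radius around the current point.
fn propose(data: &[(f64, f64)], center: f64, radius: f64) -> f64 {
    let n = 200;
    let mut best_x = center - radius;
    let mut best_a = acquisition(data, best_x);
    for i in 1..=n {
        let x = center - radius + 2.0 * radius * (i as f64) / (n as f64);
        let a = acquisition(data, x);
        if a < best_a {
            best_a = a;
            best_x = x;
        }
    }
    best_x
}

/// The loop: (1) seed with an initial evaluation, then repeat:
/// (2) refit the surrogate on all data, (3-4) optimize the acquisition
/// inside the trust region, (5) evaluate the true energy at the proposal,
/// (6) stop when the proposal no longer moves. Returns the best observed
/// (position, energy) pair.
fn bo_minimize(x0: f64, iters: usize) -> (f64, f64) {
    let mut data = vec![(x0, true_energy(x0))];
    let mut current = x0;
    for _ in 0..iters {
        let candidate = propose(&data, current, 0.5); // trust radius 0.5
        data.push((candidate, true_energy(candidate))); // expensive call
        if (candidate - current).abs() < 1e-6 {
            break;
        }
        current = candidate;
    }
    data.into_iter()
        .fold((x0, f64::INFINITY), |b, p| if p.1 < b.1 { p } else { b })
}

fn main() {
    let (x, e) = bo_minimize(1.8, 30);
    println!("best x = {x:.3}, energy = {e:.4}");
}
```

Swapping the inner target and acquisition (e.g. maximizing the surrogate's gradient-norm criterion instead of minimizing the mean) would repurpose the same loop for saddle searches, which is the unification the abstract describes.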