This paper introduces a method for efficiently fitting neural fields to spatiotemporal scientific data by transferring INR features across different signals. The authors demonstrate that this transfer learning approach significantly accelerates convergence and improves reconstruction quality in complex scientific domains such as turbulent flows and astrophysics. Key results show up to an order of magnitude reduction in the iterations needed to reach a target reconstruction quality, along with substantial improvements in early-stage reconstruction fidelity (gains of up to 10 dB).
Transfer learning slashes the computational cost of fitting neural fields to scientific data, making high-dimensional simulations far more tractable.
Neural fields, also known as implicit neural representations (INRs), offer a powerful framework for modeling continuous geometry, but their effectiveness in high-dimensional scientific settings is limited by slow convergence and scaling challenges. In this study, we extend INR models to handle spatiotemporal and multivariate signals and show how INR features can be transferred across scientific signals to enable efficient and scalable representation across time and ensemble runs in an amortized fashion. Across controlled transformation regimes (e.g., geometric transformations and localized perturbations of synthetic fields) and high-fidelity scientific domains, including turbulent flows, fluid-material impact dynamics, and astrophysical systems, we show that transferable features improve not only signal fidelity but also the accuracy of derived geometric and physical quantities, including density gradients and vorticity. In particular, transferable features reduce iterations to reach target reconstruction quality by up to an order of magnitude, increase early-stage reconstruction quality by multiple dB (with gains exceeding 10 dB in some cases), and consistently improve gradient-based physical accuracy.
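The sketch below illustrates the general idea of warm-starting an INR from a related signal rather than fitting from scratch, in the spirit of the transfer described above. It is not the authors' implementation: the SIREN-style network, the synthetic source/target fields, the iteration counts, and the autodiff gradient query are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): transfer INR weights fit on a source signal
# to initialize fitting of a related target signal, then fine-tune briefly.
# Architecture, data, and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class Sine(nn.Module):
    """Sine activation; proper SIREN weight initialization is omitted for brevity."""
    def __init__(self, w0=30.0):
        super().__init__()
        self.w0 = w0

    def forward(self, x):
        return torch.sin(self.w0 * x)


class INR(nn.Module):
    """Small coordinate MLP mapping (x, y, t) to a scalar field value."""
    def __init__(self, in_dim=3, hidden=128, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), Sine()]
            d = hidden
        layers += [nn.Linear(d, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)


def fit(model, coords, values, iters, lr=1e-4):
    """Fit the INR to (coordinate, value) samples with a plain MSE loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = ((model(coords) - values) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item()


# Synthetic stand-ins for two related signals (e.g., adjacent timesteps or ensemble runs).
coords = torch.rand(4096, 3) * 2.0 - 1.0
source_vals = torch.sin(4.0 * coords[:, :1]) * torch.cos(4.0 * coords[:, 1:2])
target_vals = source_vals + 0.05 * torch.randn_like(source_vals)  # perturbed target signal

# 1) Fit the source signal from a random initialization.
source_inr = INR()
fit(source_inr, coords, source_vals, iters=2000)

# 2) Transfer: initialize the target INR from the source weights and fine-tune
#    for far fewer iterations than training from scratch would require.
target_inr = INR()
target_inr.load_state_dict(source_inr.state_dict())
fit(target_inr, coords, target_vals, iters=200)

# 3) Derived quantities via autodiff: spatial gradient of the fitted field,
#    the same mechanism one would use for density gradients or, with a vector
#    field output, vorticity.
query = coords[:16].clone().requires_grad_(True)
pred = target_inr(query)
grad = torch.autograd.grad(pred.sum(), query)[0][:, :2]  # d/dx, d/dy at the queried points
print(grad.shape)  # torch.Size([16, 2])
```

The warm start in step 2 is the amortization: each new timestep or ensemble member reuses features already learned on a related signal, so only a short fine-tuning phase is needed.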