Fluids You Can Trust: Property-Preserving Operator Learning for Incompressible Flows. Sharma, R., Lowery, M., Owhadi, H., & Shankar, V. March 2026. arXiv:2602.15472 [physics]
We present a novel property-preserving kernel-based operator learning method for incompressible flows governed by the incompressible Navier–Stokes equations. Traditional numerical solvers incur significant computational costs to respect incompressibility. Operator learning offers efficient surrogate models, but current neural operators fail to exactly enforce physical properties such as incompressibility, periodicity, and turbulence. Our kernel method maps input functions to expansion coefficients of output functions in a property-preserving kernel basis, ensuring that predicted velocity fields analytically and simultaneously preserve the aforementioned physical properties. Our method leverages efficient numerical linear algebra, simple root-finding, and streaming to allow for training at scale on desktop GPUs. We also present universal approximation results and both pessimistic and more realistic a priori convergence rates for our framework. We evaluate the method on challenging 2D and 3D, laminar and turbulent, incompressible flow problems. Our method achieves up to six orders of magnitude lower relative ℓ2 errors upon generalization and trains up to five orders of magnitude faster than neural operators, despite our method being trained on desktop GPUs and neural operators being trained on cutting-edge GPU servers. Moreover, while our method enforces incompressibility analytically, neural operators exhibit very large deviations. Our results show that our method provides an accurate and efficient surrogate for incompressible flows.
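The key mechanism in the abstract is a basis whose every element already satisfies the target constraints, so any predicted expansion of coefficients yields a constraint-satisfying field by construction. As a hedged illustration only (this is not the paper's kernel construction; all names and coefficients below are illustrative assumptions), the sketch builds a 2D periodic velocity field from a Fourier stream function: u = (∂ψ/∂y, −∂ψ/∂x) is analytically divergence-free, and periodicity of ψ carries over to u.

# Minimal sketch, NOT the authors' method: analytic enforcement of
# incompressibility and periodicity by construction via a stream function.
import numpy as np

def velocity_from_stream(x, y, coeffs):
    """u = (d psi / dy, -d psi / dx) for psi = sum a * sin(kx*x) * cos(ky*y).

    coeffs: dict mapping integer wavenumber pairs (kx, ky) to amplitudes
    (an illustrative choice of periodic basis, not the paper's kernel basis).
    """
    u = np.zeros_like(x)
    v = np.zeros_like(x)
    for (kx, ky), a in coeffs.items():
        u += -a * ky * np.sin(kx * x) * np.sin(ky * y)   # d psi / dy
        v += -a * kx * np.cos(kx * x) * np.cos(ky * y)   # -d psi / dx
    return u, v

# Verify divergence on a periodic grid using spectral derivatives.
n = 64
xs = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing="ij")
u, v = velocity_from_stream(X, Y, {(1, 2): 0.7, (3, 1): -0.4})

k = np.fft.fftfreq(n, d=2.0 * np.pi / n) * 2.0 * np.pi   # integer wavenumbers
du_dx = np.real(np.fft.ifft2(1j * k[:, None] * np.fft.fft2(u)))
dv_dy = np.real(np.fft.ifft2(1j * k[None, :] * np.fft.fft2(v)))
print(np.max(np.abs(du_dx + dv_dy)))   # ~1e-13: divergence-free to roundoff

Any linear combination of such basis fields remains divergence-free and periodic, which is why predicting expansion coefficients in a property-preserving basis, as the abstract describes, preserves those properties exactly rather than approximately.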
@misc{sharma_fluids_2026,
	title = {Fluids {You} {Can} {Trust}: {Property}-{Preserving} {Operator} {Learning} for {Incompressible} {Flows}},
	shorttitle = {Fluids {You} {Can} {Trust}},
	url = {http://arxiv.org/abs/2602.15472},
	doi = {10.48550/arXiv.2602.15472},
	abstract = {We present a novel property-preserving kernel-based operator learning method for incompressible flows governed by the incompressible Navier–Stokes equations. Traditional numerical solvers incur significant computational costs to respect incompressibility. Operator learning offers efficient surrogate models, but current neural operators fail to exactly enforce physical properties such as incompressibility, periodicity, and turbulence. Our kernel method maps input functions to expansion coefficients of output functions in a property-preserving kernel basis, ensuring that predicted velocity fields analytically and simultaneously preserve the aforementioned physical properties. Our method leverages efficient numerical linear algebra, simple root-finding, and streaming to allow for training at scale on desktop GPUs. We also present universal approximation results and both pessimistic and more realistic a priori convergence rates for our framework. We evaluate the method on challenging 2D and 3D, laminar and turbulent, incompressible flow problems. Our method achieves up to six orders of magnitude lower relative ℓ2 errors upon generalization and trains up to five orders of magnitude faster than neural operators, despite our method being trained on desktop GPUs and neural operators being trained on cutting-edge GPU servers. Moreover, while our method enforces incompressibility analytically, neural operators exhibit very large deviations. Our results show that our method provides an accurate and efficient surrogate for incompressible flows.},
	language = {en},
	urldate = {2026-03-19},
	publisher = {arXiv},
	author = {Sharma, Ramansh and Lowery, Matthew and Owhadi, Houman and Shankar, Varun},
	month = mar,
	year = {2026},
	note = {arXiv:2602.15472 [physics]},
	keywords = {Computer Science - Machine Learning, Physics - Fluid Dynamics},
}
