Principal Member of Technical Staff
Sandia National Laboratories
Structure-preserving model discovery with scientific machine learning
Nat is a principal member of technical staff at Sandia National Laboratories, where he leads a group developing geometric, structure-preserving machine learning for data-driven multiphysics, as well as a data science team developing tools for automated scientific discovery. He is a recipient of the DOE Early Career award and deputy director of SEA-CROGS, a new DOE center for causal physics-informed machine intelligence. Prior to joining staff, he was an NSF MSPRF postdoctoral fellow at SNL. He received his PhD in the Division of Applied Mathematics at Brown University, working with Dr. Martin Maxey, and holds a master’s degree and dual bachelor’s degrees in mechanical engineering and applied mathematics from the University of Massachusetts Amherst. His work spans applications in multiphase flow, materials science, semiconductor physics, shock magnetohydrodynamics, fracture mechanics, and climate.
Advances in machine learning and artificial intelligence are making it possible to construct digital twins of complex systems. At Sandia, we are using these tools to perform scientific discovery, design optimization, and data-informed decision making across diverse applications. In this talk we show how graphs may be used to build robust digital twins in high-consequence engineering settings, and then how those twins can be used to perform AI-enhanced scientific discovery. The objective of this work is to use ML not just to identify patterns and surrogates from data, but to emulate human-like cognition that links physics to interpretable causal mechanisms.
First, ML-accelerated multiphysics models require mathematical foundations (stability, accuracy, and structure preservation) to reliably couple component models together into a digital twin. We introduce a finite element exterior calculus framework for discovering structure-preserving Whitney forms. This learning framework reveals physically relevant control volumes with accompanying integral balance laws, which naturally encode physical structure in terms of a graph. With a predictive digital twin in hand, we next show how it can be used to reveal causal relationships in large multimodal datasets. We present a variational inference framework for discovering causal relationships in scientific data: multimodal data may be combined and embedded into directed acyclic graphs that encode interpretable causal relationships. Unsupervised discovery of causal graphs provides a means of identifying exploitable scientific relationships or precursors to failures and rare events.