Scope and Topics

In recent years, there has been an explosion of research at the intersection of machine learning and classical engineering domains. Machine learning is increasingly used to develop novel data-driven approaches for modeling and control of dynamical systems, a field traditionally dominated by physics-based models and scientific computing solvers. Conversely, engineering and scientific computing principles are changing the machine learning landscape from purely black-box methods into domain-aware ones that incorporate structure and prior knowledge into model architectures and loss functions.

Physics-informed machine learning leverages knowledge and understanding of the physical world to inform the learning process of machine learning algorithms. By explicitly integrating physical laws, domain expertise, and prior knowledge into the learning framework, physics-informed learning allows control systems to exploit the flexibility and adaptability of machine learning while remaining grounded in a solid understanding of the underlying dynamics. This fusion enables more efficient and trustworthy learning that results in control applications with superior performance, robustness, and interpretability.

The potential applications of physics-informed machine learning for control and optimization are immense and diverse, spanning a wide range of domains. For example, in robotics, physics-informed learning can enhance the control of complex manipulators and autonomous agents by explicitly considering mechanical constraints, kinematics, and dynamics. Furthermore, in power systems and industrial processes, physics-informed learning can optimize control strategies by taking into account physical phenomena, such as heat transfer, fluid dynamics, and thermodynamics.

This workshop aims to provide insight into recent advances in physics-informed machine learning for modeling, control, and optimization, and to sketch some of the open challenges and opportunities in applying it. In the morning, experts in physics-informed learning and optimization-based control will present new results, spotlight challenges and opportunities for the control community, and survey recent advances in physics-informed learning in general. In the afternoon, a tutorial-style coding session will provide attendees with hands-on experience with tools in the ecosystem of physics-informed learning. The workshop targets an audience ranging from graduate students to experienced control engineers, both theoretically and practically oriented, who aim to deepen their knowledge of physics-informed machine learning for control and optimization.

Format

The full-day workshop will be organized into a morning and an afternoon session, split by a lunch break. The morning session starts with a general introduction, followed by presentations on special topics and applications of physics-informed learning. In the afternoon, we will offer a tutorial-style hands-on coding session.

Registration

Registration for the workshop is required for all active participants and is open to all interested individuals. For more information, please refer to the ACC-24 Workshop page.

Schedule

All times are in EST (UTC-5)
  • [08:30-08:45]: Workshop Opening
  • [08:45-09:30]: Invited Talk 1 Thomas Beckers (Vanderbilt University)
  • [09:30-10:15]: Invited Talk 2 Giulio Evangelisti / Sandra Hirche (Technical University of Munich)
  • [10:15-10:45]: Coffee Break
  • [10:45-11:30]: Invited Talk 3 Sivaranjani Seetharaman (Purdue University)
  • [11:30-12:15]: Invited Talk 4 Rolf Findeisen (TU Darmstadt)
  • [12:15-13:45]: Lunch (on your own; no sponsored lunch provided)
  • [13:45-15:15]: Open-source Code Tutorial Session I
  • [15:15-15:45]: Coffee Break
  • [15:45-17:15]: Open-source Code Tutorial Session II
  • [17:15-17:30]: Closing Remarks and Discussion

Invited Speakers

Thomas Beckers

Vanderbilt University

Title: A Composable Physics-informed Learning Framework [08:45-09:30]

Abstract: Reliable dynamical system models are key for operations like failure detection, design optimization, and safe control. However, first-principles models of complex systems such as water distribution networks, power grids, and other coupled dynamics suffer from time-consuming development and the need for expert knowledge. Dynamical models powered by machine learning face challenges like unknown trustworthiness, limited generalizability, and physical inconsistency, making them unsuitable for safe controller design. For that reason, we might want to decompose complex systems into subsystems to facilitate the modeling and verification process. However, that approach raises challenges like interconnecting ODE and PDE models while preserving physical correctness, quantifying and propagating uncertainty, and designing safe control algorithms. In this talk, I will present our results on data-driven port-Hamiltonian systems (PHS) to enable trustworthy and accurate compositional modeling of complex dynamics including ODEs and PDEs while preserving physical consistency. We use ML techniques with uncertainty quantification to learn the Hamiltonian function of each subsystem. In contrast to many physics-informed techniques that impose physics by penalty, the proposed data-driven model is physically correct by design. The framework is particularly suitable for composable learning, as its physical consistency is preserved under interconnection. Finally, data-driven port-Hamiltonian systems can be used in a robust control setting to establish safe learning-based control.
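For readers new to the formalism, a standard input-state-output port-Hamiltonian model (generic textbook background, not the speaker's specific construction) reads

\[
\dot{x} = \bigl(J(x) - R(x)\bigr)\,\nabla H(x) + G(x)\,u, \qquad y = G(x)^{\top}\nabla H(x),
\]

with skew-symmetric interconnection matrix \(J(x) = -J(x)^{\top}\), dissipation matrix \(R(x) \succeq 0\), input map \(G(x)\), and Hamiltonian \(H(x)\) representing the total stored energy. In the data-driven setting described above, the Hamiltonian is learned from data with an uncertainty-aware model, while the port-Hamiltonian structure itself guarantees physical consistency by design.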

Bio: Thomas Beckers is an Assistant Professor of Computer Science and Mechanical Engineering at Vanderbilt University. Before joining Vanderbilt, he was a postdoctoral researcher at the Department of Electrical and Systems Engineering, University of Pennsylvania, where he was a member of the GRASP Lab, PRECISE Center, and ASSET Center. In 2020, he earned his doctorate in Electrical Engineering at the Technical University of Munich (TUM), Germany. He received the B.Sc. and M.Sc. degrees in Electrical Engineering in 2010 and 2013, respectively, from the Technical University of Braunschweig, Germany. In 2018, he was a visiting researcher at the University of California, Berkeley. He is a DAAD AInet fellow and was awarded the Rohde & Schwarz Outstanding Dissertation prize. His research interests include physics-enhanced learning, nonparametric models, and safe learning-based control.

Thomas Slides.

Giulio Evangelisti / Sandra Hirche

Technical University of Munich

Title: Learning & Control with Lagrangian-Gaussian Processes [09:30-10:15]

Abstract: The formulation of Gaussian Processes and other learning frameworks consistent with the relevant physical laws and mathematical models holds great promise for learning-based control of uncertain systems, improving data efficiency and reliability via their physical integrity. In particular, deep learning with neural networks has also shown promising results for physical systems. However, these methods lack a straightforward measure for uncertainty quantification, which hampers the provision of guarantees and the applicability to robust & adaptive control, and leads to a requirement of low training errors that is only satisfied in the vicinity of the training domain. Thus, with the increasing uncertainty in physical systems, developing reliable yet tractable models is still a crucial ongoing issue. This talk will address these issues by focusing on a common methodology behind different learning methods applied to control uncertain systems: integrating physical knowledge and other structural priors into the learning framework on the one hand and enforcing structure via control with learning on the other hand. In particular, we introduce the concept of physically consistent Lagrangian-Gaussian Processes (L-GPs) for data-driven modeling of uncertain Lagrangian systems, which constrain the function space according to the energy components of the Lagrangian and the differential equation structure, analytically guaranteeing properties such as energy conservation and quadratic form. Recent theoretical and experimental results on L-GP-based control & observation are presented in the form of exponential stability guarantees as well as in numerical and physical experiments.
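As background for the Lagrangian structure referenced above (standard mechanics, not the speakers' specific L-GP construction), the dynamics follow the Euler-Lagrange equations

\[
\frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot{q}} - \frac{\partial \mathcal{L}}{\partial q} = \tau, \qquad \mathcal{L}(q,\dot{q}) = T(q,\dot{q}) - V(q), \qquad T(q,\dot{q}) = \tfrac{1}{2}\,\dot{q}^{\top} M(q)\,\dot{q},
\]

where \(q\) are generalized coordinates, \(\tau\) generalized forces, \(M(q) \succ 0\) the inertia matrix, and \(V(q)\) the potential energy. Roughly speaking, placing the probabilistic prior on the energy components rather than on the dynamics directly is what allows samples from such a model to respect properties like energy conservation and the quadratic form of the kinetic energy by construction.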

Bio: Giulio Evangelisti received the B.Sc. and M.Sc. degrees in electrical engineering and information technology from the Technical University of Munich (TUM), Munich, Germany, in 2017 and 2019, respectively. Since 2021, he has been working toward the Ph.D. degree in electrical and computer engineering with the Chair of Information-oriented Control, TUM School of Computation, Information and Technology, TUM. From 2017 to 2018, he was with the Signal Generator Department, Measurement Technology Division, Rohde and Schwarz GmbH and Co. KG, Munich, Germany, and from 2019 to 2020, a full-time Control Engineer with Blickfeld GmbH. His research interests include the stability of data-driven control systems, physically consistent machine learning, nonlinear and passivity-based control, and robotics.

Giulio Slides.

Sivaranjani Seetharaman

Purdue University

Title: Learning Dissipative Neural Dynamical Models [10:45-11:30]

Abstract: Deep learning-based dynamical system models, such as neural ordinary differential equations (neural ODEs) and physics-informed neural networks, have recently gained attention for capturing the dynamical behavior of nonlinear systems. In particular, such models can capture nonlinear dynamical behavior well beyond the ‘local’ region in the vicinity of the equilibrium that is captured by linear models, allowing us to expand the validity and usefulness of our control designs. However, when identifying models for control, it is typically not sufficient to simply obtain a model that approximates the dynamical behavior of the system. Rather, we would ideally like to preserve essential system properties such as stability in the identified models. One such control-relevant system property that is particularly useful is dissipativity, which provides a general framework to guarantee several crucial properties like L2 stability, passivity, conicity, and sector-boundedness, and can facilitate elegant distributed and compositional control designs in large-scale systems. Therefore, it is particularly attractive to learn neural dynamical models that capture relevant system dynamics, while preserving the dissipativity property in the model. In general, imposing dissipativity constraints during neural network training is a hard problem for which no known techniques exist. In this talk, we present a two-stage framework for learning dissipative neural dynamical models. First, we learn an unconstrained neural dynamical model that closely approximates the system dynamics. Next, we derive sufficient conditions to perturb the weights of the neural dynamical model to ensure dissipativity, followed by perturbation of the biases to retain the fit of the model to the trajectories of the nonlinear system. We show that these two perturbation problems can be solved independently to obtain a neural dynamical model that is guaranteed to be dissipative while closely approximating the nonlinear system.
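For reference, the dissipativity property discussed above can be stated as the standard dissipation inequality (a general definition, not the speaker's specific formulation): a system with input \(u\), output \(y\), and state \(x\) is dissipative with respect to a supply rate \(s(u,y)\) if there exists a storage function \(V(x) \ge 0\) such that

\[
V\bigl(x(t_1)\bigr) - V\bigl(x(t_0)\bigr) \;\le\; \int_{t_0}^{t_1} s\bigl(u(t), y(t)\bigr)\, dt \qquad \text{for all } t_1 \ge t_0 .
\]

Choosing \(s(u,y) = y^{\top}u\) recovers passivity, while \(s(u,y) = \gamma^{2}\lVert u\rVert^{2} - \lVert y\rVert^{2}\) recovers finite L2 gain; this is the sense in which a single dissipativity framework covers the properties listed in the abstract.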

Bio: Sivaranjani Seetharaman is an Assistant Professor in the School of Industrial Engineering at Purdue University. Previously, she was a postdoctoral researcher in the Department of Electrical Engineering at Texas A&M University, and the Texas A&M Research Institute for Foundations of Interdisciplinary Data Science (FIDS). She received her PhD in Electrical Engineering from the University of Notre Dame, and her Master’s and undergraduate degrees, also in Electrical Engineering, from the Indian Institute of Science, and PES Institute of Technology, respectively. Sivaranjani has been a recipient of the Schlumberger Foundation Faculty for the Future fellowship, the Zonta International Amelia Earhart fellowship, and the Notre Dame Ethical Leaders in STEM fellowship. She was named among MIT Technology Review's Innovators Under 35 (TR35) in 2023, and the MIT Rising Stars in EECS in 2018. Her research interests lie at the intersection of control and machine learning in large-scale networked systems, with applications to energy systems, transportation networks, and human-autonomous systems.

Rolf Findeisen

Technical University of Darmstadt

Title: Integrating Physics in Learning for Model Based Control [11:30-12:15]

Abstract: Integrating machine learning with model-based control has gained significant interest in recent years. However, simply combining these approaches does not ensure improved performance or explainability of the closed-loop system. In the first part of this talk, we discuss embedding physical knowledge into the machine learning process through physics-informed machine learning. We demonstrate how physical knowledge and desired system properties can be incorporated into Gaussian process learning and neural networks. In the second part, we explain how physically constrained, machine learning-supported models for system dynamics, references, disturbances, and desired cost functions can be integrated with model-based control techniques to provide guarantees. We focus on model predictive control approaches due to their high degree of flexibility.
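As a rough illustration of how such learned components typically enter a predictive controller (a generic formulation, not the specific schemes presented in this talk), a learning-supported model predictive control problem can be written as

\[
\min_{u_0,\dots,u_{N-1}} \; \sum_{k=0}^{N-1} \ell(x_k, u_k) + V_f(x_N)
\quad \text{s.t.} \quad x_{k+1} = f_{\mathrm{phys}}(x_k, u_k) + g_{\theta}(x_k, u_k), \quad x_k \in \mathcal{X}, \; u_k \in \mathcal{U},
\]

where \(f_{\mathrm{phys}}\) is a physics-based prior model and \(g_{\theta}\) a learned correction (e.g., a Gaussian process or neural network). When the learned component provides uncertainty estimates, these can be used to tighten the constraint sets and to establish closed-loop guarantees.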

Bio: Rolf Findeisen studied engineering cybernetics at the University of Stuttgart and chemical engineering at the University of Wisconsin–Madison. He began his doctoral studies at ETH Zurich and completed them in 2004 at the University of Stuttgart, after following his doctoral advisor there. In 2007, Rolf was appointed professor at the Institute of Automatic Control at Otto-von-Guericke University Magdeburg. Since August 2021, he has headed the Control and Cyber-physical Systems Laboratory at the Technical University of Darmstadt. Rolf is engaged in method development in the area of systems theory and control engineering, focusing on optimization-based and predictive control; fusing machine learning approaches such as Gaussian processes and neural networks with model-based control that provides guarantees; and the control of complex, distributed systems via communication networks.

Rolf Slides.

Open-source Code Tutorial Session

Split into two 90-minute blocks, this session will provide hands-on code tutorials in the form of well-documented Jupyter notebooks introducing open-source libraries for physics-informed machine learning and control.

PyEPO - Bo Tang [13:45-15:15]

Summary: PyEPO (PyTorch-based End-to-End Predict-then-Optimize Tool) is a Python-based, open-source software package that supports modeling and solving predict-then-optimize problems with linear objective functions. The core capability of PyEPO is to build optimization models with GurobiPy, Pyomo, or other solvers and algorithms, and then embed the optimization model into an artificial neural network for end-to-end training. For this purpose, PyEPO implements various methods as PyTorch autograd modules.

https://github.com/khalil-research/PyEPO
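Below is a minimal sketch of the predict-then-optimize training loop that PyEPO enables, loosely following the shortest-path example in the library's documentation. Module and function names (pyepo.data.shortestpath.genData, pyepo.model.grb.shortestPathModel, pyepo.data.dataset.optDataset, pyepo.func.SPOPlus) are recalled from that documentation and should be checked against the current release; the grid size, edge count, and predictor architecture are illustrative assumptions.

import torch
from torch import nn
from torch.utils.data import DataLoader
import pyepo

# synthetic shortest-path data: features x and true edge costs c (as in the PyEPO examples)
x, c = pyepo.data.shortestpath.genData(1000, 5, (5, 5))

# optimization model solved with GurobiPy (a working Gurobi installation is assumed)
optmodel = pyepo.model.grb.shortestPathModel((5, 5))

# dataset pairing features with costs, pre-computed optimal solutions, and optimal objective values
dataset = pyepo.data.dataset.optDataset(optmodel, x, c)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# linear cost predictor (40 = assumed number of edges on a 5x5 grid) and the SPO+ loss as an autograd module
predictor = nn.Linear(5, 40)
spop = pyepo.func.SPOPlus(optmodel, processes=1)
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-2)

for epoch in range(10):
    for feats, costs, sols, objs in loader:
        pred_costs = predictor(feats)                       # predict edge costs from features
        loss = spop(pred_costs, costs, sols, objs).mean()   # decision-focused (end-to-end) loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()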

NeuroMANCER - Jan Drgona [15:45-17:15]

Summary: Neural Modules with Adaptive Nonlinear Constraints and Efficient Regularizations (NeuroMANCER) is an open-source differentiable programming (DP) library for solving parametric constrained optimization problems, physics-informed system identification, and parametric model-based optimal control. NeuroMANCER is written in PyTorch and allows for systematic integration of machine learning with scientific computing for creating end-to-end differentiable models and algorithms embedded with prior knowledge and physics.

https://github.com/pnnl/neuromancer
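The sketch below illustrates the NeuroMANCER modeling pattern for a simple parametric constrained optimization problem, loosely based on examples in the repository. Class names and signatures (DictDataset, Node, blocks.MLP, variable, PenaltyLoss, Problem, Trainer) are recalled from the library's documentation and may differ between releases; the problem itself (minimize x1^2 + x2^2 subject to x1 + x2 >= p) is an illustrative assumption.

import torch
from torch.utils.data import DataLoader
from neuromancer.dataset import DictDataset
from neuromancer.system import Node
from neuromancer.modules import blocks
from neuromancer.constraint import variable
from neuromancer.loss import PenaltyLoss
from neuromancer.problem import Problem
from neuromancer.trainer import Trainer

# sampled problem parameters p; dictionary keys become symbolic variable names
train_data = DictDataset({"p": torch.rand(1000, 1)}, name="train")
dev_data = DictDataset({"p": torch.rand(200, 1)}, name="dev")
train_loader = DataLoader(train_data, batch_size=64, collate_fn=train_data.collate_fn, shuffle=True)
dev_loader = DataLoader(dev_data, batch_size=64, collate_fn=dev_data.collate_fn)

# neural solution map p -> x = [x1, x2], wrapped as a symbolic computational node
net = blocks.MLP(insize=1, outsize=2, hsizes=[32, 32])
sol_map = Node(net, ["p"], ["x"], name="solution_map")

# symbolic variables used to express the parametric objective and constraint
x1 = variable("x")[:, [0]]
x2 = variable("x")[:, [1]]
p = variable("p")[:, [0]]
objective = (x1**2 + x2**2).minimize()
constraint = x1 + x2 >= p

# penalty loss aggregating the objective and constraint violations into one differentiable loss
loss = PenaltyLoss(objectives=[objective], constraints=[constraint])
problem = Problem(nodes=[sol_map], loss=loss)

# gradient-based training of the solution map over the sampled parameters
optimizer = torch.optim.Adam(problem.parameters(), lr=1e-3)
trainer = Trainer(problem, train_loader, dev_loader, optimizer=optimizer, epochs=20)
best_model = trainer.train()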

Neuromancer Slides.

Workshop Chairs

Thomas Beckers

Vanderbilt University

thomas...@vanderbilt.edu

Ján Drgoňa

Pacific Northwest National Laboratory

jan...@pnnl.gov

Draguna Vrabie

Pacific Northwest National Laboratory

drag...@pnnl.gov