What Is Physics-Informed Machine Learning?
In many scientific problems, we want to infer unknown parameters of a physical system from observations while respecting known physical laws. For example, suppose we measure the number of cells over time and want to estimate how fast they grow. If we fit a neural network directly to the data, the model may overfit, especially when the data is sparse or noisy. However, we often know something about the dynamics in advance. For instance, early cell growth is often modeled by the ordinary differential equation \(u_t = a u\), where \(u(t)\) is the cell population and \(a\) is an unknown growth rate.
Problems of this type are known as inverse problems, where the goal is to infer unknown parameters of a model from observed data. Inverse problems have been studied extensively in applied mathematics, statistics, and engineering. Physics-informed machine learning provides one way to approach such problems by combining data with physical models.
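To make the inverse problem concrete: the ODE \(u_t = a u\) has the exact solution \(u(t) = u_0 e^{at}\), so \(\log u(t)\) is linear in \(t\) with slope \(a\). A classical baseline is therefore a log-linear least-squares fit. The sketch below uses synthetic data with hypothetical values \(a = 0.8\), \(u_0 = 1\), and 1% multiplicative noise:

```python
import math
import random

# Hypothetical setup: exponential growth u(t) = u0 * exp(a * t)
# with true growth rate a = 0.8 and initial population u0 = 1.0.
a_true, u0 = 0.8, 1.0
random.seed(0)
ts = [0.1 * k for k in range(1, 11)]
data = [u0 * math.exp(a_true * t) * (1 + 0.01 * random.gauss(0, 1)) for t in ts]

# Classical estimate: log u(t) = log u0 + a*t, so fit a line to (t, log u)
# and read off the slope as the growth-rate estimate.
xs, ys = ts, [math.log(d) for d in data]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
a_hat = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
print(round(a_hat, 2))  # close to the true value 0.8
```

This trick works here because the ODE has a closed-form solution; physics-informed approaches like PINNs and BiLO aim at problems where no such formula is available.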
What Is BiLO?
Bilevel Local Operator Learning uses a neural network \(u(t,a;W)\) to represent the solution of the differential equation, where \(W\) are the network weights. The unknown parameter \(a\) is also included as an input to the neural network. To enforce the initial condition, we can represent the solution by \(u(t,a;W) = u_0 + t\,N(t,a;W)\), where \(u_0\) is the initial condition and \(N\) is a multilayer perceptron (MLP); this ansatz satisfies \(u(0,a;W) = u_0\) for any weights \(W\).
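A minimal sketch of this hard-constraint ansatz, using a toy one-hidden-layer MLP with placeholder (untrained, hypothetical) weights:

```python
import math

u0 = 1.0  # initial condition

def mlp(t, a, W):
    # Toy one-hidden-layer MLP N(t, a; W) with tanh activation.
    # W = (weights on t, weights on a, biases, output weights).
    wt, wa, b, v = W
    return sum(vi * math.tanh(wi * t + ai * a + bi)
               for wi, ai, bi, vi in zip(wt, wa, b, v))

def u(t, a, W):
    # Hard-constraint ansatz: u(0, a; W) = u0 holds for any W,
    # since the network output is multiplied by t.
    return u0 + t * mlp(t, a, W)

# Placeholder weights, purely for illustration.
W = ([0.5, -0.3], [0.2, 0.1], [0.0, 0.1], [0.4, -0.2])
print(u(0.0, 0.8, W))  # exactly 1.0, regardless of W
```

Because the initial condition holds by construction, the training losses below only need to handle the differential equation and the data.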
BiLO solves the following bilevel optimization problem:
\[ \begin{aligned} a^* &= \arg\min_a \underbrace{\frac{1}{N}\sum_i \left(u(t_i,a;W^*(a))-\hat u(t_i)\right)^2}_{\text{data loss } L_{\rm data}}, \\ W^*(a) &= \arg\min_W \underbrace{\frac{1}{N_r}\sum_j \left(u_t(t_j,a;W)-a\,u(t_j,a;W)\right)^2}_{\text{residual loss } L_{\rm res}}+ \\ & w_{\mathrm{rgrad}} \underbrace{\frac{1}{N_r}\sum_j \left(\partial_a\big(u_t(t_j,a;W)-a\,u(t_j,a;W)\big)\right)^2}_{\text{residual-gradient loss } L_{\rm rgrad}} . \end{aligned} \]
The lower-level problem trains the neural network to approximate a local solution operator near the current parameter value.
First, the residual loss enforces that the network approximately solves the PDE at the current parameter \(a\).
Second, the residual-gradient term penalizes the derivative of the residual with respect to \(a\), so that the residual remains small as \(a\) varies. This term is closely related to the sensitivity of the PDE solution with respect to the parameter. In other words, the network does not only solve the PDE at one parameter value; it also approximates how the solution varies in a small neighborhood of that parameter. This is different from classical operator learning, where the goal is to learn the full parameter-to-solution map over a wide range of parameters.
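For the growth ODE, the quantity inside \(L_{\rm rgrad}\) is \(\partial_a(u_t - a u) = \partial_a u_t - u - a\,\partial_a u\). The sketch below (with hypothetical closed-form "networks" standing in for trained MLPs) contrasts an exact local operator, whose residual gradient vanishes, with a network that ignores its \(a\)-input and therefore solves the ODE only at a single parameter value:

```python
import math

a = 0.8  # current parameter value

def rgrad_exact(t):
    # Exact local operator u(t, a) = exp(a*t):
    # u_t = a*u,  du/da = t*u,  d(u_t)/da = u + a*t*u,
    # so d/da (u_t - a*u) = (u + a*t*u) - u - a*(t*u) = 0.
    u = math.exp(a * t)
    return (u + a * t * u) - u - a * (t * u)

def rgrad_frozen(t):
    # A network that ignores its a-input: u(t, a) = exp(0.8*t).
    # It solves the ODE at a = 0.8, but du/da = 0 and d(u_t)/da = 0,
    # so the residual gradient is -u, which is nonzero.
    u = math.exp(0.8 * t)
    return 0.0 - u - a * 0.0

print(abs(rgrad_exact(0.5)) < 1e-12)  # True: residual stays zero as a varies
print(rgrad_frozen(0.5) < 0)          # True: nonzero residual gradient
```

The \(L_{\rm rgrad}\) term pushes the trained network toward the first behavior: a solution that remains valid in a neighborhood of the current \(a\).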
Once such a local operator is available, taking the derivative of the data loss with respect to the parameter provides a descent direction for updating the parameter while respecting the differential equation constraint.
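To illustrate the upper-level update in isolation, suppose the lower-level problem were solved exactly, so that \(u(t,a) = u_0 e^{at}\) for every \(a\) (both lower-level losses are then zero). Gradient descent on the data loss in \(a\) alone, using the sensitivity \(\partial_a u = u_0\,t\,e^{at}\), recovers the true parameter. The step size and iteration count below are hypothetical:

```python
import math

# Idealized BiLO upper level: the local operator is exact,
# u(t, a) = u0 * exp(a * t), so only the data loss drives updates to a.
u0, a_true = 1.0, 0.8
ts = [0.1 * k for k in range(1, 11)]
data = [u0 * math.exp(a_true * t) for t in ts]  # noiseless observations

def u(t, a):
    return u0 * math.exp(a * t)

def du_da(t, a):
    return u0 * t * math.exp(a * t)  # sensitivity of u with respect to a

a, lr = 0.2, 0.2  # initial guess and step size (illustrative values)
for _ in range(300):
    # d/da of the mean-squared data loss, via the chain rule.
    grad = sum(2 * (u(t, a) - d) * du_da(t, a) for t, d in zip(ts, data)) / len(ts)
    a -= lr * grad
print(round(a, 3))  # converges toward the true value 0.8
```

In actual BiLO training, the lower-level network is only an approximate local operator, and the two levels are optimized in an alternating fashion.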
What Is a PINN?
A Physics-Informed Neural Network (PINN) represents the solution using a neural network \(u(t;W)\). In contrast to BiLO, the parameter \(a\) is not an input to the neural network, although it is still treated as a trainable variable.
PINNs solve a single optimization problem:
\[ \min_{W,a} \quad \underbrace{\frac{1}{N_r}\sum_j \left(u_t(t_j;W)-a\,u(t_j;W)\right)^2}_{\text{residual loss } L_{\rm res}} + w_{\mathrm{data}} \underbrace{\frac{1}{N}\sum_i \left(u(t_i;W)-\hat u(t_i)\right)^2}_{\text{data loss } L_{\rm data}} . \]
In this formulation, the neural network is trained so that it both fits the observed data and approximately satisfies the governing differential equation.
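The structure of this joint objective can be seen with a toy trial function in place of the MLP. Taking \(u(t;W) = c\,e^{bt}\) with \(W = (c, b)\) gives \(u_t = b\,u\), so the residual \(u_t - a u\) vanishes exactly when \(b = a\). The trial function and the weight \(w_{\mathrm{data}}\) below are hypothetical choices for illustration:

```python
import math

w_data = 10.0  # illustrative data-loss weight
u0, a_true = 1.0, 0.8
t_res  = [0.1 * k for k in range(11)]        # residual collocation points
t_data = [0.1 * k for k in range(1, 11)]     # observation times
data   = [u0 * math.exp(a_true * t) for t in t_data]

def pinn_loss(c, b, a):
    # Joint PINN objective L_res + w_data * L_data for the trial
    # function u(t) = c * exp(b * t), whose derivative is u_t = b * u.
    u  = lambda t: c * math.exp(b * t)
    ut = lambda t: b * c * math.exp(b * t)
    L_res  = sum((ut(t) - a * u(t)) ** 2 for t in t_res) / len(t_res)
    L_data = sum((u(t) - d) ** 2 for t, d in zip(t_data, data)) / len(t_data)
    return L_res + w_data * L_data

print(pinn_loss(1.0, 0.8, 0.8))      # 0.0: correct solution and parameter
print(pinn_loss(1.0, 0.5, 0.5) > 0)  # True: residual is zero, data loss is not
```

The second evaluation shows why the data term is essential: a wrong parameter can still yield a zero residual, and only the mismatch with the observations rules it out.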
Interactive Playground
Inspired by TensorFlow Playground. This visualization is inspired by the original TensorFlow Playground created by Daniel Smilkov and Shan Carter, which made neural networks easier to understand through interactive visualization. This version adapts the same idea to illustrate concepts in physics-informed machine learning, including PINNs and BiLO.
Open Source. The playground is open-sourced with the goal of making these ideas more accessible for teaching and learning. Feel free to reuse or modify it for educational purposes, demonstrations, or experimentation. Suggestions for improvements, extensions, or new features are welcome.
Web-Based Implementation. As with the original TensorFlow Playground, everything runs directly in the browser. There is no PyTorch or TensorFlow behind the scenes. The playground uses a lightweight neural-network implementation written in JavaScript, including a small manual backpropagation engine capable of computing the higher-order derivatives required for physics-informed learning.
AI Assistance. Gemini was used to assist with deriving some of the higher-order derivative expressions, and Cursor was used to help build and iterate on the code.
References
Zhang, R.Z., Miles, C.E., Xie, X., Lowengrub, J.S., 2026. BiLO: Bilevel Local Operator Learning for PDE Inverse Problems. Journal of Computational Physics 551, 114679.
Raissi, M., Perdikaris, P., Karniadakis, G.E., 2019. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics 378, 686–707.