
Evolutional Deep Neural Network (2021-03-17)
The notion of an Evolutional Deep Neural Network (EDNN) is introduced for the solution of partial differential equations (PDE). The parameters of the network are trained to represent the initial state of the system only, and are subsequently updated dynamically, without any further training, to provide an accurate prediction of the evolution of the PDE system. In this framework, the network parameters are treated as functions with respect to the appropriate coordinate and are numerically updated using the governing equations. By marching the neural network weights in the parameter space, EDNN can predict state-space trajectories that are indefinitely long, which is difficult for other neural network approaches. Boundary conditions of the PDEs are treated as hard constraints, are embedded into the neural network, and are therefore exactly satisfied throughout the entire solution trajectory. Several applications, including the heat equation, the advection equation, the Burgers equation, the Kuramoto-Sivashinsky equation, and the Navier-Stokes equations, are solved to demonstrate the versatility and accuracy of EDNN. The application of EDNN to the incompressible Navier-Stokes equations embeds the divergence-free constraint into the network design so that the projection of the momentum equation onto the solenoidal space is implicitly achieved. The numerical results verify the accuracy of EDNN solutions relative to analytical and benchmark numerical solutions, both for the transient dynamics and statistics of the system.
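To make the weight-marching idea concrete, here is a minimal sketch for a 1-D heat equation u_t = u_xx: at each step the weight velocity is obtained from a least-squares system built from the network Jacobian, then the weights are advanced by explicit Euler. The tiny network, collocation grid, step size, and time integrator are all illustrative assumptions, not the paper's implementation; in particular the Jacobian is built row by row for clarity, not speed.

```python
import torch

# Sketch of EDNN-style weight marching for u_t = u_xx (assumptions: tiny MLP,
# uniform collocation grid, explicit Euler; boundary handling omitted).
model = torch.nn.Sequential(
    torch.nn.Linear(1, 30), torch.nn.Tanh(), torch.nn.Linear(30, 1)
)
params = list(model.parameters())
x = torch.linspace(0.0, 1.0, 100, requires_grad=True).reshape(-1, 1)

def rhs_and_jacobian():
    u = model(x)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    rows = []
    for i in range(x.shape[0]):     # J[i, j] = d u(x_i) / d theta_j, row by row
        grads = torch.autograd.grad(u[i, 0], params, retain_graph=True)
        rows.append(torch.cat([g.reshape(-1) for g in grads]))
    return u_xx.detach().reshape(-1), torch.stack(rows)

dt = 1e-4
for step in range(10):              # march the weights, no retraining
    N, J = rhs_and_jacobian()       # N(u) = u_xx is the PDE right-hand side
    dtheta = torch.linalg.lstsq(J, N.unsqueeze(1)).solution.squeeze(1)
    with torch.no_grad():           # theta <- theta + dt * dtheta (explicit Euler)
        offset = 0
        for p in params:
            n = p.numel()
            p += dt * dtheta[offset:offset + n].reshape(p.shape)
            offset += n
```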
 
A deep surrogate approach to efficient Bayesian inversion in PDE and integral equation models (2020-02-25)
We propose a novel deep learning approach to efficiently perform Bayesian inference in partial differential equation (PDE) and integral equation models over potentially high-dimensional parameter spaces. The contributions of this paper are two-fold: the first is the introduction of a neural network approach to approximating the solutions of Fredholm and Volterra integral equations of the first and second kind; the second is the description of a deep surrogate model which allows for efficient sampling from a Bayesian posterior distribution whose likelihood depends on the solutions of PDEs or integral equations. For the latter, our method relies on the approximate representation of parametric solutions by neural networks. This deep learning approach allows the accurate and efficient approximation of parametric solutions in significantly higher dimensions than is possible using classical techniques. Since the approximated solutions are very cheap to evaluate, Bayesian inverse problems over large parameter spaces become tractable using Markov chain Monte Carlo. We demonstrate the efficiency of our method using two real-world examples: Bayesian inference in the PDE and integral equation case for an example from electrochemistry, and Bayesian inference of a function-valued heat-transfer parameter with applications in aviation.
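The payoff of a cheap surrogate is that every MCMC step costs one network evaluation rather than one PDE solve. A minimal sketch of that loop, assuming a toy `surrogate` stand-in for the trained network, a Gaussian likelihood with noise level `sigma`, and a standard-normal prior (all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate(theta):
    # stand-in for the trained network mapping parameters -> model predictions
    return np.sin(theta).sum(keepdims=True)

y_obs = np.array([0.8])
sigma = 0.1

def log_post(theta):
    resid = y_obs - surrogate(theta)                 # cheap likelihood evaluation
    return -0.5 * np.sum(resid**2) / sigma**2 - 0.5 * np.sum(theta**2)  # N(0,1) prior

theta = np.zeros(5)
lp = log_post(theta)
samples = []
for _ in range(10000):                               # random-walk Metropolis
    prop = theta + 0.1 * rng.standard_normal(theta.shape)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:         # accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)
```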
 
Actor-Critic Algorithm for High-dimensional Partial Differential Equations (2020-10-07)
We develop a deep learning model to effectively solve high-dimensional nonlinear parabolic partial differential equations (PDE). We follow the Feynman-Kac formula to reformulate the PDE as an equivalent stochastic control problem governed by a Backward Stochastic Differential Equation (BSDE) system. The Markovian property of the BSDE is utilized in designing our neural network architecture, which is inspired by the Actor-Critic algorithm commonly applied in deep reinforcement learning. Compared to the state-of-the-art model, we make several improvements, including 1) a large reduction in trainable parameters, 2) a faster convergence rate, and 3) fewer hyperparameters to tune. We demonstrate these improvements by solving a few well-known classes of PDEs, such as the Hamilton-Jacobi-Bellman equation, the Allen-Cahn equation, and the Black-Scholes equation, with dimensions on the order of 100.
 
General solutions for nonlinear differential equations: a rule-based self-learning approach using deep reinforcement learning (2019-05-29)
A universal rule-based self-learning approach using deep reinforcement learning (DRL) is proposed for the first time to solve nonlinear ordinary differential equations and partial differential equations. The solver consists of a deep neural network-structured actor that outputs candidate solutions, and a critic derived only from physical rules (governing equations and boundary and initial conditions). Solutions in discretized time are treated as multiple tasks sharing the same governing equation, and the parameters of the current step provide an ideal initialization for the next owing to the temporal continuity of the solutions; this exhibits a transfer-learning characteristic and indicates that the DRL solver has captured the intrinsic nature of the equation. The approach is verified by solving the Schrödinger, Navier-Stokes, Burgers', Van der Pol, and Lorenz equations and an equation of motion. The results indicate that the approach yields solutions with high accuracy and that the solution process is expected to become faster.
 
DL-PDE: Deep-learning based data-driven discovery of partial differential equations from discrete and noisy data (2020-04-06)
In recent years, data-driven methods have been developed to learn dynamical systems and partial differential equations (PDE). The goal of such work is to discover unknown physics and the corresponding equations. However, prior to achieving this goal, major challenges remain to be resolved, including learning PDEs from noisy and limited discrete data. To overcome these challenges, in this work a deep-learning based data-driven method, called DL-PDE, is developed to discover the governing PDEs of underlying physical processes. The DL-PDE method combines deep learning via neural networks and data-driven discovery of PDEs via sparse regression. In DL-PDE, a neural network is first trained on the data; a large amount of meta-data is then generated, and the required derivatives are calculated by automatic differentiation. Finally, the form of the PDE is discovered by sparse regression, as sketched below. The proposed method is tested on physical processes governed by the groundwater flow equation, the convection-diffusion equation, the Burgers equation, and the Korteweg-de Vries (KdV) equation, for proof-of-concept and for applications in real-world engineering settings. The proposed method achieves satisfactory results when data are noisy and limited.
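A minimal sketch of that pipeline, assuming `u_net` has already been trained on the noisy measurements and using scikit-learn's Lasso as a stand-in for the paper's sparse regression; the candidate library and sample counts are illustrative:

```python
import torch
from sklearn.linear_model import Lasso

# u_net maps (x, t) -> u; assumed trained on noisy, discrete data beforehand.
u_net = torch.nn.Sequential(torch.nn.Linear(2, 50), torch.nn.Tanh(), torch.nn.Linear(50, 1))

xt = torch.rand(2000, 2, requires_grad=True)          # meta-data points (x, t)
u = u_net(xt)
du = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
u_x, u_t = du[:, :1], du[:, 1:]                       # derivatives via autodiff
u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, :1]

# Candidate library Theta = [1, u, u_x, u_xx, u*u_x]; the true PDE should be sparse in it.
Theta = torch.cat([torch.ones_like(u), u, u_x, u_xx, u * u_x], dim=1).detach().numpy()
ut = u_t.detach().numpy().ravel()

xi = Lasso(alpha=1e-3, fit_intercept=False).fit(Theta, ut).coef_
print(dict(zip(["1", "u", "u_x", "u_xx", "u*u_x"], xi)))  # nonzero terms form the PDE
```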
 
A Discussion on Solving Partial Differential Equations using Neural Networks (2019-04-15)
Can neural networks learn to solve partial differential equations (PDEs)? We investigate this question for two (systems of) PDEs, namely, the Poisson equation and the steady Navier-Stokes equations. The contributions of this paper are five-fold. (1) Numerical experiments show that small neural networks (< 500 learnable parameters) are able to accurately learn complex solutions for systems of partial differential equations. (2) It investigates the influence of random weight initialization on the quality of the neural network approximate solution and demonstrates how one can take advantage of this non-determinism using ensemble learning. (3) It investigates the suitability of the loss function used in this work. (4) It studies the benefits and drawbacks of solving (systems of) PDEs with neural networks compared to classical numerical methods. (5) It proposes an exhaustive list of possible directions of future work.
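For intuition about how such a small network and its residual loss look in practice, here is a minimal sketch for the 1-D Poisson problem -u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0. The forcing, the hard-constraint ansatz x(1-x)·net(x), and the 61-parameter network are illustrative assumptions, not the paper's setup (note it stays well under the 500-parameter budget discussed above):

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 20), torch.nn.Tanh(), torch.nn.Linear(20, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
f = lambda x: (torch.pi**2) * torch.sin(torch.pi * x)   # chosen so u(x) = sin(pi x)

for it in range(5000):
    x = torch.rand(128, 1, requires_grad=True)          # random collocation points
    u = x * (1 - x) * net(x)             # ansatz satisfies the boundary conditions exactly
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    loss = ((-u_xx - f(x)) ** 2).mean()  # mean squared PDE residual as the loss
    opt.zero_grad(); loss.backward(); opt.step()
```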
 
DLGA-PDE: Discovery of PDEs with incomplete candidate library via combination of deep learning and genetic algorithm (2020-01-20)
Data-driven methods have recently been developed to discover the underlying partial differential equations (PDEs) of physical problems. However, these methods usually require a complete candidate library of potential terms in the PDE. To overcome this limitation, we propose a novel framework combining deep learning and a genetic algorithm, called DLGA-PDE, for discovering PDEs. In the proposed framework, a deep neural network trained on the available data of a physical problem is utilized to generate meta-data and calculate derivatives, and the genetic algorithm is then employed to discover the underlying PDE, as sketched below. Owing to the merits of the genetic algorithm, such as mutation and crossover, DLGA-PDE can work with an incomplete candidate library. The proposed DLGA-PDE is tested on the discovery of the Korteweg-de Vries (KdV) equation, the Burgers equation, the wave equation, and the Chaffee-Infante equation for proof-of-concept. Satisfactory results are obtained without the need for a complete candidate library, even in the presence of noisy and limited data.
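A minimal sketch of the genetic-algorithm stage: a genome is a subset of candidate terms, fitness is the least-squares residual of u_t against those terms plus a size penalty, and crossover/mutation let terms enter and leave the active set. The placeholder `Theta` and `ut` (which would come from the trained network), the population size, and the rates are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_terms = 8
Theta = rng.standard_normal((500, n_terms))          # placeholder meta-data library
ut = Theta[:, 2] - 0.5 * Theta[:, 5]                 # hidden "true" PDE for illustration

def fitness(mask):                                   # lower is better
    if not mask.any():
        return np.inf
    coef, *_ = np.linalg.lstsq(Theta[:, mask], ut, rcond=None)
    return np.mean((Theta[:, mask] @ coef - ut) ** 2) + 1e-3 * mask.sum()

pop = rng.random((40, n_terms)) < 0.3                # initial random genomes
for gen in range(100):
    pop = pop[np.argsort([fitness(m) for m in pop])] # rank by fitness, keep elite
    kids = pop[:20].copy()
    cut = rng.integers(1, n_terms)                   # one-point crossover between pairs
    kids[::2, cut:], kids[1::2, cut:] = kids[1::2, cut:].copy(), kids[::2, cut:].copy()
    kids ^= rng.random(kids.shape) < 0.05            # mutation flips a term on/off
    pop = np.vstack([pop[:20], kids])
print(pop[0])                                        # best genome: which terms survive
```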
 
Data-driven peakon and periodic peakon travelling wave solutions of some nonlinear dispersive equations via deep learning (2021-01-12)
In the field of mathematical physics, there exist many physically interesting nonlinear dispersive equations with peakon solutions, which are solitary waves with a discontinuous first-order derivative at the wave peak. In this paper, we apply multi-layer physics-informed neural networks (PINNs) to study the data-driven peakon and periodic peakon solutions of some well-known nonlinear dispersive equations with initial-boundary value conditions, such as the Camassa-Holm (CH) equation, the Degasperis-Procesi equation, the modified CH equation with cubic nonlinearity, the Novikov equation with cubic nonlinearity, the mCH-Novikov equation, the b-family equation with quartic nonlinearity, and the generalized modified CH equation with quintic nonlinearity. These results will be useful for further study of the peakon solutions and the corresponding experimental design of nonlinear dispersive equations.
 
Asymptotics of Reinforcement Learning with Neural Networks (2019-11-13)
We prove that a single-layer neural network trained with the Q-learning algorithm converges in distribution to a random ordinary differential equation as the size of the model and the number of training steps become large. Analysis of the limit differential equation shows that it has a unique stationary solution which is the solution of the Bellman equation, thus giving the optimal control for the problem. In addition, we study the convergence of the limit differential equation to the stationary solution. As a by-product of our analysis, we obtain the limiting behavior of single-layer neural networks when trained on i.i.d. data with stochastic gradient descent under the widely-used Xavier initialization.
 
StarNet: Gradient-free Training of Deep Generative Models using Determined System of Linear Equations (2021-01-03)
In this paper we present an approach for training deep generative models solely by solving determined systems of linear equations. A network that uses this approach, called a StarNet, has the following desirable properties: 1) training requires no gradients, since the solution of a system of linear equations is deterministic rather than stochastic; 2) it is highly scalable when solving the system of linear equations with respect to the latent codes, and similarly with respect to the parameters of the model; and 3) it gives desirable least-squares bounds for the estimation of latent codes and network parameters within each layer.
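The per-layer building block is an ordinary least-squares solve in place of gradient descent. A minimal single-layer sketch, with random placeholder data and shapes chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 32))        # layer inputs (e.g., latent codes)
Y = rng.standard_normal((256, 16))        # desired layer outputs

W, *_ = np.linalg.lstsq(X, Y, rcond=None) # closed-form weights, no gradients involved
print(np.linalg.norm(X @ W - Y))          # the least-squares residual bounds the layer error
```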
 
Solving non-linear Kolmogorov equations in large dimensions by using deep learning: a numerical comparison of discretization schemes (2020-12-09)
Non-linear partial differential Kolmogorov equations are successfully used to describe a wide range of time-dependent phenomena in the natural sciences, engineering, and finance. For example, in physical systems, the Allen-Cahn equation describes pattern formation associated with phase transitions. In finance, the Black-Scholes equation describes the evolution of the price of derivative investment instruments. Such modern applications often require solving these equations in high-dimensional regimes in which classical approaches are ineffective. Recently, an interesting new approach based on deep learning has been introduced by E, Han, and Jentzen [1], [2]. The main idea is to construct a deep network that is trained on samples of the discretized stochastic differential equation underlying the Kolmogorov equation. The network is able to approximate the solutions of the Kolmogorov equation with polynomial complexity over whole spatial domains, therefore avoiding the curse of dimensionality. In this contribution we study variants of the deep networks obtained by using different discretization schemes for the stochastic differential equation. We compare the performance of the associated networks on benchmark examples and show that, for some discretization schemes, improvements in accuracy are possible without affecting the computational complexity.
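To illustrate what "different discretization schemes" means here, a minimal comparison of Euler-Maruyama and Milstein on geometric Brownian motion dX = mu·X dt + sig·X dW (the Black-Scholes case, where the exact path is known); in the deep-learning solvers above, such sampled paths are the training data. Parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sig, T, n, paths = 0.05, 0.2, 1.0, 50, 10000
dt = T / n
X_em = np.full(paths, 1.0)                # Euler-Maruyama
X_mil = np.full(paths, 1.0)               # Milstein (adds a second-order noise term)
W = np.zeros(paths)
for _ in range(n):
    dW = np.sqrt(dt) * rng.standard_normal(paths)
    W += dW
    X_em += mu * X_em * dt + sig * X_em * dW
    X_mil += mu * X_mil * dt + sig * X_mil * dW + 0.5 * sig**2 * X_mil * (dW**2 - dt)

X_exact = np.exp((mu - 0.5 * sig**2) * T + sig * W)   # exact pathwise solution
print(np.abs(X_em - X_exact).mean(), np.abs(X_mil - X_exact).mean())  # strong errors
```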
 
Learning To Solve Differential Equations Across Initial Conditions (2020-04-19)
Recently, there has been a lot of interest in using neural networks for solving partial differential equations. A number of neural network-based partial differential equation solvers have been formulated which provide performance equivalent, and in some cases even superior, to that of classical solvers. However, these neural solvers generally need to be retrained each time the initial conditions or the domain of the partial differential equation changes. In this work, we pose the problem of approximating the solution of a fixed partial differential equation for arbitrary initial conditions as that of learning a conditional probability distribution. We demonstrate the utility of our method on Burgers' equation.
 
Three algorithms for solving high-dimensional fully-coupled FBSDEs through deep learning (2020-02-02)
Recently, deep learning methods have been used to solve forward-backward stochastic differential equations (FBSDEs) and parabolic partial differential equations (PDEs), with good accuracy and performance for high-dimensional problems. In this paper, we solve fully coupled FBSDEs through deep learning and provide three algorithms. Several numerical results show remarkable performance, especially for high-dimensional cases.
 
Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations (2017-06-14)
We propose a new algorithm for solving parabolic partial differential equations (PDEs) and backward stochastic differential equations (BSDEs) in high dimension, by making an analogy between the BSDE and reinforcement learning, with the gradient of the solution playing the role of the policy function and the loss function given by the error between the prescribed terminal condition and the solution of the BSDE. The policy function is then approximated by a neural network, as is done in deep reinforcement learning. Numerical results using TensorFlow illustrate the efficiency and accuracy of the proposed algorithm for several 100-dimensional nonlinear PDEs from physics and finance, such as the Allen-Cahn equation, the Hamilton-Jacobi-Bellman equation, and a nonlinear pricing model for financial derivatives.
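A stripped-down sketch of this scheme: a network per time step plays the role of the "policy" Z (the solution gradient), the scalar Y_0 is a trainable parameter, Y is marched forward along simulated paths, and the loss is the terminal mismatch. The zero-drift dynamics, zero generator term, terminal condition g, and dimensions are simplifying assumptions for readability (the paper handles a general nonlinearity f and uses TensorFlow):

```python
import torch

d, n_steps, dt = 10, 20, 1.0 / 20
g = lambda x: (x ** 2).sum(dim=1, keepdim=True)       # illustrative terminal condition

z_nets = torch.nn.ModuleList(
    [torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.ReLU(), torch.nn.Linear(32, d))
     for _ in range(n_steps)]                          # one "policy" network per step
)
y0 = torch.nn.Parameter(torch.zeros(1))               # unknown solution value u(0, x0)
opt = torch.optim.Adam([y0, *z_nets.parameters()], lr=1e-3)

for it in range(2000):
    x = torch.zeros(256, d)                           # X_0 for a batch of paths
    y = y0.expand(256, 1)
    for k in range(n_steps):
        dw = torch.randn(256, d) * dt ** 0.5
        z = z_nets[k](x)
        y = y + (z * dw).sum(dim=1, keepdim=True)     # dY = Z . dW (generator term omitted)
        x = x + dw                                    # dX = dW (Brownian motion)
    loss = ((y - g(x)) ** 2).mean()                   # error against terminal condition
    opt.zero_grad(); loss.backward(); opt.step()
```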
 
A Neuro-Symbolic Method for Solving Differential and Functional Equations (2020-11-04)
When neural networks are used to solve differential equations, they usually produce solutions in the form of black-box functions that are not directly mathematically interpretable. We introduce a method for generating symbolic expressions to solve differential equations while leveraging deep learning training methods. Unlike existing methods, our system does not require learning a language model over symbolic mathematics, making it scalable, compact, and easily adaptable to a variety of tasks and configurations. As part of the method, we propose a novel neural architecture for learning mathematical expressions to optimize a customizable objective. The system is designed to always return a valid symbolic formula, generating a useful approximation when an exact analytic solution to a differential equation cannot be found. We demonstrate through examples how our method can be applied to a number of differential equations, often obtaining symbolic approximations that are useful or insightful. Furthermore, we show how the system can be effortlessly generalized to find symbolic solutions to other mathematical tasks, including integration and functional equations.
 
Deep Forward-Backward SDEs for Min-max Control (2019-06-11)
This paper presents a novel approach to numerically solving stochastic differential games for nonlinear systems. The proposed approach relies on the nonlinear Feynman-Kac theorem, which establishes a connection between parabolic deterministic partial differential equations and forward-backward stochastic differential equations. Using this theorem, the Hamilton-Jacobi-Isaacs partial differential equation associated with differential games is represented by a system of forward-backward stochastic differential equations. Numerical solution of this system is performed using importance sampling and a Long Short-Term Memory recurrent neural network, which is trained in an offline fashion. The resulting algorithm is tested on two example systems in simulation and compared against standard risk-neutral stochastic optimal control formulations.
 
FiniteNet: A Fully Convolutional LSTM Network Architecture for Time-Dependent Partial Differential Equations (2020-02-07)
In this work, we present a machine learning approach for reducing the error when numerically solving time-dependent partial differential equations (PDE). We use a fully convolutional LSTM network to exploit the spatiotemporal dynamics of PDEs. The neural network serves to enhance finite-difference and finite-volume methods (FDM/FVM) that are commonly used to solve PDEs, allowing us to maintain guarantees on the order of convergence of our method. We train the network on simulation data, and show that our network can reduce error by a factor of 2 to 3 compared to the baseline algorithms. We demonstrate our method on three PDEs that each feature qualitatively different dynamics. We look at the linear advection equation, which propagates its initial conditions at a constant speed, the inviscid Burgers' equation, which develops shockwaves, and the Kuramoto-Sivashinsky (KS) equation, which is chaotic.
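A minimal sketch of the underlying hybrid idea: a conventional finite-difference update supplies the baseline step, and a small convolutional network learns a correction, trained against reference solutions. The first-order upwind scheme for linear advection, the plain Conv1d (FiniteNet itself uses a convolutional LSTM), and the exact-solution targets are all illustrative assumptions:

```python
import torch

c, dx, dt, nx = 1.0, 1.0 / 64, 0.5 / 64, 64          # CFL number c*dt/dx = 0.5
corr = torch.nn.Conv1d(1, 1, kernel_size=5, padding=2)
opt = torch.optim.Adam(corr.parameters(), lr=1e-3)

def upwind(u):                                 # first-order upwind step for u_t + c u_x = 0
    return u - c * dt / dx * (u - torch.roll(u, 1, dims=-1))

for it in range(2000):
    phase = torch.rand(8, 1, 1) * 2 * torch.pi
    grid = torch.linspace(0, 2 * torch.pi, nx).view(1, 1, nx)
    u = torch.sin(grid + phase)                # random initial conditions, periodic domain
    target = torch.sin(grid + phase - c * dt)  # exact advection solution one step ahead
    pred = upwind(u) + corr(u)                 # baseline scheme + learned correction
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```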
 
Deep learning based numerical approximation algorithms for stochastic partial differential equations and high-dimensional nonlinear filtering problems (2020-12-02)
In this article we introduce and study a deep learning based approximation algorithm for solutions of stochastic partial differential equations (SPDEs). In the proposed approximation algorithm we employ a deep neural network for every realization of the driving noise process of the SPDE to approximate the solution process of the SPDE under consideration. We test the performance of the proposed approximation algorithm in the case of stochastic heat equations with additive noise, stochastic heat equations with multiplicative noise, stochastic Black-Scholes equations with multiplicative noise, and Zakai equations from nonlinear filtering. In each of these SPDEs the proposed approximation algorithm produces accurate results with short run times in up to 50 space dimensions.
 
Equation Embeddings (2018-03-24)
We present an unsupervised approach for discovering semantic representations of mathematical equations. Equations are challenging to analyze because each is unique, or nearly unique. Our method, which we call equation embeddings, finds good representations of equations by using the representations of their surrounding words. We used equation embeddings to analyze four collections of scientific articles from the arXiv, covering four computer science domains (NLP, IR, AI, and ML) and ~98.5k equations. Quantitatively, we found that equation embeddings provide better models compared to existing word embedding approaches. Qualitatively, we found that equation embeddings provide coherent semantic representations of equations and can capture semantic similarity to other equations and to words.
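The core trick is to treat each (nearly unique) equation as a single vocabulary token, so a standard context-window model learns its embedding from the surrounding words. A minimal sketch using gensim's skip-gram Word2Vec as a stand-in for the paper's embedding models; the toy corpus and the `EQ_###` token naming are illustrative:

```python
from gensim.models import Word2Vec

# Each equation is replaced by one token (EQ_001, EQ_002, ...) in its sentence,
# so its embedding is learned purely from the words that surround it.
corpus = [
    ["we", "minimize", "the", "loss", "EQ_001", "using", "gradient", "descent"],
    ["the", "posterior", "EQ_002", "follows", "from", "bayes", "rule"],
    ["gradient", "descent", "updates", "follow", "EQ_001", "at", "each", "step"],
]
model = Word2Vec(sentences=corpus, vector_size=32, window=5, min_count=1, sg=1)
print(model.wv.most_similar("EQ_001"))    # neighbors: words and equations used similarly
```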
 
Deep Learning Models for Global Coordinate Transformations that Linearize PDEs (2019-11-06)
We develop a deep autoencoder architecture that can be used to find a coordinate transformation which turns a nonlinear PDE into a linear PDE. Our architecture is motivated by the linearizing transformations provided by the Cole-Hopf transform for Burgers equation and the inverse scattering transform for completely integrable PDEs. By leveraging a residual network architecture, a near-identity transformation can be exploited to encode intrinsic coordinates in which the dynamics are linear. The resulting dynamics are given by a Koopman operator matrix $\mathbf{K}$. The decoder allows us to transform back to the original coordinates as well. Multiple time step prediction can be performed by repeated multiplication by the matrix $\mathbf{K}$ in the intrinsic coordinates. We demonstrate our method on a number of examples, including the heat equation and Burgers equation, as well as the substantially more challenging Kuramoto-Sivashinsky equation, showing that our method provides a robust architecture for discovering interpretable, linearizing transforms for nonlinear PDEs.
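A minimal sketch of the training objective behind such a linearizing autoencoder: an encoder maps the state into intrinsic coordinates where a single matrix K advances the dynamics, a decoder maps back, and multi-step prediction is repeated multiplication by K in the latent space. The dimensions, loss weighting, and placeholder snapshot pairs are illustrative assumptions (the paper also uses a residual architecture to encourage near-identity transforms):

```python
import torch

n, latent = 128, 16
enc = torch.nn.Sequential(torch.nn.Linear(n, 64), torch.nn.ReLU(), torch.nn.Linear(64, latent))
dec = torch.nn.Sequential(torch.nn.Linear(latent, 64), torch.nn.ReLU(), torch.nn.Linear(64, n))
K = torch.nn.Parameter(torch.eye(latent))           # Koopman-style linear dynamics
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters(), K], lr=1e-3)

def loss_fn(u_t, u_tp1):
    z_t, z_tp1 = enc(u_t), enc(u_tp1)
    recon = ((dec(z_t) - u_t) ** 2).mean()          # autoencoder reconstruction
    linear = ((z_t @ K.T - z_tp1) ** 2).mean()      # dynamics are linear in z
    pred = ((dec(z_t @ K.T) - u_tp1) ** 2).mean()   # one-step state prediction
    return recon + linear + pred

for it in range(1000):
    u_t = torch.randn(32, n)                        # placeholder snapshot pairs
    u_tp1 = torch.roll(u_t, 1, dims=1)              # stand-in linear "dynamics" (a shift)
    loss = loss_fn(u_t, u_tp1)
    opt.zero_grad(); loss.backward(); opt.step()

def predict(u0, steps):                             # multi-step: decode(K^m z) via repeated K
    z = enc(u0)
    for _ in range(steps):
        z = z @ K.T
    return dec(z)
```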