From 754d79b0612e4cf81ce798aabfbd8d027ec79c71 Mon Sep 17 00:00:00 2001
From: NT
Date: Fri, 4 Mar 2022 02:21:23 +0100
Subject: [PATCH] fixed Burgers eq, thanks to YuSha

---
 physicalloss.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/physicalloss.md b/physicalloss.md
index f30aa92..b0e80ec 100644
--- a/physicalloss.md
+++ b/physicalloss.md
@@ -137,7 +137,7 @@ Due to the lack of explicit spatial sampling points, an MLP, i.e., fully-connect
 
 To pick a simple example, Burgers equation in 1D,
 $\frac{\partial u}{\partial{t}} + u \nabla u = \nu \nabla \cdot \nabla u $ , we can directly
-formulate a loss term $R = \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} - \nu \frac{\partial^2 u}{\partial x^2} u$ that should be minimized as much as possible at training time. For each of the terms, e.g. $\frac{\partial u}{\partial x}$,
+formulate a loss term $R = \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} - \nu \frac{\partial^2 u}{\partial x^2}$ that should be minimized as much as possible at training time. For each of the terms, e.g. $\frac{\partial u}{\partial x}$,
 we can simply query the DL framework that realizes $u$ to obtain the corresponding derivative. For higher order derivatives, such as $\frac{\partial^2 u}{\partial x^2}$,
 we can simply query the derivative function of the framework multiple times. In the following section, we'll give a specific example of how that works in tensorflow.
 
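The corrected residual in this patch, $R = \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} - \nu \frac{\partial^2 u}{\partial x^2}$, can be evaluated exactly as the text describes: query the framework's derivative function once per first-order term, and twice for the second-order term. A minimal sketch using JAX autodiff (the chapter's own example uses TensorFlow; the closed-form `u` below is just a hypothetical stand-in for the trained MLP $u(t, x)$):

```python
import jax

# Placeholder for the trained network u(t, x); any differentiable
# scalar function of (t, x) works for illustration.
def u(t, x):
    return jax.numpy.sin(x) * jax.numpy.exp(-t)

nu = 0.01  # viscosity (assumed value for this sketch)

# Query the framework for each derivative term of the residual.
u_t = jax.grad(u, argnums=0)                  # du/dt
u_x = jax.grad(u, argnums=1)                  # du/dx
u_xx = jax.grad(jax.grad(u, argnums=1), 1)    # d2u/dx2, grad queried twice

def residual(t, x):
    """Burgers residual R = u_t + u * u_x - nu * u_xx at a point."""
    return u_t(t, x) + u(t, x) * u_x(t, x) - nu * u_xx(t, x)

print(residual(0.5, 1.0))
```

At training time this residual would be sampled at many collocation points $(t, x)$ and its squared magnitude minimized alongside any data terms.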