formatting updates
parent f533fd7a4f
commit 05d2783759
@@ -19,6 +19,7 @@
"\n",
"\n",
"Here $\\mathbf{g}$ collects the forcing terms. Below we'll use a simple buoyancy model. We'll solve this PDE on a closed domain with Dirchlet boundary conditions $\\mathbf{u}=0$ for the velocity, and Neumann boundaries $\\frac{\\partial p}{\\partial x}=0$ for pressure, on a domain $\\Omega$ with a physical size of $100 \\times 80$ units. \n",
"[[run in colab]](https://colab.research.google.com/github/tum-pbs/pbdl-book/blob/main/overview-ns-forw.ipynb)\n",
"\n"
]
},
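As an aside, the actual notebook relies on a fluid solver library for this setup; the following is only a minimal NumPy sketch (an assumption, not the notebook's code) of how the stated Dirichlet velocity and Neumann pressure conditions could be enforced on a discretized $100 \times 80$ grid.

```python
import numpy as np

# Assumed resolution: one cell per unit of the 100 x 80 domain (not taken from the notebook)
nx, ny = 100, 80
u = np.zeros((ny, nx + 1))  # staggered x-velocity on vertical cell faces
v = np.zeros((ny + 1, nx))  # staggered y-velocity on horizontal cell faces
p = np.zeros((ny, nx))      # pressure at cell centers

def apply_boundary_conditions(u, v, p):
    """Dirichlet u=0 on all walls, zero normal pressure gradient (Neumann)."""
    u[:, 0] = u[:, -1] = 0.0   # closed left/right walls
    u[0, :] = u[-1, :] = 0.0   # closed bottom/top walls
    v[:, 0] = v[:, -1] = 0.0
    v[0, :] = v[-1, :] = 0.0
    # Neumann: copy the neighboring interior values so the boundary gradient vanishes
    p[:, 0], p[:, -1] = p[:, 1], p[:, -2]
    p[0, :], p[-1, :] = p[1, :], p[-2, :]
    return u, v, p
```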
@@ -20,18 +20,21 @@ We can improve this setting by trying to bring the model equations (or parts the
into the training process. E.g., given a PDE for $\mathbf{u}(\mathbf{x},t)$ with a time evolution,
we can typically express it in terms of a function $\mathcal F$ of the derivatives
of $\mathbf{u}$ via
$
$$
\mathbf{u}_t = \mathcal F ( \mathbf{u}_{x}, \mathbf{u}_{xx}, ... \mathbf{u}_{xx...x} ) ,
$
$$
where the $_{\mathbf{x}}$ subscripts denote spatial derivatives with respect to one of the spatial dimensions
of higher and higher order (this can of course also include derivatives with respect to different axes).
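For instance, for a 1D Burgers-type equation the right-hand side takes the concrete form

$$
\mathcal F ( \mathbf{u}_{x}, \mathbf{u}_{xx} ) = - \mathbf{u} \, \mathbf{u}_{x} + \nu \, \mathbf{u}_{xx} ,
$$

with a viscosity constant $\nu$.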
In this context we can employ DL by approximating the unknown $\mathbf{u}$ itself
with a NN, denoted by $\tilde{\mathbf{u}}$. If the approximation is accurate, the PDE
naturally should be satisfied, i.e., the residual $R$ should be equal to zero:
$
R = \mathbf{u}_t - \mathcal F ( \mathbf{u}_{x}, \mathbf{u}_{xx}, ... \mathbf{u}_{xx...x} ) = 0
$.
$$
R = \mathbf{u}_t - \mathcal F ( \mathbf{u}_{x}, \mathbf{u}_{xx}, ... \mathbf{u}_{xx...x} ) = 0 .
$$
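To make this concrete, below is a minimal sketch of evaluating such a residual with automatic differentiation. It assumes a 1D Burgers-type $\mathcal F$ and a small fully-connected network for $\tilde{\mathbf{u}}(x,t)$; the architecture, viscosity and the use of PyTorch are illustrative choices rather than the reference implementation of this book.

```python
import torch

# Placeholder network approximating u(x, t); size is an arbitrary illustrative choice
u_tilde = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(x, t, nu=0.01):
    """R = u_t - F(u_x, u_xx) with an assumed Burgers-type F = -u u_x + nu u_xx."""
    xt = torch.stack([x, t], dim=-1).requires_grad_(True)
    u = u_tilde(xt).squeeze(-1)
    # first derivatives of u w.r.t. x and t via autodiff
    du = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = du[..., 0], du[..., 1]
    # second spatial derivative u_xx
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][..., 0]
    return u_t - (-u * u_x + nu * u_xx)
```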
This nicely integrates with the objective for training a neural network: similar to before
we can collect sample solutions
@@ -42,7 +45,9 @@ get solutions with random offset or other undesirable components. Hence the supe
help to _pin down_ the solution in certain places.
Now our training objective becomes
$\text{arg min}_{\theta} \ \alpha_0 \sum_i (f(x_i ; \theta)-y_i)^2 + \alpha_1 R(x_i) $,
$$
\text{arg min}_{\theta} \ \alpha_0 \sum_i (f(x_i ; \theta)-y_i)^2 + \alpha_1 R(x_i) ,
$$ (physloss-training)
where $\alpha_{0,1}$ denote hyperparameters that scale the contribution of the supervised term and
the residual term, respectively. We could of course add additional residual terms with suitable scaling factors here.
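Reusing the `pde_residual()` sketch from above, this combined objective could be assembled roughly as follows; the residual is penalized in a mean-squared sense here, and the $\alpha$ values and the split between labeled samples and collocation points are again assumptions for illustration:

```python
def training_loss(x_d, t_d, y_d, x_c, t_c, alpha_0=1.0, alpha_1=1.0):
    """alpha_0 * supervised data term + alpha_1 * PDE residual term."""
    # supervised term on the sample solutions (x_i, y_i)
    pred = u_tilde(torch.stack([x_d, t_d], dim=-1)).squeeze(-1)
    supervised = torch.mean((pred - y_d) ** 2)
    # residual term at collocation points, via pde_residual() from the sketch above
    residual = torch.mean(pde_residual(x_c, t_c) ** 2)
    return alpha_0 * supervised + alpha_1 * residual
```

In practice the two terms often differ in magnitude, so $\alpha_{0,1}$ typically need to be tuned (or the terms normalized) to keep either contribution from dominating.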
@@ -114,7 +114,7 @@
}
@article{thuerey2020deepFlowPred,
@article{thuerey2020dfp,
title={Deep learning methods for Reynolds-averaged Navier--Stokes simulations of airfoil flows},
author={Thuerey, Nils and Weissenow, Konstantin and Prantl, Lukas and Hu, Xiangyu},
journal={AIAA Journal}, year={2020},
@@ -123,6 +123,7 @@
url={https://ge.in.tum.de/publications/2018-deep-flow-pred/},
}
@article{prantl2019rtliq,
title ={{Generating Liquid Simulations with Deformation-Aware Neural Networks}},
author={Lukas Prantl and Boris Bonev and Nils Thuerey},