sol code
commit 57938d7abe (parent 6210fede2c)
@@ -39,4 +39,5 @@ The following table summarizes these findings:
In summary, both methods are certainly interesting and leave plenty of room for improvement, e.g., via more sophisticated extensions and algorithmic modifications that address the respective drawbacks we have discussed for each side.
-However, as of this writing, the physics-informed (PI) approach has clear limitations when it comes to performance and campatibility with existing numerical methods. Thus, when knowledge of the problem at hand is available, which typically is the case when we choose a suitable PDE model to constrain the learning process, employing a differentiable physics (DP) solver can significantly improve the training process as well as the quality of the obtained solution. Next, we will target a more setting, i.e., fluids with Navier Stokes, to illustrate this behavior.
+However, as of this writing, the physics-informed (PI) approach has clear limitations when it comes to performance and compatibility with existing numerical methods. Thus, when knowledge of the problem at hand is available, which typically is the case when we choose a suitable PDE model to constrain the learning process, employing a differentiable physics (DP) solver can significantly improve the training process as well as the quality of the obtained solution. Next, we will target more complex settings, i.e., fluids with the Navier-Stokes equations, to illustrate this in more detail.
diffphys-outlook.md (new file, 13 lines)
@@ -0,0 +1,13 @@
Outlook
=======================
TODO
hybrid methods!
We demonstrate that neural networks can be successfully trained if they can interact with the respective PDE solver during training. To achieve this, we leverage differentiable simulations [1, 68]. Differentiable simulations allow a trained model to autonomously explore and experience the physical environment and receive directed feedback regarding its interactions throughout the solver iterations. Hence, our work fits into the broader context of machine learning as differentiable programming, and we specifically target recurrent interactions of highly non-linear PDEs with deep neural networks.
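As a minimal sketch of this mechanism (assuming PyTorch; the diffusion-style `solver_step`, the network size, and the random data below are illustrative placeholders, not the actual setup from the text):

```python
import torch

# Toy "solver": one explicit diffusion step on a periodic 1D grid.
# Written in differentiable tensor ops, so gradients can flow through
# arbitrarily many chained solver calls.
def solver_step(u, nu=0.1):
    return u + nu * (torch.roll(u, 1) - 2.0 * u + torch.roll(u, -1))

net = torch.nn.Sequential(
    torch.nn.Linear(64, 64), torch.nn.Tanh(), torch.nn.Linear(64, 64))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

u0 = torch.rand(64)     # placeholder initial state
u_ref = torch.rand(64)  # placeholder reference solution

for epoch in range(100):
    u = u0
    for _ in range(8):      # unrolled solver iterations during training
        u = solver_step(u)  # the network interacts with the solver...
        u = u + net(u)      # ...and corrects the state inside the loop
    loss = torch.mean((u - u_ref) ** 2)
    opt.zero_grad()
    loss.backward()         # feedback traverses all solver iterations
    opt.step()
```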
This combination bears particular promise: it improves the generalizing capabilities of the trained models by letting the PDE solver handle large-scale changes to the data distribution, such that the learned model can focus on localized structures that are not captured by the discretization. While physical models generalize very well, learned models often specialize in the data distributions seen at training time. We will show, however, that by combining PDE-based solvers with a learned model we can arrive at hybrid methods that yield improved accuracy while handling solution manifolds with significant amounts of varying physical behavior.
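A corresponding sketch of the resulting hybrid at inference time, reusing the placeholder `solver_step` and `net` from above:

```python
def hybrid_rollout(u, steps=100):
    # Alternate solver updates with learned corrections, as in training.
    with torch.no_grad():
        for _ in range(steps):
            u = solver_step(u)  # physics handles large-scale behavior
            u = u + net(u)      # network refines localized structures
    return u
```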
intro.md (9 lines changed)
@@ -102,6 +102,8 @@ Loose collection of notes and TODOs:
General physics & dl , intro & textual overview
- Intro phys loss example, notebook patrick
Supervised? Airfoils? Liwei, simple example? app: optimization, shape opt w surrogates
- AIAA supervised learning , idp_weissenov/201019-upd-arxiv-v2/ {cite}`thuerey2020deepFlowPred`
@@ -120,9 +122,10 @@ vs. PINNs [alt.: neural ODEs , PDE net?] , all using GD (optional, PINNs could u
Diff phys, start with overview of idea: gradients via autodiff, then run GD
(TODO include squared func Patrick?)
+- illustrate and discuss gradients -> mult. for chain rule; (later: more general PG chain w func composition)
- Differentiable Physics (w/o network) , {cite}`holl2019pdecontrol`
-> phiflow colab notebook good start, but needs updates (see above Jan2)
-illustrate and discuss gradients -> mult. for chain rule; (later: more general PG chain w func composition)
- SOL_201019-finals_Solver-in-the-Loop-Main-final.pdf , {cite}`um2020sol`
numerical errors, how to include in jupyter / colab?
@@ -136,10 +139,10 @@ beyond GD: re-cap newton & co
Phys grad (PGs) as fundamental improvement, PNAS case; add more complex one?
PG update of poisson eq? see PNAS-template-main.tex.bak01-poissonUpdate , explicitly lists GD and PG updates
-PGa 2020 Sept, content: ML & opt
+- PGa 2020 Sept, content: ML & opt
Gradients.pdf, -> overleaf-physgrad/
-PGb 201002-beforeVac, content: v1,v2,old - more PG focused
+- PGb 201002-beforeVac, content: v1,v2,old - more PG focused
-> general intro versions
TODO, for version 2.x add:
@@ -8,10 +8,11 @@
| $A$ | matrix |
| $\eta$ | learning rate or step size |
| $\Gamma$ | boundary of computational domain $\Omega$ |
-| $f()$ | approximated version of $f^{*}$ |
| $f^{*}()$ | generic function to be approximated, typically unknown |
+| $f()$ | approximate version of $f^{*}$ |
| $\Omega$ | computational domain |
-| $\mathcal P$ | physical model, PDE |
+| $\mathcal P^*$ | continuous/ideal physical model |
+| $\mathcal P$ | discretized physical model, PDE |
| $\theta$ | neural network params |
| $t$ | time dimension |
| $\mathbf{u}$ | vector-valued velocity |
@@ -2,11 +2,21 @@ Model Equations
============================
overview of PDE models to be used later on ...
continuous PDE $\mathcal P^*$
domain $\Omega$, boundary $\Gamma$
$\vx \in \Omega \subseteq \mathbb{R}^d$
for the domain $\Omega$ in $d$ dimensions,
time $t \in \mathbb{R}^{+}$.
boundary $\Gamma$ , for specifying BCs
continuous functions, but few assumptions about continuity for now...
Numerical methods yield approximations of a smooth function such as $\mathcal P^*$ via discretization, and they invariably introduce errors. These errors can be measured in terms of the deviation from the exact analytical solution; for discrete simulations of PDEs, they are typically expressed as a function of the truncation error $O( \Delta x^k )$, where $\Delta x$ denotes the step size of the discretization.
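As a standard example of where such an estimate comes from, Taylor-expanding a forward difference of a smooth function $u$ gives

$$ \frac{u(x+\Delta x) - u(x)}{\Delta x} = u'(x) + \frac{\Delta x}{2} u''(x) + O(\Delta x^2) , $$

i.e., this approximation is first-order accurate with $k=1$.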
```{admonition} Notation and abbreviations
:class: seealso
If unsure, please check the summary of our mathematical notation
@@ -146,10 +146,6 @@ learn via GD, $\partial f / \partial \theta$
general goal, minimize E for e(x,y) ... cf. eq. 8.1 from DLbook
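For reference, eq. 8.1 in the cited Deep Learning book is the expected loss over the data distribution:

$$ J(\theta) = \mathbb{E}_{(x,y) \sim \hat{p}_{\rm data}} \, L\big( f(x; \theta), y \big) $$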
-$$
-test \~ \approx eq \ \RR
-$$
introduce scalar loss, always(!) scalar...
(also called *cost* or *objective* function)
@@ -164,3 +160,4 @@ we only deal with _regression_ problems in the following.
maximum likelihood estimation