Compare commits
12 Commits: 1c7c5b107f ... 025fbc4559
| Author | SHA1 | Date |
|---|---|---|
| | 025fbc4559 | |
| | 46afdf334a | |
| | e9e2620713 | |
| | 3165a15270 | |
| | a8e027c5db | |
| | 370ee58d4b | |
| | 3a52104c71 | |
| | e3125e0fa4 | |
| | 413d682d83 | |
| | d71def7084 | |
| | 817a5909b8 | |
| | 0fbe4f10bd | |
@@ -626,7 +626,7 @@
 "This example illustrated how the differentiable physics approach can easily be extended towards significantly more \n",
 "complex PDEs. Above, we've optimized for a mini-batch of 20 steps of a full Navier-Stokes solver.\n",
 "\n",
-"This is a powerful basis to bring NNs into the picture. As you might have noticed, our degrees of freedom were still a regular grid, and we've jointly solved a single inverse problem. There were three cases to solve as a mini-batch, of course, but nonetheless the setup still represents a direct optimization. Thus, in line with the PINN example of {doc}`physicalloss-code` we've not really dealt with a _machine learning_ task here. However, DP training allows for a range of flexible compinations with NNs that will be the topic of the next chapters.\n"
+"This is a powerful basis to bring NNs into the picture. As you might have noticed, our degrees of freedom were still a regular grid, and we've jointly solved a single inverse problem. There were three cases to solve as a mini-batch, of course, but nonetheless the setup still represents a direct optimization. Thus, in line with the PINN example of {doc}`physicalloss-code` we've not really dealt with a _machine learning_ task here. However, DP training allows for a range of flexible combinations with NNs that will be the topic of the next chapters.\n"
 ]
 },
 {
@@ -158,7 +158,7 @@
 "id": "RY1F4kdWPLNG"
 },
 "source": [
-"Also let's get installing / importing all the necessary libraries out of the way. And while we're at it, we set the random seed - obviously, 42 is the ultimate choice here \ud83d\ude42"
+"Also let's get installing / importing all the necessary libraries out of the way. The cell below by default contains a fix for the current colab version. If you're happy with your local install, make sure to disable the TensorFlow version change. And while we're at it, we also set the random seed - obviously, 42 is the ultimate choice here \ud83d\ude42"
 ]
 },
 {
@@ -173,6 +173,8 @@
 },
 "outputs": [],
 "source": [
+"!yes|pip uninstall tensorflow\n",
+"!pip install --upgrade --quiet tensorflow==2.9 # for colab ignore msgs, should work despite these\n",
 "!pip install --upgrade --quiet phiflow==2.2\n",
 "from phi.tf.flow import *\n",
 "import tensorflow as tf\n",
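As a brief aside on the seed remark above: a minimal sketch (plain Python, not the notebook's actual cell, with a hypothetical `noisy_sample` helper) of why fixing the random seed makes runs reproducible:

```python
import random

def noisy_sample(seed):
    # fixing the seed pins the generator state, so the "random"
    # draws are identical from run to run
    random.seed(seed)
    return [random.random() for _ in range(3)]

# two runs with seed 42 produce exactly the same draws
assert noisy_sample(42) == noisy_sample(42)
```

The same idea carries over to `numpy.random.seed` and `tf.random.set_seed` in the notebook's actual setup cell.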
make-pdf.sh
@@ -1,6 +1,7 @@
 # source this file with "." in a shell
 
 # note this script assumes the following paths/versions: python3.7 , /Users/thuerey/Library/Python/3.7/bin/jupyter-book
+# updated for nMBA !
 
 # do clean git checkout for changes from json-cleanup-for-pdf.py via:
 # git checkout diffphys-code-burgers.ipynb diffphys-code-ns.ipynb diffphys-code-sol.ipynb physicalloss-code.ipynb bayesian-code.ipynb supervised-airfoils.ipynb reinflearn-code.ipynb physgrad-code.ipynb physgrad-comparison.ipynb physgrad-hig-code.ipynb
@@ -9,13 +10,17 @@ echo
 echo WARNING - still requires one manual quit of first pdf/latex pass, use shift-x to quit
 echo
 
-PYT=python3.7
+PYT=python3
 
 # warning - modifies notebooks!
-python3.7 json-cleanup-for-pdf.py
+${PYT} json-cleanup-for-pdf.py
 
 # clean / remove _build dir ?
 
 # GEN!
-/Users/thuerey/Library/Python/3.7/bin/jupyter-book build . --builder pdflatex
+#/Users/thuerey/Library/Python/3.7/bin/jupyter-book build . --builder pdflatex
+/Users/thuerey/Library/Python/3.9/bin/jupyter-book build . --builder pdflatex
 
 cd _build/latex
 #mv book.pdf book-xetex.pdf # not necessary, failed anyway
@@ -29,7 +34,7 @@ mv book.aux book-in.aux
 mv book.toc book-in.toc
 #mv sphinxmanual.cls sphinxmanual-in.cls
 
-python3.7 ../../fixup-latex.py
+${PYT} ../../fixup-latex.py
 # reads book-in.tex -> writes book-in2.tex
 
 # remove unicode chars via unix iconv
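The script change above replaces hard-coded `python3.7` calls with a `PYT` variable, so the interpreter is chosen in one place. A generic sketch of that pattern (not the actual `make-pdf.sh`):

```shell
# choose the interpreter once near the top of the script;
# every later invocation goes through the variable, so switching
# Python versions is a one-line change
PYT=python3

${PYT} -c 'print(2 + 2)'
```

Any later line that needs Python (`${PYT} json-cleanup-for-pdf.py`, `${PYT} ../../fixup-latex.py`) then follows the version change automatically.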
@@ -63,7 +63,7 @@ $$f(x+\Delta) = f(x) + \int_0^1 \text{d}s ~ f'(x+s \Delta) \Delta \ . $$
 
 In addition, we'll make use of Lipschitz-continuity with constant $\mathcal L$:
 $|f(x+\Delta) + f(x)|\le \mathcal L \Delta$, and the well-known Cauchy-Schwartz inequality:
-$ u^T v < |u| \cdot |v| $.
+$ u^T v \le |u| \cdot |v| $.
 
 ## Newton's method
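The change above corrects the Cauchy-Schwarz inequality from strict `<` to `<=` (equality holds for parallel vectors). A small sketch (pure Python, randomly sampled vectors) checking $u^T v \le |u| \cdot |v|$ numerically:

```python
import math
import random

def dot(u, v):
    # inner product u^T v
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    # Euclidean norm |u|
    return math.sqrt(dot(u, u))

def cauchy_schwarz_holds(u, v, eps=1e-12):
    # checks u^T v <= |u| * |v|, with a tiny tolerance for float error
    return dot(u, v) <= norm(u) * norm(v) + eps

random.seed(0)
for _ in range(1000):
    u = [random.uniform(-1.0, 1.0) for _ in range(5)]
    v = [random.uniform(-1.0, 1.0) for _ in range(5)]
    assert cauchy_schwarz_holds(u, v)
```

Note that a parallel pair such as `u = v = [1.0, 2.0]` meets the bound with equality, which is exactly why the strict `<` was wrong.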
@@ -564,7 +564,7 @@
 ],
 "source": [
 "plt = vis.plot(x_gd.values.batch[0], x_sip.values.batch[0], data.values.batch[0], size=(15,4) )\n",
-"plt.get_axes()[0].set_title(\"Adam\"); plt.get_axes()[1].set_title(\"SIP\"); plt.get_axes()[2].set_title(\"Reference\");\n"
+"plt.axes[0].title=\"Adam\"; plt.axes[1].title=\"SIP\"; plt.axes[2].title=\"Reference\";\n"
 ]
 },
 {
@@ -90,7 +90,7 @@
 "\n",
 "Next, we set up a simple NN with 8 fully connected layers and `tanh` activations with 20 units each. \n",
 "\n",
-"We'll also define the `boundary_tx` function which gives an array of constraints for the solution (all for $=0.5$ in this example), and the `open_boundary` function which stores constraints for $x= \\pm1$ being 0."
+"We'll also define the `boundary_tx` function which gives an array of constraints for the solution (all for $t=0.5$ in this example), and the `open_boundary` function which stores constraints for $x= \\pm1$ being 0."
 ]
 },
 {
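To make the corrected description concrete: a hypothetical sketch (plain Python, not the notebook's actual `boundary_tx` / `open_boundary` implementations) of what such constraint generators might look like, sampling points along $t=0.5$ and on the open boundaries $x = \pm 1$:

```python
def boundary_tx(n):
    # hypothetical: n constraint points along the line t = 0.5,
    # with x sampled uniformly in [-1, 1]
    return [(-1.0 + 2.0 * i / (n - 1), 0.5) for i in range(n)]

def open_boundary(n):
    # hypothetical: constraint points on the open boundaries x = -1 and
    # x = +1 (where the solution is constrained to 0), t in [0, 1]
    ts = [i / (n - 1) for i in range(n)]
    return [(x, t) for x in (-1.0, 1.0) for t in ts]

# three constraint points along t = 0.5
assert boundary_tx(3) == [(-1.0, 0.5), (0.0, 0.5), (1.0, 0.5)]
```

The notebook's real functions return phiflow tensors rather than Python tuples, but the sampling pattern is the same idea.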
@@ -13,6 +13,69 @@
 @STRING{NeurIPS = "Advances in Neural Information Processing Systems"}
 
 
+@inproceedings{liu2024airfoils,
+title={Uncertainty-aware Surrogate Models for Airfoil Flow Simulations with Denoising Diffusion Probabilistic Models},
+author={Liu, Qiang and Thuerey, Nils},
+booktitle={Journal of the American Institute of Aeronautics and Astronautics},
+year={2024},
+}
+
+@inproceedings{schnell2024sbptt,
+title={Stabilizing Backpropagation Through Time to Learn Complex Physics},
+author={Schnell, Patrick and Thuerey, Nils},
+booktitle={International Conference on Learning Representations},
+year={2024},
+}
+
+@inproceedings{winchenbach2024iclr,
+title={Symmetric Basis Convolutions for Learning Lagrangian Fluid Mechanics},
+author={Rene Winchenbach and Thuerey, Nils},
+booktitle={International Conference on Learning Representations},
+year={2024},
+}
+
+@inproceedings{holl2024phiml,
+title={Phi-ML: Intuitive Scientific Computing with Dimension Types for Jax, PyTorch, TensorFlow and NumPy},
+author={Holl, Philipp and Thuerey, Nils},
+booktitle={Journal of Open Source Software},
+year={2024},
+url={https://joss.theoj.org/papers/10.21105/joss.06171},
+}
+
+
+@article{holzschuh2023smdp,
+title={Solving Inverse Physics Problems with Score Matching},
+author={Benjamin Holzschuh and Simona Vegetti and Thuerey, Nils},
+journal={Advances in Neural Information Processing Systems (NeurIPS)},
+volume={36},
+year={2023}
+}
+
+@inproceedings{franz2023nglobt,
+title={Learning to Estimate Single-View Volumetric Flow Motions without 3D Supervision},
+author={Erik Franz, Barbara Solenthaler, and Thuerey, Nils},
+booktitle={ICLR},
+year={2023},
+url={https://github.com/tum-pbs/Neural-Global-Transport},
+}
+
+@inproceedings{kohl2023volSim,
+title={Learning Similarity Metrics for Volumetric Simulations with Multiscale CNNs},
+author={Kohl, Georg and Chen, Li-Wei and Thuerey, Nils},
+booktitle={AAAI Conference on Artificial Intelligence},
+year={2022}
+url={https://github.com/tum-pbs/VOLSIM},
+}
+
+
+@article{prantl2022guaranteed,
+title={Guaranteed conservation of momentum for learning particle-based fluid dynamics},
+author={Prantl, Lukas and Ummenhofer, Benjamin and Koltun, Vladlen and Thuerey, Nils},
+journal={Advances in Neural Information Processing Systems},
+volume={35},
+year={2022}
+}
+
 @inproceedings{schnell2022hig,
 title={Half-Inverse Gradients for Physical Deep Learning},
 author={Schnell, Patrick and Holl, Philipp and Thuerey, Nils},
@@ -21,11 +84,12 @@
 url={https://github.com/tum-pbs/half-inverse-gradients},
 }
 
 
 @inproceedings{chen2021highacc,
 title={Towards high-accuracy deep learning inference of compressible turbulent flows over aerofoils},
 author={Chen, Li-Wei and Thuerey, Nils},
-booktitle={arXiv},
-year={2021},
+booktitle={Computers and Fluids},
+year={2022},
 url={https://ge.in.tum.de/publications/},
 }
@@ -110,11 +174,13 @@
 url={https://ge.in.tum.de/publications/2020-iclr-holl/},
 }
 
-@inproceedings{holl2021pg,
-title={Physical Gradients and Scale-Invariant Physics for Deep Learning},
+@article{holl2021pg,
+title={Scale-Invariant Physics for Deep Learning},
 author={Holl, Philipp and Koltun, Vladlen and Thuerey, Nils},
-booktitle={arXiv:2109.15048},
-year={2021},
+journal={Advances in Neural Information Processing Systems},
+volume={35},
+pages={5390--5403},
+year={2022},
 url={https://arxiv.org/abs/2109.15048},
 }
Binary file not shown.
@@ -202,7 +202,7 @@
 "source": [
 "Next, let's define a small helper class `DfpDataset` to organize inputs and targets. We'll transfer the corresponding data to the pytorch `DataLoader` class. \n",
 "\n",
-"We also set up some globals to control training parameters, maybe most importantly: the learning rate `LR`, i.e. $\\eta$ from the previous setions. When your training run doesn't converge this is the first parameter to experiment with.\n",
+"We also set up some globals to control training parameters, maybe most importantly: the learning rate `LR`, i.e. $\\eta$ from the previous sections. When your training run doesn't converge this is the first parameter to experiment with.\n",
 "\n",
 "Here, we'll keep it relatively small throughout. (Using _learning rate decay_ would be better, i.e. potentially give an improved convergence, but is omitted here for clarity.) "
 ]
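The notebook text above mentions learning rate decay as a possible improvement over a fixed `LR`. A minimal sketch of one common schedule, exponential decay (plain Python with hypothetical values, not the notebook's code):

```python
def exponential_decay(lr0, gamma, step):
    # lr_t = lr0 * gamma^t: each training step multiplies the
    # previous rate by a fixed factor gamma in (0, 1)
    return lr0 * gamma ** step

# hypothetical base rate 0.001 with decay factor 0.9:
# the rate shrinks monotonically over training
lrs = [exponential_decay(0.001, 0.9, t) for t in range(5)]
assert lrs[0] == 0.001
assert all(a > b for a, b in zip(lrs, lrs[1:]))
```

Optimizer libraries ship equivalents of this (e.g. exponential/step schedulers), so in practice one would plug a schedule object into the optimizer rather than hand-rolling the formula.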