Compare commits

...

12 Commits

Author SHA1 Message Date
Nils Thuerey
025fbc4559 Merge pull request #25 from caic99/patch-1 (Fix typo in Cauchy-Schwarz inequality. Thanks caic99!) 2024-04-29 12:36:18 +02:00
Chun Cai
46afdf334a Fix typo in Cauchy-Schwarz inequality 2024-04-28 16:01:40 +08:00
N_T
e9e2620713 updated bibs 2024-03-11 21:12:54 +01:00
N_T
3165a15270 minor fix for plotting with newer matplotlib versions 2024-02-13 17:18:23 +01:00
N_T
a8e027c5db colab TF version fix for SoL notebook, images 2024-02-13 17:07:21 +01:00
N_T
370ee58d4b fixed typo, and script update 2023-11-14 21:18:45 +01:00
NT
3a52104c71 minor references typo 2023-09-11 09:51:52 +02:00
NT
e3125e0fa4 citation and figure update 2023-08-26 15:21:09 +02:00
NT
413d682d83 Merge branch 'main' of github.com:tum-pbs/pbdl-book into main 2023-04-08 13:01:27 +02:00
NT
d71def7084 fixed typo 2023-04-08 13:01:22 +02:00
Nils Thuerey
817a5909b8 Merge pull request #21 from andrinr/main (Fix typo) 2023-03-20 12:31:46 +01:00
Andrin Rehmann
0fbe4f10bd Fix typo ("setions" instead of "sections") 2023-03-20 11:45:31 +01:00
9 changed files with 91 additions and 18 deletions

View File

@@ -626,7 +626,7 @@
 "This example illustrated how the differentiable physics approach can easily be extended towards significantly more \n",
 "complex PDEs. Above, we've optimized for a mini-batch of 20 steps of a full Navier-Stokes solver.\n",
 "\n",
-"This is a powerful basis to bring NNs into the picture. As you might have noticed, our degrees of freedom were still a regular grid, and we've jointly solved a single inverse problem. There were three cases to solve as a mini-batch, of course, but nonetheless the setup still represents a direct optimization. Thus, in line with the PINN example of {doc}`physicalloss-code` we've not really dealt with a _machine learning_ task here. However, DP training allows for a range of flexible compinations with NNs that will be the topic of the next chapters.\n"
+"This is a powerful basis to bring NNs into the picture. As you might have noticed, our degrees of freedom were still a regular grid, and we've jointly solved a single inverse problem. There were three cases to solve as a mini-batch, of course, but nonetheless the setup still represents a direct optimization. Thus, in line with the PINN example of {doc}`physicalloss-code` we've not really dealt with a _machine learning_ task here. However, DP training allows for a range of flexible combinations with NNs that will be the topic of the next chapters.\n"
 ]
 },
 {
@@ -671,4 +671,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 1
 }

View File

@@ -158,7 +158,7 @@
 "id": "RY1F4kdWPLNG"
 },
 "source": [
-"Also let's get installing / importing all the necessary libraries out of the way. And while we're at it, we set the random seed - obviously, 42 is the ultimate choice here \ud83d\ude42"
+"Also let's get installing / importing all the necessary libraries out of the way. The cell below by default contains a fix for the current colab version. If you're happy with your local install, make sure to disable the TensorFlow version change. And while we're at it, we also set the random seed - obviously, 42 is the ultimate choice here \ud83d\ude42"
 ]
 },
 {
@@ -173,6 +173,8 @@
 },
 "outputs": [],
 "source": [
+"!yes|pip uninstall tensorflow\n",
+"!pip install --upgrade --quiet tensorflow==2.9 # for colab ignore msgs, should work despite these\n",
 "!pip install --upgrade --quiet phiflow==2.2\n",
 "from phi.tf.flow import *\n",
 "import tensorflow as tf\n",

View File

@@ -1,6 +1,7 @@
# source this file with "." in a shell # source this file with "." in a shell
# note this script assumes the following paths/versions: python3.7 , /Users/thuerey/Library/Python/3.7/bin/jupyter-book # note this script assumes the following paths/versions: python3.7 , /Users/thuerey/Library/Python/3.7/bin/jupyter-book
# updated for nMBA !
# do clean git checkout for changes from json-cleanup-for-pdf.py via: # do clean git checkout for changes from json-cleanup-for-pdf.py via:
# git checkout diffphys-code-burgers.ipynb diffphys-code-ns.ipynb diffphys-code-sol.ipynb physicalloss-code.ipynb bayesian-code.ipynb supervised-airfoils.ipynb reinflearn-code.ipynb physgrad-code.ipynb physgrad-comparison.ipynb physgrad-hig-code.ipynb # git checkout diffphys-code-burgers.ipynb diffphys-code-ns.ipynb diffphys-code-sol.ipynb physicalloss-code.ipynb bayesian-code.ipynb supervised-airfoils.ipynb reinflearn-code.ipynb physgrad-code.ipynb physgrad-comparison.ipynb physgrad-hig-code.ipynb
@@ -9,13 +10,17 @@ echo
echo WARNING - still requires one manual quit of first pdf/latex pass, use shift-x to quit echo WARNING - still requires one manual quit of first pdf/latex pass, use shift-x to quit
echo echo
PYT=python3.7
PYT=python3
# warning - modifies notebooks! # warning - modifies notebooks!
python3.7 json-cleanup-for-pdf.py ${PYT} json-cleanup-for-pdf.py
# clean / remove _build dir ? # clean / remove _build dir ?
# GEN! # GEN!
/Users/thuerey/Library/Python/3.7/bin/jupyter-book build . --builder pdflatex #/Users/thuerey/Library/Python/3.7/bin/jupyter-book build . --builder pdflatex
/Users/thuerey/Library/Python/3.9/bin/jupyter-book build . --builder pdflatex
cd _build/latex cd _build/latex
#mv book.pdf book-xetex.pdf # not necessary, failed anyway #mv book.pdf book-xetex.pdf # not necessary, failed anyway
@@ -29,7 +34,7 @@ mv book.aux book-in.aux
mv book.toc book-in.toc mv book.toc book-in.toc
#mv sphinxmanual.cls sphinxmanual-in.cls #mv sphinxmanual.cls sphinxmanual-in.cls
python3.7 ../../fixup-latex.py ${PYT} ../../fixup-latex.py
# reads book-in.tex -> writes book-in2.tex # reads book-in.tex -> writes book-in2.tex
# remove unicode chars via unix iconv # remove unicode chars via unix iconv

View File

@@ -63,7 +63,7 @@ $$f(x+\Delta) = f(x) + \int_0^1 \text{d}s ~ f'(x+s \Delta) \Delta \ . $$
 In addition, we'll make use of Lipschitz-continuity with constant $\mathcal L$:
 $|f(x+\Delta) + f(x)|\le \mathcal L \Delta$, and the well-known Cauchy-Schwartz inequality:
-$ u^T v < |u| \cdot |v| $.
+$ u^T v \le |u| \cdot |v| $.
 ## Newton's method
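The change from `<` to `\le` matters: the Cauchy-Schwarz bound is tight whenever $u$ and $v$ are parallel, so the strict inequality was wrong. A quick numerical check of the corrected statement (a standalone sketch with NumPy; the vectors are arbitrary):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([-2.0, 0.5, 4.0])

# Cauchy-Schwarz: u^T v <= |u| * |v| for any u, v
assert u @ v <= np.linalg.norm(u) * np.linalg.norm(v)

# Equality case: w parallel to u, so a strict "<" would fail here
w = 2.0 * u
lhs = u @ w
rhs = np.linalg.norm(u) * np.linalg.norm(w)
assert np.isclose(lhs, rhs)
```

The second assertion is exactly the case the original `<` ruled out.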

View File

@@ -564,7 +564,7 @@
 ],
 "source": [
 "plt = vis.plot(x_gd.values.batch[0], x_sip.values.batch[0], data.values.batch[0], size=(15,4) )\n",
-"plt.get_axes()[0].set_title(\"Adam\"); plt.get_axes()[1].set_title(\"SIP\"); plt.get_axes()[2].set_title(\"Reference\");\n"
+"plt.axes[0].title=\"Adam\"; plt.axes[1].title=\"SIP\"; plt.axes[2].title=\"Reference\";\n"
 ]
 },
 {
@@ -605,4 +605,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 0
 }
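The replaced line tracks an API change in the plotting wrapper returned by `vis.plot`. With plain matplotlib figures, `Axes.set_title` itself has long been the portable way to label subplots. A small standalone sketch under that assumption (panel names taken from the cell above):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, safe without a display
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 3, figsize=(15, 4))
for ax, name in zip(axes, ["Adam", "SIP", "Reference"]):
    ax.set_title(name)  # stable across matplotlib versions

print(axes[0].get_title())  # → Adam
```

When a wrapper hands back a bare `matplotlib` figure, iterating its `axes` list like this avoids depending on wrapper-specific accessors.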

View File

@@ -90,7 +90,7 @@
 "\n",
 "Next, we set up a simple NN with 8 fully connected layers and `tanh` activations with 20 units each. \n",
 "\n",
-"We'll also define the `boundary_tx` function which gives an array of constraints for the solution (all for $=0.5$ in this example), and the `open_boundary` function which stores constraints for $x= \\pm1$ being 0."
+"We'll also define the `boundary_tx` function which gives an array of constraints for the solution (all for $t=0.5$ in this example), and the `open_boundary` function which stores constraints for $x= \\pm1$ being 0."
 ]
 },
 {

View File

@@ -13,6 +13,69 @@
 @STRING{NeurIPS = "Advances in Neural Information Processing Systems"}
+@inproceedings{liu2024airfoils,
+title={Uncertainty-aware Surrogate Models for Airfoil Flow Simulations with Denoising Diffusion Probabilistic Models},
+author={Liu, Qiang and Thuerey, Nils},
+booktitle={Journal of the American Institute of Aeronautics and Astronautics},
+year={2024},
+}
+@inproceedings{schnell2024sbptt,
+title={Stabilizing Backpropagation Through Time to Learn Complex Physics},
+author={Schnell, Patrick and Thuerey, Nils},
+booktitle={International Conference on Learning Representations},
+year={2024},
+}
+@inproceedings{winchenbach2024iclr,
+title={Symmetric Basis Convolutions for Learning Lagrangian Fluid Mechanics},
+author={Rene Winchenbach and Thuerey, Nils},
+booktitle={International Conference on Learning Representations},
+year={2024},
+}
+@inproceedings{holl2024phiml,
+title={Phi-ML: Intuitive Scientific Computing with Dimension Types for Jax, PyTorch, TensorFlow and NumPy},
+author={Holl, Philipp and Thuerey, Nils},
+booktitle={Journal of Open Source Software},
+year={2024},
+url={https://joss.theoj.org/papers/10.21105/joss.06171},
+}
+@article{holzschuh2023smdp,
+title={Solving Inverse Physics Problems with Score Matching},
+author={Benjamin Holzschuh and Simona Vegetti and Thuerey, Nils},
+journal={Advances in Neural Information Processing Systems (NeurIPS)},
+volume={36},
+year={2023}
+}
+@inproceedings{franz2023nglobt,
+title={Learning to Estimate Single-View Volumetric Flow Motions without 3D Supervision},
+author={Erik Franz and Barbara Solenthaler and Thuerey, Nils},
+booktitle={ICLR},
+year={2023},
+url={https://github.com/tum-pbs/Neural-Global-Transport},
+}
+@inproceedings{kohl2023volSim,
+title={Learning Similarity Metrics for Volumetric Simulations with Multiscale CNNs},
+author={Kohl, Georg and Chen, Li-Wei and Thuerey, Nils},
+booktitle={AAAI Conference on Artificial Intelligence},
+year={2022},
+url={https://github.com/tum-pbs/VOLSIM},
+}
+@article{prantl2022guaranteed,
+title={Guaranteed conservation of momentum for learning particle-based fluid dynamics},
+author={Prantl, Lukas and Ummenhofer, Benjamin and Koltun, Vladlen and Thuerey, Nils},
+journal={Advances in Neural Information Processing Systems},
+volume={35},
+year={2022}
+}
 @inproceedings{schnell2022hig,
 title={Half-Inverse Gradients for Physical Deep Learning},
 author={Schnell, Patrick and Holl, Philipp and Thuerey, Nils},
@@ -21,11 +84,12 @@
 url={https://github.com/tum-pbs/half-inverse-gradients},
 }
 @inproceedings{chen2021highacc,
 title={Towards high-accuracy deep learning inference of compressible turbulent flows over aerofoils},
 author={Chen, Li-Wei and Thuerey, Nils},
-booktitle={arXiv},
+booktitle={Computers and Fluids},
-year={2021},
+year={2022},
 url={https://ge.in.tum.de/publications/},
 }
@@ -110,11 +174,13 @@
 url={https://ge.in.tum.de/publications/2020-iclr-holl/},
 }
-@inproceedings{holl2021pg,
+@article{holl2021pg,
-title={Physical Gradients and Scale-Invariant Physics for Deep Learning},
+title={Scale-Invariant Physics for Deep Learning},
 author={Holl, Philipp and Koltun, Vladlen and Thuerey, Nils},
-booktitle={arXiv:2109.15048},
+journal={Advances in Neural Information Processing Systems},
-year={2021},
+volume={35},
+pages={5390--5403},
+year={2022},
 url={https://arxiv.org/abs/2109.15048},
 }

Binary file not shown.

View File

@@ -202,7 +202,7 @@
 "source": [
 "Next, let's define a small helper class `DfpDataset` to organize inputs and targets. We'll transfer the corresponding data to the pytorch `DataLoader` class. \n",
 "\n",
-"We also set up some globals to control training parameters, maybe most importantly: the learning rate `LR`, i.e. $\\eta$ from the previous setions. When your training run doesn't converge this is the first parameter to experiment with.\n",
+"We also set up some globals to control training parameters, maybe most importantly: the learning rate `LR`, i.e. $\\eta$ from the previous sections. When your training run doesn't converge this is the first parameter to experiment with.\n",
 "\n",
 "Here, we'll keep it relatively small throughout. (Using _learning rate decay_ would be better, i.e. potentially give an improved convergence, but is omitted here for clarity.) "
 ]
 },
@@ -801,4 +801,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 0
 }