fixing PDF output, removing citations in figure captions for now as these are causing problems in the tex output
This commit is contained in:
parent
16e2c13930
commit
7278a04cf1
68
make-pdf.sh
@@ -1,13 +1,11 @@
# source this file with "." in a shell

# note this script assumes the following paths/versions: python3.7 , /Users/thuerey/Library/Python/3.7/bin/jupyter-book
# updated for nMBA !

# do clean git checkout for changes from json-cleanup-for-pdf.py via:
# git checkout diffphys-code-burgers.ipynb diffphys-code-ns.ipynb diffphys-code-sol.ipynb physicalloss-code.ipynb bayesian-code.ipynb supervised-airfoils.ipynb reinflearn-code.ipynb physgrad-code.ipynb physgrad-comparison.ipynb physgrad-hig-code.ipynb

echo
-echo WARNING - still requires one manual quit of first pdf/latex pass, use shift-x to quit
+echo WARNING - still requires one manual quit of first pdf/latex pass, use shift-x to quit, then fix latex
echo

PYT=python3
@@ -18,49 +16,27 @@ ${PYT} json-cleanup-for-pdf.py
# clean / remove _build dir ?

/Users/thuerey/Library/Python/3.9/bin/jupyter-book build . --builder pdflatex
xelatex book

exit # sufficient for newer jupyter book versions

# necessary fixes for jupyter-book 1.0.3
# open book.tex in a text editor:
# problem 1: replace all
#   \begin{align}  with  \begin{aligned}
#   \end{align}    with  \end{aligned}

# problem 2: nested environments of the form
#   \begin{equation*}
#   \begin{split}
#   \begin{equation}   <- replace with aligned
#   ...
#   \end{equation}     <- replace with aligned
#   \end{split}
#   \end{equation*}

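# note: the two replacements for problem 1 could likely be automated; a
# minimal sed sketch is given here (kept commented out, the -i.bak flag and
# the book.tex path are assumptions, not part of the original workflow):
#   sed -i.bak -e 's/\\begin{align}/\\begin{aligned}/g' \
#              -e 's/\\end{align}/\\end{aligned}/g' _build/latex/book.tex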
# manual
#xelatex book
#xelatex book

# unused fixup-latex.py

# old "pre" GEN
#/Users/thuerey/Library/Python/3.7/bin/jupyter-book build . --builder pdflatex
#/Users/thuerey/Library/Python/3.9/bin/jupyter-book build . --builder pdflatex

# old cleanup

cd _build/latex
#mv book.pdf book-xetex.pdf # not necessary, failed anyway
# this generates book.tex

rm -f book-in.tex sphinxmessages-in.sty book-in.aux book-in.toc
# rename book.tex -> book-in.tex (this is the original output!)
mv book.tex book-in.tex
mv sphinxmessages.sty sphinxmessages-in.sty
mv book.aux book-in.aux
mv book.toc book-in.toc
#mv sphinxmanual.cls sphinxmanual-in.cls

${PYT} ../../fixup-latex.py
# reads book-in.tex -> writes book-in2.tex

# remove unicode chars via unix iconv
# reads book-in2.tex -> writes book.tex
iconv -c -f utf-8 -t ascii book-in2.tex > book.tex

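# optional sanity check (a sketch, not part of the original script): verify
# that the iconv pass produced a non-empty book.tex before running pdflatex
#   [ -s book.tex ] || echo "warning: book.tex is empty, iconv step failed"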
# finally run pdflatex, now it should work:
# pdflatex -recorder book
pdflatex book
pdflatex book

# for convenience, archive results in main dir
mv book.pdf ../../pbfl-book-pdflatex.pdf
tar czvf ../../pbdl-latex-for-arxiv.tar.gz *
cd ../..
ls -l ./pbfl-book-pdflatex.pdf ./pbdl-latex-for-arxiv.tar.gz

@@ -163,12 +163,12 @@ This is a highly challenging solution manifold, and requires an extended "cyclic
that pushes the discriminator to take all the physical parameters into account.
Interestingly, the generator learns to produce realistic and accurate solutions despite
being trained purely on data, i.e. without explicit help in the form of a differentiable physics solver setup.
The figure below shows a range of example outputs of a physically-parametrized GAN {cite}`chu2021physgan`.

```{figure} resources/others-GANs-meaningful-fig11.jpg
---
name: others-GANs-meaningful-fig11
---
A range of example outputs of a physically-parametrized GAN {cite}`chu2021physgan`.
The network can successfully extrapolate to buoyancy settings beyond the
range of values seen at training time.
```
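To make this conditioning mechanism concrete, below is a minimal sketch of a generator that receives a physical parameter alongside its latent noise input. All names and layer sizes here are illustrative assumptions, not the architecture of {cite}`chu2021physgan`:

```python
import torch
import torch.nn as nn

class ParamConditionedGenerator(nn.Module):
    """Toy generator: physical parameters are appended to the noise input."""
    def __init__(self, noise_dim=64, param_dim=1, grid=32):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Linear(noise_dim + param_dim, 256),
            nn.ReLU(),
            nn.Linear(256, grid * grid),
            nn.Tanh(),
        )

    def forward(self, z, params):
        # concatenate latent noise and physical parameters (e.g. buoyancy)
        x = torch.cat([z, params], dim=-1)
        return self.net(x).view(-1, 1, self.grid, self.grid)

gen = ParamConditionedGenerator()
z = torch.randn(4, 64)              # batch of latent noise vectors
buoyancy = torch.full((4, 1), 2.5)  # parameter value to condition on
print(gen(z, buoyancy).shape)       # torch.Size([4, 1, 32, 32])
```

Training such a generator against a discriminator that also sees the parameters then encourages outputs that stay physically consistent across the parameter range.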
@@ -1,7 +1,7 @@
Additional Topics
=======================

-The next sections will give a shorter introduction to other topics that are highly
+The next sections will give a shorter introduction to other classic topics that are
interesting in the context of physics-based deep learning. These topics (for now) do
not come with executable notebooks, but we will still point to existing open source
implementations for each of them.
@@ -6,12 +6,13 @@ While this is straight-forward for cases such as data consisting only of integer
for continuously changing quantities such as the temperature in a room.
While the previous examples have focused on aspects beyond discretization
(and used Cartesian grids as a placeholder), the following chapter will target
-scenarios where learning with dynamically changing and adaptive discretization has a benefit.
+scenarios where learning neural operators with dynamically changing
+and adaptive discretizations has a benefit.


## Types of computational meshes

-Generally speaking, we can distinguish three types of computational meshes (or "grids")
+As outlined in {doc}`supervised-arch`, we can distinguish three types of computational meshes (or "grids")
with which discretizations are typically performed:

- **structured** meshes: Structured meshes have a regular
@@ -85,7 +86,6 @@ for the next stage of convolutions. After expanding
the size of the latent space over the course of a few layers, it is contracted again
to produce the desired result, e.g., an acceleration.

-% {cite}`prantl2019tranquil`

## Continuous convolutions
@@ -161,13 +161,14 @@ to reproduce such behavior.
Nonetheless, an interesting side-effect of having a trained NN for such a liquid simulation
is that it by construction provides a differentiable solver. Based on a pre-trained network, the learned solver
then supports optimization via gradient descent, e.g., w.r.t. input parameters such as viscosity
(a minimal sketch of this idea follows below).
The following image shows an exemplary _prediction_ task with continuous convolutions from {cite}`ummenhofer2019contconv`.

```{figure} resources/others-lagrangian-canyon.jpg
---
name: others-lagrangian-canyon
---
An example of a particle-based liquid spreading in a landscape scenario, simulated with
-learned approach using continuous convolutions {cite}`ummenhofer2019contconv`.
+learned, continuous convolutions.
```

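The sketch below illustrates this differentiable-solver property under strong simplifying assumptions: a frozen stand-in network plays the role of the pre-trained solver, and a scalar "viscosity" input is optimized by gradient descent. None of the names correspond to the cited implementation:

```python
import torch

model = torch.nn.Linear(17, 16)     # stand-in for a pre-trained solver step
for p in model.parameters():
    p.requires_grad_(False)         # freeze the network weights

state = torch.randn(16)
target = torch.randn(16)
viscosity = torch.tensor(0.5, requires_grad=True)
opt = torch.optim.Adam([viscosity], lr=1e-2)

for it in range(100):
    s = state
    for _ in range(5):              # unroll a few learned solver steps
        s = model(torch.cat([s, viscosity.view(1)]))
    loss = torch.mean((s - target) ** 2)
    opt.zero_grad()
    loss.backward()                 # gradient flows to viscosity only
    opt.step()
```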
## Source code
@@ -126,6 +126,8 @@ Ideally, this step is furthermore unrolled over time to stabilize the evolution
The resulting training will be significantly more expensive, as more weights need to be trained at once,
and a much larger number of intermediate states needs to be processed. However, the increased
cost typically pays off with a reduced overall inference error.
+The following images show several time frames of an example prediction of {cite}`wiewel2020lsssubdiv`,
+which additionally couples the learned time evolution with a numerically solved advection step.

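As a rough sketch of such an unrolled training step (the linear "model" and the random data are placeholders, not the setup of {cite}`wiewel2020lsssubdiv`):

```python
import torch

def unrolled_loss(model, state, targets):
    """Apply the learned step repeatedly and accumulate the error, so
    gradients flow back through the whole predicted trajectory."""
    loss = 0.0
    for target in targets:
        state = model(state)        # one learned time step
        loss = loss + torch.mean((state - target) ** 2)
    return loss / len(targets)

model = torch.nn.Linear(16, 16)     # stand-in for the learned solver
state = torch.randn(8, 16)          # batch of initial states
targets = [torch.randn(8, 16) for _ in range(4)]
unrolled_loss(model, state, targets).backward()
```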
```{figure} resources/others-timeseries-lss-subdiv-prediction.jpg
@@ -133,8 +135,6 @@ cost typically pays off with a reduced overall inference error.
height: 300px
name: timeseries-lss-subdiv-prediction
---
-Several time frames of an example prediction from {cite}`wiewel2020lsssubdiv`, which additionally couples the
-learned time evolution with a numerically solved advection step.
The learned prediction is shown at the top, the reference simulation at the bottom.
```