updated config and overview

commit 3f3e828236 (parent 8b71d57e05)
_config.yml
@@ -4,11 +4,10 @@
 title: Physics-based Deep Learning
 author: N. Thuerey, P. Holl, M. Mueller, P. Schnell, F. Trost, K. Um
 logo: resources/logo.jpg
 copyright: "2021"
 only_build_toc_files: true

 # Force re-execution of notebooks on each build.
 # See https://jupyterbook.org/content/execute.html
 execute:
   # Whether to execute notebooks at build time. Must be one of ("auto", "force", "cache", "off")
   execute_notebooks: off

 # Define the name of the latex output file for PDF builds
@@ -23,7 +22,6 @@ bibtex_bibfiles:
 # Information about where the book exists on the web
 repository:
   url: https://github.com/tum-pbs/pbdl-book/  # Online location of your book
   #path_to_book: docs  # Optional path to your book, relative to the repository root
   branch: master  # Which branch of the repository should be used when creating links (optional)

 # Add GitHub buttons to your book
@@ -31,3 +29,4 @@ repository:
 html:
   use_issues_button: true
   use_repository_button: true
+  favicon: "favicon.ico"

favicon.ico: new binary file (15 KiB, binary not shown)
intro-teaser.ipynb
@@ -15,7 +15,7 @@
 "source": [
 "Let's start with a very reduced example that highlights some of the key capabilities of physics-based learning approaches. Let's assume our physical model is a very simple equation: a parabola along the positive x-axis.\n",
 "\n",
-"Despite being very simple, for every point along there are two solutions, i.e. we have two modes, one above the other one below the x-axis, as shown on the left below. If we don't take care a conventional learning approach will give us an approximation like the red one shown in the middle, which is obviously completely off. With an improved learning setup, ideally, by using a discretized numerical solver, we can at least accurately represent one of the modes of the solution (shown in green on the right).\n",
+"Despite being very simple, for every point along it there are two solutions, i.e. we have two modes, one above and one below the x-axis, as shown on the left below. If we don't take care, a conventional learning approach will give us an approximation like the red one shown in the middle, which is completely off. With an improved learning setup, ideally by using a discretized numerical solver, we can at least accurately represent one of the modes of the solution (shown in green on the right).\n",
 "\n",
 "```{figure} resources/intro-teaser-side-by-side.png\n",
 "---\n",
@@ -68,7 +68,7 @@
 "To illustrate these two approaches, we consider the following simplified setting: Given the function $\\mathcal P: y\\to y^2$ for $y$ in the interval $[0,1]$, find the unknown function $f$ such that $\\mathcal P(f(x)) = x$ for all $x$ in $[0,1]$. Note: to make things a bit more interesting, we're using $y^2$ here instead of the more common $x^2$ parabola, and the _discretization_ is simply given by representing the $x$ and $y$ via floating point numbers in the computer for this simple case.\n",
 "\n",
 "We know that possible solutions for $f$ are the positive or negative square root function (for completeness: piecewise combinations would also be possible).\n",
-"Knowing that this is not overly difficult, it's an obvious idea to try training a neural network to approximate this inverse mapping $f$.\n",
+"Knowing that this is not overly difficult, a solution that suggests itself is to train a neural network to approximate this inverse mapping $f$.\n",
 "Doing this in the \"classical\" supervised manner, i.e. purely based on data, is an obvious starting point. After all, this approach was shown to be a powerful tool for a variety of other applications, e.g., in computer vision."
 ]
 },
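To make the supervised variant concrete before its results are discussed, here is a minimal sketch (hypothetical code, not the notebook's; the network size, sampling, and training settings are all assumptions): sample $y$ uniformly in $[-1,1]$, compute $x=y^2$, and fit a small fully-connected network to map $x$ back to $y$.

```python
# Hypothetical sketch of the purely data-driven (supervised) setup,
# not the notebook's exact code: learn the inverse of P(y) = y^2 from samples.
import numpy as np
import tensorflow as tf

# training data covering BOTH modes: y in [-1,1], hence x = y^2 in [0,1]
N = 200
y = np.random.uniform(-1.0, 1.0, size=(N, 1)).astype(np.float32)
x = y * y

# small fully-connected network approximating f: x -> y
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="tanh", input_shape=(1,)),
    tf.keras.layers.Dense(16, activation="tanh"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
model.fit(x, y, epochs=500, batch_size=20, verbose=0)

# predictions tend toward the average of +sqrt(x) and -sqrt(x), i.e. ~0
print(model.predict(np.array([[0.25], [0.49], [0.81]], dtype=np.float32)))
```

Because the data contains both the positive and the negative branch, the MSE fit typically collapses toward their average, producing predictions near zero; this is exactly the failure case analyzed further below.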
@@ -387,7 +387,7 @@
 "\n",
 "It's a very simple example, but it very clearly shows a failure case for supervised learning. While it might seem very artificial at first sight, many practical PDEs exhibit a variety of these modes, and it's often not clear where (and how many) modes exist in the solution manifold we're interested in. Using supervised learning is very dangerous in such cases - we might simply and unknowingly _blur_ out these different modes.\n",
 "\n",
-"A good and obvious example are bifurcations in fluid flows. Smoke rising above a candle will start out straight, and then, due to tiny perturbations in its motion, start oscillating in a random direction. The images below illustrate this case via _numerical perturbations_: the perfectly symmetric setup will start turning left or right, depending on how the approximation errors build up. Similarly, we'll have different modes in all our numerical solutions, and typically it's important to recover them, rather than averaging them out. Hence, we'll show how to leverage training via _differentiable physics_ in the following chapters for more practical and complex cases.\n",
+"Good and obvious examples are bifurcations in fluid flow. Smoke rising above a candle will start out straight, and then, due to tiny perturbations in its motion, start oscillating in a random direction. The images below illustrate this case via _numerical perturbations_: the perfectly symmetric setup will start turning left or right, depending on how the approximation errors build up. Similarly, we'll have different modes in all our numerical solutions, and typically it's important to recover them rather than averaging them out. Hence, we'll show how to leverage training via _differentiable physics_ in the following chapters for more practical and complex cases.\n",
 "\n",
 "```{figure} resources/intro-fluid-bifurcation.jpg\n",
 "---\n",
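For contrast, here is a hedged sketch of the differentiable-physics alternative referenced above (again hypothetical, with assumed names and settings): instead of fitting data pairs, the loss penalizes $|\mathcal P(f(x)) - x|^2 = |f(x)^2 - x|^2$, so gradients flow through the forward model and the network settles on one consistent mode.

```python
# Hypothetical sketch of training with a physics-based loss: minimize
# |P(f(x)) - x|^2 with P(y) = y^2; autodiff differentiates through P.
import numpy as np
import tensorflow as tf

x = np.random.uniform(0.0, 1.0, size=(200, 1)).astype(np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="tanh", input_shape=(1,)),
    tf.keras.layers.Dense(16, activation="tanh"),
    tf.keras.layers.Dense(1),
])

def physics_loss(x_true, y_pred):
    # apply the forward model to the network output and compare with x;
    # no precomputed solutions y are needed
    return tf.reduce_mean(tf.square(y_pred * y_pred - x_true))

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss=physics_loss)
model.fit(x, x, epochs=500, batch_size=20, verbose=0)  # "labels" are x itself
```

Which of the two square-root branches the network commits to depends on its random initialization; the point is that it no longer blurs them together.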

overview.md (10 changed lines)
@@ -57,7 +57,7 @@ DL techniques and NNs are novel, sometimes difficult to apply, and
 it is admittedly often non-trivial to properly integrate our understanding
 of physical processes into the learning algorithms.

-Over the course of the last decades,
+Over the last decades,
 highly specialized and accurate discretization schemes have
 been developed to solve fundamental model equations such
 as the Navier-Stokes, Maxwell's, or Schroedinger's equations.
@@ -69,7 +69,7 @@ is highly beneficial for DL to use them as much as possible.

 ```{admonition} Goals of this document
 :class: tip
-The key aspects that we want to address in the following are:
+The key aspects that we will address in the following are:
 - explain how to use deep learning techniques to solve PDE problems,
 - how to combine them with **existing knowledge** of physics,
 - without **discarding** our knowledge about numerical methods.
@@ -93,7 +93,7 @@ that this goal is not overly far away {cite}`um2020sol,kochkov2021`.

 Another way to look at it is that all mathematical models of our nature
 are idealized approximations and contain errors. A lot of effort has been
-made to obtain very good model equations, but in order to make the next
+made to obtain very good model equations, but to make the next
 big step forward, DL methods offer a very powerful tool to close the
 remaining gap towards reality {cite}`akkaya2019solving`.

@@ -131,7 +131,7 @@ techniques:
 an output from a deep neural network; this requires a fully differentiable
 simulator and represents the tightest coupling between the physical system and
 the learning process. Interleaved differentiable physics approaches are especially important for
-temporal evolutions, where they can yield an estimate of future behavior of the
+temporal evolutions, where they can yield an estimate of the future behavior of the
 dynamics.

 Thus, methods can be roughly categorized in terms of forward versus inverse
@@ -173,7 +173,7 @@ give introductions into the differentiable simulation framework _Φ<sub>Flow</sub>_
 these examples, you should have a good overview of what's available in current APIs, such that
 the best one can be selected for new tasks.

-As we're (in most jupyter notebook examples) dealing with stochastic optimizations, many of the following code examples will produce slightly different results each time they're run. This is fairly common with NN training, but it's important to keep in mind when executing the code. It also means that the numbers discussed in the text might not exactly match the numbers you'll see after re-running the examples.
+As we're (in most Jupyter notebook examples) dealing with stochastic optimizations, many of the following code examples will produce slightly different results each time they're run. This is fairly common with NN training, but it's important to keep in mind when executing the code. It also means that the numbers discussed in the text might not exactly match the numbers you'll see after re-running the examples.
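Where comparable numbers across runs matter, the usual remedy is to pin the random seeds before training; a minimal sketch for a NumPy/TensorFlow setup (the exact calls depend on the framework in use, and GPU kernels can still introduce non-determinism):

```python
# Hypothetical sketch: fix seeds so repeated runs produce comparable numbers.
import random
import numpy as np
import tensorflow as tf

random.seed(42)         # Python's built-in RNG
np.random.seed(42)      # NumPy sampling (e.g. training data generation)
tf.random.set_seed(42)  # TensorFlow weight init and shuffling
```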

 ---
 <br>