updated logos

This commit is contained in:
NT 2021-04-11 13:51:16 +08:00
parent 93a3ca9ed8
commit 3ad6eaef0d
7 changed files with 16 additions and 15 deletions

_config.yml

@@ -3,7 +3,7 @@
title: Physics-based Deep Learning
author: TUM-I15
-logo: resources/logo.png
+logo: resources/logo.jpg
# Force re-execution of notebooks on each build.
# See https://jupyterbook.org/content/execute.html

diffphys-discuss.md

@@ -1,16 +1,16 @@
Summary and Discussion
=======================
-The previous sections have explained the differentiable physics approach for deep learning, and have given a range of examples: from a very basic gradient calculation, all the way to complex learning setups powered by simulations. This is a good time to pause and take a step back, to take a look at what we have: in the end, the _differentiable physics_ part is not too complicated. It's largely based on existing numerical methods, with a focus on efficiently using those methods to not only do a forward simulation, but also to compute gradient information. What's more exciting is the combination of these methods with deep learning.
+The previous sections have explained the differentiable physics approach for deep learning, and have given a range of examples: from a very basic gradient calculation, all the way to complex learning setups powered by sophisticated simulations. This is a good time to pause and take a step back, to take a look at what we have: in the end, the _differentiable physics_ component of these approaches is not too complicated. It's largely based on existing numerical methods, with a focus on efficiently using those methods to not only do a forward simulation, but also to compute gradient information. What is primarily exciting in this context are the implications of combining these numerical methods with deep learning.
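To make the gradient part concrete, here is a minimal sketch of differentiating through a forward simulation, assuming PyTorch and a toy 1D diffusion step; all names and constants are illustrative, not code from the book:

```python
import torch

NU, DT, STEPS = 0.1, 0.01, 20   # assumed diffusion coefficient, time step, step count

def diffusion_step(u):
    """One explicit step of 1D diffusion with periodic boundaries."""
    laplacian = torch.roll(u, -1) - 2.0 * u + torch.roll(u, 1)
    return u + DT * NU * laplacian

u0 = torch.randn(64, requires_grad=True)   # initial state we differentiate w.r.t.
u = u0
for _ in range(STEPS):                      # plain forward simulation
    u = diffusion_step(u)

loss = (u ** 2).sum()                       # scalar objective on the final state
loss.backward()                             # gradients flow back through all steps
print(u0.grad.norm())                       # d loss / d u0 via the differentiable sim
```

The forward pass is just a regular solver loop; the backward pass reuses the same numerical operations to provide the gradient information mentioned above.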
## Integration
Most importantly, training via differentiable physics allows us to seamlessly bring the two fields together:
-we can obtain _hybrid_ methods, that use the best numerical methods that we have at our disposal for the simulation itself, as well as for the training process. We can then use the trained model to improve the forward or backward solve. Thus, in the end we have a solver that employs both a _traditional_ solver and a _learned_ component.
+we can obtain _hybrid_ methods that use the best numerical methods we have at our disposal for the simulation itself, as well as for the training process. We can then use the trained model to improve forward or backward solves. Thus, in the end, we have a solver that combines a _traditional_ solver and a _learned_ component.
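As an illustration of what such a hybrid could look like, here is a sketch under the same assumptions as above, with an invented network layout rather than the book's actual solver:

```python
import torch
import torch.nn as nn

class HybridSolver(nn.Module):
    """A cheap numerical step plus a learned correction (illustrative only)."""

    def __init__(self, size=64, dt=0.01, nu=0.1):
        super().__init__()
        self.dt, self.nu = dt, nu
        self.correction = nn.Sequential(      # assumed, small network layout
            nn.Linear(size, 128), nn.Tanh(), nn.Linear(128, size))

    def numerical_step(self, u):
        # the _traditional_ part: one explicit diffusion step, as sketched above
        lap = torch.roll(u, -1) - 2.0 * u + torch.roll(u, 1)
        return u + self.dt * self.nu * lap

    def forward(self, u):
        u = self.numerical_step(u)            # the solver does the bulk of the work
        return u + self.correction(u)         # the _learned_ component refines it
```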
## Interaction
-One key component for these hybrids to work well is to let the NN _interact_ with the PDE solver at training time. Differentiable simulations allow a trained model to explore and experience the physical environment, and receive directed feedback regarding its interactions throughout the solver iterations. This combination nicely fits into the broader context of machine learning as _differentiable programming_.
+A key requirement for these hybrids to work well is to let the NN _interact_ with the PDE solver at training time. Differentiable simulations allow a trained model to "explore and experience" the physical environment, and receive directed feedback regarding its interactions throughout the solver iterations. This combination nicely fits into the broader context of machine learning as _differentiable programming_.
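A minimal training-loop sketch of this interaction, reusing the `HybridSolver` sketch from above; the `reference` helper is a hypothetical stand-in for a more accurate solution:

```python
import torch

def reference(u, substeps=100, dt=1e-3, nu=0.1):
    # hypothetical stand-in for an accurate target: many small explicit steps
    for _ in range(substeps):
        u = u + dt * nu * (torch.roll(u, -1) - 2.0 * u + torch.roll(u, 1))
    return u

solver = HybridSolver()
opt = torch.optim.Adam(solver.parameters(), lr=1e-3)

for it in range(200):
    u0 = torch.randn(64)               # a new physical state each iteration
    target = reference(u0)             # same physical end time as 10 hybrid steps
    u = u0
    for _ in range(10):                # the NN acts inside the unrolled solver loop
        u = solver(u)
    loss = ((u - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()                    # directed feedback through every solver step
    opt.step()
```

Because every solver call is differentiable, the network receives feedback on how its corrections influence all later solver iterations, not just the current one.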
## Generalization
@@ -18,5 +18,8 @@ The hybrid approach also bears particular promise for simulators: it improves ge
---
-Despite being a very powerful method, the DP approach is clearly not the end of the line. In the next chapters we'll consider further improvements and extensions.
+Training NNs via differentiable physics solvers, i.e., what we've described as "DP" in the previous
+sections, is a very generic approach that is applicable to a wide range of combinations of PDE-based models
+and deep learning. Nonetheless, the next chapters will discuss several variants that are orthogonal
+to the general DP version, or can yield benefits in more specialized settings.

intro.md

@@ -1,7 +1,7 @@
Welcome ...
============================
-Welcome to the Physics-based Deep Learning Book 👋
+Welcome to the _Physics-based Deep Learning Book_ 👋
**TL;DR**:
This document targets a variety of combinations of physical simulations with deep learning.
@@ -10,25 +10,25 @@ Beyond standard _supervised_ learning from data, we'll look at _physical loss_ c
more tightly coupled learning algorithms with _differentiable simulations_.
-```{figure} resources/teaser.png
+```{figure} resources/teaser.jpg
---
height: 220px
name: pbdl-teaser
---
-Some visual examples of hybrid solvers, i.e. numerical simulators that are enhanced by trained neural networks.
+Some visual examples of numerically simulated time sequences. In this book, we aim for algorithms that use neural networks alongside numerical solvers.
```
% Teaser, simple version:
-% ![Teaser, simple version](resources/teaser.png)
+% ![Teaser, simple version](resources/teaser.jpg)
## Coming up
-As a _sneak preview_, in the next chapters we'll show:
+As a _sneak preview_, the next chapters will show:
- How to train networks to infer fluid flows around shapes like airfoils in one go, i.e., a _surrogate model_ that replaces a traditional numerical simulation.
-- We'll show how to use model equations as residual to train networks that represent solutions, and how to improve upon these residual constraints by using _differentiable simulations_.
+- How to use model equations as residuals to train networks that represent solutions (a small sketch follows after this list), and how to improve upon these residual constraints by using _differentiable simulations_.
-- How to more tightly interact with a full simulator for _control problems_. E.g., we'll demonstrate how to circumvent the convergence problems of standard reinforcement learning techniques by leveraging simulators in the training loop.
+- How to more tightly interact with a full simulator for _inverse problems_. E.g., we'll demonstrate how to circumvent the convergence problems of standard reinforcement learning techniques by leveraging simulators in the training loop.
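As a rough sketch of the residual idea from the second bullet, a network representing a solution u(x) can be trained so that the residual of an assumed toy problem, u''(x) = -sin(pi x) with u(0) = u(1) = 0, vanishes at sampled points; PyTorch and all names here are illustrative assumptions, not the book's code:

```python
import math
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # u(x) ~ net(x)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for it in range(2000):
    x = torch.rand(128, 1, requires_grad=True)    # collocation points in [0,1]
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u + torch.sin(math.pi * x)       # residual of u''(x) = -sin(pi x)
    bc = net(torch.zeros(1, 1)) ** 2 + net(torch.ones(1, 1)) ** 2  # u(0)=u(1)=0
    loss = (residual ** 2).mean() + bc.sum()      # PDE residual plus boundary terms
    opt.zero_grad()
    loss.backward()
    opt.step()
```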
This _book_, where "book" stands for a collection of texts, equations, images and code examples,
is maintained by the
@ -44,7 +44,7 @@ We also maintain a [link collection](https://github.com/thunil/Physics-Based-Dee
```{admonition} Executable code, right here, right now
:class: tip
We focus on jupyter notebooks, a key advantage of which is that all code examples
-can be executed _on the spot_, out of a browser. You can modify things and
+can be executed _on the spot_, right in your browser. You can modify things and
immediately see what happens -- give it a try...
<br><br>
Oh, and it's great because it's [literate programming](https://en.wikipedia.org/wiki/Literate_programming).
@@ -58,7 +58,6 @@ Oh, and it's great because it's [literate programming](https://en.wikipedia.org/
The contents of the following files would not have been possible without the help of many people. Here's an alphabetical list. Big kudos to everyone 🙏
- [Li-wei Chen](https://ge.in.tum.de/about/dr-liwei-chen/)
- [Philipp Holl](https://ge.in.tum.de/about/)
- [Maximilian Mueller](https://www.tum.de)
- [Patrick Schnell](https://ge.in.tum.de/about/patrick-schnell/)
@@ -82,7 +81,6 @@ See also... Test link: {doc}`supervised`
## TODOs, include
- DP control, show targets at bottom?
-- update teaser image
- DP intro, check transpose of Jacobians in equations
- fix phiflow2 , diffphys-code-ns.ipynb

BIN  resources/logo.jpg   (new file)  Size: 33 KiB
BIN  resources/logo.png   (deleted)   Size: 21 KiB
BIN  resources/teaser.jpg (new file)  Size: 78 KiB
BIN  resources/teaser.png (deleted)   Size: 454 KiB