minor cleanup
parent 59d3d8c040
commit 00d198df5b
_toc.yml (9 changed lines)
@@ -34,15 +34,6 @@ parts:
     chapters:
     - file: reinflearn-intro.md
     - file: reinflearn-code.ipynb
-  - caption: Improved Gradients
-    chapters:
-    - file: physgrad.md
-    - file: physgrad-comparison.ipynb
-    - file: physgrad-nn.md
-    - file: physgrad-code.ipynb
-    - file: physgrad-hig.md
-    - file: physgrad-hig-code.ipynb
-    - file: physgrad-discuss.md
   - caption: PBDL and Uncertainty
     chapters:
     - file: bayesian-intro.md
@@ -2,9 +2,6 @@ Discussion
 =======================

 The previous sections have explained the _differentiable physics_ approach for deep learning, and have given a range of examples: from a very basic gradient calculation, all the way to complex learning setups powered by advanced simulations. This is a good time to take a step back and evaluate: in the end, the differentiable physics components of these approaches are not too complicated. They are largely based on existing numerical methods, with a focus on efficiently using those methods not only to do a forward simulation, but also to compute gradient information.

-The training via differentiable physics (DP) allows us
-to integrate full numerical simulations into the training of deep neural networks.
-What is primarily exciting in this context are the implications that arise from the combination of these numerical methods with deep learning.

 ![Divider](resources/divider6.jpg)
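To make the "compute gradient information" point concrete: from the training loop's perspective, a differentiable solver is simply a function that automatic differentiation can see through. The sketch below is illustrative only (plain PyTorch, not taken from this commit or the book's sources); the toy `simulate` integrator is an assumed stand-in for a real PDE solver.

```python
# Minimal sketch: backpropagating through an unrolled toy "simulation".
# `simulate` is a hypothetical stand-in for a differentiable PDE solver.
import torch

def simulate(u, dt=0.1, steps=10):
    for _ in range(steps):
        u = u + dt * (-0.5 * u + 0.1 * u ** 2)  # toy explicit update rule
    return u

u0 = torch.randn(64, requires_grad=True)   # initial state
loss = (simulate(u0) ** 2).mean()           # e.g., drive the final state to zero
loss.backward()                             # gradients flow through all solver steps
print(u0.grad.shape)                        # d(loss)/d(u0), same shape as the state
```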
@@ -39,4 +36,8 @@ To summarize, the pros and cons of training NNs via DP:

 _Outlook_: the last negative point (regarding heavy machinery) is bound to improve strongly given the current pace of software and API developments in the DL area. However, for now it's important to keep in mind that not every simulator is suitable for DP training out of the box. Hence, in this book we'll focus on examples using phiflow, which was designed for interfacing with deep learning frameworks.

-Training NNs via differentiable physics solvers is a very generic approach that is applicable to a wide range of combinations of PDE-based models and deep learning. In the next chapters, we will target the underlying learning process to obtain even better NN states.
+Training via differentiable physics (DP) allows us to integrate full numerical simulations into the training of deep neural networks.
+It is also a very generic approach that is applicable to a wide range of combinations of PDE-based models and deep learning.
+
+In the next chapters, we will first compare DP training to model-free alternatives for control problems, and afterwards target the underlying learning process to obtain even better NN states.
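What "integrating a full simulation into NN training" means in practice can be illustrated with a hypothetical unrolled training loop: a small network applies corrections inside the solver loop, and the loss gradient is backpropagated through every solver step into the network weights. All names here (`solver_step`, `correction_net`) are illustrative assumptions rather than the book's actual code; the book's real examples use phiflow, as noted in the _Outlook_ above.

```python
# Hypothetical DP training sketch: the NN is trained *through* the solver.
import torch

def solver_step(u, dt=0.1):
    return u + dt * (-0.5 * u)  # stand-in for one differentiable PDE step

correction_net = torch.nn.Sequential(
    torch.nn.Linear(64, 64), torch.nn.Tanh(), torch.nn.Linear(64, 64))
opt = torch.optim.Adam(correction_net.parameters(), lr=1e-3)

for _ in range(100):
    u = torch.randn(16, 64)             # batch of initial states
    for _ in range(8):                  # unrolled simulation, NN in the loop
        u = solver_step(u + correction_net(u))
    loss = (u ** 2).mean()              # task: reach a zero final state
    opt.zero_grad()
    loss.backward()                     # backprop through all 8 solver steps
    opt.step()
```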
@@ -1,7 +1,7 @@
 Towards Gradient Inversion
 =======================

-**Note, this chapter is very preliminary - probably not for the first version of the book. move after RL, before BNNs?**
+**Note, this chapter is very preliminary - to be finalized**

 The next chapter will question some fundamental aspects of the formulations so far, namely the update step computed via gradients.
 To recap, the approaches explained in the previous chapters either dealt with purely _supervised_ training, integrated the physical model as a _physical loss term_, or included it via _differentiable physics_ (DP) operators embedded into the training graph.