PG conclusions, list formatting

NT 2021-03-26 10:28:05 +08:00
parent 145fa437c1
commit 0c8cb1a996
4 changed files with 35 additions and 22 deletions


@@ -26,13 +26,13 @@ actual solver in the training loop via a DP approach.
To summarize the pros and cons of training NNs via differentiable physics:
✅ Pro:
- Uses physical model and numerical methods for discretization.
- Efficiency of selected methods carries over to training.
- Tight coupling of physical models and NNs possible.
❌ Con:
- Not compatible with all simulators (they need to provide gradients).
- Requires heavier machinery (in terms of framework support) than the previously discussed methods.
The last point, especially, is bound to improve within a fairly short time, but for now it's important to keep in mind that not every simulator is suitable for DP training out of the box. Hence, in this book we'll focus on examples using phiflow, which was designed for interfacing with deep learning frameworks.
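To make the "solver in the training loop" idea concrete, below is a minimal sketch of DP training in plain PyTorch: a toy explicit diffusion step stands in for a differentiable solver (frameworks like phiflow provide such operations ready-made), and all names (`solver_step`, `corrector`) and hyperparameters are illustrative assumptions rather than a fixed recipe.

```python
import torch

def solver_step(u, dt):
    # Explicit diffusion step on a periodic 1D grid; built purely from
    # differentiable tensor ops, so it stands in for a differentiable solver.
    lap = torch.roll(u, 1, -1) - 2 * u + torch.roll(u, -1, -1)
    return u + dt * lap

# Small NN that corrects the coarse solver towards the reference.
corrector = torch.nn.Sequential(
    torch.nn.Linear(32, 64), torch.nn.Tanh(), torch.nn.Linear(64, 32))
opt = torch.optim.Adam(corrector.parameters(), lr=1e-3)

u0 = torch.randn(8, 32)                  # batch of 1D initial states
with torch.no_grad():                    # finer rollout serves as reference
    ref = u0
    for _ in range(8):
        ref = solver_step(ref, dt=0.05)

for it in range(200):
    opt.zero_grad()
    u = u0
    for _ in range(2):                   # coarse solver unrolled in the loop
        u = solver_step(u, dt=0.2) + corrector(u)
    loss = torch.nn.functional.mse_loss(u, ref)
    loss.backward()                      # differentiates *through* the solver
    opt.step()
```

The crucial detail is that `loss.backward()` propagates gradients through the unrolled solver steps as well as through the network, so the corrector is trained with full knowledge of how the solver reacts to its outputs.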
Next, we can target more complex scenarios to showcase what can be achieved with differentiable physics.


@@ -1,14 +1,27 @@
Discussion
=======================
In a way, learning via physical gradients provides the tightest possible coupling
of physics and NNs: the full non-linear process of the PDE model directly steers
the optimization of the NN.
Naturally, this comes at a cost: invertible simulators are more difficult to build
(and less common) than the first-order gradients commonly used
for learning processes and adjoint optimizations. Nonetheless, when they're available,
they can speed up convergence and yield models with inherently better performance.
Once trained, these models can reach an accuracy that we simply can't obtain
by, e.g., training longer with a simpler approach. So, if we plan to evaluate these
models often (e.g., ship them in an application), this increased one-time cost
can pay off in the long run.
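To illustrate the difference to first-order updates, consider a toy invertible "simulator" P(x) = x^3. This is only a sketch under strong assumptions: `P`, `P_inv`, and the step size are illustrative, and a real application would rely on a (locally) invertible numerical solver rather than an analytic inverse.

```python
import torch

def P(x):      # toy invertible forward "simulator"
    return x ** 3

def P_inv(y):  # its analytic inverse (sign-preserving cube root)
    return torch.sign(y) * torch.abs(y) ** (1.0 / 3.0)

x = torch.tensor(2.0, requires_grad=True)
y_target = torch.tensor(1.0)

# First-order update: one gradient-descent step on L = 1/2 (P(x) - y_target)^2.
loss = 0.5 * (P(x) - y_target) ** 2
loss.backward()
eta = 1e-2
x_gd = (x - eta * x.grad).detach()  # result depends on eta and the slope of P

# Inverse-simulator update: jump straight to the pre-image of the target.
x_pg = P_inv(y_target)

print(x_gd.item(), x_pg.item())  # ~1.16 vs. exactly 1.0
```

The first-order step is strongly scale-dependent, via the learning rate and the local slope of P, whereas the inverse-simulator update directly yields the state that produces the desired output.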
## Summary
✅ Pro:
- Very accurate "gradient" information for learning and optimization.
- Improved convergence and model performance.
- Tightest possible coupling of model PDEs and learning.
❌ Con:
- Requires inverse simulators (at least local ones).
- Less widespread availability than, e.g., differentiable physics simulators.


@@ -57,14 +57,14 @@ Bringing these numerical methods back into the picture will be one of the central
goals of the next sections.
✅ Pro:
- Uses physical model.
- Derivatives can be conveniently computed via backpropagation.
❌ Con:
- Quite slow ...
- Physical constraints are enforced only as soft constraints.
- Largely incompatible with _classical_ numerical methods.
- Accuracy of derivatives relies on the learned representation.
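To make the soft-constraint point concrete, here is a minimal physics-informed loss in PyTorch. The Burgers-type residual, the network size, and the sampling are illustrative assumptions; note that the residual merely enters the loss and is not enforced exactly anywhere.

```python
import torch

# NN approximating the solution u(x, t).
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1))
nu = 0.01  # viscosity, illustrative value

def grad(out, inp):
    # Derivative of `out` w.r.t. `inp` via backpropagation.
    return torch.autograd.grad(out.sum(), inp, create_graph=True)[0]

def residual(x, t):
    # Residual of u_t + u u_x - nu u_xx = 0 (1D Burgers-type PDE).
    u = net(torch.stack([x, t], dim=-1)).squeeze(-1)
    u_x, u_t = grad(u, x), grad(u, t)
    return u_t + u * u_x - nu * grad(u_x, x)

x = torch.rand(128, requires_grad=True)  # sampled collocation points
t = torch.rand(128, requires_grad=True)
loss = (residual(x, t) ** 2).mean()      # soft constraint: residual in the loss
loss.backward()                          # in practice combined with data terms
```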
Next, let's look at how we can leverage numerical methods to improve the DL accuracy and efficiency
by making use of differentiable solvers.


@@ -96,18 +96,18 @@ avoid overfitting.
## Supervised Training in a nutshell
To summarize, supervised training has the following properties.
✅ Pros:
- Very fast training.
- Stable and simple.
- Great starting point.
❌ Con:
- Lots of data needed.
- Sub-optimal performance, accuracy and generalization.
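For comparison with the approaches discussed in the later chapters, below is a minimal sketch of this purely supervised setup; the random tensors are stand-ins for a dataset of precomputed simulation inputs and outputs, and the model and optimizer choices are illustrative.

```python
import torch

# Stand-ins for precomputed simulation data (input/output pairs).
inputs = torch.randn(1024, 16)
targets = torch.randn(1024, 16)

model = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 16))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    loss.backward()  # purely data-driven: no solver enters the loop
    opt.step()
```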
Outlook: any interactions with external "processes" (such as embedding into a solver) are tricky with supervised training.
First, we'll look at bringing model equations into the picture via soft constraints, and afterwards
we'll revisit the challenges of bringing together numerical simulations and learned approaches.