Discussion
=======================

In a way, learning via physical gradients provides the tightest possible coupling
of physics and NNs: the full non-linear process of the PDE model directly steers
the optimization of the NN.
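
To make this concrete, below is a minimal sketch of such a training step with a toy invertible "simulator". The names `P`, `P_inv`, and `eta`, as well as the cubic physics, are illustrative assumptions rather than any particular library's API; the key point is that the target for the network comes from inverting the full non-linear physics, not from a linearization.

```python
import torch

# Toy stand-ins for a PDE solver and its inverse (assumptions of this sketch);
# in practice P would be a full simulator and P_inv an (at least locally)
# invertible counterpart.
def P(x):
    return x ** 3                                   # non-linear "physics"

def P_inv(y):
    return torch.sign(y) * torch.abs(y) ** (1 / 3)  # exact inverse of P

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

cond = torch.tensor([[0.5]])      # conditioning input for the network
y_target = torch.tensor([[2.0]])  # desired simulator output
eta = 0.5                         # step size in output (y) space

for step in range(1000):
    x = net(cond)                 # NN proposes an input for the physics
    with torch.no_grad():
        y = P(x)                                  # current simulator output
        x_star = P_inv(y + eta * (y_target - y))  # pull the y-step back to x
    # The physics-derived target steers the NN via a simple L2 proxy loss.
    loss = torch.nn.functional.mse_loss(x, x_star)
    opt.zero_grad()
    loss.backward()
    opt.step()
```
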
Naturally, this comes at a cost: invertible simulators are more difficult to build
(and less common) than the first-order gradients from deep learning and adjoint
optimizations. Nonetheless, if they're available, invertible simulators can speed
up convergence and yield models with inherently better performance. Thus, once
trained, these models can give a performance that we simply can't obtain by,
e.g., training longer with a simpler approach. So, if we plan to evaluate these
models often (e.g., ship them in an application), this increased one-time cost
can pay off in the long run.

## Summary

✅ Pro:
- Very accurate "gradient" information for learning and optimization.
- Improved convergence and model performance.
- Tightest possible coupling of model PDEs and learning.

❌ Con:
- Requires inverse simulators (at least local ones); a Newton-style local inverse is sketched below.
|
2021-06-27 16:49:32 +02:00
|
|
|
- Less widespread availability than, e.g., differentiable physics simulators.
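
Regarding the first point above: when no analytic inverse is available, a local inverse can sometimes stand in for it. The following is a hedged sketch, assuming a scalar (element-wise) forward simulator and plain Newton iterations; all names here are hypothetical, and a real simulator would require a Jacobian solve instead of the element-wise division.

```python
import torch

def local_inverse(P, x0, y_target, steps=5):
    """Locally invert P around x0 via Newton iterations (sketch only).
    Assumes a scalar / element-wise P with non-zero derivative."""
    x = x0.clone().requires_grad_(True)
    for _ in range(steps):
        y = P(x)
        (dy_dx,) = torch.autograd.grad(y.sum(), x)
        # Newton update: move x so that P(x) approaches y_target.
        x = (x + (y_target - y) / dy_dx).detach().requires_grad_(True)
    return x.detach()

# Example: invert the toy cubic physics around an initial guess of 1.0;
# the result converges towards 2.0, since 2.0 ** 3 == 8.0.
x = local_inverse(lambda x: x ** 3, torch.tensor([1.0]), torch.tensor([8.0]))
```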