Correct a who to a how
Fixed erroneous wording of `who the errors evolve` to `how the errors evolve`.
This commit is contained in: parent c5727cd08b, commit 0028e0c783
@@ -43,7 +43,7 @@ In this case the PDE solver essentially represents an _on-the-fly_ data generator
In general, there's no combination of NN layers and DP operators that is _forbidden_ (as long as their dimensions are compatible). One that makes particular sense is to "unroll" the iterations of a time stepping process of a simulator, and let the state of a system be influenced by an NN.
-In this case we compute a (potentially very long) sequence of PDE solver steps in the forward pass. In between these solver steps, an NN modifies the state of our system, which is then used to compute the next PDE solver step. During the backpropagation pass, we move backwards through all of these steps to evaluate contributions to the loss function (it can be evaluated in one or more places anywhere in the execution chain), and to backprop the gradient information through the DP and NN operators. This unrolling of solver iterations essentially gives feedback to the NN about how its "actions" influence the state of the physical system and the resulting loss. Due to the iterative nature of this process, many errors start out very small and then grow exponentially over the course of the iterations. Hence they are extremely difficult to detect in a single evaluation, e.g., in a simple supervised training setup. In these cases it is crucial to provide feedback to the NN at training time about who the errors evolve over the course of the iterations. Additionally, a pre-computation of the states is not possible for such iterative cases, as the iterations depend on the state of the NN. Naturally, the NN state is unknown before training time and changes while being trained. Hence, DP-based training is crucial to provide the NN with gradients about how it influences the solver iterations.
+In this case we compute a (potentially very long) sequence of PDE solver steps in the forward pass. In between these solver steps, an NN modifies the state of our system, which is then used to compute the next PDE solver step. During the backpropagation pass, we move backwards through all of these steps to evaluate contributions to the loss function (it can be evaluated in one or more places anywhere in the execution chain), and to backprop the gradient information through the DP and NN operators. This unrolling of solver iterations essentially gives feedback to the NN about how its "actions" influence the state of the physical system and the resulting loss. Due to the iterative nature of this process, many errors start out very small and then grow exponentially over the course of the iterations. Hence they are extremely difficult to detect in a single evaluation, e.g., in a simple supervised training setup. In these cases it is crucial to provide feedback to the NN at training time about how the errors evolve over the course of the iterations. Additionally, a pre-computation of the states is not possible for such iterative cases, as the iterations depend on the state of the NN. Naturally, the NN state is unknown before training time and changes while being trained. Hence, DP-based training is crucial to provide the NN with gradients about how it influences the solver iterations.
```{figure} resources/diffphys-multistep.jpg
---
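To make the unrolled forward and backward pass described in the changed paragraph concrete, here is a minimal sketch in JAX. It assumes a toy 1D diffusion step standing in for the PDE solver and a single dense layer standing in for the NN correction; `solver_step`, `correction`, and the chosen shapes and reference state are illustrative placeholders, not the actual setup from the text.

```python
import jax
import jax.numpy as jnp

def solver_step(u, nu=0.1):
    # Toy differentiable PDE solver step: explicit 1D diffusion
    # with periodic boundaries (an assumed stand-in solver).
    return u + nu * (jnp.roll(u, 1) - 2.0 * u + jnp.roll(u, -1))

def correction(params, u):
    # Hypothetical NN correction: a single dense layer that
    # nudges the state in between solver steps.
    w, b = params
    return u + jnp.tanh(u @ w + b)

def unrolled_loss(params, u0, u_ref, n_steps=20):
    # Forward pass: a (potentially long) chain of solver and NN steps.
    u, loss = u0, 0.0
    for _ in range(n_steps):
        u = solver_step(u)
        u = correction(params, u)
        # The loss can be evaluated at one or more places in the
        # execution chain; here we accumulate it after every step.
        loss = loss + jnp.mean((u - u_ref) ** 2)
    return loss

n = 32
params = (0.01 * jax.random.normal(jax.random.PRNGKey(0), (n, n)),
          jnp.zeros(n))
u0 = jax.random.normal(jax.random.PRNGKey(1), (n,))
u_ref = jnp.zeros(n)  # illustrative reference state

# Backpropagation moves backwards through all unrolled solver and NN
# steps, giving the NN gradients about how it influences the iterations.
grads = jax.grad(unrolled_loss)(params, u0, u_ref)
```

In this sketch the NN parameters receive gradient contributions from every unrolled step, which is exactly the feedback about how errors evolve over the iterations that a single-step supervised setup cannot provide.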