spellcheck

This commit is contained in:
NT
2021-03-09 16:39:54 +08:00
parent 42061e7d00
commit c443f2bfdf
12 changed files with 55 additions and 55 deletions

@@ -10,7 +10,7 @@ The central goal of this methods is to use existing numerical solvers, and equip
 them with functionality to compute gradients with respect to their inputs.
 Once this is realized for all operators of a simulation, we can leverage
 the autodiff functionality of DL frameworks with back-propagation to let gradient
-information from from a simulator into an NN and vice versa. This has numerous
+information from from a simulator into an ANN and vice versa. This has numerous
 advantages such as improved learning feedback and generalization, as we'll outline below.
 In contrast to physics-informed loss functions, it also enables handling more complex
 solution manifolds instead of single inverse problems.
@@ -54,9 +54,9 @@ $\partial \mathcal P_i / \partial \mathbf{u}$.
 Note that we typically don't need derivatives
 for all parameters of $\mathcal P$, e.g. we omit $\nu$ in the following, assuming that this is a
-given model parameter, with which the NN should not interact.
+given model parameter, with which the ANN should not interact.
 Naturally, it can vary within the solution manifold that we're interested in,
-but $\nu$ will not be the output of a NN representation. If this is the case, we can omit
+but $\nu$ will not be the output of a ANN representation. If this is the case, we can omit
 providing $\partial \mathcal P_i / \partial \nu$ in our solver. However, the following learning process
 natuarlly transfers to including $\nu$ as a degree of freedom.
@@ -189,7 +189,7 @@ Informally, we'd like to find a motion that deforms $d^{~0}$ into a target state
 The simplest way to express this goal is via an $L^2$ loss between the two states. So we want
 to minimize the loss function $F=|d(t^e) - d^{\text{target}}|^2$.
-Note that as described here this is a pure optimization task, there's no NN involved,
+Note that as described here this is a pure optimization task, there's no ANN involved,
 and our goal is to obtain $\mathbf{u}$. We do not want to apply this motion to other, unseen _test data_,
 as would be custom in a real learning task.
@@ -204,7 +204,7 @@ We'd now like to find the minimizer for this objective by
 _gradient descent_ (GD), where the
 gradient is determined by the differentiable physics approach described earlier in this chapter.
 Once things are working with GD, we can relatively easily switch to better optimizers or bring
-an NN into the picture, hence it's always a good starting point.
+an ANN into the picture, hence it's always a good starting point.
 As the discretized velocity field $\mathbf{u}$ contains all our degrees of freedom,
 what we need to update the velocity by an amount
@@ -276,15 +276,15 @@ a bit more complex, matrix inversion, eg Poisson solve
 dont backprop through all CG steps (available in phiflow though)
 rather, re-use linear solver to compute multiplication by inverse matrix
-[note 1: essentialy yields implicit derivative, cf implicit function theorem & co]
+[note 1: essentially yields implicit derivative, cf implicit function theorem & co]
 [note 2: time can be "virtual" , solving for steady state
-only assumption: some iterative procedure, not just single eplicit step - then things simplify.]
+only assumption: some iterative procedure, not just single explicit step - then things simplify.]
 ## Summary of Differentiable Physics so far
 To summarize, using differentiable physical simulations
-gives us a tool to include phsyical equations with a chosen discretization into DL learning.
+gives us a tool to include physical equations with a chosen discretization into DL learning.
 In contrast to the residual constraints of the previous chapter,
 this makes it possible to left NNs seamlessly interact with physical solvers.
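The optimization described in the diffed text (minimizing $F=|d(t^e) - d^{\text{target}}|^2$ over a velocity $\mathbf{u}$ by back-propagating through the solver) can be sketched as follows. This is a hypothetical, minimal illustration, not the book's actual phiflow code: the `advect`/`simulate` functions stand in for a real differentiable solver, and JAX's autodiff plays the role of the DL framework's back-propagation.

```python
# Hypothetical sketch: gradient descent through a differentiable toy "solver"
# via autodiff. advect/simulate are stand-ins for a real physics simulator.
import jax
import jax.numpy as jnp

def advect(d, u):
    # Toy linear physics step: move the state d along the velocity u.
    return d + u

def simulate(d0, u, steps=4):
    # Roll the solver forward from d0 to the end state d(t^e).
    d = d0
    for _ in range(steps):
        d = advect(d, u)
    return d

def loss(u, d0, d_target):
    # L2 objective F = |d(t^e) - d^target|^2 from the text.
    d_end = simulate(d0, u)
    return jnp.sum((d_end - d_target) ** 2)

d0 = jnp.zeros(8)                    # initial state d^0
d_target = jnp.ones(8)               # target state d^target
u = jnp.zeros(8)                     # velocity: our degrees of freedom
grad_fn = jax.grad(loss)             # dF/du, back-propagated through the solver

for _ in range(100):                 # plain gradient descent on u
    u = u - 0.05 * grad_fn(u, d0, d_target)
```

Because every solver operation here is differentiable, `jax.grad` delivers exactly the gradient information the text describes flowing "from a simulator into an ANN"; swapping `u` for the output of a network would turn this pure optimization task into a learning task.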