update dp burgers for phiflow2

NT 2021-02-26 22:29:52 +08:00
parent e99b2a0b4a
commit feb1477391
7 changed files with 783 additions and 151 deletions


@@ -19,7 +19,6 @@
- file: diffphys
sections:
- file: diffphys-code-gradient.ipynb
- file: diffphys-code-tf.ipynb
- file: diffphys-discuss.md
- file: diffphys-code-ns-v2a.ipynb
- file: diffphys-code-sol.ipynb
@@ -29,6 +28,8 @@
- file: overview-burgers-forw-v1.ipynb
- file: overview-ns-forw-v1.ipynb
- file: diffphys-code-ns.ipynb
- file: diffphys-code-gradient-v1.ipynb
- file: diffphys-code-tf.ipynb
- file: jupyter-book-reference
sections:
- file: jupyter-book-reference-markdown

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -336,7 +336,8 @@
"\n",
"Now we have both versions, so let's compare both reconstructions in more detail.\n",
"\n",
"Let's first look at the solutions side by side. The code below generates an image with 3 versions, from top to bottom: the \"ground truth\" (GT) solution as given by the regular forward simulation, in the middle the PINN reconstruction, and at the bottom the differentiable physics version."
"Let's first look at the solutions side by side. The code below generates an image with 3 versions, from top to bottom: the \"ground truth\" (GT) solution as given by the regular forward simulation, in the middle the PINN reconstruction, and at the bottom the differentiable physics version.\n",
"\n"
]
},
{
@@ -481,7 +482,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
"version": "3.8.5"
}
},
"nbformat": 4,


@@ -25,7 +25,7 @@ TODO, visual overview of DP training
## Differentiable Operators
With this direction we build on existing numerical solvers. I.e.,
With the DP direction we build on existing numerical solvers. I.e.,
the approach relies strongly on the algorithms developed in the larger field
of computational methods for a vast range of physical effects in our world.
To start with, we need a continuous formulation as a model for the physical effect that we'd like
@@ -35,7 +35,7 @@ method for discretization of the equation.
Let's assume we have a continuous formulation $\mathcal P^*(\mathbf{x}, \nu)$ of the physical quantity of
interest $\mathbf{u}(\mathbf{x}, t): \mathbb R^d \times \mathbb R^+ \rightarrow \mathbb R^d$,
with a model parameter $\nu$ (e.g., a diffusion or viscosity constant).
with model parameters $\nu$ (e.g., diffusion, viscosity, or conductivity constants).
The components of $\mathbf{u}$ will be denoted by numbered subscripts, i.e.,
$\mathbf{u} = (u_1,u_2,\dots,u_d)^T$.
%and a corresponding discrete version that describes the evolution of this quantity over time: $\mathbf{u}_t = \mathcal P(\mathbf{x}, \mathbf{u}, t)$.
@@ -54,9 +54,11 @@ $\partial \mathcal P_i / \partial \mathbf{u}$.
Note that we typically don't need derivatives
for all parameters of $\mathcal P$, e.g. we omit $\nu$ in the following, assuming that this is a
given model parameter, with which the NN should not interact. Naturally, it can vary,
by $\nu$ will not be the output of a NN representation. If this is the case, we can omit
providing $\partial \mathcal P_i / \partial \nu$ in our solver.
given model parameter, with which the NN should not interact.
Naturally, it can vary within the solution manifold that we're interested in,
but $\nu$ will not be the output of a NN representation. If this is the case, we can omit
providing $\partial \mathcal P_i / \partial \nu$ in our solver. However, the following learning process
naturally transfers to including $\nu$ as a degree of freedom.
## Jacobians
@@ -93,7 +95,7 @@ this would cause huge memory overheads and unnecessarily slow down training.
Instead, for backpropagation, we can provide faster operations that compute products
with the Jacobian transpose because we always have a scalar loss function at the end of the chain.
[TODO check transpose of Jacobians in equations]
**[TODO check transpose of Jacobians in equations]**
Given the formulation above, we need to resolve the derivatives
of the chain of function compositions of the $\mathcal P_i$ at some current state $\mathbf{u}^n$ via the chain rule.
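
To make this concrete, here is a minimal sketch (assuming PyTorch as the autodiff framework; the Burgers-style step `P` below is a hypothetical stand-in for one solver operator $\mathcal P_i$, not the solvers used later in this book) that evaluates such a Jacobian-transpose product without ever materializing the full Jacobian:

```python
import torch

# Hypothetical stand-in for one solver operator P_i: an explicit
# Burgers-style advection step on a periodic 1D grid.
def P(u, dt=0.01, dx=1.0):
    dudx = (torch.roll(u, -1) - torch.roll(u, 1)) / (2.0 * dx)  # central difference
    return u - dt * u * dudx

u = torch.randn(128)  # current state u^n
v = torch.randn(128)  # incoming gradient from the rest of the chain

# vjp returns P(u) together with v^T (dP/du), i.e. the product with the
# Jacobian transpose -- the 128x128 Jacobian itself is never built.
out, u_grad = torch.autograd.functional.vjp(P, u, v)
```

For a state with $n$ entries this costs roughly one extra forward-like pass, instead of the $O(n^2)$ memory a full Jacobian would require.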
@@ -121,7 +123,7 @@ as this [nice survey by Baydin et al.](https://arxiv.org/pdf/1502.05767.pdf).
## Learning via DP Operators
Thus, long story short, once the operators of our simulator support computations of the Jacobian-vector
Thus, once the operators of our simulator support computations of the Jacobian-vector
products, we can integrate them into DL pipelines just as you would include a regular fully-connected layer
or a ReLU activation.
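
As an illustrative sketch of such an integration (again assuming PyTorch; the diffusion step here is a hypothetical placeholder operator), a solver step with a hand-written backward pass can be wrapped like any other layer:

```python
import torch

class DiffusionStep(torch.autograd.Function):
    """One explicit diffusion step u + dt * lap(u), with a custom backward."""

    @staticmethod
    def forward(ctx, u, dt):
        ctx.dt = dt
        lap = torch.roll(u, 1) - 2.0 * u + torch.roll(u, -1)
        return u + dt * lap

    @staticmethod
    def backward(ctx, grad_out):
        # This step is linear and symmetric, so the Jacobian-transpose product
        # is simply the same stencil applied to the incoming gradient.
        lap = torch.roll(grad_out, 1) - 2.0 * grad_out + torch.roll(grad_out, -1)
        return grad_out + ctx.dt * lap, None  # no gradient for dt

u = torch.randn(64, requires_grad=True)
loss = DiffusionStep.apply(u, 0.1).square().sum()
loss.backward()  # fills u.grad, just like for a fully-connected layer
```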
@@ -175,9 +177,6 @@ procedure for a _forward_ solve.
Note that to simplify things, we assume that $\mathbf{u}$ is only a function in space,
i.e. constant over time. We'll bring back the time evolution of $\mathbf{u}$ later on.
%
[TODO, write out simple finite diff approx?]
[denote discrete d as $\mathbf{d}$ below?]
%
Let's denote this re-formulation as $\mathcal P$. It maps a state $d(t)$ into a
new state at an evolved time, i.e.:
@@ -186,7 +185,7 @@ $$
$$
As a simple example of an optimization and learning task, let's consider the problem of
finding an motion $\mathbf{u}$ such that starting with a given initial state $d^{~0}$ at $t^0$,
finding a motion $\mathbf{u}$ such that starting with a given initial state $d^{~0}$ at $t^0$,
the time evolved scalar density at time $t^e$ has a certain shape or configuration $d^{\text{target}}$.
Informally, we'd like to find a motion that deforms $d^{~0}$ into a target state.
The simplest way to express this goal is via an $L^2$ loss between the two states. So we want
@@ -273,8 +272,9 @@ be preferable to actually constructing $A$.
## A (slightly) more complex example
[TODO]
more complex, matrix inversion, eg Poisson solve
**[TODO]**
a bit more complex, matrix inversion, e.g. a Poisson solve
don't backprop through all CG steps (available in phiflow though)
rather, re-use linear solver to compute multiplication by inverse matrix
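
The idea behind these notes can be sketched as follows (a hypothetical PyTorch version, not phiflow's implementation): for $x = A^{-1} b$ with a fixed matrix $A$, the backward pass only needs one additional solve with $A^T$, rather than differentiating through every CG iteration:

```python
import torch

class LinearSolve(torch.autograd.Function):
    """x = A^{-1} b, with A treated as a fixed, given matrix."""

    @staticmethod
    def forward(ctx, A, b):
        x = torch.linalg.solve(A, b)  # stand-in for a CG solve
        ctx.save_for_backward(A)
        return x

    @staticmethod
    def backward(ctx, grad_x):
        (A,) = ctx.saved_tensors
        # One extra solve with A^T gives d(loss)/db = A^{-T} d(loss)/dx;
        # no need to unroll and store the individual CG iterations.
        grad_b = torch.linalg.solve(A.transpose(-1, -2), grad_x)
        return None, grad_b
```

For a symmetric system, as in a Poisson solve, $A^T = A$, so the very same solver call can be re-used in the backward pass.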


@@ -82,6 +82,12 @@ See also... Test link: {doc}`supervised`
---
## TODOs, include
- general motivation: repeated solves in classical solvers -> potential for ML
- PINNs: often need weighting of added loss terms for different parts
## TODOs, planned content
Loose collection of notes and TODOs:


@@ -23,14 +23,14 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Using phiflow version: 2.0.0\n"
"Using phiflow version: 2.0.0rc0\n"
]
}
],
@@ -156,7 +156,7 @@
{
"data": {
"text/plain": [
"[<matplotlib.lines.Line2D at 0x7faed7f5eeb0>]"
"[<matplotlib.lines.Line2D at 0x7f7ef47d8160>]"
]
},
"execution_count": 5,
@@ -177,9 +177,8 @@
}
],
"source": [
"# we only need \"velocity.data\" from each phiflow state\n",
"#vels = [v.values.numpy('x') for v in velocities]\n",
"vels = [v.values.numpy('x,vector') for v in velocities] # vel vx\n",
"# get \"velocity.values\" from each phiflow state with a channel dimensions, i.e. \"vector\"\n",
"vels = [v.values.numpy('x,vector') for v in velocities] # gives a list of 2D arrays \n",
"\n",
"import pylab\n",
"\n",
@@ -203,14 +202,14 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Vels array shape: (128, 33, 1)\n"
"resulting image size(128, 528)\n"
]
},
{
@@ -230,18 +229,18 @@
"def show_state(a):\n",
" # we only have 33 time steps, blow up by a factor of 2^4 to make it easier to see\n",
" # (could also be done with more evaluations of network)\n",
" a=np.expand_dims(a, axis=2)\n",
" for i in range(4):\n",
" a = np.concatenate( [a,a] , axis=2)\n",
"\n",
" a = np.reshape( a, [a.shape[0],a.shape[1]*a.shape[2]] )\n",
" #print(\"resulting image size\" +format(a.shape))\n",
" print(\"resulting image size\" +format(a.shape))\n",
"\n",
" fig, axes = pylab.subplots(1, 1, figsize=(16, 5))\n",
" im = axes.imshow(a, origin='upper', cmap='inferno')\n",
" pylab.colorbar(im) \n",
" \n",
"#vels_img = np.asarray( np.stack(vels), dtype=np.float32 ).transpose() # no component channel\n",
"vels_img = np.asarray( np.concatenate(vels, axis=-1), dtype=np.float32 ) # vel vx\n",
"vels_img = np.reshape(vels_img, list(vels_img.shape)+[1] ) ; print(\"Vels array shape: \"+format(vels_img.shape))\n",
"vels_img = np.asarray( np.concatenate(vels, axis=-1), dtype=np.float32 ) \n",
"\n",
"# save for comparison with reconstructions later on\n",
"np.savez_compressed(\"./temp/burgers-groundtruth-solution.npz\", np.reshape(vels_img,[N,STEPS+1])) # remove batch & channel dimension\n",