update dp burgers for phiflow2

commit feb1477391
parent e99b2a0b4a

_toc.yml (3 changes)
@@ -19,7 +19,6 @@
 - file: diffphys
   sections:
   - file: diffphys-code-gradient.ipynb
-  - file: diffphys-code-tf.ipynb
   - file: diffphys-discuss.md
   - file: diffphys-code-ns-v2a.ipynb
   - file: diffphys-code-sol.ipynb
@@ -29,6 +28,8 @@
 - file: overview-burgers-forw-v1.ipynb
 - file: overview-ns-forw-v1.ipynb
 - file: diffphys-code-ns.ipynb
+- file: diffphys-code-gradient-v1.ipynb
+- file: diffphys-code-tf.ipynb
 - file: jupyter-book-reference
   sections:
   - file: jupyter-book-reference-markdown
diffphys-code-gradient-v1.ipynb (new file, 285 lines)
File diff suppressed because one or more lines are too long
@@ -336,7 +336,8 @@
 "\n",
 "Now we have both versions, so let's compare both reconstructions in more detail.\n",
 "\n",
-"Let's first look at the solutions side by side. The code below generates an image with 3 versions, from top to bottom: the \"ground truth\" (GT) solution as given by the regular forward simulation, in the middle the PINN reconstruction, and at the bottom the differentiable physics version."
+"Let's first look at the solutions side by side. The code below generates an image with 3 versions, from top to bottom: the \"ground truth\" (GT) solution as given by the regular forward simulation, in the middle the PINN reconstruction, and at the bottom the differentiable physics version.\n",
+"\n"
 ]
 },
 {
@@ -481,7 +482,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.6"
+"version": "3.8.5"
 }
 },
 "nbformat": 4,
diffphys.md (26 changes)
@@ -25,7 +25,7 @@ TODO, visual overview of DP training
 
 ## Differentiable Operators
 
-With this direction we build on existing numerical solvers. I.e.,
+With the DP direction we build on existing numerical solvers. I.e.,
 the approach is strongly relying on the algorithms developed in the larger field
 of computational methods for a vast range of physical effects in our world.
 To start with we need a continuous formulation as model for the physical effect that we'd like
@@ -35,7 +35,7 @@ method for discretization of the equation.
 
 Let's assume we have a continuous formulation $\mathcal P^*(\mathbf{x}, \nu)$ of the physical quantity of
 interest $\mathbf{u}(\mathbf{x}, t): \mathbb R^d \times \mathbb R^+ \rightarrow \mathbb R^d$,
-with a model parameter $\nu$ (e.g., a diffusion or viscosity constant).
+with model parameters $\nu$ (e.g., diffusion, viscosity, or conductivity constants).
 The component of $\mathbf{u}$ will be denoted by a numbered subscript, i.e.,
 $\mathbf{u} = (u_1,u_2,\dots,u_d)^T$.
 %and a corresponding discrete version that describes the evolution of this quantity over time: $\mathbf{u}_t = \mathcal P(\mathbf{x}, \mathbf{u}, t)$.
@@ -54,9 +54,11 @@ $\partial \mathcal P_i / \partial \mathbf{u}$.
 
 Note that we typically don't need derivatives
 for all parameters of $\mathcal P$, e.g. we omit $\nu$ in the following, assuming that this is a
-given model parameter, with which the NN should not interact. Naturally, it can vary,
-by $\nu$ will not be the output of a NN representation. If this is the case, we can omit
-providing $\partial \mathcal P_i / \partial \nu$ in our solver.
+given model parameter, with which the NN should not interact.
+Naturally, it can vary within the solution manifold that we're interested in,
+but $\nu$ will not be the output of a NN representation. If this is the case, we can omit
+providing $\partial \mathcal P_i / \partial \nu$ in our solver. However, the following learning process
+naturally transfers to including $\nu$ as a degree of freedom.
 
 ## Jacobians
 
@@ -93,7 +95,7 @@ this would cause huge memory overheads and unnecessarily slow down training.
 Instead, for backpropagation, we can provide faster operations that compute products
 with the Jacobian transpose because we always have a scalar loss function at the end of the chain.
 
-[TODO check transpose of Jacobians in equations]
+**[TODO check transpose of Jacobians in equations]**
 
 Given the formulation above, we need to resolve the derivatives
 of the chain of function compositions of the $\mathcal P_i$ at some current state $\mathbf{u}^n$ via the chain rule.
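The Jacobian-transpose remark in the hunk above can be sketched in a few lines of NumPy: for a linear solver step, the product with the Jacobian transpose is just another stencil application, so the dense Jacobian never needs to be built. The helper names below are hypothetical illustrations, not part of the book's solvers or phiflow.

```python
import numpy as np

# One explicit step of a toy solver: P(u) = u + dt*nu*Laplacian(u) on a
# periodic 1D grid.
def diffusion_step(u, nu=0.1, dt=0.1):
    lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)
    return u + dt * nu * lap

def diffusion_vjp(grad_out, nu=0.1, dt=0.1):
    # Product with the Jacobian transpose: the stencil is symmetric, so this
    # is the same diffusion-like operation applied to the incoming gradient.
    lap = np.roll(grad_out, 1) - 2.0 * grad_out + np.roll(grad_out, -1)
    return grad_out + dt * nu * lap

n = 8
# Dense Jacobian, column j = P(e_j), built here only for comparison;
# the VJP above avoids constructing it.
J = np.stack([diffusion_step(np.eye(n)[j]) for j in range(n)], axis=1)
g = np.random.rand(n)  # gradient dL/dP flowing back from the scalar loss
assert np.allclose(J.T @ g, diffusion_vjp(g))
```

Since the loss at the end of the chain is scalar, backpropagation only ever needs such vector products, never the full matrix.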
@@ -121,7 +123,7 @@ as this [nice survey by Baydin et al.](https://arxiv.org/pdf/1502.05767.pdf).
 
 ## Learning via DP Operators
 
-Thus, long story short, once the operators of our simulator support computations of the Jacobian-vector
+Thus, once the operators of our simulator support computations of the Jacobian-vector
 products, we can integrate them into DL pipelines just like you would include a regular fully-connected layer
 or a ReLU activation.
 
@@ -175,9 +177,6 @@ procedure for a _forward_ solve.
 Note that to simplify things, we assume that $\mathbf{u}$ is only a function in space,
 i.e. constant over time. We'll bring back the time evolution of $\mathbf{u}$ later on.
 %
-[TODO, write out simple finite diff approx?]
-[denote discrete d as $\mathbf{d}$ below?]
-%
 Let's denote this re-formulation as $\mathcal P$. It maps a state of $d(t)$ into a
 new state at an evolved time, i.e.:
 
@@ -186,7 +185,7 @@
 $$
 
 As a simple example of an optimization and learning task, let's consider the problem of
-finding an motion $\mathbf{u}$ such that starting with a given initial state $d^{~0}$ at $t^0$,
+finding a motion $\mathbf{u}$ such that starting with a given initial state $d^{~0}$ at $t^0$,
 the time evolved scalar density at time $t^e$ has a certain shape or configuration $d^{\text{target}}$.
 Informally, we'd like to find a motion that deforms $d^{~0}$ into a target state.
 The simplest way to express this goal is via an $L^2$ loss between the two states. So we want
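The optimization goal in the hunk above (deform $d^{~0}$ into $d^{\text{target}}$ by minimizing an $L^2$ loss) can be illustrated with a toy stand-in: a fixed linear matrix `P` plays the role of the time evolution, and plain gradient descent on an additive control `u` recovers the target. Everything here (the random operator, the additive control, the step size) is a hypothetical stand-in for illustration, not the book's actual advection setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
# Stand-in forward operator: a fixed, well-conditioned matrix playing the
# role of the time evolution P.
P = np.eye(n) + 0.05 * rng.standard_normal((n, n))
d0 = rng.standard_normal(n)        # initial density d^0
d_target = rng.standard_normal(n)  # desired end state d^target

u = np.zeros(n)  # control ("motion"), simply additive here for illustration
lr = 0.3
for _ in range(500):
    d_end = P @ (d0 + u)               # forward simulate
    grad_u = P.T @ (d_end - d_target)  # chain rule: Jacobian transpose * dL/dd
    u -= lr * grad_u                   # gradient descent step

loss = 0.5 * np.sum((P @ (d0 + u) - d_target) ** 2)
```

The gradient step is exactly the Jacobian-transpose product from the previous section, applied to the $L^2$ difference at the final time.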
@@ -273,8 +272,9 @@ be preferable to actually constructing $A$.
 
 ## A (slightly) more complex example
 
-[TODO]
-more complex, matrix inversion, eg Poisson solve
+**[TODO]**
+a bit more complex, matrix inversion, eg Poisson solve
 dont backprop through all CG steps (available in phiflow though)
 rather, re-use linear solver to compute multiplication by inverse matrix
 
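The notes in the hunk above (don't backprop through all CG steps; reuse the linear solver instead) rest on implicit differentiation: if $x = A^{-1} b$, then $\partial L/\partial b = A^{-T}\, (\partial L/\partial x)$, i.e. one extra linear solve rather than an unrolled iteration. A minimal SciPy sketch, assuming a symmetric positive definite $A$ as in a Poisson solve:

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(1)
n = 20
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # SPD matrix, standing in for a Poisson system

b = rng.standard_normal(n)
x, info = cg(A, b)               # forward: x = A^{-1} b via conjugate gradients

grad_x = rng.standard_normal(n)  # incoming gradient dL/dx from the loss
grad_b, info = cg(A, grad_x)     # backward: one extra solve (A symmetric, so
                                 # A^{-T} = A^{-1}); no unrolled CG steps
```

Differentiating through each CG iteration would store every intermediate state; the extra solve gives the same gradient at the cost of one more forward-sized solve.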
intro.md (6 changes)
@@ -82,6 +82,12 @@ See also... Test link: {doc}`supervised`
 ---
 
 
+## TODOs , include
+
+- general motivation: repeated solves in classical solvers -> potential for ML
+- PINNs: often need weighting of added loss terms for different parts
+
+
 ## TODOs , Planned content
 
 Loose collection of notes and TODOs:
@@ -23,14 +23,14 @@
 },
 {
 "cell_type": "code",
-"execution_count": 7,
+"execution_count": 17,
 "metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"Using phiflow version: 2.0.0\n"
+"Using phiflow version: 2.0.0rc0\n"
 ]
 }
 ],
@@ -156,7 +156,7 @@
 {
 "data": {
 "text/plain": [
-"[<matplotlib.lines.Line2D at 0x7faed7f5eeb0>]"
+"[<matplotlib.lines.Line2D at 0x7f7ef47d8160>]"
 ]
 },
 "execution_count": 5,
@@ -177,9 +177,8 @@
 }
 ],
 "source": [
-"# we only need \"velocity.data\" from each phiflow state\n",
-"#vels = [v.values.numpy('x') for v in velocities]\n",
-"vels = [v.values.numpy('x,vector') for v in velocities] # vel vx\n",
+"# get \"velocity.values\" from each phiflow state with a channel dimension, i.e. \"vector\"\n",
+"vels = [v.values.numpy('x,vector') for v in velocities] # gives a list of 2D arrays \n",
 "\n",
 "import pylab\n",
 "\n",
@@ -203,14 +202,14 @@
 },
 {
 "cell_type": "code",
-"execution_count": 6,
+"execution_count": 16,
 "metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"Vels array shape: (128, 33, 1)\n"
+"resulting image size(128, 528)\n"
 ]
 },
 {
@@ -230,18 +229,18 @@
 "def show_state(a):\n",
 "  # we only have 33 time steps, blow up by a factor of 2^4 to make it easier to see\n",
 "  # (could also be done with more evaluations of network)\n",
+"  a=np.expand_dims(a, axis=2)\n",
 "  for i in range(4):\n",
 "    a = np.concatenate( [a,a] , axis=2)\n",
 "\n",
 "  a = np.reshape( a, [a.shape[0],a.shape[1]*a.shape[2]] )\n",
-"  #print(\"resulting image size\" +format(a.shape))\n",
+"  print(\"resulting image size\" +format(a.shape))\n",
+"\n",
 "  fig, axes = pylab.subplots(1, 1, figsize=(16, 5))\n",
 "  im = axes.imshow(a, origin='upper', cmap='inferno')\n",
 "  pylab.colorbar(im) \n",
 "  \n",
-"#vels_img = np.asarray( np.stack(vels), dtype=np.float32 ).transpose() # no component channel\n",
-"vels_img = np.asarray( np.concatenate(vels, axis=-1), dtype=np.float32 ) # vel vx\n",
-"vels_img = np.reshape(vels_img, list(vels_img.shape)+[1] ) ; print(\"Vels array shape: \"+format(vels_img.shape))\n",
+"vels_img = np.asarray( np.concatenate(vels, axis=-1), dtype=np.float32 ) \n",
 "\n",
 "# save for comparison with reconstructions later on\n",
 "np.savez_compressed(\"./temp/burgers-groundtruth-solution.npz\", np.reshape(vels_img,[N,STEPS+1])) # remove batch & channel dimension\n",
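For context, the `show_state` changes in the hunk above amount to widening each of the 33 time steps into a 16-pixel column before `imshow`, which explains the new `resulting image size(128, 528)` output. A standalone sketch of just that reshaping, with a random stand-in array instead of the notebook's velocities:

```python
import numpy as np

# Stand-in for the (N, STEPS+1) array of 1D Burgers velocities over time.
a = np.random.rand(128, 33)

# Widen each time step by 2^4 = 16x, as in the notebook's show_state():
a = np.expand_dims(a, axis=2)            # (128, 33, 1)
for _ in range(4):
    a = np.concatenate([a, a], axis=2)   # (128, 33, 2) -> ... -> (128, 33, 16)
a = np.reshape(a, [a.shape[0], a.shape[1] * a.shape[2]])

print("resulting image size", a.shape)   # matches the notebook's (128, 528)
```

Row-major reshape of `(128, 33, 16)` lays the 16 copies of each time step next to each other, so every state becomes a visible 16-pixel-wide column.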