diffphys code cleanup
commit 82800c94ed (parent 91f2656a0a)
@@ -89,9 +89,9 @@
    "source": [
     "## Gradients\n",
     "\n",
-    "The `record_gradients` function of phiflow triggers the generation of a gradient tape to compute gradients of a simulation via `math.gradients(loss, values)`.\n",
+    "The `math.gradient` operation of phiflow generates a gradient function for a scalar loss, and we use it below to compute gradients of a whole simulation with the chosen number of 32 time steps.\n",
     "\n",
-    "To use it for the Burgers case we need to specify a loss function: we want the solution at $t=0.5$ to match the reference data. Thus we simply compute an $L^2$ difference between step number 16 and our constraint array as `loss`. Afterwards, we evaluate the gradient of the initial velocity state `velocity` with respect to this loss."
+    "To use it for the Burgers case we need to compute an appropriate loss: we want the solution at $t=0.5$ to match the reference data. Thus we simply compute an $L^2$ difference between step number 16 and our constraint array as `loss`. Afterwards, we evaluate the gradient function of the initial velocity state `velocity` with respect to this loss. Phiflow's `math.gradient` generates a function that returns a gradient for each parameter, and as we only have a single one here, in the form of the velocity, `grad[0]` represents the gradient for the initial velocity."
     ]
    },
    {
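For reference, a minimal sketch of the workflow this cell describes, not the notebook's exact code: it assumes a differentiable backend (e.g. `phi.torch.flow`), hypothetical values for the resolution, viscosity and domain, placeholder grids for the initial state `velocity` and the constraint `SOLUTION_T16` (which the notebook defines in earlier cells), and that `math.gradient` returns the function output together with one gradient per parameter (check the phiflow docs for the exact signature).

from phi.torch.flow import *   # assumption: a backend with autodiff (torch / tf / jax)

N = 128                        # hypothetical grid resolution
STEPS, DT = 32, 1.0 / 32.0     # 32 time steps, so step 16 corresponds to t = 0.5
NU = 0.01                      # hypothetical viscosity

# placeholder initial state and constraint; in the notebook these come from earlier cells
velocity     = CenteredGrid(Noise(), extrapolation.PERIODIC, x=N, bounds=Box(x=2))
SOLUTION_T16 = CenteredGrid(0.0,     extrapolation.PERIODIC, x=N, bounds=Box(x=2))

def loss_function(velocity):
    # run the full Burgers simulation, keeping all intermediate states
    velocities = [velocity]
    for _ in range(STEPS):
        v1 = diffuse.explicit(1.0 * velocities[-1], NU, DT)
        v2 = advect.semi_lagrangian(v1, v1, DT)
        velocities.append(v2)
    # L^2 difference between the state after 16 steps (t = 0.5) and the constraint
    loss = field.l2_loss(velocities[16] - SOLUTION_T16)
    return loss, velocities

gradient_function = math.gradient(loss_function)        # gradient function for the scalar loss
(loss, velocities), grad = gradient_function(velocity)  # assumed return structure
velocity_grad = grad[0]   # gradient w.r.t. the only parameter, the initial velocity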
@@ -609,7 +609,8 @@
     "\n",
     "* You can try to adjust the training parameters to further improve the reconstruction.\n",
     "* Activate a different optimizer, and observe the changing (not necessarily improved) convergence behavior.\n",
-    "* Vary the number of steps, or the resolution of the simulation and reconstruction.\n"
+    "* Vary the number of steps, or the resolution of the simulation and reconstruction.\n",
+    "* Try adding `@jit_compile` in a line before `loss_function`. This will include a one-time compilation cost, but greatly speed up the optimization.\n"
     ]
    }
   ],
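As a rough illustration of the new bullet, reusing the hypothetical names from the sketch after the first hunk and simplified to return only the scalar loss; `jit_compile` comes with the same phiflow star import, though the exact tracing behavior should be checked against the docs.

@jit_compile   # one-time tracing/compilation cost, then much faster repeated evaluations
def loss_function(velocity):
    v = velocity
    for step in range(STEPS):
        v = diffuse.explicit(1.0 * v, NU, DT)
        v = advect.semi_lagrangian(v, v, DT)
        if step == 15:                       # state after 16 steps, i.e. t = 0.5
            loss = field.l2_loss(v - SOLUTION_T16)
    return loss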
@@ -618,16 +618,6 @@
     "outputId": "e38b8b33-7d6f-40e8-ce64-0250c908db7a"
    },
    "outputs": [
-    {
-     "output_type": "stream",
-     "name": "stderr",
-     "text": [
-      "/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:16: DeprecationWarning: Domain is deprecated and will be removed in a future release. Use a dict instead, e.g. CenteredGrid(values, extrapolation, **domain_dict)\n",
-      " app.launch_new_instance()\n",
-      "/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:16: FutureWarning: Domain is deprecated and will be removed in a future release. Use a dict instead, e.g. CenteredGrid(values, extrapolation, **domain_dict)\n",
-      " app.launch_new_instance()\n"
-     ]
-    },
     {
      "output_type": "stream",
      "name": "stdout",
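The removed stderr output documents an API migration rather than an error: `Domain` objects are deprecated in favor of passing the domain settings directly to the grid constructor, as the warning itself suggests. A rough sketch of that pattern, with made-up resolution and bounds:

from phi.flow import *

# deprecated style (roughly what triggered the warning):
#   DOMAIN = Domain(x=128, bounds=Box[0:2])
# suggested replacement: pass the domain settings as a dict / keyword arguments
domain_dict = dict(x=128, bounds=Box(x=2))     # hypothetical resolution and bounds
velocity = CenteredGrid(Noise(), extrapolation.PERIODIC, **domain_dict)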
|