update diffphys ns text

NT 2021-01-18 18:52:45 +08:00
parent 1b939ac205
commit 33a648e451


@@ -19,7 +19,7 @@
" \\frac{\\partial d}{\\partial{t}} + \\mathbf{u} \\cdot \\nabla d = 0 \n",
"$\n",
"\n",
"TODO, overview of reconstruction!\n",
"As optimization objective we'll consider a more difficult variant of the previous example: the state of the observed density $d$ should match a given target after $n=20$ steps of simulation. In contrast to before, the marker $d$ cannot be modified in any way, but only the initial state of the velocity $\\mathbf{u}$ at $t=0$. This gives us a split between observable quantities for the loss formulation, and quantities that we can interact with during the optimization (or later on via NNs).\n",
"\n",
"First, let's get the loading of python modules out of the way:"
]
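To make the split between observed and controlled quantities concrete, the objective described above can be written as a minimization over the initial velocity only. This is a hedged formulation; the notation $d^{\text{target}}$ and the $L^2$ form are inferred from the loss defined later in this notebook:

```latex
% Only u(t=0) is a degree of freedom; the density d is merely observed
% and enters the objective after n = 20 simulation steps.
\mathbf{u}^{*}(t{=}0) = \arg\min_{\mathbf{u}(t=0)}
    \big\| \, d(t_n) - d^{\text{target}} \big\|_2^2 ,
\qquad n = 20 .
```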
@@ -45,9 +45,7 @@
"source": [
"## Setting up the simulation\n",
"\n",
"Like before ... TODO\n",
"\n",
"But now let's set up four fluid simulations that run in parallel, i.e. a mini batch similar to DL training. In phiflow we can directly pass a `batch_size=4` parameter to the `Fluid` object. Each fluid simulation is fully independent. In this case they differ by having circular Inflows at different locations.\n",
"To make things a bit more interesting - and to move a bit closer to a NN training process - let's set up of four fluid simulations that run in parallel, i.e. a mini batch similar to DL training. In phiflow we can directly pass a `batch_size=4` parameter to the `Fluid` object. Each fluid simulation is fully independent. In this case they differ by having circular Inflows at different locations.\n",
"\n",
"Like before, let's plot the marker density after a few steps of simulation (each call to `step()` now updates all four simulations). Note that the boundaries between the four simulations are not visible in the image, but it shows four completely separate density states. The different inflow positions in conjunction with the solid wall boundaries (zero Dirichlet for velocity, and Neumann for pressure), result in four different end states of the simulation."
]
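As a rough illustration of such a batched setup, here is a minimal sketch in phiflow 1.x style. Only the `batch_size=4` argument of `Fluid` is taken from the text above; the names `Domain`, `Inflow`, `Sphere` and `IncompressibleFlow` follow the phiflow 1.x demos, and the grid size, inflow centers, and broadcasting of a per-batch center array are assumptions:

```python
from phi.flow import *  # phiflow 1.x, NumPy backend
import numpy as np

# One Fluid state holding a mini batch of four independent simulations.
fluid = world.add(Fluid(Domain([40, 32], boundaries=CLOSED),
                        buoyancy_factor=0.2, batch_size=4),
                  physics=IncompressibleFlow())

# Circular inflows: one center per batch entry (positions made up here,
# assuming the center array broadcasts over the batch dimension).
centers = np.array([[5, 4], [5, 12], [5, 20], [5, 28]])
world.add(Inflow(Sphere(center=centers, radius=3), rate=0.2))

for _ in range(10):
    world.step(dt=1.5)  # a single call advances all four simulations
```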
@@ -98,7 +96,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we see four simulations, 0 .. 3. Simulation `0`, with the curved plume on the far left, will be our reference, while the other three will be modified in the optimization procedure below."
"Now we see four simulations, (0) to (3). This final density of simulation (0), with the curved plume on the far left, will be our **reference state**, while the initial velocity of the other three will be modified in the optimization procedure below."
]
},
{
@@ -109,10 +107,10 @@
"source": [
"## Differentiation\n",
"\n",
"The simulation we just computed was using purely NumPy (non-differentiable) operations.\n",
"The simulation we just computed was using purely (non-differentiable) operations from numpy.\n",
"To enable differentiability, we need to build a TensorFlow graph that computes this result.\n",
"\n",
"(Note, the first line is only necessary when running in environments that by default have newer tensorflow versions installed, e.g., `colab`. Uncomment if you're running this notebook there.)"
"(Note, the first line of the next block, `%tensorflow_version 1.x` is only necessary when running in environments that by default have newer tensorflow versions installed, e.g., `colab`. Uncomment if you're running this notebook there.)"
]
},
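As a minimal sketch of this setup step (the magic line is the one quoted from the note above; everything else is generic TF 1.x):

```python
# On colab, where TF 2.x is the default runtime, uncomment the magic below
# *before* the first tensorflow import so the 1.x graph-mode API is active.
# %tensorflow_version 1.x
import tensorflow as tf

print(tf.__version__)  # should report a 1.x version for this notebook
```

After this, the notebook imports phiflow's TensorFlow variant (referred to as `phi.tf.fluid` below), which builds its simulation operations as TF graph nodes instead of NumPy calls.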
{
@@ -163,7 +161,7 @@
"id": "3mpyowRYUSS4"
},
"source": [
"Let's set up the simulation just like before. But now, we want to optimize the initial velocities so that all simulations arrive at a final state that is similar to the first simulation from the previous example. I.e., the state shown in the left-most image above.\n",
"Now that we have imported `phi.tf.fluid`, let's set up the simulation just like before. But now, we want to start from a velocity that we can modify, i.e. a variable. Then we can optimize these initial velocities so that all simulations arrive at a final state that is similar to the first simulation from the previous example. I.e., the state shown in the left-most image above.\n",
"\n",
"This is a fairly tough task: we're producing diffent dynamics by changing the boundary conditions (the marker inflow position), and an optimizer should now find a single initial velocity state, that gives the same state as simulation `0` above at $t=30$. Thus, after 20 steps with $\\Delta t=1.5$ the simulation should reproduce a different set of boundary conditions from the velocity state. It would be much easier to simply change the position of the marker inflow to arrive at this goal, but -- to make things a bit more difficult -- the inflow is _not_ a degree of freedom. The optimizer can only change the velocity $\\mathbf{u}$ at time $t=0$.\n",
"\n",
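A minimal sketch of what "a velocity that we can modify" amounts to in TF 1.x terms; the name `initial_velocity` and the tensor shape are purely illustrative, not the notebook's actual code:

```python
import numpy as np
import tensorflow as tf  # TF 1.x

# The optimizer may only touch u at t=0, so the initial velocity becomes a
# trainable variable; all 20 solver steps downstream remain functions of it.
# Illustrative shape: (batch, y, x, components) for a 2D velocity field.
initial_velocity = tf.Variable(np.zeros([4, 40, 32, 2], dtype=np.float32))
```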
@@ -313,7 +311,9 @@
"source": [
"When calling `session.run` now, the full simulation is evaluated using TensorFlow operations.\n",
"This will take advantage of your GPU, if available.\n",
"If you compile Φ<sub>Flow</sub> with [CUDA support](https://github.com/tum-pbs/PhiFlow/blob/master/documentation/Installation_Instructions.md), the TensorFlow graph will use optimized operators for efficient simulation and training runs."
"If you compile Φ<sub>Flow</sub> with [CUDA support](https://github.com/tum-pbs/PhiFlow/blob/master/documentation/Installation_Instructions.md), the TensorFlow graph will also use optimized operators for efficient simulation and training runs.\n",
"\n",
"The `session.run()` call of the following code block will now retrieve the final fluid density, and for that it actually needs to process all 20 simulation steps of the TF graph that we just constructed."
]
},
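A minimal sketch of such an evaluation, assuming a hypothetical graph node `final_density` that holds the marker density after the 20th solver step:

```python
import tensorflow as tf  # TF 1.x

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    # Fetching the final density forces TF to execute all 20 steps.
    d_final = session.run(final_density)

print(d_final.shape)  # one density field per entry of the mini batch
```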
{
@@ -366,9 +366,9 @@
},
"source": [
"Next, we define the *loss* function. This is the value we want to decrease via optimization.\n",
"For this example, we want the marker densities of all final simulation states to match the left-most one, called `target`.\n",
"For this example, we want the marker densities of all final simulation states to match the left-most one, called `target`, in terms of an $L^2$ norm.\n",
"\n",
"For the optimizer, we choose gradient descent for this example."
"For the optimizer, we choose again gradient descent for this example."
]
},
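A hedged sketch of this loss and optimizer setup in TF 1.x, reusing the hypothetical names `final_density` and `initial_velocity` from the sketches above; batch entry 0 serves as the reference, as described earlier:

```python
import tensorflow as tf  # TF 1.x

# Tile simulation 0's final density across the batch and detach it from the
# gradient computation so it acts as a fixed target.
target = tf.stop_gradient(tf.tile(final_density[0:1], [4, 1, 1, 1]))
loss = tf.reduce_sum(tf.square(final_density - target))  # L2 objective

# Plain gradient descent, acting only on the initial velocity variable.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.02)
train_op = optimizer.minimize(loss, var_list=[initial_velocity])
```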
{