minor correction in DP chapter

NT 2021-07-21 16:14:08 +02:00
parent 68ec5820df
commit e57749de3a
2 changed files with 24 additions and 13 deletions

View File

@@ -120,7 +120,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 5,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
@@ -138,15 +138,14 @@
}
],
"source": [
"import os, sys, logging, argparse, pickle, glob, random, distutils.dir_util\n",
"import os, sys, logging, argparse, pickle, glob, random, distutils.dir_util, urllib.request\n",
"\n",
"if not os.path.isfile('sol-data-karman-2d-train.pickle'):\n",
" import urllib.request\n",
" url=\"https://ge.in.tum.de/download/2020-solver-in-the-loop/sol-karman-2d-data.pickle\"\n",
"fname_train = 'sol-data-karman-2d-train.pickle'\n",
"if not os.path.isfile(fname_train):\n",
" print(\"Downloading training data (73MB), this can take a moment the first time...\")\n",
" urllib.request.urlretrieve(url, 'sol-data-karman-2d-train.pickle')\n",
" urllib.request.urlretrieve(\"https://ge.in.tum.de/download/2020-solver-in-the-loop/\"+fname_train, fname_train)\n",
"\n",
"with open('sol-data-karman-2d-train.pickle', 'rb') as f: dataPreloaded = pickle.load(f)\n",
"with open(fname_train, 'rb') as f: dataPreloaded = pickle.load(f)\n",
"print(\"Loaded data, {} training sims\".format(len(dataPreloaded)) )\n"
]
},
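For readability, here is the refactored download cell from the hunk above, decoded from its JSON-escaped notebook form into plain Python (imports trimmed to the ones this snippet actually uses):

```python
import os, pickle, urllib.request

# Derive both the download URL and the local filename from one variable,
# as this commit introduces, instead of repeating the string literal.
fname_train = 'sol-data-karman-2d-train.pickle'
if not os.path.isfile(fname_train):
    print("Downloading training data (73MB), this can take a moment the first time...")
    urllib.request.urlretrieve("https://ge.in.tum.de/download/2020-solver-in-the-loop/"+fname_train, fname_train)

with open(fname_train, 'rb') as f: dataPreloaded = pickle.load(f)
print("Loaded data, {} training sims".format(len(dataPreloaded)))
```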
@@ -161,7 +160,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
@@ -169,7 +168,19 @@
"id": "BGN4GqxkIueM",
"outputId": "095adbf8-1ef6-41fd-938e-6cafcf0fdfdc"
},
"outputs": [],
"outputs": [
{
"ename": "ModuleNotFoundError",
"evalue": "No module named 'phi.tf.util'",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mModuleNotFoundError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-4-7310d213a1a8>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0mphi\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtf\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mflow\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0;34m*\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 5\u001b[0;31m \u001b[0;32mimport\u001b[0m \u001b[0mphi\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtf\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mutil\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 6\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 7\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mtensorflow\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mtf\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mModuleNotFoundError\u001b[0m: No module named 'phi.tf.util'"
]
}
],
"source": [
"#!pip install --upgrade --quiet phiflow\n",
"#%tensorflow_version 1.x\n",

View File

@@ -201,9 +201,9 @@ $$
$$
As a simple example of an inverse problem and learning task, let's consider the problem of
- finding a unknown motion $\mathbf{u}$:
+ finding an unknown motion $\mathbf{u}$:
this motion should transform a given initial scalar density state $d^{~0}$ at time $t^0$
- into state that's evolved by $\mathcal P$ to a later "end" time $t^e$
+ into a state that's evolved by $\mathcal P$ to a later "end" time $t^e$
with a certain shape or configuration $d^{\text{target}}$.
Informally, we'd like to find a motion that deforms $d^{~0}$ through the PDE model into a target state.
The simplest way to express this goal is via an $L^2$ loss between the two states. So we want
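To make this inverse-problem setup concrete, here is a minimal, self-contained sketch: `P` below is a hypothetical stand-in for the PDE model (a periodic shift of the density by a scalar motion $u$), not the chapter's actual solver, and the gradient of the $L^2$ loss is approximated by finite differences rather than backpropagation:

```python
import numpy as np

# Toy stand-in for the physical model P: shift the density d0 by a
# scalar motion u on a periodic 1D grid, via linear interpolation.
def P(d0, u):
    n = len(d0)
    x = (np.arange(n) - u) % n
    i0 = np.floor(x).astype(int)
    frac = x - i0
    return (1 - frac) * d0[i0 % n] + frac * d0[(i0 + 1) % n]

# L2 loss between the evolved state and the target state.
def loss(u, d0, d_target):
    return np.sum((P(d0, u) - d_target) ** 2)

cells = np.arange(64)
d0       = np.exp(-0.5 * ((cells - 20.0) / 4.0) ** 2)  # initial blob at x=20
d_target = np.exp(-0.5 * ((cells - 30.0) / 4.0) ** 2)  # target blob at x=30

# Gradient descent on u with a central finite-difference gradient.
u, lr, eps = 0.0, 2.0, 1e-4
for _ in range(200):
    g = (loss(u + eps, d0, d_target) - loss(u - eps, d0, d_target)) / (2 * eps)
    u -= lr * g
print(f"recovered motion u = {u:.3f}  (true shift is 10)")
```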
@@ -276,7 +276,7 @@ so that we can be more specific.
### Introducing a specific advection scheme
In the following we'll make use of a simple [first order upwinding scheme](https://en.wikipedia.org/wiki/Upwind_scheme)
- on a Cartesian grid in 1D, with marker density and velocity $d_i$ and $u_i$ for cell $i$.
+ on a Cartesian grid in 1D, with marker density $d_i$ and velocity $u_i$ for cell $i$.
We omit the $(t)$ for quantities at time $t$ for brevity, i.e., $d_i(t)$ is written as $d_i$ below.
From above, we'll use our _physical model_ that updates the marker density
$d_i(t+\Delta t) = \mathcal P ( d_i(t), \mathbf{u}(t), t + \Delta t)$, which
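As a concrete illustration, here is a minimal sketch of one such first-order upwind step. The periodic boundaries and per-cell velocities are assumptions made for compactness; the chapter's exact discretization may differ in details:

```python
import numpy as np

# One first-order upwind advection step for marker density d on a 1D
# periodic grid: difference backward where u>0, forward where u<0.
def advect_upwind(d, u, dt, dx):
    c = u * dt / dx                 # per-cell CFL number u_i * dt/dx
    d_m = np.roll(d, 1)             # d_{i-1}
    d_p = np.roll(d, -1)            # d_{i+1}
    return d - np.where(u > 0, c * (d - d_m), c * (d_p - d))

d = np.zeros(64); d[10:20] = 1.0    # square pulse as marker density
u = np.full(64, 0.5)                # uniform rightward velocity
for _ in range(40):                 # 40 applications of P, CFL = 0.25
    d = advect_upwind(d, u, dt=0.5, dx=1.0)
# the pulse's center of mass has moved by 40 * 0.25 = 10 cells
print("center of mass:", (np.arange(64) * d).sum() / d.sum())
```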
@@ -329,7 +329,7 @@ $d$ from time $t$ to $t+\Delta t$, but we could of course have an arbitrary number of
steps. After all, above we stated the goal to advance the initial marker state $d(t^0)$ to
the target state at time $t^e$, which could encompass a long interval of time.
- In the expression above for $d_i(t+\Delta t)$, each of the $d_i(t)$ in turn depend
+ In the expression above for $d_i(t+\Delta t)$, each of the $d_i(t)$ in turn depends
on the velocity and density states at time $t-\Delta t$, i.e., $d_i(t-\Delta t)$. Thus we have to trace back
the influence of our loss $L$ all the way back to how $\mathbf{u}$ influences the initial marker
state. This can involve a large number of evaluations of our advection scheme via $\mathcal P$.
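The following sketch makes this chain of dependencies explicit. It unrolls $N$ upwind steps, assuming for compactness a uniform positive velocity so each step reduces to `d - c*(d - roll(d, 1))` with one scalar `c`, then walks the chain backwards with a hand-written vector-Jacobian product, which is exactly what a differentiable solver automates; a finite-difference check confirms the gradient:

```python
import numpy as np

def step(d, c):                     # forward: one upwind step, u>0 assumed
    return d - c * (d - np.roll(d, 1))

def step_vjp(d, c, gbar):           # reverse: vector-Jacobian product of one step
    gd = (1 - c) * gbar + c * np.roll(gbar, -1)   # dL/dd(t) from dL/dd(t+dt)
    gc = -np.dot(gbar, d - np.roll(d, 1))         # this step's dL/dc contribution
    return gd, gc

N, c = 40, 0.25
d0 = np.zeros(64); d0[10:20] = 1.0
target = np.roll(d0, 10)            # the "end time" state we want to reach

states = [d0]                       # forward pass: unroll N steps, store states
for _ in range(N):
    states.append(step(states[-1], c))

gbar = 2 * (states[-1] - target)    # dL/dd(t^e) for L = sum (d - target)^2
gc_total = 0.0
for n in range(N - 1, -1, -1):      # backward pass: walk the chain in reverse
    gbar, gc = step_vjp(states[n], c, gbar)
    gc_total += gc                  # every step contributes to dL/dc

print("dL/dc via backprop:      ", gc_total)

def L(c):                           # finite-difference check of the gradient
    d = d0
    for _ in range(N): d = step(d, c)
    return np.sum((d - target) ** 2)
eps = 1e-6
print("dL/dc finite difference: ", (L(c + eps) - L(c - eps)) / (2 * eps))
```

Since $c = u \Delta t / \Delta x$ here, scaling `gc_total` by $\Delta t / \Delta x$ gives the gradient with respect to the velocity itself.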