upd intro parabola, cleanup
5
_toc.yml
@@ -1,6 +1,8 @@
# PBDL Table of content (cf https://jupyterbook.org/customize/toc.html)
#
- file: intro
- file: intro.md
  sections:
  - file: intro-teaser.ipynb
- file: overview.md
  sections:
  - file: overview-equations.md
@@ -21,6 +23,7 @@
- file: diffphys-discuss.md
- file: diffphys-code-ns.ipynb
- file: diffphys-code-sol.ipynb
- file: diffphys-outlook.md
- file: jupyter-book-reference
  sections:
  - file: markdown
@@ -9,15 +9,16 @@
"# Differentiable Fluid Simulations\n",
"\n",
"Next, we'll target a more complex example with the Navier-Stokes equations as model. We'll target a 2D case with velocity $\\mathbf{u}$, no explicit viscosity term, and a marker density $d$ that drives a simple Boussinesq buoyancy term $\\eta d$ adding a force along the y dimension:\n",
"$\n",
" \\frac{\\partial u_x}{\\partial{t}} + \\mathbf{u} \\cdot \\nabla u_x = - \\frac{1}{\\rho} \\nabla p \n",
"\n",
"$\\begin{aligned}\n",
" \\frac{\\partial u_x}{\\partial{t}} + \\mathbf{u} \\cdot \\nabla u_x &= - \\frac{1}{\\rho} \\nabla p \n",
" \\\\\n",
" \\frac{\\partial u_y}{\\partial{t}} + \\mathbf{u} \\cdot \\nabla u_y = - \\frac{1}{\\rho} \\nabla p + \\eta d\n",
" \\frac{\\partial u_y}{\\partial{t}} + \\mathbf{u} \\cdot \\nabla u_y &= - \\frac{1}{\\rho} \\nabla p + \\eta d\n",
" \\\\\n",
" \\text{subject to} \\quad \\nabla \\cdot \\mathbf{u} = 0,\n",
" \\text{s.t.} \\quad \\nabla \\cdot \\mathbf{u} &= 0,\n",
" \\\\\n",
" \\frac{\\partial d}{\\partial{t}} + \\mathbf{u} \\cdot \\nabla d = 0 \n",
"$\n",
" \\frac{\\partial d}{\\partial{t}} + \\mathbf{u} \\cdot \\nabla d &= 0 \n",
"\\end{aligned}$\n",
"\n",
"As optimization objective we'll consider a more difficult variant of the previous example: the state of the observed density $d$ should match a given target after $n=20$ steps of simulation. In contrast to before, the marker $d$ cannot be modified in any way, but only the initial state of the velocity $\\mathbf{u}$ at $t=0$. This gives us a split between observable quantities for the loss formulation, and quantities that we can interact with during the optimization (or later on via NNs).\n",
"\n",
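The inverse problem sketched in this cell (only the initial velocity $\mathbf{u}$ at $t=0$ is free, while the loss observes the density $d$ after $n=20$ steps) can be illustrated with a purely linear toy simulator in NumPy. This is a hedged sketch, not the notebook's phiflow code: the matrix `A` stands in for one differentiable solver step, and backpropagation through the unrolled chain reduces to repeated multiplication by `A.T`:

```python
import numpy as np

# Toy differentiable "simulation": one step applies a fixed linear smoothing
# operator A, so n steps are A^n. The gradient of the final-state loss with
# respect to the initial state is obtained by multiplying by A^T per step,
# mimicking backprop through an unrolled solver.
rng = np.random.default_rng(0)
N, n_steps = 16, 20
A = (np.eye(N) * 0.5
     + np.diag(np.full(N - 1, 0.25), 1)
     + np.diag(np.full(N - 1, 0.25), -1))

target = rng.normal(size=N)   # observed target state after n_steps
u0 = np.zeros(N)              # initial state, the only free variable

def forward(u0):
    d = u0.copy()
    for _ in range(n_steps):
        d = A @ d
    return d

for it in range(200):         # gradient descent on 1/2 ||d_n - target||^2
    grad = forward(u0) - target
    for _ in range(n_steps):  # backward pass: one A^T per unrolled step
        grad = A.T @ grad
    u0 -= 0.5 * grad

loss = 0.5 * np.sum((forward(u0) - target) ** 2)
```

High-frequency components of the target are strongly damped by `A**20` and remain unreachable, so the loss decreases but does not vanish; this mirrors the fact that not every observed density state is attainable from an initial velocity alone.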
@@ -1,17 +1,4 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "diffphys-code-sol.ipynb",
"provenance": [],
"collapsed_sections": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"cells": [
{
"cell_type": "markdown",
@@ -113,6 +100,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
@@ -120,6 +108,16 @@
"id": "JwZudtWauiGa",
"outputId": "bd3a4a4d-706f-4210-ee4e-ca4e370b762d"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading training data (73MB), this can take a moment the first time...\n",
"Loaded data, 6 training sims\n"
]
}
],
"source": [
"import os, sys, logging, argparse, pickle, glob, random, distutils.dir_util\n",
"\n",
@@ -131,17 +129,6 @@
"\n",
"with open('data-karman2d-train.pickle', 'rb') as f: dataPreloaded = pickle.load(f)\n",
"print(\"Loaded data, {} training sims\".format(len(dataPreloaded)) )\n"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"text": [
"Downloading training data (73MB), this can take a moment the first time...\n",
"Loaded data, 6 training sims\n"
],
"name": "stdout"
}
]
},
{
@@ -155,6 +142,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
@@ -162,6 +150,36 @@
"id": "BGN4GqxkIueM",
"outputId": "095adbf8-1ef6-41fd-938e-6cafcf0fdfdc"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[K |████████████████████████████████| 2.7MB 4.3MB/s \n",
"\u001b[?25h Building wheel for phiflow (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
"TensorFlow 1.x selected.\n",
"Could not load resample cuda libraries: CUDA binaries not found at /usr/local/lib/python3.6/dist-packages/phi/tf/cuda/build/resample.so. Run \"python setup.py cuda\" to compile them\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/phi/tf/util.py:119: The name tf.AUTO_REUSE is deprecated. Please use tf.compat.v1.AUTO_REUSE instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/phi/tf/profiling.py:12: The name tf.RunOptions is deprecated. Please use tf.compat.v1.RunOptions instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/phi/tf/profiling.py:13: The name tf.RunMetadata is deprecated. Please use tf.compat.v1.RunMetadata instead.\n",
"\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python3.6/dist-packages/phi/viz/display.py:80: UserWarning: GUI is disabled because of missing dependencies: No module named 'dash_core_components'. To install all dependencies, run $ pip install phiflow[gui]\n",
" warnings.warn('GUI is disabled because of missing dependencies: %s. To install all dependencies, run $ pip install phiflow[gui]' % import_error)\n",
"/usr/local/lib/python3.6/dist-packages/phi/tf/flow.py:15: UserWarning: TensorFlow-CUDA solver is not available. To compile it, download phiflow sources and run\n",
"$ python setup.py tf_cuda\n",
"before reinstalling phiflow.\n",
" warnings.warn(\"TensorFlow-CUDA solver is not available. To compile it, download phiflow sources and run\\n$ python setup.py tf_cuda\\nbefore reinstalling phiflow.\")\n"
]
}
],
"source": [
"!pip install --upgrade --quiet phiflow\n",
"%tensorflow_version 1.x\n",
@@ -175,37 +193,6 @@
"random.seed(42)\n",
"np.random.seed(42)\n",
"tf.compat.v1.set_random_seed(42)\n"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"text": [
"\u001b[K |████████████████████████████████| 2.7MB 4.3MB/s \n",
"\u001b[?25h Building wheel for phiflow (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
"TensorFlow 1.x selected.\n",
"Could not load resample cuda libraries: CUDA binaries not found at /usr/local/lib/python3.6/dist-packages/phi/tf/cuda/build/resample.so. Run \"python setup.py cuda\" to compile them\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/phi/tf/util.py:119: The name tf.AUTO_REUSE is deprecated. Please use tf.compat.v1.AUTO_REUSE instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/phi/tf/profiling.py:12: The name tf.RunOptions is deprecated. Please use tf.compat.v1.RunOptions instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/phi/tf/profiling.py:13: The name tf.RunMetadata is deprecated. Please use tf.compat.v1.RunMetadata instead.\n",
"\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"/usr/local/lib/python3.6/dist-packages/phi/viz/display.py:80: UserWarning: GUI is disabled because of missing dependencies: No module named 'dash_core_components'. To install all dependencies, run $ pip install phiflow[gui]\n",
" warnings.warn('GUI is disabled because of missing dependencies: %s. To install all dependencies, run $ pip install phiflow[gui]' % import_error)\n",
"/usr/local/lib/python3.6/dist-packages/phi/tf/flow.py:15: UserWarning: TensorFlow-CUDA solver is not available. To compile it, download phiflow sources and run\n",
"$ python setup.py tf_cuda\n",
"before reinstalling phiflow.\n",
" warnings.warn(\"TensorFlow-CUDA solver is not available. To compile it, download phiflow sources and run\\n$ python setup.py tf_cuda\\nbefore reinstalling phiflow.\")\n"
],
"name": "stderr"
}
]
},
{
@@ -238,9 +225,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6WNMcdWUw4EP"
},
"outputs": [],
"source": [
"class KarmanFlow(IncompressibleFlow):\n",
" def __init__(self, pressure_solver=None, make_input_divfree=False, make_output_divfree=True):\n",
@@ -262,9 +251,7 @@
" fluid = fluid.copied_with(velocity=StaggeredGrid([cy.data, cx.data], fluid.velocity.box))\n",
"\n",
" return super().step(fluid=fluid, dt=dt, obstacles=[self.obst], gravity=gravity, density_effects=[self.infl], velocity_effects=())\n"
],
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "markdown",
@@ -290,9 +277,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "qIrWYTy6xscA"
},
"outputs": [],
"source": [
"def network_small(tensor_in):\n",
" return keras.Sequential([\n",
@@ -301,9 +290,7 @@
" keras.layers.Conv2D(filters=64, kernel_size=5, padding='same', activation=tf.nn.relu),\n",
" keras.layers.Conv2D(filters=2, kernel_size=5, padding='same', activation=None), # u, v\n",
" ])\n"
],
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "markdown",
@@ -316,9 +303,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "TyfpA7Fbx0ro"
},
"outputs": [],
"source": [
"def network_medium(tensor_in):\n",
" l_input = keras.layers.Input(tensor=tensor_in)\n",
@@ -357,9 +346,7 @@
"\n",
" l_output = keras.layers.Conv2D(filters=2, kernel_size=5, padding='same')(block_5)\n",
" return keras.models.Model(inputs=l_input, outputs=l_output)\n"
],
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "markdown",
@@ -376,9 +363,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "hhGFpTjGyRyg"
},
"outputs": [],
"source": [
"def to_keras(fluidstate, ext_const_channel):\n",
" # drop the unused edges of the staggered velocity grid making its dim same to the centered grid's\n",
@@ -392,9 +381,7 @@
"\n",
"def to_staggered(tensor_cen, box):\n",
" return StaggeredGrid(math.pad(tensor_cen, ((0,0), (0,1), (0,1), (0,0))), box=box)\n"
],
"execution_count": null,
"outputs": []
]
},
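`to_keras` and `to_staggered` above only shuffle grid samples: a staggered velocity grid carries one extra sample per spatial dimension, which is dropped before feeding the NN and zero-padded back afterwards (matching the `math.pad(..., ((0,0), (0,1), (0,1), (0,0)))` call). A NumPy shape-level sketch of this round trip, with hypothetical helper names rather than phiflow's API:

```python
import numpy as np

def to_centered(staggered):
    # drop the last row/column of each velocity component so the
    # array matches the centered (cell-sampled) grid resolution
    return staggered[:, :-1, :-1, :]

def to_staggered(centered):
    # zero-pad one row/column back at the upper ends to restore
    # the staggered shape expected by the solver
    return np.pad(centered, ((0, 0), (0, 1), (0, 1), (0, 0)))

vel_staggered = np.random.rand(1, 65, 33, 2)  # e.g. a 64x32 domain, 2 components
vel_centered = to_centered(vel_staggered)     # shape (1, 64, 32, 2)
restored = to_staggered(vel_centered)         # shape (1, 65, 33, 2), padded edge is zero
```

The padded edge is filled with zeros, so the round trip is lossless only for the interior samples; the dropped boundary values are exactly the "unused edges" the notebook comment refers to.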
{
"cell_type": "markdown",
@@ -415,9 +402,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "tjywcdD2y20t"
},
"outputs": [],
"source": [
"class Dataset():\n",
" def __init__(self, data_preloaded, num_frames, num_sims=None, batch_size=1):\n",
@@ -484,9 +473,7 @@
"\n",
" def nextStep(self):\n",
" self.stepIdx += 1\n"
],
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "markdown",
@@ -501,9 +488,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Dfwd4TnqN1Tn"
},
"outputs": [],
"source": [
" # for class Dataset():\n",
"\n",
@@ -535,9 +524,7 @@
" ][0] for i in range(self.batchSize)\n",
" ]\n",
" return [marker_dens, velocity, ext]\n"
],
"execution_count": null,
"outputs": []
]
},
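The `Dataset` code above hands out batches of consecutive simulation frames (returning `[marker_dens, velocity, ext]`). The windowing idea itself, stripped of all phiflow and notebook specifics, can be sketched in plain Python; the function and variable names here are hypothetical:

```python
def consecutive_batches(frames, num_consecutive, batch_size):
    # build all sliding windows of `num_consecutive` consecutive frames,
    # then yield them in groups of `batch_size` (dropping the remainder)
    windows = [frames[i:i + num_consecutive]
               for i in range(len(frames) - num_consecutive + 1)]
    usable = len(windows) - len(windows) % batch_size
    for b in range(0, usable, batch_size):
        yield windows[b:b + batch_size]

frames = list(range(10))  # stand-in for a simulation's stored frames
batches = list(consecutive_batches(frames, num_consecutive=4, batch_size=2))
```

Each training sample thus spans several consecutive frames, which is what allows the loss to compare `msteps` unrolled simulation states against stored references later on.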
{
"cell_type": "markdown",
@@ -550,6 +537,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
@@ -557,6 +545,15 @@
"id": "59EBdEdj0QR2",
"outputId": "8043f090-4e7b-4178-d2d2-513981e3905b"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Data stats: {'std': (2.2194703, (0.32598782, 0.1820292)), 'ext.std': [1732512.6262166172]}\n"
]
}
],
"source": [
"output_dir = \"./\" # TODO create? , replaced params['train'] and params['tf']\n",
"nsims = 6\n",
@@ -567,16 +564,6 @@
"#dataset.newEpoch()\n",
"#print(format(getData(dataset,1)))\n",
"#print(format(dataset.getData(1)))\n"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"text": [
"Data stats: {'std': (2.2194703, (0.32598782, 0.1820292)), 'ext.std': [1732512.6262166172]}\n"
],
"name": "stdout"
}
]
},
{
@@ -594,6 +581,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
@@ -601,6 +589,39 @@
"id": "EjgkdCzKP2Ip",
"outputId": "6b21bd54-15aa-4440-b274-c3a68ab244f8"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"If using Keras pass *_constraint arguments to layers.\n",
"Model: \"sequential\"\n",
"_________________________________________________________________\n",
"Layer (type) Output Shape Param # \n",
"=================================================================\n",
"conv2d (Conv2D) (3, 64, 32, 32) 2432 \n",
"_________________________________________________________________\n",
"conv2d_1 (Conv2D) (3, 64, 32, 64) 51264 \n",
"_________________________________________________________________\n",
"conv2d_2 (Conv2D) (3, 64, 32, 2) 3202 \n",
"=================================================================\n",
"Total params: 56,898\n",
"Trainable params: 56,898\n",
"Non-trainable params: 0\n",
"_________________________________________________________________\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:23: DeprecationWarning: placeholder_like may not respect the batch dimension. For State objects, use placeholder(state.shape) instead.\n",
"/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:24: DeprecationWarning: placeholder_like may not respect the batch dimension. For State objects, use placeholder(state.shape) instead.\n"
]
}
],
"source": [
"# one of the most crucial! how many simulation steps to look into the future while training\n",
"msteps = 4\n",
@@ -631,40 +652,6 @@
"#network = network_medium(to_keras(source_in, Re_in)) # optionally switch to larger network\n",
"network.summary() \n",
"\n"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"text": [
"WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"If using Keras pass *_constraint arguments to layers.\n",
"Model: \"sequential\"\n",
"_________________________________________________________________\n",
"Layer (type) Output Shape Param # \n",
"=================================================================\n",
"conv2d (Conv2D) (3, 64, 32, 32) 2432 \n",
"_________________________________________________________________\n",
"conv2d_1 (Conv2D) (3, 64, 32, 64) 51264 \n",
"_________________________________________________________________\n",
"conv2d_2 (Conv2D) (3, 64, 32, 2) 3202 \n",
"=================================================================\n",
"Total params: 56,898\n",
"Trainable params: 56,898\n",
"Non-trainable params: 0\n",
"_________________________________________________________________\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:23: DeprecationWarning: placeholder_like may not respect the batch dimension. For State objects, use placeholder(state.shape) instead.\n",
"/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:24: DeprecationWarning: placeholder_like may not respect the batch dimension. For State objects, use placeholder(state.shape) instead.\n"
],
"name": "stderr"
}
]
},
{
@@ -682,9 +669,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "D5NeMcLGQaxh"
},
"outputs": [],
"source": [
"prediction, correction = [], []\n",
"for i in range(msteps):\n",
@@ -711,9 +700,7 @@
" ]\n",
"\n",
" prediction[-1] = prediction[-1].copied_with(velocity=prediction[-1].velocity + correction[-1])\n"
],
"execution_count": null,
"outputs": []
]
},
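The cell above unrolls `msteps` solver steps and adds the NN output as a velocity correction after each one, so the corrected state feeds the next step. The control flow, with trivial scalar stand-ins for the differentiable solver and the network (purely illustrative, not the notebook's operators):

```python
import numpy as np

def solver_step(v):
    # stand-in for one differentiable simulation step
    return 0.9 * v

def network(v):
    # stand-in for the trained correction network
    return 0.05 * v

v = np.ones(4)
prediction, correction = [], []
for _ in range(4):            # msteps unrolled steps
    v = solver_step(v)        # advance the simulation
    c = network(v)            # infer a correction from the new state
    correction.append(c)
    v = v + c                 # corrected state is what the next step sees
    prediction.append(v)
```

Because each corrected state is the input of the following step, gradients of a loss on late states flow back through both the solver and the network at every step; this is exactly why the solver itself must be differentiable.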
{
"cell_type": "markdown",
@@ -726,9 +713,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "K2JcO3-QQgC9"
},
"outputs": [],
"source": [
"loss_steps = [\n",
" tf.nn.l2_loss(\n",
@@ -738,9 +727,7 @@
" for i in range(msteps)\n",
"]\n",
"loss = tf.reduce_sum(loss_steps)/msteps\n"
],
"execution_count": null,
"outputs": []
]
},
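`tf.nn.l2_loss(x)` evaluates `sum(x**2) / 2`, so the cell forms half the summed squared error per unrolled step and averages over `msteps`. The same arithmetic in NumPy, with random stand-ins for the predicted and reference velocity tensors:

```python
import numpy as np

msteps = 4
rng = np.random.default_rng(1)
# stand-ins for the msteps predicted and reference velocity states,
# shaped (batch, height, width, components) as in the notebook
pred = [rng.normal(size=(3, 64, 32, 2)) for _ in range(msteps)]
ref = [rng.normal(size=(3, 64, 32, 2)) for _ in range(msteps)]

def l2_loss(x):
    # mirrors tf.nn.l2_loss: sum of squares, halved
    return np.sum(x ** 2) / 2.0

loss_steps = [l2_loss(pred[i] - ref[i]) for i in range(msteps)]
loss = np.sum(loss_steps) / msteps
```

Averaging over the unrolled steps keeps the loss magnitude independent of `msteps`, which makes runs with different unrolling lengths comparable.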
{
"cell_type": "markdown",
@@ -755,9 +742,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PuljFamYQksW"
},
"outputs": [],
"source": [
"lr = 1e-4\n",
"adapt_lr = True\n",
@@ -778,9 +767,7 @@
"if resume>0: \n",
" ld_network = keras.models.load_model(output_dir+'/nn_epoch{:04d}.h5'.format(resume))\n",
" network.set_weights(ld_network.get_weights())\n"
],
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "markdown",
@@ -793,9 +780,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Am3hSdNgRPEh"
},
"outputs": [],
"source": [
"\n",
"def lr_schedule(epoch, current_lr):\n",
@@ -805,9 +794,7 @@
" elif epoch == 15: lr *= 1e-1\n",
" elif epoch == 10: lr *= 1e-1\n",
" return lr\n"
],
"execution_count": null,
"outputs": []
]
},
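The `lr_schedule` cell is truncated in this diff; the visible lines apply a factor of `1e-1` at epochs 15 and 10. A complete step-decay version under that assumption — the full drop list, including an extra drop at epoch 20, is a guess where the diff is cut off:

```python
def lr_schedule(epoch, current_lr, drops=(10, 15, 20), factor=1e-1):
    # step decay: multiply the current learning rate by `factor`
    # when a drop epoch is reached, otherwise keep it unchanged
    lr = current_lr
    if epoch in drops:
        lr *= factor
    return lr

# walking through epochs reproduces the piecewise-constant decay
lr = 1e-4
history = []
for epoch in range(25):
    lr = lr_schedule(epoch, lr)
    history.append(lr)
```

Because the schedule acts on `current_lr` rather than the initial rate, the drops compound: here 1e-4 becomes 1e-5 at epoch 10, 1e-6 at 15, and 1e-7 at 20.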
{
"cell_type": "markdown",
@@ -822,50 +809,17 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "m3Nd8YyHRVFQ",
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "m3Nd8YyHRVFQ",
"outputId": "148d951b-7070-4a95-c6d7-0fd91d29606e"
},
"source": [
"current_lr = lr\n",
"steps = 0\n",
"for j in range(epochs): # training\n",
" dataset.newEpoch(exclude_tail=msteps)\n",
" if j<resume:\n",
" print('resume: skipping {} epoch'.format(j+1))\n",
" steps += dataset.numSteps*dataset.numBatches\n",
" continue\n",
"\n",
" current_lr = lr_schedule(j, current_lr) if adapt_lr else lr\n",
" for ib in range(dataset.numBatches): \n",
" for i in range(dataset.numSteps): \n",
" batch = getData(dataset, consecutive_frames=msteps) # should be dataset.getData\n",
" re_nr = batch[2] # Reynolds numbers\n",
" source = source.copied_with(density=batch[0][0], velocity=batch[1][0])\n",
" reference = [ reference[k].copied_with(density=batch[0][k+1], velocity=batch[1][k+1]) for k in range(msteps) ]\n",
"\n",
" my_feed_dict = { source_in: source, Re_in: re_nr, lr_in: current_lr }\n",
" my_feed_dict.update(zip(reference_in, reference))\n",
" _, l2 = sess.run([train_step, loss], my_feed_dict)\n",
" steps += 1\n",
"\n",
" if (j==0 and i<3) or (j==0 and ib==0 and i%31==0) or (ib==0 and i%124==0):\n",
" print('epoch {:03d}/{:03d}, batch {:03d}/{:03d}, step {:04d}/{:04d}: loss={}'.format( j+1, epochs, ib+1, dataset.numBatches, i+1, dataset.numSteps, l2 ))\n",
" dataset.nextStep()\n",
"\n",
" dataset.nextBatch()\n",
"\n",
" if j%10==9: network.save(output_dir+'/nn_epoch{:04d}.h5'.format(j+1))\n",
"\n",
"# all done! save final version\n",
"network.save(output_dir+'/final.h5')\n"
],
"execution_count": null,
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch 001/004, batch 001/002, step 0001/0496: loss=8114.626953125\n",
@@ -901,9 +855,42 @@
"epoch 004/004, batch 001/002, step 0125/0496: loss=4.158570289611816\n",
"epoch 004/004, batch 001/002, step 0249/0496: loss=4.282064437866211\n",
"epoch 004/004, batch 001/002, step 0373/0496: loss=5.2111334800720215\n"
],
"name": "stdout"
]
}
],
"source": [
"current_lr = lr\n",
"steps = 0\n",
"for j in range(epochs): # training\n",
" dataset.newEpoch(exclude_tail=msteps)\n",
" if j<resume:\n",
" print('resume: skipping {} epoch'.format(j+1))\n",
" steps += dataset.numSteps*dataset.numBatches\n",
" continue\n",
"\n",
" current_lr = lr_schedule(j, current_lr) if adapt_lr else lr\n",
" for ib in range(dataset.numBatches): \n",
" for i in range(dataset.numSteps): \n",
" batch = getData(dataset, consecutive_frames=msteps) # should be dataset.getData\n",
" re_nr = batch[2] # Reynolds numbers\n",
" source = source.copied_with(density=batch[0][0], velocity=batch[1][0])\n",
" reference = [ reference[k].copied_with(density=batch[0][k+1], velocity=batch[1][k+1]) for k in range(msteps) ]\n",
"\n",
" my_feed_dict = { source_in: source, Re_in: re_nr, lr_in: current_lr }\n",
" my_feed_dict.update(zip(reference_in, reference))\n",
" _, l2 = sess.run([train_step, loss], my_feed_dict)\n",
" steps += 1\n",
"\n",
" if (j==0 and i<3) or (j==0 and ib==0 and i%31==0) or (ib==0 and i%124==0):\n",
" print('epoch {:03d}/{:03d}, batch {:03d}/{:03d}, step {:04d}/{:04d}: loss={}'.format( j+1, epochs, ib+1, dataset.numBatches, i+1, dataset.numSteps, l2 ))\n",
" dataset.nextStep()\n",
"\n",
" dataset.nextBatch()\n",
"\n",
" if j%10==9: network.save(output_dir+'/nn_epoch{:04d}.h5'.format(j+1))\n",
"\n",
"# all done! save final version\n",
"network.save(output_dir+'/final.h5')\n"
]
},
{
@@ -933,14 +920,38 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "RumKebW_05xp"
},
"source": [
""
"outputs": [],
"source": []
}
],
"execution_count": null,
"outputs": []
"metadata": {
"colab": {
"collapsed_sections": [],
"name": "diffphys-code-sol.ipynb",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
]
},
"nbformat": 4,
"nbformat_minor": 1
}
418
intro-teaser.ipynb
Normal file
File diff suppressed because one or more lines are too long
16
intro.md
@@ -15,11 +15,12 @@ more tightly coupled learning algorithms with differentiable simulations.
height: 220px
name: pbdl-teaser
---
Some examples ... preview teaser ...
Some visual examples of hybrid solvers, i.e. numerical simulators that are enhanced by trained neural networks.
```
% Teaser, simple version:
% ![Teaser, simple version](resources/teaser.png)

## Coming up

As a _sneak preview_, in the next chapters we'll show:

@@ -38,9 +39,6 @@ and we're eager to improve it. Thanks in advance!

This collection of materials is a living document, and will grow and change over time.
Feel free to contribute 😀

[TUM Physics-based Simulation Group](https://ge.in.tum.de).

We also maintain a [link collection](https://github.com/thunil/Physics-Based-Deep-Learning) with recent research papers.

```{admonition} Code, executable, right here, right now
@@ -52,16 +50,6 @@ immediately see what happens -- give it a try...
Oh, and it's great because it's [literate programming](https://en.wikipedia.org/wiki/Literate_programming).
```

## More Specifically

To be a bit more specific, _physics_ is a huge field, and we can't cover everything...

```{note}
For now our focus is:
- field-based simulations , less Lagrangian
- simulations, not experiments
- combination with _deep learning_ (plenty of other interesting ML techniques)
```

---
@@ -1,10 +1,12 @@
Model Equations
============================

overview of PDE models to be used later on ...
TODO

give an overview of PDE models to be used later on ...
continuous pde $\mathcal P^*$

$\vx \in \Omega \subseteq \mathbb{R}^d$
$\mathbf{x} \in \Omega \subseteq \mathbb{R}^d$
for the domain $\Omega$ in $d$ dimensions,
time $t \in \mathbb{R}^{+}$.

@@ -47,11 +49,8 @@ and the abbreviations used inn: {doc}`notation`, at the bottom of the left panel
% \newcommand{\corr}{\mathcal{C}} % just C for now...
% \newcommand{\nnfunc}{F} % {\text{NN}}

Some notation from SoL, move with parts from overview into "appendix"?

We typically solve a discretized PDE $\mathcal{P}$ by performing discrete time steps of size $\Delta t$.
Each subsequent step can depend on any number of previous steps,
$\mathbf{u}(\mathbf{x},t+\Delta t) = \mathcal{P}(\mathbf{u}(\mathbf{x},t), \mathbf{u}(\mathbf{x},t-\Delta t),...)$,
@@ -83,12 +82,13 @@ $\mathbf{u} = (u_x,u_y,u_z)^T$ for $d=3$.

Burgers' equation in 2D. It represents a well-studied advection-diffusion PDE:

$\frac{\partial u_x}{\partial{t}} + \mathbf{u} \cdot \nabla u_x =
$\begin{aligned}
\frac{\partial u_x}{\partial{t}} + \mathbf{u} \cdot \nabla u_x &=
\nu \nabla\cdot \nabla u_x + g_x(t),
\\
\frac{\partial u_y}{\partial{t}} + \mathbf{u} \cdot \nabla u_y =
\frac{\partial u_y}{\partial{t}} + \mathbf{u} \cdot \nabla u_y &=
\nu \nabla\cdot \nabla u_y + g_y(t)
$,
\end{aligned}$,

where $\nu$ and $\mathbf{g}$ denote diffusion constant and external forces, respectively.
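To make the Burgers system concrete, here is one explicit Euler step of its 1D counterpart, $\partial u/\partial t + u\, \partial u/\partial x = \nu\, \partial^2 u/\partial x^2$, on a periodic grid — a minimal NumPy sketch for illustration, not code used later in the book:

```python
import numpy as np

def burgers_step(u, dx, dt, nu):
    # periodic 1D Burgers: central differences for advection and
    # diffusion, advanced with one explicit Euler step
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    d2udx2 = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * (-u * dudx + nu * d2udx2)

N = 128
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
u = np.sin(x)              # smooth initial condition
dx = x[1] - x[0]
for _ in range(100):       # advance to t = 1 with dt = 0.01
    u = burgers_step(u, dx, dt=0.01, nu=0.1)
```

With `nu = 0.1` the diffusion term keeps the solution smooth and damps its amplitude, while the nonlinear `u * dudx` term steepens the profile; the step sizes here satisfy the explicit stability limits (`nu*dt/dx**2 < 0.5`).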
@@ -104,15 +104,15 @@ Later on, additional equations...

Navier-Stokes, in 2D:

$
\frac{\partial u_x}{\partial{t}} + \mathbf{u} \cdot \nabla u_x =
$\begin{aligned}
\frac{\partial u_x}{\partial{t}} + \mathbf{u} \cdot \nabla u_x &=
- \frac{1}{\rho}\nabla{p} + \nu \nabla\cdot \nabla u_x
\\
\frac{\partial u_y}{\partial{t}} + \mathbf{u} \cdot \nabla u_y =
\frac{\partial u_y}{\partial{t}} + \mathbf{u} \cdot \nabla u_y &=
- \frac{1}{\rho}\nabla{p} + \nu \nabla\cdot \nabla u_y
\\
\text{subject to} \quad \nabla \cdot \mathbf{u} = 0
$
\text{subject to} \quad \nabla \cdot \mathbf{u} &= 0
\end{aligned}$

@@ -121,28 +121,29 @@ Navier-Stokes, in 2D with Boussinesq:
%$\frac{\partial u_x}{\partial{t}} + \mathbf{u} \cdot \nabla u_x$
%$ -\frac{1}{\rho} \nabla p $

$
\frac{\partial u_x}{\partial{t}} + \mathbf{u} \cdot \nabla u_x = - \frac{1}{\rho} \nabla p
$\begin{aligned}
\frac{\partial u_x}{\partial{t}} + \mathbf{u} \cdot \nabla u_x &= - \frac{1}{\rho} \nabla p
\\
\frac{\partial u_y}{\partial{t}} + \mathbf{u} \cdot \nabla u_y = - \frac{1}{\rho} \nabla p + \eta d
\frac{\partial u_y}{\partial{t}} + \mathbf{u} \cdot \nabla u_y &= - \frac{1}{\rho} \nabla p + \eta d
\\
\text{subject to} \quad \nabla \cdot \mathbf{u} = 0,
\text{subject to} \quad \nabla \cdot \mathbf{u} &= 0,
\\
\frac{\partial d}{\partial{t}} + \mathbf{u} \cdot \nabla d = 0
$

\frac{\partial d}{\partial{t}} + \mathbf{u} \cdot \nabla d &= 0
\end{aligned}$

Navier-Stokes, in 3D:

$
\frac{\partial u_x}{\partial{t}} + \mathbf{u} \cdot \nabla u_x = - \frac{1}{\rho} \nabla p + \nu \nabla\cdot \nabla u_x
\begin{aligned}
\frac{\partial u_x}{\partial{t}} + \mathbf{u} \cdot \nabla u_x &= - \frac{1}{\rho} \nabla p + \nu \nabla\cdot \nabla u_x
\\
\frac{\partial u_y}{\partial{t}} + \mathbf{u} \cdot \nabla u_y = - \frac{1}{\rho} \nabla p + \nu \nabla\cdot \nabla u_y
\frac{\partial u_y}{\partial{t}} + \mathbf{u} \cdot \nabla u_y &= - \frac{1}{\rho} \nabla p + \nu \nabla\cdot \nabla u_y
\\
\frac{\partial u_z}{\partial{t}} + \mathbf{u} \cdot \nabla u_z = - \frac{1}{\rho} \nabla p + \nu \nabla\cdot \nabla u_z
\frac{\partial u_z}{\partial{t}} + \mathbf{u} \cdot \nabla u_z &= - \frac{1}{\rho} \nabla p + \nu \nabla\cdot \nabla u_z
\\
\text{subject to} \quad \nabla \cdot \mathbf{u} = 0.
\text{subject to} \quad \nabla \cdot \mathbf{u} &= 0.
\end{aligned}
$
22
overview.md
@@ -11,6 +11,8 @@ a starting point for new researchers as well as a hands-on introduction into
state-of-the-art resarch topics.

```{figure} resources/overview-pano.jpg
---
height: 240px
@@ -127,6 +129,21 @@ starting points with code examples, and illustrate pros and cons of the
different approaches. In particular, it's important to know in which scenarios
each of the different techniques is particularly useful.

## More Specifically

To be a bit more specific, _physics_ is a huge field, and we can't cover everything...

```{note}
For now our focus are:
- _field-based simulations_ (no Lagrangian methods)
- combinations with _deep learning_ (plenty of other interesting ML techniques, but not here)
- experiments as _outlook_ (replace synthetic data with real)
```

It's also worth noting that we're starting to build the methods from some very
fundamental steps. Here are some considerations for skipping ahead to the later chapters.

```{admonition} You can skip ahead if...
:class: tip

@@ -138,6 +155,9 @@ A brief look at our _Notation_ won't hurt in both cases, though!
```

---
<br>
<br>
<br>

<!-- ## A brief history of PBDL in the context of Fluids

@@ -154,6 +174,8 @@ PINNs ... and more ... -->

## Deep Learning and Neural Networks

TODO

Very brief intro, basic equations... approximate $f^*(x)=y$ with NN $f(x;\theta)$ ...

learn via GD, $\partial f / \partial \theta$
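The last two lines (approximate $f^*(x)=y$ with $f(x;\theta)$, learned via gradient descent on $\partial f / \partial \theta$) in their smallest runnable form, fitting a hypothetical linear target with a two-parameter model:

```python
import numpy as np

# toy target f*(x) = 2x + 1, model f(x; theta) = theta[0]*x + theta[1]
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=64)
y = 2 * x + 1

theta = np.zeros(2)
lr = 0.5
for _ in range(200):
    pred = theta[0] * x + theta[1]
    err = pred - y                                   # dL/dpred for L = 0.5*mean(err^2)
    grad = np.array([(err * x).mean(), err.mean()])  # dL/dtheta via the chain rule
    theta -= lr * grad                               # gradient descent update
```

A real NN only replaces the linear model with a nonlinear parametric function and computes `grad` by backpropagation; the update rule stays exactly this.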
@@ -18,7 +18,7 @@
"\n",
"## Preliminaries\n",
"\n",
"Let's just load TF for..."
"Let's just load TF and phiflow for now, and initialize the random sampling."
]
},
{
@@ -54,7 +54,7 @@
"from phi.tf.flow import *\n",
"import numpy as np\n",
"\n",
"#rnd = TF_BACKEND # sample different points in the domain each iteration\n",
"#rnd = TF_BACKEND # for phiflow: sample different points in the domain each iteration\n",
"rnd = math.choose_backend(1) # use same random points for all iterations"
]
},
BIN
resources/intro-fluid-bifurcation.jpeg
Normal file
Binary file not shown.
After Width: | Height: | Size: 38 KiB
BIN
resources/intro-teaser-side-by-side.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 130 KiB