added diff phys ns case
This commit is contained in:
parent 8a0fadffc8
commit 52d5eb202e

_toc.yml (1 change)
@@ -15,6 +15,7 @@
 sections:
 - file: diffphys-code-gradient.ipynb
 - file: diffphys-code-tf.ipynb
+- file: diffphys-code-ns.ipynb
 - file: jupyter-book-reference
 sections:
 - file: markdown
diffphys-code-gradient.ipynb
@@ -4,7 +4,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Burgers Optimizationwith a Gradient Differentiable Physics\n",
+"# Burgers Optimization with a Gradient from Differentiable Physics\n",
 "\n",
 "manual!\n",
 "\n",
diffphys-code-ns.ipynb (new file, 400 lines)
@@ -0,0 +1,400 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "Differentiable Fluid Simulations with Φ-Flow.ipynb",
"provenance": [],
"collapsed_sections": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "o4JZ84moBKMr"
},
"source": [
"# Differentiable Fluid Simulations with Φ<sub>Flow</sub>\n",
"\n",
"This notebook steps you through setting up fluid simulations and using TensorFlow's differentiation to optimize them.\n",
"\n",
"Execute the cell below to install the [Φ<sub>Flow</sub> Python package from GitHub](https://github.com/tum-pbs/PhiFlow)."
]
},
{
"cell_type": "code",
"metadata": {
"id": "Z6YoAVKebfNV"
},
"source": [
"!pip install --upgrade --quiet phiflow"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "da1uZcDXdVcF"
},
"source": [
"from phi.flow import * # The Dash GUI is not supported on Google Colab, ignore the warning\n",
"import pylab"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "BVV1IKVqDfLl"
},
"source": [
"# Setting up a simulation\n",
"\n",
"Φ<sub>Flow</sub> is object-oriented, i.e. you assemble your simulation by constructing a number of objects and adding them to the world.\n",
"\n",
"The following code sets up four fluid simulations that run in parallel (`batch_size=4`). Each fluid simulation has a circular Inflow at a different location."
]
},
{
"cell_type": "code",
"metadata": {
"id": "WrA3IXDxv31P"
},
"source": [
"world = World()\n",
"fluid = world.add(Fluid(Domain([40, 32], boundaries=CLOSED), buoyancy_factor=0.1, batch_size=4), physics=IncompressibleFlow())\n",
"world.add(Inflow(Sphere(center=[[5,4], [5,8], [5,12], [5,16]], radius=3), rate=0.2));"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "ExA0Pi2sFVka"
},
"source": [
"The inflow affects the fluid's marker density. Because the `buoyancy_factor` is positive, the marker creates an upward force.\n",
"\n",
"Let's plot the marker density after one simulation frame."
]
},
{
"cell_type": "code",
"metadata": {
"id": "WmGZdOwswOva"
},
"source": [
"world.step()\n",
"pylab.imshow(np.concatenate(fluid.density.data[...,0], axis=1), origin='lower', cmap='magma')"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "PA-2tGuWGHv2"
},
"source": [
"We can run more steps by repeatedly calling `world.step()`."
]
},
{
"cell_type": "code",
"metadata": {
"id": "0hZk5HX3w4Or"
},
"source": [
"for frame in range(20):\n",
" print('Computing frame %d' % frame)\n",
" world.step(dt=1.5)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "Mfl80CjZxZcL"
},
"source": [
"pylab.imshow(np.concatenate(fluid.density.data[...,0], axis=1), origin='lower', cmap='magma')"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "rdSTbMoaS0Uz"
},
"source": [
"# Differentiation\n",
"\n",
"The simulation we just computed was using purely NumPy (non-differentiable) operations.\n",
"To enable differentiability, we need to build a TensorFlow graph that computes this result."
]
},
{
"cell_type": "code",
"metadata": {
"id": "mphMP0sYIOz-"
},
"source": [
"%tensorflow_version 1.x\n",
"from phi.tf.flow import * # Causes deprecation warnings with TF 1.15\n",
"import pylab\n",
"session = Session(None) # Used to run the TensorFlow graph"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "3mpyowRYUSS4"
},
"source": [
"Let's set up the simulation just like before. But now, we want to optimize the initial velocities so that all simulations arrive at a final state that is similar to the first simulation from the previous example. I.e., the state shown in the left-most image above.\n",
"\n",
"To achieve this, we create a TensorFlow variable for the velocity at t=0.\n",
"It is initialized with zeros (like with the NumPy simulation above) and can later be used as a target for optimization."
]
},
{
"cell_type": "code",
"metadata": {
"id": "NlJMJikaHOL6"
},
"source": [
"world = World()\n",
"fluid = world.add(Fluid(Domain([40, 32], boundaries=CLOSED), buoyancy_factor=0.1, batch_size=4), physics=IncompressibleFlow())\n",
"world.add(Inflow(Sphere(center=[[5,4], [5,8], [5,12], [5,16]], radius=3), rate=0.2));\n",
"fluid.velocity = variable(fluid.velocity) # create TensorFlow variable\n",
"initial_state = fluid.state # Remember the state at t=0 for later visualization\n",
"session.initialize_variables()"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "t-NDwXRCYHjw"
},
"source": [
"\n",
"\n",
"Note that we actually created two variables, one for each velocity component. If you're interested in how this magic works, have a look at the [Struct documentation](https://github.com/tum-pbs/PhiFlow/blob/master/documentation/Structs.ipynb)."
]
},
{
"cell_type": "code",
"metadata": {
"id": "2q5gfaH2YHr6"
},
"source": [
"[print(grid.data) for grid in fluid.velocity.unstack()];"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "vSdGIEzCgq5-"
},
"source": [
"If you look closely, you'll notice that the shapes of the variables differ. This is because the velocity is sampled in [staggered form](https://github.com/tum-pbs/PhiFlow/blob/master/documentation/Staggered_Grids.md).\n",
"\n",
"The simulation now contains variables in the initial state.\n",
"Since all later states depend on the value of the variable, the `step` method cannot directly compute concrete state values.\n",
"Instead, `world.step` will extend the TensorFlow graph by the operations needed to perform the step.\n",
"\n",
"To execute the graph with actual data, we can use `session.run`, just like with regular TensorFlow 1.x. While `run` would usually be used to infer predictions from a learning model, it now executes the graph of simulation steps."
]
},
{
"cell_type": "code",
"metadata": {
"id": "wSrIezfWHjcQ"
},
"source": [
"world.step()\n",
"pylab.imshow(np.concatenate(session.run(fluid.density).data[...,0], axis=1), origin='lower', cmap='magma')"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "iJc6UdYHhtOH"
},
"source": [
"Let's build a graph for the full simulation."
]
},
{
"cell_type": "code",
"metadata": {
"id": "b9xHtdDQRrjL"
},
"source": [
"for frame in range(20):\n",
" print('Building graph for frame %d' % frame)\n",
" world.step(dt=1.5)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "2VQ92g2rs6wM"
},
"source": [
"When calling `session.run` now, the full simulation is evaluated using TensorFlow operations.\n",
"This will take advantage of your GPU, if available.\n",
"If you compile Φ<sub>Flow</sub> with [CUDA support](https://github.com/tum-pbs/PhiFlow/blob/master/documentation/Installation_Instructions.md), the TensorFlow graph will use optimized operators for efficient simulation and training runs."
]
},
{
"cell_type": "code",
"metadata": {
"id": "TA6Ibs-mXsTc"
},
"source": [
"print('Computing frames...')\n",
"pylab.imshow(np.concatenate(session.run(fluid.density).data[...,0], axis=1), origin='lower', cmap='magma')"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "IClfRMfoyGUa"
},
"source": [
"Next, we define the *loss* function (also called *cost* or *objective* function). This is the value we want to decrease via optimization.\n",
"For this example, we want the marker densities of all final simulation states to match the left-most one, called `target`.\n",
"\n",
"For the optimizer, we choose gradient descent for this example."
]
},
{
"cell_type": "code",
"metadata": {
"id": "7KPpyIwjYETi"
},
"source": [
"target = session.run(fluid.density).data[0,...]\n",
"loss = math.l2_loss(fluid.density.data[1:,...] - target)\n",
"optim = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)\n",
"session.initialize_variables()\n",
"print('Initial loss: %f' % session.run(loss))"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "AALD66-N0U5F"
},
"source": [
"With the loss and optimizer set up, all that's left is to run the actual optimization."
]
},
{
"cell_type": "code",
"metadata": {
"id": "pvvF6xqmaRLX"
},
"source": [
"for optim_step in range(10):\n",
" print('Running optimization step %d. %s' % (optim_step, '' if optim_step else 'The first step sets up the adjoint graph.'))\n",
" _, loss_value = session.run([optim, loss])\n",
" print('Loss: %f' % loss_value)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "SQBtCmhZaYYj"
},
"source": [
"pylab.imshow(np.concatenate(session.run(fluid.density).data[...,0], axis=1), origin='lower', cmap='magma')"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "HP7aDQfpKifp"
},
"source": [
"Now that the optimization has done its work, we can have a look at the now-optimized initial velocity field."
]
},
{
"cell_type": "code",
"metadata": {
"id": "vRaBt5vGdSEY"
},
"source": [
"optimized_velocity_field = session.run(initial_state.velocity).at_centers()"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "i7ZahlUudex8"
},
"source": [
"pylab.title('Initial y-velocity (optimized)')\n",
"pylab.imshow(np.concatenate(optimized_velocity_field.data[...,0], axis=1), origin='lower')"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "Pqw5BDxmdkut"
},
"source": [
"pylab.title('Initial x-velocity (optimized)')\n",
"pylab.imshow(np.concatenate(optimized_velocity_field.data[...,1], axis=1), origin='lower')"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "ooqVxCPM8PXl"
},
"source": [
"This notebook provided an introduction to running fluid simulations in NumPy and TensorFlow.\n",
"It demonstrated how to use the gradients provided by Φ<sub>Flow</sub> to run simple optimizations over the course of several timesteps.\n",
"\n",
"For additional examples, e.g. coupling simulations with neural networks, please check the [other demos](https://github.com/tum-pbs/PhiFlow/tree/master/demos)."
]
}
]
}
diffphys-code-tf.ipynb
@@ -4,7 +4,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Burgers backwards opt in phiflow-TF optim\n",
+"# Burgers backwards opt via phiflow-TF optim\n",
 "\n",
 "Use actual TF optimizer"
 ]