diff --git a/_toc.yml b/_toc.yml index c85f5c9..9800da9 100644 --- a/_toc.yml +++ b/_toc.yml @@ -28,8 +28,6 @@ parts: - file: diffphys-code-burgers.ipynb - file: diffphys-dpvspinn.md - file: diffphys-code-ns.ipynb -- caption: Differentiable Physics with NNs - chapters: - file: diffphys-examples.md - file: diffphys-code-sol.ipynb - file: diffphys-code-control.ipynb @@ -44,6 +42,7 @@ parts: - file: probmodels-uncond.md - file: probmodels-graph.md - file: probmodels-graph-ellipse.ipynb + - file: probmodels-discuss.md - caption: Reinforcement Learning chapters: - file: reinflearn-intro.md diff --git a/diffphys-code-burgers.ipynb b/diffphys-code-burgers.ipynb index c90cd22..8da5f1a 100644 --- a/diffphys-code-burgers.ipynb +++ b/diffphys-code-burgers.ipynb @@ -325,7 +325,7 @@ "Optimization step 35, loss: 0.008185\n", "Optimization step 40, loss: 0.005186\n", "Optimization step 45, loss: 0.003263\n", - "Runtime 130.33s\n" + "Runtime 132.33s\n" ] } ], diff --git a/diffphys-discuss.md b/diffphys-discuss.md index c0e0eae..b987579 100644 --- a/diffphys-discuss.md +++ b/diffphys-discuss.md @@ -1,4 +1,4 @@ -Discussion +Discussion of Differentiable Physics ======================= The previous sections have explained the _differentiable physics_ approach for deep learning, and have given a range of examples: from a very basic gradient calculation, all the way to complex learning setups powered by advanced simulations. This is a good time to take a step back and evaluate: in the end, the differentiable physics components of these approaches are not too complicated. They are largely based on existing numerical methods, with a focus on efficiently using those methods not only to do a forward simulation, but also to compute gradient information. diff --git a/physicalloss-code.ipynb b/physicalloss-code.ipynb index ddbac96..314f5fc 100644 --- a/physicalloss-code.ipynb +++ b/physicalloss-code.ipynb @@ -6,7 +6,7 @@ "id": "HykKFEeAoXan" }, "source": [ - "# Burgers Optimization with a Physics-Informed NN\n", + "# Burgers Optimization with a PINN\n", "\n", "To illustrate how the physics-informed losses work for variant 2, let's consider a reconstruction task\n", "as an inverse problem example.\n", @@ -55,7 +55,7 @@ "source": [ "## Preliminaries\n", "\n", - "This notebook is a bit older, and hence requires an older tensorflow version. The next cell installs/downgrades TF to a compatible version:\n", + "This notebook is a bit older, and hence requires an older TensorFlow version. The next cell installs/downgrades TF to a compatible version. This can lead to \"errors\" on Colab due to pip dependencies, which you can safely ignore:\n", "\n" ] }, @@ -369,7 +369,7 @@ "Step 8000, loss: 0.033604\n", "Step 9000, loss: 0.031556\n", "Step 10000, loss: 0.029434\n", - "Runtime 110.02s\n" + "Runtime 101.02s\n" ] } ], diff --git a/physicalloss-discuss.md b/physicalloss-discuss.md index 7db4c05..5eb8527 100644 --- a/physicalloss-discuss.md +++ b/physicalloss-discuss.md @@ -46,7 +46,7 @@ have to start training the NN from scratch. Thus, the physical soft constraints allow us to encode solutions to PDEs with the tools of NNs. As they're more widely used, we'll focus on PINNs (v2) here: -An inherent drawback is that they yield single solutions, +An inherent drawback is that they typically yield single solutions or very narrow solution manifolds, and that they do not combine well with traditional numerical techniques. In comparison to the Neural surrogates/operators from {doc}`supervised` we've made a step backwards in some way. 
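The soft-constraint loss referenced in the physicalloss-discuss.md hunk above combines a supervised data term with the PDE residual evaluated at collocation points. A minimal sketch of that idea for Burgers' equation u_t + u u_x = nu u_xx, written in PyTorch for brevity (the notebook itself uses an older TensorFlow); `net`, the sample points, and the viscosity value are illustrative and not part of this patch:

```python
# Hedged sketch of a PINN-style soft-constraint loss for Burgers' equation.
# `net` maps (x, t) pairs to u; all names here are illustrative.
import torch

def burgers_residual(net, x, t, nu=0.01 / 3.14159265):
    """PDE residual u_t + u*u_x - nu*u_xx, computed via autograd."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = net(torch.stack([x, t], dim=-1)).squeeze(-1)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx  # ~0 wherever the PDE is satisfied

def pinn_loss(net, x_d, t_d, u_d, x_c, t_c):
    """Data term at observed points plus residual term at collocation points."""
    u_pred = net(torch.stack([x_d, t_d], dim=-1)).squeeze(-1)
    loss_data = torch.mean((u_pred - u_d) ** 2)
    loss_pde = torch.mean(burgers_residual(net, x_c, t_c) ** 2)
    return loss_data + loss_pde  # the residual acts as a soft constraint
```

Both terms are minimized jointly for one set of initial/boundary conditions, which is why the result is a single solution (or a narrow manifold) rather than a reusable solver, as the hunk above notes.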
diff --git a/physicalloss-div.ipynb b/physicalloss-div.ipynb index 879a5b0..bd63d74 100644 --- a/physicalloss-div.ipynb +++ b/physicalloss-div.ipynb @@ -64,7 +64,7 @@ }, "outputs": [], "source": [ - "!pip install --quiet phiflow==3.0 tqdm\n", + "!pip install --quiet phiflow==3.3 tqdm\n", "from tqdm import tqdm\n", "from phiml import nn\n", "\n", @@ -159,7 +159,7 @@ "source": [ "Here you can see the flowlines together with velocity magnitudes and the divergence per cell. The latter is exactly what we want to remove. This visualization shows that the divergence is smaller than the actual magnitude of the velocities, with an average of around 0.4, as indicated by the L2 output right above the images.\n", "\n", - "Next, we will define a Navier-Stokes simulation step. Given our reduced setup without external forces or obstacles, it's very simple: a call to an advection function in PhiFlow, followed by `fluid.make_incompressible()` to invoke the Poisson solver. We'll directly annotate this function and the following ones for JIT compilation with `@jit_compile`. This is important for good performance on GPUs, but it makes debugging much harder. So when changing the code, it's highly recommended to remove them. The code will work just as well without, just slower. Once everything's running as it should, re-activate JIT compilation for the _real_ training runs. \n" + "Next, we will define a Navier-Stokes simulation step. Given our reduced setup without external forces or obstacles, it's very simple: a call to an advection function in PhiFlow, followed by `fluid.make_incompressible()` to invoke the Poisson solver. Here we need to pass a custom solver to treat the rank deficiency in the periodic solve (the solution is ambiguous with respect to constant offsets). This is not necessary later on for situations with a unique pressure solution. We'll also directly annotate this function and the following ones for JIT compilation with `@jit_compile`. This is important for good performance on GPUs, but it makes debugging much harder. So when changing the code, it's highly recommended to remove them. The code will work just as well without them, just slower. Once everything's running as it should, re-activate JIT compilation for the _real_ training runs. \n" ] }, { @@ -173,12 +173,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "/home/thuerey/anaconda3/envs/torch24/lib/python3.12/site-packages/phiml/math/_optimize.py:631: UserWarning: Possible rank deficiency detected. Matrix might be singular which can lead to convergence problems. Please specify using Solve(rank_deficiency=...).\n", - " warnings.warn(\"Possible rank deficiency detected. Matrix might be singular which can lead to convergence problems. Please specify using Solve(rank_deficiency=...).\")\n", - "/home/thuerey/anaconda3/envs/torch24/lib/python3.12/site-packages/phiml/backend/torch/_torch_backend.py:800: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at ../aten/src/ATen/SparseCsrTensorImpl.cpp:53.)\n", - " return torch.sparse_csr_tensor(row_pointers, column_indices, values, shape, device=values.device)\n", - "/home/thuerey/anaconda3/envs/torch24/lib/python3.12/site-packages/phiml/math/_optimize.py:631: UserWarning: Possible rank deficiency detected. Matrix might be singular which can lead to convergence problems. 
Please specify using Solve(rank_deficiency=...).\n", - " warnings.warn(\"Possible rank deficiency detected. Matrix might be singular which can lead to convergence problems. Please specify using Solve(rank_deficiency=...).\")\n" + " warnings.warn(\"Possible rank deficiency detected. Matrix might be singular which can lead to convergence problems. \")\n" ] } ], @@ -186,7 +181,7 @@ "@jit_compile\n", "def step(v, dt = 1.0):\n", " v = advect.mac_cormack(v, v, dt)\n", - " v, p = fluid.make_incompressible(v, [])\n", + " v, p = fluid.make_incompressible(v, [], solve=Solve(rank_deficiency=0))\n", " return v, p\n", "\n", "v,p = step(vel)" @@ -426,14 +421,14 @@ "vel_untrained, pres_untrained = eval_nn(vel.batch[0])\n", "\n", "# optionally, visualize the outputs: this doesn't look much different from before as the NN is untrained\n", - "#plot = vis.plot( {\"vel untrained\": vel_untrained.batch[0], \"vel len. untrained\": math.vec_length(vel_untrained.batch[0].at_centers().values), \"div. untrained\": field.divergence(vel_untrained.batch[0]), })\n", + "#plot = vis.plot( {\"vel untrained\": vel_untrained, \"vel len. untrained\": math.vec_length(vel_untrained.at_centers().values), \"div. untrained\": field.divergence(vel_untrained), })\n", "\n", "# print the loss and divergence sum of the corrected velocity from untrained NN\n", "loss, div_untrained = loss_div(vel_untrained)\n", "print(f\"Loss for untrained network: {loss}\")\n", "\n", "# also, we visualize the pressure field\n", - "plot = vis.plot(pres_untrained.batch[0],title=\"pres. untrained\")" + "plot = vis.plot(pres_untrained,title=\"pres. untrained\")" ] }, { @@ -744,7 +739,7 @@ "def step_obs(v, dt, f, obstacle):\n", " v = v + f * dt\n", " v = advect.mac_cormack(v, v, dt)\n", - " v, p = fluid.make_incompressible(v, obstacle)\n", + " v, p = fluid.make_incompressible(v, obstacle, solve=Solve(rank_deficiency=0))\n", " return v, p" ] }, @@ -773,10 +768,6 @@ "name": "stderr", "output_type": "stream", "text": [ - " 0%| | 0/50 [00:00
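A closing note on the `@jit_compile` advice in the physicalloss-div.ipynb hunks above: instead of deleting and re-adding the decorators while debugging, the annotation can be kept behind a single flag. A minimal sketch under the patch's own imports and `step` definition; `USE_JIT` is a hypothetical flag, not part of the notebook:

```python
# Hedged sketch: toggle JIT compilation with one flag instead of editing
# decorators while debugging. USE_JIT is an illustrative name.
from phi.torch.flow import *  # provides advect, fluid, Solve, jit_compile

USE_JIT = True  # set to False for readable stack traces while debugging
maybe_jit = jit_compile if USE_JIT else (lambda f: f)

@maybe_jit
def step(v, dt=1.0):
    v = advect.mac_cormack(v, v, dt)
    # specify the rank deficiency explicitly, as in the patch, to handle the
    # periodic solve's ambiguity with respect to constant pressure offsets
    v, p = fluid.make_incompressible(v, [], solve=Solve(rank_deficiency=0))
    return v, p
```

With the flag off, `step` runs eagerly and errors point at the offending line; once everything behaves, flipping the flag back restores compiled performance for the real training runs.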