ellipse code updates, added run-in-colab-links
parent 458934b3c8
commit 38ca428a8a
@@ -9,6 +9,7 @@
"# From DDPM to Flow Matching\n",
"\n",
"We'll be using a learning task where we can reliably generate arbitrary amounts of ground truth data, to make sure we can quantify how well the target distribution was learned. Specifically, we'll focus on Reynolds-averaged Navier-Stokes simulations around airfoils, which have the interesting characteristic that typical solvers (such as OpenFoam) transition from steady solutions to oscillating ones for larger Reynolds numbers. This transition is exactly what we'll give as a task to diffusion models below. (Details can be found in our [diffuion-based flow prediction repository](https://github.com/tum-pbs/Diffusion-based-Flow-Prediction/).) Also, to make the notebook self-contained, we'll revisit the most important concepts from the previous section.\n",
"[[run in colab]](https://colab.research.google.com/github/tum-pbs/pbdl-book/blob/main/probmodels-ddpm-fm.ipynb)\n",
"\n",
"```{note} \n",
"If you're directly continuing reading from the previous chapter, note that there's an important difference: for simplicity, we'll apply denoising and flow-matching to a **forward** problem here. We won't be aiming to recover $x$ for an observation $y$, but rather assume we have initial conditions $x$ from which we want to compute a solution $y$. So don't be surprised by the switched $x$ and $y$ below.\n",
File diff suppressed because one or more lines are too long
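As a rough illustration of the forward-problem setup described in the note above, here is a minimal flow-matching training step: a network predicts the velocity that transports noise toward the solution $y$, conditioned on the initial condition $x$. The network `net`, the simple MLP architecture, and the tensor shapes are assumptions for this sketch, not the notebook's actual code.

```python
import torch

# Hypothetical conditional velocity network (an assumption for this sketch):
# it takes the noisy solution y_t, the condition x, and the time t.
net = torch.nn.Sequential(
    torch.nn.Linear(2 + 1 + 1, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2)
)

def flow_matching_step(x_cond, y_data):
    """One flow-matching training loss for the forward problem x -> y.
    x_cond: initial conditions [B, 1], y_data: reference solutions [B, 2]."""
    t = torch.rand(y_data.shape[0], 1)         # interpolation time in [0, 1]
    y0 = torch.randn_like(y_data)              # sample from the noise distribution
    y_t = (1 - t) * y0 + t * y_data            # linear path from noise to data
    v_target = y_data - y0                     # constant velocity along that path
    v_pred = net(torch.cat([y_t, x_cond, t], dim=-1))
    return ((v_pred - v_target) ** 2).mean()
```

At inference time one would integrate $dy/dt = v(y, x, t)$ from $t=0$ (noise) to $t=1$, e.g. with a simple Euler loop, to obtain samples of $y$ for a given $x$.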
@@ -11,6 +11,7 @@
"=======================\n",
"\n",
"This notebook will illustrate some of the concepts introduced in {doc}`probmodels-intro`, such as the training of score functions via log likelihoods, and what they look like in a clear and reduced problem. At the same time, the setup provides integration of a simple _differentiable simulator_ to illustrate the concept of physics-based diffusion modeling with the SMDP method from {doc}`probmodels-phys` [(full paper)](https://arxiv.org/abs/2301.10250). This approach combines physics and score matching along a merged time dimension to solve inverse problems. \n",
"[[run in colab]](https://colab.research.google.com/github/tum-pbs/pbdl-book/blob/main/probmodels-sbisim.ipynb)\n",
"\n",
"## Toy Problem setup\n",
"\n",
@@ -20,7 +20,9 @@
"This motivates - as in the previous sections - to view the steps of a time series as a probabilistic distribution over time rather than a deterministic series of states.\n",
"A probabilistic simulator can learn to take into account the influence of the un-observed state, and infer solutions from variations of this un-observed part of the system. Worst case, if this un-observed state has a negligible influence, we should see a mean state with an variance that's effectively zero. So there's nothing to loose! \n",
"\n",
"The following notebook introduces an effective, distribution-based approach for temporal predictions:\n",
"The following notebook \n",
"[[run in colab]](https://colab.research.google.com/github/tum-pbs/pbdl-book/blob/main/probmodels-time.ipynb)\n",
" introduces an effective, distribution-based approach for temporal predictions:\n",
"* conditional diffusion models are used to compute autoregressive rollouts to obtain a \"probabilistic simulator\"; \n",
"* it is of course highly interesting to compare this diffusion-based predictor to the deterministic baselines and neural operators from the previous chapters;\n",
"* in order to evaluate the results w.r.t. their accuracy, we'll employ a transonic fluid flow for which we can compute statistics from a simulated reference.\n",