fixed typos

N_T 2024-10-25 14:00:47 +08:00
parent 9bd9f531ea
commit 2685e69f7d


@@ -8,7 +8,7 @@
"source": [
"# Learning the Helmholtz-Hodge Decomposition\n",
"\n",
"In the following notebook we'll following the aforementioned paper by Tompson et al. {cite}`tompson2017` and train a neural network that can essentially perform a Helmholtz-Hodge decomposition. This is a very classic and time consuming part of many numerical solvers, and enables splitting an aribtrary vector field into a solenoidal (divergence-free) and irrotational part (the pressure gradient). Because this is traditionally veru time consuming, it's an interesting goal for a learned approach. As a stepping stone towards integrating full solvers, we'll formulate a physics-based loss via a discretized PDE-constraint.\n"
"In the following notebook we'll following the aforementioned paper by Tompson et al. {cite}`tompson2017` and train a neural network that can essentially perform a Helmholtz-Hodge decomposition. This is a very classic and time consuming part of many numerical solvers, and enables splitting an arbitrary vector field into a solenoidal (divergence-free) and irrotational part (the pressure gradient). Because this is traditionally very time consuming, it's an interesting goal for a learned approach. As a stepping stone towards integrating full solvers, we'll formulate a physics-based loss via a discretized PDE-constraint.\n"
]
},
{
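For reference, the decomposition and the physics-based objective described in this cell can be written compactly (standard notation, not quoted from the notebook): an input velocity $\mathbf{v}$ is split into a divergence-free part and a pressure gradient, and the network $p_\theta$ is trained so that the divergence remaining after subtracting its gradient vanishes,

$$ \mathbf{v} = \mathbf{v}_{\text{sol}} + \nabla p, \qquad \nabla \cdot \mathbf{v}_{\text{sol}} = 0, \qquad L(\theta) = \big\| \nabla \cdot \big( \mathbf{v} - \nabla p_{\theta}(\mathbf{v}) \big) \big\|_2^2 . $$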
@@ -103,7 +103,7 @@
"id": "-6mIFTtQCXwn"
},
"source": [
"We will use random, periodic flow fields generated via PhiFlow's frequency-based synthesis. The next cell will prec-compute all these flow fields (100 in total), and visualize an example. Note, we're simply taking random, divergent flow fields here. We don't know what the correct divergence-free counterparts are."
"We will use random, periodic flow fields generated via PhiFlow's frequency-based synthesis. The next cell will pre-compute all these flow fields (100 in total), and visualize an example. Note, we're simply taking random, divergent flow fields here. We don't know what the correct divergence-free counterparts are."
]
},
{
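As a rough, hedged illustration of this data-generation step (not the notebook's actual cell), a single random, periodic and generally divergent field could be synthesized with PhiFlow's `Noise` field as follows; the 64×64 resolution and the noise parameters are assumptions:

```python
from phi.flow import *  # PhiFlow's high-level API

# one random, periodic velocity field from frequency-based noise synthesis;
# resolution and noise parameters are illustrative assumptions
velocity = StaggeredGrid(Noise(scale=10, smoothness=1.5),
                         extrapolation.PERIODIC,
                         x=64, y=64, bounds=Box(x=64, y=64))
divergence = field.divergence(velocity)  # clearly non-zero for these raw noise fields

# repeating this (e.g. 100 times) would yield a training set; a sample can be
# inspected with the same vis.plot(...) style used in the notebook
vis.plot({'velocity': velocity, 'divergence': divergence})
```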
@@ -425,7 +425,7 @@
"source": [
"vel_untrained, pres_untrained = eval_nn(vel.batch[0])\n",
"\n",
"# optionally, visualize the outputs: this doesnt look much different from before as the NN is untrained\n",
"# optionally, visualize the outputs: this doesn't look much different from before as the NN is untrained\n",
"#plot = vis.plot( {\"vel untrained\": vel_untrained.batch[0], \"vel len. untrained\": math.vec_length(vel_untrained.batch[0].at_centers().values), \"div. untrained\": field.divergence(vel_untrained.batch[0]), })\n",
"\n",
"# print the loss and divergence sum of the corrected velocity from untrained NN\n",
@@ -453,7 +453,7 @@
"source": [
"## Training\n",
"\n",
"With our setup so far, training is very simple: we simply choose a few of the random flow fields, and evaulate the network, and change its weights so that the divergence loss becomes smaller. We simply loop over our precomputed flow fields without randomization below for a fixed number of epochs. Our network converges surprisingly fast. After a few epochs the loss should have decreased by more than two orders of magnitude."
"With our setup so far, training is very simple: we simply choose a few of the random flow fields, and evaluate the network, and change its weights so that the divergence loss becomes smaller. We simply loop over our precomputed flow fields without randomization below for a fixed number of epochs. Our network converges surprisingly fast. After a few epochs the loss should have decreased by more than two orders of magnitude."
]
},
{
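To make the described loop concrete without relying on the notebook's helpers, here is a deliberately small, self-contained PyTorch sketch of the same idea; the tiny CNN, grid size, optimizer settings and the random stand-in fields are all assumptions rather than the notebook's actual setup:

```python
import torch

def divergence(v):                      # v: (batch, 2, H, W); periodic central differences
    dudx = (torch.roll(v[:, 0], -1, dims=-1) - torch.roll(v[:, 0], 1, dims=-1)) / 2
    dvdy = (torch.roll(v[:, 1], -1, dims=-2) - torch.roll(v[:, 1], 1, dims=-2)) / 2
    return dudx + dvdy

def pressure_gradient(p):               # p: (batch, 1, H, W) -> (batch, 2, H, W)
    dpdx = (torch.roll(p[:, 0], -1, dims=-1) - torch.roll(p[:, 0], 1, dims=-1)) / 2
    dpdy = (torch.roll(p[:, 0], -1, dims=-2) - torch.roll(p[:, 0], 1, dims=-2)) / 2
    return torch.stack([dpdx, dpdy], dim=1)

net = torch.nn.Sequential(              # tiny stand-in for the notebook's network
    torch.nn.Conv2d(2, 16, 3, padding=1, padding_mode='circular'), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 16, 3, padding=1, padding_mode='circular'), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, 3, padding=1, padding_mode='circular'))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

fields = [torch.randn(1, 2, 32, 32) for _ in range(100)]   # stand-in "pre-computed" fields
for epoch in range(5):
    for v in fields:                    # loop over the fields without shuffling
        optimizer.zero_grad()
        p = net(v)                      # predicted pressure
        loss = (divergence(v - pressure_gradient(p)) ** 2).mean()  # divergence loss
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.2e}")
```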
@@ -528,7 +528,7 @@
"source": [
"## Testing with New Inputs\n",
"\n",
"We can check this by producing a few new inputs. Below, we'll likewise use PhiFlow's noise generation to get new fields, but to make things interesting we're increasing the scale by a factor of $2 \\times$. Hence, the network will receive divergence inputs with magnitudes it hasn't seen before. These samples are effectivly _out of the distribution_ of the training inputs."
"We can check this by producing a few new inputs. Below, we'll likewise use PhiFlow's noise generation to get new fields, but to make things interesting we're increasing the scale by a factor of $2 \\times$. Hence, the network will receive divergence inputs with magnitudes it hasn't seen before. These samples are effectively _out of the distribution_ of the training inputs."
]
},
{
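A hedged sketch of how such out-of-distribution test fields could be produced; whether the notebook applies the $2 \times$ factor inside the noise synthesis or to the field values is an implementation detail, so the field is simply scaled here:

```python
from phi.flow import *

# a new random field as before, doubled in magnitude so its divergence lies
# outside the range seen during training (the exact 2x mechanism is an assumption)
test_velocity = 2 * StaggeredGrid(Noise(), extrapolation.PERIODIC,
                                  x=64, y=64, bounds=Box(x=64, y=64))
print(math.l2_loss(field.divergence(test_velocity).values))
```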
@@ -627,7 +627,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"THe quantative evalation confirms that our network has learned to reduce the divergence. The remaining one is larger than the one from PhiFlow's solver, but nonetheless orders of magnitude smaller than the original one. The images at the bottom nicely visualize this.\n",
"The quantitative evaluation confirms that our network has learned to reduce the divergence. The remaining one is larger than the one from PhiFlow's solver, but nonetheless orders of magnitude smaller than the original one. The images at the bottom nicely visualize this.\n",
"\n",
"Next, let's check what pressure field the network has actually learned to produce."
]
@@ -832,9 +832,9 @@
"id": "dnd8eqT9iRXO"
},
"source": [
"The plot above shows the last frame of the simulations, and they're remarkably simiar. This results shows that the NN has successfully learned to reduce the divergence of arbitrary flows, despite having only seen the synthetic noisy inputs at training time.\n",
"The plot above shows the last frame of the simulations, and they're remarkably similar. This results shows that the NN has successfully learned to reduce the divergence of arbitrary flows, despite having only seen the synthetic noisy inputs at training time.\n",
"\n",
"Moreover, it highlights the success of the differentiable discrete operator used for training. The training converges very quickly, and the trained network generalizaes in a _zero-shot_ fashion to completely new inputs. The network hasn't even seen any obstcles at training time, and can still infer (mostly correct) pressure fields to handle them. This aspect is nonetheless an interesting point for improvements of this implementation. The NN could receive the obstacle geometry as an additional input, and could be trained to pay special attention to the boundary region via increased loss values.\n",
"Moreover, it highlights the success of the differentiable discrete operator used for training. The training converges very quickly, and the trained network generalizes in a _zero-shot_ fashion to completely new inputs. The network hasn't even seen any obstacles at training time, and can still infer (mostly correct) pressure fields to handle them. This aspect is nonetheless an interesting point for improvements of this implementation. The NN could receive the obstacle geometry as an additional input, and could be trained to pay special attention to the boundary region via increased loss values.\n",
"\n",
"Lastly, if you're interested in watching the full evolution of the simulated trajectories, you can comment out and run the following cell. It generates a movie that can be watched in the browser via PhiFlow's `plot(...,animate=X)` function."
]
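A hypothetical usage of the `plot(..., animate=X)` call mentioned above, with stand-in data instead of the notebook's simulated trajectory:

```python
from phi.flow import *

# stack a few stand-in frames along a 'time' dimension and animate over it;
# in the notebook, `trajectory` would come from the simulation loop instead
frames = [CenteredGrid(Noise(), extrapolation.PERIODIC, x=64, y=64, bounds=Box(x=64, y=64))
          for _ in range(8)]
trajectory = field.stack(frames, batch('time'))
vis.plot(trajectory, animate='time')
```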