fixed typos

This commit is contained in:
N_T 2025-06-13 16:19:45 +02:00
parent cc2a7ef4ce
commit be1dba99e4
2 changed files with 8 additions and 8 deletions

View File

@ -271,7 +271,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "extensive-forward",
"metadata": {},
"outputs": [],
@ -284,8 +284,8 @@
" return torch.square(y)\n",
"\n",
"# Define custom loss function using the \"physics\" operator P\n",
"def loss_function(y_true, y_pred):\n",
" return criterion(y_true, P(y_pred))\n"
"def loss_function(x_true, y_pred):\n",
" return criterion(x_true, P(y_pred))\n"
]
},
{
@ -293,9 +293,9 @@
"id": "conscious-budapest",
"metadata": {},
"source": [
"The loss function is the crucial point for training: we directly incorporate the function to learn, $f$ called `nn_dp`, into the loss. Keras will evaluate `nn_dp` for an inpupt from `X`, and provide the output in the second argument `y_from_nn_dp`. On this output, we'll run our \"solver\" `P`, and the result should match the correct answer `y_true`. In this simple case, the `loss_dp` function simply computes the square of the prediction `y_pred`. \n",
"The loss function is the crucial point for training: we directly incorporate the \"physics\" function to learn, the $\\mathcal P$ computing the squared value, into the loss. The predicted y value squared, as produced by our neural network, gives the X value, and should match the ground-truth `x_true`. Here the `criterion` is simply the mean-squared error between the two, i.e. $|\\mathcal P(y_{\\text{pred}}) - x_{\\text{true}}|^2$, which we are minimizing during training. \n",
"\n",
"Later on, a lot more could happen here: we could evaluate finite-difference stencils on the predicted solution, or compute a whole implicit time-integration step of a solver. Here we have a simple _mean-squared error_ term of the form $|\\mathcal P(y_{\\text{pred}}) - x_{\\text{true}}|^2$, which we are minimizing during training. It's not necessary to make it so simple: the more knowledge and numerical methods we can incorporate, the better we can guide the training process.\n",
"Later on, a lot more could happen in $\\mathcal P$ instead of simply a squaring: we could evaluate finite-difference stencils on the predicted solution, or compute a whole implicit time-integration step of a solver. It's not necessary to make it so simple: the more knowledge and numerical methods we can incorporate, the better we can guide the training process. Not that the square function by PyTorch is already differentiable, and hence represents our \"differentiable solver\" in this example.\n",
"\n",
"Let's instantiate the neural network again, and train the network with the _differentiable physics_ loss:\n"
]
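For reference, here is a minimal sketch of how this differentiable-physics loss could be used for training, mirroring the `P` and `loss_function` from the cell above; the network architecture, data, and hyperparameters below are illustrative placeholders rather than the notebook's actual setup:

```python
import torch

# Hypothetical small network standing in for `nn_dp` (illustrative only)
nn_dp = torch.nn.Sequential(
    torch.nn.Linear(1, 10), torch.nn.Tanh(), torch.nn.Linear(10, 1)
)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(nn_dp.parameters(), lr=1e-3)

# "Physics" operator P: here simply the differentiable square
def P(y):
    return torch.square(y)

# Differentiable-physics loss: compare P(y_pred) against the known x values
def loss_function(x_true, y_pred):
    return criterion(x_true, P(y_pred))

# Placeholder data: x values in [0,1]; the network should learn y = sqrt(x)
X = torch.rand(200, 1)

for epoch in range(500):
    optimizer.zero_grad()
    y_pred = nn_dp(X)                  # network prediction y
    loss = loss_function(X, y_pred)    # |P(y_pred) - x|^2
    loss.backward()                    # gradients flow back through P as well
    optimizer.step()
```

Since `torch.square` is differentiable, the backward pass propagates gradients through $\mathcal P$ and into the network weights, which is what makes this a differentiable-physics setup.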

View File

@ -286,7 +286,7 @@
"\n",
"## A Simple Normalizing Flow based on Affine Couplings\n",
"\n",
"Let's build a simple network that puts these ideas to use. We'll use a fully connected NN (`FCNN` below) with three layers and ReLU activations as a building block to turn it into an invertible layer as described above. The cell below then provides a base class `RealNVP2D` that concatenates multiple of these building blocks (6 in our example b elow) to form an NN that we can train to learn our toy GM distribution shown just above."
"Let's build a simple network that puts these ideas to use. We'll use a fully connected NN (`FCNN` below) with three layers and ReLU activations as a building block to turn it into an invertible layer as described above. The cell below then provides a base class `RealNVP2D`, which stands for `volume-preserving flows` in 2D. It concatenates multiple of these building blocks (6 in our example below) to form an NN that we can train to learn our toy GM distribution shown just above."
]
},
{
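As a rough illustration of the invertible building block described above, here is a hedged sketch of a 2D affine coupling layer in PyTorch; `FCNN`, `AffineCoupling2D`, and the `flip` argument are illustrative names and need not match the notebook's actual implementation:

```python
import torch

class FCNN(torch.nn.Module):
    """Small fully connected net with ReLU activations (illustrative stand-in)."""
    def __init__(self, dim_in, dim_out, hidden=32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim_in, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, dim_out),
        )
    def forward(self, x):
        return self.net(x)

class AffineCoupling2D(torch.nn.Module):
    """One invertible coupling layer for 2D points: one component is scaled and
    shifted conditioned on the other; `flip` swaps the roles so that a stack of
    layers transforms both dimensions."""
    def __init__(self, flip=False):
        super().__init__()
        self.flip = flip
        self.scale = FCNN(1, 1)
        self.shift = FCNN(1, 1)

    def forward(self, x):
        x1, x2 = x[:, :1], x[:, 1:]
        if self.flip:
            x1, x2 = x2, x1
        s, t = self.scale(x1), self.shift(x1)
        y2 = x2 * torch.exp(s) + t          # affine transform, conditioned on x1
        out = torch.cat([y2, x1] if self.flip else [x1, y2], dim=1)
        return out, s.sum(dim=1)            # log-determinant of the Jacobian is sum(s)

    def inverse(self, y):
        y1, y2 = (y[:, 1:], y[:, :1]) if self.flip else (y[:, :1], y[:, 1:])
        s, t = self.scale(y1), self.shift(y1)
        x2 = (y2 - t) * torch.exp(-s)       # exact inverse of the affine transform
        return torch.cat([x2, y1] if self.flip else [y1, x2], dim=1)
```

Because only one component is transformed per layer while the scale and shift depend on the untouched one, both the inverse and the log-determinant come essentially for free.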
@ -414,7 +414,7 @@
"source": [
"### Setup Dataset and Train the Normalizing Flow\n",
"\n",
"As dataset we'll simply sample from the GM, allocate a `RealNVP` model, and train it for the chosen number of epochs (50 below)."
"As dataset we'll simply sample from the GM, allocate a `RealNVP2D` model, and train it for the chosen number of epochs (50 below)."
]
},
{
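A possible maximum-likelihood training loop for such a stack of coupling layers is sketched below; it reuses the illustrative `AffineCoupling2D` from above and substitutes a crude two-mode placeholder for the actual GM samples:

```python
import torch

# Six illustrative coupling blocks with alternating masks (stand-in for RealNVP2D)
blocks = torch.nn.ModuleList([AffineCoupling2D(flip=(i % 2 == 1)) for i in range(6)])
prior = torch.distributions.MultivariateNormal(torch.zeros(2), torch.eye(2))
optimizer = torch.optim.Adam(blocks.parameters(), lr=1e-3)

# Placeholder two-mode samples standing in for the toy Gaussian-mixture data
data = torch.cat([
    torch.randn(1000, 2) * 0.2 + torch.tensor([1.0, 1.0]),
    torch.randn(1000, 2) * 0.2 - torch.tensor([1.0, 1.0]),
])

for epoch in range(50):
    z, log_det = data, torch.zeros(data.shape[0])
    for layer in blocks:                    # map data samples towards the prior
        z, ld = layer(z)
        log_det = log_det + ld
    nll = -(prior.log_prob(z) + log_det).mean()   # change-of-variables log-likelihood
    optimizer.zero_grad()
    nll.backward()
    optimizer.step()
```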
@ -625,7 +625,7 @@
"\n",
"### Visualizing Different Layers\n",
"\n",
"The invertible RealNVP network consisted of six layers, that step by step transform the prior distribution into the posterior. As the mapping of each layer is density-mass preserving, we can inspect what happens step by step. This is shown via the cell below:"
"The invertible NVP network consisted of six layers, that step by step transform the prior distribution into the posterior. As the mapping of each layer is density-mass preserving, we can inspect what happens step by step. This is shown via the cell below:"
]
},
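To sketch this layer-by-layer inspection, one could push prior samples through the illustrative blocks one at a time (the notebook's cell operates on its own trained model instead):

```python
# Generative direction: invert the blocks in reverse order, one at a time,
# collecting the intermediate 2D point clouds for plotting.
z = torch.randn(1000, 2)                  # samples from the 2D Gaussian prior
intermediates = [z]
with torch.no_grad():
    for layer in reversed(blocks):
        z = layer.inverse(z)              # apply one invertible block
        intermediates.append(z)
# intermediates[k] holds the distribution after k of the six layers,
# ready to be scatter-plotted side by side.
```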
{