smaller cleanup
@@ -182,6 +182,14 @@
 "Now we have our simulation graph in TF, we can use TF to give us a gradient for the initial state for the loss. All we need to do is run `tf.gradients(loss, [state_in.velocity.data])`, which will give us a \n",
 "\n",
 "Thus we now have a \"search direction\" for each velocity variable. Based on a linear approximation, the gradient tells us how to change each of them to increase the loss function (gradients _always_ point \"upwards\"). In the following code block, we're additionally saving all these gradients in a list called `grads`, such that we can visualize them later on. (Normally, we could discard each gradient after performing an update step.)\n",
+"\n"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Optimization \n",
 "\n",
 "Based on the gradient, we can now take a step in the opposite direction to bring the loss down (instead of increasing it). Below we're using a learning rate `LR=5` for this step. Afterwards, we're re-evaluating the loss for the updated state to check how we did. "
 ]
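The update step this cell describes is plain gradient descent: step each velocity against its gradient with learning rate `LR=5`. A minimal NumPy sketch of that rule, with `velocity` and `grad` as hypothetical stand-ins for the notebook's `state_in.velocity` data and the output of `tf.gradients` (not the notebook's actual code):

```python
import numpy as np

LR = 5.0  # learning rate used in the notebook's update step

def gradient_step(velocity, grad, lr=LR):
    # The gradient points toward *increasing* loss, so we step in the
    # opposite direction to bring the loss down.
    return velocity - lr * grad

# Toy values standing in for a velocity field and its gradient.
velocity = np.array([0.2, -0.1, 0.4])
grad = np.array([0.01, -0.02, 0.03])
updated = gradient_step(velocity, grad)
```

Re-evaluating the loss on `updated` (as the notebook does) checks whether the linear approximation behind the gradient actually held for a step of this size.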
@@ -41,7 +41,7 @@
 "$\n",
 "\\newcommand{\\pde}{\\mathcal{P}}\n",
 "\\newcommand{\\net}{\\mathrm{CFE}}\n",
-"\\mathbf{u}_{n},d_{n} = \\pdec(\\net(\\pdec(\\net(\\cdots \\pdec(\\net( \\mathbf{u}_0,d_0 ))\\cdots)))) = (\\pdec\\net)^n ( \\mathbf{u}_0,d_0 ) .\n",
+"\\mathbf{u}_{n},d_{n} = \\pde(\\net(~\\pde(\\net(\\cdots \\pde(\\net( \\mathbf{u}_0,d_0 ))\\cdots)))) = (\\pde\\net)^n ( \\mathbf{u}_0,d_0 ) .\n",
 "$\n",
 "\n",
 "minimizes the loss above. The $\\mathrm{OP}$ network is a predictor that determines the action of the $\\mathrm{CFE}$ network given the target $d^*$, i.e., $\\mathrm{OP}(\\mathbf{u},d,d^*)=d_{OP}$,\n",
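The corrected formula is an $n$-fold composition: each step applies the control-force network $\mathrm{CFE}$ and then the solver $\mathcal{P}$, i.e. $(\mathcal{P}\,\mathrm{CFE})^n(\mathbf{u}_0, d_0)$. A sketch of that rollout structure, where `cfe` and `pde_step` are hypothetical toy placeholders for the real network and solver:

```python
def cfe(state):
    # Placeholder "control force" network: nudges d toward a fixed target of 1.0.
    u, d = state
    return (u, d + 0.1 * (1.0 - d))

def pde_step(state):
    # Placeholder solver step: toy damping of u, d advected unchanged.
    u, d = state
    return (0.9 * u, d)

def rollout(state, n):
    # (P . CFE)^n (u_0, d_0): apply the network, then the solver, n times.
    for _ in range(n):
        state = pde_step(cfe(state))
    return state
```

The point of the sketch is only the alternation order (network first, solver second, repeated $n$ times); the real $\mathcal{P}$ and $\mathrm{CFE}$ are the differentiable solver and trained network from the text.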
intro.md
@@ -87,6 +87,7 @@ See also... Test link: {doc}`supervised`
 - PINNs: often need weighting of added loss terms for different parts
 - DP intro, check transpose of Jacobians in equations
 - DP control, show targets at bottom?
+- finish pictures...
 
 
 ## TODOs, Planned content