From 627c7746e855fce88e7adf21bae9559389394d4d Mon Sep 17 00:00:00 2001
From: NT
Date: Wed, 10 Mar 2021 12:48:24 +0800
Subject: [PATCH] smaller cleanup

---
 diffphys-code-burgers.ipynb | 8 ++++++++
 diffphys-control.ipynb      | 2 +-
 intro.md                    | 1 +
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/diffphys-code-burgers.ipynb b/diffphys-code-burgers.ipynb
index 94867cc..6433019 100644
--- a/diffphys-code-burgers.ipynb
+++ b/diffphys-code-burgers.ipynb
@@ -182,6 +182,14 @@
     "Now that we have our simulation graph in TF, we can use TF to compute a gradient of the loss with respect to the initial state. All we need to do is run `tf.gradients(loss, [state_in.velocity.data])`, which will give us the gradient of the loss for each velocity variable.\n",
     "\n",
     "Thus we now have a \"search direction\" for each velocity variable. Based on a linear approximation, the gradient tells us how to change each of them to increase the loss function (gradients _always_ point \"upwards\"). In the following code block, we're additionally saving all these gradients in a list called `grads`, such that we can visualize them later on. (Normally, we could discard each gradient after performing an update step.)\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Optimization \n",
     "\n",
     "Based on the gradient, we can now take a step in the opposite direction to bring the loss down (instead of increasing it). Below we're using a learning rate `LR=5` for this step. Afterwards, we're re-evaluating the loss for the updated state to check how we did."
" ] diff --git a/diffphys-control.ipynb b/diffphys-control.ipynb index e6d145c..b8ae85e 100644 --- a/diffphys-control.ipynb +++ b/diffphys-control.ipynb @@ -41,7 +41,7 @@ "$\n", "\\newcommand{\\pde}{\\mathcal{P}}\n", "\\newcommand{\\net}{\\mathrm{CFE}}\n", - "\\mathbf{u}_{n},d_{n} = \\pdec(\\net(\\pdec(\\net(\\cdots \\pdec(\\net( \\mathbf{u}_0,d_0 ))\\cdots)))) = (\\pdec\\net)^n ( \\mathbf{u}_0,d_0 ) .\n", + "\\mathbf{u}_{n},d_{n} = \\pde(\\net(~\\pde(\\net(\\cdots \\pde(\\net( \\mathbf{u}_0,d_0 ))\\cdots)))) = (\\pde\\net)^n ( \\mathbf{u}_0,d_0 ) .\n", "$\n", "\n", "minimizes the loss above. The $\\mathrm{OP}$ network is a predictor that determines the action of the $\\mathrm{CFE}$ network given the target $d^*$, i.e., $\\mathrm{OP}(\\mathbf{u},d,d^*)=d_{OP}$,\n", diff --git a/intro.md b/intro.md index 4be859d..ee48e29 100644 --- a/intro.md +++ b/intro.md @@ -87,6 +87,7 @@ See also... Test link: {doc}`supervised` - PINNs: often need weighting of added loss terms for different parts - DP intro, check transpose of Jacobians in equations - DP control, show targets at bottom? +- finish pictures... ## TODOs , Planned content