Merge pull request #183 from SOVIETIC-BOSS88/patch-16
Update 17_foundations.ipynb Small typo
Commit: 5e169675cf
@@ -1774,7 +1774,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We've seen that PyTorch computes all the gradient we need with a magic call to `loss.backward`, but let's explore what's happening behind the scenes.\n",
+"We've seen that PyTorch computes all the gradients we need with a magic call to `loss.backward`, but let's explore what's happening behind the scenes.\n",
 "\n",
 "Now comes the part where we need to compute the gradients of the loss with respect to all the weights of our model, so all the floats in `w1`, `b1`, `w2`, and `b2`. For this, we will need a bit of math—specifically the *chain rule*. This is the rule of calculus that guides how we can compute the derivative of a composed function:\n",
 "\n",