From 7d2ae8e167c3cde8c248fcc0497aee92a58f00ae Mon Sep 17 00:00:00 2001
From: SOVIETIC-BOSS88
Date: Wed, 3 Jun 2020 22:08:32 +0200
Subject: [PATCH] Update 17_foundations.ipynb

---
 17_foundations.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/17_foundations.ipynb b/17_foundations.ipynb
index cb626de..ab0ca8a 100644
--- a/17_foundations.ipynb
+++ b/17_foundations.ipynb
@@ -1774,7 +1774,7 @@
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-    "We've seen that PyTorch computes all the gradient we need with a magic call to `loss.backward`, but let's explore what's happening behind the scenes.\n",
+    "We've seen that PyTorch computes all the gradients we need with a magic call to `loss.backward`, but let's explore what's happening behind the scenes.\n",
     "\n",
     "Now comes the part where we need to compute the gradients of the loss with respect to all the weights of our model, so all the floats in `w1`, `b1`, `w2`, and `b2`. For this, we will need a bit of math—specifically the *chain rule*. This is the rule of calculus that guides how we can compute the derivative of a composed function:\n",
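
The cell touched by this patch describes how `loss.backward` applies the chain rule to fill in gradients for `w1`, `b1`, `w2`, and `b2`. As a reference for reviewers, here is a minimal sketch (not part of the patch) of that kind of two-layer forward and backward pass; the tensor shapes and the MSE loss are assumptions chosen only to make the snippet runnable, while the parameter names follow the notebook.

```python
import torch

x = torch.randn(4, 3)                       # a small assumed batch of inputs
y = torch.randn(4, 1)                       # assumed targets
w1 = torch.randn(3, 5, requires_grad=True)
b1 = torch.zeros(5, requires_grad=True)
w2 = torch.randn(5, 1, requires_grad=True)
b2 = torch.zeros(1, requires_grad=True)

# Forward pass: a composed function loss(lin2(relu(lin1(x))))
l1 = x @ w1 + b1
h = l1.clamp_min(0.)                        # ReLU
out = h @ w2 + b2
loss = ((out - y) ** 2).mean()              # MSE, assumed for illustration

# backward() walks the composition in reverse, applying the chain rule at
# each step and populating .grad for every tensor with requires_grad=True.
loss.backward()
print(w1.grad.shape, b1.grad.shape, w2.grad.shape, b2.grad.shape)
```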