Ch17 fixes (#323)
* Fix one typo.
* Fix second typo.
* Fix additional typos.
* Fix typos.
* Add one more typo fix.
* Move tensor dim correction.
parent cf9fae191c
commit 0cde386c56
@@ -44,7 +44,7 @@
"source": [
"This chapter begins a journey where we will dig deep into the internals of the models we used in the previous chapters. We will be covering many of the same things we've seen before, but this time around we'll be looking much more closely at the implementation details, and much less closely at the practical issues of how and why things are as they are.\n",
"\n",
-"We will build everything from scratch, only using basic indexing into a tensor. We]ll write a neural net from the ground up, then implement backpropagation manually, so we know exactly what's happening in PyTorch when we call `loss.backward`. We'll also see how to extend PyTorch with custom *autograd* functions that allow us to specify our own forward and backward computations."
+"We will build everything from scratch, only using basic indexing into a tensor. We'll write a neural net from the ground up, then implement backpropagation manually, so we know exactly what's happening in PyTorch when we call `loss.backward`. We'll also see how to extend PyTorch with custom *autograd* functions that allow us to specify our own forward and backward computations."
]
},
{
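
The paragraph changed in this hunk mentions extending PyTorch with custom *autograd* functions. As a rough sketch of what that looks like (not code from the notebook; the class name `MyReLU` and the toy usage are made up), a custom `torch.autograd.Function` just supplies a `forward` and a `backward` static method:

```python
import torch

class MyReLU(torch.autograd.Function):
    "A toy custom autograd function: forward clamps at zero, backward masks the gradient."
    @staticmethod
    def forward(ctx, inp):
        ctx.save_for_backward(inp)              # stash what backward will need
        return inp.clamp(min=0.)

    @staticmethod
    def backward(ctx, grad_output):
        inp, = ctx.saved_tensors
        return grad_output * (inp > 0).float()  # gradient flows only where the input was positive

x = torch.randn(4, requires_grad=True)
loss = MyReLU.apply(x).sum()
loss.backward()                                 # calls our backward and fills x.grad
```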
@@ -1074,7 +1074,7 @@
"\n",
"```\n",
"Image (3d tensor): 256 x 256 x 3\n",
-"Scale (1d tensor): (1) 256 x 256\n",
+"Scale (2d tensor): (1) 256 x 256\n",
"Error\n",
"```\n",
"\n",
@@ -1200,7 +1200,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"As we saw in <<chapter_mnist_basics>>, to train a model, we will need to compute all the gradients of a given a loss with respect to its parameters, which is known as the *backward pass*. The *forward pass* is where we compute the output of the model on a given input, based on the matrix products. As we define our first neural net, we will also delve into the problem of properly initializing the weights, which is crucial for making training start properly."
+"As we saw in <<chapter_mnist_basics>>, to train a model, we will need to compute all the gradients of a given loss with respect to its parameters, which is known as the *backward pass*. The *forward pass* is where we compute the output of the model on a given input, based on the matrix products. As we define our first neural net, we will also delve into the problem of properly initializing the weights, which is crucial for making training start properly."
]
},
{
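
The paragraph fixed here describes the forward pass as matrix products, the backward pass as computing gradients of a loss with respect to the parameters, and the need for careful weight initialization. A minimal sketch of those three ideas together (the shapes and the 1/sqrt(n_in) scaling are illustrative assumptions, not the notebook's code):

```python
import math
import torch

x = torch.randn(200, 100)                  # 200 hypothetical inputs with 100 features
w = torch.randn(100, 50) / math.sqrt(100)  # scaled init keeps activations well behaved at the start
b = torch.zeros(50)
w.requires_grad_()
b.requires_grad_()

out = x @ w + b                            # forward pass: a matrix product plus bias
loss = out.pow(2).mean()                   # a stand-in loss for the sketch
loss.backward()                            # backward pass: gradients land in w.grad and b.grad
print(w.grad.shape, b.grad.shape)
```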
@@ -2248,7 +2248,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"The structure used to build a more complex model that takes advantage of those `Function`s is a `torch.nn.Module`. This is the base structure for all models, and all the neural nets you have seen up until now were from that class. It mostly helps to register all the trainable parameters, which as we've seen can be used in the training loop.\n",
+"The structure used to build a more complex model that takes advantage of those `Function`s is a `torch.nn.Module`. This is the base structure for all models, and all the neural nets you have seen up until now inherited from that class. It mostly helps to register all the trainable parameters, which as we've seen can be used in the training loop.\n",
"\n",
"To implement an `nn.Module` you just need to:\n",
"\n",
@@ -42,7 +42,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"Now that we know how to build up pretty much anything from scratch, let's use that knowledge to create entirely new (and very useful!) functionality: the *class activation map*. It gives a us some insight into why a CNN made the predictions it did.\n",
+"Now that we know how to build up pretty much anything from scratch, let's use that knowledge to create entirely new (and very useful!) functionality: the *class activation map*. It gives us some insight into why a CNN made the predictions it did.\n",
"\n",
"In the process, we'll learn about one handy feature of PyTorch we haven't seen before, the *hook*, and we'll apply many of the concepts introduced in the rest of the book. If you want to really test out your understanding of the material in this book, after you've finished this chapter, try putting it aside and recreating the ideas here yourself from scratch (no peeking!)."
]
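
This hunk's text introduces the PyTorch *hook* that the class activation map relies on. As a rough illustration with a toy model (nothing here is the notebook's CNN), a forward hook can store a layer's activations without changing the model's output:

```python
import torch
from torch import nn

# A toy stand-in for a CNN body followed by a classification head.
body = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
model = nn.Sequential(body, head)

acts = {}
def store_acts(module, inputs, output):
    acts['body'] = output.detach()               # keep the activations the hook saw

handle = body.register_forward_hook(store_acts)  # runs after body computes its output
_ = model(torch.randn(1, 3, 32, 32))
handle.remove()                                  # remove the hook once it's no longer needed
print(acts['body'].shape)                        # torch.Size([1, 8, 32, 32])
```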