Update 05_pet_breeds.ipynb

Small changes: 

- the first chapter, we didn't specified -->> the first chapter, we didn't specify

- were looking for the validation loss -->> we're looking for the validation loss
SOVIETIC-BOSS88 2020-04-18 14:30:57 +02:00 committed by GitHub
parent 3a869940ee
commit dcf56c0c1e


@@ -1604,7 +1604,7 @@
"- one order of magnitude less than where the minimum loss was achieved (i.e. the minimum divided by 10)\n",
"- the last point where the loss was clearly decreasing. \n",
"\n",
"The Learning Rate Finder computes those points on the curve to help you. Both these rules usually give around the same value. In the first chapter, we didn't specified a learning rate, using the default value from the fastai library (which is 1e-3)."
"The Learning Rate Finder computes those points on the curve to help you. Both these rules usually give around the same value. In the first chapter, we didn't specify a learning rate, using the default value from the fastai library (which is 1e-3)."
]
},
{
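The two rules above map directly onto fastai's learning rate finder. Below is a minimal sketch, assuming the `dls` DataLoaders built earlier in this notebook; the exact return value of `lr_find` (and whether the learner constructor is `cnn_learner` or `vision_learner`) varies with the fastai version, so treat the suggested values as a starting point rather than the notebook's exact code.

```python
from fastai.vision.all import *

# Assumes `dls` is the pet-breeds DataLoaders created earlier in the notebook.
learn = cnn_learner(dls, resnet34, metrics=error_rate)

# Plot loss against learning rate; recent fastai versions also return
# suggested values (e.g. the minimum divided by 10, or the steepest slope).
learn.lr_find()

# Pick a value from the plot near the last clearly-decreasing region,
# e.g. around 3e-3, instead of relying on the default 1e-3.
learn.fine_tune(2, base_lr=3e-3)
```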
@@ -2276,7 +2276,7 @@
"source": [
"Often you will find that you are limited by time, rather than generalisation and accuracy, when choosing how many epochs to train for. So your first approach to training should be to simply pick a number of epochs that will train in the amount of time that you are happy to wait for. Have a look at the training and validation loss plots, like showed above, and in particular your metrics, and if you see that they are still getting better even in your final epochs, then you know that you have not trained for too long.\n",
"\n",
"On the other hand, you may well see that the metrics you have chosen are really getting worse at the end of training. Remember, it's not just that were looking for the validation loss to get worse, but your actual metrics. Your validation loss will first of all during training get worse because it gets overconfident, and only later will get worse because it is incorrectly memorising the data. We only care in practice about the latter issue. Our loss function is just something, remember, that we used to allow our optimiser to have something it could differentiate and optimise; it's not actually the thing we care about in practice.\n",
"On the other hand, you may well see that the metrics you have chosen are really getting worse at the end of training. Remember, it's not just that we're looking for the validation loss to get worse, but your actual metrics. Your validation loss will first of all during training get worse because it gets overconfident, and only later will get worse because it is incorrectly memorising the data. We only care in practice about the latter issue. Our loss function is just something, remember, that we used to allow our optimiser to have something it could differentiate and optimise; it's not actually the thing we care about in practice.\n",
"\n",
"Before the days of 1cycle training it was very common to save the model at the end of each epoch, and then select whichever model had the best accuracy, out of all of the models saved in each epoch. This is known as *early stopping*. However, with one cycle training, it is very unlikely to give you the best answer, because those epochs in the middle occur before the learning rate has had a chance to reach the small values, where it can really find the best result. Therefore, if you find that you have overfit, what you should actually do is to retrain your model from scratch, and this time select a total number of epochs based on where your previous best results were found.\n",
"\n",