Fix ch07 typos (#306)

* Fix one typo.

* Fix second typo.

* Fix additional typos.

* Spacing and sentence logic fix.

* Rollback of wrongly changed capitalization.
Jakub Duchniewicz 2020-11-29 15:54:05 +01:00 committed by GitHub
parent 83a40acc4d
commit fda223b147

@@ -727,7 +727,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Mixup, introduced in the 2017 paper [\"*mixup*: Beyond Empirical Risk Minimization\"](https://arxiv.org/abs/1710.09412) byHongyi Zhang et al., is a very powerful data augmentation technique that can provide dramatically higher accuracy, especially when you don't have much data and don't have a pretrained model that was trained on data similar to your dataset. The paper explains: \"While data augmentation consistently leads to improved generalization, the procedure is dataset-dependent, and thus requires the use of expert knowledge.\" For instance, it's common to flip images as part of data augmentation, but should you flip only horizontally, or also vertically? The answer is that it depends on your dataset. In addition, if flipping (for instance) doesn't provide enough data augmentation for you, you can't \"flip more.\" It's helpful to have data augmentation techniques where you can \"dial up\" or \"dial down\" the amount of change, to see what works best for you.\n",
"Mixup, introduced in the 2017 paper [\"*mixup*: Beyond Empirical Risk Minimization\"](https://arxiv.org/abs/1710.09412) by Hongyi Zhang et al., is a very powerful data augmentation technique that can provide dramatically higher accuracy, especially when you don't have much data and don't have a pretrained model that was trained on data similar to your dataset. The paper explains: \"While data augmentation consistently leads to improved generalization, the procedure is dataset-dependent, and thus requires the use of expert knowledge.\" For instance, it's common to flip images as part of data augmentation, but should you flip only horizontally, or also vertically? The answer is that it depends on your dataset. In addition, if flipping (for instance) doesn't provide enough data augmentation for you, you can't \"flip more.\" It's helpful to have data augmentation techniques where you can \"dial up\" or \"dial down\" the amount of change, to see what works best for you.\n",
"\n",
"Mixup works as follows, for each image:\n",
"\n",
@@ -867,7 +867,7 @@
"\n",
"With Mixup we no longer have that problem, because our labels will only be exactly 1 or 0 if we happen to \"mix\" with another image of the same class. The rest of the time our labels will be a linear combination, such as the 0.7 and 0.3 we got in the church and gas station example earlier.\n",
"\n",
"One issue with this, however, is that Mixup is \"accidentally\" making the labels bigger than 0, or smaller than 1. That is to say, we're not *explicitly* telling our model that we want to change the labels in this way. So, if we want to change to make the labels closer to, or further away from 0 and 1, we have to change the amount of Mixup—which also changes the amount of data augmentation, which might not be what we want. There is, however, a way to handle this more directly, which is to use *label smoothing*."
"One issue with this, however, is that Mixup is \"accidentally\" making the labels bigger than 0, or smaller than 1. That is to say, we're not *explicitly* telling our model that we want to change the labels in this way. So, if we want to make the labels closer to, or further away from 0 and 1, we have to change the amount of Mixup—which also changes the amount of data augmentation, which might not be what we want. There is, however, a way to handle this more directly, which is to use *label smoothing*."
]
},
{
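The second hunk ends by pointing to label smoothing. As a minimal sketch of that transformation under the assumption of one-hot targets (the helper name and the `eps=0.1` default are illustrative, not the book's code; fastai implements this idea as the `LabelSmoothingCrossEntropy` loss):

```python
# Illustrative label smoothing: hard 0/1 targets become eps/C and 1 - eps + eps/C.
import torch

def smooth_labels(y_onehot, eps=0.1):
    num_classes = y_onehot.size(-1)
    return y_onehot * (1 - eps) + eps / num_classes
```

Unlike Mixup, this moves the labels away from exactly 0 and 1 directly, without also changing how much the images themselves are augmented.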