This commit is contained in:
Sylvain Gugger 2020-03-21 08:38:44 -07:00
parent 070c0cda00
commit 90a8e316e2
2 changed files with 35 additions and 0 deletions


@ -2168,6 +2168,24 @@
"learn.fine_tune(4, 1e-2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#clean\n",
    "If you hit a \"CUDA out of memory\" error after running this cell, click on the Kernel menu, then Restart. Then, instead of executing the cell above, copy and paste the following code into it:\n",
"\n",
"```\n",
"from fastai2.text.all import *\n",
"\n",
"dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test', bs=32)\n",
"learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)\n",
"learn.fine_tune(4, 1e-2)\n",
"```\n",
"\n",
    "This reduces the batch size to 32 (we will explain this later). If you keep hitting the same error, change the 32 to 16."
]
},
{
"cell_type": "markdown",
"metadata": {},


@ -1072,6 +1072,23 @@
"learn.fine_tune(4, 1e-2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "If you hit a \"CUDA out of memory\" error after running this cell, click on the Kernel menu, then Restart. Then, instead of executing the cell above, copy and paste the following code into it:\n",
"\n",
"```\n",
"from fastai2.text.all import *\n",
"\n",
"dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test', bs=32)\n",
"learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)\n",
"learn.fine_tune(4, 1e-2)\n",
"```\n",
"\n",
    "This reduces the batch size to 32 (we will explain this later). If you keep hitting the same error, change the 32 to 16."
]
},
{
"cell_type": "code",
"execution_count": null,