Some typos in chapter 10
This commit is contained in:
parent
f524b58c53
commit
05faaeb4de
20
10_nlp.ipynb
@@ -103,7 +103,7 @@
     "\n",
     "- **Tokenization**:: convert the text into a list of words (or characters, or substrings, depending on the granularity of your model)\n",
     "- **Numericalization**:: make a list of all of the unique words which appear (the vocab), and convert each word into a number, by looking up its index in the vocab\n",
-    "- **Language model data loader** creation:: fastai provides an `LMDataLoader` class which automatically handles creating a dependent variable which is offset from the independent variable buy one token. It also handles some important details, such as how to shuffle the training data in such a way that the dependent and independent variables maintain their structure as required\n",
+    "- **Language model data loader** creation:: fastai provides an `LMDataLoader` class which automatically handles creating a dependent variable which is offset from the independent variable by one token. It also handles some important details, such as how to shuffle the training data in such a way that the dependent and independent variables maintain their structure as required\n",
     "- **Language model** creation:: we need a special kind of model which does something we haven't seen before: handles input lists which could be arbitrarily big or small. There are a number of ways to do this; in this chapter we will be using a *recurrent neural network*. We will get to the details of this in the <<chapter_nlp_dive>>, but for now, you can think of it as just another deep neural network.\n",
     "\n",
     "Let's take a look at how each step works in detail."
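The offset-by-one behavior the corrected line describes can be sketched as follows. This is illustrative only: `lm_samples` is a hypothetical helper, not fastai's API, and the real `LMDataLoader` additionally handles batching and structure-preserving shuffling.

```python
def lm_samples(tokens, seq_len):
    """Yield (independent, dependent) pairs, the dependent offset by one token."""
    for i in range(0, len(tokens) - seq_len, seq_len):
        x = tokens[i : i + seq_len]          # independent variable
        y = tokens[i + 1 : i + seq_len + 1]  # same sequence, shifted by one
        yield x, y

stream = list(range(10))  # stand-in for a numericalized token stream
pairs = list(lm_samples(stream, 3))
print(pairs[0])  # ([0, 1, 2], [1, 2, 3])
```

The model thus learns to predict, at each position, the very next token in the corpus.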
@@ -347,7 +347,7 @@
     "\n",
     "Here is a brief summary of what each does:\n",
     "\n",
-    "- `fix_html`:: replace special HTML characters by a readable version (IMDb reviwes have quite a few of them for instance) ;\n",
+    "- `fix_html`:: replace special HTML characters by a readable version (IMDb reviews have quite a few of them for instance) ;\n",
     "- `replace_rep`:: replace any character repeated three times or more by a special token for repetition (xxrep), the number of times it's repeated, then the character ;\n",
     "- `replace_wrep`:: replace any word repeated three times or more by a special token for word repetition (xxwrep), the number of times it's repeated, then the word ;\n",
     "- `spec_add_spaces`:: add spaces around / and # ;\n",
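The `replace_rep` rule in the list above can be approximated with a short regex sketch. This is a hypothetical re-implementation based on the rule's description in the text (token name `xxrep`, then the count, then the character), not fastai's actual source:

```python
import re

def replace_rep(text):
    """Replace any character repeated 3+ times by 'xxrep <count> <char>'."""
    def _sub(m):
        return f" xxrep {len(m.group(0))} {m.group(1)} "
    # (\S)\1\1+ matches a non-space character followed by itself 2+ more times
    return re.sub(r"(\S)\1\1+", _sub, text)

print(replace_rep("soooo cool!!!!"))  # s xxrep 4 o  cool xxrep 4 !
```

Collapsing repetitions this way keeps the vocab small while still letting the model see that emphasis was present.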
@@ -1276,7 +1276,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "As we have seen at the beginning of this chapter to train a state-of-the-art text classifier using transfer learning will take two steps: first we need to fine-tune our langauge model pretrained on Wikipedia to the corpus of IMDb reviews, then we can use that model to train a classifier.\n",
+    "As we have seen at the beginning of this chapter to train a state-of-the-art text classifier using transfer learning will take two steps: first we need to fine-tune our language model pretrained on Wikipedia to the corpus of IMDb reviews, then we can use that model to train a classifier.\n",
     "\n",
     "As usual, let's start with assembling our data."
    ]
@@ -1515,7 +1515,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "We can them finetune the model after unfreezing:"
+    "We can then finetune the model after unfreezing:"
    ]
   },
   {
@@ -2251,6 +2251,18 @@
    "display_name": "Python 3",
    "language": "python",
    "name": "python3"
   },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.7.4"
+  }
  },
  "nbformat": 4,