Follow-up
@@ -304,10 +304,25 @@
 }
 ],
 "metadata": {
 "jupytext": {
 "split_at_heading": true
 },
 "kernelspec": {
 "display_name": "Python 3",
 "language": "python",
 "name": "python3"
 },
 "language_info": {
 "codemirror_mode": {
 "name": "ipython",
 "version": 3
 },
 "file_extension": ".py",
 "mimetype": "text/x-python",
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
 "version": "3.7.4"
 }
 },
 "nbformat": 4,
@@ -1553,7 +1553,7 @@
 "metadata": {},
 "source": [
 "1. If the dataset for your project is so big and complicated that working with it takes a significant amount of time, what should you do?\n",
-"1. Why do we concatenating the documents in our dataset before creating a language model?\n",
+"1. Why do we concatenate the documents in our dataset before creating a language model?\n",
 "1. To use a standard fully connected network to predict the fourth word given the previous three words, what two tweaks do we need to make?\n",
 "1. How can we share a weight matrix across multiple layers in PyTorch?\n",
 "1. Write a module which predicts the third word given the previous two words of a sentence, without peeking.\n",
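Two of the questionnaire items above (sharing a weight matrix across layers, and predicting a word from the preceding two) can be illustrated together. A minimal sketch, not the book's solution — the class and parameter names (`TwoWordPredictor`, `vocab_sz`, `n_hidden`) are hypothetical; the key idea is that in PyTorch, weights are shared simply by calling the same `nn.Linear` module more than once in `forward`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoWordPredictor(nn.Module):
    """Predict the third word of a sentence from the previous two,
    reusing one hidden-to-hidden weight matrix (self.h_h) at every step."""

    def __init__(self, vocab_sz, n_hidden):
        super().__init__()
        self.i_h = nn.Embedding(vocab_sz, n_hidden)  # input word -> hidden
        self.h_h = nn.Linear(n_hidden, n_hidden)     # shared across steps
        self.h_o = nn.Linear(n_hidden, vocab_sz)     # hidden -> output logits

    def forward(self, x):
        # x has shape (batch, 2): indices of the two preceding words
        h = F.relu(self.h_h(self.i_h(x[:, 0])))  # first call to self.h_h
        h = h + self.i_h(x[:, 1])                # fold in the second word
        h = F.relu(self.h_h(h))                  # same self.h_h: shared weights
        return self.h_o(h)

model = TwoWordPredictor(vocab_sz=100, n_hidden=64)
logits = model(torch.tensor([[3, 17], [8, 42]]))
print(logits.shape)  # torch.Size([2, 100])
```

Because both calls go through the same `nn.Linear` instance, a single weight matrix receives gradients from every step, which is the mechanism the questionnaire is probing.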
@@ -1626,6 +1626,18 @@
 "display_name": "Python 3",
 "language": "python",
 "name": "python3"
 },
 "language_info": {
 "codemirror_mode": {
 "name": "ipython",
 "version": 3
 },
 "file_extension": ".py",
 "mimetype": "text/x-python",
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
 "version": "3.7.4"
 }
 },
 "nbformat": 4,