diff --git a/03_ethics.ipynb b/03_ethics.ipynb index 3f03af4..5479ac5 100644 --- a/03_ethics.ipynb +++ b/03_ethics.ipynb @@ -386,7 +386,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\"A" + "\"A" ] }, { diff --git a/04_mnist_basics.ipynb b/04_mnist_basics.ipynb index 993d2a6..132ddb1 100644 --- a/04_mnist_basics.ipynb +++ b/04_mnist_basics.ipynb @@ -67,8 +67,6 @@ "source": [ "The story of deep learning is one of tenacity and grit from a handful of dedicated researchers. After early hopes (and hype!) neural networks went out of favor in the 1990's and 2000's, and just a handful of researchers kept trying to make them work well. Three of them, Yann Lecun, Yoshua Bengio and Geoffrey Hinton were awarded the highest honor in computer science, the Turing Award (generally considered the \"Nobel Prize of computer science\") after triumphing despite the deep skepticism and disinterest of the wider machine learning and statistics community.\n", "\n", - "\"Picture\n", - "\n", "Geoff Hinton has told of how even academic papers showing dramatically better results than anything previously published would be rejected from top journals and conferences, just because they used a neural network. Yann Lecun's work on convolutional neural networks, which we will study in the next section, showed that these models could read hand-written text--something that had never been achieved before. However his breakthrough was ignored by most researchers, even as it was used commercially to read 10% of the checks in the US!\n", "\n", "In addition to these three Turing Award winners, there are many other researchers who have battled to get us to where we are today. For instance, Jurgen Schmidhuber (who many believe should have shared in the Turing Award) pioneered many important ideas, including working with his student Sepp Hochreiter on the *LSTM* architecture (widely used for speech recognition and other text modeling tasks, and used in the IMDB example in <>). Perhaps most important of all, Paul Werbos in 1974 invented back-propagation for neural networks, the technique shown in this chapter and used universally for training neural networks ([Werbos 1994](https://books.google.com/books/about/The_Roots_of_Backpropagation.html?id=WdR3OOM2gBwC)). His development was almost entirely ignored for decades, but today it is the most important foundation of modern AI.\n", diff --git a/05_pet_breeds.ipynb b/05_pet_breeds.ipynb index d18b601..b5a6da4 100644 --- a/05_pet_breeds.ipynb +++ b/05_pet_breeds.ipynb @@ -2042,7 +2042,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\"Impact" + "\"Impact" ] }, { diff --git a/06_multicat.ipynb b/06_multicat.ipynb index 4a1b1a6..a03b395 100644 --- a/06_multicat.ipynb +++ b/06_multicat.ipynb @@ -269,10 +269,97 @@ ] }, { - "cell_type": "markdown", + "cell_type": "code", + "execution_count": null, "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "
\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
a
01
12
23
34
\n", + "
" + ], + "text/plain": [ + " a\n", + "0 1\n", + "1 2\n", + "2 3\n", + "3 4" + ] + }, + "execution_count": null, + "metadata": {}, + "output_type": "execute_result" + } + ], "source": [ - "TK" + "df1 = pd.DataFrame()\n", + "df1['a'] = [1,2,3,4]\n", + "df1" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "0 11\n", + "1 22\n", + "2 33\n", + "3 44\n", + "dtype: int64" + ] + }, + "execution_count": null, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "df1['b'] = [10, 20, 30, 40]\n", + "df1['a'] + df1['b']" ] }, { diff --git a/07_sizing_and_tta.ipynb b/07_sizing_and_tta.ipynb index dcd0749..2c5bdee 100644 --- a/07_sizing_and_tta.ipynb +++ b/07_sizing_and_tta.ipynb @@ -615,13 +615,6 @@ "> jargon: test time augmentation (TTA): during inference or validation, creating multiple versions of each image, using data augmentation, and then taking the average or maximum of the predictions for each augmented version of the image" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "TK pic of TTA" - ] - }, { "cell_type": "markdown", "metadata": {}, diff --git a/08_collab.ipynb b/08_collab.ipynb index 1322805..be2e586 100644 --- a/08_collab.ipynb +++ b/08_collab.ipynb @@ -2221,7 +2221,16 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "TK Add a conclusion" + "## Conclusion" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For our first non computer vision application, we looked at recommendation systems and saw how gradient descent can learn intrinsic factors or bias about items from a history of ratings. Those can then give us information about the data. \n", + "\n", + "We also built our first model in PyTorch. We will do a lot more of this in the next section of the book, but first, let's finish our dive into the other general applications of deep learning, continuing with tabular data." ] }, { diff --git a/clean/06_multicat.ipynb b/clean/06_multicat.ipynb index c2e8d64..2e24878 100644 --- a/clean/06_multicat.ipynb +++ b/clean/06_multicat.ipynb @@ -191,6 +191,100 @@ "df['fname']" ] }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "
\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
a
01
12
23
34
\n", + "
" + ], + "text/plain": [ + " a\n", + "0 1\n", + "1 2\n", + "2 3\n", + "3 4" + ] + }, + "execution_count": null, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "df1 = pd.DataFrame()\n", + "df1['a'] = [1,2,3,4]\n", + "df1" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "0 11\n", + "1 22\n", + "2 33\n", + "3 44\n", + "dtype: int64" + ] + }, + "execution_count": null, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "df1['b'] = [10, 20, 30, 40]\n", + "df1['a'] + df1['b']" + ] + }, { "cell_type": "markdown", "metadata": {}, diff --git a/clean/08_collab.ipynb b/clean/08_collab.ipynb index 3b311d7..b352c45 100644 --- a/clean/08_collab.ipynb +++ b/clean/08_collab.ipynb @@ -1631,6 +1631,13 @@ "### End sidebar" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Conclusion" + ] + }, { "cell_type": "markdown", "metadata": {}, diff --git a/images/turing.jpg b/images/turing.jpg deleted file mode 100644 index 85f9542..0000000 Binary files a/images/turing.jpg and /dev/null differ diff --git a/images/turing_300.jpg b/images/turing_300.jpg deleted file mode 100644 index dc63913..0000000 Binary files a/images/turing_300.jpg and /dev/null differ