cosmetic fixes 01

commit afc00ace86 (parent 284a24325b)
@@ -36,7 +36,7 @@
 "\n",
 "You will see bits in the text like this: \"TK: figure showing bla here\" or \"TK: expand introduction\". \"TK\" is used to make places where we know something is missing and we will add them. This does not alter any of the core content as those are usually small parts/figures that are relatively independent form the flow and self-explanatory.\n",
 "\n",
-"Throughout the book, the version of the fastai library used is version 2. That version is not yet officially released and is for now separate from the main project. You can find it [here](https://github.com/fastai/fastai2)."
+"Throughout the book, the version of the fastai library used is version 2. That version is not yet officially released and is for now separate from the main project. You can find it [here](https://github.com/fastai/fastai2). TK book website mention here also https://book.fast.ai"
 ]
 },
 {
@@ -182,7 +182,7 @@
 "\n",
 "We will learn in this book about how modern neural networks handle each of these requirement. In the 1980's most models were built with a second layer of neurons, thus avoiding the problem that had been identified by Minsky (this was their \"pattern of connectivity among units\", to use the framework above). And indeed, neural networks were widely used during the 80s and 90s for real, practical projects. However, again a misunderstanding of the theoretical issues held back the field. In theory, adding just one extra layer of neurons was enough to allow any mathematical model to be approximated with these neural networks, but in practice such networks were often too big and slow to be useful.\n",
 "\n",
-"Although there were researchers 30 years ago showing that to get good performance in practice you need to use even more layers of neurons, it is only in the last decade that this has been more widely appreciated. Thanks to this understanding, along with the improved ability to use these in practice thanks to improvements in computer hardware, increases in data availability, and algorithmic tweaks that allow neural networks to be trained faster and more easily, neural networks are now finally living out their potential. We now have what Rosenblatt had promised: \"a machine capable of perceiving, recognizing and identifying its surroundings without any human training or control\". And you will learn how to build them in this book."
+"Although researchers showed 30 years ago that to get good practical performance you need to use even more layers of neurons, it is only in the last decade that this has been more widely appreciated. Neural networks are now finally living up to their potential, thanks to this understanding, as well as to the improved ability to use more layers afforded by improvements in computer hardware, increases in data availability, and algorithmic tweaks that allow neural networks to be trained faster and more easily. We now have what Rosenblatt had promised: \"a machine capable of perceiving, recognizing and identifying its surroundings without any human training or control\". And you will learn how to build them in this book."
 ]
 },
 {
@@ -233,7 +233,7 @@
 "source": [
 "Since we are going to be spending a lot of time together, let's get to know each other a bit… We are Sylvain and Jeremy, your guides on this journey. We hope that you will find us well suited for this position.\n",
 "\n",
-"Jeremy has been using and teaching machine learning for around 30 years. He started using neural networks 25 years ago. During this time he has led many companies and projects which have machine learning at their core, including founding the first company to focus on deep learning and medicine, Enlitic, and taking on the role of Pres and chief scientist of the world's largest machine learning community, Kaggle. He is the co-founder, along with Dr Rachel Thomas, of fast.ai, the organisation which built the course that this book is based on.\n",
+"Jeremy has been using and teaching machine learning for around 30 years. He started using neural networks 25 years ago. During this time he has led many companies and projects which have machine learning at their core, including founding the first company to focus on deep learning and medicine, Enlitic, and taking on the role of President and Chief Scientist of the world's largest machine learning community, Kaggle. He is the co-founder, along with Dr Rachel Thomas, of fast.ai, the organisation which built the course that this book is based on.\n",
 "\n",
 "From time to time you will hear directly from us, in sidebars like this one from Jeremy:"
 ]
@@ -310,7 +310,7 @@
 "source": [
 "The hardest part of deep learning is artisanal: how do you know if you've got enough data; whether it is in the right format; if your model is training properly; and if it's not, what should you do about it? That is why we believe in learning by doing. As with basic data science skills, with deep learning you only get better through practical experience. Trying to spend too much time on the theory can be counterproductive. The key is to just code and try to solve problems: the theory can come later, when you have context and motivation.\n",
 "\n",
-"There will be times when the journey will feel hard. Times where you feel stuck. Don't give up! Rewind through the book to find the last bit where you definitely weren't stuck, and then read slowly through from there to find the first thing that isn't clear. Then try some code experiments yourself, and Google around for more tutorials on whatever the issue you're stuck with is--often you'll find some different angle on the material which might help it to click. Also, it's to not understand everything on first reading. Trying to understand the material serially before proceeding can sometimes be hard. Sometimes things click into place after you got more context from parts down the road, from having a bigger picture. So if you do get stuck on a section, trying moving on anyway and make a note to come back to it later.\n",
+"There will be times when the journey will feel hard. Times where you feel stuck. Don't give up! Rewind through the book to find the last bit where you definitely weren't stuck, and then read slowly through from there to find the first thing that isn't clear. Then try some code experiments yourself, and Google around for more tutorials on whatever the issue you're stuck with is--often you'll find some different angle on the material which might help it to click. Also, it's expected and normal not to understand everything (especially the code) on a first reading. Trying to understand the material serially before proceeding can sometimes be hard: sometimes things click into place only after you get more context from parts further along, once you have a bigger picture. So if you do get stuck on a section, try moving on anyway and make a note to come back to it later.\n",
 "\n",
 "Remember, you don't need any particular academic background to succeed at deep learning. Many important breakthroughs are made in research and industry by folks without a PhD, such as the paper [Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks](https://arxiv.org/abs/1511.06434), one of the most influential papers of the last decade, with over 5000 citations, which was written by Alec Radford when he was an under-graduate. Even at Tesla, where they're trying to solve the extremely tough challenge of making a self-driving car, CEO [Elon Musk says](https://twitter.com/elonmusk/status/1224089444963311616):\n",
 "\n",
@@ -335,7 +335,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Whether you're excited to identify if plants are diseased from pictures of their leaves, auto-generate knitting patterns, diagnose TB from x-rays, or determine when a raccoon is using your cat door, we will get you using deep learning on your own problems (via pre-trained models from others) as quickly as possible, and then will progressively drill into more details. You'll learn how to use deep learning to solve your own problems at state-of-the-art accuracy within the first 30 minutes of the next chapter! (And feel free to skip straight to there now if you're dying to get coding right away.) There is a pernicious myth out there that you need to have computing resources and datasets the size of those at Google to be able to do deep learning, and it's not true.\n",
+"Whether you're excited to identify if plants are diseased from pictures of their leaves, auto-generate knitting patterns, diagnose TB from x-rays, or determine when a raccoon is using your cat door, we will get you using deep learning on your own problems (via pre-trained models from others) as quickly as possible, and then will progressively drill into more details. You'll learn how to use deep learning to solve your own problems at state-of-the-art accuracy within the first 30 minutes of the next chapter! (And feel free to skip straight there now if you're dying to get coding right away.) There is a pernicious myth out there that you need to have computing resources and datasets the size of those at Google to be able to do deep learning, and it's not true.\n",
 "\n",
 "So, what sort of tasks make for good test cases? You could train your model to distinguish between Picasso and Monet paintings or to pick out pictures of your daughter instead of pictures of your son. It helps to focus on your hobbies and passions–setting yourself four of five little projects rather than striving to solve a big, grand problem tends to work better when you're getting started. Since it is easy to get stuck, trying to be too ambitious too early can often backfire. Then, once you've got the basics mastered, aim to complete something you're really proud of!"
 ]
@@ -344,7 +344,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"> J: Deep learning can be set to work on almost any problem. For instance, my first startup was a company called FastMail, which provided enhanced email services when it launched in 1999 (and still does to this day). In 2002 I set it up to use a primitive form of deep learning – single-layer neural networks – to help to categorise emails and stop customers from receiving spam."
+"> J: Deep learning can be set to work on almost any problem. For instance, my first startup was a company called FastMail, which provided enhanced email services when it launched in 1999 (and still does to this day). In 2002 I set it up to use a primitive form of deep learning – single-layer neural networks – to help categorise emails and stop customers from receiving spam."
 ]
 },
 {
@@ -533,7 +533,7 @@
 "source": [
 "When you click on a cell it will be selected. Click on the cell now which begins with the line \"# CLICK ME\". The first character in that line represents a comment in Python, so is ignored when executing the cell. The rest of the cell is, believe it or not, a complete system for creating and training a state-of-the-art model for recognizing cats versus dogs. So, let's train it now! To do so, just press shift-enter on your keyboard, or press the \"play\" button on the toolbar. Then, wait a few minutes while the following things happen:\n",
 "\n",
-"1. A dataset containing called the [Oxford-IIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/) that contains 7,349 images of cats and dogs from 37 different breeds will be downloaded from the fast.ai datasets collection to your GPU server, and will then be extracted\n",
+"1. A dataset called the [Oxford-IIIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/), containing 7,349 images of cats and dogs from 37 different breeds, will be downloaded from the fast.ai datasets collection to the GPU server you are using, and will then be extracted\n",
 "2. A *pretrained model* will be downloaded from the Internet, which has already been trained on 1.3 million images, using a competition winning model\n",
 "3. The pretrained model will be *fine-tuned* using the latest advances in transfer learning, to create a model that is specially customised for recognising dogs and cats\n",
 "\n",
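For reference, the "# CLICK ME" cell that this hunk walks through looks roughly like the sketch below, reconstructed from the code fragments quoted later in this diff (with the pre-release library described above, the import comes from `fastai2.vision.all` rather than `fastai.vision.all`):

```python
# CLICK ME -- a complete system for training a cat-vs-dog recognizer
from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'   # download and extract the Oxford-IIIT Pet dataset

def is_cat(x): return x[0].isupper()    # the dataset's rule: cat breeds have capitalized filenames

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = cnn_learner(dls, resnet34, metrics=error_rate)  # pretrained model, fine-tuned below
learn.fine_tune(1)
```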
@@ -669,7 +669,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Jupyter will always print or show the result of the last line (if there is one). For instance, here is an example of cell that outputs an image:"
+"Jupyter will always print or show the result of the last line (if there is one). For instance, here is an example of a cell that outputs an image:"
 ]
 },
 {
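A minimal sketch of such a cell (the image path here is hypothetical; any local image works):

```python
from fastai.vision.all import *

img = PILImage.create('images/cat_example.jpg')  # hypothetical path to a local image
img.to_thumb(192)  # last expression in the cell, so Jupyter displays the image itself
```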
@@ -705,7 +705,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"So, how do we know if this model is any good? You can see the error rate (proportion of images that were incorrectly identified) printed as the last column of the table. As you can see, the model is nearly perfect, even although the training time was only a few seconds (not including the one-time downloading of dataset and pretrained model). In fact, the accuracy you've achieved already is far better than anybody had ever achieved just 10 years ago!\n",
+"So, how do we know if this model is any good? You can see the error rate (proportion of images that were incorrectly identified) printed as the second-to-last column of the table. As you can see, the model is nearly perfect, even though the training time was only a few seconds (not including the one-time downloading of the dataset and the pretrained model). In fact, the accuracy you've achieved already is far better than anybody had ever achieved just 10 years ago!\n",
 "\n",
 "Finally, let's check that this model actually works. Go and get a photo of a dog, or a cat; if you don't have one handy, just search Google images and download an image that you find there. Now execute the cell with `uploader` defined. It will output a button you can click, so you can select the image you want to classify."
 ]
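A hedged sketch of the `uploader` cell and the prediction step that follows it, assuming ipywidgets 7 (where `uploader.data[0]` holds the uploaded bytes) and the `learn` model trained above:

```python
import ipywidgets as widgets

uploader = widgets.FileUpload()
uploader  # in a notebook, this renders a clickable upload button

# in the next cell, after choosing an image:
img = PILImage.create(uploader.data[0])   # build an image from the uploaded bytes
is_cat, _, probs = uploader and learn.predict(img)
print(f"Is this a cat?: {is_cat}.")
print(f"Probability it's a cat: {probs[1].item():.6f}")
```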
@@ -1498,13 +1498,13 @@
 " label_func=is_cat, item_tfms=Resize(224))\n",
 "```\n",
 "\n",
-"The third line tells fastai what kind of dataset we have, and how it is structured. There are various different classes for different kinds of deep learning dataset and problem--here we're using `ImageDataLoaders`. The first part of the class name will generally be the type of data you have, such as image, or text. The second part will generally be the type of problem you are solving, such as classification, or regression.\n",
+"The fourth line tells fastai what kind of dataset we have, and how it is structured. There are various classes for different kinds of deep learning datasets and problems--here we're using `ImageDataLoaders`. The first part of the class name will generally be the type of data you have, such as image, or text. The second part will generally be the type of problem you are solving, such as classification, or regression.\n",
 "\n",
-"The other important piece of information that we have to tell fastai is how to get the labels from the dataset. Computer vision datasets are normally structured in such a way that the label for an image is part of the file name, or path, most commonly the parent folder name. Fastai comes with a number of standardized labelling methods, and ways to write your own. Here we define a function `is_cat` which labels cats based on a filename rule provided by the dataset creators.\n",
+"The other important piece of information that we have to tell fastai is how to get the labels from the dataset. Computer vision datasets are normally structured in such a way that the label for an image is part of the file name, or path, most commonly the parent folder name. Fastai comes with a number of standardized labelling methods, and ways to write your own. Here we define a function on the third line, `is_cat`, which labels cats based on a filename rule provided by the dataset creators.\n",
 "\n",
 "Finally, we define the `Transform`s that we need. A `Transform` contains code that is applied automatically during training; fastai includes many pre-defined `Transform`s, and adding new ones is as simple as creating a Python function. There are two kinds: `item_tfms` are applied to each item (in this case, each item is resized to a 224 pixel square); `batch_tfms` are applied to a *batch* of items at a time using the GPU, so they're particularly fast (we'll see many examples of these throughout this book).\n",
 "\n",
-"Why 224 pixels? This is the standard size for historical reasons (old pretrained models require this size exactly), but you can pass pretty much anything. If you increase the size, you'll often get a model with better results (since it will be able to focus on more details) but at the price of speed and memory consumption; or visa versa if you decrease the size. "
+"Why 224 pixels? This is the standard size for historical reasons (old pretrained models require this size exactly), but you can pass pretty much anything. If you increase the size, you'll often get a model with better results (since it will be able to focus on more details) but at the price of speed and memory consumption; or vice versa if you decrease the size. "
 ]
 },
 {
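Continuing the training cell sketched earlier, a hedged illustration of the two kinds of `Transform` this hunk describes (`aug_transforms` is fastai's standard augmentation helper; it is not part of this particular cell in the chapter):

```python
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat,
    item_tfms=Resize(224),        # item_tfms: applied per item, e.g. resize to a 224-pixel square
    batch_tfms=aug_transforms())  # batch_tfms: applied to a whole batch at a time on the GPU
```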
@@ -1540,7 +1540,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"**Overfitting is the most important and challenging single issue** when training for all machine learning practitioners, and all algorithms. As we will see, it is very easy to create a model that does a great job at making predictions on the exact data which it has been trained on, but it is much harder to make predictions on data that it has never seen before. And of course this is the data that will actually matter in practice. For instance, if you create a hand-written digit classifier (as we will very soon!) and use it to recognise numbers written on cheques, then you are never going to see any of the numbers that the model was trained on -- every cheque will have slightly different variations of writing to deal with. We will learn many methods to avoid overfitting in this book. However, you should only use those methods after you have confirmed that overfitting is actually occurring (i.e. you have actually observed the validation accuracy getting worse during training). We often see practitioners using over-fitting avoidance techniques even when they have enough data that they didn't need to do so, ending up with a model that could be less accurate than what they could have gotten."
+"**Overfitting is the single most important and challenging issue** when training, for all machine learning practitioners and all algorithms. As we will see, it is very easy to create a model that does a great job at making predictions on the exact data which it has been trained on, but it is much harder to make accurate predictions on data that it has never seen before. And of course this is the data that will actually matter in practice. For instance, if you create a hand-written digit classifier (as we will very soon!) and use it to recognise numbers written on cheques, then you are never going to see any of the numbers that the model was trained on -- every cheque will have slightly different variations of writing to deal with. We will learn many methods to avoid overfitting in this book. However, you should only use those methods after you have confirmed that overfitting is actually occurring (i.e. you have actually observed the validation accuracy getting worse during training). We often see practitioners using over-fitting avoidance techniques even when they have enough data that they didn't need to do so, ending up with a model that could be less accurate than what they could have achieved."
 ]
 },
 {
@@ -1558,11 +1558,11 @@
 "learn = cnn_learner(dls, resnet34, metrics=error_rate)\n",
 "```\n",
 "\n",
-"The fourth line tells fastai to create a *convolutional neural network* (CNN), and selects what *architecture* to use (i.e. what kind of model to create), what data we want to train it on, and what *metric* to use. A CNN is the current state of the art approach to creating computer vision models. We'll be learning all about how they work in this book. Their structure is inspired by how the human vision system works.\n",
+"The fifth line tells fastai to create a *convolutional neural network* (CNN), and selects what *architecture* to use (i.e. what kind of model to create), what data we want to train it on, and what *metric* to use. A CNN is the current state of the art approach to creating computer vision models. We'll be learning all about how they work in this book. Their structure is inspired by how the human vision system works.\n",
 "\n",
 "There are many different architectures in fastai, which we will be learning about in this book, as well as discussing how to create your own. Most of the time, however, picking an architecture isn't a very important part of the deep learning process. It's something that academics love to talk about, but in practice it is unlikely to be something you need to spend much time on. There are some standard architectures that work most of the time, and in this case we're using one called _ResNet_ that will be learning a lot about during the book, and is both fast and accurate for many datasets and problems. The \"34\" in `resnet34` refers to the number of layers in this variant of the architecture (other options are \"18\", \"50\", \"101\", and \"152\"). Models using architectures with more layers take longer to train, and are more prone to overfitting (i.e. you can't train them for as many epochs before the accuracy on the validation set starts getting worse). On the other hand, when using more data, they can be quite a bit more accurate.\n",
 "\n",
-"A *metric* is a function that is called to measure how good the model is, using the validation set, and will be printed at the end of each *epoch*. In this case, we're using `error_rate`, which is a function provided by fastai which does just what it says: tells you what percentage of images in the validation set are being classified incorrectly. Another common metric for classification is `accuracy` (which is just `1.0 - error_rate`). fastai provides many more, which will be discussed throughout this book."
+"A *metric* is a function that is called to measure how good the model is, using the validation set, and will be printed at the end of each *epoch*. In this case, we're using `error_rate`, which is a function provided by fastai which does just what it says: tells you what percentage of images in the validation set are being classified incorrectly. Another common metric for classification is `accuracy` (which is just `1.0 - error_rate`). fastai provides many more, which will be discussed throughout this book. TK? worth mentioning an error in label in validation set will show as if in error_rate of model?"
 ]
 },
 {
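Since a metric is just a function of predictions and targets evaluated on the validation set, here is a minimal sketch of one (illustrative, not fastai's own implementation):

```python
def my_error_rate(preds, targs):
    # fraction of validation items whose top-scoring class is wrong
    return (preds.argmax(dim=-1) != targs).float().mean()

# accuracy is simply the complement (1.0 - error_rate); either function could be
# passed to the learner, e.g. cnn_learner(dls, resnet34, metrics=my_error_rate)
```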
@@ -1584,11 +1584,11 @@
 "learn.fine_tune(1)\n",
 "```\n",
 "\n",
-"The fifth line tells fastai how to *fit* the model. As we've discussed, the architecture only describes a *template* for a mathematical function; but it doesn't actually do anything until we provide values for the millions of parameters it contains.\n",
+"The sixth line tells fastai how to *fit* the model. As we've discussed, the architecture only describes a *template* for a mathematical function; but it doesn't actually do anything until we provide values for the millions of parameters it contains.\n",
 "\n",
 "This is the key to deep learning — how to fit the parameters of a model to get it to solve your problem. In order to fit a model, we have to provide at least one piece of information: how many times to look at each image (known as number of *epochs*). The number of epochs you select will largely depend on how much time you have available, and how long you find it takes in practice to fit your model. If you select a number that is too small, you can always train for more epochs later.\n",
 "\n",
-"But why is the method called `fine_tune`, and not `fit`? fastai actually *does* have a method called `fit`, which does indeed fit a model (i.e. look at images in the training set multiple times, each time updating the *parameters* to make the predictions closer and closer to the *target labels*). But in this case, we've started with a pretrained model, and we don't want to through away all those capabilities that it already has. As we'll learn in this book, there are some important tricks to adapt a pretrained model for a new dataset -- a process called *fine-tuning*. When you use the `fine_tune` method, fastai will use these tricks for you. There are a few parameters you can set (which we'll discuss later), but in the default form shown here, it does two steps:\n",
+"But why is the method called `fine_tune`, and not `fit`? fastai actually *does* have a method called `fit`, which does indeed fit a model (i.e. look at images in the training set multiple times, each time updating the *parameters* to make the predictions closer and closer to the *target labels*). But in this case, we've started with a pretrained model, and we don't want to throw away all those capabilities that it already has. As we'll learn in this book, there are some important tricks to adapt a pretrained model for a new dataset -- a process called *fine-tuning*. When you use the `fine_tune` method, fastai will use these tricks for you. There are a few parameters you can set (which we'll discuss later), but in the default form shown here, it does two steps:\n",
 "\n",
 "1. Use one *epoch* to fit just those parts of the model necessary to get the new random *head* to work correctly with your dataset\n",
 "1. Use the number of epochs requested when calling the method to fit the entire model, updating the weights of the later layers (especially the head) faster than the earlier layers (which, as we'll see, generally don't require many changes from the pretrained weights).\n",
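A simplified illustration of those two default steps (the real `fine_tune` also adjusts learning rates along the way, so this is a sketch, not fastai's source):

```python
learn.freeze()           # step 1: only the new random head's parameters are trainable
learn.fit_one_cycle(1)   # one epoch to fit the head to your dataset
learn.unfreeze()         # step 2: make the entire model trainable
learn.fit_one_cycle(1)   # the requested epochs; earlier layers are updated more gently
```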
@@ -2121,7 +2121,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"This model is using the the IMDb dataset from the paper [Learning Word Vectors for Sentiment Analysis]((http://ai.stanford.edu/~amaas/data/sentiment/)). It works well with movie reviews of many thousands of words. But let's test it out on a very short one, to see it do its thing:"
+"This model is using the IMDb dataset from the paper [Learning Word Vectors for Sentiment Analysis](https://ai.stanford.edu/~amaas/data/sentiment/). It works well with movie reviews of many thousands of words. But let's test it out on a very short one, to see it do its thing:"
 ]
 },
 {
@@ -2184,9 +2184,9 @@
 "learn.fine_tune(4, 1e-2)\n",
 "```\n",
 "\n",
-"The outputs themselves can be deceiving: they have the results of the last time the cell was executed, but if you change the code inside a cell without executing it, you will keep them.\n",
+"The outputs themselves can be deceiving: they have the results of the last time the cell was executed, but if you change the code inside a cell without executing it, the old (misleading) results will remain.\n",
 "\n",
-"Except when we mention it explicitely, the notebooks provided on the book website are meant to be run in order, from top to bottom. In general, when experimenting, you will find yourself executing cells in any order to go fast (which is a super neat feature of Jupyter Notebooks) but once you have explored and arrive at the final version of your code, make sure you can run the cells of your notebooks in order (your future self won't necessarily remember the convoluted path you took otherwise!). \n",
+"Except when we mention it explicitly, the notebooks provided on the book website are meant to be run in order, from top to bottom. In general, when experimenting, you will find yourself executing cells in any order to go fast (which is a super neat feature of Jupyter Notebooks) but once you have explored and arrive at the final version of your code, make sure you can run the cells of your notebooks in order (your future self won't necessarily remember the convoluted path you took otherwise!). \n",
 "\n",
 "In edit mode, pressing `0` twice will restart the *kernel* (which is the engine powering your notebook). This will wipe your state clean and make it as if you had just started in the notebook. Clean then on the \"Cell\" menu and then on \"Run All Above\" to run all the cells above the point you are. We have found this to be very useful when developing the fastai library."
 ]
@@ -2634,7 +2634,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Each of the models we trained showed a training and validation loss. A good validation set is one of the most important piece of your training, let's see why and learn how to create one."
+"Each of the models we trained showed a training and validation loss. A good validation set is one of the most important pieces of your training; let's see why and learn how to create one."
 ]
 },
 {
@@ -2733,7 +2733,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"For example, Kaggle had a competition to [predict the sales in a chain of Ecuadorian grocery stores](https://www.kaggle.com/c/favorita-grocery-sales-forecasting). Kaggle's *training data* ran from Jan 1 2013 to Aug 15 2017 and the test data spanned Aug 16 2017 to Aug 31 2017. That way, the competition organizer ensured that entrants were making predictions for a time period that was *in the future*, from the perspective of their model. This is similar to the way quant hedge fund traders do *back-testing* to check whether their models are predictive of future periods, based on passed data."
+"For example, Kaggle had a competition to [predict the sales in a chain of Ecuadorian grocery stores](https://www.kaggle.com/c/favorita-grocery-sales-forecasting). Kaggle's *training data* ran from Jan 1 2013 to Aug 15 2017 and the test data spanned Aug 16 2017 to Aug 31 2017. That way, the competition organizer ensured that entrants were making predictions for a time period that was *in the future*, from the perspective of their model. This is similar to the way quant hedge fund traders do *back-testing* to check whether their models are predictive of future periods, based on past data."
 ]
 },
 {
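A hedged sketch of such a time-based split in pandas (the toy data and column names are illustrative, not from the competition):

```python
import pandas as pd

# toy frame standing in for the sales history
df = pd.DataFrame({
    'date':  pd.date_range('2017-08-10', periods=6, freq='D'),
    'sales': [10, 12, 9, 14, 11, 13],
})

# hold out the most recent dates as the validation set, rather than a random sample
cutoff = pd.Timestamp('2017-08-12')
train_df = df[df['date'] <= cutoff]
valid_df = df[df['date'] > cutoff]
```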
@@ -2815,8 +2815,8 @@
 "1. What were the two theoretical misunderstandings that held back the field of neural networks?\n",
 "1. What is a GPU?\n",
 "1. Open a notebook and execute a cell containing: `1+1`. What happens?\n",
-"1. Follow through each cell of the stripped version of the notebook for this chapter. Before executing each cell, guess what will happen.\n",
-"1. Complete the Jupyter Notebook online appendix.\n",
+"1. Follow through each cell of the stripped version of the notebook for this chapter. Before executing each cell, anticipate what will happen.\n",
+"1. Complete the Jupyter Notebook online appendix. TK link?\n",
 "1. Why is it hard to use a traditional computer program to recognize images in a photo?\n",
 "1. What did Samuel mean by \"Weight Assignment\"?\n",
 "1. What term do we normally use in deep learning for what Samuel called \"Weights\"?\n",
@@ -2874,8 +2874,33 @@
 "display_name": "Python 3",
 "language": "python",
 "name": "python3"
 },
+"language_info": {
+"codemirror_mode": {
+"name": "ipython",
+"version": 3
+},
+"file_extension": ".py",
+"mimetype": "text/x-python",
+"name": "python",
+"nbconvert_exporter": "python",
+"pygments_lexer": "ipython3",
+"version": "3.7.5"
+},
+"toc": {
+"base_numbering": 1,
+"nav_menu": {},
+"number_sections": false,
+"sideBar": true,
+"skip_h1_title": true,
+"title_cell": "Table of Contents",
+"title_sidebar": "Contents",
+"toc_cell": false,
+"toc_position": {},
+"toc_section_display": true,
+"toc_window_display": false
+},
 },
 "nbformat": 4,
-"nbformat_minor": 2
+"nbformat_minor": 4
 }