First batch of edits
@@ -14,70 +14,70 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Your deep learning journey"
+"# Your Deep Learning Journey"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Deep learning is for everyone"
+"## Deep Learning Is for Everyone"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Neural networks: a brief history"
+"## Neural Networks: A Brief History"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Who we are"
+"## Who We Are"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## How to learn deep learning"
+"## How to Learn Deep Learning"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Your projects and your mindset"
+"### Your Projects and Your Mindset"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## The software: PyTorch, fastai, and Jupyter"
+"## The Software: PyTorch, fastai, and Jupyter"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Your first model"
+"## Your First Model"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Getting a GPU deep learning server"
+"### Getting a GPU Deep Learning Server"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Running your first notebook"
+"### Running Your First Notebook"
 ]
 },
 {
@@ -166,7 +166,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Sidebar: This book was written in Jupyter Notebooks"
+"### Sidebar: This Book Was Written in Jupyter Notebooks"
 ]
 },
 {
@@ -291,7 +291,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### What is machine learning?"
+"### What Is Machine Learning?"
 ]
 },
 {
@@ -627,14 +627,14 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### What is a neural network?"
+"### What Is a Neural Network?"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"#### A bit of deep learning jargon"
+"### A Bit of Deep Learning Jargon"
 ]
 },
 {
@@ -757,53 +757,53 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Limitations inherent to machine learning\n",
+"### Limitations Inherent To Machine Learning\n",
 "\n",
 "From this picture we can now see some fundamental things about training a deep learning model:\n",
 "\n",
-"- A model cannot be created without data ;\n",
-"- A model can only learn to operate on the patterns seen in the input data used to train it ;\n",
-"- This learning approach only creates *predictions*, not recommended *actions* ;\n",
-"- It's not enough to just have examples of input data; we need *labels* for that data too (e.g. pictures of dogs and cats aren't enough to train a model; we need a label for each one, saying which ones are dogs, and which are cats).\n",
+"- A model cannot be created without data.\n",
+"- A model can only learn to operate on the patterns seen in the input data used to train it.\n",
+"- This learning approach only creates *predictions*, not recommended *actions*.\n",
+"- It's not enough to just have examples of input data; we need *labels* for that data too (e.g., pictures of dogs and cats aren't enough to train a model; we need a label for each one, saying which ones are dogs, and which are cats).\n",
 "\n",
-"Generally speaking, we've seen that most organizations that think they don't have enough data, actually mean they don't have enough *labeled* data. If any organization is interested in doing something in practice with a model, then presumably they have some inputs they plan to run their model against. And presumably they've been doing that some other way for a while (e.g. manually, or with some heuristic program), so they have data from those processes! For instance, a radiology practice will almost certainly have an archive of medical scans (since they need to be able to check how their patients are progressing over time), but those scans may not have structured labels containing a list of diagnoses or interventions (since radiologists generally create free text natural language reports, not structured data). We'll be discussing labeling approaches a lot in this book, since it's such an important issue in practice.\n",
+"Generally speaking, we've seen that most organizations that say they don't have enough data, actually mean they don't have enough *labeled* data. If any organization is interested in doing something in practice with a model, then presumably they have some inputs they plan to run their model against. And presumably they've been doing that some other way for a while (e.g., manually, or with some heuristic program), so they have data from those processes! For instance, a radiology practice will almost certainly have an archive of medical scans (since they need to be able to check how their patients are progressing over time), but those scans may not have structured labels containing a list of diagnoses or interventions (since radiologists generally create free-text natural language reports, not structured data). We'll be discussing labeling approaches a lot in this book, because it's such an important issue in practice.\n",
 "\n",
-"Since these kinds of machine learning models can only make *predictions* (i.e. attempt to replicate labels), this can result in a significant gap between organizational goals and model capabilities. For instance, in this book you'll learn how to create a *recommendation system* that can predict what products a user might purchase. This is often used in e-commerce, such as to customize products shown on a home page, by showing the highest-ranked items. But such a model is generally created by looking at a user and their buying history (*inputs*) and what they went on to buy or look at (*labels*), which means that the model is likely to tell you about products they already have, or already know about, rather than new products that they are most likely to be interested in hearing about. That's very different to what, say, an expert at your local bookseller might do, where they ask questions to figure out your taste, and then tell you about authors or series that you've never heard of before."
+"Since these kinds of machine learning models can only make *predictions* (i.e., attempt to replicate labels), this can result in a significant gap between organizational goals and model capabilities. For instance, in this book you'll learn how to create a *recommendation system* that can predict what products a user might purchase. This is often used in e-commerce, such as to customize products shown on a home page by showing the highest-ranked items. But such a model is generally created by looking at a user and their buying history (*inputs*) and what they went on to buy or look at (*labels*), which means that the model is likely to tell you about products the user already has or already knows about, rather than new products that they are most likely to be interested in hearing about. That's very different to what, say, an expert at your local bookseller might do, where they ask questions to figure out your taste, and then tell you about authors or series that you've never heard of before."
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### How our image recognizer works"
+"### How Our Image Recognizer Works"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### What our image recognizer learned"
+"### What Our Image Recognizer Learned"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Image recognizers can tackle non-image tasks"
+"### Image Recognizers Can Tackle Non-Image Tasks"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Jargon recap"
+"### Jargon Recap"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Deep learning is not just for image classification"
+"## Deep Learning Is Not Just for Image Classification"
 ]
 },
 {
@@ -1114,7 +1114,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Sidebar: The order matters"
+"### Sidebar: The Order Matters"
 ]
 },
 {
@@ -1441,7 +1441,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Sidebar: Datasets: food for models"
+"### Sidebar: Datasets: Food for Models"
 ]
 },
 {
@@ -1455,14 +1455,14 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Validation sets and test sets"
+"## Validation Sets and Test Sets"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Use judgment in defining test sets"
+"### Use Judgment in Defining Test Sets"
 ]
 },
 {
@@ -1483,7 +1483,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"It can be hard to know in pages and pages of prose what are the key things you really need to focus on and remember. So we've prepared a list of questions and suggested steps to complete at the end of each chapter. All the answers are in the text of the chapter, so if you're not sure about anything here, re-read that part of the text and make sure you understand it. Answers to all these questions are also available on the [book website](https://book.fast.ai). You can also visit [the forums](https://forums.fast.ai) if you get stuck to get help from other folks studying this material."
+"It can be hard to know in pages and pages of prose what the key things are that you really need to focus on and remember. So, we've prepared a list of questions and suggested steps to complete at the end of each chapter. All the answers are in the text of the chapter, so if you're not sure about anything here, reread that part of the text and make sure you understand it. Answers to all these questions are also available on the [book's website](https://book.fast.ai). You can also visit [the forums](https://forums.fast.ai) if you get stuck to get help from other folks studying this material."
 ]
 },
 {
@@ -1491,33 +1491,35 @@
 "metadata": {},
 "source": [
 "1. Do you need these for deep learning?\n",
 "\n",
 " - Lots of math T / F\n",
 " - Lots of data T / F\n",
 " - Lots of expensive computers T / F\n",
 " - A PhD T / F\n",
 " \n",
 "1. Name five areas where deep learning is now the best in the world.\n",
 "1. What was the name of the first device that was based on the principle of the artificial neuron?\n",
-"1. Based on the book of the same name, what are the requirements for \"Parallel Distributed Processing\"?\n",
+"1. Based on the book of the same name, what are the requirements for parallel distributed processing (PDP)?\n",
 "1. What were the two theoretical misunderstandings that held back the field of neural networks?\n",
 "1. What is a GPU?\n",
 "1. Open a notebook and execute a cell containing: `1+1`. What happens?\n",
 "1. Follow through each cell of the stripped version of the notebook for this chapter. Before executing each cell, guess what will happen.\n",
 "1. Complete the Jupyter Notebook online appendix.\n",
 "1. Why is it hard to use a traditional computer program to recognize images in a photo?\n",
-"1. What did Samuel mean by \"Weight Assignment\"?\n",
-"1. What term do we normally use in deep learning for what Samuel called \"Weights\"?\n",
-"1. Draw a picture that summarizes Arthur Samuel's view of a machine learning model\n",
+"1. What did Samuel mean by \"weight assignment\"?\n",
+"1. What term do we normally use in deep learning for what Samuel called \"weights\"?\n",
+"1. Draw a picture that summarizes Samuel's view of a machine learning model.\n",
 "1. Why is it hard to understand why a deep learning model makes a particular prediction?\n",
-"1. What is the name of the theorem that a neural network can solve any mathematical problem to any level of accuracy?\n",
+"1. What is the name of the theorem that shows that a neural network can solve any mathematical problem to any level of accuracy?\n",
 "1. What do you need in order to train a model?\n",
 "1. How could a feedback loop impact the rollout of a predictive policing model?\n",
-"1. Do we always have to use 224x224 pixel images with the cat recognition model?\n",
+"1. Do we always have to use 224\*224-pixel images with the cat recognition model?\n",
 "1. What is the difference between classification and regression?\n",
 "1. What is a validation set? What is a test set? Why do we need them?\n",
 "1. What will fastai do if you don't provide a validation set?\n",
 "1. Can we always use a random sample for a validation set? Why or why not?\n",
 "1. What is overfitting? Provide an example.\n",
-"1. What is a metric? How does it differ to \"loss\"?\n",
+"1. What is a metric? How does it differ from \"loss\"?\n",
 "1. How can pretrained models help?\n",
 "1. What is the \"head\" of a model?\n",
 "1. What kinds of features do the early layers of a CNN find? How about the later layers?\n",
@@ -1533,14 +1535,14 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Further research"
+"### Further Research"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Each chapter also has a \"further research\" with questions that aren't fully answered in the text, or include more advanced assignments. Answers to these questions aren't on the book website--you'll need to do your own research!"
+"Each chapter also has a \"Further Research\" section that poses questions that aren't fully answered in the text, or gives more advanced assignments. Answers to these questions aren't on the book's website; you'll need to do your own research!"
 ]
 },
 {
@@ -1548,8 +1550,15 @@
 "metadata": {},
 "source": [
 "1. Why is a GPU useful for deep learning? How is a CPU different, and why is it less effective for deep learning?\n",
-"1. Try to think of three areas where feedback loops might impact use of machine learning. See if you can find documented examples of that happening in practice."
+"1. Try to think of three areas where feedback loops might impact the use of machine learning. See if you can find documented examples of that happening in practice."
 ]
 },
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": []
+}
 ],
 "metadata": {
@@ -15,28 +15,28 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# From model to production"
+"# From Model to Production"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## The practice of deep learning"
+"## The Practice of Deep Learning"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Starting your project"
+"### Starting Your Project"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### The state of deep learning"
+"### The State of Deep Learning"
 ]
 },
 {
@@ -78,21 +78,28 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### The Drivetrain approach"
+"#### Other data types"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Gathering data"
+"### The Drivetrain Approach"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"To download images with Bing Image Search, you should sign up at Microsoft for *Bing Image Search*. You will be given a key, which you can either paste here, replacing \"XXX\":"
+"## Gathering Data"
 ]
 },
 {
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"To download images with Bing Image Search, sign up at Microsoft for a free account. You will be given a key, which you can copy and enter in a cell as follows (replacing 'XXX' with your key and executing it):"
+]
+},
+{
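The revised cell above tells the reader to paste a search key into a notebook cell. One way to keep the key out of the notebook itself is to read it from an environment variable, falling back to the `'XXX'` placeholder the text mentions. This is only an illustrative sketch; the variable name `BING_SEARCH_KEY` is an assumption, not part of this commit:

```python
import os

# Illustrative sketch: the environment-variable name is an assumption.
# Either replace 'XXX' with your real key, or export BING_SEARCH_KEY
# before launching the notebook server.
key = os.environ.get('BING_SEARCH_KEY', 'XXX')
```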
@@ -280,7 +287,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Sidebar: Getting help in Jupyter notebooks"
+"### Sidebar: Getting Help in Jupyter Notebooks"
 ]
 },
 {
@@ -294,7 +301,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## From data to DataLoaders"
+"## From Data to DataLoaders"
 ]
 },
 {
@@ -306,7 +313,7 @@
 "bears = DataBlock(\n",
 "    blocks=(ImageBlock, CategoryBlock), \n",
 "    get_items=get_image_files, \n",
-"    splitter=RandomSplitter(valid_pct=0.3, seed=42),\n",
+"    splitter=RandomSplitter(valid_pct=0.2, seed=42),\n",
 "    get_y=parent_label,\n",
 "    item_tfms=Resize(128))"
 ]
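The hunk above changes `valid_pct` from 0.3 to 0.2 in a seeded `RandomSplitter`. As a rough, self-contained sketch of what a seeded random split does (a conceptual simplification, not fastai's actual implementation):

```python
import random

def random_splitter(n_items, valid_pct=0.2, seed=42):
    """Conceptual sketch of a seeded train/validation split."""
    rng = random.Random(seed)        # fixed seed -> reproducible shuffle
    idxs = list(range(n_items))
    rng.shuffle(idxs)
    cut = int(n_items * valid_pct)   # size of the validation slice
    return idxs[cut:], idxs[:cut]    # (train indices, valid indices)

train, valid = random_splitter(100)
train2, valid2 = random_splitter(100)
assert valid == valid2               # same seed gives the same validation set
assert len(valid) == 20 and len(train) == 80
```

Fixing the seed is what makes the 20% validation set stable across runs, which matters when you later compare models trained at different times.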
@@ -418,7 +425,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Data augmentation"
+"### Data Augmentation"
 ]
 },
 {
@@ -449,7 +456,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Training your model, and using it to clean your data"
+"## Training Your Model, and Using It to Clean Your Data"
 ]
 },
 {
@@ -673,14 +680,14 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Turning your model into an online application"
+"## Turning Your Model into an Online Application"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Using the model for inference"
+"### Using the Model for Inference"
 ]
 },
 {
@@ -776,7 +783,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Creating a Notebook app from the model"
+"### Creating a Notebook App from the Model"
 ]
 },
 {
@@ -965,7 +972,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Turning your notebook into a real app"
+"### Turning Your Notebook into a Real App"
 ]
 },
 {
@@ -990,21 +997,21 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## How to avoid disaster"
+"## How to Avoid Disaster"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Unforeseen consequences and feedback loops"
+"### Unforeseen Consequences and Feedback Loops"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Get writing!"
+"## Get Writing!"
 ]
 },
 {
@@ -1018,21 +1025,21 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"1. Provide an example of where the bear classification model might work poorly, due to structural or style differences to the training data.\n",
+"1. Provide an example of where the bear classification model might work poorly in production, due to structural or style differences in the training data.\n",
 "1. Where do text models currently have a major deficiency?\n",
 "1. What are possible negative societal implications of text generation models?\n",
 "1. In situations where a model might make mistakes, and those mistakes could be harmful, what is a good alternative to automating a process?\n",
 "1. What kind of tabular data is deep learning particularly good at?\n",
 "1. What's a key downside of directly using a deep learning model for recommendation systems?\n",
-"1. What are the steps of the Drivetrain approach?\n",
-"1. How do the steps of the Drivetrain approach map to a recommendation system?\n",
+"1. What are the steps of the Drivetrain Approach?\n",
+"1. How do the steps of the Drivetrain Approach map to a recommendation system?\n",
 "1. Create an image recognition model using data you curate, and deploy it on the web.\n",
 "1. What is `DataLoaders`?\n",
 "1. What four things do we need to tell fastai to create `DataLoaders`?\n",
 "1. What does the `splitter` parameter to `DataBlock` do?\n",
 "1. How do we ensure a random split always gives the same validation set?\n",
 "1. What letters are often used to signify the independent and dependent variables?\n",
-"1. What's the difference between crop, pad, and squish resize approaches? When might you choose one over the other?\n",
+"1. What's the difference between the crop, pad, and squish resize approaches? When might you choose one over the others?\n",
 "1. What is data augmentation? Why is it needed?\n",
 "1. What is the difference between `item_tfms` and `batch_tfms`?\n",
 "1. What is a confusion matrix?\n",
@@ -1041,29 +1048,29 @@
 "1. What are IPython widgets?\n",
 "1. When might you want to use CPU for deployment? When might GPU be better?\n",
 "1. What are the downsides of deploying your app to a server, instead of to a client (or edge) device such as a phone or PC?\n",
-"1. What are 3 examples of problems that could occur when rolling out a bear warning system in practice?\n",
-"1. What is \"out of domain data\"?\n",
+"1. What are three examples of problems that could occur when rolling out a bear warning system in practice?\n",
+"1. What is \"out-of-domain data\"?\n",
 "1. What is \"domain shift\"?\n",
-"1. What are the 3 steps in the deployment process?\n",
-"1. For a project you're interested in applying deep learning to, consider the thought experiment \"what would happen if it went really, really well?\"\n",
+"1. What are the three steps in the deployment process?"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Further research"
+"### Further Research"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"1. Consider how the Drivetrain approach maps to a project or problem you're interested in.\n",
-"1. When might it be best to avoid certain types of data augmentation?"
+"1. Consider how the Drivetrain Approach maps to a project or problem you're interested in.\n",
+"1. When might it be best to avoid certain types of data augmentation?\n",
+"1. For a project you're interested in applying deep learning to, consider the thought experiment \"What would happen if it went really, really well?\"\n",
+"1. Start a blog, and write your first blog post. For instance, write about what you think deep learning might be useful for in a domain you're interested in."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
@@ -11,7 +11,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Sidebar: Acknowledgement: Dr Rachel Thomas"
+"### Sidebar: Acknowledgement: Dr. Rachel Thomas"
 ]
 },
 {
@@ -25,42 +25,42 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Key examples for data ethics"
+"## Key Examples for Data Ethics"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Bugs and recourse: Buggy algorithm used for healthcare benefits"
+"### Bugs and Recourse: Buggy Algorithm Used for Healthcare Benefits"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Feedback loops: YouTube's recommendation system"
+"### Feedback Loops: YouTube's Recommendation System"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Bias: Professor Lantanya Sweeney \"arrested\""
+"### Bias: Professor Lantanya Sweeney \"Arrested\""
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Why does this matter?"
+"### Why Does This Matter?"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Integrating machine learning with product design"
+"## Integrating Machine Learning with Product Design"
 ]
 },
 {
@@ -74,14 +74,14 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Recourse and accountability"
+"### Recourse and Accountability"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Feedback loops"
+"### Feedback Loops"
 ]
 },
 {
@@ -109,77 +109,70 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"#### Aggregation Bias"
+"#### Aggregation bias"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"#### Representation Bias"
+"#### Representation bias"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Addressing different types of bias"
+"### Addressing different types of bias"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Humans are biased, so does algorithmic bias matter?"
+"### Disinformation"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Disinformation"
+"## Identifying and Addressing Ethical Issues"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Identifying and addressing ethical issues"
+"### Analyze a Project You Are Working On"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Analyze a project you are working on"
+"### Processes to Implement"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Processes to implement"
+"#### Ethical lenses"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"#### Ethical Lenses"
+"### The Power of Diversity"
 ]
 },
 {
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"### The power of diversity"
-]
-},
-{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Fairness, accountability, and transparency"
+"### Fairness, Accountability, and Transparency"
 ]
 },
 {
@@ -193,21 +186,21 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### The effectiveness of regulation"
+"### The Effectiveness of Regulation"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Rights and policy"
+"### Rights and Policy"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Cars: a historical precedent"
+"### Cars: A Historical Precedent"
 ]
 },
 {
@@ -230,16 +223,16 @@
 "source": [
 "1. Does ethics provide a list of \"right answers\"?\n",
 "1. How can working with people of different backgrounds help when considering ethical questions?\n",
-"1. What was the role of IBM in Nazi Germany? Why did the company participate as they did? Why did the workers participate?\n",
-"1. What was the role of the first person jailed in the VW diesel scandal?\n",
+"1. What was the role of IBM in Nazi Germany? Why did the company participate as it did? Why did the workers participate?\n",
+"1. What was the role of the first person jailed in the Volkswagen diesel scandal?\n",
 "1. What was the problem with a database of suspected gang members maintained by California law enforcement officials?\n",
-"1. Why did YouTube's recommendation algorithm recommend videos of partially clothed children to pedophiles, even though no employee at Google programmed this feature?\n",
+"1. Why did YouTube's recommendation algorithm recommend videos of partially clothed children to pedophiles, even though no employee at Google had programmed this feature?\n",
 "1. What are the problems with the centrality of metrics?\n",
-"1. Why did Meetup.com not include gender in their recommendation system for tech meetups?\n",
+"1. Why did Meetup.com not include gender in its recommendation system for tech meetups?\n",
 "1. What are the six types of bias in machine learning, according to Suresh and Guttag?\n",
 "1. Give two examples of historical race bias in the US.\n",
-"1. Where are most images in Imagenet from?\n",
-"1. In the paper \"Does Machine Learning Automate Moral Hazard and Error\" why is sinusitis found to be predictive of a stroke?\n",
+"1. Where are most images in ImageNet from?\n",
+"1. In the paper [\"Does Machine Learning Automate Moral Hazard and Error\"](https://scholar.harvard.edu/files/sendhil/files/aer.p20171084.pdf) why is sinusitis found to be predictive of a stroke?\n",
 "1. What is representation bias?\n",
 "1. How are machines and people different, in terms of their use for making decisions?\n",
 "1. Is disinformation the same as \"fake news\"?\n",
@@ -252,7 +245,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Further research:"
+"### Further Research:"
 ]
 },
 {
@@ -260,12 +253,12 @@
 "metadata": {},
 "source": [
 "1. Read the article \"What Happens When an Algorithm Cuts Your Healthcare\". How could problems like this be avoided in the future?\n",
-"1. Research to find out more about YouTube's recommendation system and its societal impacts. Do you think recommendation systems must always have feedback loops with negative results? What approaches could Google take? What about the government?\n",
-"1. Read the paper \"Discrimination in Online Ad Delivery\". Do you think Google should be considered responsible for what happened to Dr Sweeney? What would be an appropriate response?\n",
+"1. Research to find out more about YouTube's recommendation system and its societal impacts. Do you think recommendation systems must always have feedback loops with negative results? What approaches could Google take to avoid them? What about the government?\n",
+"1. Read the paper [\"Discrimination in Online Ad Delivery\"](https://arxiv.org/abs/1301.6822). Do you think Google should be considered responsible for what happened to Dr. Sweeney? What would be an appropriate response?\n",
 "1. How can a cross-disciplinary team help avoid negative consequences?\n",
-"1. Read the paper \"Does Machine Learning Automate Moral Hazard and Error\" in American Economic Review. What actions do you think should be taken to deal with the issues identified in this paper?\n",
+"1. Read the paper \"Does Machine Learning Automate Moral Hazard and Error\". What actions do you think should be taken to deal with the issues identified in this paper?\n",
 "1. Read the article \"How Will We Prevent AI-Based Forgery?\" Do you think Etzioni's proposed approach could work? Why?\n",
-"1. Complete the section \"Analyze a project you are working on\" in this chapter.\n",
+"1. Complete the section \"Analyze a Project You Are Working On\" in this chapter.\n",
 "1. Consider whether your team could be more diverse. If so, what approaches might help?"
 ]
 },
@@ -273,26 +266,26 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Section 1: that's a wrap!"
+"## Section 1: That's a Wrap!"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "Congratulations! You've made it to the end of the first section of the book. In this section we've tried to show you what deep learning can do, and how you can use it to create real applications and products. At this point, you will get a lot more out of the book if you spend some time trying out what you've learnt. Perhaps you have already been doing this as you go along — in which case, great! But if not, that's no problem either… Now is a great time to start experimenting yourself.\n",
|
||||
"Congratulations! You've made it to the end of the first section of the book. In this section we've tried to show you what deep learning can do, and how you can use it to create real applications and products. At this point, you will get a lot more out of the book if you spend some time trying out what you've learned. Perhaps you have already been doing this as you go along—in which case, great! If not, that's no problem either... Now is a great time to start experimenting yourself.\n",
|
||||
"\n",
|
||||
"If you haven't been to the book website yet, head over there now. Remember, you can find it here: [book.fast.ai](https://book.fast.ai). It's really important that you have got yourself set up to run the notebooks. Becoming an effective deep learning practitioner is all about practice. So you need to be training models. So please go get the notebooks running now if you haven't already! And also have a look on the website for any important updates or notices; deep learning changes fast, and we can't change the words that are printed in this book, so the website is where you need to look to ensure you have the most up-to-date information.\n",
|
||||
"If you haven't been to the [book's website](https://book.fast.ai) yet, head over there now. It's really important that you get yourself set up to run the notebooks. Becoming an effective deep learning practitioner is all about practice, so you need to be training models. So, please go get the notebooks running now if you haven't already! And also have a look on the website for any important updates or notices; deep learning changes fast, and we can't change the words that are printed in this book, so the website is where you need to look to ensure you have the most up-to-date information.\n",
|
||||
"\n",
|
||||
"Make sure that you have completed the following steps:\n",
|
||||
"\n",
|
||||
"- Connected to one of the GPU Jupyter servers recommended on the book website\n",
|
||||
"- Run the first notebook yourself\n",
|
||||
"- Uploaded an image that you find in the first notebook; then try a few different images of different kinds to see what happens\n",
|
||||
"- Run the second notebook, collecting your own dataset based on image search queries that you come up with\n",
|
||||
"- Thought about how you can use deep learning to help you with your own projects, including what kinds of data you could use, what kinds of problems may come up, and how you might be able to mitigate these issues in practice.\n",
|
||||
"- Connect to one of the GPU Jupyter servers recommended on the book's website.\n",
|
||||
"- Run the first notebook yourself.\n",
|
||||
"- Upload an image that you find in the first notebook; then try a few different images of different kinds to see what happens.\n",
|
||||
"- Run the second notebook, collecting your own dataset based on image search queries that you come up with.\n",
|
||||
"- Think about how you can use deep learning to help you with your own projects, including what kinds of data you could use, what kinds of problems may come up, and how you might be able to mitigate these issues in practice.\n",
|
||||
"\n",
|
||||
"In the next section of the book we will learn about how and why deep learning works, instead of just seeing how we can use it in practice. Understanding the how and why is important for both practitioners and researchers, because in this fairly new field nearly every project requires some level of customisation and debugging. The better you understand the foundations of deep learning, the better your models will be. These foundations are less important for executives, product managers, and so forth (although still useful, so feel free to keep reading!), but they are critical for anybody who is actually training and deploying models themselves."
|
||||
"In the next section of the book you will learn about how and why deep learning works, instead of just seeing how you can use it in practice. Understanding the how and why is important for both practitioners and researchers, because in this fairly new field nearly every project requires some level of customization and debugging. The better you understand the foundations of deep learning, the better your models will be. These foundations are less important for executives, product managers, and so forth (although still useful, so feel free to keep reading!), but they are critical for anybody who is actually training and deploying models themselves."
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -17,21 +17,21 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Under the hood: training a digit classifier"
|
||||
"# Under the Hood: Training a Digit Classifier"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Pixels: the foundations of computer vision"
|
||||
"## Pixels: The Foundations of Computer Vision"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Sidebar: Tenacity and deep learning"
|
||||
"## Sidebar: Tenacity and Deep Learning"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1249,7 +1249,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## First try: pixel similarity"
|
||||
"## First Try: Pixel Similarity"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1495,7 +1495,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### NumPy arrays and PyTorch tensors"
|
||||
"### NumPy Arrays and PyTorch Tensors"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1677,7 +1677,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Computing metrics using broadcasting"
|
||||
"## Computing Metrics Using Broadcasting"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -2039,7 +2039,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### The gradient"
|
||||
"### Calculating Gradients"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -2170,14 +2170,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Stepping with a learning rate"
|
||||
"### Stepping With a Learning Rate"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### An end-to-end SGD example"
|
||||
"### An End-to-End SGD Example"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -2243,6 +2243,13 @@
|
||||
"def mse(preds, targets): return ((preds-targets)**2).mean()"
|
||||
]
|
||||
},
|
||||
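As a quick sanity check of the loss function defined above, the same formula can be applied to plain Python lists (hypothetical values; the notebook's version operates on PyTorch tensors):

```python
# Pure-Python mirror of the mean squared error defined above,
# applied to plain lists purely for illustration.
def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

print(mse([2.0, 4.0, 6.0], [1.0, 5.0, 7.0]))  # (1 + 1 + 1) / 3 = 1.0
```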
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Step 1: Initialize the parameters"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -2262,6 +2269,13 @@
|
||||
"orig_params = params.clone()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Step 2: Calculate the predictions"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -2306,6 +2320,13 @@
|
||||
"show_preds(preds)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Step 3: Calculate the loss"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -2327,6 +2348,13 @@
|
||||
"loss"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Step 4: Calculate the gradients"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -2388,6 +2416,13 @@
|
||||
"params"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Step 5: Step the weights. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -2458,6 +2493,13 @@
|
||||
" return preds"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Step 6: Repeat the process "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -2522,7 +2564,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Summarizing gradient descent"
|
||||
"#### Step 7: stop"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Summarizing Gradient Descent"
|
||||
]
|
||||
},
|
||||
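The seven steps above can be sketched end to end in plain Python. This is a hand-rolled illustration, not the notebook's PyTorch version: the data (points on the line y = 3x), the learning rate, and the hand-derived gradient are all illustrative assumptions.

```python
# Minimal sketch of the seven gradient-descent steps, fitting
# y = a*x to hypothetical points on the line y = 3*x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0 * x for x in xs]

a = 0.0    # Step 1: initialize the parameter
lr = 0.01  # learning rate (an arbitrary choice)
for _ in range(200):                                     # Step 6: repeat
    preds = [a * x for x in xs]                          # Step 2: predict
    loss = sum((p - y) ** 2
               for p, y in zip(preds, ys)) / len(xs)     # Step 3: loss
    grad = sum(2 * (a * x - y) * x
               for x, y in zip(xs, ys)) / len(xs)        # Step 4: gradient
    a -= lr * grad                                       # Step 5: step the weights
# Step 7: stop (here, simply after a fixed number of epochs)
print(round(a, 3))  # → 3.0
```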
{
|
||||
@@ -2642,7 +2691,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## MNIST loss function"
|
||||
"## The MNIST Loss Function"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -2993,7 +3042,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### SGD and mini-batches"
|
||||
"### SGD and Mini-Batches"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -3070,7 +3119,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Putting it all together"
|
||||
"## Putting It All Together"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -3411,7 +3460,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Creating an optimizer"
|
||||
"### Creating an Optimizer"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -3677,7 +3726,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Adding a non-linearity"
|
||||
"## Adding a Nonlinearity"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -4106,6 +4155,13 @@
|
||||
"learn.recorder.values[-1][2]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Going Deeper"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -4154,14 +4210,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Jargon recap"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### _Choose Your Own Adventure_ reminder"
|
||||
"## Jargon Recap"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -4175,20 +4224,20 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"1. How is a greyscale image represented on a computer? How about a color image?\n",
|
||||
"1. How is a grayscale image represented on a computer? How about a color image?\n",
|
||||
"1. How are the files and folders in the `MNIST_SAMPLE` dataset structured? Why?\n",
|
||||
"1. Explain how the \"pixel similarity\" approach to classifying digits works.\n",
|
||||
"1. What is a list comprehension? Create one now that selects odd numbers from a list and doubles them.\n",
|
||||
"1. What is a \"rank 3 tensor\"?\n",
|
||||
"1. What is a \"rank-3 tensor\"?\n",
|
||||
"1. What is the difference between tensor rank and shape? How do you get the rank from the shape?\n",
|
||||
"1. What are RMSE and L1 norm?\n",
|
||||
"1. How can you apply a calculation on thousands of numbers at once, many thousands of times faster than a Python loop?\n",
|
||||
"1. Create a 3x3 tensor or array containing the numbers from 1 to 9. Double it. Select the bottom right 4 numbers.\n",
|
||||
"1. Create a 3\\*3 tensor or array containing the numbers from 1 to 9. Double it. Select the bottom-right four numbers.\n",
|
||||
"1. What is broadcasting?\n",
|
||||
"1. Are metrics generally calculated using the training set, or the validation set? Why?\n",
|
||||
"1. What is SGD?\n",
|
||||
"1. Why does SGD use mini batches?\n",
|
||||
"1. What are the 7 steps in SGD for machine learning?\n",
|
||||
"1. Why does SGD use mini-batches?\n",
|
||||
"1. What are the seven steps in SGD for machine learning?\n",
|
||||
"1. How do we initialize the weights in a model?\n",
|
||||
"1. What is \"loss\"?\n",
|
||||
"1. Why can't we always use a high learning rate?\n",
|
||||
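For the list-comprehension exercise above, one possible answer sketch (the input list is an arbitrary assumption):

```python
# Select the odd numbers from a list and double them,
# using a single list comprehension.
nums = [1, 2, 3, 4, 5]
doubled_odds = [n * 2 for n in nums if n % 2 == 1]
print(doubled_odds)  # [2, 6, 10]
```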
@@ -4196,18 +4245,18 @@
|
||||
"1. Do you need to know how to calculate gradients yourself?\n",
|
||||
"1. Why can't we use accuracy as a loss function?\n",
|
||||
"1. Draw the sigmoid function. What is special about its shape?\n",
|
||||
"1. What is the difference between loss and metric?\n",
|
||||
"1. What is the difference between a loss function and a metric?\n",
|
||||
"1. What is the function to calculate new weights using a learning rate?\n",
|
||||
"1. What does the `DataLoader` class do?\n",
|
||||
"1. Write pseudo-code showing the basic steps taken each epoch for SGD.\n",
|
||||
"1. Create a function which, if passed two arguments `[1,2,3,4]` and `'abcd'`, returns `[(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]`. What is special about that output data structure?\n",
|
||||
"1. Write pseudocode showing the basic steps taken in each epoch for SGD.\n",
|
||||
"1. Create a function that, if passed two arguments `[1,2,3,4]` and `'abcd'`, returns `[(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]`. What is special about that output data structure?\n",
|
||||
"1. What does `view` do in PyTorch?\n",
|
||||
"1. What are the \"bias\" parameters in a neural network? Why do we need them?\n",
|
||||
"1. What does the `@` operator do in python?\n",
|
||||
"1. What does the `@` operator do in Python?\n",
|
||||
"1. What does the `backward` method do?\n",
|
||||
"1. Why do we have to zero the gradients?\n",
|
||||
"1. What information do we have to pass to `Learner`?\n",
|
||||
"1. Show python or pseudo-code for the basic steps of a training loop.\n",
|
||||
"1. Show Python or pseudocode for the basic steps of a training loop.\n",
|
||||
"1. What is \"ReLU\"? Draw a plot of it for values from `-2` to `+2`.\n",
|
||||
"1. What is an \"activation function\"?\n",
|
||||
"1. What's the difference between `F.relu` and `nn.ReLU`?\n",
|
||||
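For the pairing exercise above, a minimal sketch (the function name `pair` is our own): `zip` combines two sequences elementwise into tuples, which is exactly the (input, label) structure a dataset provides to a training loop.

```python
# Pair up two sequences elementwise; list() forces the lazy
# zip object into a concrete list of tuples.
def pair(a, b):
    return list(zip(a, b))

print(pair([1, 2, 3, 4], 'abcd'))
# [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]
```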
@@ -4218,7 +4267,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Further research"
|
||||
"### Further Research"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -4226,7 +4275,7 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"1. Create your own implementation of `Learner` from scratch, based on the training loop shown in this chapter.\n",
|
||||
"1. Complete all the steps in this chapter using the full MNIST datasets (that is, for all digits, not just threes and sevens). This is a significant project and will take you quite a bit of time to complete! You'll need to do some of your own research to figure out how to overcome some obstacles you'll meet on the way."
|
||||
"1. Complete all the steps in this chapter using the full MNIST datasets (that is, for all digits, not just 3s and 7s). This is a significant project and will take you quite a bit of time to complete! You'll need to do some of your own research to figure out how to overcome some obstacles you'll meet on the way."
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -14,14 +14,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Image classification"
|
||||
"# Image Classification"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## From dogs and cats, to pet breeds"
|
||||
"## From Dogs and Cats to Pet Breeds"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -139,7 +139,7 @@
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"hide_input": true
|
||||
"hide_input": false
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -182,7 +182,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Checking and debugging a DataBlock"
|
||||
"### Checking and Debugging a DataBlock"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -373,14 +373,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Cross entropy loss"
|
||||
"## Cross-Entropy Loss"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Viewing activations and labels"
|
||||
"### Viewing Activations and Labels"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -606,7 +606,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Log likelihood"
|
||||
"### Log Likelihood"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -782,7 +782,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Taking the `log`"
|
||||
"### Taking the Log"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -944,14 +944,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Improving our model"
|
||||
"## Improving Our Model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Learning rate finder"
|
||||
"### The Learning Rate Finder"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1161,7 +1161,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Unfreezing and transfer learning"
|
||||
"### Unfreezing and Transfer Learning"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1360,7 +1360,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Discriminative learning rates"
|
||||
"### Discriminative Learning Rates"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1555,14 +1555,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Selecting the number of epochs"
|
||||
"### Selecting the Number of Epochs"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Deeper architectures"
|
||||
"### Deeper Architectures"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1692,7 +1692,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Summary"
|
||||
"## Conclusion"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1707,35 +1707,35 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"1. Why do we first resize to a large size on the CPU, and then to a smaller size on the GPU?\n",
|
||||
"1. If you are not familiar with regular expressions, find a regular expression tutorial, and some problem sets, and complete them. Have a look on the book website for suggestions.\n",
|
||||
"1. If you are not familiar with regular expressions, find a regular expression tutorial, and some problem sets, and complete them. Have a look on the book's website for suggestions.\n",
|
||||
"1. What are the two ways in which data is most commonly provided, for most deep learning datasets?\n",
|
||||
"1. Look up the documentation for `L` and try using a few of the new methods is that it adds.\n",
|
||||
"1. Look up the documentation for the Python pathlib module and try using a few methods of the Path class.\n",
|
||||
"1. Look up the documentation for the Python `pathlib` module and try using a few methods of the `Path` class.\n",
|
||||
"1. Give two examples of ways that image transformations can degrade the quality of the data.\n",
|
||||
"1. What method does fastai provide to view the data in a DataLoader?\n",
|
||||
"1. What method does fastai provide to help you debug a DataBlock?\n",
|
||||
"1. What method does fastai provide to view the data in a `DataLoaders`?\n",
|
||||
"1. What method does fastai provide to help you debug a `DataBlock`?\n",
|
||||
"1. Should you hold off on training a model until you have thoroughly cleaned your data?\n",
|
||||
"1. What are the two pieces that are combined into cross entropy loss in PyTorch?\n",
|
||||
"1. What are the two pieces that are combined into cross-entropy loss in PyTorch?\n",
|
||||
"1. What are the two properties of activations that softmax ensures? Why is this important?\n",
|
||||
"1. When might you want your activations to not have these two properties?\n",
|
||||
"1. Calculate the \"exp\" and \"softmax\" columns of <<bear_softmax>> yourself (i.e. in a spreadsheet, with a calculator, or in a notebook).\n",
|
||||
"1. Why can't we use torch.where to create a loss function for datasets where our label can have more than two categories?\n",
|
||||
"1. Calculate the `exp` and `softmax` columns of <<bear_softmax>> yourself (i.e., in a spreadsheet, with a calculator, or in a notebook).\n",
|
||||
"1. Why can't we use `torch.where` to create a loss function for datasets where our label can have more than two categories?\n",
|
||||
"1. What is the value of log(-2)? Why?\n",
|
||||
"1. What are two good rules of thumb for picking a learning rate from the learning rate finder?\n",
|
||||
"1. What two steps does the fine_tune method do?\n",
|
||||
"1. In Jupyter notebook, how do you get the source code for a method or function?\n",
|
||||
"1. What two steps does the `fine_tune` method do?\n",
|
||||
"1. In Jupyter Notebook, how do you get the source code for a method or function?\n",
|
||||
"1. What are discriminative learning rates?\n",
|
||||
"1. How is a Python slice object interpreted when passed as a learning rate to fastai?\n",
|
||||
"1. Why is early stopping a poor choice when using one cycle training?\n",
|
||||
"1. What is the difference between resnet 50 and resnet101?\n",
|
||||
"1. What does to_fp16 do?"
|
||||
"1. How is a Python `slice` object interpreted when passed as a learning rate to fastai?\n",
|
||||
"1. Why is early stopping a poor choice when using 1cycle training?\n",
|
||||
"1. What is the difference between `resnet50` and `resnet101`?\n",
|
||||
"1. What does `to_fp16` do?"
|
||||
]
|
||||
},
|
||||
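For the softmax exercise above, a small pure-Python sketch of the two properties the questionnaire asks about: every output is positive, and the outputs sum to 1. The activations here are arbitrary; the notebook computes this with PyTorch instead.

```python
import math

# Softmax exponentiates each activation and normalizes by the total,
# so the results are positive and sum to 1.
def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
print([round(p, 4) for p in probs])  # ≈ [0.09, 0.2447, 0.6652]
```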
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Further research"
|
||||
"### Further Research"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1743,7 +1743,7 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"1. Find the paper by Leslie Smith that introduced the learning rate finder, and read it.\n",
|
||||
"1. See if you can improve the accuracy of the classifier in this chapter. What's the best accuracy you can achieve? Have a look on the forums and book website to see what other students have achieved with this dataset, and how they did it."
|
||||
"1. See if you can improve the accuracy of the classifier in this chapter. What's the best accuracy you can achieve? Look on the forums and the book's website to see what other students have achieved with this dataset, and how they did it."
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
File diff suppressed because one or more lines are too long
@@ -14,7 +14,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Training a state-of-the-art model"
|
||||
"# Training a State-of-the-Art Model"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -270,7 +270,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Progressive resizing"
|
||||
"## Progressive Resizing"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -443,7 +443,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Test time augmentation"
|
||||
"## Test Time Augmentation"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -528,7 +528,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Sidebar: Papers and math"
|
||||
"### Sidebar: Papers and Math"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -576,14 +576,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Label smoothing"
|
||||
"## Label Smoothing"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Sidebar: Label smoothing, the paper"
|
||||
"### Sidebar: Label Smoothing, the Paper"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -620,23 +620,23 @@
|
||||
"1. Is using TTA at inference slower or faster than regular inference? Why?\n",
|
||||
"1. What is Mixup? How do you use it in fastai?\n",
|
||||
"1. Why does Mixup prevent the model from being too confident?\n",
|
||||
"1. Why does a training with Mixup for 5 epochs end up worse than a training without Mixup?\n",
|
||||
"1. Why does training with Mixup for five epochs end up worse than training without Mixup?\n",
|
||||
"1. What is the idea behind label smoothing?\n",
|
||||
"1. What problems in your data can label smoothing help with?\n",
|
||||
"1. When using label smoothing with 5 categories, what is the target associated with the index 1?\n",
|
||||
"1. What is the first step to take when you want to prototype quick experiments on a new dataset."
|
||||
"1. When using label smoothing with five categories, what is the target associated with the index 1?\n",
|
||||
"1. What is the first step to take when you want to prototype quick experiments on a new dataset?"
|
||||
]
|
||||
},
|
||||
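The label-smoothing question above can be checked with a short sketch: with N categories and smoothing amount eps (eps = 0.1 is a common default, assumed here), the correct index gets a target of 1 - eps + eps/N and every other index gets eps/N.

```python
# Smoothed one-hot targets: the "1" is softened and the leftover
# probability mass is spread evenly across all N categories.
def smoothed_targets(n, correct_idx, eps=0.1):
    return [1 - eps + eps / n if i == correct_idx else eps / n
            for i in range(n)]

print([round(t, 2) for t in smoothed_targets(5, 1)])
# [0.02, 0.92, 0.02, 0.02, 0.02]
```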
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Further research\n",
|
||||
"### Further Research\n",
|
||||
"\n",
|
||||
"1. Use the fastai documentation to build a function that crops an image to a square in the four corners, then implement a TTA method that averages the predictions on a center crop and those four crops. Did it help? Is it better than the TTA method of fastai?\n",
|
||||
"1. Find the Mixup paper on arxiv and read it. Pick one or two more recent articles introducing variants of Mixup and read them, then try to implement them on your problem.\n",
|
||||
"1. Find the script training Imagenette using Mixup and use it as an example to build a script for a long training on your own project. Execute it and see if it helped.\n",
|
||||
"1. Read the sidebar on the math of label smoothing, and look at the relevant section of the original paper, and see if you can follow it. Don't be afraid to ask for help!"
|
||||
"1. Use the fastai documentation to build a function that crops an image to a square in each of the four corners, then implement a TTA method that averages the predictions on a center crop and those four crops. Did it help? Is it better than the TTA method of fastai?\n",
|
||||
"1. Find the Mixup paper on arXiv and read it. Pick one or two more recent articles introducing variants of Mixup and read them, then try to implement them on your problem.\n",
|
||||
"1. Find the script training Imagenette using Mixup and use it as an example to build a script for a long training on your own project. Execute it and see if it helps.\n",
|
||||
"1. Read the sidebar \"Label Smoothing, the Paper\", look at the relevant section of the original paper and see if you can follow it. Don't be afraid to ask for help!"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -14,14 +14,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Collaborative filtering deep dive"
|
||||
"# Collaborative Filtering Deep Dive"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## A first look at the data"
|
||||
"## A First Look at the Data"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -198,7 +198,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Learning the latent factors"
|
||||
"## Learning the Latent Factors"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -587,7 +587,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Collaborative filtering from scratch"
|
||||
"## Collaborative Filtering from Scratch"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -907,7 +907,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Weight decay"
|
||||
"### Weight Decay"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1009,7 +1009,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Creating our own Embedding module"
|
||||
"### Creating Our Own Embedding Module"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1207,7 +1207,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Interpreting embeddings and biases"
|
||||
"## Interpreting Embeddings and Biases"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1433,7 +1433,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Embedding distance"
|
||||
"### Embedding Distance"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1464,14 +1464,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Boot strapping a collaborative filtering model"
|
||||
"## Boot Strapping a Collaborative Filtering Model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Deep learning for collaborative filtering"
|
||||
"## Deep Learning for Collaborative Filtering"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1670,7 +1670,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Sidebar: kwargs and delegates"
|
||||
"### Sidebar: Kwargs and Delegates"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1735,7 +1735,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Further research\n",
|
||||
"### Further Research\n",
|
||||
"\n",
|
||||
"1. Take a look at all the differences between the `Embedding` version of `DotProductBias` and the `create_params` version, and try to understand why each of those changes is required. If you're not sure, try reverting each change, to see what happens. (NB: even the type of brackets used in `forward` has changed!)\n",
|
||||
"1. Find three other areas where collaborative filtering is being used, and find out what pros and cons of this approach in those areas.\n",
|
||||
|
||||
@@ -4,7 +4,7 @@
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"hide_input": true
|
||||
"hide_input": false
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -34,28 +34,28 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Tabular modelling deep dive"
|
||||
"# Tabular Modeling Deep Dive"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Categorical embeddings"
|
||||
"## Categorical Embeddings"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Beyond deep learning"
|
||||
"## Beyond Deep Learning"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## The dataset"
|
||||
"## The Dataset"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -147,7 +147,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Look at the data"
|
||||
"### Look at the Data"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -253,14 +253,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Decision trees"
|
||||
"## Decision Trees"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Handling dates"
|
||||
"### Handling Dates"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -945,7 +945,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Creating the decision tree"
|
||||
"### Creating the Decision Tree"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -6841,14 +6841,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Categorical variables"
|
||||
"### Categorical Variables"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Random forests"
|
||||
"## Random Forests"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -6865,7 +6865,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Creating a random forest"
|
||||
"### Creating a Random Forest"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -6965,7 +6965,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Out-of-bag error"
|
||||
"### Out-of-Bag Error"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -6992,14 +6992,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Model interpretation"
|
||||
"## Model Interpretation"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Tree variance for prediction confidence"
|
||||
"### Tree Variance for Prediction Confidence"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -7064,7 +7064,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Feature importance"
|
||||
"### Feature Importance"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -7216,7 +7216,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Removing low-importance variables"
|
||||
"### Removing Low-Importance Variables"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -7325,7 +7325,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Removing redundant features"
|
||||
"### Removing Redundant Features"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -7490,7 +7490,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Partial dependence"
|
||||
"### Partial Dependence"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -7569,14 +7569,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Data leakage"
|
||||
"### Data Leakage"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Tree interpreter"
|
||||
"### Tree Interpreter"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -7658,14 +7658,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Extrapolation and neural networks"
|
||||
"## Extrapolation and Neural Networks"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### The extrapolation problem"
|
||||
"### The Extrapolation Problem"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -7779,7 +7779,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Finding out of domain data"
|
||||
"### Finding Out-of-Domain Data"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -7978,7 +7978,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Using a neural network"
|
||||
"### Using a Neural Network"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -8297,7 +8297,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Sidebar: fastai's Tabular classes"
|
||||
"### Sidebar: fastai's Tabular Classes"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -8355,14 +8355,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Combining embeddings with other methods"
|
||||
"### Combining Embeddings with Other Methods"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Conclusion: our advice for tabular modeling"
|
||||
"## Conclusion: Our Advice for Tabular Modeling"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -8415,7 +8415,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Further research"
|
||||
"### Further Research"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -15,14 +15,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# NLP deep dive: RNNs"
|
||||
"# NLP Deep Dive: RNNs"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Text preprocessing"
|
||||
"## Text Preprocessing"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -36,7 +36,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Word tokenization with fastai"
|
||||
"### Word Tokenization with fastai"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -186,7 +186,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Subword tokenization"
|
||||
"### Subword Tokenization"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -412,7 +412,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Putting our texts into batches for a language model"
|
||||
"### Putting Our Texts Into Batches for a Language Model"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -849,14 +849,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Training a text classifier"
|
||||
"## Training a Text Classifier"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Language model using DataBlock"
|
||||
"### Language Model Using DataBlock"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -919,7 +919,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Fine tuning the language model"
|
||||
"### Fine-Tuning the Language Model"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -980,7 +980,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Saving and loading models"
|
||||
"### Saving and Loading Models"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1130,7 +1130,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Text generation"
|
||||
"### Text Generation"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1189,7 +1189,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Creating the classifier DataLoaders"
|
||||
"### Creating the Classifier DataLoaders"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1305,7 +1305,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Fine tuning the classifier"
|
||||
"### Fine-Tuning the Classifier"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1486,7 +1486,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Disinformation and language models"
|
||||
"## Disinformation and Language Models"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1535,7 +1535,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Further research"
|
||||
"### Further Research"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -15,14 +15,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Data munging with fastai's mid-level API"
|
||||
"# Data Munging with fastai's Mid-Level API"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Going deeper into fastai's layered API"
|
||||
"## Going Deeper into fastai's Layered API"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -179,7 +179,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Writing your own Transform"
|
||||
"### Writing Your Own Transform"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -315,7 +315,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## TfmdLists and Datasets: Transformed collections"
|
||||
"## TfmdLists and Datasets: Transformed Collections"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -599,7 +599,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Applying the mid-tier data API: SiamesePair"
|
||||
"## Applying the Mid-Tier Data API: SiamesePair"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -836,7 +836,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Further research"
|
||||
"### Further Research"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -851,7 +851,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Becoming a deep learning practitioner"
|
||||
"## Becoming a Deep Learning Practitioner"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -14,14 +14,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# A language model from scratch"
|
||||
"# A Language Model from Scratch"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## The data"
|
||||
"## The Data"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -176,7 +176,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Our first language model from scratch"
|
||||
"## Our First Language Model from Scratch"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -235,7 +235,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Our language model in PyTorch"
|
||||
"### Our Language Model in PyTorch"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -352,7 +352,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Our first recurrent neural network"
|
||||
"### Our First Recurrent Neural Network"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -450,7 +450,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Maintaining the state of an RNN"
|
||||
"### Maintaining the State of an RNN"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -634,7 +634,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Creating more signal"
|
||||
"### Creating More Signal"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -860,7 +860,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## The model"
|
||||
"## The Model"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1030,7 +1030,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Exploding or disappearing activations"
|
||||
"### Exploding or Disappearing Activations"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1044,7 +1044,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Building an LSTM from scratch"
|
||||
"### Building an LSTM from Scratch"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1140,7 +1140,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Training a language model using LSTMs"
|
||||
"### Training a Language Model Using LSTMs"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1339,14 +1339,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### AR and TAR regularization"
|
||||
"### AR and TAR Regularization"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Training a weight-tied regularized LSTM"
|
||||
"### Training a Weight-Tied Regularized LSTM"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1597,7 +1597,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Further research"
|
||||
"### Further Research"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -17,14 +17,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Convolutional neural networks"
|
||||
"# Convolutional Neural Networks"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## The magic of convolutions"
|
||||
"## The Magic of Convolutions"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1253,7 +1253,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Mapping a convolution kernel"
|
||||
"### Mapping a Convolution Kernel"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1479,21 +1479,21 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Strides and padding"
|
||||
"### Strides and Padding"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Understanding the convolution equations"
|
||||
"### Understanding the Convolution Equations"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Our first convolutional neural network"
|
||||
"## Our First Convolutional Neural Network"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1737,7 +1737,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Understanding convolution arithmetic"
|
||||
"### Understanding Convolution Arithmetic"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1808,21 +1808,21 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Receptive fields"
|
||||
"### Receptive Fields"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### A note about Twitter"
|
||||
"### A Note about Twitter"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Colour images"
|
||||
"## Color Images"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1896,7 +1896,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Improving training stability"
|
||||
"## Improving Training Stability"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1982,7 +1982,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### A simple baseline"
|
||||
"### A Simple Baseline"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -2125,7 +2125,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Increase batch size"
|
||||
"### Increase Batch Size"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -2204,7 +2204,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### 1cycle training"
|
||||
"### 1cycle Training"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -2353,7 +2353,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Batch normalization"
|
||||
"### Batch Normalization"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -2634,7 +2634,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Further research"
|
||||
"### Further Research"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -23,7 +23,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Going back to Imagenette"
|
||||
"## Going Back to Imagenette"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -230,14 +230,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Building a modern CNN: ResNet"
|
||||
"## Building a Modern CNN: ResNet"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Skip-connections"
|
||||
"### Skip-Connections"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -446,7 +446,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### A state-of-the-art ResNet"
|
||||
"### A State-of-the-Art ResNet"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -602,7 +602,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Bottleneck layers"
|
||||
"### Bottleneck Layers"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -856,7 +856,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Further research"
|
||||
"### Further Research"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -14,14 +14,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Application architectures deep dive"
|
||||
"# Application Architectures Deep Dive"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Computer vision"
|
||||
"## Computer Vision"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -97,7 +97,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### A Siamese network"
|
||||
"### A Siamese Network"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -353,7 +353,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Natural language processing"
|
||||
"## Natural Language Processing"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -367,7 +367,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Wrapping up architectures"
|
||||
"## Wrapping Up Architectures"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -405,7 +405,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Further research"
|
||||
"### Further Research"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -16,14 +16,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# The training process"
|
||||
"# The Training Process"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Let's start with SGD"
|
||||
"## Let's Start with SGD"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -229,7 +229,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## A generic optimizer"
|
||||
"## A Generic Optimizer"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -591,7 +591,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Decoupled weight_decay"
|
||||
"## Decoupled Weight Decay"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -605,7 +605,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Creating a callback"
|
||||
"### Creating a Callback"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -647,7 +647,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Callback ordering and exceptions"
|
||||
"### Callback Ordering and Exceptions"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -714,7 +714,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Further research"
|
||||
"### Further Research"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -16,28 +16,28 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# A neural net from the foundations"
|
||||
"# A Neural Net from the Foundations"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## A neural net layer from scratch"
|
||||
"## A Neural Net Layer from Scratch"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Modeling a neuron"
|
||||
"### Modeling a Neuron"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Matrix multiplication from scratch"
|
||||
"### Matrix Multiplication from Scratch"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -112,6 +112,13 @@
|
||||
"%timeit -n 20 t2=m1@m2"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Elementwise Arithmetic"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -710,7 +717,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Einstein summation"
|
||||
"### Einstein Summation"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -743,14 +750,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## The forward and backward passes"
|
||||
"## The Forward and Backward Passes"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Defining and initializing a layer"
|
||||
"### Defining and Initializing a Layer"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1149,7 +1156,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Gradients and backward pass"
|
||||
"### Gradients and Backward Pass"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1251,7 +1258,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Refactor the model"
|
||||
"### Refactor the Model"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1573,7 +1580,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Further research"
|
||||
"### Further Research"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -16,14 +16,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# CNN interpretation with CAM"
|
||||
"# CNN Interpretation with CAM"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## CAM and hooks"
|
||||
"## CAM and Hooks"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -450,7 +450,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Further research"
|
||||
"### Further Research"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -14,7 +14,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# fastai Learner from scratch"
|
||||
"# fastai Learner from Scratch"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1079,7 +1079,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Scheduling the learning rate"
|
||||
"### Scheduling the Learning Rate"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -1335,7 +1335,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Further research"
|
||||
"### Further Research"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -4,7 +4,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Concluding thoughts"
|
||||
"# Concluding Thoughts"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -15,7 +15,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Creating a blog"
|
||||
"# Creating a Blog"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -29,35 +29,35 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Creating the repository"
|
||||
"### Creating the Repository"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Setting up your homepage"
|
||||
"### Setting Up Your Homepage"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Creating posts"
|
||||
"### Creating Posts"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Synchronizing GitHub and your computer"
|
||||
"### Synchronizing GitHub and Your Computer"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Jupyter for blogging"
|
||||
"### Jupyter for Blogging"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||