Various typo and explanation fixes
Too many to mention. Read through several chapters and made changes as I went. I've had people (correctly) question me on several points and point out typos.
parent dae0f4b50b
commit 0d8036a109
@@ -349,13 +349,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Not ready for public consumption. In development.\n",
-"\n",
-"> author's note: The chapter on g-h filters is fairly complete as far as planned content goes. The content for the discrete Bayesian chapter, chapter 2, is also fairly complete. After that I have questions in my mind as to the best way to present the statistics needed to understand the filters. I try to avoid the 'dump a semester of math into 4 pages' approach of most textbooks, but then again perhaps I put things off a bit too long. In any case, the subsequent chapters are due a strong editing cycle where I decide how to best develop these concepts. Otherwise I am pretty happy with the content for the one dimensional and multidimensional Kalman filter chapters. I know the code works, I am using it in real world projects at work, but there are areas where the content about the covariance matrices is pretty bad. The implementation is fine, the description is poor. Sorry. It will be corrected. \n",
-"\n",
-"> Beyond that the chapters are much more in a state of flux. Reader beware. My writing methodology is to just vomit out whatever is in my head, just to get material, and then go back and think through presentation, test code, refine, and so on. Whatever is checked in in these later chapters may be wrong and not ready for your use. \n",
-"\n",
-"> Finally, nothing has been spell checked or proof read yet. I with IPython Notebook had spell check, but it doesn't seem to."
+"Not ready for public consumption. In development."
 ]
 },
 {
@@ -401,9 +395,11 @@
 "source": [
 "**author's note**. *The book is still being written, and so I am not focusing on issues like supporting multiple versions of Python. I am staying more or less on the bleeding edge of Python 3 for the time being. If you follow my suggestion of installing Anaconda all of the versioning problems will be taken care of for you, and you will not alter or affect any existing installation of Python on your machine. I am aware that telling somebody to install a specific packaging system is not a long term solution, but I can either focus on endless regression testing for every minor code change, or work on delivering the book, and then doing one sweep through it to maximize compatibility. I opt for the latter. In the meantime I welcome bug reports if the book does not work on your platform.*\n",
 "\n",
-"If you want to run the notebook on your computer, which is what I recommend, then you will have to have IPython 2.4 or later installed. I do not cover how to do that in this book; requirements change based on what other python installations you may have, whether you use a third party package like Anaconda Python, what operating system you are using, and so on.\n",
+"If you want to run the notebook on your computer, which is what I recommend, then you will have to have IPython 2.4 or later installed. IPython is an interactive architecture that provides IPython Notebook, the tool used to write this book. Note that the IPython version has nothing to do with the Python version. IPython 2.4 can run Python 3.4, IPython 3.0 can run Python 2.7, and so on. \n",
 "\n",
-"The notebook format was changed as of IPython 3.0. If you are running 2.4 you will still be able to open and run the notebooks, but they will be downconverted for you. If you make changes DO NOT push 2.4 version notebooks to me! I strongly recommend updating to 3.0 as soon as possible, as this format change will just become more frustrating with time.\n",
+"I do not cover how to install IPython in this book; requirements change based on what other Python installations you may have, whether you use a third party package like Anaconda Python, what operating system you are using, and so on.\n",
+"\n",
+"The IPython Notebook format was changed as of IPython 3.0. If you are running 2.4 you will still be able to open and run the notebooks, but they will be downconverted for you. If you make changes DO NOT push 2.4 version notebooks to me! I strongly recommend updating to 3.0 as soon as possible, as this format change will just become more frustrating with time.\n",
 "\n",
 "You will need Python 2.7 or later installed. Almost all of my work is done in Python 3.4, but I periodically test on 2.7. I do not promise any specific check in will work in 2.7 however. I do use Python's \"from __future__ import ...\" statement to help with compatibility. For example, all prints need to use parentheses. If you try to add, say, \"print 3.14\" into the book your script will fail; you must write \"print (3.14)\" as in Python 3.X.\n",
 "\n",
@@ -427,7 +423,6 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"\n",
 "I am writing an open source Bayesian filtering Python library called **FilterPy**. It is available on github at (https://github.com/rlabbe/filterpy). To ensure that you have the latest release you will want to grab a copy from github, and follow your Python installation's instructions for adding it to the Python search path.\n",
 "\n",
 "I have also made the project available on PyPi, the Python Package Index. I will be honest, I am not updating this as fast as I am changing the code in the library. That will change as the library and this book mature. To install from PyPi, at the command line issue the command\n",
@@ -445,7 +440,7 @@
 "\n",
 "Some chapters introduce functions that are useful for the rest of the book. Those functions are initially defined within the Notebook itself, but the code is also stored in a Python file that is imported if needed in later chapters. I do document where the function is first defined when I do this, but this is still a work in progress. I try to avoid this because then I always face the issue of code in the directory becoming out of sync with the code in the book. However, IPython Notebook does not give us a way to refer to code cells in other notebooks, so this is the only mechanism I know of to share functionality across notebooks.\n",
 "\n",
-"There is an undocumented directory called **exp**. This is where I write and test code prior to putting it in the book. There is some interesting stuff in there, and feel free to look at it. As the book evolves I plan to create examples and projects, and a lot of this material will end up there. Small experiments will eventually just be deleted. If you are just interested in reading the book you can safely ignore this directory. \n",
+"There is an undocumented directory called **experiments**. This is where I write and test code prior to putting it in the book. There is some interesting stuff in there, and feel free to look at it. As the book evolves I plan to create examples and projects, and a lot of this material will end up there. Small experiments will eventually just be deleted. If you are just interested in reading the book you can safely ignore this directory. \n",
 "\n",
 "\n",
 "The directory **styles** contains a css file containing the style guide for the book. The default look and feel of IPython Notebook is rather plain. Work is being done on this. I have followed the examples set by books such as [Probabilistic Programming and Bayesian Methods for Hackers](http://nbviewer.ipython.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter1_Introduction/Chapter1_Introduction.ipynb). I have also been very influenced by Professor Lorena Barba's fantastic work, [available here](https://github.com/barbagroup/CFDPython). I owe all of my look and feel to the work of these projects. "
@@ -574,7 +569,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.4.2"
+"version": "3.4.3"
 }
 },
 "nbformat": 4,
01_g-h_filter.ipynb (38813): file diff suppressed because it is too large
@@ -290,7 +290,7 @@
 "\n",
 "When I begin listening to the sensor I have no reason to believe that Simon is at any particular position in the hallway. He is equally likely to be in any position. The probability that he is in each position is therefore 1/10. \n",
 "\n",
-"Let us represent our belief of his position at any time in a numpy array."
+"Let us represent our belief of his position at any time in a NumPy array."
 ]
 },
 {
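For reference, the uniform prior this cell describes is one line of NumPy. A minimal sketch (the name `pos_belief` follows the chapter; later sketches reuse this import):

```python
import numpy as np

# Ten hallway positions, all equally likely: a uniform prior.
pos_belief = np.array([0.1] * 10)
```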
@@ -484,7 +484,7 @@
 "WLWwiMcZ3gAAAABJRU5ErkJggg==\n"
 ],
 "text/plain": [
-"<matplotlib.figure.Figure at 0x7f39c3a4b7f0>"
+"<matplotlib.figure.Figure at 0x7fa850d3eac8>"
 ]
 },
 "metadata": {},
@@ -505,7 +505,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We call this a <i>multimodal</i> distribution because we have multiple beliefs about the position of our dog. Of course we are not saying that we think he is simultaneously in three different locations, merely that so far we have narrowed down our knowledge in his position to these locations. \n",
+"We call this a <i>multimodal</i> distribution because we have multiple beliefs about the position of our dog. Of course we are not saying that we think he is simultaneously in three different locations, merely that so far we have narrowed down our knowledge of his position to one of these three locations. \n",
 "\n",
 "I hand coded the `pos_belief` array in the code above. How would we implement this in code? Well, hallway represents each door as a 1, and wall as 0, so we will multiply the hallway variable by the percentage, like so:"
 ]
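A minimal sketch of that multiplication, assuming the chapter's three-door hallway layout:

```python
# 1 marks a door, 0 a wall; the chapter's hallway has three doors.
hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])

# A 'door' reading splits our belief evenly across the three doors.
pos_belief = hallway * (1. / 3)
```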
@@ -549,7 +549,7 @@
 " * door\n",
 " \n",
 "\n",
-"Can we deduce where Simon is at the end of that sequence? Of course! Given the hallway's layout there is only one place where you can be in front of a door, move once to the right, and be in front of another door, and that is at the left end. Therefore we can confidently state that Simon is in front of the second doorway. If this is not clear, suppose Simon had started at the second or third door. After moving to the right, his sensor would have returned 'wall'. Therefore the only possibility is that he is now in front of the second door. We denote this in Python with:"
+"Can we deduce where Simon is at the end of that sequence? Of course! Given the hallway's layout there is only one place where you can be in front of a door, move once to the right, and be in front of another door, and that is at the left end. Therefore we can confidently state that Simon is in front of the second doorway. If this is not clear, suppose Simon had started at the second or third door. After moving to the right, his sensor would have returned 'wall'. That doesn't match the sensor readings, so we know he didn't start there. We can continue with that logic for all the remaining starting positions. Therefore the only possibility is that he is now in front of the second door. We denote this in Python with:"
 ]
 },
 {
@@ -576,7 +576,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Obviously I carefully constructed the hallway layout and sensor readings to give us an exact answer quickly. Real problems will not be so clear cut. But this should trigger your intuition - the first sensor reading only gave us very low probabilities (0.333) for Simon's location, but after a position update and another sensor reading we knew much more about where he is. You might suspect, correctly, that if you had a very long hallway with a large number of doors that after several sensor readings and positions updates we would either be able to know where Simon was, or have the possibilities narrowed down to a small number of possibilities. For example, suppose we had a long sequence of \"door, right, door, right, wall, right, wall, right, door, right, door, right, wall, right, wall, right, wall, right, wall, right, door\". Simon could only be located where we had a sequence of [1,1,0,0,1,1,0,0,0,0,1] in the hallway. There might be only one match for that, or at most a few. Either way we will be far more certain about his position then when we started.\n",
+"Obviously I carefully constructed the hallway layout and sensor readings to give us an exact answer quickly. Real problems will not be so clear cut. But this should trigger your intuition - the first sensor reading only gave us very low probabilities (0.333) for Simon's location, but after a position update and another sensor reading we knew much more about where he was. You might suspect, correctly, that if you had a very long hallway with a large number of doors, then after several sensor readings and position updates we would either be able to know where Simon was, or have his location narrowed down to a small number of possibilities. For example, suppose we had a long sequence of \"door, right, door, right, wall, right, wall, right, door, right, door, right, wall, right, wall, right, wall, right, wall, right, door\". Simon could only have started in a location where his movements had a door sequence of [1,1,0,0,1,1,0,0,0,0,1] in the hallway. There might be only one match for that, or at most a few. Either way we will be far more certain about his position than when we started.\n",
 "\n",
 "We could work through the code to implement this solution, but instead let us consider a real world complication to the problem."
 ]
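The matching this paragraph describes can be sketched directly; `long_hallway` below is a hypothetical layout, not one from the book:

```python
long_hallway = [1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0]  # hypothetical
seq = [1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1]  # doors/walls seen, one step apart

n = len(long_hallway)
# Starting positions whose door/wall pattern matches the readings.
matches = [i for i in range(n)
           if all(long_hallway[(i + j) % n] == s for j, s in enumerate(seq))]
```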
@@ -596,7 +596,7 @@
 "\n",
 "At first this may seem like an insurmountable problem. If the sensor is noisy it casts doubt on every piece of data. How can we conclude anything if we are always unsure?\n",
 "\n",
-"The key, as with the problem above, is probabilities. We are already comfortable with assigning a probabilistic belief about the location of the dog; now we just have to incorporate the additional uncertainty caused by the sensor noise. Say we think there is a 50% chance that our dog is in front of a specific door and we get a reading of 'door'. Well, we think that is only likely to be true 0.6 of the time, so we multiply: $0.5 * 0.6= 0.3$. Likewise, if we think the chances that our dog is in front of a wall is 0.1, and the reading is 'door', we would multiply the probability by the chances of a miss: $0.1 * 0.2 = 0.02$.\n",
+"The key, as with the problem above, is probabilities. We are already comfortable with assigning a probabilistic belief about the location of the dog; now we just have to incorporate the additional uncertainty caused by the sensor noise. Say we think there is a 50% chance that our dog is in front of a specific door and then we get a reading of 'door'. Well, we think that is only likely to be true 0.6 of the time, so we multiply: $0.5 \\times 0.6 = 0.3$. Likewise, if we think the chance that our dog is in front of a wall is 0.1, and the reading is 'door', we would multiply the probability by the chance of a miss: $0.1 \\times 0.2 = 0.02$.\n",
 "\n",
 "However, we more or less chose 0.6 and 0.2 at random; if we multiply the `pos_belief` array by these values the end result will no longer represent a true probability distribution. "
 ]
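One way to sketch the scaled-and-normalized update this cell leads into (the 0.6/0.2 likelihoods come from the text; the function itself is an assumption, not the notebook's listing):

```python
def update(pos_belief, hallway, reading, p_hit=0.6, p_miss=0.2):
    # Scale each position by how well it agrees with the reading,
    # then normalize so the belief sums to 1 again.
    scale = np.where(hallway == reading, p_hit, p_miss)
    belief = pos_belief * scale
    return belief / belief.sum()
```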
@@ -764,7 +764,7 @@
 "FtXJCJBlAAAAAElFTkSuQmCC\n"
 ],
 "text/plain": [
-"<matplotlib.figure.Figure at 0x7f39c375c780>"
+"<matplotlib.figure.Figure at 0x7fa850a44ef0>"
 ]
 },
 "metadata": {},
@@ -959,7 +959,7 @@
 "FtPch8/CAAAAAElFTkSuQmCC\n"
 ],
 "text/plain": [
-"<matplotlib.figure.Figure at 0x7f39c3686438>"
+"<matplotlib.figure.Figure at 0x7fa85097d908>"
 ]
 },
 "metadata": {},
@@ -1201,7 +1201,7 @@
 "ACQirgEAIBFxDQAAiYhrAABIRFwDAEAi4hoAABL5fwpmJuNqHGHfAAAAAElFTkSuQmCC\n"
 ],
 "text/plain": [
-"<matplotlib.figure.Figure at 0x7f39c36e9b70>"
+"<matplotlib.figure.Figure at 0x7fa85096e4e0>"
 ]
 },
 "metadata": {},
@@ -1247,7 +1247,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We want to solve real world problems, and we have already stated that all sensors have noise. Therefore the code above must be wrong. What if the sensor reported that our dog moved one space, but he actually moved two spaces, or zero? Once again this may initially sound like an insurmountable problem, but let's just model it in math. Since this is just an example, we will create a pretty simple noise model for the sensor - later in the book we will handle far more sophisticated errors.\n",
+"We want to solve real world problems, and we have already stated that all sensors have noise. Therefore the code above must be wrong since it assumes perfect measurements. What if the sensor reported that our dog moved one space, but he actually moved two spaces, or zero? Once again this may initially sound like an insurmountable problem, but let's just model it in math. Since this is just an example, we will create a pretty simple noise model for the sensor - later in the book we will handle far more sophisticated errors.\n",
 "\n",
 "We will say that when the sensor sends a movement update, it is 80% likely to be right, and it is 10% likely to overshoot one position to the right, and 10% likely to undershoot to the left. That is, if we say the movement was 4 (meaning 4 spaces to the right), the dog is 80% likely to have moved 4 spaces to the right, 10% to have moved 3 spaces, and 10% to have moved 5 spaces.\n",
 "\n",
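The 80/10/10 movement model reads naturally as a small convolution. A sketch assuming a circular hallway (indices wrap with modulo), not the notebook's exact listing:

```python
def predict(pos_belief, move, p_correct=0.8, p_over=0.1, p_under=0.1):
    # Each position receives probability from the cells that could have
    # moved into it: the exact move, an overshoot, and an undershoot.
    n = len(pos_belief)
    result = np.zeros(n)
    for i in range(n):
        result[i] = (pos_belief[(i - move) % n] * p_correct +
                     pos_belief[(i - move - 1) % n] * p_over +
                     pos_belief[(i - move + 1) % n] * p_under)
    return result
```

The weights sum to 1, so the output is still a valid probability distribution; no renormalization is needed.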
@@ -1405,7 +1405,7 @@
 "DQAAiYhrAABIRFwDAEAi4hoAABIR1wAAkIi4BgCARP4/nnsht+19KnYAAAAASUVORK5CYII=\n"
 ],
 "text/plain": [
-"<matplotlib.figure.Figure at 0x7f39c33fe7f0>"
+"<matplotlib.figure.Figure at 0x7fa850704ac8>"
 ]
 },
 "metadata": {},
@@ -1440,7 +1440,8 @@
 "cell_type": "code",
 "execution_count": 11,
 "metadata": {
-"collapsed": false
+"collapsed": false,
+"scrolled": true
 },
 "outputs": [
 {
@@ -1588,7 +1589,7 @@
 "iGsAAEhEXAMAQCLiGgAAEhHXAACQiLgGAIBExDUAACTy/wCvjTrhgem4kgAAAABJRU5ErkJggg==\n"
 ],
 "text/plain": [
-"<matplotlib.figure.Figure at 0x7f39c356f630>"
+"<matplotlib.figure.Figure at 0x7fa86c351978>"
 ]
 },
 "metadata": {},
@@ -1756,7 +1757,7 @@
 "iGsAAEhEXAMAQCLiGgAAEhHXAACQiLgGAIBExDUAACTy/wCvjTrhgem4kgAAAABJRU5ErkJggg==\n"
 ],
 "text/plain": [
-"<matplotlib.figure.Figure at 0x7f39c33cc320>"
+"<matplotlib.figure.Figure at 0x7fa850659320>"
 ]
 },
 "metadata": {},
@@ -1771,7 +1772,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"This is not a coincidence, or the result of a carefully chosen example - it is always true of the update step. This is inevitable; if our sensor is noisy we will lose a bit of information on every update. Suppose we were to perform the update an infinite number of times - what would the result be? If we lose information on every step, we must eventually end up with no information at all, and our probabilities will be equally distributed across the `pos_belief` array. Let's try this with say 500 iterations.\n"
+"This is not a coincidence, or the result of a carefully chosen example - it is always true of the predict step. This is inevitable; if our sensor is noisy we will lose a bit of information on every prediction. Suppose we were to perform the prediction an infinite number of times - what would the result be? If we lose information on every step, we must eventually end up with no information at all, and our probabilities will be equally distributed across the `pos_belief` array. Let's try this with 500 iterations.\n"
 ]
 },
 {
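Using the `predict()` sketch above, the 500-iteration experiment looks like this (starting from assumed perfect knowledge of the position):

```python
pos_belief = np.array([1.] + [0.] * 9)  # certain the dog is at position 0
for _ in range(500):
    pos_belief = predict(pos_belief, move=1)
print(pos_belief)  # approaches 0.1 everywhere: all information is lost
```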
@@ -1935,7 +1936,7 @@
 "/wDmJxpYHvwEtQAAAABJRU5ErkJggg==\n"
 ],
 "text/plain": [
-"<matplotlib.figure.Figure at 0x7f39c34610f0>"
+"<matplotlib.figure.Figure at 0x7fa85073e6a0>"
 ]
 },
 "metadata": {},
@@ -1977,13 +1978,13 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"The problem of losing information during an update may make it seem as if our system would quickly devolve into no knowledge. However, recall that our process is not an endless series of updates, but of *measure->update->measure->update->measure->update...* The output of the measure step is fed into the update. The update step, with a degraded certainty, is then fed into the measure step. \n",
+"The problem of losing information during a prediction may make it seem as if our system would quickly devolve into no knowledge. However, recall that our process is not an endless series of predictions, but of *update->predict->update->predict->update->predict->update...* The output of the update step, where we measure the current position, is fed into the prediction. The prediction step, with a degraded certainty, is then fed back into the update step where we measure the position again.\n",
 "\n",
-"Let's think about this intuitively. After the first measure->update round we have degraded the knowledge we gained by the measurement by a small amount. But now we take another measurement. When we try to incorporate that new measurement into our belief, do we become more certain, less certain, or equally certain. Consider a simple case - you are sitting in your office. A co-worker asks another co-worker where you are, and they report \"in his office\". You keep sitting there while they ask and answer \"has he moved\"? \"No\" \"Where is he\" \"In his office\". Eventually you get up and move, and lets say the person didn't see you move. At that time the questions will go \"Has he moved\" \"no\" (but you have!) \"Where is he\" \"In the kitchen\". Wow! At that moment the statement that you haven't moved conflicts strongly with the next measurement that you are in the kitchen. If we were modeling these with probabilities the probability that you are in your office would lower, and the probability that you are in the kitchen would go up a little bit. But now imagine the subsequent conversation: \"has he moved\" \"no\" \"where is he\" \"in the kitchen\". Pretty quickly the belief that you are in your office would fade away, and the belief that you are in the kitchen would increase to near certainty. The belief that you are in the office will never go to zero, nor will the belief that you are in the kitchen ever go to 1.0 because of the chances of error, but in practice your co-workers would be correct to be quite confident in their system.\n",
+"Let's think about this intuitively. After the first update->predict round we have degraded the knowledge we gained by the measurement by a small amount. But now we take another measurement. When we try to incorporate that new measurement into our belief, do we become more certain, less certain, or equally certain? Consider a simple case - you are sitting in your office. A co-worker asks another co-worker where you are, and they report \"in his office\". You keep sitting there while they ask and answer \"has he moved\"? \"No\" \"Where is he\" \"In his office\". Eventually you get up and move, and let's say the person didn't see you move. At that time the questions will go \"Has he moved\" \"no\" (but you have!) \"Where is he\" \"In the kitchen\". Wow! At that moment the statement that you haven't moved conflicts strongly with the next measurement that you are in the kitchen. If we were modeling these with probabilities the probability that you are in your office would lower, and the probability that you are in the kitchen would go up a little bit. But now imagine the subsequent conversation: \"has he moved\" \"no\" \"where is he\" \"in the kitchen\". Pretty quickly the belief that you are in your office would fade away, and the belief that you are in the kitchen would increase to near certainty. The belief that you are in the office will never go to zero, nor will the belief that you are in the kitchen ever go to 1.0 because of the chances of error, but in practice your co-workers would be correct to be quite confident in their system.\n",
 "\n",
 "That is what intuition tells us. What does the math tell us?\n",
 "\n",
-"Well, we have already programmed the measure step, and we have programmed the update step. All we need to do is feed the result of one into the other, and we will have programmed our dog tracker!!! Let's see how it performs. We will input data as if the dog started at position 0 and moved right at each update. However, as in a real world application, we will start with no knowledge and assign equal probability to all positions. "
+"Well, we have already programmed the update step, and we have programmed the predict step. All we need to do is feed the result of one into the other, and we will have programmed our dog tracker!!! Let's see how it performs. We will input measurements as if the dog started at position 0 and moved right at each update. However, as in a real world application, we will start with no knowledge and assign equal probability to all positions. "
 ]
 },
 {
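Chaining the two sketches above gives the whole filter loop this cell describes. The measurement sequence assumes the dog starts at position 0 and moves one space right per step:

```python
hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
pos_belief = np.array([0.1] * 10)  # start with no knowledge

for step in range(6):
    reading = hallway[step % len(hallway)]  # what the sensor reports
    pos_belief = update(pos_belief, hallway, reading)
    pos_belief = predict(pos_belief, move=1)
```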
@@ -2169,7 +2170,7 @@
 "QmCC\n"
 ],
 "text/plain": [
-"<matplotlib.figure.Figure at 0x7f39c36dbb38>"
+"<matplotlib.figure.Figure at 0x7fa8506f84e0>"
 ]
 },
 "metadata": {},
@@ -2186,7 +2187,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"So after the first update we have assigned a high probability to each door position, and a low probability to each wall position. The update step shifted these probabilities to the right, smearing them about a bit. Now let's look at what happens at the next sense."
+"So after the first update we have assigned a high probability to each door position, and a low probability to each wall position. The predict step shifted these probabilities to the right, smearing them about a bit. Now let's look at what happens at the next sense."
 ]
 },
 {
@@ -2353,7 +2354,7 @@
 "YII=\n"
 ],
 "text/plain": [
-"<matplotlib.figure.Figure at 0x7f39c33ccd30>"
+"<matplotlib.figure.Figure at 0x7fa8509989e8>"
 ]
 },
 "metadata": {},
@@ -2530,7 +2531,7 @@
 "kMj/Azv6KDQ33AZlAAAAAElFTkSuQmCC\n"
 ],
 "text/plain": [
-"<matplotlib.figure.Figure at 0x7f39c34313c8>"
+"<matplotlib.figure.Figure at 0x7fa8508ebc50>"
 ]
 },
 "metadata": {},
@@ -2708,7 +2709,7 @@
 "SOT/ATXCA84OAAAAA0lEQVQfKKaMWAKYAAAAAElFTkSuQmCC\n"
 ],
 "text/plain": [
-"<matplotlib.figure.Figure at 0x7f39c354fc50>"
+"<matplotlib.figure.Figure at 0x7fa85062e5f8>"
 ]
 },
 "metadata": {},
@@ -2749,7 +2750,7 @@
 "source": [
 "You may be suspicious of the results above because I always passed correct sensor data into the functions. However, we are claiming that this code implements a *filter* - it should filter out bad sensor measurements. Does it do that?\n",
 "\n",
-"To make this easy to program and visualize I will change the layout of the hallway to mostly alternating doors and hallways:"
+"To make this easy to program and visualize I will change the layout of the hallway to mostly alternating doors and walls, and run the algorithm on 5 correct measurements:"
 ]
 },
 {
@@ -2915,7 +2916,7 @@
 "rkJggg==\n"
 ],
 "text/plain": [
-"<matplotlib.figure.Figure at 0x7f39c3626cf8>"
+"<matplotlib.figure.Figure at 0x7fa8508067b8>"
 ]
 },
 "metadata": {},
@@ -3096,7 +3097,7 @@
 "gETENQAAJPL/AKVaFt8zsqsBAAAAAElFTkSuQmCC\n"
 ],
 "text/plain": [
-"<matplotlib.figure.Figure at 0x7f39c3529ac8>"
+"<matplotlib.figure.Figure at 0x7fa85076d7f0>"
 ]
 },
 "metadata": {},
@@ -3583,7 +3584,7 @@
 "RArh4pqIiIiISCFcXBMRERERKYSLayIiIiIihfx/3pDuDjpTJXEAAAAASUVORK5CYII=\n"
 ],
 "text/plain": [
-"<matplotlib.figure.Figure at 0x7f39c357fe10>"
+"<matplotlib.figure.Figure at 0x7fa850873da0>"
 ]
 },
 "metadata": {},
@@ -3622,9 +3623,9 @@
 "\n",
 "With that said, while this filter is used in industry, it is not used often because it has several limitations. Getting around those limitations is the motivation behind the chapters in the rest of this book.\n",
 "\n",
-"The first problem is scaling. Our dog tracking problem used only one variable, $pos$, to denote the dog's position. Most interesting problems will want to track several things in a large space. Realistically, at a minimum we would want to track our dogs $(x,y)$ coordinate, and probably his velocity $(\\dot{x},\\dot{y})$ as well. We have not covered the multidimensional case, but instead of a histogram we use a multidimensional grid to store the probabilities at each discrete location. Each *sense()* and *update()* step requires updating all values in the grid, so a simple four variable problem would require $O(n^4)$ running time *per time step*. Realistic filters have 10 or more variables to track, leading to exorbitant computation requirements.\n",
+"The first problem is scaling. Our dog tracking problem used only one variable, $pos$, to denote the dog's position. Most interesting problems will want to track several things in a large space. Realistically, at a minimum we would want to track our dog's $(x,y)$ coordinates, and probably his velocity $(\\dot{x},\\dot{y})$ as well. We have not covered the multidimensional case, but instead of a histogram we use a multidimensional grid to store the probabilities at each discrete location. Each `update()` and `predict()` step requires updating all values in the grid, so a simple four variable problem would require $O(n^4)$ running time *per time step*. Realistic filters can have 10 or more variables to track, leading to exorbitant computation requirements.\n",
 "\n",
-"The second problem is that the histogram is discrete, but we live in a continuous world. The histogram requires that you model the output of your filter as a set of discrete points. In our dog in the hallway example, we used 10 positions, which is obviously far too few positions for anything but a toy problem. For example, for a 100 meter hallway you would need 10,000 positions to model the hallway to 1cm accuracy. So each sense and update operation would entail performing calculations for 10,000 different probabilities. It gets exponentially worse as we add dimensions. If our dog was roaming in a $100x100 m^2$ courtyard, we would need 100,000,000 bins ($10,000^2$) to get 1cm accuracy.\n",
+"The second problem is that the histogram is discrete, but we live in a continuous world. The histogram requires that you model the output of your filter as a set of discrete points. In our dog in the hallway example, we used 10 positions, which is obviously far too few positions for anything but a toy problem. For example, for a 100 meter hallway you would need 10,000 positions to model the hallway to 1cm accuracy. So each update and predict operation would entail performing calculations for 10,000 different probabilities. It gets exponentially worse as we add dimensions. If our dog was roaming in a $100\\times 100~m^2$ courtyard, we would need 100,000,000 bins ($10,000^2$) to get 1cm accuracy.\n",
 "\n",
 "A third problem is that the histogram is multimodal. This is not always a problem - an entire class of filters, the particle filters, are multimodal and are often used because of this property. But imagine if the GPS in your car reported to you that it is 40% sure that you are on D street, but 30% sure you are on Willow Avenue. I doubt that you would find that useful. Also, GPSs report their error - they might report that you are at $(109.878W, 38.326N)$ with an error of $9m$. There is no clear mathematical way to extract error information from a histogram. Heuristics suggest themselves to be sure, but there is no exact determination. You may or may not care about that while driving, but you surely do care if you are trying to send a rocket to Mars or track and hit an oncoming missile.\n",
 "\n",
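The grid sizes quoted here are easy to reproduce:

```python
cm_per_meter = 100
hallway_bins = 100 * cm_per_meter    # 100 m hallway at 1 cm: 10,000 bins
courtyard_bins = hallway_bins ** 2   # 100x100 m courtyard: 100,000,000 bins
```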
@@ -3787,7 +3788,7 @@
 "rkJggg==\n"
 ],
 "text/plain": [
-"<matplotlib.figure.Figure at 0x7f39c375fb70>"
+"<matplotlib.figure.Figure at 0x7fa8505c6d30>"
 ]
 },
 "metadata": {},
@@ -3806,7 +3807,7 @@
 "source": [
 " The largest probabilities are in position 0 and position 5. This does not fit our physical intuition at all. A dog cannot be in two places at once (my dog Simon certainly tries - his food bowl and my lap often have equal allure to him). We would have to use heuristics to decide how to interpret this distribution, and there is usually no satisfactory answer. This is not always a weakness - a considerable amount of literature has been written on *Multi-Hypothesis Tracking (MHT)*. We cannot always distill our knowledge to one conclusion, and MHT uses various techniques to maintain multiple story lines at once, using backtracking schemes to go *back in time* to correct hypotheses once more information is known. This will be the subject of later chapters. In other cases we truly have a multimodal situation - we may be optically tracking pedestrians on the street and need to represent all of their positions. \n",
 " \n",
-"In practice it is the exponential increase in computation time that leads to the discrete Bayes filter being the least frequently used of all filters in this book. Many problems are best formulated as discrete or multimodal, but we have other filter choices with better performance. With that said, if I had a small problem that this technique could handle I would choose to use it; it is trivial to implement, debug, and understand, all virtues in my book."
+"In practice it is the exponential increase in computation time that leads to the discrete Bayes filter being the least frequently used of all filters in this book. Many problems are best formulated as discrete or multimodal, but we have other filter choices with better performance. With that said, if I had a small problem that this technique could handle I would choose to use it; it is trivial to implement, debug, and understand, all virtues."
 ]
 },
 {
@@ -3832,9 +3833,9 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"The code is very small, but the result is huge! We will go into the math more later, but we have implemented a form of a Bayesian filter. It is commonly called a Histogram filter. The Kalman filter is also a Bayesian filter, and uses this same logic to produce its results. The math is a bit more complicated, but not by much. For now, we will just explain that Bayesian statistics compute the likelihood of the present based on the past. If we know there are two doors in a row, and the sensor reported two doors in a row, it is likely that we are positioned near those doors. Bayesian statistics just formalize that example, and Bayesian filters formalize filtering data based on that math by implementing the sense->update->sense->update process. \n",
+"The code is very small, but the result is huge! We will go into the math more later, but we have implemented a form of a Bayesian filter. It is commonly called a Histogram filter. The Kalman filter is also a Bayesian filter, and uses this same logic to produce its results. The math is a bit more complicated, but not by much. For now, we will just explain that Bayesian statistics compute the likelihood of some estimate about the present based on imperfect knowledge of the past. If we know there are two doors in a row, and the sensor reported two doors in a row, it is likely that we are positioned near those doors. Bayesian statistics just formalize that example, and Bayesian filters formalize filtering data based on that math by implementing the predict->update->predict->update process. \n",
 "\n",
-"We have learned how to start with no information and derive information from noisy sensors. Even though our sensors are very noisy (most sensors are more than 80% accurate, for example) we quickly converge on the most likely position for our dog. We have learned how the update step always degrades our knowledge, but the addition of another measurement, even when it might have noise in it, improves our knowledge, allowing us to converge on the most likely result.\n",
+"We have learned how to start with no information and derive information from noisy sensors. Even though the sensors in this chapter are very noisy (most sensors are more than 80% accurate, for example) we quickly converge on the most likely position for our dog. We have learned how the predict step always degrades our knowledge, but the addition of another measurement, even when it might have noise in it, improves our knowledge, allowing us to converge on the most likely result.\n",
 "\n",
 "If you followed the math carefully you will realize that all of this math is exact. The bar charts that we are displaying are not an *estimate* or *guess* - they are mathematically exact results that exactly represent our knowledge. The knowledge is probabilistic, to be sure, but it is exact, and correct.\n",
 "\n",
@@ -3891,7 +3892,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.4.2"
+"version": "3.4.3"
 }
 },
 "nbformat": 4,
04_Gaussians.ipynb (4878): file diff suppressed because it is too large
05_Kalman_Filters.ipynb (23762): file diff suppressed because it is too large

pdf/readme.txt: new file, 22 lines
@@ -0,0 +1,22 @@
+This directory contains code to convert the book into the PDF file. The normal
+build process is to cd into this directory, and run build_book from the command
+line. If the build is successful (no errors printed), then run clean_book from
+the command line. clean_book is not run automatically because if there is an
+error you probably need to look at the intermediate output to debug the issue.
+
+I build the PDF by merging all of the notebooks into one huge one. I strip out
+the initial cells for the book formatting and table of contents, and do a few
+other things so it renders well in PDF.
+
+There is some code to do the same from Windows (.bat files), but they are now
+a bit out of date.
+
+There is also some experimental code to convert to html.
+
+The files with short in the name combine only a couple of notebooks together.
+I use this to test the production without having to wait the relatively long
+time required to produce the entire book. Mostly this is for testing the
+scripts.
+
+No one but me should need to run this stuff, but if you fork the project and
+want to generate a PDF, this is how you do it.