Spelling and book formatting changes.
Ran spell check on the files (not easy; there is no built-in support for spell checking). Formatted the PDF to use a larger font.
This commit is contained in:
parent 110a4f7c6d
commit d6cda0e0b5
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -260,9 +260,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"If you've gotten this far I hope that you are thinking that the Kalman filter's fearsome reputation is somewhat undeserved. Sure, I handwaved some equations away, but I hope implemention has been fairly straightforward for you. The underlying concept is quite straightforward - take two measurements, or a measurement and a prediction, and choose the output to be somewhere between the two. If you believe the measurement more your guess will be closer to the measurement, and if you believe the prediction is more accurate your guess will lie closer it it. That's not rocket science (little joke - it is exactly this math that got Apollo to the moon and back!). \n",
"If you've gotten this far I hope that you are thinking that the Kalman filter's fearsome reputation is somewhat undeserved. Sure, I hand waved some equations away, but I hope implementation has been fairly straightforward for you. The underlying concept is quite straightforward - take two measurements, or a measurement and a prediction, and choose the output to be somewhere between the two. If you believe the measurement more your guess will be closer to the measurement, and if you believe the prediction is more accurate your guess will lie closer it it. That's not rocket science (little joke - it is exactly this math that got Apollo to the moon and back!). \n",
"\n",
"Well, to be honest I have been choosing my problems carefully. For any arbitrary problem finding some of the matrices that we need to feed into the Kalman filter equations can be quite difficult. I haven't been *too tricky*, though. Equations like Newton's equations of motion can be trivially computed for Kalman filter applications, and they make up the bulk of the kind of problems that we want to solve. If you are a hobbiest, you can safely pass by this chapter for now, and perhaps forever. Some of the later chapters will assume the material in this chapter, but much of the work will still be acceessible to you. \n",
"Well, to be honest I have been choosing my problems carefully. For any arbitrary problem finding some of the matrices that we need to feed into the Kalman filter equations can be quite difficult. I haven't been *too tricky*, though. Equations like Newton's equations of motion can be trivially computed for Kalman filter applications, and they make up the bulk of the kind of problems that we want to solve. If you are a hobbyist, you can safely pass by this chapter for now, and perhaps forever. Some of the later chapters will assume the material in this chapter, but much of the work will still be accessible to you. \n",
"\n",
"But, I urge everyone to at least read the first section, and to skim the rest. It is not much harder than what you have done - the difficulty comes in finding closed form expressions for specific problems, not understanding the math in this chapter. \n"
]
@@ -272,7 +272,7 @@
"level": 2,
"metadata": {},
"source": [
"Modelling a Linear System that Has Noise"
"Modeling a Linear System that Has Noise"
]
},
{
@@ -281,7 +281,7 @@
"source": [
"We need to start by understanding the underlying equations and assumptions that the Kalman filter uses. We are trying to model real world phenomena, so what do we have to consider?\n",
"\n",
"First, each physical system has a process. For example, a car travelling at a certain velocity goes so far in a fixed amount of time, and it's velocity varies as a function of it's aceleration. We describe that behavior with the well known Newtonian equations we learned in high school.\n",
"First, each physical system has a process. For example, a car traveling at a certain velocity goes so far in a fixed amount of time, and it's velocity varies as a function of it's acceleration. We describe that behavior with the well known Newtonian equations we learned in high school.\n",
"\n",
"\n",
"$$\n",
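For reference, the high-school Newtonian relations this passage appeals to are, for a constant acceleration $a$ over a time step $\Delta t$:

$$
\begin{aligned}
v &= v_0 + a\Delta t \\
x &= x_0 + v_0\Delta t + \frac{1}{2}a\Delta t^2
\end{aligned}
$$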
@@ -336,7 +336,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"I promised that you would not have to understand how to derive Kalman filter equations, and that is true. However, I do think it is worth walking through the equations one by one and becoming familiar with the variables. If this is your first time through the material feel free to skip ahead to the next section. However, you will eventually want to work through this material, so why not now? You will need to have passing familarity with these equations to read material written about the Kalman filter, as they all presuppose that you are familiar with the equations. I will reiterate them here for easy reference.\n",
"I promised that you would not have to understand how to derive Kalman filter equations, and that is true. However, I do think it is worth walking through the equations one by one and becoming familiar with the variables. If this is your first time through the material feel free to skip ahead to the next section. However, you will eventually want to work through this material, so why not now? You will need to have passing familiarity with these equations to read material written about the Kalman filter, as they all presuppose that you are familiar with the equations. I will reiterate them here for easy reference.\n",
"\n",
"\n",
"$$\n",
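The equations being walked through are the standard Kalman filter predict and update equations; in the notation used in these hunks they read:

$$
\begin{aligned}
\text{Predict:}\quad \bar{\mathbf{x}} &= \mathbf{Fx} + \mathbf{Bu} \\
\bar{\mathbf{P}} &= \mathbf{FPF}^T + \mathbf{Q} \\
\text{Update:}\quad \gamma &= \mathbf{z} - \mathbf{H}\bar{\mathbf{x}} \\
\mathbf{K} &= \bar{\mathbf{P}}\mathbf{H}^T(\mathbf{H}\bar{\mathbf{P}}\mathbf{H}^T + \mathbf{R})^{-1} \\
\mathbf{x} &= \bar{\mathbf{x}} + \mathbf{K}\gamma \\
\mathbf{P} &= (\mathbf{I} - \mathbf{KH})\bar{\mathbf{P}}
\end{aligned}
$$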
@@ -387,7 +387,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The blue prediction line is the output of $\\mathbf{Hx}$, and the dot labelled \"measurement\" is $\\mathbf{z}$. Therefore, $\\gamma = \\mathbf{z} - \\mathbf{Hx}$ is how we compute the residual, drawn in red. So $\\gamma$ is the residual.\n",
"The blue prediction line is the output of $\\mathbf{Hx}$, and the dot labeled \"measurement\" is $\\mathbf{z}$. Therefore, $\\gamma = \\mathbf{z} - \\mathbf{Hx}$ is how we compute the residual, drawn in red. So $\\gamma$ is the residual.\n",
"\n",
"The next line is the formidable:\n",
"\n",
@@ -399,7 +399,7 @@
"\n",
"$$(\\mathbf{HPH}^T + \\mathbf{R})^{-1}$$\n",
"\n",
"Taking the inverse is linear algebra's way of doing $\\frac{1}{x}$. So if you accept my admittedly hand wavely explanation it can be seen to be computing:\n",
"Taking the inverse is linear algebra's way of doing $\\frac{1}{x}$. So if you accept my admittedly hand wavey explanation it can be seen to be computing:\n",
"\n",
"$$ \n",
"gain_{measurement\\,space} = \\frac{uncertainty_{prediction}}{uncertainty_{measurement}}\n",
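The residual and gain computation discussed in this hunk can be sketched directly in numpy; the dimensions and values below are made up for illustration and are not code from the book:

    import numpy as np

    # Toy 1x1 system: H maps state to measurement space, P and R are the
    # prediction and measurement uncertainties (illustrative values).
    x = np.array([[2.0]])   # predicted state
    P = np.array([[4.0]])   # prediction covariance
    H = np.array([[1.0]])   # measurement function
    R = np.array([[9.0]])   # measurement noise covariance
    z = np.array([[3.5]])   # measurement

    gamma = z - np.dot(H, x)                        # residual
    S = np.dot(H, np.dot(P, H.T)) + R               # HPH' + R
    K = np.dot(np.dot(P, H.T), np.linalg.inv(S))    # Kalman gain
    x_new = x + np.dot(K, gamma)                    # updated estimate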
File diff suppressed because one or more lines are too long
276 Least_Squares_Filters.ipynb Normal file
@@ -0,0 +1,276 @@
{
"metadata": {
"name": "",
"signature": "sha256:df77e6367b272d34fe0d1178b053a99c258abca7195ecda99ac5a7e8e192c698"
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "heading",
"level": 1,
"metadata": {},
"source": [
"Least Squares Filters"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"#format the book\n",
"%matplotlib inline\n",
"from __future__ import division, print_function\n",
"import matplotlib.pyplot as plt\n",
"import book_format\n",
"book_format.load_style()"
],
"language": "python",
"metadata": {},
"outputs": [
{
"html": [
"<style> ... (book formatting CSS: fonts, cell widths, heading and link styles; omitted for brevity) ... </style>\n",
"<script> ... (MathJax configuration; omitted for brevity) ... </script>\n"
],
"metadata": {},
"output_type": "pyout",
"prompt_number": 1,
"text": [
"<IPython.core.display.HTML at 0x7fc878ec3d50>"
]
}
],
"prompt_number": 1
},
{
"cell_type": "heading",
"level": 2,
"metadata": {},
"source": [
"Introduction"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {}
}
]
}
File diff suppressed because one or more lines are too long
@@ -264,11 +264,11 @@
"\n",
"Not ready for public consumption. In development.\n",
"\n",
"author's note: The chapter on g-h filters is fairly complete as far as planned content goes. The content for the discrete Bayesian chapter, chapter 2, is also fairly complete. After that I have questions in my mind as to the best way to present the statistics needed to understand the filters. I try to avoid the 'dump a sememster of math into 4 pages' approash of most textbooks, but then again perhaps I put things off a bit too long. In any case, the subsequent chapters are due a strong editting cycle where I decide how to best develop these concepts. Otherwise I am pretty happy with the content for the one dimensional and multidimensional Kalman filter chapters. I know the code works, I am using it in real world projects at work, but there are areas where the content about the covariance matrices is pretty bad. The implementation is fine, the description is poor. Sorry. It will be corrected. \n",
"author's note: The chapter on g-h filters is fairly complete as far as planned content goes. The content for the discrete Bayesian chapter, chapter 2, is also fairly complete. After that I have questions in my mind as to the best way to present the statistics needed to understand the filters. I try to avoid the 'dump a semester of math into 4 pages' approach of most textbooks, but then again perhaps I put things off a bit too long. In any case, the subsequent chapters are due a strong editing cycle where I decide how to best develop these concepts. Otherwise I am pretty happy with the content for the one dimensional and multidimensional Kalman filter chapters. I know the code works, I am using it in real world projects at work, but there are areas where the content about the covariance matrices is pretty bad. The implementation is fine, the description is poor. Sorry. It will be corrected. \n",
"\n",
"Beyond that the chapters are much more in a state of flux. Reader beware. My writing methodology is to just vomit out whatever is in my head, just to get material, and then go back and think through presentation, test code, refine, and so on. Whatever is checked in in these later chapters may be wrong and not ready for your use. \n",
"\n",
"Finally, nothing has been spell checked or proof read yet. I with IPython Notebook had spell check, but it doesn't seem to. \n"
"Finally, nothing has been spell checked or proof read yet. I wish IPython Notebook had spell check, but it doesn't seem to. \n"
]
},
{
@@ -283,21 +283,21 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This is a book for programmers that have a need or interest in Kalman filtering. The motivation for this book came out of my desire for a gentle introduction to Kalman filtering. I'm a software engineer that spent almost two decades in the avionics field, and so I have always been 'bumping elbows' with the Kalman filter, but never implemented one myself. As I moved into solving tracking problems with computer vision the need became urgent. There are classic textbooks in the field, such as Grewal and Andrew's excellent *Kalman Filtering*. But sitting down and trying to read many of these books is a dismal and trying experience if you do not have the background. Typcially the first few chapters fly through several years of undergraduate math, blithely referring you to textbooks on, for example, It\u014d calculus, and presenting an entire semester's worth of statistics in a few brief paragraphs. These books are good textbooks for an upper undergraduate course, and an invaluable reference to researchers and professionals, but the going is truly difficult for the more casual reader. Symbology is introduced without explanation, different texts use different words and variables names for the same concept, and the books are almost devoid of examples or worked problems. I often found myself able to parse the words and comprehend the mathematics of a defition, but had no idea as to what real world phenomena these words and math were attempting to describe. \"But what does that *mean?*\" was my repeated thought.\n",
"This is a book for programmers that have a need or interest in Kalman filtering. The motivation for this book came out of my desire for a gentle introduction to Kalman filtering. I'm a software engineer that spent almost two decades in the avionics field, and so I have always been 'bumping elbows' with the Kalman filter, but never implemented one myself. They always has a fearsome reputation for difficulty, and I did not have the requisite education. Everyone I met that did implement them had multiple graduate courses on the topic and extensive industrial experience with them. As I moved into solving tracking problems with computer vision the need to implement them myself became urgent. There are classic textbooks in the field, such as Grewal and Andrew's excellent *Kalman Filtering*. But sitting down and trying to read many of these books is a dismal and trying experience if you do not have the background. Typically the first few chapters fly through several years of undergraduate math, blithely referring you to textbooks on, for example, It\u014d calculus, and presenting an entire semester's worth of statistics in a few brief paragraphs. These books are good textbooks for an upper undergraduate course, and an invaluable reference to researchers and professionals, but the going is truly difficult for the more casual reader. Symbology is introduced without explanation, different texts use different words and variables names for the same concept, and the books are almost devoid of examples or worked problems. I often found myself able to parse the words and comprehend the mathematics of a definition, but had no idea as to what real world phenomena these words and math were attempting to describe. \"But what does that *mean?*\" was my repeated thought.\n",
"\n",
"However, as I began to finally understand the Kalman filter I realized the underlying concepts are quite straightforward. A few simple probability rules, some intuition about how we integrate disparate knowledge to explain events in our everyday life and the core concepts of the Kalman filter are accessible. Kalman filters have a reputation for difficulty, but shorn of much of the formal terminology the beauty of the subject and of their math became clear to me, and I fell in love with the topic. \n",
"\n",
"As I began to understand the math and theory more difficulties itself. A book or paper's author makes some statement of fact and presents a graph as proof. Unfortunately, why the statement is true is not clear to me, nor is the method by which you might make that plot obvious. Or maybe I wonder \"is this true if R=0?\" Or the author provides pseudocode - at such a high level that the implementation is not obvious. Some books offer Matlab code, but I do not have a license to that expensive package. Finally, many books end each chapter with many useful exercises. Exercises which you need to understand if you want to implement Kalman filters for yourself, but excercises with no answers. If you are using the book in a classroom, perhaps this is okay, but it is terrible for the independent reader. I loathe that an author witholds information from me, presumably to avoid 'cheating' by the student in the classroom.\n",
"As I began to understand the math and theory more difficulties itself. A book or paper's author makes some statement of fact and presents a graph as proof. Unfortunately, why the statement is true is not clear to me, nor is the method by which you might make that plot obvious. Or maybe I wonder \"is this true if R=0?\" Or the author provides pseudocode - at such a high level that the implementation is not obvious. Some books offer Matlab code, but I do not have a license to that expensive package. Finally, many books end each chapter with many useful exercises. Exercises which you need to understand if you want to implement Kalman filters for yourself, but exercises with no answers. If you are using the book in a classroom, perhaps this is okay, but it is terrible for the independent reader. I loathe that an author withholds information from me, presumably to avoid 'cheating' by the student in the classroom.\n",
"\n",
"None of this necessary, from my point of view. Certainly if you are designing a Kalman filter for a aircraft or missile you must thoroughly master of all of the mathematics and topics in a typical Kalman filter textbook. I just want to track an image on a screen, or write some code for my Arduino project. I want to know how the plots in the book are made, and chose different parameters than the author chose. I want to run simulations. I want to inject more noise in the signal and see how a filter performs. There are thousands of opportunities for using Kalman filters in everyday code, and yet this fairly straightforward topic is the provence of rocket scientists and academics.\n",
"None of this necessary, from my point of view. Certainly if you are designing a Kalman filter for a aircraft or missile you must thoroughly master of all of the mathematics and topics in a typical Kalman filter textbook. I just want to track an image on a screen, or write some code for my Arduino project. I want to know how the plots in the book are made, and chose different parameters than the author chose. I want to run simulations. I want to inject more noise in the signal and see how a filter performs. There are thousands of opportunities for using Kalman filters in everyday code, and yet this fairly straightforward topic is the provenance of rocket scientists and academics.\n",
"\n",
"I wrote this book to address all of those needs. This is not the book for you if you program avionics for Boeing or design radars for Ratheon. Go get a degree at Georgia Tech, UW, or the like, because you'll need it. This book is for the hobbiest, the curious, and the working engineer that needs to filter or smooth data. \n",
"I wrote this book to address all of those needs. This is not the book for you if you program avionics for Boeing or design radars for Ratheon. Go get a degree at Georgia Tech, UW, or the like, because you'll need it. This book is for the hobbyist, the curious, and the working engineer that needs to filter or smooth data. \n",
"\n",
"This book is interactive. While you can read it online as static content, I urge you to use it as intended. It is written using IPython Notebook, which allows me to combine text, python, and python output in one place. Every plot, every piece of data in this book is generated from Python that is available to you right inside the notebook. Want to double the value of a parameter? Click on the Python cell, change the parameter's value, and click 'Run'. A new plot or printed output will appear in the book. \n",
"\n",
"This book has exercises, but it also has the answers. I trust you. If you just need an answer, go ahead and read the answer. If you want to internalize this knowledge, try to implement the exercise before you read the answer. \n",
"\n",
"This book has supporting libraries for computing statistics, plotting various things related to filters, and for the various filters that we cover. This does require a strong caveat; most of the code is written for didactic purposes. It is rare that I chose the most efficient solution (which often obscures the intent of the code), and in the first parts of the book I did not concern myself with numerical stability. This is important to understand - Kalman filters in aircraft are carefully designed and implemented to be numerically stable; the naive implemention is not stable in many cases. If you are serious about Kalman filters this book will not be the last book you need. My intention is to introduce you to the concepts and mathematics, and to get you to the point where the textbooks are approachable.\n",
"This book has supporting libraries for computing statistics, plotting various things related to filters, and for the various filters that we cover. This does require a strong caveat; most of the code is written for didactic purposes. It is rare that I chose the most efficient solution (which often obscures the intent of the code), and in the first parts of the book I did not concern myself with numerical stability. This is important to understand - Kalman filters in aircraft are carefully designed and implemented to be numerically stable; the naive implementation is not stable in many cases. If you are serious about Kalman filters this book will not be the last book you need. My intention is to introduce you to the concepts and mathematics, and to get you to the point where the textbooks are approachable.\n",
"\n",
"Finally, this book is free. The cost for the books required to learn Kalman filtering is somewhat prohibitive even for a Silicon Valley engineer like myself; I cannot believe the are within the reach of someone in a depressed economy, or a financially struggling student. I have gained so much from free software like Python, and free books like those from Allen B. Downey [here](http://www.greenteapress.com/). It's time to repay that. So, the book is free, it is hosted on free servers, and it uses only free and open software such as IPython and mathjax to create the book. "
]
@@ -342,7 +342,7 @@
"source": [
"If you want to run the notebook on your computer, which is what I recommend, then you will have to have IPython installed. I do not cover how to do that in this book; requirements change based on what other python installations you may have, whether you use a third party package like Anaconda Python, what operating system you are using, and so on. \n",
"\n",
"To use all features you will have to have Ipython 2.0 installed, which is released and stable as of April 2014. Most of the book does not require that, but I do make use of the interactive plotting widgets introduced in this release. A few cells will not run if you have an older version installed.\n",
"To use all features you will have to have IPython 2.0 installed, which is released and stable as of April 2014. Most of the book does not require that, but I do make use of the interactive plotting widgets introduced in this release. A few cells will not run if you have an older version installed.\n",
"\n",
"You will need Python 2.7 or later installed. Almost all of my work is done in Python 2.7, but I periodically test on 3.3. I do not promise any specific check in will work in 3.X, however. I do use Python's \"from __future__ import ...\" statement to help with compatibility. For example, all prints need to use parenthesis. If you try to add, say, \"print 3.14\" into the book your script will fail; you must write \"print (3.4)\" as in Python 3.X.\n",
"\n",
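A minimal illustration of the print compatibility point made above (standard library only):

    from __future__ import division, print_function

    # With this import at the top of a module, the Python 3 style print call
    # below behaves the same under Python 2.7 and Python 3.x.
    print(3.14)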
@@ -263,9 +263,9 @@
"In the previous chapter we developed the Extended Kalman Filter to allow us to use the Kalman filter with nonlinear problems. It is by far the most commonly used Kalman filter. However, it requires that you be able to analytically derive the Jacobian blah blah limp prose.\n",
"\n",
"\n",
"However, for many problems finding the Jacobian is either very difficult or impossible. Futhermore, being an approximation, the EKF can diverge. For all these reasons there is a need for a different way to approximate the Gaussian being passed through a nonlinear transfer function. In the last chapter I showed you this plot:\n",
"However, for many problems finding the Jacobian is either very difficult or impossible. Furthermore, being an approximation, the EKF can diverge. For all these reasons there is a need for a different way to approximate the Gaussian being passed through a nonlinear transfer function. In the last chapter I showed you this plot:\n",
"\n",
"**author's note - need to add calcuation of mean/var to the output.**"
"**author's note - need to add calculation of mean/var to the output.**"
]
},
{
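The Jacobian mentioned here can also be derived symbolically rather than by hand; a small sketch using sympy, with a made-up range and bearing measurement function standing in for a real one:

    import sympy

    x, y = sympy.symbols('x y')
    # Illustrative nonlinear measurement function: range and bearing to (x, y).
    h = sympy.Matrix([sympy.sqrt(x**2 + y**2),
                      sympy.atan2(y, x)])
    # Jacobian of h with respect to the state [x, y].
    J = h.jacobian(sympy.Matrix([x, y]))
    print(J)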
@@ -301,9 +301,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"I generated this by taking 500,000 samples from the input, passing it through the nonlinear tranform, and building a histogram of the result. From that histogram we can then compute a mean and a variance that we compared to the output of the EKF.\n",
"I generated this by taking 500,000 samples from the input, passing it through the nonlinear transform, and building a histogram of the result. From that histogram we can then compute a mean and a variance that we compared to the output of the EKF.\n",
"\n",
"It has perhaps occured to you that this sampling process constitutes a solution to our problem. This is called a 'monte carlo' approach, and it used by some Kalman filter designs, such as the *Ensemble filter*. Sampling requires no specialized knowledge programming, and does not require a closed form solution. No matter how nonlinear or poorly behaved the tranfer function is, as long as we sample with enough points we will build an accurate output distribution.\n",
"It has perhaps occurred to you that this sampling process constitutes a solution to our problem. This is called a 'monte carlo' approach, and it used by some Kalman filter designs, such as the *Ensemble filter*. Sampling requires no specialized knowledge programming, and does not require a closed form solution. No matter how nonlinear or poorly behaved the transfer function is, as long as we sample with enough points we will build an accurate output distribution.\n",
"\n",
"\"Enough points\" is the rub. The graph above was created with 500,000 points, and the output is still not smooth. You wouldn't need to use that many points to get a reasonable estimate of the mean and variance, but it will require many points. What's worse, this is only for 1 dimension. In general, the number of points required increases by the power of the number of dimensions. If you need $50$ points for 1 dimension, you need $50^2$ for two dimensions, $50^3$ for three dimensions, and so on. So while this approach does work, it is very computationally expensive. The Unscented Kalman filter uses a somewhat similar technique but reduces the amount of computation needed by a drastic amount. \n",
"\n",
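The sampling procedure described in this hunk amounts to a few lines of numpy; a sketch with an arbitrary nonlinear function standing in for the one used in the book:

    import numpy as np

    # Monte Carlo estimate of the mean and variance of a nonlinear transform.
    def f(x):
        return x**2 + 2.0*x      # arbitrary nonlinear transfer function

    samples = np.random.normal(loc=0.0, scale=1.0, size=500000)
    ys = f(samples)
    print(ys.mean(), ys.var())   # compare against the EKF's linearized output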
@@ -350,7 +350,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"So what would be fewest number of sampled points that we can use, and what kinds of constraints does this problem formulation put on the points? We will assume that we have no special knowledge about the nonlinear tranform as we want to find a generalized algorithm. For reasons that come clear in the next section, we will call these points *sigma points*.\n",
"So what would be fewest number of sampled points that we can use, and what kinds of constraints does this problem formulation put on the points? We will assume that we have no special knowledge about the nonlinear transform as we want to find a generalized algorithm. For reasons that come clear in the next section, we will call these points *sigma points*.\n",
"\n",
"Let's consider the simplest possible case, and see if it offers any insight. The simplest possible system is *identity* - the transformation does not alter the input. It should be clear that if our algorithm does not work for the identity transformation then the filter will never converge. In other words, if the input is 1 (for a one dimensional system), the output must also be 1. If the output was different, such as 1.1, then when we fed 1.1 into the transform at the next time step, we'd get out yet another number, maybe 1.23. The filter would run away (diverge). \n",
"\n",
@@ -446,7 +446,7 @@
"source": [
"So our desire is to have an algorithm for selecting sigma points based on some criteria. Maybe we know something about our nonlinear problem, and we know we want our sigma points to be very close together, or very far apart. Or through experimentation we decide that a certain choice of basis vectors from our hyperellipse are the best axis to choose our sigma points from. But we want this to be an algorithm - we don't want to have to hard code in a specific selection algorithm for each different problem. So we are going to want to be able to set some parameters to tell the algorithm how to automatically select the points and weights for us. That may seem a bit abstract, so let's just launch into it, and try to develop an intuitive understanding as we go.\n",
"\n",
"Assume a n-dimensional state variable $\\mathbf{x}$ with mean $\\mu$ and covariance $\\Sigma$. We want to choose $2n+1$ sigma points to approximate the gaussian distribution of $\\mathbf{x}$.\n",
"Assume a n-dimensional state variable $\\mathbf{x}$ with mean $\\mu$ and covariance $\\Sigma$. We want to choose $2n+1$ sigma points to approximate the Gaussian distribution of $\\mathbf{x}$.\n",
"\n",
"Our first sigma point is always going to be the mean of our input. We will call this $\\mathcal{X}_0$. So,\n",
"\n",
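For reference, one standard choice (due to Julier) of the $2n+1$ sigma points, which matches the $\kappa$ parameterization used later in this diff, is:

$$
\begin{aligned}
\mathcal{X}_0 &= \mu \\
\mathcal{X}_i &= \mu + \left(\sqrt{(n+\kappa)\Sigma}\right)_i, \quad i = 1, \ldots, n \\
\mathcal{X}_i &= \mu - \left(\sqrt{(n+\kappa)\Sigma}\right)_{i-n}, \quad i = n+1, \ldots, 2n
\end{aligned}
$$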
@@ -506,7 +506,7 @@
"$$\n",
" \n",
"\n",
"In other words, we geneate sigma points from an existing state variable from its mean and covariance matrix. We pass those sigma points through the nonlinear function that we are trying to filter. Then we use equations (2) and (3) to regenerate an approximation for the mean and covariance of the output."
"In other words, we generate sigma points from an existing state variable from its mean and covariance matrix. We pass those sigma points through the nonlinear function that we are trying to filter. Then we use equations (2) and (3) to regenerate an approximation for the mean and covariance of the output."
]
},
{
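The regeneration step described here is the unscented transform: each sigma point $\mathcal{X}_i$ is passed through the nonlinear function $f$ to give $\mathcal{Y}_i = f(\mathcal{X}_i)$, and the weighted estimates are

$$
\begin{aligned}
\mu' &= \sum_{i=0}^{2n} W_i \mathcal{Y}_i \\
\Sigma' &= \sum_{i=0}^{2n} W_i (\mathcal{Y}_i - \mu')(\mathcal{Y}_i - \mu')^T
\end{aligned}
$$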
@@ -558,7 +558,7 @@
"\\end{aligned}\n",
"$$\n",
"\n",
"These two lines of code implenent these equations with the `np.full()` method, which creates and fills an array with the same value. Then the value for the mean($W_0$) is computed and overwrites the filled in value. We make $W$ a $(2n+1)\\times1$ dimension array simply because linear algebra with numpy proceeds much more smoothly when all arrays are 2 dimensional, so the one dimensional array `[1,2,3]` is better espressed in numpy as `[[1,2,3]]`.\n",
"These two lines of code implenent these equations with the `np.full()` method, which creates and fills an array with the same value. Then the value for the mean($W_0$) is computed and overwrites the filled in value. We make $W$ a $(2n+1)\\times1$ dimension array simply because linear algebra with numpy proceeds much more smoothly when all arrays are 2 dimensional, so the one dimensional array `[1,2,3]` is better expressed in numpy as `[[1,2,3]]`.\n",
"\n",
" W = np.full((2*n+1,1), .5 / (n+kappa))\n",
" W[0] = kappa / (n+kappa)\n",
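Run on their own with made-up values for n and kappa, those two lines produce weights that sum to one:

    import numpy as np

    n, kappa = 2, 1.0
    # 2n+1 weights, all set to .5/(n+kappa), then the mean's weight overwritten.
    W = np.full((2*n + 1, 1), .5 / (n + kappa))
    W[0] = kappa / (n + kappa)
    print(W.sum())   # prints 1.0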
@@ -10,7 +10,6 @@
((* block tableofcontents *))\tableofcontents((* endblock tableofcontents *))
((* endblock predoc *))

((* block title *))
\title{Kalman and Bayesian Filters in Python}
\author{Roger R Labbe Jr}
163 g-h_filter.ipynb
File diff suppressed because one or more lines are too long
@@ -30,6 +30,6 @@ def merge_notebooks(filenames):

if __name__ == '__main__':
    #merge_notebooks(sys.argv[1:])
    merge_notebooks(['Preface.ipynb', 'Signals_and_Noise.ipynb','g-h_filter.ipynb', 'discrete_bayes.ipynb', 'Gaussians.ipynb', 'Kalman_Filters.ipynb', 'Multidimensional_Kalman_Filters.ipynb', 'Kalman_Filter_Math.ipynb', 'Extended_Kalman_Filters.ipynb', 'Unscented_Kalman_Filter.ipynb', 'Designing_Nonlinear_Kalman_Filters.ipynb'])
    merge_notebooks(['Preface.ipynb', 'Signals_and_Noise.ipynb','g-h_filter.ipynb', 'discrete_bayes.ipynb', 'Least_Squares_Filters.ipynb', 'Gaussians.ipynb', 'Kalman_Filters.ipynb', 'Multidimensional_Kalman_Filters.ipynb', 'Kalman_Filter_Math.ipynb', 'Extended_Kalman_Filters.ipynb', 'Unscented_Kalman_Filter.ipynb', 'Designing_Nonlinear_Kalman_Filters.ipynb', 'Appendix_Symbols_and_Notations.ipynb'])
    # merge_notebooks(['Preface.ipynb', 'Signals_and_Noise.ipynb' g-h_filter.ipynb discrete_bayes.ipynb Gaussians.ipynb Kalman_Filters.ipynb Multidimensional_Kalman_Filters.ipynb Kalman_Filter_Math.ipynb Designing_Kalman_Filters.ipynb Extended_Kalman_Filters.ipynb Unscented_Kalman_Filter.ipynb'])
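Merging notebooks of this vintage can be done by treating them as JSON documents and concatenating their cells; the sketch below is only a guess at the general shape of such a helper and is not the repository's actual merge_notebooks implementation:

    import json

    def merge_notebooks_sketch(filenames, outfile='book.ipynb'):
        """Concatenate the cells of several v3 notebooks into one (illustrative)."""
        merged = None
        for fname in filenames:
            with open(fname) as f:
                nb = json.load(f)
            if merged is None:
                merged = nb    # keep the first notebook's metadata
            else:
                # nbformat 3 notebooks keep their cells under worksheets[0]
                merged['worksheets'][0]['cells'].extend(nb['worksheets'][0]['cells'])
        with open(outfile, 'w') as f:
            json.dump(merged, f)

    # merge_notebooks_sketch(['Preface.ipynb', 'g-h_filter.ipynb'])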
22 report.tplx Normal file
@@ -0,0 +1,22 @@

% Default to the notebook output style
((* if not cell_style is defined *))
((* set cell_style = 'style_ipython.tplx' *))
((* endif *))

% Inherit from the specified cell style.
((* extends cell_style *))


%===============================================================================
% Latex Book
%===============================================================================

((* block predoc *))
((( super() )))
((* block tableofcontents *))\tableofcontents((* endblock tableofcontents *))
((* endblock predoc *))

((* block docclass *))
\documentclass[12pt]{report}
((* endblock docclass *))