Revisions based on Radford's comments.

parent 2e0e513912
commit 8a9ecc026e
@@ -117,9 +117,6 @@
" font-family: 'Arimo',verdana,arial,sans-serif;\n",
" line-height: 125%;\n",
" font-size: 120%;\n",
" width:740px;\n",
" margin-left:auto;\n",
" margin-right:auto;\n",
" text-align:justify;\n",
" text-justify:inter-word;\n",
" }\n",
@@ -2322,7 +2319,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,

@@ -117,9 +117,6 @@
" font-family: 'Arimo',verdana,arial,sans-serif;\n",
" line-height: 125%;\n",
" font-size: 120%;\n",
" width:740px;\n",
" margin-left:auto;\n",
" margin-right:auto;\n",
" text-align:justify;\n",
" text-justify:inter-word;\n",
" }\n",
@@ -1113,7 +1110,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"In the previous chapter we incorporated an uncertain measurement with an uncertain estimate by multiplying their Gaussians together. The result was another Gaussian with a smaller variance. If two piece of uncertain information corroborate each other we should be more certain in our conclusion. The graphs look like this:"
"In the previous chapter we incorporated an uncertain measurement with an uncertain estimate by multiplying their Gaussians together. The result was another Gaussian with a smaller variance. If two pieces of uncertain information corroborate each other we should be more certain in our conclusion. The graphs look like this:"
]
},
{
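As an aside, the product-of-Gaussians update referred to in this cell is easy to verify numerically. The snippet below is an illustrative sketch, not code from the book; the example numbers are made up.

```python
# Sketch: fuse two uncertain 1-D estimates by multiplying their Gaussians.
# The product of N(mu1, var1) and N(mu2, var2) is (up to normalization)
# another Gaussian whose variance is smaller than either input's.

def multiply_gaussians(mu1, var1, mu2, var2):
    mean = (var2 * mu1 + var1 * mu2) / (var1 + var2)
    var = (var1 * var2) / (var1 + var2)
    return mean, var

# example: prior estimate 10.0 (var 4.0) fused with measurement 11.0 (var 1.0)
mean, var = multiply_gaussians(10., 4., 11., 1.)
print(mean, var)   # mean lies between the inputs; var = 0.8 < min(4.0, 1.0)
```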
@@ -1217,14 +1214,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now suppose that there is a radar to the lower right of the aircraft. Further suppose that the radar is very accurate in the bearing measurement, but not very accurate at the range. That covariance, which is just the uncertainty in the reading might look like this (plotted in blue):"
"Now suppose that there is a radar to the lower left of the aircraft. Further suppose that the radar is very accurate in the bearing measurement, but not very accurate at the range. That covariance, which is just the uncertainty in the reading might look like this (plotted in blue):"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"collapsed": false
"collapsed": false,
"scrolled": true
},
"outputs": [
{
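A covariance of the kind this cell describes (tight across the bearing, loose along the range) can be built by rotating a diagonal covariance into x-y coordinates. This is an illustrative sketch only; the bearing and variances are made-up values, not the ones used in the notebook.

```python
import numpy as np

bearing = np.radians(45)           # assumed bearing from the radar to the aircraft
range_var, cross_var = 25., 0.5    # loose along the range, tight across it (made up)

# rotation from the sensor frame (range, cross-range) into the x-y frame
R = np.array([[np.cos(bearing), -np.sin(bearing)],
              [np.sin(bearing),  np.cos(bearing)]])
P_sensor = np.diag([range_var, cross_var])
P_xy = R.dot(P_sensor).dot(R.T)    # covariance expressed in x-y coordinates
print(P_xy)
```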
@@ -4002,7 +4000,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,

@@ -117,9 +117,6 @@
" font-family: 'Arimo',verdana,arial,sans-serif;\n",
" line-height: 125%;\n",
" font-size: 120%;\n",
" width:740px;\n",
" margin-left:auto;\n",
" margin-right:auto;\n",
" text-align:justify;\n",
" text-justify:inter-word;\n",
" }\n",
@@ -291,7 +288,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The title of this book is *Kalman and Bayesian Filters in Python* but to date I have not touched on the Bayesian aspect much. There was enough going on in the earlier chapters that adding this form of reasoning about filters could be a distraction rather than a help. I now which to take some time to explain what Bayesian probability is and how a Kalman filter is in fact a Bayesian filter. This is not just a diversion. First of all, a lot of the Kalman filter literature uses this formulation when talking about filters, so you will need to understand what they are talking about. Second, this math plays a strong role in filtering design once we move past the Kalman filter. \n",
"The title of this book is *Kalman and Bayesian Filters in Python* but to date I have not touched on the Bayesian aspect much. There was enough going on in the earlier chapters that adding this form of reasoning about filters could be a distraction rather than a help. I now wish to take some time to explain what Bayesian probability is and how a Kalman filter is in fact a Bayesian filter. This is not just a diversion. First of all, a lot of the Kalman filter literature uses this formulation when talking about filters, so you will need to understand what they are talking about. Second, this math plays a strong role in filtering design once we move past the Kalman filter. \n",
"\n",
"To do so we will go back to our first tracking problem - tracking a dog in a hallway. Recall the update step - we believed with some degree of precision that the dog was at position 7 (for example), and then receive a measurement that the dog is at position 7.5. We want to incorporate that measurement into our belief. In the *Discrete Bayes* chapter we used histograms to denote our estimates at each hallway position, and in the *One Dimensional Kalman Filters* we used Gaussians. Both are method of using *Bayesian* probability.\n",
"\n",
@@ -2513,7 +2510,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,

@@ -16,11 +16,19 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The autoreload extension is already loaded. To reload it, use:\n",
" %reload_ext autoreload\n"
]
},
{
"data": {
"text/html": [
@@ -117,9 +125,6 @@
" font-family: 'Arimo',verdana,arial,sans-serif;\n",
" line-height: 125%;\n",
" font-size: 120%;\n",
" width:740px;\n",
" margin-left:auto;\n",
" margin-right:auto;\n",
" text-align:justify;\n",
" text-justify:inter-word;\n",
" }\n",
@@ -250,7 +255,7 @@
"<IPython.core.display.HTML object>"
]
},
"execution_count": 1,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
@@ -354,9 +359,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"I particularly like the following way of looking at the problem, which I am borrowing from Dan Simon's *Optimal State Estimation* [1]. Consider a tracking problem where we get the range and bearing to a target, and we want to track its position. Suppose the distance is 50km, and the reported angle is 90$^\circ$. Now, given that sensors are imperfect, assume that the errors in both range and angle are distributed in a Gaussian manner. That is, each time we take a reading the range will be $50\pm\sigma^2_{range}$ and the angle will be $90\pm\sigma^2_{angle}$. Given an infinite number of measurements what is the expected value of the position?\n",
"I particularly like the following way of looking at the problem, which I am borrowing from Dan Simon's *Optimal State Estimation* [1]. Consider a tracking problem where we get the range and bearing to a target, and we want to track its position. Suppose the distance is 50 km, and the reported angle is 90$^\circ$. Now, given that sensors are imperfect, assume that the errors in both range and angle are distributed in a Gaussian manner. That is, each time we take a reading the range will be $50\pm\sigma^2_{range}$ and the angle will be $90\pm\sigma^2_{angle}$. Given an infinite number of measurements what is the expected value of the position?\n",
"\n",
"I have been recommending using intuition to solve problems in this book, so let's see how it fares for this problem (hint: nonlinear problems are *not* intuitive). We might reason that since the mean of the range will be 50km, and the mean of the angle will be 90$^\circ$, that clearly the answer will be x=0 km, y=90 km.\n",
"I have been recommending using intuition to solve problems in this book, so let's see how it fares for this problem (hint: nonlinear problems are *not* intuitive). We might reason that since the mean of the range will be 50 km, and the mean of the angle will be 90$^\circ$, that clearly the answer will be x=0 km, y=90 km.\n",
"\n",
"Well, let's plot that and find out. Here are 300 points plotted with a normal distribution of the distance of 0.4 km, and the angle having a normal distribution of 0.35 radians. We compute the average of the all of the positions, and display it as a star. Our intuition is displayed with a triangle."
]
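The experiment this cell describes can be reproduced in a few lines of NumPy. The sketch below is illustrative only; it uses the same assumed noise values as the text (0.4 km range standard deviation, 0.35 rad angle standard deviation) and shows that the mean of the converted positions does not land where intuition suggests.

```python
import numpy as np

np.random.seed(1)
N = 3000   # more samples than the book's plot, for a stable mean

ranges = np.random.normal(50., 0.4, N)               # range: 50 km, std 0.4 km
angles = np.random.normal(np.radians(90), 0.35, N)   # angle: 90 degrees, std 0.35 rad

xs = ranges * np.cos(angles)
ys = ranges * np.sin(angles)

# the mean x is near 0, but the mean y falls noticeably short of 50 km
print(xs.mean(), ys.mean())
```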
@@ -491,7 +496,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The plot labeled 'input' is the histogram of the original data. This is passed through the transfer function $f(x)=2x+1$ which is displayed in the chart on the bottom right. The red lines shows how one value, $x=0$ is passed through the function. Each value from input is passed through in the same way to the output function on the left. For the output I computed the mean by taking the average of all the points, and drew the results with the dotted blue line. A solid blue line shows the actual mean for the point $x=0$. The output looks like a Gaussian, and is in fact a Gaussian. We can see that it is altered -the variance in the output is larger than the variance in the input, and the mean has been shifted from 0 to 1, which is what we would expect given the transfer function $f(x)=2x+1$ The $2x$ affects the variance, and the $+1$ shifts the mean The computed mean, represented by the dotted blue line, is nearly equal to the actual mean. If we used more points in our computation we could get arbitrarily close to the actual value.\n",
"The plot labeled 'input' is the histogram of the original data. This is passed through the transfer function $f(x)=2x+1$ which is displayed in the chart on the bottom lweft\n",
". The red lines shows how one value, $x=0$ is passed through the function. Each value from input is passed through in the same way to the output function on the left. For the output I computed the mean by taking the average of all the points, and drew the results with the dotted blue line. A solid blue line shows the actual mean for the point $x=0$. The output looks like a Gaussian, and is in fact a Gaussian. We can see that it is altered -the variance in the output is larger than the variance in the input, and the mean has been shifted from 0 to 1, which is what we would expect given the transfer function $f(x)=2x+1$ The $2x$ affects the variance, and the $+1$ shifts the mean The computed mean, represented by the dotted blue line, is nearly equal to the actual mean. If we used more points in our computation we could get arbitrarily close to the actual value.\n",
"\n",
"Now let's look at a nonlinear function and see how it affects the probability distribution."
]
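The claim in this cell is easy to check numerically: push normally distributed samples through $f(x)=2x+1$ and compare the input and output statistics. This is an illustrative sketch, not code from the notebook.

```python
import numpy as np

np.random.seed(2)
data = np.random.normal(loc=0., scale=1., size=50000)   # input: N(0, 1)
out = 2 * data + 1                                       # transfer function f(x) = 2x + 1

print(data.mean(), data.std())   # approximately 0 and 1
print(out.mean(), out.std())     # approximately 1 (shifted by +1) and 2 (scaled by 2x)
```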
@@ -729,7 +735,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"It is somewhat hard for me to look at probability distributions like this and really reason about what will happen in a real world filter. So let's think about tracking an aircraft with radar. the aircraft may have a covariance that looks like this:"
"It is somewhat hard for me to look at probability distributions like this and really reason about what will happen in a real world filter. So let's think about tracking an aircraft with radar. The aircraft may have a covariance that looks like this:"
]
},
{
@@ -796,7 +802,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see by inspection that the probable position of the aircraft is somewhere near x=11.4, y=2.7 because that is where the covariance ellipse and range measurement overlap. But the range measurement is nonlinear so we have to linearize it. We haven't covered this material yet, but the EKF will linearize at the last position of the aircraft - (10,2). At x=2 the range measurement has y=3, and so we linearize at that point."
"We can see by inspection that the probable position of the aircraft is somewhere near x=11.4, y=2.7 because that is where the covariance ellipse and range measurement overlap. But the range measurement is nonlinear so we have to linearize it. We haven't covered this material yet, but the EKF will linearize at the last position of the aircraft - (10,2). At x=10 the range measurement has y=3, and so we linearize at that point."
]
},
{
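The linearization this cell alludes to amounts to evaluating the partial derivatives of the range function at the prior estimate (10, 2). The sketch below assumes, purely for illustration, that the radar sits at the origin; the notebook's own example may place it elsewhere.

```python
import numpy as np

def range_to_target(x, y, radar=(0., 0.)):
    """Nonlinear range measurement from an assumed radar position."""
    dx, dy = x - radar[0], y - radar[1]
    return np.sqrt(dx**2 + dy**2)

def range_jacobian(x, y, radar=(0., 0.)):
    """Partial derivatives of the range with respect to x and y (the EKF's H)."""
    dx, dy = x - radar[0], y - radar[1]
    r = np.sqrt(dx**2 + dy**2)
    return np.array([[dx / r, dy / r]])

# linearize at the prior estimate (10, 2)
print(range_to_target(10., 2.))   # ~10.198
print(range_jacobian(10., 2.))    # ~[[0.981, 0.196]]
```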
@@ -889,7 +895,7 @@
"source": [
"You may be impatient to solve a specific problem, and wondering which filter to use. So let me quickly run through the options. The subsequent chapters are somewhat independent of each other, and you can fruitfully skip around, though I recommend reading linearly if you truly want to master all of the material. \n",
"\n",
"The workhorses of nonlinear filters are the *linearized Kalman filter* and *extended Kalman filter*. These two techniques were invented shortly after Kalman published his paper and they have been the main technique used since then. The flight software in airplanes, the GPS in your car or phone almost certainly use one of these techniques. \n",
"The workhorses of nonlinear filters are the *linearized Kalman filter* and *extended Kalman filter*. These two techniques were invented shortly after Kalman published his paper and they have been the main techniques used since then. The flight software in airplanes, the GPS in your car or phone almost certainly use one of these techniques. \n",
"\n",
"However, these techniques are extremely demanding. The EKF linearizes the differential equations at one point, which requires you to find a solution to a matrix of partial derivatives (a Jacobian). This can be difficult or impossible to do analytically. If impossible, you have to use numerical techniques to find the Jacobian, but this is expensive computationally and introduces error into the system. Finally, if the problem is quite nonlinear the linearization leads to a lot of error being introduced in each step, and the filters frequently diverge. You can not throw some equations into some arbitrary solver and expect to to get good results. It's a difficult field for professionals. I note that most Kalman filtering textbooks merely gloss over the extended Kalman filter (EKF) despite it being the most frequently used technique in real world applications. \n",
"\n",
@@ -924,7 +930,7 @@
"\n",
"Some will quibble with that advice. A lot of recent publications are devoted to a comparison of the EKF, UKF, and perhaps a few other choices for a given problem. Do you not need to perform a similar comparison for your problem? If you are sending a rocket to Mars, then of course you do. You will be balancing issues such as accuracy, round off errors, divergence, mathematical proof of correctness, and the computational effort required. I can't imagine not knowing the EKF intimately. \n",
"\n",
"On the other hand the UKF works spectacularly! I use it at work for real world applications. I mostly haven't even tried to implement an EKF for these applications because I can verify that the UKF is working fine. Is it possible that I might eke out another 0.2% of performance from the EKF in certain situations? Sure! Do I care? No! I completely understand the UKF implementation, it is easy to test and verify, I can pass the code to others and be confident that they can understand and modify it, and I am not a masochist that wants to battle difficult equations when I already have a working solution. If the UKF or particle filters starts to perform poorly for some problem then I will turn other techniques, but not before then. And realistically, the UKF usually provides substantially better performance than the EKF over a wide range of problems and conditions. If \"really good\" is good enough I'm going to spend my time working on other problems. \n",
"On the other hand the UKF works spectacularly! I use it at work for real world applications. I mostly haven't even tried to implement an EKF for these applications because I can verify that the UKF is working fine. Is it possible that I might eke out another 0.2% of performance from the EKF in certain situations? Sure! Do I care? No! I completely understand the UKF implementation, it is easy to test and verify, I can pass the code to others and be confident that they can understand and modify it, and I am not a masochist that wants to battle difficult equations when I already have a working solution. If the UKF or particle filters start to perform poorly for some problem then I will turn other techniques, but not before then. And realistically, the UKF usually provides substantially better performance than the EKF over a wide range of problems and conditions. If \"really good\" is good enough I'm going to spend my time working on other problems. \n",
"\n",
"I'm belaboring this point because in most textbooks the EKF is given center stage, and the UKF is either not mentioned at all or just given a 2 page gloss that leaves you completely unprepared to use the filter. This is not due to ignorance on the writer's part. The UKF is still relatively new, and it takes time to write new editions of books. At the time many books were written the UKF was either not discovered yet, or it was just an unproven but promising curiosity. But I am writing this now, the UKF has had enormous success, and it needs to be in your toolkit. That is what I will spend most of my effort trying to teach you. "
]
@@ -946,7 +952,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,

@@ -653,7 +653,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"As I already said there are several published algorithms for choosing the sigma points. I will present the method first chosen by the inventors of the UKF. This algorithm is performed for you automatically by the FilterPy UKF code, but it valuable to understand how they are computed.\n",
"As I already said there are several published algorithms for choosing the sigma points. I will present the method first chosen by the inventors of the UKF. This algorithm is performed for you automatically by the FilterPy UKF code, but it is valuable to understand how they are computed.\n",
"\n",
"Our first sigma point is always going to be the mean of our input. We will call this $\mathcal{X}_0$. So,\n",
"\n",
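FilterPy also exposes the sigma point computation on its own, so the points can be inspected directly. The sketch below is illustrative; it assumes a reasonably recent FilterPy release and uses the Julier formulation, since that is the original one, with made-up mean and covariance values.

```python
import numpy as np
from filterpy.kalman import JulierSigmaPoints   # assumes a recent FilterPy

# a 2-D state with some correlation between the two variables (made-up numbers)
mean = np.array([0., 0.])
cov = np.array([[32., 15.],
                [15., 40.]])

points = JulierSigmaPoints(n=2, kappa=1.)   # kappa is a tuning parameter
sigmas = points.sigma_points(mean, cov)
print(sigmas)   # 2n+1 = 5 points: the mean plus points spread along the covariance
```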
@@ -875,11 +875,11 @@
"$$\mathbf{H} = \begin{bmatrix}1&0&0&0 \\ 0&0&1&0\n",
"\end{bmatrix}$$\n",
"\n",
"Let's say our our sensor gives positions in meters with an error of $1\sigma=0.3$ meters in both the *x* and *y* coordinates. This gives us a measurement noise matrix of \n",
"Let's say our sensor gives positions in meters with an error of $1\sigma=0.3$ meters in both the *x* and *y* coordinates. This gives us a measurement noise matrix of \n",
"\n",
"$$\mathbf{R} = \begin{bmatrix}0.3^2 &0\\0 & 0.3^2\end{bmatrix}$$\n",
"\n",
"Finally, let's assume that the process noise is discrete white noise model - that is, that over each time period the acceleration is constant. We can use `FilterPy`'s `Q_discrete_white_noise()` method to create this matrix for us, but for review the matrix is\n",
"Finally, let's assume that the process noise can be represented by the discrete white noise model - that is, that over each time period the acceleration is constant. We can use `FilterPy`'s `Q_discrete_white_noise()` method to create this matrix for us, but for review the matrix is\n",
"\n",
"$$\mathbf{Q} = \begin{bmatrix}\n",
"\frac{1}{4}\Delta t^4 & \frac{1}{2}\Delta t^3 \\\n",
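For concreteness, the noise matrices described above can be constructed like this. The sketch is illustrative; dt and the process noise variance are made-up values, while the 0.3 m measurement error comes from the text.

```python
import numpy as np
from filterpy.common import Q_discrete_white_noise

dt = 1.0          # assumed time step
sigma_z = 0.3     # 1-sigma measurement error in meters, as stated in the text

# measurement noise: independent x and y position measurements
R = np.diag([sigma_z**2, sigma_z**2])

# discrete white noise model for one (position, velocity) pair; var is made up
Q = Q_discrete_white_noise(dim=2, dt=dt, var=0.02)

print(R)
print(Q)
```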
@@ -1080,7 +1080,7 @@
"source": [
"Let's tackle our first nonlinear problem. To minimize the number of things that change I will keep the problem formulation very similar to the linear tracking problem above. For this problem we will write a filter to track a flying airplane using a stationary radar as the sensor. To keep the problem as close to the previous one as possible we will track in two dimensions, not three. We will track one dimension on the ground and the altitude of the aircraft. The second dimension on the ground adds no difficulty or different information, so we can do this with no loss of generality.\n",
"\n",
" Radars work by emitting a beam of radio waves and scanning for a return bounce. Anything in the beam's path will reflects some of the signal back to the radar. By timing how long it takes for the reflected signal to get back to the radar the system can compute the *slant distance* - the straight line distance from the radar installation to the object. We also get the bearing to the target. For this 2D problem that will be the angle above the ground plane. True integration of sensors for applications like military radar and aircraft navigation systems have to take many factors into account which I am not currently interested in trying to cover in this book. So if any practitioners in the the field are reading this they will be rightfully scoffing at my exposition. My only defense is to argue that I am not trying to teach you how to design a military grade radar tracking system, but instead trying to familiarize you with the implementation of the UKF. \n",
" Radars work by emitting a beam of radio waves and scanning for a return bounce. Anything in the beam's path will reflect some of the signal back to the radar. By timing how long it takes for the reflected signal to get back to the radar the system can compute the *slant distance* - the straight line distance from the radar installation to the object. We also get the bearing to the target. For this 2D problem that will be the angle above the ground plane. True integration of sensors for applications like military radar and aircraft navigation systems have to take many factors into account which I am not currently interested in trying to cover in this book. So if any practitioners in the the field are reading this they will be rightfully scoffing at my exposition. My only defense is to argue that I am not trying to teach you how to design a military grade radar tracking system, but instead trying to familiarize you with the implementation of the UKF. \n",
" \n",
"For this example we want to take the slant range measurement from the radar and compute the horizontal position (distance of aircraft from the radar measured over the ground) and altitude of the aircraft, as in the diagram below."
]
@@ -1157,7 +1157,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The next step is to design the measurement function. As in the linear Kalman filter the measurement function converts the filter's state into a measurement. So for this problem we need to position and velocity of the aircraft and convert it to the bearing and range from the radar station.\n",
"The next step is to design the measurement function. As in the linear Kalman filter the measurement function converts the filter's state into a measurement. So for this problem we need the position and velocity of the aircraft and convert it to the bearing and range from the radar station.\n",
"\n",
"If we represent the position of the radar station with an (x,y) coordinate computation of the range and bearing is straightforward. To compute the range we use the Pythagorean theorem:\n",
"\n",
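A measurement function of the kind described here might look like the sketch below. The state layout and the radar location are assumptions for illustration; the chapter's own code may order the state differently.

```python
import math

RADAR_POS = (0., 0.)   # assumed location of the radar on the ground

def hx(state):
    """Convert a state [distance, velocity, altitude] into a
    (slant_range, elevation_angle) measurement from the assumed radar."""
    dx = state[0] - RADAR_POS[0]
    dy = state[2] - RADAR_POS[1]
    slant_range = math.sqrt(dx**2 + dy**2)   # Pythagorean theorem
    elevation = math.atan2(dy, dx)           # angle above the ground plane
    return slant_range, elevation

print(hx([1000., 100., 500.]))   # approximately (1118.03, 0.4636)
```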
@@ -3434,7 +3434,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,