fix some minor typing errors

Kloppenburg Ernst (CR/PJ-AI-R4) 2018-01-25 08:12:21 +01:00
parent 0a9fd12682
commit 9e45bb8174


@@ -444,7 +444,7 @@
"\n",
"The fewest number of points that we can use is one per dimension. This is the number that the linear Kalman filter uses. The input to a Kalman filter for the distribution $\\mathcal{N}(\\mu,\\sigma^2)$ is $\\mu$ itself. So while this works for the linear case, it is not a good answer for the nonlinear case.\n",
"\n",
"Perhaps we can use one point per dimension, but altered somehow. However, if we were to pass some value $\\mu+\\Delta$ into the identity function $f(x)=x$ it would not converge, so this will not work. If we didn't alter $\\mu$ then this would the standard Kalman filter. We must conclude that one sample will not work.\n",
"Perhaps we can use one point per dimension, but altered somehow. However, if we were to pass some value $\\mu+\\Delta$ into the identity function $f(x)=x$ it would not converge, so this will not work. If we didn't alter $\\mu$ then this would be the standard Kalman filter. We must conclude that one sample will not work.\n",
"\n",
"What is the next lowest number we can choose? Two. Consider the fact that Gaussians are symmetric, and that we probably want to always have one of our sample points be the mean of the input for the identity function to work. Two points would require us to select the mean, and then one other point. That one other point would introduce an asymmetry in our input that we probably don't want. It would be very difficult to make this work for the identity function $f(x)=x$.\n",
"\n",
@@ -874,7 +874,7 @@
"source": [
"## Using the UKF\n",
"\n",
"Let's solve some problems so you can gain confidence in how easy the UKF is to use. We will start with a linear problem you already know how to solve with the linear Kalman filter. Although the UKF was designed for nonlinear problems, it finds the same optimal result as the linear Kalman filter for linear problems. We will write a filter to track an object in 2D using a constant velocity model. This will allows us to focus on what is the same (and most is the same!) and what is different with the UKF. \n",
"Let's solve some problems so you can gain confidence in how easy the UKF is to use. We will start with a linear problem you already know how to solve with the linear Kalman filter. Although the UKF was designed for nonlinear problems, it finds the same optimal result as the linear Kalman filter for linear problems. We will write a filter to track an object in 2D using a constant velocity model. This will allow us to focus on what is the same (and most is the same!) and what is different with the UKF. \n",
"\n",
"Designing a Kalman filter requires you to specify the $\\bf{x}$, $\\bf{F}$, $\\bf{H}$, $\\bf{R}$, and $\\bf{Q}$ matrices. We have done this many times so I will give you the matrices without a lot of discussion. We want a constant velocity model, so we define $\\bf{x}$ to be\n",
"\n",
@@ -1083,7 +1083,7 @@
"source": [
"## Tracking an Airplane\n",
"\n",
"Let's tackle our first nonlinear problem. We will write a filter to track an airplane using radar as the sensor. To keep the problem similar to the previous one as possible we will track in two dimensions. We will track one dimension on the ground and the altitude of the aircraft. Each dimension is independent so we can do this with no loss of generality.\n",
"Let's tackle our first nonlinear problem. We will write a filter to track an airplane using radar as the sensor. To keep the problem as similar to the previous one as possible we will track in two dimensions. We will track one dimension on the ground and the altitude of the aircraft. Each dimension is independent so we can do this with no loss of generality.\n",
"\n",
"Radars work by emitting radio waves or microwaves. Anything in the beam's path will reflect some of the signal back to the radar. By timing how long it takes for the reflected signal to return it can compute the *slant distance* to the target. Slant distance is the straight line distance from the radar to the object. Bearing is computed using the *directive gain* of the antenna.\n",
"\n",
@@ -1422,7 +1422,7 @@
"source": [
"The filter is unable to track the changing altitude. What do we have to change in our design?\n",
"\n",
"I hope you answered add climb rate to the state, like so:\n",
"I hope you answered \"add climb rate to the state\", like so:\n",
"\n",
"\n",
"$$\\mathbf x = \\begin{bmatrix}\\mathtt{distance} \\\\\\mathtt{velocity}\\\\ \\mathtt{altitude} \\\\ \\mathtt{climb\\, rate}\\end{bmatrix}= \\begin{bmatrix}x \\\\\\dot x\\\\ y \\\\ \\dot y\\end{bmatrix}$$\n",
@@ -1694,7 +1694,7 @@
"\n",
"The last sensor fusion problem was a toy example. Let's tackle a problem that is not so toy-like. Before GPS ships and aircraft navigated via various range and bearing systems such as VOR, LORAN, TACAN, DME, and so on. These systems emit beacons in the form of radio waves. The sensor extracts the range and/or bearing to the beacon from the signal. For example, an aircraft might have two VOR receivers. The pilot tunes each receiver to a different VOR station. Each VOR receiver displays the *radial* - the direction from the VOR station on the ground to the aircraft. The pilot uses a chart to find the intersection point of the radials, which identifies the location of the aircraft.\n",
"\n",
"That is a manual approach with low accuracy. A Kalman filter will produce far more accurate position estimates. Assume we have two sensors, each which provides a bearing only measurement to the target, as in the chart below. The width of the perimeters are proportional to the $3\\sigma$ of the sensor noise. The aircraft must be positioned somewhere within the intersection of the two perimeters with a high degee of probability."
"That is a manual approach with low accuracy. A Kalman filter will produce far more accurate position estimates. Assume we have two sensors, each of which provides a bearing only measurement to the target, as in the chart below. The width of the perimeters are proportional to the $3\\sigma$ of the sensor noise. The aircraft must be positioned somewhere within the intersection of the two perimeters with a high degree of probability."
]
},
{
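As a rough illustration of what a bearing-only measurement function for two fixed stations might look like; the station positions and the [x, vx, y, vy] state ordering are assumptions made up for this sketch:

```python
import numpy as np

stations = [(-5.0, 0.0), (5.0, 0.0)]       # hypothetical station locations (km)

def hx_bearings(x):
    # each station reports only the bearing from itself to the aircraft
    px, py = x[0], x[2]
    return np.array([np.arctan2(py - sy, px - sx) for sx, sy in stations])

print(hx_bearings(np.array([0., 0., 10., 0.])))   # two bearings in radians
```

In a working filter the bearing residuals also have to be normalized into the range of plus or minus pi, otherwise measurements near the angle wrap-around produce enormous spurious errors.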
@@ -1925,7 +1925,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The geometry of the sensors relative to the tracked object imposes a physical limitation that can be extremely difficult to deal with when designing filters. If the radials of the VOR stations are nearly parallel to each other than a very small angular error translates into a very large distance error. What is worse, this behavior is nonlinear - the error in the *x-axis* vs the *y-axis* will vary depending on the actual bearing. These scatter plots show the error distribution for a 1°$\\sigma$ error for two different bearings."
"The geometry of the sensors relative to the tracked object imposes a physical limitation that can be extremely difficult to deal with when designing filters. If the radials of the VOR stations are nearly parallel to each other then a very small angular error translates into a very large distance error. What is worse, this behavior is nonlinear - the error in the *x-axis* vs the *y-axis* will vary depending on the actual bearing. These scatter plots show the error distribution for a 1°$\\sigma$ error for two different bearings."
]
},
{
@@ -2010,7 +2010,7 @@
"\n",
"This graph makes it look easy because we have plotted 100 measurements for each position update. The movement of the aircraft is obvious. In contrast, the Kalman filter only gets one measurement per update. Therefore the filter will not be able to generate as good a fit as the dotted green line implies. \n",
"\n",
"Now consider that the bearing gives us no distance information. Suppose we set the initial estimate it 1,000 kilometers away from the sensor (vs the actual distance of 7.07 km) and make $\\mathbf P$ very small. At that distance a 1° error translates into a positional error of 17.5 km. The KF would never be able to converge onto the actual target position because the filter is incorrectly very certain about its position estimates and because there is no distance information provided in the measurements."
"Now consider that the bearing gives us no distance information. Suppose we set the initial estimate to 1,000 kilometers away from the sensor (vs the actual distance of 7.07 km) and make $\\mathbf P$ very small. At that distance a 1° error translates into a positional error of 17.5 km. The KF would never be able to converge onto the actual target position because the filter is incorrectly very certain about its position estimates and because there is no distance information provided in the measurements."
]
},
{
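The 17.5 km figure follows directly from the geometry; a one-line check:

```python
import numpy as np
print(1000 * np.tan(np.radians(1.0)))   # ~17.46 km of position error at 1,000 km
```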
@@ -2568,7 +2568,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"From these charts we can see that the improvement in the position is small, but the improvement in the velocity is good, and spectacular for the altitude. The difference in the position are very small, so I printed the difference between the UKF and the smoothed results for the last 5 points. I recommend always usng the RTS smoother if you can post-process your data."
"From these charts we can see that the improvement in the position is small, but the improvement in the velocity is good, and spectacular for the altitude. The difference in the position are very small, so I printed the difference between the UKF and the smoothed results for the last 5 points. I recommend always using the RTS smoother if you can post-process your data."
]
},
{
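A minimal sketch of that post-processing step with FilterPy, assuming ukf is the filter built earlier in the chapter and zs holds the recorded measurements:

```python
# run the filter over the saved measurements, then smooth backwards in time
xs, covs = ukf.batch_filter(zs)
Ms, Ps, Ks = ukf.rts_smoother(xs, covs)
# Ms holds the smoothed state estimates; compare them against the filtered xs
```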
@@ -2610,7 +2610,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"So what is going on here? We can see that for a mean of 0 the algorithm choose sigma points of 0, 3, and -3, but why? Recall the equation for computing the sigma points:\n",
"So what is going on here? We can see that for a mean of 0 the algorithm chooses sigma points of 0, 3, and -3, but why? Recall the equation for computing the sigma points:\n",
"\n",
"$$\\begin{aligned}\n",
"\\mathcal{X}_0 &= \\mu\\\\\n",
@@ -2655,7 +2655,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see that the sigma point spread over 100 standard deviations. If our data was Gaussian we'd be incorporating data many standard deviations away from the mean; for nonlinear problems this is unlikely to produce good results. But suppose our distribution was not Gaussian, but instead had very fat tails? We might need to sample from those tails to get a good estimate, and hence it would make sense to make $\\kappa$ larger (not 200, which was absurdly large to make the change in the sigma points stark). \n",
"We can see that the sigma points spread over 100 standard deviations. If our data was Gaussian we'd be incorporating data many standard deviations away from the mean; for nonlinear problems this is unlikely to produce good results. But suppose our distribution was not Gaussian, but instead had very fat tails? We might need to sample from those tails to get a good estimate, and hence it would make sense to make $\\kappa$ larger (not 200, which was absurdly large to make the change in the sigma points stark). \n",
"\n",
"With a similar line of reasoning, suppose that our distribution has nearly no tails - the probability distribution looks more like an inverted parabola. In such a case we'd probably want to pull the sigma points in closer to the mean to avoid sampling in regions where there will never be real data.\n",
"\n",