correction: lets to let's.

grammar is our friend.
Roger Labbe 2016-09-25 12:36:14 -07:00
parent ae45efc403
commit 4b93469205
13 changed files with 28 additions and 33 deletions

View File

@ -4782,7 +4782,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"That is pretty good for an initial guess. Lets make $g$ larger to see the effect."
"That is pretty good for an initial guess. Let's make $g$ larger to see the effect."
]
},
{
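To make the effect concrete, here is a minimal sketch of a single g-h update step (my own illustration with invented numbers, not the notebook's code). A larger $g$ gives the residual more weight, so the estimate follows the measurements more closely:

```python
def g_h_step(x_est, dx, z, g, h, dt=1.0):
    # predict forward one time step
    x_pred = x_est + dx * dt
    residual = z - x_pred
    # h scales how much of the residual corrects the rate of change;
    # g scales how much of it corrects the position estimate itself
    dx = dx + h * residual / dt
    x_est = x_pred + g * residual
    return x_est, dx

# with g=0.6 the estimate moves 60% of the way toward the measurement
print(g_h_step(x_est=160.0, dx=1.0, z=164.0, g=0.6, h=0.1))  # (162.8, 1.3)
```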
@ -5142,7 +5142,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python [default]",
"language": "python",
"name": "python3"
},

View File

@ -504,7 +504,7 @@
"\n",
"The answer, as for the problem above, is with probabilities. We are already comfortable assigning a probabilistic belief to the location of the dog; now we have to incorporate the additional uncertainty caused by the sensor noise. \n",
"\n",
"Lets say we get a reading of **door**, and suppose that testing shows that the sensor is 3 times more likely to be right than wrong. We should scale the probability distribution by 3 where there is a door. If we do that the result will no longer be a probability distribution, but we will learn how to correct that in a moment.\n",
"Say we get a reading of **door**, and suppose that testing shows that the sensor is 3 times more likely to be right than wrong. We should scale the probability distribution by 3 where there is a door. If we do that the result will no longer be a probability distribution, but we will learn how to correct that in a moment.\n",
"\n",
"Let's look at that in Python code. Here I use the variable `z` to denote the measurement. `z` or `y` are customary choices in the literature for the measurement."
]
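As a sketch of the scale-then-normalize idea (the door layout, uniform prior, and variable names here are assumptions for illustration):

```python
import numpy as np

hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])  # 1 marks a door
belief = np.full(10, 0.1)        # uniform prior over ten positions

z = 1                            # the sensor reads 'door'
likelihood = np.where(hallway == z, 3.0, 1.0)  # 3x more likely to be right

posterior = belief * likelihood  # scaled values; no longer sums to 1
posterior /= posterior.sum()     # normalize to restore a distribution
print(posterior)
```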
@ -779,7 +779,7 @@
"\n",
"Recall how quickly we were able to find an exact solution when we incorporated a series of measurements and movement updates. However, that occurred in a fictional world of perfect sensors. Might we be able to find an exact solution with noisy sensors?\n",
"\n",
"Unfortunately, the answer is no. Even if the sensor readings perfectly match an extremely complicated hallway map, we cannot be 100% certain that the dog is in a specific position - there is, after all, a tiny possibility that every sensor reading was wrong! Naturally, in a more typical situation most sensor readings will be correct, and we might be close to 100% sure of our answer, but never 100% sure. This may seem complicated, but lets go ahead and program the math.\n",
"Unfortunately, the answer is no. Even if the sensor readings perfectly match an extremely complicated hallway map, we cannot be 100% certain that the dog is in a specific position - there is, after all, a tiny possibility that every sensor reading was wrong! Naturally, in a more typical situation most sensor readings will be correct, and we might be close to 100% sure of our answer, but never 100% sure. This may seem complicated, but let's go ahead and program the math.\n",
"\n",
"First let's deal with the simple case - assume the movement sensor is perfect, and it reports that the dog has moved one space to the right. How would we alter our `belief` array?\n",
"\n",
@ -1652,7 +1652,7 @@
"\n",
"The prediction is our new *prior*. Time has moved forward and we made a prediction without benefit of knowing the measurements. \n",
"\n",
"Lets work an example. The current position of the dog is 17 m. Our epoch is 2 seconds long, and the dog is traveling at 15 m/s. Where do we predict he will be in two seconds? \n",
"Let's work an example. The current position of the dog is 17 m. Our epoch is 2 seconds long, and the dog is traveling at 15 m/s. Where do we predict he will be in two seconds? \n",
"\n",
"Clearly,\n",
"\n",
@ -2835,7 +2835,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice the tall bar at position 1. This corresponds with the (correct) case of starting at position 0, sensing a door, shifting 1 to the right, and sensing another door. No other positions make this set of observations as likely. Now lets add an update and then sense the wall."
"Notice the tall bar at position 1. This corresponds with the (correct) case of starting at position 0, sensing a door, shifting 1 to the right, and sensing another door. No other positions make this set of observations as likely. Now we will add an update and then sense the wall."
]
},
{
@ -5702,7 +5702,7 @@
"source": [
"There was a sensing error at time 1, but we are still quite confident in our position. \n",
"\n",
"Now lets run a very long simulation and see how the filter responds to errors."
"Now let's run a very long simulation and see how the filter responds to errors."
]
},
{

View File

@ -1357,7 +1357,7 @@
"$$P(x \\mid z) \\propto \\exp \\Big[-\\frac{1}{2}\\frac{x^2-2x(\\frac{\\sigma_z^2\\bar\\mu + \\bar\\sigma^2z}{\\bar\\sigma^2+\\sigma_z^2})}{\\frac{\\sigma_z^2\\bar\\sigma^2}{\\bar\\sigma^2+\\sigma_z^2}}\\Big ]\n",
"$$\n",
"\n",
"Proportionality lets us create or delete constants at will, so we can factor this into\n",
"Proportionality allows us create or delete constants at will, so we can factor this into\n",
"\n",
"$$P(x \\mid z) \\propto \\exp \\Big[-\\frac{1}{2}\\frac{(x-\\frac{\\sigma_z^2\\bar\\mu + \\bar\\sigma^2z}{\\bar\\sigma^2+\\sigma_z^2})^2}{\\frac{\\sigma_z^2\\bar\\sigma^2}{\\bar\\sigma^2+\\sigma_z^2}}\\Big ]\n",
"$$\n",

View File

@ -1045,7 +1045,7 @@
"source": [
"## Kalman Gain\n",
"\n",
"We see that the filter works. Now lets go back to the math to understand what is happening. The posterior $x$ is computed as the likelihood times the prior ($\\mathcal L\\cdot \\bar x$), where both are Gaussians.\n",
"We see that the filter works. Now let's go back to the math to understand what is happening. The posterior $x$ is computed as the likelihood times the prior ($\\mathcal L\\cdot \\bar x$), where both are Gaussians.\n",
"\n",
"Therefore the mean of the posterior is given by:\n",
"\n",
@ -1823,7 +1823,7 @@
"## Example: Bad Initial Estimate\n",
"\n",
"\n",
"Now let's lets look at the results when we make a bad initial estimate of position. To avoid obscuring the results I'll reduce the sensor variance to 30, but set the initial position to 1000 meters. Can the filter recover from a 1000 meter error?"
"Now let's look at the results when we make a bad initial estimate of position. To avoid obscuring the results I'll reduce the sensor variance to 30, but set the initial position to 1000 meters. Can the filter recover from a 1000 meter error?"
]
},
{
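A minimal sketch of the experiment (not the chapter's plotting code; the trajectory and noise values are invented). A large initial variance makes the gain nearly 1, so the first few measurements pull the estimate almost all the way back:

```python
import numpy as np

np.random.seed(3)
x, P = 1000.0, 500.0     # badly wrong initial estimate, large variance
R, Q = 30.0, 0.1         # sensor variance of 30, small process noise
pos, vel, dt = 0.0, 1.0, 1.0

for step in range(10):
    pos += vel * dt
    z = pos + np.random.randn() * np.sqrt(R)
    x, P = x + vel * dt, P + Q               # predict
    K = P / (P + R)                          # update
    x, P = x + K * (z - x), (1 - K) * P
    print(f'{step}: estimate {x:8.1f}   true {pos:5.1f}')
```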

View File

@ -1,14 +1,5 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
},
{
"cell_type": "raw",
"metadata": {},
@ -1647,7 +1638,7 @@
"\n",
"The posterior retained the same shape and position as the radar measurement, but is smaller. We've seen this with one dimensional Gaussians. Multiplying two Gaussians makes the variance smaller because we are incorporating more information, hence we are less uncertain. Another point to recognize is that the covariance shape reflects the physical layout of the aircraft and the radar system. The importance of this will become clear in the next step.\n",
"\n",
"Now lets say we get a measurement from a second radar, this one to the lower right. The posterior from the last step becomes our new prior, which I plot in yellow. The new measurement is plotted in green."
"Now let's say we get a measurement from a second radar, this one to the lower right. The posterior from the last step becomes our new prior, which I plot in yellow. The new measurement is plotted in green."
]
},
{
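A sketch of the underlying computation with invented means and covariances; the fused covariance $(\mathbf P_1^{-1} + \mathbf P_2^{-1})^{-1}$ is smaller than either input, which is the shrinkage described above:

```python
import numpy as np

m1 = np.array([10.0, 5.0]);  P1 = np.array([[4.0, 1.5], [1.5, 3.0]])
m2 = np.array([11.0, 4.0]);  P2 = np.array([[2.0, -0.8], [-0.8, 5.0]])

# product of two Gaussians: information (inverse covariance) adds
P = np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2))
m = P @ (np.linalg.inv(P1) @ m1 + np.linalg.inv(P2) @ m2)
print(m)
print(P)   # diagonal entries are smaller than in either P1 or P2
```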

View File

@ -1444,7 +1444,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This is the complete code for the filter, and most of it is boilerplate. I've made it flexible enough to support several uses in this chapter, so it is a bit verbose. Lets work through it line by line. \n",
"This is the complete code for the filter, and most of it is boilerplate. I've made it flexible enough to support several uses in this chapter, so it is a bit verbose. Let's work through it line by line. \n",
"\n",
"The first lines checks to see if you provided it with measurement data in `data`. If not, it creates the data using the `compute_dog_data` function we wrote earlier.\n",
"\n",
@ -2539,7 +2539,7 @@
"source": [
"In this case the Kalman filter is very uncertain about the initial state, so it converges onto the signal much faster. It is producing good output after only 5 to 6 epochs. With the theory we have developed so far this is about as good as we can do. However, this scenario is a bit artificial; if we do not know where the object is when we start tracking we do not initialize the filter to some arbitrary value, such as 0 m or 100 m. I address this in the **Filter Initialization** section below.\n",
"\n",
"Lets do another Kalman filter for our dog, and this time plot the covariance ellipses on the same plot as the position."
"Let's do another Kalman filter for our dog, and this time plot the covariance ellipses on the same plot as the position."
]
},
{

View File

@ -963,7 +963,7 @@
"\n",
"where $\\Gamma$ is the *noise gain* of the system, and $w$ is the constant piecewise acceleration (or velocity, or jerk, etc). \n",
"\n",
"Lets start by looking at a first order system. In this case we have the state transition function\n",
"Let's start by looking at a first order system. In this case we have the state transition function\n",
"\n",
"$$\\mathbf{F} = \\begin{bmatrix}1&\\Delta t \\\\ 0& 1\\end{bmatrix}$$\n",
"\n",
@ -1406,7 +1406,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This looks correct. So now lets plot the result of a much smaller step size."
"This looks correct. So now let's plot the result of a much smaller step size."
]
},
{

View File

@ -905,7 +905,7 @@
"\n",
"When we design the state variables and process model we must choose the order of the system we want to model. Let's say we are tracking something with a constant velocity. No real world process is perfect, and so there will be slight variations in the velocity over short time period. You might reason that the best approach is to use a second order filter, allowing the acceleration term to deal with the slight variations in velocity. \n",
"\n",
"In practice that doesn't work well. To thoroughly understand this issue lets see the effects of using a process model that does not match the order of the system being filtered. "
"In practice that doesn't work well. To thoroughly understand this issue let's see the effects of using a process model that does not match the order of the system being filtered. "
]
},
{
@ -1936,7 +1936,7 @@
"\n",
"$$\\mathbf{\\epsilon} = \\tilde{\\mathbf x}^\\mathsf T\\mathbf P^{-1}\\tilde{\\mathbf x}$$\n",
"\n",
"To understand this equation lets look at it if the state's dimension is one. In that case both x and P are scalars, so\n",
"To understand this equation let's look at it if the state's dimension is one. In that case both x and P are scalars, so\n",
"\n",
"$$\\epsilon = \\frac{x^2}{P}$$\n",
"\n",
@ -3843,6 +3843,7 @@
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python [default]",
"language": "python",

View File

@ -428,7 +428,7 @@
"source": [
"## The Effect of Nonlinear Functions on Gaussians\n",
"\n",
"Gaussians are not closed under an arbitrary nonlinear function. Recall the equations of the Kalman filter - at each evolution we pass the Gaussian representing the state through the process function to get the Gaussian at time $k$. Our process function was always linear, so the output was always another Gaussian. Let's look at that on a graph. I will take an arbitrary Gaussian and pass it through the function $f(x) = 2x + 1$ and plot the result. We know how to do this analytically, but lets use sampling. I will generate 500,000 points with a normal distribution, pass them through $f(x)$, and plot the results. I do it this way because the next example will be nonlinear, and we will have no way to compute this analytically."
"Gaussians are not closed under an arbitrary nonlinear function. Recall the equations of the Kalman filter - at each evolution we pass the Gaussian representing the state through the process function to get the Gaussian at time $k$. Our process function was always linear, so the output was always another Gaussian. Let's look at that on a graph. I will take an arbitrary Gaussian and pass it through the function $f(x) = 2x + 1$ and plot the result. We know how to do this analytically, but let's use sampling. I will generate 500,000 points with a normal distribution, pass them through $f(x)$, and plot the results. I do it this way because the next example will be nonlinear, and we will have no way to compute this analytically."
]
},
{
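A condensed sketch of that experiment, minus the plotting:

```python
import numpy as np

# pass 500,000 normally distributed samples through the linear f(x) = 2x + 1
data = np.random.normal(loc=0.0, scale=1.0, size=500000)
fx = 2 * data + 1

# a linear function maps a Gaussian to a Gaussian: mean -> 2*0 + 1, std -> 2*1
print(fx.mean(), fx.std())   # approximately 1.0 and 2.0
```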
@ -927,6 +927,7 @@
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python [default]",
"language": "python",

View File

@ -2745,7 +2745,7 @@
"\n",
"We can see that as $\\lambda$ gets larger the fraction for the weight of the mean ($\\lambda/(n+\\lambda)$) approaches 1, and the fraction for the weights of the rest of the sigma points approaches 0. This is invariant on the size of your covariance. So as we sample further and further away from the mean we end up giving less weight to those samples, and if we sampled very close to the mean we'd give very similar weights to all.\n",
"\n",
"However, the advice that Van der Merwe gives is to constrain $\\alpha$ in the range $0 \\gt \\alpha \\ge 1$. He suggests $10^{-3}$ as a good value. Lets try that."
"However, the advice that Van der Merwe gives is to constrain $\\alpha$ in the range $0 \\gt \\alpha \\ge 1$. He suggests $10^{-3}$ as a good value. Let's try that."
]
},
{
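A sketch of the weight computation (the formulas are Van der Merwe's standard scaled weights; the helper name is mine):

```python
import numpy as np

def merwe_weights(n, alpha, beta=2.0, kappa=0.0):
    lambda_ = alpha**2 * (n + kappa) - n
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lambda_)))  # mean weights
    Wc = Wm.copy()                                      # covariance weights
    Wm[0] = lambda_ / (n + lambda_)
    Wc[0] = lambda_ / (n + lambda_) + 1 - alpha**2 + beta
    return Wm, Wc

# with alpha = 1e-3 and n = 2, lambda is nearly -n, so n + lambda is tiny
# and the weights become very large in magnitude
print(merwe_weights(n=2, alpha=1e-3)[0])
```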
@ -3449,6 +3449,7 @@
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python [default]",
"language": "python",

View File

@ -644,7 +644,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now lets write a simulation for our radar."
"Now let's write a simulation for our radar."
]
},
{
@ -1600,7 +1600,7 @@
"\n",
"I used the same initial conditions and landmark locations in the UKF chapter. The UKF achieves much better accuracy in terms of the error ellipse. Both perform roughly as well as far as their estimate for $\\mathbf x$ is concerned. \n",
"\n",
"Now lets add another landmark."
"Now let's add another landmark."
]
},
{
@ -1867,6 +1867,7 @@
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python [default]",
"language": "python",

View File

@ -1187,7 +1187,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see from the charts that the filter output for the position is very similar regardless of weather we use 2 standard deviations or three. But the computation of the velocity is a different matter. Let's explore this further. First, lets make the standard deviation very small."
"We can see from the charts that the filter output for the position is very similar regardless of weather we use 2 standard deviations or three. But the computation of the velocity is a different matter. Let's explore this further. First, let's make the standard deviation very small."
]
},
{
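A minimal sketch of a standard-deviation gate in scalar form (my illustration, with $H = 1$ and invented numbers), which is the kind of cutoff the comparison above is varying:

```python
import numpy as np

def gated_update(x, P, z, R, n_std=3.0):
    S = P + R                           # expected residual variance
    residual = z - x
    if abs(residual) > n_std * np.sqrt(S):
        return x, P                     # treat z as an outlier: discard it
    K = P / S
    return x + K * residual, (1 - K) * P

print(gated_update(x=10.0, P=4.0, z=50.0, R=1.0))  # gated out: unchanged
print(gated_update(x=10.0, P=4.0, z=11.0, R=1.0))  # accepted: (10.8, 0.8)
```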
@ -2269,8 +2269,9 @@
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python [default]",
"language": "python",
"name": "python3"
},

View File

@ -1,6 +1,5 @@
ipython merge_book.py
jupyter nbconvert --to latex --template book book.ipynb
jupyter nbconvert --template book book.ipynb
ipython to_pdf.py
move /Y book.pdf ../Kalman_and_Bayesian_Filters_in_Python.pdf