Formatting LaTeX for GitHub.

It seems to want a blank line prior to the start of a $$ block.
This commit is contained in:
Roger Labbe 2015-05-09 13:53:02 -07:00
parent d48d466831
commit ef0d770c08


@@ -284,15 +284,15 @@
"\n",
"In this chapter we are dealing with a simpler form that we can discuss in terms of Newton's equations of motion: given a constant velocity v we can compute distance exactly with:\n",
"\n",
"$$ x = vt + x_0$$\n",
"$$x = vt + x_0$$\n",
"\n",
"If we instead assume constant acceleration we get\n",
"\n",
"$$ x = \\frac{1}{2}at^2 + v_0 t + x_0$$\n",
"$$x = \\frac{1}{2}at^2 + v_0 t + x_0$$\n",
"\n",
"And if we assume constant jerk we get\n",
"\n",
"$$ x = \\frac{1}{6}jt^3 + \\frac{1}{2}a_0 t^2 + v_0 t + x_0$$\n",
"$$x = \\frac{1}{6}jt^3 + \\frac{1}{2}a_0 t^2 + v_0 t + x_0$$\n",
"\n",
"As a reminder, we can generate these equations using basic calculus. Given a constant velocity v we can compute the distance traveled over time with the equation\n",
"\n",
@@ -1422,7 +1422,9 @@
"metadata": {},
"source": [
"The radar is to the left of the aircraft, so I can use a covariance of \n",
"\n",
"$$\\Sigma = \\begin{bmatrix}2&1.9\\\\1.9&2\\end{bmatrix}$$\n",
"\n",
"to model the measurement. In the next graph I plot the original estimate in a very light yellow, the radar measurement in blue, and the new estimate based on multiplying the two Gaussians together in yellow."
]
},
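For readers following along, the measurement fusion described in this cell can be sketched with plain NumPy. Only the radar covariance comes from the text; the prior mean and covariance and the measured position are placeholder values, and `multivariate_multiply` is a local helper defined here, not a library call.

```python
import numpy as np

def multivariate_multiply(m1, c1, m2, c2):
    """Return mean and covariance of the product of N(m1, c1) and N(m2, c2)."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    sum_inv = np.linalg.inv(c1 + c2)
    cov = c1 @ sum_inv @ c2                        # (C1^-1 + C2^-1)^-1
    mean = c2 @ sum_inv @ m1 + c1 @ sum_inv @ m2   # weighted combination of the means
    return mean, cov

prior_mean = np.array([10., 10.])                  # assumed prior estimate
prior_cov  = np.array([[6., 0.], [0., 6.]])        # assumed prior covariance
z_mean     = np.array([10., 10.])                  # assumed radar measurement
z_cov      = np.array([[2., 1.9], [1.9, 2.]])      # Sigma from the text

post_mean, post_cov = multivariate_multiply(prior_mean, prior_cov, z_mean, z_cov)
print(post_mean)
print(post_cov)
```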
@@ -1782,7 +1784,7 @@
"source": [
"The brilliance of the Kalman filter is taking the insights of the chapter up to this point and finding an optimal mathematical solution. The Kalman filter finds what is called a *least squared fit* to the set of measurements to produce an optimal output. We will not trouble ourselves with the derivation of these equations. It runs to several pages, and offers a lot less insight than the words above, in my opinion. Furthermore, to create a Kalman filter for your application you will not be manipulating these equations, but only specifying a number of parameters that are used by them. It would be going too far to say that you will never need to understand these equations; but to start we can pass them by and I will present the code that implements them. So, first, let's see the equations. \n",
"> Kalman Filter Predict Step:\n",
"\n",
">\n",
"> $$\n",
"\\begin{aligned}\n",
"\\hat{\\textbf{x}}^-_{k+1} &= \\mathbf{F}_{k}\\hat{\\textbf{x}}_{k} + \\mathbf{B}_k\\mathbf{u}_k\\;\\;\\;&(1) \\\\\n",
@@ -1791,7 +1793,7 @@
"$$\n",
"\n",
"> Kalman Filter Update Step:\n",
"\n",
">\n",
">$$\n",
"\\begin{aligned}\n",
"\\textbf{y}_k &= \\textbf{z}_k - \\textbf{H}_k\\hat{\\textbf{}x}^-_k\\;\\;\\;&(3) \\\\\n",
@@ -2281,6 +2283,7 @@
"State variables can either be *observed variables* - directly measured by a sensor, or *unobserved variables* - inferred from the observed variables. For our dog tracking problem, our observed state variable is position, and the unobserved variable is velocity. \n",
"\n",
"In the previous chapter we would denote the dog's position being 3.2 meters as:\n",
"\n",
"$$\\mu = 3.2$$\n",
"\n",
"In this chapter we will use the multivariate Gaussian as described at the beginning of this chapter. For example, if we wanted to specify a position of 10.0 m and a velocity of 4.5 m/s, we would write:\n",
@@ -2341,12 +2344,14 @@
"$\\mathbf{x}^- = \\mathbf{Fx}.$\n",
"\n",
"A quick review on how to represent linear equations with matrices. Take the following two equations:\n",
"\n",
"$$2x+3y = 8\\\\3x-y=1$$\n",
"\n",
"We can put this in matrix form by writing:\n",
"\n",
"$$\n",
"\\begin{bmatrix}2& 3 \\\\ 3& -1\\end{bmatrix}\\begin{bmatrix}x\\\\y\\end{bmatrix}=\\begin{bmatrix}8\\\\1\\end{bmatrix}$$\n",
"\n",
"If you perform the matrix multiplication in this equation the result will be the two equations above.\n",
"\n",
"So, given that $\\mathbf{x} = \\begin{bmatrix}x \\\\ \\dot{x}\\end{bmatrix}$ we can write:\n",
@@ -2458,8 +2463,8 @@
"\n",
"$$\n",
"\\begin{aligned}\n",
" \\textbf{y} &= \\mathbf{z} - &\\begin{bmatrix}1&0\\end{bmatrix} &\\begin{bmatrix}x \\\\ \\dot{x}\\end{bmatrix}, or \\\\\n",
"\\textbf{y} &= \\mathbf{z} - &\\textbf{H}&\\begin{bmatrix}x \\\\ \\dot{x}\\end{bmatrix}\n",
"\\textbf{y} &= \\mathbf{z} - \\begin{bmatrix}1&0\\end{bmatrix} \\begin{bmatrix}x \\\\ \\dot{x}\\end{bmatrix}, or \\\\\n",
"\\textbf{y} &= \\mathbf{z} - \\textbf{H}\\begin{bmatrix}x \\\\ \\dot{x}\\end{bmatrix}\n",
"\\end{aligned}\n",
"$$\n",
"\n",
@@ -2547,20 +2552,21 @@
"For these kinds of problems we can rely on precomputed forms for $\\mathbf{Q}$. We will learn how to derive these matrices in the next chapter. For now I present them without proof. If we assume that for each time period the acceleration due to process noise is constant and uncorrelated, we get the following.\n",
"\n",
"For constant velocity the form is\n",
" $$\\begin{bmatrix}\n",
" \\frac{1}{4}{\\Delta t}^4 & \\frac{1}{2}{\\Delta t}^3 \\\\\n",
" \\frac{1}{2}{\\Delta t}^3 & \\Delta t^2\n",
" \\end{bmatrix}\\sigma^2\n",
" $$\n",
"\n",
"$$\\begin{bmatrix}\n",
"\\frac{1}{4}{\\Delta t}^4 & \\frac{1}{2}{\\Delta t}^3 \\\\\n",
"\\frac{1}{2}{\\Delta t}^3 & \\Delta t^2\n",
"\\end{bmatrix}\\sigma^2\n",
"$$\n",
"\n",
"and for constant acceleration we have\n",
"\n",
" $$\\begin{bmatrix}\n",
" \\frac{1}{4}{\\Delta t}^4 & \\frac{1}{2}{\\Delta t}^3 & \\frac{1}{2}{\\Delta t}^2 \\\\\n",
" \\frac{1}{2}{\\Delta t}^3 & {\\Delta t}^2 & \\Delta t \\\\\n",
" \\frac{1}{2}{\\Delta t}^2 & \\Delta t & 1\n",
" \\end{bmatrix} \\sigma^2\n",
" $$\n"
"$$\\begin{bmatrix}\n",
"\\frac{1}{4}{\\Delta t}^4 & \\frac{1}{2}{\\Delta t}^3 & \\frac{1}{2}{\\Delta t}^2 \\\\\n",
"\\frac{1}{2}{\\Delta t}^3 & {\\Delta t}^2 & \\Delta t \\\\\n",
"\\frac{1}{2}{\\Delta t}^2 & \\Delta t & 1\n",
"\\end{bmatrix} \\sigma^2\n",
"$$"
]
},
{
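The two forms of Q re-indented in this hunk can also be generated programmatically. The helpers below build them exactly as written; FilterPy's `Q_discrete_white_noise` should produce the same matrices, but the sketch sticks to plain NumPy to stay self-contained. The dt and variance arguments are arbitrary examples.

```python
import numpy as np

def q_constant_velocity(dt, var):
    """Q for the constant-velocity form above."""
    return np.array([[dt**4/4, dt**3/2],
                     [dt**3/2, dt**2  ]]) * var

def q_constant_acceleration(dt, var):
    """Q for the constant-acceleration form above."""
    return np.array([[dt**4/4, dt**3/2, dt**2/2],
                     [dt**3/2, dt**2,   dt     ],
                     [dt**2/2, dt,      1.     ]]) * var

print(q_constant_velocity(dt=1., var=2.35))
print(q_constant_acceleration(dt=0.1, var=2.35))
```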
@@ -2974,6 +2980,7 @@
"The equations in this chapter look very different from the equations in the last chapter, yet I claimed the last chapter implemented a full 1-D (univariate) Kalman filter. \n",
"\n",
"Recall that the univariate equations for the update step are:\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"\\mu &=\\frac{\\sigma_1^2 \\mu_2 + \\sigma_2^2 \\mu_1} {\\sigma_1^2 + \\sigma_2^2}, \\\\\n",
@@ -2982,6 +2989,7 @@
"$$\n",
"\n",
"and that the 1-D equations for the predict step are:\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"\\mu &= \\mu_1+\\mu_2, \\\\ \\sigma^2 &= \\sigma_1^2 + \\sigma_2^2\n",
@@ -3147,7 +3155,8 @@
"\n",
"> **Note:** This section will provide you with a strong intuition into what the Kalman filter equations are actually doing. While this section is not strictly required, I recommend reading this section carefully as it should make the rest of the material easier to understand. It is not merely a proof of correctness that you would normally want to skip past! The equations look complicated, but they are actually doing something quite simple.\n",
"\n",
"Let's start with the predict step, which is slightly easier. Here are the multivariate equations. \n",
"Let's start with the predict step, which is slightly easier. Here are the multivariate equations.\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"\\mathbf{x}^- &= \\mathbf{F x} + \\mathbf{B u} \\\\\n",
@@ -3162,6 +3171,7 @@
"Here the variables are not bold, denoting that they are not matrices or vectors. \n",
"\n",
"Our state transition is simple - the next state is the same as this state, so $F=1$. The same holds for the motion transition, so, $B=1$. Thus we have\n",
"\n",
"$$x = x + u$$\n",
"\n",
"which is equivalent to the Gaussian equation from the last chapter\n",
@@ -3173,6 +3183,7 @@
"$$\\mathbf{P}^- = \\mathbf{FP{F}}^\\mathsf{T} + \\mathbf{Q}$$\n",
"\n",
"Again, since our state only has one variable $\\mathbf{P}$ and $\\mathbf{Q}$ must also be $1\\times 1$ matrix, which we can treat as scalars, yielding \n",
"\n",
"$$P^- = FPF^\\mathsf{T} + Q$$\n",
"\n",
"We already know $F=1$. The transpose of a scalar is the scalar, so $F^\\mathsf{T} = 1$. This yields\n",
@@ -3180,6 +3191,7 @@
"$$P^- = P + Q$$\n",
"\n",
"which is equivalent to the Gaussian equation of \n",
"\n",
"$$\\sigma^2 = \\sigma_1^2 + \\sigma_2^2$$\n",
"\n",
"This proves that the multivariate equations are performing the same math as the univariate equations for the case of the dimension being 1."
@@ -3190,6 +3202,7 @@
"metadata": {},
"source": [
"Here our our multivariate Kalman filter equations for the update step.\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"\\textbf{y} &= \\mathbf{z} - \\mathbf{H x^-}\\\\\n",
@@ -3215,6 +3228,7 @@
"So let's finish off the algebra to prove this. It's straightforward, and not at all necessary for you to learn unless you are interested. Feel free to skim ahead to the last paragraph in this section if you prefer skipping the algebra.\n",
"\n",
"Recall that the univariate equations for the update step are:\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"\\mu &=\\frac{\\sigma_1^2 \\mu_2 + \\sigma_2^2 \\mu_1} {\\sigma_1^2 + \\sigma_2^2}, \\\\\n",
@@ -3586,6 +3600,7 @@
"$$\\textbf{x}=\\begin{bmatrix}x\\\\\\dot{x}\\end{bmatrix}$$\n",
"\n",
"and the covariance matrix happens to be\n",
"\n",
"$$\\textbf{P}=\\begin{bmatrix}2&0\\\\0&6\\end{bmatrix}$$\n",
"\n",
"we know that the variance of $x$ is 2 m, and the variance of $\\dot{x}$ is 6 m/s. The off diagonal elements are all 0, so we also know that $x$ and $\\dot{x}$ are not correlated. Recall the ellipses that we drew of the covariance matrices. Let's look at the ellipse for the matrix."
@@ -3668,7 +3683,6 @@
"(X_i - \\mu_i)(X_j - \\mu_j)\n",
"\\end{bmatrix}$$\n",
"\n",
"\n",
"We can rearrange the terms to get\n",
"\n",
"$$\\begin{aligned}\n",
@@ -3677,7 +3691,6 @@
"&= \\sigma_{j,i}\n",
"\\end{aligned}$$\n",
"\n",
"\n",
"In general, we can state that $\\sigma_{i,j}=\\sigma_{j,i}$."
]
},
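The symmetry property derived in the last two hunks is easy to confirm on sampled data; the random variables below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(3, 1000))   # three made-up random variables

cov = np.cov(data)                  # 3x3 sample covariance matrix
print(np.allclose(cov, cov.T))      # True: cov[i, j] == cov[j, i]
```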