Fixes typos and small mistakes in chap. 7
This commit is contained in:
parent 0ebd3fd488
commit 9384d97093
@@ -400,7 +400,7 @@
 "&= 1\\end{aligned}$$\n",
 "\n",
 "\n",
-"Using similar math we can compute that $VAR(a) = 0.25$ and $VAR(c)=4$. This allows us to fill in the covariance matrix with\n",
+"Using similar math we can compute that $VAR(b) = 0.25$ and $VAR(c)=4$. This allows us to fill in the covariance matrix with\n",
 "\n",
 "$$\\Sigma = \\begin{bmatrix}1 & & \\\\ & 0.25 & \\\\ &&4\\end{bmatrix}$$"
 ]
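The corrected line puts $VAR(b)=0.25$ and $VAR(c)=4$ on the diagonal of $\Sigma$. That diagonal matrix can be sanity-checked numerically; a minimal sketch assuming three independent zero-mean normals with standard deviations 1, 0.5, and 2 (the distributions are illustrative, not taken from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Three independent variables whose variances match the diagonal of
# Sigma in the corrected text: VAR(a)=1, VAR(b)=0.25, VAR(c)=4.
samples = np.stack([
    rng.normal(0.0, 1.0, n),   # std 1.0 -> variance 1
    rng.normal(0.0, 0.5, n),   # std 0.5 -> variance 0.25
    rng.normal(0.0, 2.0, n),   # std 2.0 -> variance 4
])
sigma = np.cov(samples)        # 3x3 sample covariance matrix
print(np.round(np.diag(sigma), 2))
print(np.round(sigma, 2))      # off-diagonals near 0: the variables are independent
```

With 100,000 samples the estimated diagonal lands close to `[1, 0.25, 4]` and the off-diagonal terms are near zero, matching the diagonal form of $\Sigma$ above.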
@@ -847,7 +847,7 @@
 "\n",
 "However, I can present enough of the theory to allow us to create the system equations for many different Kalman filters, and give you enough background to at least follow the mathematics in the literature. My goal is to get you to the stage where you can read a Kalman filtering book or paper and understand it well enough to implement the algorithms. The background math is deep, but we end up using a few simple techniques over and over again in practice.\n",
 "\n",
-"I struggle a bit with the proper way to present this material. If you have not encountered this math before I fear reading this section will not be very profitable for you. In the **Extended Kalman Filter** chapter I take a more ad-hoc way of presenting this information where I expose a problem that the KF needs to solve, then provide the math without a lot of supporting theory. This gives you the motivation behind the mathematics at the cost of not knowing why the math I give you is correct. On the other hand, the following section gives you the math, but somewhat divorced from the specifics of the problem we are trying to solve. Only you know what kind of learner your are. If you like the presentation of the book so far (practical first, then the math) you may want to wait until you read the **Extended Kalman Filter** before \n",
+"I struggle a bit with the proper way to present this material. If you have not encountered this math before I fear reading this section will not be very profitable for you. In the **Extended Kalman Filter** chapter I take a more ad-hoc way of presenting this information where I expose a problem that the KF needs to solve, then provide the math without a lot of supporting theory. This gives you the motivation behind the mathematics at the cost of not knowing why the math I give you is correct. On the other hand, the following section gives you the math, but somewhat divorced from the specifics of the problem we are trying to solve. Only you know what kind of learner you are. If you like the presentation of the book so far (practical first, then the math) you may want to wait until you read the **Extended Kalman Filter** before.\n",
 "In particular, if your intent is to work with Extended Kalman filters (a very prelevant form of nonlinear Kalman filtering) you will need to understand this math at least at the level I present it. If that is not your intent this section may still prove to be beneficial if you need to simulate a nonlinear system in order to test your filter.\n",
 "\n",
 "Let's lay out the problem and discuss what the solution will be. We model *dynamic systems* with a set of first order *differential equations*. This should not be a surprise as calculus is the math of things that vary. For example, we say that velocity is the derivative of distance with respect to time\n",
@@ -900,7 +900,7 @@
 "\n",
 "If we let the solution to the left hand side by named $F(x)$, we get\n",
 "\n",
-"$$F(x) - f(x_0) = t-t_0$$\n",
+"$$F(x) - F(x_0) = t-t_0$$\n",
 "\n",
 "We then solve for x with\n",
 "\n",
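The corrected equation reads $F(x) - F(x_0) = t - t_0$, where $F$ is the antiderivative of $1/f(x)$ for the separable ODE $\dot{x} = f(x)$. For the concrete choice $f(x) = x$ (my example, not from the chapter), $F(x) = \ln x$, so the solution is $x(t) = x_0 e^{t-t_0}$; a crude forward-Euler integration agrees with that closed form:

```python
import math

# For dx/dt = f(x) with f(x) = x, F(x) = integral of dx/f(x) = ln(x),
# and F(x) - F(x0) = t - t0 solves to x(t) = x0 * exp(t - t0).
def x_closed(t, t0=0.0, x0=2.0):
    return x0 * math.exp(t - t0)

# Cross-check with forward Euler on dx/dt = x from t=0 to t=1.
n = 100_000
dt = 1.0 / n
x = 2.0                  # x0 = 2.0 (illustrative)
for _ in range(n):
    x += x * dt          # Euler step: x_{k+1} = x_k + f(x_k)*dt
print(x, x_closed(1.0))  # the two values agree closely
```

The Euler result converges to $x_0 e$ as the step size shrinks, which is exactly what the corrected relation predicts.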
@@ -1009,7 +1009,7 @@
 "source": [
 "This is not bad for only three terms. If you are curious, go ahead and implement this as a Python function to compute the series for an arbitrary number of terms. But I will forge ahead to the matrix form of the equation. \n",
 "\n",
-"Let's consider tracking an object moving in a vacuum. In one dimesion the differential equation for motion with zero acceleration is\n",
+"Let's consider tracking an object moving in a vacuum. In one dimension the differential equation for motion with zero acceleration is\n",
 "\n",
 "$$ v = \\dot{x}\\\\a=\\ddot{x} =0,$$\n",
 "\n",
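The constant-velocity model in this hunk ($v = \dot{x}$, $\ddot{x} = 0$) discretizes over a time step $\Delta t$ to the state-transition matrix $\mathbf{F} = \begin{bmatrix}1 & \Delta t\\0 & 1\end{bmatrix}$. A small sketch with illustrative values:

```python
import numpy as np

# Constant-velocity model: v = x-dot, x-double-dot = 0.
# With state [x, v], one step of dt advances x by v*dt and leaves v unchanged.
dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])

state = np.array([0.0, 2.0])  # x = 0 m, v = 2 m/s (made-up values)
for _ in range(10):           # propagate 10 steps = 1 second
    state = F @ state
print(state)                  # x ~ 2.0 m after 1 s, v still 2.0 m/s
```

Because acceleration is zero, repeated application of $\mathbf{F}$ just accumulates $v\,\Delta t$ onto the position, matching the differential equation above.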
@@ -1165,7 +1165,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We model kinematic systems using Newton's equations. So far in this book we have either used position and velocity, or position,velocity, and acceleration as the models for our systems. There is nothing stopping us from going further - we can model jerk, jounce, snap, and so on. We don't do that normally because adding terms beyond the dynamics of the real system actually degrades the solution. \n",
+"We model kinematic systems using Newton's equations. So far in this book we have either used position and velocity, or position, velocity, and acceleration as the models for our systems. There is nothing stopping us from going further - we can model jerk, jounce, snap, and so on. We don't do that normally because adding terms beyond the dynamics of the real system actually degrades the solution. \n",
 "\n",
 "Let's say that we need to model the position, velocity, and acceleration. We can then assume that acceleration is constant. Of course, there is process noise in the system and so the acceleration is not actually constant. In this section we will assume that the acceleration changes by a continuous time zero-mean white noise $w(t)$. In other words, we are assuming that velocity is acceleration changing by small amounts that over time average to 0 (zero-mean). \n",
 "\n",
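This hunk models acceleration as perturbed by continuous zero-mean white noise $w(t)$. The process-noise covariance that assumption produces can be derived symbolically; below is a sketch of the standard derivation, where $\Phi_s$ denotes the assumed spectral density of $w(t)$ and the noise enters through the highest derivative:

```python
import sympy as sp

t = sp.symbols('t')
dt, phi = sp.symbols('Delta_t Phi_s', positive=True)

# Position/velocity/acceleration dynamics; continuous white noise enters
# through the acceleration, so the noise gain is G = [0, 0, 1]^T.
F = sp.Matrix([[0, 1, 0],
               [0, 0, 1],
               [0, 0, 0]])
G = sp.Matrix([0, 0, 1])

# F is nilpotent (F^3 = 0), so the matrix exponential series truncates:
Phi = sp.eye(3) + F*t + (F*t)**2 / 2

# Q = integral over [0, Delta_t] of Phi(t) G Phi_s G^T Phi(t)^T dt
integrand = (Phi * G) * (Phi * G).T * phi
Q = integrand.applyfunc(lambda e: sp.integrate(e, (t, 0, dt)))
print(Q)
```

The result is the familiar continuous-white-noise $\mathbf{Q}$ with entries such as $\Phi_s \Delta t^5/20$, $\Phi_s \Delta t^3/3$, and $\Phi_s \Delta t$ on the diagonal.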
@@ -2076,7 +2076,7 @@
 "\n",
 "$$\\delta \\mathbf{z}^+ = \\mathbf{z} - h(\\mathbf{x}^+)$$\n",
 "\n",
-"I don't use the plus superscript much because I find it quickly makes the equations unreadable, but $\\mathbf{x}^+$ it is the *a posteriori* state estimate, which is the predicted or unknown future state. In other words, the predict step of the linear Kalman filter computes this value. Here it is stands for the value of x which the ILS algorithm will compute on each iteration.\n",
+"I don't use the plus superscript much because I find it quickly makes the equations unreadable, but $\\mathbf{x}^+$ is the *a posteriori* state estimate, which is the predicted or unknown future state. In other words, the predict step of the linear Kalman filter computes this value. Here it is stands for the value of x which the ILS algorithm will compute on each iteration.\n",
 "\n",
 "These equations give us the following linear algebra equation:\n",
 "\n",
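The residual equation in this hunk, $\delta \mathbf{z}^+ = \mathbf{z} - h(\mathbf{x}^+)$, is easy to make concrete. A toy sketch where $h$ is a range measurement from the origin; the function and the numbers are illustrative, not taken from the chapter:

```python
import numpy as np

# Nonlinear measurement function h(x): range from the origin to
# position (x[0], x[1]). Chosen only to illustrate the residual.
def h(x):
    return np.sqrt(x[0]**2 + x[1]**2)

x_post = np.array([3.0, 4.0])  # a posteriori state estimate (made up)
z = 5.2                        # actual measurement (made up)

# delta_z = z - h(x_post): the part of the measurement the current
# estimate does not explain, which ILS drives toward zero per iteration.
residual = z - h(x_post)
print(residual)
```

Here $h(\mathbf{x}^+) = 5$, so the residual is $0.2$: the quantity each ILS iteration tries to reduce by updating $\mathbf{x}^+$.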