Fix typos
This commit is contained in:
parent
24b9fb3cf7
commit
ce03c0b5bc
@ -1038,7 +1038,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Fading memory filters are not normally classified as an adaptive filter since they do not adapt to the the input, but they do provide good performance with maneuvering targets. They also have the benefit of having a very simple computational form for first, second, and third order kinematic filters (e.g. the filters we are using in this chapter). This simple form does not require the Ricatti equations to compute the gain of the Kalman filter, which drastically reduces the amount of computation. However, there is also a form that works with the standard Kalman filter. I will focus on the latter in this chapter since our focus is more on adaptive filters. Both forms of the fading memory filter are implemented in `FilterPy`.\n",
"Fading memory filters are not normally classified as adaptive filters since they do not adapt to the input, but they do provide good performance with maneuvering targets. They also have the benefit of having a very simple computational form for first, second, and third order kinematic filters (e.g. the filters we are using in this chapter). This simple form does not require the Riccati equations to compute the gain of the Kalman filter, which drastically reduces the amount of computation. However, there is also a form that works with the standard Kalman filter. I will focus on the latter in this chapter since our focus is more on adaptive filters. Both forms of the fading memory filter are implemented in `FilterPy`.\n",
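As a sketch of the standard-Kalman-filter form mentioned above: the fading memory variant simply inflates the predicted covariance by a factor $\alpha^2$ (with $\alpha$ slightly greater than 1), which discounts the influence of older measurements. The function name and the model values below are illustrative assumptions, not FilterPy's API:

```python
import numpy as np

def fading_predict(x, P, F, Q, alpha=1.02):
    """Kalman predict step with fading memory.

    alpha > 1 inflates the prior covariance, discounting
    older measurements; alpha = 1 recovers the standard filter.
    """
    x = F @ x
    P = alpha**2 * (F @ P @ F.T) + Q
    return x, P

# constant-velocity model, dt = 1 (illustrative values)
F = np.array([[1., 1.], [0., 1.]])
Q = np.eye(2) * 0.01
x = np.array([0., 1.])
P = np.eye(2)

x1, P1 = fading_predict(x, P, F, Q, alpha=1.05)
x0, P0 = fading_predict(x, P, F, Q, alpha=1.0)
# the fading filter is less certain than the standard one,
# so it weights new measurements more heavily
assert np.all(np.diag(P1) > np.diag(P0))
```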
"\n",
"The Kalman filter is recursive, but it incorporates all of the previous measurements into the current computation of the filter gain. If the target behavior is consistent with the process model then this allows the Kalman filter to find the optimal estimate for every measurement. Consider a ball in flight - we can clearly estimate the position of the ball at time t better if we take into account all the previous measurements. If we only used some of the measurements we would be less certain about the current position, and thus more influenced by the noise in the measurement. If this is still not clear, consider the worst case. Suppose we forget all but the last measurement and estimate. We would then have no confidence in the position and trajectory of the ball, and would have little choice but to weight the current measurement heavily. If the measurement is noisy, the estimate is noisy. We see this effect every time a Kalman filter is initialized. The early estimates are noisy, but then they settle down as more measurements are acquired.\n",
"\n",
@ -1320,7 +1320,7 @@
"\n",
"That looks messy, but it is straightforward. The numerator is just the likelihood from this time step multiplied by the probability that this filter was correct at the last time step. We need the probabilities for all of the filters to sum to one, so we normalize with the term in the denominator, which is the sum of this product over all of the filters. \n",
"\n",
"That is a recursive definition, so we need to assign some initial probability for each filter. In the absence of better information, use $\\frac{1}{N}$ for each. Then we can compute the estimated state as the sum of the state from each filter multiplied the the probability of that filter being correct.\n",
"That is a recursive definition, so we need to assign some initial probability for each filter. In the absence of better information, use $\\frac{1}{N}$ for each. Then we can compute the estimated state as the sum of the state from each filter multiplied by the probability of that filter being correct.\n",
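The recursion above can be sketched directly in numpy. The likelihoods and state estimates here are made-up stand-ins for the output of each filter in the bank:

```python
import numpy as np

N = 2
p = np.ones(N) / N            # initial probability 1/N for each filter

# stand-in values for one epoch: each filter's likelihood and state estimate
likelihood = np.array([0.4, 0.1])
states = np.array([[10.0, 1.0],   # filter 0: position, velocity
                   [11.0, 2.0]])  # filter 1

p = likelihood * p            # numerator: likelihood times prior probability
p /= p.sum()                  # normalize so the probabilities sum to one

x = p @ states                # estimate: probability-weighted sum of the states
```

With these numbers the normalized probabilities are `[0.8, 0.2]`, so the combined estimate leans heavily toward filter 0.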
"\n",
"Here is a complete implementation:"
]
@ -1388,11 +1388,11 @@
"source": [
"I plot the filter's estimates alone on the left so you can see how smooth the result is. On the right I plot both the estimate and the measurements to prove that the filter is tracking the maneuver. \n",
"\n",
"Again I want to emphasize that this is nothing more than the Bayesian algorithm we have been using throughout the book. We have two (or more) measurements or estimate, each with an associated probability. We choose are estimate as a weighted combination of each of those values, where the weights are proportional to the probability of correctness. The computation of the probability at each step is \n",
"Again I want to emphasize that this is nothing more than the Bayesian algorithm we have been using throughout the book. We have two (or more) measurements or estimates, each with an associated probability. We choose an estimate as a weighted combination of each of those values, where the weights are proportional to the probability of correctness. The computation of the probability at each step is \n",
"\n",
"$$\\frac{\\texttt{Prob(meas | state)} \\times\\texttt{prior}}{\\texttt{normalization}}$$\n",
"\n",
"which is Bayes therom.\n",
"which is Bayes theorem.\n",
"\n",
"For real world problems you are likely to need more than two filters in your bank. In my job I track objects using computer vision. I track hockey pucks. Pucks slide, they bounce and skitter, they roll, they ricochet, they are picked up and carried, and they are 'dribbled' quickly by the players. I track humans who are athletes, and their capacity for nonlinear behavior is nearly limitless. A two-filter bank doesn't get very far in those circumstances. I need multiple process models, different assumptions for noise due to the computer vision detection, and so on. But you have the main idea. \n",
"\n",
@ -1446,11 +1446,11 @@
"\n",
"This naive approach leads to combinatorial explosion. At step 1 we generate $N$ hypotheses, one per filter. At step 2 we generate another $N$ hypotheses which then need to be combined with the prior $N$ hypotheses, which yields $N^2$ hypotheses. Many different schemes have been tried which either cull unlikely hypotheses or merge similar ones, but the algorithms still suffer from computational expense and/or poor performance. I will not cover these in this book, but prominent examples in the literature are the generalized pseudo Bayes (GPB) algorithms.\n",
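The growth is easy to see by enumerating the model-sequence hypotheses directly; this toy count assumes nothing is culled or merged:

```python
from itertools import product

N = 3  # filters in the bank
for step in range(1, 5):
    # one model choice per time step; every sequence is a distinct hypothesis
    hypotheses = list(product(range(N), repeat=step))
    print(step, len(hypotheses))  # N**step: 3, 9, 27, 81
```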
"\n",
"The *Interacting Multiple Models* (IMM) algorithm was invented by Blom[5] to solve the combinatorial explosion problem of multiple models. A subsequent paper by Blom and Bar-Shalom is the most cited paper [6]. The idea is to have 1 filter for each possible mode of behavior of the system. At each epoch we we let the filters *interact* with each other. The more likely filters modify the estimates of the less likely filters so they more nearly represent the current state of the sytem. This blending is done probabilistically, so the unlikely filters also modify the likely filters, but by a much smaller amount. \n",
"The *Interacting Multiple Models* (IMM) algorithm was invented by Blom[5] to solve the combinatorial explosion problem of multiple models. A subsequent paper by Blom and Bar-Shalom is the most cited paper [6]. The idea is to have one filter for each possible mode of behavior of the system. At each epoch we let the filters *interact* with each other. The more likely filters modify the estimates of the less likely filters so they more nearly represent the current state of the system. This blending is done probabilistically, so the unlikely filters also modify the likely filters, but by a much smaller amount. \n",
"\n",
"For example, suppose we have two modes: going straight, or turning. Each mode is represented by a Kalman filter, maybe a first order and second order filter. Now say the target it turning. The second order filter will produce a good estimate, and the first order filter will lag the signal. The likelihood function of each tells us which of the filters is most probable. The first order filter will have low likelihood, so we adjust its estimate greatly with the second order filter. The the second order filter is very likely, so its estimate will only be changed slightly by the first order Kalman filter. \n",
"For example, suppose we have two modes: going straight, or turning. Each mode is represented by a Kalman filter, maybe a first order and second order filter. Now say the target is turning. The second order filter will produce a good estimate, and the first order filter will lag the signal. The likelihood function of each tells us which of the filters is most probable. The first order filter will have low likelihood, so we adjust its estimate greatly with the second order filter. The second order filter is very likely, so its estimate will only be changed slightly by the first order Kalman filter. \n",
"\n",
"Now suppose the target stops turning. Because we have been revising the first order filter's estimate with the second order estimate it will not have been lagging the signal by very much. within just a few epochs it will be producing very good (high likelihood) estimates and be the most probable filter. It will then start contributing heavily to the estimate of the second order filter. Recall that a second order filter mistakes measurement noise for acceleration. This adjustment insures reduces this effect greatly."
"Now suppose the target stops turning. Because we have been revising the first order filter's estimate with the second order estimate it will not have been lagging the signal by very much. Within just a few epochs it will be producing very good (high likelihood) estimates and be the most probable filter. It will then start contributing heavily to the estimate of the second order filter. Recall that a second order filter mistakes measurement noise for acceleration. This adjustment greatly reduces this effect."
]
},
{
@ -1710,7 +1710,7 @@
"\n",
"$$\\boldsymbol\\omega_{ij} = \\| \\mu_i \\cdot \\mathbf M_{ij}\\|$$\n",
"\n",
"We can compute this as follows. I computed the update of $\\mu$ and $\\bar c$ out of order above (ou must compute $\\bar c$ incorporating the transition probability matrix into $\\mu$), so I'll need to correct that here:"
"We can compute this as follows. I computed the update of $\\mu$ and $\\bar c$ out of order above (you must compute $\\bar c$ incorporating the transition probability matrix into $\\mu$), so I'll need to correct that here:"
]
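A minimal numpy sketch of the mixing-probability formula $\boldsymbol\omega_{ij}$ described above, using an assumed two-mode transition matrix (the numbers are illustrative, not from the text):

```python
import numpy as np

M = np.array([[0.95, 0.05],    # mode transition probabilities (assumed values)
              [0.05, 0.95]])
mu = np.array([0.8, 0.2])      # current mode probabilities

cbar = mu @ M                       # prob. of being in mode j after transition
omega = (M * mu[:, None]) / cbar    # omega[i, j] = mu_i * M_ij, normalized over i

# each column of omega is a valid probability distribution
assert np.allclose(omega.sum(axis=0), 1.0)
```

Note that `cbar` must be computed before the normalization, which is exactly the ordering issue the text mentions.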
},
{