Fixed typos, corrected code #58.
parent 4be8ec41bf
commit aa039983a1
@@ -828,7 +828,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"That is pretty good! There is a lot of data here, so let's talk about how to interpret it. The thick green line shows the estimate from the filter. It starts at day 0 with the inital guess of 160 lbs. The red line shows the prediction that is made from the previous day's weight. So, on day one the previous weight was 160 lbs, the weight gain is 1 lb, and so the first prediction is 161 lbs. The estimate on day one is then part way between the prediction and measurement at 159.8 lbs. Above the chart is a print out of the previous weight, predicted weight, and new estimate for each day. Finally, the thin black line shows the actual weight gain of the person being weighed. \n",
+"That is pretty good! There is a lot of data here, so let's talk about how to interpret it. The thick blue line shows the estimate from the filter. It starts at day 0 with the initial guess of 160 lbs. The red line shows the prediction that is made from the previous day's weight. So, on day one the previous weight was 160 lbs, the weight gain is 1 lb, and so the first prediction is 161 lbs. The estimate on day one is then part way between the prediction and measurement at 159.8 lbs. Above the chart is a printout of the previous weight, predicted weight, and new estimate for each day. Finally, the thin black line shows the actual weight gain of the person being weighed. \n",
 "\n",
 "The estimates are not a straight line, but they are straighter than the measurements and somewhat close to the trend line we created. Also, it seems to get better over time. \n",
 "\n",
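The day-one arithmetic described in that cell is easy to reproduce. A minimal sketch follows; the day-one scale reading (158 lbs) and the 4/10 blend factor are assumptions chosen to be consistent with the 161 and 159.8 figures quoted in the text, not values shown in this excerpt:

```python
# One predict/update step of the weight example (some values assumed, see above).
estimate  = 160.0   # day 0 initial guess, in lbs
gain_rate = 1.0     # assumed weight gain per day, in lbs
z         = 158.0   # assumed day-one scale reading

prediction = estimate + gain_rate                     # 161.0 lbs
estimate   = prediction + (4/10) * (z - prediction)   # 159.8 lbs, part way toward the measurement

print(prediction, estimate)   # 161.0 159.8
```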
@@ -959,7 +959,7 @@
 "source": [
 "This filter is the basis for a huge number of filters, including the Kalman filter. In other words, the Kalman filter is a form of the g-h filter, which I will prove later in the book. So is the Least Squares filter, which you may have heard of, and so is the Benedict-Bordner filter, which you probably have not. Each filter has a different way of assigning values to $g$ and $h$, but otherwise the algorithms are identical. For example, the $\\alpha$-$\\beta$ filter assigns a constant to $g$ and $h$, constrained to a certain range of values. Other filters such as the Kalman will vary $g$ and $h$ dynamically at each time step.\n",
 "\n",
-"**Let me repeat the key points as they are so important**. If you do not understand these you will not understand the rest of the book. If you do understand them, then the rest of the book will unfold naturally for you as mathematically elaborations to various 'what if' questions we will ask about $g$ and $h$. The math may look profoundly different, but the algorithm will be exactly the same.\n",
+"**Let me repeat the key points as they are so important**. If you do not understand these you will not understand the rest of the book. If you do understand them, then the rest of the book will unfold naturally for you as mathematical elaborations to various 'what if' questions we will ask about $g$ and $h$. The math may look profoundly different, but the algorithm will be exactly the same.\n",
 "\n",
 "* Multiple data points are more accurate than one data point, so throw nothing away no matter how inaccurate it is.\n",
 "* Always choose a number part way between two data points to create a more accurate estimate.\n",
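Because every member of this family shares the same predict/update structure and differs only in how $g$ and $h$ are chosen, the whole algorithm fits in a few lines. The following is a generic sketch; the function name, the constant-velocity prediction, and the example numbers are mine, not a particular library's API:

```python
def g_h_filter(data, x0, dx, g, h, dt=1.0):
    """Generic g-h filter: fixed g and h give an alpha-beta style filter;
    a Kalman filter would instead recompute g and h at every step."""
    x, estimates = x0, []
    for z in data:
        x_pred = x + dx * dt            # predict (system propagation)
        residual = z - x_pred
        dx = dx + h * residual / dt     # update the rate estimate
        x = x_pred + g * residual       # update the state estimate
        estimates.append(x)
    return estimates

# Illustrative use with made-up measurements:
print(g_h_filter([1.1, 2.3, 2.9, 4.2], x0=0., dx=1., g=0.6, h=0.1))
```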
@@ -996,7 +996,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Let me introduce some more formal terminology. The predict step is known as **sytem propagation**. The *system* is whatever we are estimating - in this case my weight. We *propogate* it into the future. Some texts call this the **evolution**. It means the same thing. The update step is usually known as the **measurement update**. One iteration of the system propagation and measurement update is known as an **epoch**. \n",
+"Let me introduce some more formal terminology. The predict step is known as **system propagation**. The *system* is whatever we are estimating - in this case my weight. We *propagate* it into the future. Some texts call this the **evolution**. It means the same thing. The update step is usually known as the **measurement update**. One iteration of the system propagation and measurement update is known as an **epoch**. \n",
 "\n",
 "Now let's explore a few different problem domains to better understand this algorithm. Consider the problem of trying to track a train on a track. The track constrains the position of the train to a very specific region. Furthermore, trains are large and slow. It takes them many minutes to slow down or speed up significantly. So, if I know that the train is at kilometer marker 23 km at time t and moving at 18 kph, I can be extremely confident in predicting its position at time t + 1 second. And why is that important? Suppose we can only measure its position with an accuracy of $\\pm$ 250 meters. The train is moving at 18 kph, which is 5 meters per second. So at t+1 second the train will be at 23.005 km yet the measurement could be anywhere from 22.755 km to 23.255 km. So if the next measurement says the position is at 23.4 we know that must be wrong. Even if at time t the engineer slammed on the brakes the train will still be very close to 23.005 km because a train cannot slow down very much in 1 second. If we were to design a filter for this problem (and we will a bit further in the chapter!) we would want to design a filter that gave a very high weighting to the prediction vs the measurement. \n",
 "\n",
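The numbers in the train example follow from a unit conversion, and checking them makes the case for trusting the prediction concrete. A short sketch of that sanity check:

```python
# Train example: how far can the state move in one epoch vs. the sensor error?
pos_km      = 23.0                 # position at time t
speed_mps   = 18 * 1000 / 3600     # 18 kph -> 5.0 m/s
accuracy_km = 0.250                # measurement accuracy, +/- 250 m

pred_km = pos_km + speed_mps * 1.0 / 1000.0            # prediction for t + 1 s
print(pred_km)                                         # 23.005
print(pred_km - accuracy_km, pred_km + accuracy_km)    # 22.755 23.255 -> a 23.4 km reading must be wrong
```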
@@ -1118,7 +1118,7 @@
 "source": [
 "You can create arrays of 3 or more dimensions, but we have no need for that here, and so I will not elaborate.\n",
 "\n",
-"By default the arrays use the data type of the values in the list; if there are multiple types than it will choose the type that most accurately represents all the values. So, for example, if your list contains a mix of `int` and `float` the data type of the array would be of type `float`. You can override this with the `dtype` parameter."
+"By default the arrays use the data type of the values in the list; if there are multiple types then it will choose the type that most accurately represents all the values. So, for example, if your list contains a mix of `int` and `float` the data type of the array would be of type `float`. You can override this with the `dtype` parameter."
 ]
 },
 {
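A quick illustration of that dtype behavior (standard NumPy; the exact integer width printed depends on your platform):

```python
import numpy as np

print(np.array([1, 2, 3]).dtype)               # an integer dtype, e.g. int64
print(np.array([1, 2, 3.0]).dtype)             # mixed int/float -> float64
print(np.array([1, 2, 3], dtype=float).dtype)  # forced to float64 via the dtype parameter
```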
@@ -1750,7 +1750,7 @@
 "source": [
 "Now let's look at the effect of varying g. Before you perform this exercise, recall that g is the scale factor for choosing between the measurement and prediction. What do you think the effect of a large value of g will be? A small value? \n",
 "\n",
-"Now, let the `noise_factor=50` and `dx=5`. Plot the results of $g = 0.1\\mbox{, } 0.5,\\mbox{ and } 0.9$."
+"Now, let the `noise_factor=50` and `dx=5`. Plot the results of $g = 0.1\\mbox{, } 0.4,\\mbox{ and } 0.8$."
 ]
 },
 {
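One possible setup for this exercise, sketched with an inline filter loop rather than the chapter's own helper functions (which this diff does not show); the noise model and the value of h below are assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(1)
zs = [i * 5. + np.random.randn() * 50. for i in range(100)]   # dx=5 with noise_factor=50 (assumed noise model)

plt.plot(zs, 'k.', alpha=0.3, label='measurements')
for g in (0.1, 0.4, 0.8):
    x, dx, est = 0., 5., []
    for z in zs:
        x_pred = x + dx               # predict
        residual = z - x_pred
        dx += 0.01 * residual         # h = 0.01, an assumed small value
        x = x_pred + g * residual     # update
        est.append(x)
    plt.plot(est, label=f'g = {g}')
plt.legend()
plt.show()
```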
@@ -1811,7 +1811,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"It is clear that as $g$ is larger we more closely follow the measurement instead of the prediction. When $g=0.9$ we follow the signal almost exactly, and reject almost none of the noise. One might naively conclude that $g$ should always be very small to maximize noise rejection. However, that means that we are mostly ignoring the measurements in favor of our prediction. What happens when the signal changes not due to noise, but an actual state change? Let's have a look. I will create data that has $\\dot{x}=1$ for 9 steps before changing to $\\dot{x}=0$. "
+"It is clear that as $g$ is larger we more closely follow the measurement instead of the prediction. When $g=0.8$ we follow the signal almost exactly, and reject almost none of the noise. One might naively conclude that $g$ should always be very small to maximize noise rejection. However, that means that we are mostly ignoring the measurements in favor of our prediction. What happens when the signal changes not due to noise, but an actual state change? Let's have a look. I will create data that has $\\dot{x}=1$ for 9 steps before changing to $\\dot{x}=0$. "
 ]
 },
 {
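A sketch of that data and of what a very small $g$ does to it; the specific g and h below are illustrative choices, not the notebook's:

```python
# Signal with an actual state change: x_dot = 1 for 9 steps, then x_dot = 0.
zs = [min(i, 9) for i in range(20)]

# With a very small g the filter mostly ignores the measurements,
# so it sails well past the corner at step 9.
x, dx, g, h = zs[0], 1., 0.1, 0.02
for z in zs[1:]:
    x_pred = x + dx
    residual = z - x_pred
    dx += h * residual
    x = x_pred + g * residual
    print(round(x, 2), end=' ')
```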
@@ -1901,7 +1901,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Now let's leave g unchanged and investigate the effect of modifying h. We know that h affects how much of we favor the measurement of $\\dot{x}$ vs our prediction. But what does this *mean*? If our signal is changing a lot (quickly relative to the time step of our filter), then a large $h$ will cause us to react to those transient changes rapidly. A smaller $h$ will cause us to react more slowly.\n",
+"Now let's leave g unchanged and investigate the effect of modifying h. We know that h affects how much we favor the measurement of $\\dot{x}$ vs our prediction. But what does this *mean*? If our signal is changing a lot (quickly relative to the time step of our filter), then a large $h$ will cause us to react to those transient changes rapidly. A smaller $h$ will cause us to react more slowly.\n",
 "\n",
 "We will look at three examples. We have a noiseless measurement that slowly goes from 0 to 1 in 50 steps. Our first filter uses a nearly correct initial value for $\\dot{x}$ and a small $h$. You can see from the output that the filter output is very close to the signal. The second filter uses the very incorrect guess of $\\dot{x}=2$. Here we see the filter 'ringing' until it settles down and finds the signal. The third filter uses the same conditions but it now sets $h=0.5$. If you look at the amplitude of the ringing you can see that it is much smaller than in the second chart, but the frequency is greater. It also settles down a bit quicker than the second filter, though not by much."
 ]
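A sketch of that three-filter comparison, re-defining the generic g-h filter from earlier so the block runs on its own; the value of g and the small h used for the first two filters are assumptions, since the code cell is not shown in this excerpt:

```python
import numpy as np
import matplotlib.pyplot as plt

def g_h_filter(data, x0, dx, g, h, dt=1.0):
    x, out = x0, []
    for z in data:
        x_pred = x + dx * dt            # predict
        residual = z - x_pred
        dx += h * residual / dt         # update rate
        x = x_pred + g * residual       # update state
        out.append(x)
    return out

zs = np.linspace(0, 1, 50)              # noiseless signal going from 0 to 1 in 50 steps
plt.plot(zs, 'k', lw=1, label='signal')
plt.plot(g_h_filter(zs, x0=0., dx=0.02, g=0.2, h=0.05), label='good dx, h=0.05')
plt.plot(g_h_filter(zs, x0=0., dx=2.0,  g=0.2, h=0.05), label='dx=2, h=0.05')
plt.plot(g_h_filter(zs, x0=0., dx=2.0,  g=0.2, h=0.5),  label='dx=2, h=0.5')
plt.legend()
plt.show()
```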
@@ -2329,7 +2329,7 @@
 "source": [
 "There are two lessons to be learned here. First, use the $h$ term to respond to changes in velocity that you are not modeling. But, far more importantly, there is a trade off here between responding quickly and accurately to changes in behavior and producing ideal output for when the system is in a steady state. If the train never changes velocity we would make $h$ extremely small to avoid having the filtered estimate unduly affected by the noise in the measurement. But in an interesting problem there are almost always changes in state, and we want to react to them quickly. The more quickly we react to them, the more we are affected by the noise in the sensors. \n",
 "\n",
-"I could go on, but my aim is not to develop g-h filter theory here so much as to build insight into how combining measurements and predictions leads to a filtered solution. Tthere is extensive literature on choosing $g$ and $h$ for problems such as this, and there are optimal ways of choosing them to achieve various goals. As I explained earlier it is easy to 'lie' to the filter when experimenting with test data like this. In the subsequent chapters we will learn how the Kalman filter solves this problem in the same basic manner, but with far more sophisticated mathematics. "
+"I could go on, but my aim is not to develop g-h filter theory here so much as to build insight into how combining measurements and predictions leads to a filtered solution. There is extensive literature on choosing $g$ and $h$ for problems such as this, and there are optimal ways of choosing them to achieve various goals. As I explained earlier it is easy to 'lie' to the filter when experimenting with test data like this. In the subsequent chapters we will learn how the Kalman filter solves this problem in the same basic manner, but with far more sophisticated mathematics. "
 ]
 },
 {
@@ -2496,9 +2496,8 @@
 "name": "stdout",
 "output_type": "stream",
 "text": [
-" x = [[ 1. 2.]\n",
-" [ 3. 4.]]\n",
-"dx = [ 10. 12. 0.2]\n"
+" x = [ 3.8 13.2 101.64]\n",
+"dx = [ 8.2 9.8 0.56]\n"
 ]
 }
],
@@ -2508,8 +2507,8 @@
 " \n",
 "f_air = GHFilter (x=x_0, dx=dx_0, dt=1., g=.8, h=.2)\n",
 "f_air.update(z=np.array((2., 11., 102.)))\n",
-"print(' x =', x)\n",
-"print('dx =', dx_0)"
+"print(' x =', f_air.x)\n",
+"print('dx =', f_air.dx)"
 ]
 },
 {
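The corrected cell now prints the filter's updated state rather than stale variables. As a consistency check using only numbers visible in this diff, the printed `x` and `dx` agree with a single g-h update with g=0.8, h=0.2 against z = (2, 11, 102): working backwards from the standard update equations, the implied prior rate comes out to [10, 12, 0.2], matching the old cell's printout. A sketch of that arithmetic in plain NumPy (not FilterPy's internals):

```python
import numpy as np

# Values visible in the diff above.
x_new  = np.array([3.8, 13.2, 101.64])   # printed f_air.x
dx_new = np.array([8.2, 9.8, 0.56])      # printed f_air.dx
z      = np.array([2.0, 11.0, 102.0])
g, h, dt = 0.8, 0.2, 1.0

# Standard g-h update: x_new = x_pred + g*(z - x_pred), dx_new = dx_0 + h*(z - x_pred)/dt.
x_pred   = (x_new - g * z) / (1 - g)     # implied prediction  -> approx [ 11.   22.  100.2]
residual = z - x_pred                    #                     -> approx [ -9.  -11.    1.8]
dx_0     = dx_new - h * residual / dt    # implied prior rate  -> approx [ 10.   12.    0.2]

print(x_pred, residual, dx_0)
```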