Spelling corrections.
GitHub issue #52 - it wasn't possible to accept the merge because notebooks don't merge well after cells are run.
This commit is contained in:
parent
0cc841f68c
commit
63669a8c43
@ -828,7 +828,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"That is pretty good! There is a lot of data here, so let's talk about how to interpret it. The thick green line shows the estimate from the filter. It starts at day 0 with the inital guess of 160 lbs. The red line shows the prediction that is made from the previous day's weight. So, on day one the previous weight was 160 lbs, the weight gain is 1 lb, and so the first prediction is 161 lbs. The estimate on day one is then part way between the prediction and measurement at 159.8 lbs. Above the chart is a print out of the previous weight, predicted weight, and new estimate for each day. Finally, the thin black line shows the actual weight gain of the person being weighed. \n",
"That is pretty good! There is a lot of data here, so let's talk about how to interpret it. The thick green line shows the estimate from the filter. It starts at day 0 with the initial guess of 160 lbs. The red line shows the prediction that is made from the previous day's weight. So, on day one the previous weight was 160 lbs, the weight gain is 1 lb, and so the first prediction is 161 lbs. The estimate on day one is then part way between the prediction and measurement at 159.8 lbs. Above the chart is a print out of the previous weight, predicted weight, and new estimate for each day. Finally, the thin black line shows the actual weight gain of the person being weighed. \n",
"\n",
"The estimates are not a straight line, but they are straighter than the measurements and somewhat close to the trend line we created. Also, it seems to get better over time. \n",
"\n",
@ -936,7 +936,7 @@
"gain_rate = gain_rate\n",
"```\n",
" \n",
"This obviously has no effect, and can be removed. I wrote this to emphasize that in the prediction step you need to predict next value for **all** variables, both *weight* and *gain_rate*. In this case we are assuming that the the gain does not vary, but when we generalize this algorithm we will remove that assumption. "
"This obviously has no effect, and can be removed. I wrote this to emphasize that in the prediction step you need to predict next value for **all** variables, both *weight* and *gain_rate*. In this case we are assuming that the gain does not vary, but when we generalize this algorithm we will remove that assumption. "
]
},
{
@ -996,7 +996,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Let me introduce some more formal terminology. The predict step is known as **sytem propagation**. The *system* is whatever we are estimating - in this case my weight. We *propogate* it into the future. Some texts call this the **evolution**. It means the same thing. The update step is usually known as the **measurement update**. One iteration of the system propagation and measurement update is known as an **epoch**. \n",
"Let me introduce some more formal terminology. The predict step is known as **system propagation**. The *system* is whatever we are estimating - in this case my weight. We *propogate* it into the future. Some texts call this the **evolution**. It means the same thing. The update step is usually known as the **measurement update**. One iteration of the system propagation and measurement update is known as an **epoch**. \n",
"\n",
"Now let's explore a few different problem domains to better understand this algorithm. Consider the problem of trying to track a train on a track. The track constrains the position of the train to a very specific region. Furthermore, trains are large and slow. It takes them many minutes to slow down or speed up significantly. So, if I know that the train is at kilometer marker 23 km at time t and moving at 18 kph, I can be extremely confident in predicting its position at time t + 1 second. And why is that important? Suppose we can only measure its position with an accuracy of $\\pm$ 250 meters. The train is moving at 18 kph, which is 5 meters per second. So at t+1 second the train will be at 23.005 km yet the measurement could be anywhere from 22.755 km to 23.255 km. So if the next measurement says the position is at 23.4 we know that must be wrong. Even if at time t the engineer slammed on the brakes the train will still be very close to 23.005 km because a train cannot slow down very much in 1 second. If we were to design a filter for this problem (and we will a bit further in the chapter!) we would want to design a filter that gave a very high weighting to the prediction vs the measurement. \n",
"\n",
@ -1262,7 +1262,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Excercise - Create arrays\n",
"### Exercise - Create arrays\n",
"\n",
"I want you to create a NumPy array of 10 elements with each element containing 1/10. There are several ways to do this; try to implement as many as you can think of. "
]
@ -1949,7 +1949,7 @@
"\n",
"If you really want to test yourself, read the next paragraph and try to predict the results before you move the sliders. \n",
"\n",
"Some things to try include setting $g$ and $h$ to their miminum values. See how perfectly the filter tracks the data! This is only because we are perfectly predicting the weight gain. Adjust $\\dot{x}$ to larger or smaller than 5. The filter should diverge from the data and never reacquire it. Start adding back either $g$ or $h$ and see how the filter snaps back to the data. See what the difference in the line is when you add only $g$ vs only $h$. Can you explain the reason for the difference? Then try setting $g$ greater than 1. Can you explain the results? Put $g$ back to a reasonable value (such as 0.1), and then make $h$ very large. Can you explain these results? Finally, set both $g$ and $h$ to their largest values. \n",
"Some things to try include setting $g$ and $h$ to their minimum values. See how perfectly the filter tracks the data! This is only because we are perfectly predicting the weight gain. Adjust $\\dot{x}$ to larger or smaller than 5. The filter should diverge from the data and never reacquire it. Start adding back either $g$ or $h$ and see how the filter snaps back to the data. See what the difference in the line is when you add only $g$ vs only $h$. Can you explain the reason for the difference? Then try setting $g$ greater than 1. Can you explain the results? Put $g$ back to a reasonable value (such as 0.1), and then make $h$ very large. Can you explain these results? Finally, set both $g$ and $h$ to their largest values. \n",
" \n",
"If you want to explore with this more, change the value of the array `zs` to the values used in any of the charts above and rerun the cell to see the result."
]
@ -2329,7 +2329,7 @@
"source": [
"There are two lessons to be learned here. First, use the $h$ term to respond to changes in velocity that you are not modeling. But, far more importantly, there is a trade off here between responding quickly and accurately to changes in behavior and producing ideal output for when the system is in a steady state that you have. If the train never changes velocity we would make $h$ extremely small to avoid having the filtered estimate unduly affected by the noise in the measurement. But in an interesting problem there are almost always changes in state, and we want to react to them quickly. The more quickly we react to them, the more we are affected by the noise in the sensors. \n",
"\n",
"I could go on, but my aim is not to develop g-h filter theory here so much as to build insight into how combining measurements and predictions leads to a filtered solution. Tthere is extensive literature on choosing $g$ and $h$ for problems such as this, and there are optimal ways of choosing them to achieve various goals. As I explained earlier it is easy to 'lie' to the filter when experimenting with test data like this. In the subsequent chapters we will learn how the Kalman filter solves this problem in the same basic manner, but with far more sophisticated mathematics. "
"I could go on, but my aim is not to develop g-h filter theory here so much as to build insight into how combining measurements and predictions leads to a filtered solution. There is extensive literature on choosing $g$ and $h$ for problems such as this, and there are optimal ways of choosing them to achieve various goals. As I explained earlier it is easy to 'lie' to the filter when experimenting with test data like this. In the subsequent chapters we will learn how the Kalman filter solves this problem in the same basic manner, but with far more sophisticated mathematics. "
]
},
{
@ -2349,7 +2349,7 @@
"\n",
" pip install filterpy\n",
" \n",
"Read Appendix A for more information on installing or downloding FilterPy from GitHub.\n",
"Read Appendix A for more information on installing or downloading FilterPy from GitHub.\n",
"\n",
"To use the g-h filter import it and create an object from the class `GHFilter`. "
]