More examples and exercises.

Added a new KF that filters a temperature sensor. Discussed what to do if
there is no data for update().
Roger Labbe 2014-05-08 08:20:13 -07:00
parent bcc62c7984
commit 412344fe72


@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
"signature": "sha256:71935257a6cdd6e96f6fc2ae944dd43a6f4b1bd461e175fa7bb4745c9623dcf4"
"signature": "sha256:58455c078849c3b264b0536c1ad78c8c27109be5299e7d16fe969fca4f75c398"
},
"nbformat": 3,
"nbformat_minor": 0,
@ -52,7 +52,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -72,7 +73,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -101,7 +103,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -136,7 +139,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -159,7 +163,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "code",
@ -169,7 +174,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -196,7 +202,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -224,13 +231,14 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Eyeballing this confirms our intuition - no dog moves like this. However, noisy sensor data certainly looks like this. So let's proceed to see how we might solve this mathematically. But how?\n",
"Eyeballing this confirms our intuition - no dog moves like this. However, noisy sensor data certainly looks like this. So let's proceed and try to solve this mathematically. But how?\n",
"\n",
"\n",
"Recall the histogram code for adding a measurement to a pre-existing belief:\n",
@ -297,7 +305,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -331,7 +340,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -366,7 +376,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -384,7 +395,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -408,7 +420,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -468,7 +481,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -512,7 +526,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -588,15 +603,287 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"###More Examples\n",
"In this example I only plotted 10 data points so the output from the print statements would not overwhelm us. Now let's look at the filter's performance with more data. This time we will plot both the output of the filter and the variance.\n",
"\n"
]
},
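{
"cell_type": "markdown",
"metadata": {},
"source": [
"The code in this section relies on the *sense()* and *update()* functions developed earlier in the chapter. As a minimal sketch of their behavior (assuming, as before, that *sense()* multiplies the prior Gaussian by the measurement Gaussian and *update()* adds the movement Gaussian), they look something like this:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# minimal sketch, assuming the 1D Gaussian filter functions defined earlier in the chapter\n",
"def sense(mean, variance, z, z_variance):\n",
"    # multiply the prior Gaussian N(mean, variance) by the measurement Gaussian N(z, z_variance)\n",
"    new_mean = (variance*z + z_variance*mean) / (variance + z_variance)\n",
"    new_var = 1. / (1./variance + 1./z_variance)\n",
"    return (new_mean, new_var)\n",
"\n",
"def update(mean, variance, movement, movement_variance):\n",
"    # add the movement Gaussian to the prior Gaussian\n",
"    return (mean + movement, variance + movement_variance)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},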
{
"cell_type": "code",
"collapsed": false,
"input": [
"%precision 2\n",
"# assume dog is always moving 1m to the right\n",
"movement = 1\n",
"movement_error = 2\n",
"sensor_error = 4.5\n",
"pos = (0, 100) # gaussian N(0,50)\n",
"\n",
"Before I go on, I want to emphasize that this code fully implements a 1D Kalman filter. If you have tried to read the literatue, you are perhaps surprised, because this looks nothing like the complex, endless pages of math in those books. To be fair, the math gets a bit more complicated in multiple dimensions, but not by much. So long as we worry about *using* the equations rather than *deriving* them we can create Kalman filters without a lot of effort. Moreover, I hope you'll agree that you have a decent intuitive grasp of what is happening. We represent our beliefs with Gaussians, and our beliefs get better over time because more measurement means more data to work with. \"Measure twice, cut once!\""
"dog = DogSensor(pos[0], velocity=movement, noise=sensor_error)\n",
"\n",
"zs = []\n",
"ps = []\n",
"vs = []\n",
"\n",
"for i in range(50):\n",
" pos = update(pos[0], pos[1], movement, movement_error) \n",
" Z = dog.sense()\n",
" zs.append(Z)\n",
" vs.append(pos[1])\n",
" \n",
" pos = sense(pos[0], pos[1], Z, sensor_error)\n",
" ps.append(pos[0])\n",
" \n",
"#plt.subplot(121) \n",
"p1, = plt.plot(zs,c='r', linestyle='dashed')\n",
"p2, = plt.plot(ps, c='b')\n",
"plt.legend([p1,p2], ['measurement', 'filter'], 2)\n",
"plt.show()\n",
"\n",
"plt.plot(vs)\n",
"plt.title('Variance')\n",
"plt.show()\n",
"print ([float(\"%0.4f\" % v) for v in vs])"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we can see that the variance converges very quickly to roughly 4.1623 in 10 steps. We interpret this as meaning that we become very confident in our position estimate very quickly. The first few measurements are unsure due to our uncertainty in our guess at the initial position, but the filter is able to quickly determine an accurate estimate."
]
},
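{
"cell_type": "markdown",
"metadata": {},
"source": [
"The value 4.1623 is not arbitrary - it depends only on *movement_error* and *sensor_error*, not on the measurements. As a minimal sketch, assuming the variance recursion used above (the update step adds the movement variance, and the sense step combines it with the sensor variance), we can iterate just the variance portion of the filter and watch it converge to the same fixed point."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# sketch: iterate only the variance recursion to find its fixed point\n",
"P = 100.   # initial position variance, as above\n",
"Q = 2.     # movement_error\n",
"R = 4.5    # sensor_error\n",
"for i in range(50):\n",
"    P = P + Q                  # update (predict) step adds the movement variance\n",
"    prior = P                  # this is the value recorded in vs above\n",
"    P = 1. / (1./P + 1./R)     # sense step combines it with the sensor variance\n",
"print('variance converges to %.4f' % prior)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},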
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"> Before I go on, I want to emphasize that this code fully implements a 1D Kalman filter. If you have tried to read the literatue, you are perhaps surprised, because this looks nothing like the complex, endless pages of math in those books. To be fair, the math gets a bit more complicated in multiple dimensions, but not by much. So long as we worry about *using* the equations rather than *deriving* them we can create Kalman filters without a lot of effort. Moreover, I hope you'll agree that you have a decent intuitive grasp of what is happening. We represent our beliefs with Gaussians, and our beliefs get better over time because more measurement means more data to work with. \"Measure twice, cut once!\"\n",
"\n",
"\n",
"#####Excercise:\n",
"Modify the values of *movement_error* and *sensor_error* and note the effect on the filter and on the variance. Which has a larger effect on the value that variance converges to. For example, which results in a smaller variance:\n",
"\n",
" movement_error = 40\n",
" sensor_error = 2\n",
" \n",
"or:\n",
"\n",
" movement_error = 2\n",
" sensor_error = 40 "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Introduction to Designing a Filter\n",
"So far we have developed our filter based on the dog sensors introduced in the Discrete Bayesian filter chapter. We are used to this problem by now, and may feel ill-equiped to implement a Kalman filter for a different problem. To be honest, there is still quite a bit of information missing from this presentation. The next chapter will fill in the gaps. Still, lets get a feel for it by designing and implementing a Kalman filter for a thermometer. The sensor for the thermometer outputs a voltage that corresponds to the temperature that is being measured. We have read the manufacturer's specifications for the sensor, and it tells us that the sensor exhibits white noise with a standard deviation of 2.13.\n",
"\n",
"We do not have a real sensor to read, so we will simulate the sensor with the following function. We have hard-coded the voltage to 16.3 - obviously the voltage will differ based on the temperature, but that is not important to our filter design."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"temp_variance = 2.13**2\n",
"def volt():\n",
" return random.randn()*temp_variance + 16.3"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We generate white noise with a given variance using the equation *random.randn() * variance*. The specification gives us the standard deviation of the noise, not the variance, but recall that variance is just the square of the standard deviation. Hence we raise 2.13 to the second power.\n",
"> **Sidebar**: spec sheets are just what they sound like - specificiations. Any individual sensor will exhibit different performance based on normal manufacturing variations. Numbers given are often maximums - the spec is a guarantee that the performace will be at least that good. So, our sensor might have standard deviation of 1.8. If you buy an expensive piece of equipment it often comes with a sheet of paper displaying the test results of your specific item; this is usually very trustworthy. On the other hand, if this is a cheap sensor it is likely it received little to no testing prior to being sold. Manufacturers typically test a small subset of their output to verify that everything falls within the desired performance range. If you have a critical application you will need to read the specification sheet carefully to figure out exactly what they mean by their ranges. Do they guarantee their number is a maximum, or is it, say, the $3\\sigma$ error rate? Is every item tested? Is the variance normal, or some other distribution. Finally, manufacturing is not perfect. Your part might be defective and not match the performance on the sheet.\n",
"\n",
"> For example, I just randomly looked up a data sheet for an airflow sensor. There is a field \"Repeatability\", with the value \"$\\pm0.50\\%$ Reading\". Is this a Gaussian? Is there a bias? For example, perhaps the repeatibility is nearly $0.0\\%$ at low temperatures, and always nearly $+0.50$ at high temperatures. Data sheets for electrical components often contain a section of \"Typical Performance Characteristics\". These are used to capture information that cannot be easily conveyed in a table. For example, I am looking at a chart showing output voltage vs current for a LM555 timer. There are three curves showing the performance at different temperatures. The response is ideally linear, but all three lines are curved. This clarifies that errors in voltage outputs are probably not Gaussian - in this chip's case higher temperatures leads to lower voltage output, and the voltage output is quite nonlinear if the input current is very high. \n",
"\n",
"> As you might guess, modeling the performance of your sensors is one of the harder parts of creating good Kalman filter. \n",
"\n",
"Now we need to write the Kalman filter processing loop. As with our previous problem, we need to perform a cycle of sensing and updating. The sensing step probably seems clear - call *volt()* to get the measurement, pass the result into *sense()* function, but what about the update step? We do not have a sensor to detect 'movement' in the voltage, and for any small duration we expect the voltage to remain constant. How shall we handle this?\n",
"\n",
"As always, we will trust in the math. We have no movement, and no error associated with them, so we will just set both to zero. Let's see what happens. "
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"sensor_error = temp_variance\n",
"movement_error = 0\n",
"movement = 0\n",
"voltage = (25,1000) #who knows what the first value is?\n",
"\n",
"zs = []\n",
"ps = []\n",
"vs = []\n",
"N=50\n",
"\n",
"for i in range(N):\n",
" Z = volt()\n",
" zs.append(Z)\n",
" \n",
" voltage = sense(voltage[0], voltage[1], Z, sensor_error)\n",
" ps.append(voltage[0])\n",
" vs.append(voltage[1])\n",
"\n",
" voltage = update(voltage[0], voltage[1], movement, movement_error)\n",
"\n",
"plt.scatter(range(N), zs,marker='+')\n",
"p1, = plt.plot(ps, c='g')\n",
"plt.legend([p1], ['filter'], 3)\n",
"plt.xlim((0,N));plt.ylim((0,30))\n",
"plt.show()\n",
"plt.plot(vs)\n",
"plt.title('Variance')\n",
"plt.show()\n",
"print('Variance converges to',vs[-1])\n",
"print('Last voltage is',voltage[0])"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The first plot shows the individual sensor measurements marked with '+'s vs the filter output. Despite a lot of noise in the sensor we quickly discover the approximate voltage of the sensor. In the run I just completed at the time of authorship, the last voltage output from the filter is $16.213$, which is quite close to the $16.4$ used by the *volt()* function. On other runs I have gotten up to around $16.9$ as an output and also as low as 15.5 or so.\n",
"\n",
"The second plot shows how the variance converges over time. Compare this plot to the variance plot for the dog sensor. While this does converge to a very small value, it is much slower than the dog problem. The next section **Explaining the Results - Multi-Sensor Fusion** explains why this happens.\n",
"\n",
"##### Exercise(optional):\n",
"Write a function that runs the Kalman filter many times and record what value the voltage converges to each time. Plot this as a histogram. After 10,000 runs do the results look normally distributed? Does this match your intuition of what should happen?\n",
"\n",
"> use plt.hist(data,bins=100) to plot the histogram. "
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"#Your code here"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"######Solution\n"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"sensor_error = temp_variance\n",
"\n",
"def VKF():\n",
" voltage=(14,1000)\n",
" for i in range(N):\n",
" Z = volt()\n",
" voltage = sense(voltage[0], voltage[1], Z, sensor_error)\n",
" return voltage[0]\n",
"\n",
"vs = []\n",
"for i in range (10000):\n",
" vs.append (VKF())\n",
"plt.hist(vs, bins=100) \n",
"plt.show()"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"######Discussion\n",
"The results do in fact look like a normal distribution. Each voltage is Gaussian, and the **Central Limit Theorem** guarantees that a large number of Gaussians is normally distributed. We will discuss this more in a subsequent math chapter."
]
},
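{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Central Limit Theorem is even stronger than we need here - the individual random variables do not have to be Gaussian. As an illustrative sketch (assuming *random* and *plt* refer to numpy.random and matplotlib.pyplot, as elsewhere in this notebook), the code below sums uniformly distributed random numbers; the histogram of the sums is already close to a bell curve."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# sketch: sums of non-Gaussian (uniform) random numbers tend toward a Gaussian\n",
"import numpy.random as random\n",
"import matplotlib.pyplot as plt\n",
"sums = [sum(random.uniform(0., 1., 12)) for i in range(10000)]\n",
"plt.hist(sums, bins=100)\n",
"plt.title('sums of 12 uniform random numbers')\n",
"plt.show()"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},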
{
"cell_type": "markdown",
"metadata": {},
"source": [
"###Explaining the Results - Multi-Sensor Fusion\n",
"\n",
"So how does the Kalman filter do so well? I have glossed over one aspect of the filter as it becomes confusing to address too many points at the same time. We will return to the dog tracking problem. We used two sensors to track the dog - the RFID sensor that detects position, and the inertial tracker that tracked movement. However, we have focussed all of our attention on the position sensor. Let's change focus and see how the filter performs if the intertial tracker is also noisy. This will provide us with an vital insight into the performance of Kalman filters."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"sensor_error = 30\n",
"movement_sensor = 30\n",
"pos = (0,500)\n",
"\n",
"dog = DogSensor(0, velocity=movement, noise=sensor_error)\n",
"\n",
"zs = []\n",
"ps = []\n",
"vs = []\n",
"\n",
"for i in range(100):\n",
" Z = dog.sense()\n",
" zs.append(Z)\n",
" \n",
" pos = sense(pos[0], pos[1], Z, sensor_error)\n",
" ps.append(pos[0])\n",
" vs.append(pos[1])\n",
"\n",
" pos = update(pos[0], pos[1], movement+ random.randn(), movement_error)\n",
"\n",
"p1, = plt.plot(zs,c='r', linestyle='dashed')\n",
"p2, = plt.plot(ps, c='b')\n",
"plt.legend([p1,p2], ['measurement', 'filter'], 2)\n",
"plt.show()\n",
"plt.plot(vs)\n",
"plt.title('Variance')\n",
"plt.show()"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This result is worse than the example where only the measurement sensor was noisy. Instead of being mostly straight, this time the filter's output is distintly jagged. But, it still mostly tracks the dog. What is happening here?\n",
"\n",
"This illustrates the effects of *multi-sensor fusion*. Suppose we get a position reading of -28.78 followed by 31.43. From that information alone it is impossible to tell if the dog is standing still during very noisy measurements, or perhaps sprinting from -29 to 31 and being accurately measured. But we have a second source of information, his velocity. Even when the velocity is also noisy, it constrains what our beliefs might be. For example, suppose that with the 31.43 position reading we get a velocity reading of 59. That matches the difference between the two positions quite well, so this will lead us to believe the RFID sensor and the velocity sensor. Now suppose we got a velocity reading of 1.7. This doesn't match our RFID reading very well - it suggests that the dog is standing still or moving slowly.\n",
"\n",
"When sensors measure different aspects of the system and they all agree we have strong evidence that the sensors are accurate. And when they do not agree it is a strong indication that one or more of them are inaccurate. \n",
"\n",
"We will formalize this mathematically in the next chapter; for now trust this intuitive explanation. We use this sort of reasoning every day in our lives. If one person tells us something that seems far fetched we are inclined to doubt them. But if several people independently relay the same information we attach higher credence to the data. If one person disagrees with several other people, we tend to distrust the outlier. If we know the people that might alter our belief. If a friend is inclined to practical jokes and tall tales we may put very little trust in what they say. If one lawyer and three lay people opine on some fact of law, and the lawyer disagees with the three you'll probably lend more credence to what the lawyer says because of her expertise. In the next chapter we will learn how to mathematicall model this sort of reasoning."
]
},
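{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can make this intuition concrete with a tiny sketch. Assume, as *sense()* does, that two Gaussian beliefs are combined with the standard product-of-Gaussians formula, and (purely for illustration) give both beliefs a variance of 10. The first belief is the previous position plus the velocity reading; the second is the RFID reading of 31.43. When the velocity reading agrees (59) the combined estimate stays near 31; when it disagrees (1.7) the estimate is pulled well away from the RFID reading."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# sketch: combining two Gaussian beliefs with the product-of-Gaussians formula\n",
"def multiply(m1, v1, m2, v2):\n",
"    mean = (v2*m1 + v1*m2) / (v1 + v2)\n",
"    var = 1. / (1./v1 + 1./v2)\n",
"    return (mean, var)\n",
"\n",
"# previous position -28.78, RFID reading 31.43, illustrative variances of 10\n",
"print(multiply(-28.78 + 59.0, 10., 31.43, 10.))  # velocity agrees with the RFID reading\n",
"print(multiply(-28.78 + 1.7, 10., 31.43, 10.))   # velocity disagrees with the RFID reading"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},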
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### More examples\n",
"\n",
"##### Example: Extreme Amounts of Noise\n",
"So I didn't put a lot of noise in the signal, and I also 'correctly guessed' that the dog was at position 0. How does the filter perform in real world conditions? Let's explore and find out. I will start by injecting a lot of noise in the RFID sensor. I will inject an extreme amount of noise - noise that apparently swamps the actual measurement. What does your intution tell about how the filter will perform if the noise is allowed to be anywhere from -300 or 300. In other words, an actual position of 1.0 might be reported as 287.9, or -189.6, or any other number in that range. Think about it before you scroll down."
]
@ -631,7 +918,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -678,7 +966,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -725,7 +1014,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -767,7 +1057,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -778,57 +1069,6 @@
"200 iterations may seem like a lot, but the amount of noise we are injecting is truly huge. In the real world we use sensors like thermometers, laser rangefinders, GPS satellites, computer vision, and so on. None have the enormous error as shown here. A reasonable value for the variance for a cheap thermometer might be 10, for example, and our code is using 30,000 for the variance. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"###Explaining the Results - Multi-Sensor Fusion\n",
"\n",
"So how does the Kalman filter do so well? I have glossed over one aspect of the filter as it becomes confusing to address too many points at the same time. In these examples we have two sensors even though we have only been talking about the RFID sensor. The second sensor measures our dog's movement using an intertial tracker. How does our filter perform if that tracker is also noisy? Let's see:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"sensor_error = 30\n",
"movement_sensor = 30\n",
"pos = (0,500)\n",
"\n",
"dog = DogSensor(0, velocity=movement, noise=sensor_error)\n",
"\n",
"zs = []\n",
"ps = []\n",
"\n",
"for i in range(100):\n",
" Z = dog.sense()\n",
" zs.append(Z)\n",
" \n",
" pos = sense(pos[0], pos[1], Z, sensor_error)\n",
" ps.append(pos[0])\n",
"\n",
" pos = update(pos[0], pos[1], movement+ random.randn()*2, movement_error)\n",
"\n",
"p1, = plt.plot(zs,c='r', linestyle='dashed')\n",
"p2, = plt.plot(ps, c='b')\n",
"plt.legend([p1,p2], ['measurement', 'filter'], 2)\n",
"plt.show()"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This result is worse than the example where only the measurement sensor was noisy. Instead of being mostly straight, this time the filter's output is distintly jagged. But, it still mostly tracks the dog. What is happening here?\n",
"\n",
"This illustrates the effects of *multi-sensor fusion*. Suppose the dog is actually at 10.0, and we get subsequent measurement readings of -289.78 and 301.43. From that information alone it is impossible to tell if the dog is standing still during very noisy measurements, or perhaps sprinting from -289 to 301 and being accurately measured. But we have a second source of information, his velocity. Even when the velocity is also noisy, it constrains what our beliefs might be. For example, suppose with the readings of -289.78 to 301.43 we get a velocity reading of 590. That matches the difference between the two positions quite well, so this will lead us to believe the RFID sensor and the velocity sensor. Now suppose we got a velocity reading of 1.7. This doesn't match our RFID reading very well. Finally, suppose the velocity reading was -678.8. This completely contradicts the RFID reading - we may not be sure from these few values which sensor is most inaccurate, but perhaps by now you will trust that the Gaussians expressing our beliefs will correctly handle these cases. It's a bit hard to talk about while working with 1D problems, so we will take this topic up in great detail in the next chapter where we develop multidimensional Kalman filters.\n",
"\n",
"Besides that aspect, we are modelling the noise in our sensors using Gaussians which model their real world performance. We are multiplying the Gaussians (probabilities) when we get a new position measurement, adding the Gaussians when we get a movement update. This is algorithmically correct (this is how the histogram filter works) and mathematically correct - why wouldn't it work if our model is correct? Think back to the Discrete Bayes filter that we developed and you'll realize that it is the same logic and algorithm. "
]
},
{
"cell_type": "markdown",
"metadata": {},
@ -856,7 +1096,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -901,7 +1142,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -917,7 +1159,7 @@
"\n",
"Implement a Kalman filter that uses the following *sin()* to generate the measurement value for i in range(100):\n",
"\n",
" Z = math.sin(i/3.) # no noise, perfect data!\n",
" Z = math.sin(i/3.) * 2\n",
" \n",
"Adust the variance and initial positions to see the effect. What is, for example, the result of a very bad initial guess?"
]
@ -930,7 +1172,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -968,7 +1211,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -977,6 +1221,7 @@
"######Discussion\n",
"\n",
"Here we set a bad initial guess of 100. We can see that the filter never 'acquires' the signal. Note now the peak of the filter output always lags the peak of the signal by a small amount. More clearely we can see the large gap in height between the measurement and filter. \n",
"**REWriTE - not seeing heigh gap now**\n",
"\n",
"Maybe we just didn't adjust things 'quite right'. After all, the output looks like a sin wave, it is just offset in $x$ and $y$. Let's test this assumption.\n",
"\n",
@ -992,7 +1237,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -1030,7 +1276,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
@ -1052,6 +1299,52 @@
"If you understand this, you will be able to understand multidimensional Kalman filters and the various extensions that have been make on them. If you do not fully understand this, I strongly suggest rereading this chapter. Try implementing the filter from scratch, just by looking at the equations and reading the text. Change the constants. Maybe try to implement a different tracking problem, like tracking stock prices. Experimentation will build your intuition and understanding of how these marvelous filters work."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"def volt():\n",
" return random.randn()*4. + 14.4\n",
"\n",
"sensor_error = 30\n",
"movement_error = 2\n",
"pos = (12,500)\n",
"\n",
"zs = []\n",
"ps = []\n",
"vs = []\n",
"\n",
"\n",
"for i in range(100):\n",
" Z = volt()\n",
" zs.append(Z)\n",
" \n",
" pos = sense(pos[0], pos[1], Z, sensor_error)\n",
" ps.append(pos[0])\n",
" vs.append(pos[1])\n",
" #pos = update(pos[0], pos[1], 0, movement_error)\n",
"\n",
"\n",
"p1, = plt.plot(zs,c='r', linestyle='dashed')\n",
"p2, = plt.plot(ps, c='b')\n",
"plt.legend([p1,p2], ['measurement', 'filter'], 3)\n",
"plt.show()\n",
"plt.plot(vs)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
"metadata": {},
@ -1075,7 +1368,8 @@
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
}
],
"metadata": {}