Better code formatting with ```python

There is more in this commit than that. I had a bad commit of a bunch of temporary
files, so I had to reset back in time, and now I am doing a massive commit.
Sorry.
This commit is contained in:
Roger Labbe 2015-07-18 08:43:24 -07:00
parent cc24dd6291
commit 6092e06459
22 changed files with 1022 additions and 890 deletions

View File

@ -750,16 +750,19 @@
"source": [
"Now let's try a randomly chosen number to scale our estimate: $\\frac{4}{10}$. Our estimate will be four tenths the measurement and the rest will be from the prediction. In other words, we are expressing a belief here, a belief that the prediction is somewhat more likely to be correct than the measurement. We compute that as\n",
"\n",
"$$ new\\_estimate = prediction + \\frac{4}{10}(measurement - prediction)\n",
"$$\n",
"$$\\mathtt{new\\_estimate} = \\mathtt{prediction} + \\frac{4}{10}(\\mathtt{measurement} - \\mathtt{prediction})$$\n",
"\n",
"The difference between the measurement and prediction is called the *residual*, which is depicted by the black vertical line in the plot above. This will become an important value to use later on, as it is an exact computation of the difference between measurements and the filter's output. Smaller residuals imply better performance.\n",
"\n",
"$$\\begin{cases}\n",
"x=0 & \\text{hi there} \\\\\n",
"x = 1 & \\text{good bye}\n",
"\\end{cases}$$\n",
"Let's code that and see the results when we test it against the series of weights from above. We have to take into account one other factor. Weight gain has units of lbs/time, so to be general we will need to add a time step $t$, which we will set to 1 (day). \n",
"\n",
"I hand generated the weight data to correspond to a true starting weight of 160 lbs, and a weight gain of 1 lb per day. In other words on the first day (day zero) the true weight is 160lbs, on the second day (day one, the first day of weighing) the true weight is 161 lbs, and so on. \n",
"\n",
"We need to make a guess for the initial weight to feed into the filter. It is too early to talk about strategies for making that initial guess/estimate, so for now I will assume 159 lbs."
"We need to make a guess for the initial weight. It is too early to talk about initialization strategies, so for now I will assume 159 lbs."
]
},
{
@ -895,7 +898,7 @@
"\n",
"So, should we set the new gain/day to 4.4 lbs? Yesterday we though the weight gain was 1 lb, today we think it is 4.4 lbs. We have two numbers, and want to combine them somehow. Hmm, sounds like our same problem again. Let's use our same tool, and the only tool we have so far - pick a value part way between the two. This time I will use another arbitrarily chosen number, $\\frac{1}{3}$. The equation is identical as for the weight estimate except we have to incorporate time because this is a rate (gain/day):\n",
"\n",
"$$new gain = old gain + \\frac{1}{3}\\frac{measurement - predicted~weight}{1~ day}\n",
"$$\\text{new gain} = \\text{old gain} + \\frac{1}{3}\\frac{\\text{measurement - predicted weight}}{1 \\text{ day}}\n",
"$$"
]
},
@ -964,7 +967,8 @@
"\n",
"One final point before we go on. In the prediction step I wrote the line\n",
"\n",
" gain_rate = gain_rate\n",
"```python\n",
"gain_rate = gain_rate```\n",
" \n",
"This obviously has no effect, and can be removed. I wrote this to emphasize that in the prediction step you need to predict next value for **all** variables, both *weight* and *gain_rate*. In this case we are assuming that the the gain does not vary, but when we generalize this algorithm we will remove that assumption. "
]
@ -1385,17 +1389,18 @@
"source": [
"In the example above, I explicitly coded this to solve the weighing problem that we've been discussing throughout the chapter. For example, the variables are named \"weight_scale\", \"gain\", and so on. I did this to make the algorithm easy to follow - you can easily see that we correctly implemented each step. But, that is code written for exactly one problem, and the algorithm is the same for any problem. So let's rewrite the code to be generic - to work with any problem. Use this function signature:\n",
"\n",
" def g_h_filter(data, x0, dx, g, h, dt):\n",
" \"\"\"\n",
" Performs g-h filter on 1 state variable with a fixed g and h.\n",
"```python\n",
"def g_h_filter(data, x0, dx, g, h, dt):\n",
" \"\"\"\n",
" Performs g-h filter on 1 state variable with a fixed g and h.\n",
"\n",
" 'data' contains the data to be filtered.\n",
" 'x0' is the initial value for our state variable\n",
" 'dx' is the initial change rate for our state variable\n",
" 'g' is the g-h's g scale factor\n",
" 'h' is the g-h's h scale factor\n",
" 'dt' is the length of the time step \n",
" \"\"\"\n",
" 'data' contains the data to be filtered.\n",
" 'x0' is the initial value for our state variable\n",
" 'dx' is the initial change rate for our state variable\n",
" 'g' is the g-h's g scale factor\n",
" 'h' is the g-h's h scale factor\n",
" 'dt' is the length of the time step \n",
" \"\"\"```\n",
"\n",
"Return the data as a NumPy array, not a list. Test it by passing in the same weight data as before, plot the results, and visually determine that it works."
]
@ -2074,18 +2079,21 @@
"\n",
"First, let's simulate the situation without a filter. We will assume that the train is currently at kilometer 23, and moving at 15 m/s. We can code this as \n",
"\n",
" pos = 23*1000\n",
" vel = 15\n",
" \n",
"```python\n",
"pos = 23*1000\n",
"vel = 15```\n",
"\n",
"Now we can compute the position of the train at some future time, *assuming* no change in velocity, with\n",
"\n",
" def compute_new_position(pos, vel, dt=1):\n",
" return pos + (vel * dt)\n",
" \n",
"```python\n",
"def compute_new_position(pos, vel, dt=1):\n",
" return pos + (vel * dt)```\n",
"\n",
"We can simulate the measurement by adding in some random noise to the position. Here our error is 500m, so the code might look like:\n",
"\n",
" def measure_position(pos):\n",
" return pos + random.randn()*500\n",
"```python\n",
"def measure_position(pos):\n",
" return pos + random.randn()*500```\n",
" \n",
"Let's put that in a cell and plot the results of 100 seconds of simulation. I will use NumPy's `asarray` function to convert the data into an NumPy array. This will allow me to divide all of the elements of the array at once by using the '/' operator."
]
@ -2564,7 +2572,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,

View File

@ -1761,13 +1761,13 @@
"$$P(x_i|Z) = \\frac{P(Z|x_i) P(x_i)}{P(Z)}$$\n",
"\n",
"That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $P(Z|x_i)$. This is the probability for the measurement at every cell $x_i$. $P(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function, where `belief` is our prior $P(x_i)$.\n",
" \n",
"\n",
" for i, val in enumerate(map_):\n",
" if val == z:\n",
" belief[i] *= correct_scale\n",
" else:\n",
" belief[i] *= 1.\n",
"```python\n",
"for i, val in enumerate(map_):\n",
" if val == z:\n",
" belief[i] *= correct_scale\n",
" else:\n",
" belief[i] *= 1.```\n",
"\n",
"I added the `else` here, which has no mathematical effect, to point out that every element in $x$ (called `belief` in the code) is multiplied by a probability. You may object that I am multiplying by a scale factor, which I am, but this scale factor is derived from the probability of the measurement being correct vs the probability being incorrect.\n",
"\n",
@ -1792,10 +1792,11 @@
"\n",
"That equation is called the *total probability theorem*. Quoting from Wikipedia [6] \"It expresses the total probability of an outcome which can be realized via several distinct events\". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation\n",
"\n",
" for i in range(N):\n",
" for k in range (kN):\n",
" index = (i + (width-k) - offset) % N\n",
" result[i] += prob_dist[index] * kernel[k]"
"```python\n",
"for i in range(N):\n",
" for k in range (kN):\n",
" index = (i + (width-k) - offset) % N\n",
" result[i] += prob_dist[index] * kernel[k]```"
]
},
{
@ -1882,7 +1883,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,

View File

@ -907,14 +907,15 @@
"\n",
"> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it.\n",
"\n",
" %load -s gaussian stats.py\n",
" \n",
" def gaussian(x, mean, var):\n",
" \"\"\"returns normal distribution for x given a \n",
" gaussian with the specified mean and variance. \n",
" \"\"\"\n",
" return np.exp((-0.5*(x-mean)**2)/var) / \\\n",
" np.sqrt(_two_pi*var)"
"```python\n",
"%load -s gaussian stats.py\n",
"\n",
"def gaussian(x, mean, var):\n",
" \"\"\"returns normal distribution for x given a \n",
" gaussian with the specified mean and variance. \n",
" \"\"\"\n",
" return np.exp((-0.5*(x-mean)**2)/var) / \\\n",
" np.sqrt(_two_pi*var)```"
]
},
{
@ -1005,7 +1006,7 @@
"\n",
"The standard notation for a normal distribution for a random variable $X$ is $X \\sim\\ \\mathcal{N}(\\mu,\\sigma^2)$ where $\\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as\n",
"\n",
"$$temp \\sim \\mathcal{N}(22,4)$$\n",
"$$\\text{temp} \\sim \\mathcal{N}(22,4)$$\n",
"\n",
"This is an **extremely important** result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\\mu=22$ and $\\sigma^2=4$ I can compute the distribution of measurements for over any range."
]
@ -1538,7 +1539,8 @@
"\n",
"The code for rand_student_t is included in `filterpy.stats`. You may use it with\n",
"\n",
" from filterpy.stats import rand_student_t"
"```python\n",
"from filterpy.stats import rand_student_t```"
]
},
{
@ -1599,7 +1601,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,

View File

@ -271,15 +271,6 @@
"load_style()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
@ -360,13 +351,15 @@
"\n",
"Our model assumes that the velocity of our dog is constant, but we acknowledged that things like hills and obstacles make that very unlikely to be true. We can simulate this by adding a randomly generated value to the velocity each time we compute the dog's position.\n",
"\n",
" noisy_vel = vel + randn()*velocity_std\n",
" x = x + noisy_vel*dt\n",
"```python\n",
"noisy_vel = vel + randn()*velocity_std\n",
"x = x + noisy_vel*dt```\n",
" \n",
"\n",
"To model the sensor we need to take the current position of the dog and then add some noise to it proportional to the Gaussian of the measurement noise. This is simply\n",
"\n",
" z = x + randn()*measurement_std\n",
"```python\n",
"z = x + randn()*measurement_std```\n",
" \n",
"I don't like the bookkeeping required to keep track of the various values for position, velocity, and standard deviations so I will implement the simulation as a class. "
]
@ -732,25 +725,28 @@
"\n",
"Recall the discrete Bayes code for adding a measurement to a preexisting belief:\n",
"\n",
" def update(pos, measure, p_hit, p_miss):\n",
" q = array(pos, dtype=float)\n",
" for i in range(len(hallway)):\n",
" if hallway[i] == measure:\n",
" q[i] = pos[i] * p_hit\n",
" else:\n",
" q[i] = pos[i] * p_miss\n",
" normalize(q)\n",
" return q\n",
"```python\n",
"def update(pos, measure, p_hit, p_miss):\n",
" q = array(pos, dtype=float)\n",
" for i in range(len(hallway)):\n",
" if hallway[i] == measure:\n",
" q[i] = pos[i] * p_hit\n",
" else:\n",
" q[i] = pos[i] * p_miss\n",
" normalize(q)\n",
" return q```\n",
" \n",
"Note that the algorithm is essentially computing:\n",
"\n",
" new_belief = prior_belief * measurement * sensor_error\n",
"```python\n",
"new_belief = prior_belief * measurement * sensor_error```\n",
" \n",
"The measurement term might not be obvious, but recall that measurement in this case was always 1 or 0, and so it was left out for convenience. Here *prior* carries the both the colloquial sense of *previoius*, but also the Bayesian meaning. In Bayesian terms the *prior* is the probability after the prediction and before the measurements have been incorporated.\n",
" \n",
"If we are implementing this with Gaussians, we might expect it to be implemented as:\n",
"\n",
" new_gaussian = measurement * old_gaussian\n",
"```python\n",
"new_gaussian = measurement * old_gaussian```\n",
" \n",
"where measurement is a Gaussian returned from the sensor. But does that make sense? Can we multiply Gaussians? If we multiply a Gaussian with a Gaussian is the result another Gaussian, or something else?"
]
@ -1067,9 +1063,10 @@
"source": [
"Perhaps this would be clearer if we used more specific names:\n",
"\n",
" def update_dog(dog_pos, dog_variance, measurement, measurement_variance):\n",
" return multiply(dog_pos, dog_variance, \n",
" measurement, measurement_variance)"
"```python\n",
"def update_dog(dog_pos, dog_variance, measurement, measurement_variance):\n",
" return multiply(dog_pos, dog_variance, \n",
" measurement, measurement_variance)```"
]
},
{
@ -1155,17 +1152,17 @@
"\n",
"How do we perform the predict function with Gaussians? Recall the discrete Bayes method:\n",
"\n",
" def predict(pos, move, p_correct, p_under, p_over):\n",
" n = len(pos)\n",
" result = array(pos, dtype=float)\n",
" for i in range(n):\n",
" result[i] = \\\n",
" pos[(i-move) % n] * p_correct + \\\n",
" pos[(i-move-1) % n] * p_over + \\\n",
" pos[(i-move+1) % n] * p_under \n",
" return result\n",
" \n",
" \n",
"```python\n",
"def predict(pos, move, p_correct, p_under, p_over):\n",
" n = len(pos)\n",
" result = array(pos, dtype=float)\n",
" for i in range(n):\n",
" result[i] = \\\n",
" pos[(i-move) % n] * p_correct + \\\n",
" pos[(i-move-1) % n] * p_over + \\\n",
" pos[(i-move+1) % n] * p_under \n",
" return result```\n",
"\n",
"In a nutshell, we shift the probability vector by the amount we believe the animal moved, and adjust the probability. How do we do that with Gaussians?\n",
"\n",
"It turns out that we add them. Think of the case without Gaussians. I think my dog is at 7.3 meters, and he moves 2.6 meters to right, where is he now? Obviously, 7.3 + 2.6 = 9.9. He is at 9.9 meters. Abstractly, the algorithm is `new_pos = old_pos + dist_moved`. It does not matter if we use floating point numbers or Gaussians for these values, the algorithm must be the same. \n",
@ -1329,17 +1326,19 @@
"source": [
"Now let's walk through the code and output.\n",
"\n",
" pos = (0., 400.) # gaussian N(0, 400)\n",
"\n",
"```python\n",
"pos = (0., 400.) # gaussian N(0, 400)```\n",
" \n",
"Here we set the initial position at 0 m. We set the variance to 400 m$^2$, which is a standard deviation of 20 meters. You can think of this as saying \"I believe with 99% accuracy the position is 0 plus or minus 60 meters\". This is because with Gaussians ~99% of values fall within 3$\\sigma$ of the mean.\n",
"\n",
" velocity = 1.\n",
"```python\n",
"velocity = 1.```\n",
" \n",
"So how do I know the velocity? Magic? Consider it a prediction, or perhaps we have a secondary velocity sensor. If this is a robot then this would be a control input to the robot. In the next chapter we will learn how to handle situations where you don't have a velocity sensor or input, so please accept this simplification for now.\n",
"\n",
" process_variance = 1.\n",
" sensor_variance = 2.\n",
"```python\n",
"process_variance = 1.\n",
"sensor_variance = 2.```\n",
" \n",
"Here the variance for the predict step is `process_variance` and the variance for the sensor is `sensor_variance`. The meaning of sensor variance should be clear - it is how much variance there is in each sensor reading. The process variance is how much error there is in the prediction model. We are predicting that at each time step the dog moves forward one meter. Dogs rarely do what we expect, and of course things like hills or the whiff of a squirrel will change his progress. If this was a robot responding to digital commands the performance would be better, but not perfect. Perhaps a hand made robot would have a variance of $\\sigma^2=.1$, and an industrial robot might have $\\sigma^2=0.02$. These are not 'magic' numbers; the square root of the express the distance error in meters. It is easy to get a Kalman filter working by just plugging in numbers, but if the numbers do not reflect reality the performance of the filter will be poor."
]
@ -1350,12 +1349,13 @@
"source": [
"Next we initialize the simulator and run it with\n",
"\n",
" dog = DogSimulation(pos[0], velocity=velocity, \n",
" measurement_variance=sensor_variance, \n",
" process_variance=process_variance)\n",
"```python\n",
"dog = DogSimulation(pos[0], velocity=velocity, \n",
" measurement_variance=sensor_variance, \n",
" process_variance=process_variance)\n",
"\n",
" # simulate dog and get measurements\n",
" zs = [dog.move_and_sense() for _ in range(N)]\n",
"# simulate dog and get measurements\n",
"zs = [dog.move_and_sense() for _ in range(N)]```\n",
" \n",
"It may seem very 'convenient' to set the simulator to the same position as our guess, and it is. Do not fret. In the next example we will see the effect of a wildly inaccurate guess for the dog's initial position. We move the dog for 10 steps forward and store the measurements in `zs` via a list comprehension.\n",
"\n",
@ -1363,7 +1363,8 @@
"\n",
"The next code allocates an array to store the output of the filtered positions. \n",
"\n",
" positions = np.zeros(N)"
"```python\n",
"positions = np.zeros(N)```"
]
},
{
@ -1372,9 +1373,10 @@
"source": [
"Now we enter our `predict() ... update()` loop.\n",
"\n",
" for i in range(N):\n",
" pos = predict(pos=pos[0], variance=pos[1], \n",
" movement=vel, movement_variance=process_variance)\n",
"```python\n",
"for i in range(N):\n",
" pos = predict(pos=pos[0], variance=pos[1], \n",
" movement=vel, movement_variance=process_variance)```\n",
"\n",
"We call the `predict()` function with the Gaussian for the dog's position, and another Gaussian for its movement. `pos[0], pos[1]` is the mean and variance for the position, and 'vel, process_variance' is the mean and variance for the movement. The first time through the loop we get\n",
"\n",
@ -1384,8 +1386,9 @@
"\n",
"Then we call the update function of our filter and save the result in our `positions` array.\n",
"\n",
" pos = update(pos[0], pos[1], z, sensor_variance)\n",
" positions.append(pos[0])\n",
"```python\n",
"pos = update(pos[0], pos[1], z, sensor_variance)\n",
"positions.append(pos[0])```\n",
" \n",
"For this I get this as the result:\n",
"\n",
@ -1401,11 +1404,12 @@
"\n",
"Before we continue I want to point out how few lines of code actually perform the filtering. Most of the code is either initialization, storing of data, simulating the dog movement, and printing results. The code that performs the filtering is the very succinct\n",
"\n",
" for i in range(N):\n",
" pos = predict(pos=pos[0], variance=pos[1], \n",
" movement=vel, movement_variance=process_variance)\n",
" pos = update(mean=pos[0], variance=pos[1],\n",
" measurement=z, measurement_variance=sensor_variance)"
"```python\n",
"for i in range(N):\n",
" pos = predict(pos=pos[0], variance=pos[1], \n",
" movement=vel, movement_variance=process_variance)\n",
" pos = update(mean=pos[0], variance=pos[1],\n",
" measurement=z, measurement_variance=sensor_variance)```"
]
},
{
@ -1611,41 +1615,46 @@
"source": [
"For many purposes the code above suffices. However, if you write enough of these filters the functions will become a bit annoying. For example, having to write\n",
"\n",
" pos = predict(pos[0], pos[1], movement, movement_variance) \n",
" \n",
"```python\n",
"pos = predict(pos[0], pos[1], movement, movement_variance) ```\n",
"\n",
"is a bit cumbersome and error prone. Let's investigate how we might implement this in a form that makes our lives easier.\n",
"\n",
"First, values for the movement error and the measurement errors are typically constant for a given problem, so we only want to specify them once. We can store them in instance variables in the class. Second, it is annoying to have to pass in the state (pos in the code snippet above) and then remember to assign the output of the function back to that state, so the state should also be an instance variable. Our first attempt might look like:\n",
"\n",
" class KalmanFilter1D:\n",
" def __init__(self, initial_state, measurement_variance, movement_variance):\n",
" self.state = initial_state\n",
" self.measurement_variance = measurement_variance\n",
" self.movement_variance = movement_variance\n",
"```python\n",
"class KalmanFilter1D:\n",
" def __init__(self, initial_state, measurement_variance, movement_variance):\n",
" self.state = initial_state\n",
" self.measurement_variance = measurement_variance\n",
" self.movement_variance = movement_variance```\n",
"\n",
"That works, but I am going to use different naming. The Kalman filter literature uses math, not code, so it uses single letter variables, so I will start exposing you to it now. At first it seems impossibly terse, but as you become familiar with the nomenclature you'll see that the math formulas in the textbooks will have an exact one-to-one correspondence with the code. Unfortunately there is not a lot of meaning behind the naming unless you have training in control theory; you will have to memorize them. If you do not make this effort you will never be able to read the literature.\n",
"\n",
"So, we use `x` for the state (estimated value of the filter) and `P` for the variance of the state. `R` is the measurement error, and `Q` is the process (prediction) error. This gives us:\n",
"\n",
" class KalmanFilter1D:\n",
" def __init__(self, x0, P, R, Q):\n",
" self.x = x0\n",
" self.P = P\n",
" self.R = R\n",
" self.Q = Q\n",
"```python\n",
"class KalmanFilter1D:\n",
" def __init__(self, x0, P, R, Q):\n",
" self.x = x0\n",
" self.P = P\n",
" self.R = R\n",
" self.Q = Q```\n",
" \n",
"Now we can implement the `update()` and `predict()` function. In the literature the measurement is usually named either `z` or `y`; I find `y` is too easy to confuse with the y axis of a plot, so I like `z`. I like to think I can hear a `z` in *measurement*, which helps me remember what `z` stands for. So for the update method we might write:\n",
"\n",
" def update(z):\n",
" self.x = (self.P * z + self.x * self.R) / (self.P + self.R)\n",
" self.P = 1 / (1/self.P + 1/self.R)\n",
"```python\n",
"def update(z):\n",
" self.x = (self.P * z + self.x * self.R) / (self.P + self.R)\n",
" self.P = 1 / (1/self.P + 1/self.R)```\n",
"\n",
"Finally, the movement is usually called `u`, and so we will use that. So for the predict function we might write:\n",
"\n",
" def predict(self, u):\n",
" self.x += u\n",
" self.P += self.Q\n",
" \n",
"```python\n",
"def predict(self, u):\n",
" self.x += u\n",
" self.P += self.Q```\n",
"\n",
"That give us the following code. Production code would require significant comments and error checking. However, in the next chapter we will develop Kalman filter code that works for any dimension so this class will never be more than a stepping stone for us."
]
},
@ -1785,13 +1794,15 @@
"source": [
"Modify the values of `movement_variance` and `sensor_variance` and note the effect on the filter and on the variance. Which has a larger effect on the variance convergence?. For example, which results in a smaller variance:\n",
"\n",
"```python\n",
" movement_variance = 40\n",
" sensor_variance = 2\n",
" sensor_variance = 2```\n",
" \n",
"or:\n",
"\n",
"```python\n",
" movement_variance = 2\n",
" sensor_variance = 40"
" sensor_variance = 40```"
]
},
{
@ -2467,9 +2478,11 @@
"\n",
"Do you suppose that this filter works well or poorly with nonlinear systems?\n",
"\n",
"Implement a Kalman filter that uses the following equation to generate the measurement value for i in range(100):\n",
"Implement a Kalman filter that uses the following equation to generate the measurement value\n",
"\n",
" Z = math.sin(i/3.) * 2\n",
"```python\n",
"for i in range(100):\n",
" Z = math.sin(i/3.) * 2```\n",
" \n",
"Adjust the variance and initial positions to see the effect. What is, for example, the result of a very bad initial guess?"
]
@ -2745,7 +2758,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,

View File

@ -303,7 +303,7 @@
"What might a **multivariate normal distribution** be? *Multivariate* means multiple variables. Our goal is to be able to represent a normal distribution across multiple dimensions. I don't necessarily mean spatial dimensions - it could be position, velocity, and acceleration. Consider a two dimensional case. Let's say we believe that $x = 2$ and $y = 17$. This might be the *x* and *y* coordinates for the position of our dog, it might be the position and velocity of our dog on the x-axis, or the temperature and wind speed at our weather station. It doesn't really matter. We can see that for $N$ dimensions, we need $N$ means, which we will arrange in a column matrix (vector) like so:\n",
"\n",
"$$\n",
"\\mu = \\begin{bmatrix}{\\mu}_1\\\\{\\mu}_2\\\\ \\vdots \\\\{\\mu}_n\\end{bmatrix}\n",
"\\mu = \\begin{bmatrix}\\mu_1\\\\\\mu_2\\\\ \\vdots \\\\\\mu_n\\end{bmatrix}\n",
"$$\n",
"\n",
"Therefore for this example we would have\n",
@ -1427,7 +1427,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,

View File

@ -452,9 +452,8 @@
"\n",
"I have programmed the equations of the Kalman filter into the `KalmanFilter` class in FilterPy. You will import it with\n",
"\n",
" from filterpy.kalman import KalmanFilter\n",
" \n",
"The class contains variables **blah**"
"```python\n",
"from filterpy.kalman import KalmanFilter```"
]
},
{
@ -680,8 +679,9 @@
"\n",
"and implemented it with\n",
"\n",
" def predict(pos, variance, movement, movement_variance):\n",
" return (pos + movement, variance + movement_variance)\n",
"```python\n",
"def predict(pos, variance, movement, movement_variance):\n",
" return (pos + movement, variance + movement_variance)```\n",
" \n",
"If the current position is 5.4 meters, and the movement is 1.1 meters over the time step, then of course the new position is 5.4 + 1.1, or 6.5 meters.\n",
"\n",
@ -693,12 +693,14 @@
"\n",
"But we need to generalize this. The Kalman filter equations work with *any* linear system, not just Newtonian ones. Maybe the system you are filtering is the plumbing system in a chemical plant, and the amount of flow in a given pipe is determined by a linear combination of the settings of different valves. \n",
"\n",
"$$ \\mathtt{pipe_1 flow} = 0.134 (\\mathtt{valve}_1) + 0.41(\\mathtt{valve}_2 - \\mathtt{valve}_3) + .34 \\\\\n",
"$$\\mathtt{pipe_1 flow} = 0.134 (\\mathtt{valve}_1) + 0.41(\\mathtt{valve}_2 - \\mathtt{valve}_3) + .34 \\\\\n",
"\\mathtt{pipe_2 flow} = 0.21 (\\mathtt{valve}_2) - 0.6(\\mathtt{valve}_1 - \\mathtt{valve}_5) + 1.86$$\n",
"\n",
"Linear algebra has a powerful way to express systems of equations. Take this system\n",
"\n",
"$$2x+3y=8\\\\3x-y=1$$\n",
"$$\\begin{cases}\n",
"2x+3y=8\\\\3x-y=1\n",
"\\end{cases}$$\n",
"\n",
"We can put this in matrix form by writing:\n",
"\n",
@ -1081,8 +1083,9 @@
"\n",
"We need to convert the temperature into a voltage so we can perform the subtraction. For the thermometer it might read:\n",
"\n",
" CELSIUS_TO_VOLTS = 0.21475\n",
" residual = measurement - CELSIUS_TO_VOLTS * predicted_state\n",
"```python\n",
"CELSIUS_TO_VOLTS = 0.21475\n",
"residual = measurement - CELSIUS_TO_VOLTS * predicted_state```\n",
" \n",
"The Kalman filter generalizes this problem by having you supply a **measurement function** that converts a state into a measurement. \n",
"\n",
@ -1325,27 +1328,31 @@
"\n",
"The function `filter_dog()` implements the filter itself. Lets work through it line by line. The first line creates the simulation of the DogSensor, as we have seen in the previous chapter.\n",
"\n",
" dog = DogSensor(velocity=1, noise=noise)\n",
"```python\n",
"dog = DogSensor(velocity=1, noise=noise)```\n",
"\n",
"The next line uses our helper function to create a Kalman filter.\n",
"\n",
" dog_filter = dog_tracking_filter(R=R, Q=Q, cov=500.)\n",
" \n",
"```python\n",
"dog_filter = dog_tracking_filter(R=R, Q=Q, cov=500.)```\n",
"\n",
"We will want to plot the filtered position, the measurements, and the covariance, so we will need to store them in lists. The next three lines initialize empty lists of length *count* in a Pythonic way.\n",
"\n",
" pos = [None] * count\n",
" zs = [None] * count\n",
" cov = [None] * count\n",
" \n",
"```python\n",
"pos = [None] * count\n",
"zs = [None] * count\n",
"cov = [None] * count```\n",
"\n",
"Finally we get to the filter. All we need to do is perform the update and predict steps of the Kalman filter for each measurement. The `KalmanFilter` class provides the two functions `update()` and `predict()` for this purpose. `update()` performs the measurement update step of the Kalman filter, and so it takes a variable containing the sensor measurement. \n",
"\n",
"Absent the bookkeeping work of storing the filter's data, the for loop reads:\n",
"\n",
" for t in range(count):\n",
" z = dog.sense()\n",
" dog_filter.update (z)\n",
" dog_filter.predict()\n",
" \n",
"```python\n",
"for t in range(count):\n",
" z = dog.sense()\n",
" dog_filter.update (z)\n",
" dog_filter.predict()```\n",
"\n",
"It really cannot get much simpler than that. As we tackle more complicated problems this code will remain largely the same; all of the work goes into setting up the `KalmanFilter` variables; executing the filter is trivial.\n",
"\n",
"Now let's look at the result. Here is some code that calls `filter_track()` and then plots the result. It is fairly uninteresting code, so I will not walk through it. The `DogSimulation` class from the previous chapter has been placed in `DogSimulation.py`. I have also implemented a few plot routines to visualize the data in `mkf_internal.py`. Both of these are in the *code* subdirectory if you wish to read them.\n",
@ -2660,16 +2667,19 @@
"\n",
"First collect your measurements into an array or list. Maybe it is in a CSV file, for example.\n",
"\n",
" zs = read_altitude_from_csv()\n",
"```python\n",
"zs = read_altitude_from_csv()```\n",
"\n",
"Or maybe you will generate it using a generator:\n",
"\n",
" zs = [some_func(i) for i in range(1000)]\n",
"```python\n",
"zs = [some_func(i) for i in range(1000)]```\n",
"\n",
"Then call the `batch_filter()` method.\n",
"\n",
" Xs, Ps, Xs_pred, Ps_pred = kfilter.batch_filter(zs)\n",
" \n",
"```python\n",
"Xs, Ps, Xs_pred, Ps_pred = kfilter.batch_filter(zs)```\n",
"\n",
"The function takes the list/array of measurements, filters it, and returns a list of state estimates (Xs), covariance matrices (Ps), and the predictions for the same (Xs_pred, Ps_pred).\n",
"\n",
"Here is a complete example."
@ -2925,7 +2935,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,

View File

@ -350,28 +350,30 @@
"source": [
"This is still a little hard to compare to Bayes' equation because we are dealing with probability distributions rather than probabilities. So let's cast our minds back to the discrete Bayes chapter where we computed the probability that our dog was at any given position in the hallway. It looked like this:\n",
"\n",
" def update(pos_belief, measure, p_hit, p_miss):\n",
" for i in range(len(hallway)):\n",
" if hallway[i] == measure:\n",
" pos_belief[i] *= p_hit\n",
" else:\n",
" pos_belief[i] *= p_miss\n",
"```python\n",
"def update(pos_belief, measure, p_hit, p_miss):\n",
" for i in range(len(hallway)):\n",
" if hallway[i] == measure:\n",
" pos_belief[i] *= p_hit\n",
" else:\n",
" pos_belief[i] *= p_miss\n",
"\n",
" pos_belief /= sum(pos_belief)\n",
" pos_belief /= sum(pos_belief)```\n",
"\n",
"Let's rewrite this using our newly learned terminology.\n",
"\n",
" def update(prior_probability, measure, prob_hit, prob_miss):\n",
" posterior_probability = np.zeros(len(prior_probability))\n",
" for i in range(len(hallway)):\n",
" if hallway[i] == measure:\n",
" posterior_probability[i] = prior_probability[i] * p_hit\n",
" else:\n",
" posterior_probability[i] = prior_probability[i] * p_miss\n",
"```python\n",
"def update(prior_probability, measure, prob_hit, prob_miss):\n",
" posterior_probability = np.zeros(len(prior_probability))\n",
" for i in range(len(hallway)):\n",
" if hallway[i] == measure:\n",
" posterior_probability[i] = prior_probability[i] * p_hit\n",
" else:\n",
" posterior_probability[i] = prior_probability[i] * p_miss\n",
"\n",
" return posterior_probability / sum(posterior_probability)```\n",
"\n",
" return posterior_probability / sum(posterior_probability)\n",
"\n",
" \n",
"So what is this doing? It's multiplying the old belief that the dog is at position *i* (prior probability) with the probability that the measurement is correct for that position, and then dividing by the total probability for that new event.\n",
"\n",
"Now let's look at Bayes' equation again.\n",
@ -2910,7 +2912,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,

View File

@ -2792,7 +2792,8 @@
"source": [
"So to create a trajectory starting at (0, 15) with a velocity of $100 \\frac{m}{s}$ and an angle of $60^\\circ$ we would write:\n",
"\n",
" traj = BallTrajectory2D(x0=0, y0=15, velocity=100, theta_deg=60)\n",
"```python\n",
"traj = BallTrajectory2D(x0=0, y0=15, velocity=100, theta_deg=60)```\n",
" \n",
"and then call `traj.position(t)` for each time step. Let's test this "
]
@ -3052,11 +3053,12 @@
"source": [
"We already performed this step when we tested the state transition function. Recall that we computed the initial velocity for $x$ and $y$ using trigonometry, and set the value of $\\mathbf{x}$ with:\n",
"\n",
" omega = radians(omega)\n",
" vx = cos(omega) * v0\n",
" vy = sin(omega) * v0\n",
"```python\n",
"omega = radians(omega)\n",
"vx = cos(omega) * v0\n",
"vy = sin(omega) * v0\n",
"\n",
" f1.x = np.array([[x, vx, y, vy]]).T\n",
"f1.x = np.array([[x, vx, y, vy]]).T```\n",
" \n",
" \n",
"With all the steps done we are ready to implement our filter and test it. First, the implementation:"
@ -3606,7 +3608,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,

View File

@ -417,7 +417,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Unfortunately Gaussians are not closed under an arbitrary nonlinear function. Recall the equations of the Kalman filter - at each step of its evolution we do things like pass the covariances through our process function to get the new covariance at time $k$. Our process function was always linear, so the output was always another Gaussian. Let's look at that on a graph. I will take an arbitrary Gaussian and pass it through the function $f(x) = 2x + 1$ and plot the result. We know how to do this analytically, but lets do this with sampling. I will generate 500,000 points on the Gaussian curve, pass it through the function, and then plot the results. I will do it this way because the next example will be nonlinear, and we will have no way to compute this analytically."
"Unfortunately Gaussians are not closed under an arbitrary nonlinear function. Recall the equations of the Kalman filter - at each evolution (prediction) we pass the Gaussian representing the state through the process function to get the Gaussian at time $k$. Our process function was always linear, so the output was always another Gaussian. Let's look at that on a graph. I will take an arbitrary Gaussian and pass it through the function $f(x) = 2x + 1$ and plot the result. We know how to do this analytically, but lets do this with sampling. I will generate 500,000 points with a normal distribution, pass it through the function, and then plot the results. I do it this way because the next example will be nonlinear, and we will have no way to compute this analytically."
]
},
{
@ -927,7 +927,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,

File diff suppressed because one or more lines are too long

View File

@ -749,29 +749,33 @@
"\n",
"We start by importing the filter and creating it. There are 3 variables in `x` and only 1 measurement. At the same time we will create our radar simulator.\n",
"\n",
" from filterpy.kalman import ExtendedKalmanFilter\n",
"```python\n",
"from filterpy.kalman import ExtendedKalmanFilter\n",
"\n",
"rk = ExtendedKalmanFilter(dim_x=3, dim_z=1)\n",
"radar = RadarSim(dt, pos=0., vel=100., alt=1000.)```\n",
"\n",
" rk = ExtendedKalmanFilter(dim_x=3, dim_z=1)\n",
" radar = RadarSim(dt, pos=0., vel=100., alt=1000.)\n",
" \n",
"We will initialize the filter near the airplane's actual position\n",
"\n",
" rk.x = array([radar.pos, radar.vel-10, radar.alt+100])\n",
" \n",
"```python\n",
"rk.x = array([radar.pos, radar.vel-10, radar.alt+100])```\n",
"\n",
"We assign the system matrix using the first term of the Taylor series expansion we computed above.\n",
"\n",
" dt = 0.05\n",
" rk.F = eye(3) + array([[0, 1, 0],\n",
" [0, 0, 0],\n",
" [0, 0, 0]])*dt\n",
" \n",
"```python\n",
"dt = 0.05\n",
"rk.F = eye(3) + array([[0, 1, 0],\n",
" [0, 0, 0],\n",
" [0, 0, 0]])*dt```\n",
"\n",
"After assigning reasonable values to $\\mathbf{R}$, $\\mathbf{Q}$, and $\\mathbf{P}$ we can run the filter with a simple loop\n",
"\n",
" for i in range(int(20/dt)):\n",
" z = radar.get_range()\n",
" rk.update(array([z]), HJacobian_at, hx)\n",
" rk.predict()\n",
" \n",
"```python\n",
"for i in range(int(20/dt)):\n",
" z = radar.get_range()\n",
" rk.update(array([z]), HJacobian_at, hx)\n",
" rk.predict()```\n",
"\n",
"Putting that all together along with some boilerplate code to save the results and plot them, we get"
]
},
@ -1448,7 +1452,8 @@
"\n",
"First, `evalf` uses a dictionary to pass in the values you want to use. For example, if your matrix contains an x and y, you can write\n",
"\n",
" M.evalf(subs={x:3, y:17})\n",
"```python\n",
" M.evalf(subs={x:3, y:17})```\n",
" \n",
"to evaluate the matrix for `x=3` and `y=17`. \n",
"\n",
@ -1924,7 +1929,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,

File diff suppressed because one or more lines are too long

View File

@ -483,20 +483,21 @@
"\n",
"As always, the hardest part of the implementation is correctly accounting for the subscripts. A basic implementation without comments or error checking would be:\n",
"\n",
" def rts_smoother(Xs, Ps, F, Q):\n",
" n, dim_x, _ = Xs.shape\n",
" \n",
" # smoother gain\n",
" K = zeros((n,dim_x, dim_x))\n",
" x, P = Xs.copy(), Ps.copy()\n",
"```python\n",
"def rts_smoother(Xs, Ps, F, Q):\n",
" n, dim_x, _ = Xs.shape\n",
"\n",
" for k in range(n-2,-1,-1):\n",
" P_pred = dot(F, P[k]).dot(F.T) + Q\n",
" # smoother gain\n",
" K = zeros((n,dim_x, dim_x))\n",
" x, P = Xs.copy(), Ps.copy()\n",
"\n",
" K[k] = dot(P[k], F.T).dot(inv(P_pred))\n",
" x[k] += dot(K[k], x[k+1] - dot(F, x[k]))\n",
" P[k] += dot(K[k], P[k+1] - P_pred).dot(K[k].T)\n",
" return (x, P, K)\n",
" for k in range(n-2,-1,-1):\n",
" P_pred = dot(F, P[k]).dot(F.T) + Q\n",
"\n",
" K[k] = dot(P[k], F.T).dot(inv(P_pred))\n",
" x[k] += dot(K[k], x[k+1] - dot(F, x[k]))\n",
" P[k] += dot(K[k], P[k+1] - P_pred).dot(K[k].T)\n",
" return (x, P, K)```\n",
" \n",
"This implementation mirrors the implementation provided in FilterPy. It assumes that the Kalman filter is being run externally in batch mode, and the results of the state and covariances are passed in via the `Xs` and `Ps` variable.\n",
"\n",
@ -841,7 +842,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,

View File

@ -533,20 +533,21 @@
"source": [
"To be clear about the dimension reduction, if we wanted to track both *x* and *y* we would have written\n",
"\n",
" cvfilter = KalmanFilter(dim_x = 4, dim_z=2)\n",
" cvfilter.x = array([0., 0., 0., 0.])\n",
" cvfilter.P *= 300\n",
" cvfilter.R *= 1\n",
" cvfilter.F = array([[1, dt, 0, 0],\n",
" [0, 1, 0, 0],\n",
" [0, 0, 1, dt],\n",
" [0, 0, 0, 1]], dtype=float)\n",
" cvfilter.H = array([[1, 0, 0, 0],\n",
" [0, 0, 1, 0]], dtype=float)\n",
"```python\n",
"cvfilter = KalmanFilter(dim_x = 4, dim_z=2)\n",
"cvfilter.x = array([0., 0., 0., 0.])\n",
"cvfilter.P *= 300\n",
"cvfilter.R *= 1\n",
"cvfilter.F = array([[1, dt, 0, 0],\n",
" [0, 1, 0, 0],\n",
" [0, 0, 1, dt],\n",
" [0, 0, 0, 1]], dtype=float)\n",
"cvfilter.H = array([[1, 0, 0, 0],\n",
" [0, 0, 1, 0]], dtype=float)\n",
"\n",
" q = Q_discrete_white_noise(dim=2, dt=dt, var=0.020)\n",
" cvfilter.Q[0:2, 0:2] = q\n",
" cvfilter.Q[2:4, 2:4] = q"
"q = Q_discrete_white_noise(dim=2, dt=dt, var=0.020)\n",
"cvfilter.Q[0:2, 0:2] = q\n",
"cvfilter.Q[2:4, 2:4] = q```"
]
},
{
@ -891,11 +892,11 @@
"\n",
"That may seem like a subtle distinction, but from the plots you see it is not. The amount of deviation in the left plot when the maneuver starts is small, but the deviation in the right tells a very different story. If the tracked object was moving according to the process model the residual plot should bounce around 0.0. This is because the measurements will be obeying the equation \n",
"\n",
"$$measurement = process\\_model(t) + noise(t)$$.\n",
"$$\\mathtt{measurement} = \\mathtt{process\\_model}(t) + \\mathtt{noise}(t)$$.\n",
"\n",
"Once the target starts maneuvering the predictions of the target behavior will not match the behavior as the equation will be \n",
"\n",
"$$measurement = process\\_model(t) + maneuver\\_delta(t) + noise(t)$$.\n",
"$$\\mathtt{measurement} = \\mathtt{process\\_model}(t) + \\mathtt{maneuver\\_delta}(t) + \\mathtt{noise}(t)$$.\n",
"\n",
"Therefore if the residuals diverge from a mean of 0.0 we know that a maneuver has commenced.\n",
"\n",
@ -1677,7 +1678,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,

View File

@ -360,8 +360,9 @@
"\n",
"This just says to select the sigma points from the filter's initial mean and covariance. In code this might look like\n",
"\n",
" N = 1000\n",
" sigmas = multivariate_normal(mean=x, cov=P, size=N)"
"```python\n",
"N = 1000\n",
"sigmas = multivariate_normal(mean=x, cov=P, size=N)```"
]
},
{
@ -382,26 +383,18 @@
"\\end{aligned}\n",
"$$\n",
"\n",
"That is short and sweet, but perhaps not entirely clear. The first line passes all of the sigma points through a use supplied state transition function and then adds some noise distributed according to the $\\mathbf{Q}$ matrix. In Python we might write"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" for i, s in enumerate(sigmas):\n",
" sigmas[i] = fx(x=s, dt=0.1, u=0.)\n",
"That is short and sweet, but perhaps not entirely clear. The first line passes all of the sigma points through a use supplied state transition function and then adds some noise distributed according to the $\\mathbf{Q}$ matrix. In Python we might write\n",
"\n",
"```python\n",
"for i, s in enumerate(sigmas):\n",
" sigmas[i] = fx(x=s, dt=0.1, u=0.)\n",
"\n",
"sigmas += multivariate_normal(x, Q, N)```\n",
"\n",
" sigmas += multivariate_normal(x, Q, N)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The second line computes the mean from the sigmas. In Python we will take advantage of `numpy.mean` to do this very concisely and quickly.\n",
"\n",
" x = np.mean(sigmas, axis=0)"
"```python\n",
"x = np.mean(sigmas, axis=0)```"
]
},
{
@ -414,10 +407,11 @@
"\n",
"$\\boldsymbol\\chi-\\mathbf{x}^-$ is a one dimensional vector, so we will use `numpy.outer` to compute the $[\\boldsymbol\\chi-\\mathbf{x}^-][\\boldsymbol\\chi-\\mathbf{x}^-]^\\mathsf{T}$ term. In Python we might write\n",
"\n",
"```python\n",
" P = 0\n",
" for s in sigmas:\n",
" P += outer(s-x, s-x)\n",
" P = P / (N-1)"
" P = P / (N-1)```"
]
},
{
@ -460,7 +454,8 @@
"\n",
"just passes the sigma points through the measurement function $h$. We name the resulting points $\\chi_h$ to distinguish them from the sigma points. In Python we could write this as\n",
"\n",
" sigmas_h = h(sigmas, u)"
"```python\n",
"sigmas_h = h(sigmas, u)```"
]
},
{
@ -473,7 +468,8 @@
"\n",
"In Python we can compute that with\n",
"\n",
" z_mean = np.mean(sigmas_h, axis=0)\n",
"```python\n",
"z_mean = np.mean(sigmas_h, axis=0)```\n",
" \n",
"Now that we have the mean of the measurement sigmas we can compute the covariance for every measurement sigma point, and the *cross variance* for the measurement sigma points vs the sigma points. That is expressed by these two equations\n",
"\n",
@ -485,16 +481,17 @@
"\n",
"We can express this in Python with\n",
"\n",
" P_zz = 0\n",
" for sigma in sigmas_h:\n",
" s = sigma - z_mean\n",
" P_zz += outer(s, s)\n",
" P_zz = P_zz / (N-1) + R\n",
"```python\n",
"P_zz = 0\n",
"for sigma in sigmas_h:\n",
" s = sigma - z_mean\n",
" P_zz += outer(s, s)\n",
"P_zz = P_zz / (N-1) + R\n",
"\n",
" P_xz = 0\n",
" for i in range(N):\n",
" P_xz += outer(self.sigmas[i] - self.x, sigmas_h[i] - z_mean)\n",
" P_xz /= N-1"
"P_xz = 0\n",
"for i in range(N):\n",
" P_xz += outer(self.sigmas[i] - self.x, sigmas_h[i] - z_mean)\n",
"P_xz /= N-1```"
]
},
{
@ -503,9 +500,10 @@
"source": [
"Computation of the Kalman gain is straightforward $\\mathbf{K} = \\mathbf{P}_{xz} \\mathbf{P}_{zz}^{-1}$.\n",
"\n",
"In Python this is the trivial\n",
"In Python this is\n",
"\n",
" K = np.dot(P_xz, inv(P_zz))\n",
"```python\n",
"K = np.dot(P_xz, inv(P_zz))```\n",
"\n",
"Next, we update the sigma points with\n",
"\n",
@ -513,15 +511,17 @@
"\n",
"Here $\\mathbf{v}_R$ is the perturbation that we add to the sigmas. In Python we can implement this with\n",
"\n",
" v_r = multivariate_normal([0]*dim_z, R, N)\n",
" for i in range(N):\n",
" sigmas[i] += dot(K, z + v_r[i] - sigmas_h[i])\n",
"```python\n",
"v_r = multivariate_normal([0]*dim_z, R, N)\n",
"for i in range(N):\n",
" sigmas[i] += dot(K, z + v_r[i] - sigmas_h[i])```\n",
"\n",
"\n",
"Our final step is recompute the filter's mean and covariance.\n",
"\n",
" x = np.mean(sigmas, axis=0)\n",
" P = self.P - dot3(K, P_zz, K.T)"
"```python\n",
" x = np.mean(sigmas, axis=0)\n",
" P = self.P - dot3(K, P_zz, K.T)```"
]
},
{
@ -549,12 +549,13 @@
"\n",
"The EnKF is designed for nonlinear problems, so instead of using matrices to implement the state transition and measurement functions you will need to supply Python functions. For this problem they can be written as:\n",
"\n",
" def hx(x):\n",
" return np.array([x[0]])\n",
"```python\n",
"def hx(x):\n",
" return np.array([x[0]])\n",
"\n",
"def fx(x, dt):\n",
" return np.dot(F, x)```\n",
"\n",
" def fx(x, dt):\n",
" return np.dot(F, x)\n",
" \n",
"One final thing: the EnKF code, like the UKF code, uses a single dimension for $\\mathbf{x}$, not a two dimensional column matrix as used by the linear kalman filter code.\n",
"\n",
"Without further ado, here is the code."
@ -738,7 +739,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
"version": "3.4.1"
}
},
"nbformat": 4,

File diff suppressed because one or more lines are too long

View File

@ -238,7 +238,7 @@ def plot_monte_carlo_mean(xs, ys, f, mean_fx, label, plot_colormap=True):
plt.scatter(fxs, fys, marker='.', alpha=0.02, color='k')
plt.scatter(mean_fx[0], mean_fx[1],
marker='v', s=300, c='r', label='Linearized Mean')
marker='v', s=300, c='r', label=label)
plt.scatter(computed_mean_x, computed_mean_y,
marker='*',s=120, c='r', label='Computed Mean')

View File

@ -11,7 +11,9 @@ from filterpy.stats import plot_covariance_ellipse
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse,Arrow
import math
from math import cos, sin, atan2, pi
import numpy as np
from numpy.random import randn
def _sigma_points(mean, sigma, kappa):
sigma1 = mean + math.sqrt((1+kappa)*sigma)
@ -148,6 +150,13 @@ def show_3_sigma_points():
plt.plot(xs, ys)
plt.show()
def _plot_sigmas(s, w, alpha=0.5, **kwargs):
min_w = min(abs(w))
scale_factor = 100 / min_w
return plt.scatter(s[:, 0], s[:, 1], s=abs(w)*scale_factor,
alpha=alpha, **kwargs)
def show_sigma_selections():
ax=plt.gca()
ax.axes.get_xaxis().set_visible(False)
@ -156,22 +165,25 @@ def show_sigma_selections():
x = np.array([2, 5])
P = np.array([[3, 1.1], [1.1, 4]])
points = MerweScaledSigmaPoints(2, .05, 2., 1.)
points = MerweScaledSigmaPoints(2, .09, 2., 1.)
sigmas = points.sigma_points(x, P)
plot_covariance_ellipse(x, P, facecolor='b', alpha=0.6, variance=[.5])
plt.scatter(sigmas[:,0], sigmas[:, 1], c='k', s=50)
Wm, Wc = points.weights()
plot_covariance_ellipse(x, P, facecolor='b', alpha=.3, variance=[.5])
_plot_sigmas(sigmas, Wc, alpha=1.0, facecolor='k')
x = np.array([5, 5])
points = MerweScaledSigmaPoints(2, .15, 2., 1.)
points = MerweScaledSigmaPoints(2, .15, 1., .15)
sigmas = points.sigma_points(x, P)
plot_covariance_ellipse(x, P, facecolor='b', alpha=0.6, variance=[.5])
plt.scatter(sigmas[:,0], sigmas[:, 1], c='k', s=50)
Wm, Wc = points.weights()
plot_covariance_ellipse(x, P, facecolor='b', alpha=0.3, variance=[.5])
_plot_sigmas(sigmas, Wc, alpha=1.0, facecolor='k')
x = np.array([8, 5])
points = MerweScaledSigmaPoints(2, .4, 2., 1.)
points = MerweScaledSigmaPoints(2, .2, 3., 10)
sigmas = points.sigma_points(x, P)
plot_covariance_ellipse(x, P, facecolor='b', alpha=0.6, variance=[.5])
plt.scatter(sigmas[:,0], sigmas[:, 1], c='k', s=50)
Wm, Wc = points.weights()
plot_covariance_ellipse(x, P, facecolor='b', alpha=0.3, variance=[.5])
_plot_sigmas(sigmas, Wc, alpha=1.0, facecolor='k')
plt.axis('equal')
plt.xlim(0,10); plt.ylim(0,10)
@ -292,6 +304,105 @@ def plot_rts_output(xs, Ms, t):
print('Difference in position in meters:', xs[-6:-1, 0] - Ms[-6:-1, 0])
def plot_scatter_of_bearing_error():
d = 100
xs, ys = [], []
for i in range (3000):
a = math.radians(30) + randn() * math.radians(1)
xs.append(d*math.cos(a))
ys.append(d*math.sin(a))
plt.scatter(xs, ys)
def plot_scatter_moving_target():
pos = np.array([5., 5.])
for i in range(5):
pos += (0.5, 1.)
actual_angle = math.atan2(pos[1], pos[0])
d = math.sqrt(pos[0]**2 + pos[1]**2)
xs, ys = [], []
for i in range (100):
a = actual_angle + randn() * math.radians(1)
xs.append(d*math.cos(a))
ys.append(d*math.sin(a))
plt.scatter(xs, ys)
plt.axis('equal')
plt.plot([5.5, pos[0]], [6, pos[1]], c='g', linestyle='--')
def _isct(pa, pb, alpha, beta):
""" Returns the (x, y) intersections of points pa and pb
given the bearing ba for point pa and bearing bb for
point pb.
"""
B = [pb[0] - pa[0], pb[1] - pa[1]]
AB = math.sqrt((pa[0] - pb[0])**2 + (pa[1] - pb[1])**2)
ab = atan2(B[1], B[0])
a = alpha - ab
b = pi - beta - ab
p = pi - b - a
AP = (sin(b) / sin(p)) * AB
x = cos(alpha) * AP + pa[0]
y = sin(alpha) * AP + pa[1]
return x, y
def _plot_iscts(pos, sa, sb, N=4):
for i in range(N):
pos += (0.5, 1.)
actual_angle_a = math.atan2(pos[1] - sa[1], pos[0] - sa[0])
actual_angle_b = math.atan2(pos[1] - sb[1], pos[0] - sb[0])
da = math.sqrt((sa[0] - pos[0])**2 + (sa[1] - pos[1])**2)
db = math.sqrt((sb[0] - pos[0])**2 + (sb[1] - pos[1])**2)
xs, ys, xs_a, xs_b, ys_a, ys_b = [], [], [], [], [], []
for i in range (300):
a_a = actual_angle_a + randn() * math.radians(1)
a_b = actual_angle_b + randn() * math.radians(1)
x,y = _isct(sa, sb, a_a, a_b)
xs.append(x)
ys.append(y)
xs_a.append(da*math.cos(a_a) + sa[0])
ys_a.append(da*math.sin(a_a) + sa[1])
xs_b.append(db*math.cos(a_b) + sb[0])
ys_b.append(db*math.sin(a_b) + sb[1])
plt.scatter(xs, ys, c='r', marker='.')
plt.scatter(xs_a, ys_a)
plt.scatter(xs_b, ys_b)
plt.gca().set_aspect('equal')
plt.show()
def plot_iscts_two_sensors():
pos = np.array([4., 4,])
sa = [0., 2.]
sb = [8., 2.]
plt.scatter(*sa, s=100)
plt.scatter(*sb, s=100)
_plot_iscts(pos, sa, sb, N=4)
def plot_iscts_two_sensors_changed_sensors():
sa = [3, 4]
sb = [3, 7]
pos= np.array([3., 3.])
plt.scatter(*sa, s=100)
plt.scatter(*sb, s=100)
_plot_iscts(pos, sa, sb, N=5)
if __name__ == '__main__':
#show_2d_transform()

View File

@ -9,6 +9,8 @@ rm --f Kalman_and_Bayesian_Filters_in_Python.ipynb
rm --f Kalman_and_Bayesian_Filters_in_Python.toc
rm --f Kalman_and_Bayesian_Filters_in_Python.tex
rm --f short.ipynb
rm --f chapter.ipynb
rm --f chapter.pdf
rmdir ./*_files/ 2> /dev/null

View File

@ -5,8 +5,12 @@ rm --f *.toc
rm --f *.aux
rm --f *.log
rm --f *.out
rm --f book.ipynb
rm --f book.toc
rm --f book.tex
rm --f chapter.ipynb
rm --f chapter.pdf
rmdir /S /Q book_files
rmdir /S /Q chapter_files

5
pdf/make_chapter.bat Normal file
View File

@ -0,0 +1,5 @@
cp %1 chapter.ipynb
ipython nbconvert --to latex chapter.ipynb
ipython to_pdf.py chapter.tex

View File

@ -1,6 +1,12 @@
import IPython.nbconvert.exporters.pdf as pdf
import sys
f = open('book.tex', 'r', encoding="iso-8859-1")
if len(sys.argv) == 2:
name = sys.argv[1]
else:
name = 'book.tex'
f = open(name, 'r', encoding="iso-8859-1")
filedata = f.read()
f.close()
@ -11,5 +17,5 @@ f.write(newdata)
f.close()
p = pdf.PDFExporter()
p.run_latex('book.tex')
p.run_latex(name)