Lots of material on resampling.
Happy 4th of July!
This commit is contained in:
parent
afe168c7b3
commit
f8896bbb80
@ -3465,6 +3465,31 @@
"I show this to reiterate the importance of using Kalman filters to compute velocities, accelerations, and even higher order values. I use a Kalman filter even when my measurements are so accurate that I am willing to use them unfiltered because it allows me accurate estimates for velocities and accelerations."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Discussion and Summary\n",
"\n",
"We extended Gaussians into multiple dimensions to allow us to simultaneously handle multiple dimensions, both spacial and others (velocity, etc). This led us to use linear algebra. We made a key insight: unobserved variables have the ability to significantly increase the accuracy of the filter. This counterintuitive result is because the unobserved variables are correlated with the observed variables. Intuitively, I think of it as *triangulating* onto the solution - only a small number of states can be true for a given combination of a velocity Gaussian and position Gaussian, roughtly described by their intersection. But of course this works for non-spatial problms as well. \n",
|
||||
"\n",
|
||||
"There is one important caveat about unobserved variables. It is easy to construct a filter that produces estimates for unobserved variables. Heck, I could write a filter that estimates the color of a tracked car. The designer must verify that these variables are being estimated *correctly*. If you do not have a velocity sensor and yet are estimating velocity, you will need to test that the velocity estimates are correct.; do not trust that they are. For example, suppose the velocity has a periodic component to it - it looks like a sine wave. If your sample time is less than 2 times the frequency you will not be able to accurately estimate the velocity (due to Nyquist's Theorem). Imagine that the sample period is equal to the frequency of the velocity. The filter will report that the velocity is constant because each time it will be sampling the system at the same point on the sin wave. \n",
"\n",
"Initialization poses a particularly difficult problem for unobserved variables. If you start with a bad initialization the filter can usually recover the observed variables, but may struggle and fail with the unobserved one. Estimating unobserved variables is a powerful tool, but a dangerous one. \n",
|
||||
"\n",
|
||||
"I established a series of steps for designing a Kalman filter. These are not a usual part of the Kalman filter literature, and are only meant as a guide, not a prescription. Designing for a hard problem is an iterative process. You make a guess at the state vector, work out what your measurement and state models are, run some tests, and then alter the design as necessary. \n",
|
||||
"\n",
|
||||
"The design of $\\mathbf{R}$ and $\\mathbf{Q}$ is often quite challenging. I've made it appear to be quite scientific. Your sensor has Gaussian noise of $\\mathcal{N}(0, \\sigma^2)$, so set $\\mathbf{R}=\\sigma^2$. Easy! But I told you a dirty lie. Sensors are not Gaussian. We started the book with a bathroom scale. Suppose $\\sigma=1$ kg, and you try to weigh something that weighs 0.5 kg. Theory tells us we will get negative measurements, but of course the scale will never report weights less than zero. Real world sensors typically have *fat tails*, which mean they have more outlying noise than theory predicts. In some cases, such as with the scale, one or both tails are truncated.\n",
"\n",
"The case with $\\mathbf{Q}$ is more dire. I hope you were skeptical when I blithely assigned a noise matrix to my prediction about the movements of a dog. Who can say what a dog will do next? But this holds true for more other systems. The GPS in my car doesn't know about hills, the outside winds, or my terrible driving skills. Yet the filter in it requires a precise number to encapsulate all of that information, and it need to work while I drive off-road in the desert, and when Formula One champion Michael Schumacher drives on a closed circuit track.\n",
|
||||
"\n",
|
||||
"These problems led some researchers and engineers to derogatorily call the Kalman filter a 'ball of mud'. In other words, it doesn't always hold together so well. Another term to know - Kalman filters can become **smug**. Their estimates are based solely on what you tell it the noises are. Those values can lead to overly confident estimates. $\\mathbf{P}$ gets smaller and smaller while the filter is actually becoming more and more inaccurate! In the worst case the filter diverges. We will see a lot of that when we start studying nonlinear filters. \n",
"\n",
"The reality is that the Kalman filter is a mathematical model of the world. The output is only as accurate as that model. To make the math tractable we had to make some assumptions. I We assume that the sensors and motion model have Gaussian noise. We assume that everything is linear. If that is true, the Kalman filter is *optimal* in a least squares sense. This means that there is no way to make a better estimate than what the filter gives us. However, these assumption are almost never true, and hence the model is necessarily limited, and a working filter is rarely optimal.\n",
|
||||
"\n",
|
||||
"In later chapters we will deal with the problem of nonlinearity. For now I want you to understand that designing the matrices of a linear filter is an experimental procedure more than a mathematical one. Use math to establish the initial values, but then you need to experiment. If there is a lot of unaccounted noise in the world (wind, etc) you may have to make $\\mathbf{Q}$ larger. If you make it too large the filter fails to respond quickly to changes. In the **Adaptive Filters** chapter you will learn some alternative techniques that allow you to change the filter design in real time in response to the inputs and performance, but for now you need to find one set of values that works for the conditions your filter will encounter. Noise matrices for an acrobatic plane might be different if the pilot is a student than if the pilot is an expert as the dynamics will be quite different."
]
},
{
"cell_type": "markdown",
"metadata": {},
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@ -70,33 +70,22 @@ class RobotLocalizationParticleFilter(object):


    def resample(self):
        # multinomial resampling: map N uniform draws through the
        # cumulative distribution of the weights
        cumulative_sum = np.cumsum(self.weights)
        cumulative_sum[-1] = 1.  # avoid round-off error
        indexes = np.searchsorted(cumulative_sum, random(self.N))

        # resample according to indexes
        self.particles = self.particles[indexes]
        self.weights = self.weights[indexes]
        self.weights /= np.sum(self.weights)  # normalize


    def resample_from_index(self, indexes):
        assert len(indexes) == self.N

        self.particles = self.particles[indexes]
        self.weights = self.weights[indexes]
        self.weights /= np.sum(self.weights)  # normalize


    def estimate(self):

@ -1,3 +1,7 @@
from filterpy.monte_carlo import stratified_resample, residual_resample
import matplotlib as mpl
import matplotlib.pyplot as plt

import numpy as np
from numpy.random import randn, random, uniform, multivariate_normal, seed
#from nonlinear_plots import plot_monte_carlo_mean
@ -7,7 +11,6 @@ from RobotLocalizationParticleFilter import *


class ParticleFilter(object):

    def __init__(self, N, x_dim, y_dim):
@ -24,7 +27,6 @@ class ParticleFilter(object):
        self.particles[:, 2] = uniform(0, 2*np.pi, size=N)


    def predict(self, u, std):
        """ move according to control input u with noise std"""

@ -81,7 +83,6 @@ class ParticleFilter(object):
        return mu, var


def plot_random_pd():
    def norm(x, x0, sigma):
        return np.exp(-0.5 * (x - x0) ** 2 / sigma ** 2)
@ -101,7 +102,6 @@ def plot_random_pd():
    plt.plot(x, y2)


def plot_monte_carlo_ukf():

    def f(x,y):
@ -164,7 +164,6 @@ def plot_pf(pf, xlim=100, ylim=100, weights=True):
    plt.ylim(0, ylim)


def Gaussian(mu, sigma, x):

    # calculates the probability of x for 1-dim Gaussian with mean mu and var. sigma
@ -175,6 +174,7 @@ def Gaussian(mu, sigma, x):

    return g


def test_gaussian(N):
    for i in range(N):
        mean, std, x = randn(3)
@ -224,8 +224,6 @@ def show_two_pf_plots():
    plt.tight_layout()


def test_pf():

    #seed(1234)
@ -261,7 +259,6 @@ def test_pf():
    plt.scatter(mu[0], mu[1], color='g', s=100)
    plt.tight_layout()
    plt.pause(dt)
    #print(mu - pos)


def test_pf2():
@ -292,6 +289,155 @@ def test_pf2():
    plt.plot(xs[:, 0], xs[:, 1])
    plt.show()


def plot_cumsum(a):

    N = len(a)

    cmap = mpl.colors.ListedColormap([[0., .4, 1.],
                                      [0., .8, 1.],
                                      [1., .8, 0.],
                                      [1., .4, 0.]]*(int(N/4) + 1))
    cumsum = np.cumsum(np.asarray(a) / np.sum(a))
    cumsum = np.insert(cumsum, 0, 0)

    fig = plt.figure(figsize=(6, 3))
    ax = fig.add_axes([0.05, 0.475, 0.9, 0.15])
    norm = mpl.colors.BoundaryNorm(cumsum, cmap.N)
    bar = mpl.colorbar.ColorbarBase(ax, cmap=cmap,
                                    norm=norm,
                                    drawedges=False,
                                    spacing='proportional',
                                    orientation='horizontal')
    if N > 10:
        bar.set_ticks([])
    plt.show()


def plot_stratified_resample(a):
    N = len(a)

    cmap = mpl.colors.ListedColormap([[0., .4, 1.],
                                      [0., .8, 1.],
                                      [1., .8, 0.],
                                      [1., .4, 0.]]*(int(N/4) + 1))
    cumsum = np.cumsum(np.asarray(a) / np.sum(a))
    cumsum = np.insert(cumsum, 0, 0)

    fig = plt.figure(figsize=(6, 3))
    ax = fig.add_axes([0.05, 0.475, 0.9, 0.15])
    norm = mpl.colors.BoundaryNorm(cumsum, cmap.N)
    bar = mpl.colorbar.ColorbarBase(ax, cmap=cmap,
                                    norm=norm,
                                    drawedges=False,
                                    spacing='proportional',
                                    orientation='horizontal')
    xs = np.linspace(0., 1.-1./N, N)
    ax.vlines(xs, 0, 1, lw=2)

    # make N subdivisions, and choose a random position within each one
    b = (random(N) + range(N)) / N
    plt.scatter(b, [.5]*len(b), s=60, facecolor='k', edgecolor='k')
    bar.set_ticks([])
    plt.title('stratified resampling')
    plt.show()


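# A minimal sketch of the stratified scheme itself (illustrative only;
# filterpy.monte_carlo.stratified_resample, imported above, is the real
# implementation used elsewhere):
def stratified_resample_sketch(weights):
    # draw one uniform sample inside each of N equal subdivisions of [0, 1)
    N = len(weights)
    positions = (random(N) + np.arange(N)) / N
    return np.searchsorted(np.cumsum(weights), positions)

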
def plot_systematic_resample(a):
    N = len(a)

    cmap = mpl.colors.ListedColormap([[0., .4, 1.],
                                      [0., .8, 1.],
                                      [1., .8, 0.],
                                      [1., .4, 0.]]*(int(N/4) + 1))
    cumsum = np.cumsum(np.asarray(a) / np.sum(a))
    cumsum = np.insert(cumsum, 0, 0)

    fig = plt.figure(figsize=(6, 3))
    ax = fig.add_axes([0.05, 0.475, 0.9, 0.15])
    norm = mpl.colors.BoundaryNorm(cumsum, cmap.N)
    bar = mpl.colorbar.ColorbarBase(ax, cmap=cmap,
                                    norm=norm,
                                    drawedges=False,
                                    spacing='proportional',
                                    orientation='horizontal')
    xs = np.linspace(0., 1.-1./N, N)
    ax.vlines(xs, 0, 1, lw=2)

    # make N subdivisions, and use the same random offset within each one
    b = (random() + np.array(range(N))) / N
    plt.scatter(b, [.5]*len(b), s=60, facecolor='k', edgecolor='k')
    bar.set_ticks([])
    plt.title('systematic resampling')
    plt.show()


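# A minimal sketch of the systematic scheme (illustrative only;
# filterpy.monte_carlo also provides systematic_resample):
def systematic_resample_sketch(weights):
    # a single uniform draw, reused at the same offset in every subdivision
    N = len(weights)
    positions = (random() + np.arange(N)) / N
    return np.searchsorted(np.cumsum(weights), positions)

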
def plot_multinomial_resample(a):
    N = len(a)

    cmap = mpl.colors.ListedColormap([[0., .4, 1.],
                                      [0., .8, 1.],
                                      [1., .8, 0.],
                                      [1., .4, 0.]]*(int(N/4) + 1))
    cumsum = np.cumsum(np.asarray(a) / np.sum(a))
    cumsum = np.insert(cumsum, 0, 0)

    fig = plt.figure(figsize=(6, 3))
    ax = fig.add_axes([0.05, 0.475, 0.9, 0.15])
    norm = mpl.colors.BoundaryNorm(cumsum, cmap.N)
    bar = mpl.colorbar.ColorbarBase(ax, cmap=cmap,
                                    norm=norm,
                                    drawedges=False,
                                    spacing='proportional',
                                    orientation='horizontal')

    # draw N positions uniformly at random over the whole [0, 1) interval
    b = random(N)
    plt.scatter(b, [.5]*len(b), s=60, facecolor='k', edgecolor='k')
    bar.set_ticks([])
    plt.title('multinomial resampling')
    plt.show()


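# A minimal sketch of the multinomial scheme (illustrative only;
# filterpy.monte_carlo also provides multinomial_resample):
def multinomial_resample_sketch(weights):
    # N independent uniform draws over the whole [0, 1) interval
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.  # avoid round-off error
    return np.searchsorted(cumsum, random(len(weights)))

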
def plot_residual_resample(a):
    N = len(a)

    a_norm = np.asarray(a) / np.sum(a)
    cumsum = np.cumsum(a_norm)
    cumsum = np.insert(cumsum, 0, 0)

    cmap = mpl.colors.ListedColormap([[0., .4, 1.],
                                      [0., .8, 1.],
                                      [1., .8, 0.],
                                      [1., .4, 0.]]*(int(N/4) + 1))

    fig = plt.figure(figsize=(6, 3))
    ax = fig.add_axes([0.05, 0.475, 0.9, 0.15])
    norm = mpl.colors.BoundaryNorm(cumsum, cmap.N)
    bar = mpl.colorbar.ColorbarBase(ax, cmap=cmap,
                                    norm=norm,
                                    drawedges=False,
                                    spacing='proportional',
                                    orientation='horizontal')

    indexes = residual_resample(a_norm)
    bins = np.bincount(indexes)
    for i in range(N):
        # number of samples drawn for particle i; bins may be shorter than N
        n = bins[i] if i < len(bins) else 0
        if n > 0:
            b = np.linspace(cumsum[i], cumsum[i+1], n+2)[1:-1]
            plt.scatter(b, [.5]*len(b), s=60, facecolor='k', edgecolor='k')
    bar.set_ticks([])
    plt.title('residual resampling')
    plt.show()


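# A minimal sketch of the residual scheme (illustrative only; the
# residual_resample used above comes from filterpy.monte_carlo): each
# particle is copied floor(N*w) times deterministically, then the
# remaining slots are filled by multinomial draws from the residuals.
def residual_resample_sketch(weights):
    w = np.asarray(weights) / np.sum(weights)
    N = len(w)
    copies = (N * w).astype(int)               # deterministic copies
    indexes = np.repeat(np.arange(N), copies)
    k = N - copies.sum()                       # slots still to fill
    if k > 0:
        residual = w - copies / N              # leftover weight per particle
        residual /= residual.sum()
        extra = np.searchsorted(np.cumsum(residual), random(k))
        indexes = np.concatenate([indexes, extra])
    return indexes

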
if __name__ == '__main__':
    plot_residual_resample([.1, .2, .3, .4, .2, .3, .1])

    #example()
    #show_two_pf_plots()
    test_pf()

    a = [.1, .2, .1, .6]
    #plot_cumsum(a)
    #test_pf()