From 85ec820018ac83bad1c285e28f1f3d9cf71bb439 Mon Sep 17 00:00:00 2001
From: Roger Labbe
Date: Mon, 26 May 2014 11:23:51 -0700
Subject: [PATCH] Improved proof of kalman is g-h, by incorporating variance
 of estimate.

---
 Kalman_Filters.ipynb | 72 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 70 insertions(+), 2 deletions(-)

diff --git a/Kalman_Filters.ipynb b/Kalman_Filters.ipynb
index 9884eda..3d6db57 100644
--- a/Kalman_Filters.ipynb
+++ b/Kalman_Filters.ipynb
@@ -1,7 +1,7 @@
 {
  "metadata": {
   "name": "",
-  "signature": "sha256:b52c7ef7a1a3bbbe84133e8c8ff6477ee050c6647736e3e9a5955927786e1f71"
+  "signature": "sha256:0c2ff9fb723e44f18867f7a2ac1cee2ec1b6a9843eda41701fd315ebf12f63d5"
 },
 "nbformat": 3,
 "nbformat_minor": 0,
@@ -1179,10 +1179,78 @@ "cell_type": "markdown",
      "metadata": {},
      "source": [
+      "> Before I go on, I want to emphasize that this code fully implements a 1D Kalman filter. If you have tried to read the literature, you are perhaps surprised, because this looks nothing like the endless pages of complex math in those books. To be fair, the math gets a bit more complicated in multiple dimensions, but not by much. So long as we worry about *using* the equations rather than *deriving* them, we can create Kalman filters without a lot of effort. Moreover, I hope you'll agree that you have a decent intuitive grasp of what is happening. We represent our beliefs with Gaussians, and our beliefs get better over time because more measurements mean more data to work with. \"Measure twice, cut once!\""
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "### Relationship to the g-h Filter\n",
       "\n",
-      "> Before I go on, I want to emphasize that this code fully implements a 1D Kalman filter. If you have tried to read the literatue, you are perhaps surprised, because this looks nothing like the complex, endless pages of math in those books. To be fair, the math gets a bit more complicated in multiple dimensions, but not by much. So long as we worry about *using* the equations rather than *deriving* them we can create Kalman filters without a lot of effort. Moreover, I hope you'll agree that you have a decent intuitive grasp of what is happening. We represent our beliefs with Gaussians, and our beliefs get better over time because more measurement means more data to work with. \"Measure twice, cut once!\"\n",
       "\n",
       "\n",
+      "In the first chapter I stated that the Kalman filter is a form of g-h filter. However, so far we have been reasoning with Gaussians and probabilities, and have not used any of the reasoning or equations from the first chapter. A trivial amount of algebra will reveal the relationship, so let's do that now.\n",
+      "\n",
+      "The equation for our estimate is:\n",
+      "\n",
+      "$$\n",
+      "\mu_{x'}=\frac{\sigma_1^2 \mu_2 + \sigma_2^2 \mu_1} {\sigma_1^2 + \sigma_2^2}\n",
+      "$$\n",
+      "\n",
+      "which I will make more friendly for our eyes by substituting $a = \sigma_1^2$, $b = \sigma_2^2$, $x = \mu_1$, and $y = \mu_2$:\n",
+      "\n",
+      "$$\n",
+      "\mu_{x'}=\frac{ya + xb} {a+b}\n",
+      "$$\n",
+      "\n",
+      "We can easily put this into the g-h form with the following algebra, starting by adding the zero term $(x-x)$:\n",
+      "\n",
+      "$$\n",
+      "\begin{align*}\n",
+      "\mu_{x'}&=(x-x) + \frac{ya + xb} {a+b} \\\n",
+      "\mu_{x'}&=x-\frac{a+b}{a+b}x + \frac{ya + xb} {a+b} \\\n",
+      "\mu_{x'}&=x +\frac{-x(a+b) + xb+ya}{a+b} \\\n",
+      "\mu_{x'}&=x+ \frac{-xa+ya}{a+b} \\\n",
+      "\mu_{x'}&=x+ \frac{a}{a+b}(y-x)\\\n",
+      "\end{align*}\n",
+      "$$\n",
+      "\n",
+      "We are almost done, but recall that the variance of the estimate is given by\n",
+      "\n",
+      "$$\n",
+      "\begin{align*}\n",
+      "{\sigma_{x'}^2} &= \frac{1}{ \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}}\\\n",
+      "&= \frac{1}{ \frac{1}{a} + \frac{1}{b}}\n",
+      "\end{align*}\n",
+      "$$\n",
+      "\n",
+      "We can incorporate that term into our equation above by observing that\n",
+      "\n",
+      "$$\n",
+      "\begin{align*}\n",
+      "\frac{a}{a+b} &= \frac{a/a}{(a+b)/a} = \frac{1}{(a+b)/a}\\\n",
+      "&= \frac{1}{1 + \frac{b}{a}} = \frac{1}{\frac{b}{b} + \frac{b}{a}}\\\n",
+      "&= \frac{1}{b}\frac{1}{\frac{1}{b} + \frac{1}{a}} \\\n",
+      "&= \frac{\sigma^2_{x'}}{b}\n",
+      "\end{align*}\n",
+      "$$\n",
+      "\n",
+      "We can tie all of this together with\n",
+      "\n",
+      "$$\n",
+      "\begin{align*}\n",
+      "\mu_{x'}&=x+ \frac{a}{a+b}(y-x)\\\n",
+      "&= x + \frac{\sigma^2_{x'}}{b}(y-x) \\\n",
+      "&= x + g(y-x)\blacksquare\n",
+      "\end{align*}\n",
+      "$$\n",
+      "\n",
+      "where\n",
+      "\n",
+      "$$g = \frac{\sigma^2_{x'}}{\sigma^2_{y}}$$\n",
+      "\n",
+      "The end result is that we multiply the residual between the measurement and our previous estimate by a constant, and add that product to the previous estimate. This is the *g* equation of the g-h filter, with *g* equal to the variance of the new estimate divided by the variance of the measurement. Of course, in this case *g* is not truly a constant; it varies at each time step as the variances change. But the form of the equation is the same, and we can derive the formula for *h* in the same way."
+     ]
+    },
+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
       "#####Excercise:\n",
       "Modify the values of *movement_error* and *sensor_error* and note the effect on the filter and on the variance. Which has a larger effect on the value that variance converges to. For example, which results in a smaller variance:\n",
       "\n",
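+    {
+     "cell_type": "markdown",
+     "metadata": {},
+     "source": [
+      "We can also check the algebra above numerically. The cell below is a minimal sketch rather than code from earlier in the chapter: the helper `update()` simply implements the Gaussian multiplication equations from this section, and the prior and measurement values are arbitrary. The mean computed by multiplying the two Gaussians should match the mean computed with the g-h form $x + g(y-x)$."
+     ]
+    },
+    {
+     "cell_type": "code",
+     "collapsed": false,
+     "input": [
+      "def update(mean1, var1, mean2, var2):\n",
+      "    # multiply two Gaussians: returns the mean and variance of the\n",
+      "    # new estimate, using the equations derived in this section\n",
+      "    mean = (var1*mean2 + var2*mean1) / (var1 + var2)\n",
+      "    var = 1. / (1./var1 + 1./var2)\n",
+      "    return (mean, var)\n",
+      "\n",
+      "x, a = 10., 4.  # prior estimate: mu_1 and sigma_1^2 (arbitrary values)\n",
+      "y, b = 12., 1.  # measurement:    mu_2 and sigma_2^2 (arbitrary values)\n",
+      "\n",
+      "mean, var = update(x, a, y, b)\n",
+      "g = var / b     # variance of new estimate / variance of measurement\n",
+      "\n",
+      "print(mean)           # 11.6, from multiplying the Gaussians\n",
+      "print(x + g*(y - x))  # 11.6, from the g-h form\n"
+     ],
+     "language": "python",
+     "metadata": {},
+     "outputs": []
+    },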