Merge pull request #322 from h12w/master

correct the explanatory equation for Kalman gain
Roger Labbe 2020-04-26 22:01:20 -07:00 committed by GitHub
commit 6181e6e4f0

@@ -1951,7 +1951,7 @@
"For the multivariate Kalman filter $\\mathbf K$ is a vector, not a scalar. Here is the equation again: $\\mathbf K = \\mathbf{\\bar PH}^\\mathsf T \\mathbf{S}^{-1}$. Is this a *ratio*? We can think of the inverse of a matrix as linear algebra's way of finding the reciprocal. Division is not defined for matrices, but it is useful to think of it in this way. So we can read the equation for $\\textbf{K}$ as meaning\n",
"\n",
"$$\\begin{aligned} \\mathbf K &\\approx \\frac{\\mathbf{\\bar P}\\mathbf H^\\mathsf T}{\\mathbf{S}} \\\\\n",
"\\mathbf K &\\approx \\frac{\\mathsf{uncertainty}_\\mathsf{prediction}}{\\mathsf{uncertainty}_\\mathsf{measurement}}\\mathbf H^\\mathsf T\n",
"\\mathbf K &\\approx \\frac{\\mathsf{uncertainty}_\\mathsf{prediction}}{\\mathsf{uncertainty}_\\mathsf{prediction} + \\mathsf{uncertainty}_\\mathsf{measurement}}\\mathbf H^\\mathsf T\n",
"\\end{aligned}$$\n",
"\n",
"The Kalman gain equation computes a ratio based on how much we trust the prediction vs the measurement. We did the same thing in every prior chapter. The equation is complicated because we are doing this in multiple dimensions via matrices, but the concept is simple. The $\\mathbf H^\\mathsf T$ term is less clear, I'll explain it soon. If you ignore that term the equation for the Kalman gain is the same as the univariate case: divide the uncertainty of the prior with the of the sum of the uncertainty of the prior and measurement.\n",