More math material for KF.

Added better explanation of P = FPF' + Q.

Moved conversion of multivariate equations to univariate eqs. to the
math chapter.

Moved the walkthrough of KalmanFilter to an appendix.
This commit is contained in:
Roger Labbe 2015-05-10 18:28:45 -07:00
parent 7c3fd7a2a6
commit a0b7a50b05
7 changed files with 2397 additions and 1763 deletions

@ -591,6 +591,140 @@
"$\\mathbf{FPF}^\\mathsf{T}$ is the way we put $\\mathbf{P}$ into the process space using linear algebra so that we can add the process noise $\\mathbf{Q}$ to it."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Converting the Multivariate Equations to the Univariate Case\n",
"\n",
"\n",
"As it turns out the Kalman filter equations are quite easy to deal with in one dimension, so let's do the mathematical proof. \n",
"\n",
"> **Note:** This section will provide you with a strong intuition into what the Kalman filter equations are actually doing. While this section is not strictly required, I recommend reading this section carefully as it should make the rest of the material easier to understand. It is not merely a proof of correctness that you would normally want to skip past! The equations look complicated, but they are actually doing something quite simple.\n",
"\n",
"Let's start with the predict step, which is slightly easier. Here are the multivariate equations.\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"\\mathbf{x}^- &= \\mathbf{F x} + \\mathbf{B u} \\\\\n",
"\\mathbf{P^-} &= \\mathbf{FP{F}}^\\mathsf{T} + \\mathbf{Q}\n",
"\\end{aligned}\n",
"$$\n",
"\n",
"The state $\\mathbf{x}$ only has one variable, so it is a $1\\times 1$ matrix. Our motion $\\mathbf{u}$ is also be a $1\\times 1$ matrix. Therefore, $\\mathbf{F}$ and $\\mathbf{B}$ must also be $1\\times 1$ matrices. That means that they are all scalars, and we can write\n",
"\n",
"$$x = Fx + Bu$$\n",
"\n",
"Here the variables are not bold, denoting that they are not matrices or vectors. \n",
"\n",
"Our state transition is simple - the next state is the same as this state, so $F=1$. The same holds for the motion transition, so, $B=1$. Thus we have\n",
"\n",
"$$x = x + u$$\n",
"\n",
"which is equivalent to the Gaussian equation from the last chapter\n",
"\n",
"$$ \\mu = \\mu_1+\\mu_2$$\n",
"\n",
"Hopefully the general process is clear, so now I will go a bit faster on the rest. Our other equation for the predict step is\n",
"\n",
"$$\\mathbf{P}^- = \\mathbf{FP{F}}^\\mathsf{T} + \\mathbf{Q}$$\n",
"\n",
"Again, since our state only has one variable $\\mathbf{P}$ and $\\mathbf{Q}$ must also be $1\\times 1$ matrix, which we can treat as scalars, yielding \n",
"\n",
"$$P^- = FPF^\\mathsf{T} + Q$$\n",
"\n",
"We already know $F=1$. The transpose of a scalar is the scalar, so $F^\\mathsf{T} = 1$. This yields\n",
"\n",
"$$P^- = P + Q$$\n",
"\n",
"which is equivalent to the Gaussian equation of \n",
"\n",
"$$\\sigma^2 = \\sigma_1^2 + \\sigma_2^2$$\n",
"\n",
"This proves that the multivariate equations are performing the same math as the univariate equations for the case of the dimension being 1."
]
},
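{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick numeric check - my own sketch, not part of the original text - we can confirm that the $1\\times 1$ matrix form of the predict step computes the same value as the scalar form:\n",
"\n",
"    import numpy as np\n",
"    P = np.array([[3.]])   # prior variance, as a 1x1 matrix\n",
"    Q = np.array([[2.]])   # process noise\n",
"    F = np.array([[1.]])   # state transition\n",
"    print(F.dot(P).dot(F.T) + Q)   # multivariate FPF' + Q -> [[ 5.]]\n",
"    print(3. + 2.)                 # univariate sigma_1^2 + sigma_2^2 -> 5.0\n"
]
},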
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here our our multivariate Kalman filter equations for the update step.\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"\\textbf{y} &= \\mathbf{z} - \\mathbf{H x^-}\\\\\n",
"\\mathbf{K}&= \\mathbf{P^-H}^\\mathsf{T} (\\mathbf{HP^-H}^\\mathsf{T} + \\mathbf{R})^{-1} \\\\\n",
"\\mathbf{x}&=\\mathbf{x}^- +\\mathbf{K\\textbf{y}} \\\\\n",
"\\mathbf{P}&= (\\mathbf{I}-\\mathbf{KH})\\mathbf{P^-}\n",
"\\end{aligned}\n",
"$$\n",
"\n",
"As above, all of the matrices become scalars. $H$ defines how we convert from a position to a measurement. Both are positions, so there is no conversion, and thus $H=1$. Let's substitute in our known values and convert to scalar in one step. One final thing you need to know - division is scalar's analogous operation for matrix inversion, so we will convert the matrix inversion to division.\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"y &= z - x^-\\\\\n",
"K &=P^- / (P^- + R) \\\\\n",
"x &=x +Ky \\\\\n",
"P &= (1-K)P^-\n",
"\\end{aligned}\n",
"$$\n",
"\n",
"Before we continue with the proof, I want you to look at those equations to recognize what a simple concept these equations implement. The residual $y$ is nothing more than the measurement minus the previous state. The gain $K$ is scaled based on how certain we are about the last prediction vs how certain we are about the measurement. We choose a new state $x$ based on the old value of $x$ plus the scaled value of the residual. Finally, we update the uncertainty based on how certain we are about the measurement. Algorithmically this should sound exactly like what we did in the last chapter.\n",
"\n",
"So let's finish off the algebra to prove this. It's straightforward, and not at all necessary for you to learn unless you are interested. Feel free to skim ahead to the last paragraph in this section if you prefer skipping the algebra.\n",
"\n",
"Recall that the univariate equations for the update step are:\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"\\mu &=\\frac{\\sigma_1^2 \\mu_2 + \\sigma_2^2 \\mu_1} {\\sigma_1^2 + \\sigma_2^2}, \\\\\n",
"\\sigma^2 &= \\frac{1}{\\frac{1}{\\sigma_1^2} + \\frac{1}{\\sigma_2^2}}\n",
"\\end{aligned}\n",
"$$\n",
"\n",
"Here we will say that $\\mu_1$ is the state $x$, and $\\mu_2$ is the measurement $z$. That is entirely arbitrary, we could have chosen the opposite assignment. Thus it follows that that $\\sigma_1^2$ is the state uncertainty $P$, and $\\sigma_2^2$ is the measurement noise $R$. Let's substitute those in.\n",
"\n",
"$$ \\mu = \\frac{Pz + Rx}{P+R} \\\\\n",
"\\sigma^2 = \\frac{1}{\\frac{1}{P} + \\frac{1}{R}}\n",
"$$\n",
"\n",
"I will handle $\\mu$ first. The corresponding equation in the multivariate case is\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"x &= x + Ky \\\\\n",
"&= x + \\frac{P}{P+R}(z-x) \\\\\n",
"&= \\frac{P+R}{P+R}x + \\frac{Pz - Px}{P+R} \\\\\n",
"&= \\frac{Px + Rx + Pz - Px}{P+R} \\\\\n",
"&= \\frac{Pz + Rx}{P+R}\n",
"\\end{aligned}\n",
"$$"
]
},
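{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick numeric check of that result (again my own sketch): pick arbitrary values and confirm that the Kalman form $x + Ky$ and the Gaussian product form $(Pz + Rx)/(P+R)$ agree:\n",
"\n",
"    P, R, x, z = 4., 3., 10., 12.\n",
"    K = P / (P + R)\n",
"    print(x + K*(z - x))           # Kalman form   -> 11.142857...\n",
"    print((P*z + R*x) / (P + R))   # Gaussian form -> 11.142857...\n"
]
},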
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's look at $\\sigma^2$. The corresponding equation in the multivariate case is\n",
"\n",
"$$ \n",
"\\begin{aligned}\n",
"P &= (1-K)P \\\\\n",
"&= (1-\\frac{P}{P+R})P \\\\\n",
"&= (\\frac{P+R}{P+R}-\\frac{P}{P+R})P \\\\\n",
"&= (\\frac{P+R-P}{P+R})P \\\\\n",
"&= \\frac{RP}{P+R}\\\\\n",
"&= \\frac{1}{\\frac{P+R}{RP}}\\\\\n",
"&= \\frac{1}{\\frac{R}{RP} + \\frac{P}{RP}} \\\\\n",
"&= \\frac{1}{\\frac{1}{P} + \\frac{1}{R}}\n",
"\\quad\\blacksquare\n",
"\\end{aligned}\n",
"$$\n",
"\n",
"So we have proven that the multivariate equations are equivalent to the univariate equations when we only have one state variable. I'll close this section by recognizing one quibble - I hand waved my assertion that $H=1$ and $F=1$. In general we know this is not true. For example, a digital thermometer may provide measurement in volts, and we need to convert that to temperature, and we use $H$ to do that conversion. I left that issue out of the last chapter to keep the explanation as simple and streamlined as possible. It is very straightforward to add that generalization to the equations of the last chapter, redo the algebra above, and still have the same results. In practice we do not use the equations in the last chapter to perform Kalman filtering due to the material in the next section which demonstrates how much better the Kalman filter performs when we include unobserved variables. So I prefer to leave the equations from the last chapter in their simplest form so that they economically represent our central ideas without any extra complications."
]
},
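{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same sort of numeric check (my own sketch, not from the original text) works for the variance result:\n",
"\n",
"    P, R = 4., 3.\n",
"    K = P / (P + R)\n",
"    print((1. - K) * P)          # (1-K)P form     -> 1.7142857...\n",
"    print(1. / (1./P + 1./R))    # reciprocal form -> 1.7142857...\n"
]
},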
{
"cell_type": "markdown",
"metadata": {},

@ -0,0 +1,476 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Walking through the Kalman Filter code"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"<style>\n",
"@import url('http://fonts.googleapis.com/css?family=Source+Code+Pro');\n",
"@import url('http://fonts.googleapis.com/css?family=Vollkorn');\n",
"@import url('http://fonts.googleapis.com/css?family=Arimo');\n",
"\n",
" div.cell{\n",
" width: 850px;\n",
" margin-left: 0% !important;\n",
" margin-right: auto;\n",
" }\n",
" div.text_cell code {\n",
" background: transparent;\n",
" color: #000000;\n",
" font-weight: 600;\n",
" font-size: 11pt;\n",
" font-style: bold;\n",
" font-family: 'Source Code Pro', Consolas, monocco, monospace;\n",
" }\n",
" h1 {\n",
" font-family: 'Open sans',verdana,arial,sans-serif;\n",
"\t}\n",
"\t\n",
" div.input_area {\n",
" background: #F6F6F9;\n",
" border: 1px solid #586e75;\n",
" }\n",
"\n",
" .text_cell_render h1 {\n",
" font-weight: 200;\n",
" font-size: 30pt;\n",
" line-height: 100%;\n",
" color:#c76c0c;\n",
" margin-bottom: 0.5em;\n",
" margin-top: 1em;\n",
" display: block;\n",
" white-space: wrap;\n",
" } \n",
" h2 {\n",
" font-family: 'Open sans',verdana,arial,sans-serif;\n",
" }\n",
" .text_cell_render h2 {\n",
" font-weight: 200;\n",
" font-size: 16pt;\n",
" font-style: italic;\n",
" line-height: 100%;\n",
" color:#c76c0c;\n",
" margin-bottom: 0.5em;\n",
" margin-top: 1.5em;\n",
" display: inline;\n",
" white-space: wrap;\n",
" } \n",
" h3 {\n",
" font-family: 'Open sans',verdana,arial,sans-serif;\n",
" }\n",
" .text_cell_render h3 {\n",
" font-weight: 200;\n",
" font-size: 14pt;\n",
" line-height: 100%;\n",
" color:#d77c0c;\n",
" margin-bottom: 0.5em;\n",
" margin-top: 2em;\n",
" display: block;\n",
" white-space: nowrap;\n",
" }\n",
" h4 {\n",
" font-family: 'Open sans',verdana,arial,sans-serif;\n",
" }\n",
" .text_cell_render h4 {\n",
" font-weight: 100;\n",
" font-size: 14pt;\n",
" color:#d77c0c;\n",
" margin-bottom: 0.5em;\n",
" margin-top: 0.5em;\n",
" display: block;\n",
" white-space: nowrap;\n",
" }\n",
" h5 {\n",
" font-family: 'Open sans',verdana,arial,sans-serif;\n",
" }\n",
" .text_cell_render h5 {\n",
" font-weight: 200;\n",
" font-style: normal;\n",
" color: #1d3b84;\n",
" font-size: 16pt;\n",
" margin-bottom: 0em;\n",
" margin-top: 0.5em;\n",
" display: block;\n",
" white-space: nowrap;\n",
" }\n",
" div.text_cell_render{\n",
" font-family: 'Arimo',verdana,arial,sans-serif;\n",
" line-height: 125%;\n",
" font-size: 120%;\n",
" width:740px;\n",
" margin-left:auto;\n",
" margin-right:auto;\n",
" text-align:justify;\n",
" text-justify:inter-word;\n",
" }\n",
" div.output_subarea.output_text.output_pyout {\n",
" overflow-x: auto;\n",
" overflow-y: scroll;\n",
" max-height: 50000px;\n",
" }\n",
" div.output_subarea.output_stream.output_stdout.output_text {\n",
" overflow-x: auto;\n",
" overflow-y: scroll;\n",
" max-height: 50000px;\n",
" }\n",
" div.output_wrapper{\n",
" margin-top:0.2em;\n",
" margin-bottom:0.2em;\n",
"}\n",
"\n",
" code{\n",
" font-size: 70%;\n",
" }\n",
" .rendered_html code{\n",
" background-color: transparent;\n",
" }\n",
" ul{\n",
" margin: 2em;\n",
" }\n",
" ul li{\n",
" padding-left: 0.5em; \n",
" margin-bottom: 0.5em; \n",
" margin-top: 0.5em; \n",
" }\n",
" ul li li{\n",
" padding-left: 0.2em; \n",
" margin-bottom: 0.2em; \n",
" margin-top: 0.2em; \n",
" }\n",
" ol{\n",
" margin: 2em;\n",
" }\n",
" ol li{\n",
" padding-left: 0.5em; \n",
" margin-bottom: 0.5em; \n",
" margin-top: 0.5em; \n",
" }\n",
" ul li{\n",
" padding-left: 0.5em; \n",
" margin-bottom: 0.5em; \n",
" margin-top: 0.2em; \n",
" }\n",
" a:link{\n",
" font-weight: bold;\n",
" color:#447adb;\n",
" }\n",
" a:visited{\n",
" font-weight: bold;\n",
" color: #1d3b84;\n",
" }\n",
" a:hover{\n",
" font-weight: bold;\n",
" color: #1d3b84;\n",
" }\n",
" a:focus{\n",
" font-weight: bold;\n",
" color:#447adb;\n",
" }\n",
" a:active{\n",
" font-weight: bold;\n",
" color:#447adb;\n",
" }\n",
" .rendered_html :link {\n",
" text-decoration: underline; \n",
" }\n",
" .rendered_html :hover {\n",
" text-decoration: none; \n",
" }\n",
" .rendered_html :visited {\n",
" text-decoration: none;\n",
" }\n",
" .rendered_html :focus {\n",
" text-decoration: none;\n",
" }\n",
" .rendered_html :active {\n",
" text-decoration: none;\n",
" }\n",
" .warning{\n",
" color: rgb( 240, 20, 20 )\n",
" } \n",
" hr {\n",
" color: #f3f3f3;\n",
" background-color: #f3f3f3;\n",
" height: 1px;\n",
" }\n",
" blockquote{\n",
" display:block;\n",
" background: #fcfcfc;\n",
" border-left: 5px solid #c76c0c;\n",
" font-family: 'Open sans',verdana,arial,sans-serif;\n",
" width:680px;\n",
" padding: 10px 10px 10px 10px;\n",
" text-align:justify;\n",
" text-justify:inter-word;\n",
" }\n",
" blockquote p {\n",
" margin-bottom: 0;\n",
" line-height: 125%;\n",
" font-size: 100%;\n",
" }\n",
"</style>\n",
"<script>\n",
" MathJax.Hub.Config({\n",
" TeX: {\n",
" extensions: [\"AMSmath.js\"]\n",
" },\n",
" tex2jax: {\n",
" inlineMath: [ ['$','$'], [\"\\\\(\",\"\\\\)\"] ],\n",
" displayMath: [ ['$$','$$'], [\"\\\\[\",\"\\\\]\"] ]\n",
" },\n",
" displayAlign: 'center', // Change this to 'center' to center equations.\n",
" \"HTML-CSS\": {\n",
" scale:85,\n",
" styles: {'.MathJax_Display': {\"margin\": 4}}\n",
" }\n",
" });\n",
"</script>\n"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#format the book\n",
"%matplotlib inline\n",
"from __future__ import division, print_function\n",
"import matplotlib.pyplot as plt\n",
"import book_format\n",
"book_format.load_style()"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"** author's note: this code is somewhat old. This section needs to be edited; I would not pay a lot of attention to it right now. **\n",
"\n",
"The kalman filter code that we are using is implemented in my Python library `FilterPy`. If you are interested in the full implementation of the filter you should look in `filterpy\\kalman\\kalman_filter.py`. In the following I will present a simplified implementation of the same code. The code in the library handles issues that are beyond the scope of this chapter, such as numerical stability and support for the extended Kalman filter, subject of a later chapter. \n",
"\n",
"The code is implemented as the class `KalmanFilter`. Some Python programmers are not a fan of object oriented (OO) Python, and eschew classes. I do not intend to enter into that battle other than to say that I have often seen OO abused. Here I use the class to encapsulate the data that is pertinent to the filter so that you do not have to store and pass around a half dozen variables everywhere.\n",
"\n",
"The method `__init__()` is used by Python to create the object. Here is the method \n",
"\n",
" def __init__(self, dim_x, dim_z):\n",
" \"\"\" Create a Kalman filter. You are responsible for setting the \n",
" various state variables to reasonable values; the defaults below will\n",
" not give you a functional filter.\n",
" \n",
" Parameters\n",
" ----------\n",
" dim_x : int\n",
" Number of state variables for the Kalman filter. For example, if\n",
" you are tracking the position and velocity of an object in two\n",
" dimensions, dim_x would be 4.\n",
" \n",
" This is used to set the default size of P, Q, and u\n",
" \n",
" dim_z : int\n",
" Number of of measurement inputs. For example, if the sensor\n",
" provides you with position in (x,y), dim_z would be 2. \n",
" \"\"\"\n",
" \n",
" self.dim_x = dim_x\n",
" self.dim_z = dim_z\n",
"\n",
" self.x = np.zeros((dim_x, 1)) # state\n",
" self.P = np.eye(dim_x) # uncertainty covariance\n",
" self.Q = np.eye(dim_x) # process uncertainty\n",
" self.u = 0 # control input vector\n",
" self.B = np.zeros((dim_x, 1))\n",
" self.F = 0 # state transition matrix\n",
" self.H = 0 # Measurement function\n",
" self.R = np.eye(dim_z) # state uncertainty\n",
"\n",
" # identity matrix. Do not alter this.\n",
" self._I = np.eye(dim_x)\n",
"\n",
"More than anything this method exists to document for you what the variable names are in the filter. To do anything useful with this filter you will have to modify most of these values. Some are set to useful values. For example, `R` is set to an identity matrix; if you want the diagonals of `R` to be 10. you may write (as we did earlier in this chapter) `my_filter.R += 10.`.\n",
"\n",
"The names used for each variable matches the math symbology used in this chapter. Thus, `self.P` is the covariance matrix, `self.x` is the state, and so on.\n",
"\n",
"The predict function implements the predict step of the Kalman equations, which are \n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"\\mathbf{x}^- &= \\mathbf{F x} + \\mathbf{B u} \\\\\n",
"\\mathbf{P}^- &= \\mathbf{FP{F}}^\\mathsf{T} + \\mathbf{Q} \n",
"\\end{aligned}\n",
"$$\n",
"\n",
"The corresponding code is\n",
"\n",
" def predict(self): \n",
" self.x = self.F.dot(self.x) + self.B.dot(self.u)\n",
" self.P = self.F.dot(self.P).dot(self.F.T) + self.Q\n",
"\n",
"I haven't discussed the use of NumPy much until now, but this method illustrates the power of that package. We use NumPy's `array` class to store our data and perform the linear algebra for our filters. `array` implements matrix multiplication using the `.dot()` method; if you use `*` you will get element-wise multiplication. As a heavy user of linear algebra this design is somewhat distressing as I use matrix multiplication far more often than element-wise multiplication. However, this design is due to historical developments in the library and we must live with it. The Python community has recognized this problem, and in Python 3.5 we will have the `@` operator to implement matrix multiplication. \n",
"\n",
"With that in mind, the Python code `self.F.dot(self.x)` implements the math expression $\\mathbf{F x}$.\n",
"\n",
"NumPy's `array` implements matrix transposition by using the `.T` property. Therefore, `F.T` is the python implementation of $\\mathbf{F}^\\mathsf{T}$.\n",
"\n",
"The `update()` method implements the update equations of the Kalman filter, which are\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"\\mathbf{y} &= \\mathbf{z} - \\mathbf{H}\\mathbf{x^-} \\\\\n",
"\\mathbf{K} &= \\mathbf{P} \\mathbf{H}^\\mathsf{T} (\\mathbf{H} \\mathbf{P^-} \\mathbf{H}^\\mathsf{T} +\\mathbf{R})^{-1} \\\\\n",
"\\mathbf{x} &= \\mathbf{x}^- + \\mathbf{K} \\mathbf{y} \\\\\n",
"\\mathbf{P} &= (\\mathbf{I} - \\mathbf{K} \\mathbf{H})\\mathbf{P^-}\n",
"\\end{aligned}\n",
"$$\n",
"\n",
"The corresponding code is:\n",
"\n",
" def update(self, Z, R=None):\n",
" \"\"\"\n",
" Add a new measurement (Z) to the kalman filter. If Z is None, nothing\n",
" is changed.\n",
"\n",
" Optionally provide R to override the measurement noise for this\n",
" one call, otherwise self.R will be used.\n",
"\n",
" self.residual, self.S, and self.K are stored in case you want to\n",
" inspect these variables. Strictly speaking they are not part of the\n",
" output of the Kalman filter, however, it is often useful to know\n",
" what these values are in various scenarios.\n",
" \"\"\"\n",
"\n",
" if Z is None:\n",
" return\n",
"\n",
" if R is None:\n",
" R = self.R\n",
" elif np.isscalar(R):\n",
" R = np.eye(self.dim_z) * R\n",
"\n",
" # error (residual) between measurement and prediction\n",
" self.residual = Z - self.H.dot(self.x)\n",
"\n",
" # project system uncertainty into measurement space\n",
" self.S = self.H.dot(self.P).dot(self.H.T) + R\n",
"\n",
" # map system uncertainty into kalman gain\n",
" self.K = self.P.dot(self.H.T).dot(linalg.inv(self.S))\n",
"\n",
" # predict new x with residual scaled by the kalman gain\n",
" self.x += self.K.dot(self.residual)\n",
"\n",
" KH = self.K.dot(self.H)\n",
" I_KH = self._I - KH\n",
" self.P = (I_KH.dot(self.P.dot(I_KH.T)) +\n",
" self.K.dot(self.R.dot(self.K.T)))\n",
"\n",
"There are a few more complications in this piece of code compared to `predict()` but it should still be quite clear. \n",
"\n",
"The first complication are the lines:\n",
"\n",
" if Z is None:\n",
" return\n",
" \n",
"This just lets you deal with missing data in a natural way. It is typical to use `None` to indicate the absence of data. If there is no data for an update we skip the update equations. This bit of code means you can write something like:\n",
"\n",
" z = read_sensor() # may return None if no data\n",
" my_kf.update(z)\n",
" \n",
"instead of:\n",
" z = read_sensor()\n",
" if z is not None:\n",
" my_kf.update(z)\n",
" \n",
"Reasonable people will argue whether my choice is cleaner, or obscures the fact that we do not update if the measurement is `None`. Having written a lot of avionics code my proclivity is always to do the safe thing. If we pass 'None' into the function I do not want an exception to occur; instead, I want the reasonable thing to happen, which is to just return without doing anything. If you feel that my choice obscures that fact, go ahead and write the explicit `if` statement prior to calling `update()` and get the best of both worlds.\n",
"\n",
"The next bit of code lets you optionally pass in a value to override `R`. It is common for the sensor noise to vary over time; if it does you can pass in the value as the optional parameter `R`.\n",
"\n",
" if R is None:\n",
" R = self.R\n",
" elif np.isscalar(R):\n",
" R = np.eye(self.dim_z) * R\n",
" \n",
"This code will use self.R if you do not provide a value for `R`. If you did provide a value, it will check if you provided a scalar (number); if so it constructs a matrix of the correct dimension for you. Otherwise it assumes that you passed in a matrix of the correct dimension.\n",
"\n",
"The rest of the code implements the Kalman filter equations, with one exception. Instead of implementing \n",
"\n",
"$$\\mathbf{P} = (\\mathbf{I} - \\mathbf{KH})\\mathbf{P}^-$$\n",
"\n",
"it implements the somewhat more complicated form \n",
"\n",
"$$\\mathbf{P} = (\\mathbf{I} - \\mathbf{KH})\\mathbf{P}^-(\\mathbf{I} - \\mathbf{KH})^\\mathsf{T} + \\mathbf{KRK}^\\mathsf{T}$$.\n",
"\n",
"The reason for this altered equation is that it is more numerically stable than the former equation, at the cost of being a bit more expensive to compute. It is not always possible to find the optimal value for $\\text{K}$, in which case the former equation will not produce good results because it assumes optimality. The longer reformulation used in the code is derived from more general math that does not assume optimality, and hence provides good results for non-optimal filters (such as when we can not correctly model our measurement error).\n",
"\n",
"Various texts differ as to whether this form of the equation should always be used, or only used when you know you need it. I choose to expend a bit more processing power to ensure stability; if your hardware is very constrained and you are able to prove that the simpler equation is correct for your problem then you might choose to use it instead. Personally, I find that a risky approach and do not recommend it to non-experts. Brown's *Introduction to Random Signals and Applied Kalman Filtering* [3] discusses this issue in some detail, if you are interested."
]
},
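{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see that the two forms agree when the gain is optimal, here is a minimal numeric sketch of my own (not FilterPy code):\n",
"\n",
"    import numpy as np\n",
"    P = np.array([[4., 1.], [1., 3.]])    # prior covariance\n",
"    H = np.array([[1., 0.]])              # measure position only\n",
"    R = np.array([[2.]])                  # measurement noise\n",
"    S = H.dot(P).dot(H.T) + R\n",
"    K = P.dot(H.T).dot(np.linalg.inv(S))  # optimal Kalman gain\n",
"    I = np.eye(2)\n",
"    simple = (I - K.dot(H)).dot(P)\n",
"    I_KH = I - K.dot(H)\n",
"    joseph = I_KH.dot(P).dot(I_KH.T) + K.dot(R).dot(K.T)\n",
"    print(np.allclose(simple, joseph))    # True for the optimal gain\n"
]
}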
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}


@ -1,297 +1,340 @@
# -*- coding: utf-8 -*-
"""
Created on Thu May 1 16:56:49 2014
@author: rlabbe
"""
import numpy as np
from matplotlib.patches import Ellipse
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
from numpy.random import multivariate_normal
import stats
def show_residual_chart():
plt.xlim([0.9,2.5])
plt.ylim([1.5,3.5])
plt.scatter ([1,2,2],[2,3,2.3])
plt.scatter ([2],[2.8],marker='o')
ax = plt.axes()
ax.annotate('', xy=(2,3), xytext=(1,2),
arrowprops=dict(arrowstyle='->', ec='#004080',
lw=2,
shrinkA=3, shrinkB=4))
ax.annotate('prediction', xy=(2.04,3.), color='#004080')
ax.annotate('measurement', xy=(2.05, 2.28))
ax.annotate('prior estimate', xy=(1, 1.9))
ax.annotate('residual', xy=(2.04,2.6), color='#e24a33')
ax.annotate('new estimate', xy=(2,2.8),xytext=(2.1,2.8),
arrowprops=dict(arrowstyle='->', ec="k", shrinkA=3, shrinkB=4))
ax.annotate('', xy=(2,3), xytext=(2,2.3),
arrowprops=dict(arrowstyle="-",
ec="#e24a33",
lw=2,
shrinkA=5, shrinkB=5))
plt.title("Kalman Filter Predict and Update")
plt.axis('equal')
plt.show()
def show_position_chart():
""" Displays 3 measurements at t=1,2,3, with x=1,2,3"""
plt.scatter ([1,2,3], [1,2,3], s=128, color='#004080')
plt.xlim([0,4]);
plt.ylim([0,4])
plt.annotate('t=1', xy=(1,1), xytext=(0,-10),
textcoords='offset points', ha='center', va='top')
plt.annotate('t=2', xy=(2,2), xytext=(0,-10),
textcoords='offset points', ha='center', va='top')
plt.annotate('t=3', xy=(3,3), xytext=(0,-10),
textcoords='offset points', ha='center', va='top')
plt.xlabel("X")
plt.ylabel("Y")
plt.xticks(np.arange(1,4,1))
plt.yticks(np.arange(1,4,1))
plt.show()
def show_position_prediction_chart():
""" displays 3 measurements, with the next position predicted"""
plt.scatter ([1,2,3], [1,2,3], s=128, color='#004080')
plt.annotate('t=1', xy=(1,1), xytext=(0,-10),
textcoords='offset points', ha='center', va='top')
plt.annotate('t=2', xy=(2,2), xytext=(0,-10),
textcoords='offset points', ha='center', va='top')
plt.annotate('t=3', xy=(3,3), xytext=(0,-10),
textcoords='offset points', ha='center', va='top')
plt.xlim([0,5])
plt.ylim([0,5])
plt.xlabel("Position")
plt.ylabel("Time")
plt.xticks(np.arange(1,5,1))
plt.yticks(np.arange(1,5,1))
plt.scatter ([4], [4], c='g',s=128, color='#8EBA42')
ax = plt.axes()
ax.annotate('', xy=(4,4), xytext=(3,3),
arrowprops=dict(arrowstyle='->',
ec='g',
shrinkA=6, shrinkB=5,
lw=3))
plt.show()
def show_x_error_chart(count):
""" displays x=123 with covariances showing error"""
plt.cla()
plt.gca().autoscale(tight=True)
cov = np.array([[0.03,0], [0,8]])
e = stats.covariance_ellipse (cov)
cov2 = np.array([[0.03,0], [0,4]])
e2 = stats.covariance_ellipse (cov2)
cov3 = np.array([[12,11.95], [11.95,12]])
e3 = stats.covariance_ellipse (cov3)
sigma=[1, 4, 9]
if count >= 1:
stats.plot_covariance_ellipse ((0,0), ellipse=e, variance=sigma)
if count == 2 or count == 3:
stats.plot_covariance_ellipse ((5,5), ellipse=e, variance=sigma)
if count == 3:
stats.plot_covariance_ellipse ((5,5), ellipse=e3, variance=sigma,
edgecolor='r')
if count == 4:
M1 = np.array([[5, 5]]).T
m4, cov4 = stats.multivariate_multiply(M1, cov2, M1, cov3)
e4 = stats.covariance_ellipse (cov4)
stats.plot_covariance_ellipse ((5,5), ellipse=e, variance=sigma,
alpha=0.25)
stats.plot_covariance_ellipse ((5,5), ellipse=e3, variance=sigma,
edgecolor='r', alpha=0.25)
stats.plot_covariance_ellipse (m4[:,0], ellipse=e4, variance=sigma)
#plt.ylim([0,11])
#plt.xticks(np.arange(1,4,1))
plt.xlabel("Position")
plt.ylabel("Velocity")
plt.show()
def show_x_with_unobserved():
""" shows x=1,2,3 with velocity superimposed on top """
# plot velocity
sigma=[0.5,1.,1.5,2]
cov = np.array([[1,1],[1,1.1]])
stats.plot_covariance_ellipse ((2,2), cov=cov, variance=sigma, axis_equal=False)
# plot positions
cov = np.array([[0.003,0], [0,12]])
sigma=[0.5,1.,1.5,2]
e = stats.covariance_ellipse (cov)
stats.plot_covariance_ellipse ((1,1), ellipse=e, variance=sigma, axis_equal=False)
stats.plot_covariance_ellipse ((2,1), ellipse=e, variance=sigma, axis_equal=False)
stats.plot_covariance_ellipse ((3,1), ellipse=e, variance=sigma, axis_equal=False)
# plot intersection circle
isct = Ellipse(xy=(2,2), width=.2, height=1.2, edgecolor='r', fc='None', lw=4)
plt.gca().add_artist(isct)
plt.ylim([0,11])
plt.xlim([0,4])
plt.xticks(np.arange(1,4,1))
plt.xlabel("Position")
plt.ylabel("Time")
plt.show()
def plot_3d_covariance(mean, cov):
""" plots a 2x2 covariance matrix positioned at mean. mean will be plotted
in x and y, and the probability in the z axis.
Parameters
----------
mean : 2x1 tuple-like object
mean for x and y coordinates. For example (2.3, 7.5)
cov : 2x2 nd.array
the covariance matrix
"""
# compute width and height of covariance ellipse so we can choose
# appropriate ranges for x and y
o,w,h = stats.covariance_ellipse(cov,3)
# rotate width and height to x,y axis
wx = abs(w*np.cos(o) + h*np.sin(o))*1.2
wy = abs(h*np.cos(o) - w*np.sin(o))*1.2
# ensure axis are of the same size so everything is plotted with the same
# scale
if wx > wy:
w = wx
else:
w = wy
minx = mean[0] - w
maxx = mean[0] + w
miny = mean[1] - w
maxy = mean[1] + w
xs = np.arange(minx, maxx, (maxx-minx)/40.)
ys = np.arange(miny, maxy, (maxy-miny)/40.)
xv, yv = np.meshgrid (xs, ys)
zs = np.array([100.* stats.multivariate_gaussian(np.array([x,y]),mean,cov) \
for x,y in zip(np.ravel(xv), np.ravel(yv))])
zv = zs.reshape(xv.shape)
ax = plt.figure().add_subplot(111, projection='3d')
ax.plot_surface(xv, yv, zv, rstride=1, cstride=1, cmap=cm.autumn)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.contour(xv, yv, zv, zdir='x', offset=minx-1, cmap=cm.autumn)
ax.contour(xv, yv, zv, zdir='y', offset=maxy, cmap=cm.BuGn)
def plot_3d_sampled_covariance(mean, cov):
""" plots a 2x2 covariance matrix positioned at mean. mean will be plotted
in x and y, and the probability in the z axis.
Parameters
----------
mean : 2x1 tuple-like object
mean for x and y coordinates. For example (2.3, 7.5)
cov : 2x2 nd.array
the covariance matrix
"""
# compute width and height of covariance ellipse so we can choose
# appropriate ranges for x and y
o,w,h = stats.covariance_ellipse(cov,3)
# rotate width and height to x,y axis
wx = abs(w*np.cos(o) + h*np.sin(o))*1.2
wy = abs(h*np.cos(o) - w*np.sin(o))*1.2
# ensure axis are of the same size so everything is plotted with the same
# scale
if wx > wy:
w = wx
else:
w = wy
minx = mean[0] - w
maxx = mean[0] + w
miny = mean[1] - w
maxy = mean[1] + w
count = 1000
x,y = multivariate_normal(mean=mean, cov=cov, size=count).T
xs = np.arange(minx, maxx, (maxx-minx)/40.)
ys = np.arange(miny, maxy, (maxy-miny)/40.)
xv, yv = np.meshgrid (xs, ys)
zs = np.array([100.* stats.multivariate_gaussian(np.array([xx,yy]),mean,cov) \
for xx,yy in zip(np.ravel(xv), np.ravel(yv))])
zv = zs.reshape(xv.shape)
ax = plt.figure().add_subplot(111, projection='3d')
ax.scatter(x,y, [0]*count, marker='.')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.contour(xv, yv, zv, zdir='x', offset=minx-1, cmap=cm.autumn)
ax.contour(xv, yv, zv, zdir='y', offset=maxy, cmap=cm.BuGn)
from filterpy.common import plot_covariance_ellipse
def plot_3_covariances():
P = [[2, 0], [0, 2]]
plt.subplot(131)
plot_covariance_ellipse((2, 7), cov=P, facecolor='g', alpha=0.2,
title='|2 0|\n|0 2|', axis_equal=False)
plt.ylim((4, 10))
plt.gca().set_aspect('equal', adjustable='box')
plt.subplot(132)
P = [[2, 0], [0, 9]]
plt.ylim((4, 10))
plt.gca().set_aspect('equal', adjustable='box')
plot_covariance_ellipse((2, 7), P, facecolor='g', alpha=0.2,
axis_equal=False, title='|2 0|\n|0 9|')
plt.subplot(133)
P = [[2, 1.2], [1.2, 2]]
plt.ylim((4, 10))
plt.gca().set_aspect('equal', adjustable='box')
plot_covariance_ellipse((2, 7), P, facecolor='g', alpha=0.2,
axis_equal=False,
title='|2 1.2|\n|1.2 2|')
plt.tight_layout()
plt.show()
def plot_correlation_covariance():
P = [[4, 3.9], [3.9, 4]]
plot_covariance_ellipse((5, 10), P, edgecolor='k',
variance=[1, 2**2, 3**2])
plt.xlabel('X')
plt.ylabel('Y')
plt.gca().autoscale(tight=True)
plt.axvline(7.5, ls='--', lw=1)
plt.axhline(12.5, ls='--', lw=1)
plt.scatter(7.5, 12.5, s=2000, alpha=0.5)
plt.title('|4.0 3.9|\n|3.9 4.0|')
plt.show()
if __name__ == "__main__":
#show_position_chart()
#plot_3d_covariance((2,7), np.array([[8.,0],[0,4.]]))
#plot_3d_sampled_covariance([2,7], [[8.,0],[0,4.]])
#show_residual_chart()
#show_position_chart()
show_x_error_chart(4)

code/particle_filter.py (new file)

@ -0,0 +1,183 @@
# -*- coding: utf-8 -*-
"""
Created on Sat May 2 09:46:06 2015
@author: Roger
"""
import math
import numpy as np
from numpy.random import uniform
from numpy.random import randn
import scipy.stats
import matplotlib.pyplot as plt
import random
class ParticleFilter(object):
def __init__(self, N, x_range, y_range):
self.particles = np.zeros((N, 4))
self.N = N
self.x_range = x_range
self.y_range = y_range
# assign
self.weights = np.array([1./N] * N)
self.particles[:, 0] = uniform(0, x_range, size=N)
self.particles[:, 1] = uniform(0, y_range, size=N)
self.particles[:, 3] = uniform(0, 2*np.pi, size=N)
def create_particles(self, mu, var):
self.particles[:, 0] = mu[0] + randn(self.N)* np.sqrt(var)
self.particles[:, 1] = mu[1] + randn(self.N)* np.sqrt(var)
def create_particle(self):
return [uniform(0, self.x_range), uniform(0, self.y_range), 0, 0]
def assign_speed_by_gaussian(self, speed, var):
""" move every particle by the specified speed (assuming time=1.)
with the specified variance, assuming Gaussian distribution. """
self.particles[:, 2] = np.random.normal(speed, var, self.N)
def control(self, dx):
self.particles[:, 0] += dx[0]
self.particles[:, 1] += dx[1]
def move(self, h, v, t=1.):
""" move the particles according to their speed and direction for the
specified time duration t"""
h = math.atan2(h[1], h[0])
h = randn(self.N) * .4 + h
vs = v + randn(self.N) * 0.1
vx = vs * np.cos(h)
vy = vs * np.sin(h)
#vx = self.particles[:, 2] * np.cos(self.particles[:, 3]) + randn(self.N)*0.5
#vy = self.particles[:, 2] * np.sin(self.particles[:, 3]) + randn(self.N)*0.5
self.particles[:, 0] = (self.particles[:, 0] + vx*t)
self.particles[:, 1] = (self.particles[:, 1] + vy*t)
def move2(self, u):
dx = u[0] + randn(self.N) * 1.9
dy = u[1] + randn(self.N) * 1.9
self.particles[:, 0] = (self.particles[:, 0] + dx)
self.particles[:, 1] = (self.particles[:, 1] + dy)
def weight(self, z, var):
dist = np.sqrt((self.particles[:, 0] - z[0])**2 +
(self.particles[:, 1] - z[1])**2)
# simplification assumes variance is invariant to world projection
n = scipy.stats.norm(0, np.sqrt(var))
prob = n.pdf(dist)
# particles far from a measurement will give us 0.0 for a probability
# due to floating point limits. Once we hit zero we can never recover,
# so add some small nonzero value to all points.
prob += 1.e-12
self.weights *= prob
self.weights /= sum(self.weights) # normalize
def neff(self):
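# effective sample size: 1/sum(w_i^2). A value near N means the
# weights are healthy; a value near 1 means they have collapsed
# onto a few particles and it is time to resample.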
return 1. / np.sum(np.square(self.weights))
def resample(self):
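# simple multinomial resampling: draw a uniform sample for each slot
# and select the particle whose cumulative weight brackets it, so
# particles survive in proportion to their weights.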
p = np.zeros((self.N, 4))
w = np.zeros(self.N)
cumsum = np.cumsum(self.weights)
for i in range(self.N):
index = np.searchsorted(cumsum, random.random())
p[i] = self.particles[index]
w[i] = self.weights[index]
self.particles = p
self.weights = w / np.sum(w)
def estimate(self):
""" returns mean and variance """
pos = self.particles[:, 0:2]
mu = np.average(pos, weights=self.weights, axis=0)
var = np.average((pos - mu)**2, weights=self.weights, axis=0)
return mu, var
def plot(pf, xlim=100, ylim=100, weights=True):
if weights:
a = plt.subplot(221)
a.cla()
plt.xlim(0, ylim)
plt.ylim(0, 1)
plt.scatter(pf.particles[:, 0], pf.weights, marker='.', s=1)
a = plt.subplot(224)
a.cla()
plt.scatter(pf.weights, pf.particles[:, 1], marker='.', s=1)
plt.ylim(0, xlim)
plt.xlim(0, 1)
a = plt.subplot(223)
a.cla()
else:
plt.cla()
plt.scatter(pf.particles[:, 0], pf.particles[:, 1], marker='.', s=1)
plt.xlim(0, xlim)
plt.ylim(0, ylim)
if __name__ == '__main__':
pf = ParticleFilter(5000, 100, 100)
pf.particles[:,3] = np.random.randn(pf.N)*np.radians(10) + np.radians(45)
z = np.array([20, 20])
pf.create_particles(z, 40)
mu0 = np.array([0., 0.])
for x in range(60):
z[0] += 1.0 + randn()*0.3
z[1] += 1.0 + randn()*0.3
pf.move2((1,1))
pf.weight(z, 5.2)
# pf.weight((z[0] + randn()*0.2, z[1] + randn()*0.2), 5.2)
pf.resample()
mu, var = pf.estimate()
if x == 0:
mu0 = mu
print(mu - z)
print('neff', pf.neff())
#print(var)
plot(pf, weights=False)
plt.scatter(z[0], z[1], c='r', s=40)
plt.scatter(mu[0], mu[1], c='g', s=100)#,
#s=min(500, abs((1./np.sum(var)))*20), alpha=0.5)
plt.tight_layout()
plt.pause(.02)
#pf.assign_speed_by_gaussian(1, 1.5)
#pf.move(h=[1,1], v=1.4, t=1)
#pf.control(mu-mu0)
mu0 = mu


@ -1,153 +1,157 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<center><h1>Kalman and Bayesian Filters in Python</h1></center>\n",
"<p>\n",
" <p>\n",
"Table of Contents\n",
"-----\n",
"\n",
"[**Preface**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/00_Preface.ipynb)\n",
" \n",
"Motivation behind writing the book. How to download and read the book. Requirements for IPython Notebook and Python. github links.\n",
"\n",
"\n",
"[**Chapter 1: The g-h Filter ($\\alpha$-$\\beta$ Filter)**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/01_g-h_filter.ipynb)\n",
"\n",
"Intuitive introduction to the g-h filter, also known as the $\\alpha$-$\\beta$ Filter, which is a family of filters that includes the Kalman filter. Once you understand this chapter you will understand the concepts behind the Kalman filter. \n",
"\n",
"\n",
"[**Chapter 2: The Discrete Bayes Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/02_Discrete_Bayes.ipynb)\n",
"\n",
"Introduces the discrete Bayes filter. From this you will learn the probabilistic (Bayesian) reasoning that underpins the Kalman filter in an easy to digest form.\n",
"\n",
"\n",
"[**Chapter 3: Least Squares Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/03_Least_Squares_Filters.ipynb)\n",
"\n",
"Introduces the least squares filter in batch and recursive forms. I've not made a start on authoring this yet. Many authors develop KF explanations by covering least squares first. I am not, so I may move this chapter deeper in the book, or remove it.\n",
"\n",
"\n",
"[**Chapter 4: Gaussian Probabilities**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/04_Gaussians.ipynb)\n",
"\n",
"Introduces using Gaussians to represent beliefs in the Bayesian sense. Gaussians allow us to implement the algorithms used in the discrete Bayes filter to work in continuous domains.\n",
"\n",
"\n",
"[**Chapter 5: One Dimensional Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/05_Kalman_Filters.ipynb)\n",
"\n",
"Implements a Kalman filter by modifying the discrete Bayes filter to use Gaussians. This is a full featured Kalman filter, albeit only useful for 1D problems. \n",
"\n",
"\n",
"[**Chapter 6: Multivariate Kalman Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/06_Multivariate_Kalman_Filters.ipynb)\n",
"\n",
"We extend the Kalman filter developed in the previous chapter to the full, generalized filter for linear problems. After reading this you will understand how a Kalman filter works and how to design and implement one for a (linear) problem of your choice.\n",
"\n",
"\n",
"[**Chapter 7: Kalman Filter Math**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/07_Kalman_Filter_Math.ipynb)\n",
"\n",
"We gotten about as far as we can without forming a strong mathematical foundation. This chapter is optional, especially the first time, but if you intend to write robust, numerically stable filters, or to read the literature, you will need to know the material in this chapter. Some sections will be required to understand the later chapters on nonlinear filtering. \n",
"\n",
"\n",
"[**Chapter 8: Designing Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/08_Designing_Kalman_Filters.ipynb)\n",
"\n",
"Building on material in Chapter 6, walks you through the design of several Kalman filters. Only by seeing several different examples can you really grasp all of the theory. Examples are chosen to be realistic, not 'toy' problems to give you a start towards implementing your own filters. Discusses, but does not solve issues like numerical stability.\n",
"\n",
"\n",
"[**Chapter 9: Nonlinear Filtering**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/09_Nonlinear_Filtering.ipynb)\n",
"\n",
"Kalman filters as covered only work for linear problems. Yet the world is nonlinear. Here I introduce the problems that nonlinear systems pose to the filter, and briefly discuss the various algorithms that we will be learning in subsequent chapters.\n",
"\n",
"\n",
"[**Chapter 10: Unscented Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/10_Unscented_Kalman_Filter.ipynb)\n",
"\n",
"Unscented Kalman filters (UKF) are a recent development in Kalman filter theory. They allow you to filter nonlinear problems without requiring a closed form solution like the Extended Kalman filter requires.\n",
"\n",
"This topic is typically either not mentioned, or glossed over in existing texts, with Extended Kalman filters receiving the bulk of discussion. I put it first because the UKF is much simpler to understand, implement, and the filtering performance is usually as good as or better then the Extended Kalman filter. I always try to implement the UKF first for real world problems, and you should also.\n",
"\n",
"\n",
"[**Chapter 11: Extended Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/11_Extended_Kalman_Filters.ipynb)\n",
"\n",
"Extended Kalman filters (EKF) are the most common approach to linearizing non-linear problems. A majority of real world Kalman filters are EKFs, so will need to understand this material to understand existing code, papers, talks, etc. \n",
"\n",
"\n",
"[**Chapter 12: Designing Nonlinear Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/12_Designing_Nonlinear_Kalman_Filters.ipynb)\n",
"\n",
"Works through some examples of the design of Kalman filters for nonlinear problems. *This is still very much a work in progress.*\n",
"\n",
"\n",
"[**Chapter 13: Particle Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/13_Particle_Filters.ipynb)\n",
" \n",
"Particle filters uses Monte Carlo techniques to filter data. They easily handle highly nonlinear and non-Gaussian systems, as well as multimodal distributions (tracking multiple objects simultaneously) at the cost of high computational requirements.\n",
"\n",
"\n",
"[**Chapter 14: Smoothing**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/14_Smoothing.ipynb)\n",
"\n",
"Kalman filters are recursive, and thus very suitable for real time filtering. However, they work extremely well for post-processing data. After all, Kalman filters are predictor-correctors, and it is easier to predict the past than the future! We discuss some common approaches.\n",
"\n",
"\n",
"[**Chapter 15: Adaptive Filtering**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/15_Adaptive_Filtering.ipynb)\n",
" \n",
"Kalman filters assume a single process model, but manuevering targets typically need to be described by several different process models. Adaptive filtering uses several techniques to allow the Kalman filter to adapt to the changing behavior of the target.\n",
"\n",
"\n",
"[**Chapter 16: H-Infinity Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/16_HInfinity_Filters.ipynb)\n",
" \n",
"Describes the $H_\\infty$ filter. \n",
"\n",
"*I have code that implements the filter, but no supporting text yet.*\n",
"\n",
"\n",
"[**Chapter 17: Ensemble Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/17_Ensemble_Kalman_Filters.ipynb)\n",
"\n",
"Discusses the ensemble Kalman Filter, which uses a Monte Carlo approach to deal with very large Kalman filter states in nonlinear systems.\n",
"\n",
"\n",
"[**Chapter XX: Numerical Stability**](not implemented)\n",
"\n",
"EKF and UKF are linear approximations of nonlinear problems. Unless programmed carefully, they are not numerically stable. We discuss some common approaches to this problem.\n",
"\n",
"*This chapter is not started. I'm likely to rearrange where this material goes - this is just a placeholder.*\n",
"\n",
"\n",
"[**Appendix A: Installation, Python, NumPy, and FilterPy**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix_A_Installation.ipynb)\n",
"\n",
"Brief introduction of Python and how it is used in this book. Description of the companion\n",
"library FilterPy. \n",
" \n",
"\n",
"[**Appendix B: Symbols and Notations**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix_B_Symbols_and_Notations.ipynb)\n",
"\n",
"Most books opt to use different notations and variable names for identical concepts. This is a large barrier to understanding when you are starting out. I have collected the symbols and notations used in this book, and built tables showing what notation and names are used by the major books in the field.\n",
"\n",
"*Still just a collection of notes at this point.*\n",
"\n",
"\n",
"[**Appendix C: Walking through the Kalman Filter code**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix_C_Walking_Through_KF_Code.ipynb)\n",
"\n",
"A brief walkthrough of the KalmanFilter class from FilterPy.\n",
"\n",
"### Github repository\n",
"http://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}