More work on various parts of the multidimensional filter.

Added custom CSS style for formatting the book.
This commit is contained in:
Roger Labbe 2014-05-07 12:59:25 -07:00
parent b31e3a0b12
commit 3b9bb1fd44
8 changed files with 273 additions and 61 deletions

View File

@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
"signature": "sha256:22eea384355b20ece229d2669aa6a64132be5fada2c2565b3598ab6846b58610"
"signature": "sha256:7c1d5ebfcacf654b0862f976d6da64ba34c3743b23833736a6b3e7efb53ba119"
},
"nbformat": 3,
"nbformat_minor": 0,
@ -73,7 +73,9 @@
"source": [
"Probably this is immediately recognizable to you as a 'bell curve'. This curve is ubiquitious because under real world conditions most observations are distributed in such a manner. We will not prove the math here, but the **central limit theorem** proves that under certain conditions the arithmetic mean of independent observations will be distributed in this manner, even if the observations themselves do not have this distribution. In nonmathematical terms, this means that if you take a bunch of measurements from a sensor and use them in a filter, they are very likely to create this distribution.\n",
"\n",
"Before we go further, a moment for terminology. This is variously called a normal distribution, a Gaussian distribution, or a bell curve. However, other distributions also have a bell shaped curve, so that name is somewhat ambiguous, and we will not use it again."
"Before we go further, a moment for terminology. This is variously called a normal distribution, a Gaussian distribution, or a bell curve. However, other distributions also have a bell shaped curve, so that name is somewhat ambiguous, and we will not use *bell curve* in this book.\n",
"\n",
"Often *univariate* is tacked onto the front of the name to indicate that this is one dimensional - it is the gaussian for a scalar value, so often you will see it as *univariate normal distribution*. We will use this term often when we need to distinguish between the 1D case and the multidimensional cases that we will use in later chapters. For reference, we will learn that the multidimensional case is called *multivariate normal distribution*. If the context of what we are discussing makes the dimensionality clear we will often leave off the dimensional qualifier, as in the rest of this chapter."
]
},
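{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to see the central limit theorem in action, here is a short sketch (it assumes numpy and matplotlib, imported as np and plt): we average many uniformly distributed samples - which are not bell shaped at all - and histogram the means, which do come out bell shaped."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"# each row is 100 draws from a uniform distribution; per the central\n",
"# limit theorem the row means are approximately normally distributed\n",
"means = np.random.uniform(0., 1., size=(5000, 100)).mean(axis=1)\n",
"plt.hist(means, bins=50)\n",
"plt.show()"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},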
{
@ -84,7 +86,7 @@
"\n",
"So let us explore how gaussians work. A gaussian is a continuous probability distribution that is completely described with two parameters, the mean ($\\mu$) and the variance ($\\sigma^2$). It is defined as:\n",
"$$ \n",
"f(x, \\mu, \\sigma) = \\frac{1}{\\sigma\\sqrt{2\\pi}} e^{-0.5*{(x-\\mu)^2}/\\sigma^2 }\n",
"f(x, \\mu, \\sigma) = \\frac{1}{\\sigma\\sqrt{2\\pi}} e^{-\\frac{1}{2}{(x-\\mu)^2}/\\sigma^2 }\n",
"$$"
]
},
@ -152,6 +154,42 @@
"The standard notation for a normal distribution is just $N(\\mu,\\sigma^2)$. I will not go into detail as to why $\\sigma^2$ is used, other than to note that $\\sigma$ is commonly called the *standard deviation*, which has enormous utility in statistics. The standard deviation is not really used in this book, so I will not address it further. The important thing to understand is that the variance ($\\sigma^2$) is a measure of the width of the curve. The curve above is notated as $N(23, 1)$, since $\\mu=23$ and $\\sigma=1$. We will use this notiation throughout the rest of the book, so learn it now.\n"
]
},
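{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see how the variance controls the width, here is a quick non-interactive sketch that plots $N(23, 1)$ against $N(23, 4)$. It assumes scipy's *norm*; note that its *scale* argument is $\\sigma$, not $\\sigma^2$."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"from scipy.stats import norm\n",
"\n",
"xs = np.linspace(18, 28, 200)\n",
"# same mean, different variances: the larger variance gives the wider curve\n",
"plt.plot(xs, norm.pdf(xs, loc=23, scale=1.), label='$N(23, 1)$')\n",
"plt.plot(xs, norm.pdf(xs, loc=23, scale=2.), label='$N(23, 4)$')\n",
"plt.legend()\n",
"plt.show()"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},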
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Interactive Gaussians\n",
"\n",
"For those that are using this directly in IPython Notebook, here is an interactive version of the guassian plots. Use the sliders to modify $\\mu$ and $\\sigma^2$. Adjusting $\\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\\sigma^2$ will make the bell curve thicker and thinner."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import math\n",
"from IPython.html.widgets import interact, interactive, fixed\n",
"#from IPython.html import widgets\n",
"#from IPython.display import clear_output, display, HTML\n",
"\n",
"def gaussian (x, mu, sigma):\n",
" ''' compute the gaussian with the specified mean(mu) and sigma'''\n",
" return math.exp (-0.5 * (x-mu)**2 / sigma) / math.sqrt(2.*math.pi*sigma)\n",
"\n",
"def plt_g (mu,variance):\n",
" xs = arange(0,10,0.15)\n",
" ys = [gaussian (x, mu,variance) for x in xs]\n",
" plot (xs, ys)\n",
" ylim((0,1))\n",
" show()\n",
"\n",
"interact (plt_g, mu=(0,10), variance=(0.2,4.5))"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
"metadata": {},
@ -167,6 +205,22 @@
"\n",
"#### Summary and Key Points"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"#format the book\n",
"from IPython.core.display import HTML\n",
"def css_styling():\n",
" styles = open(\"./styles/custom.css\", \"r\").read()\n",
" return HTML(styles)\n",
"css_styling()"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
}
],
"metadata": {}

View File

@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
"signature": "sha256:ba2750345b8a777403243fcf1a80cb6a58d7ee1074b9095134150e1679648635"
"signature": "sha256:921885fd1aa9ceffb5a623914d16a63e877555dc81134f7d1e872c84de7cddac"
},
"nbformat": 3,
"nbformat_minor": 0,
@ -13,6 +13,7 @@
"metadata": {},
"source": [
"#Introduction\n",
"##### Version 0.1\n",
"\n",
"The Kalman filter was introduced to the world via papers published in 1958 and 1960 by Rudolph E Kalman. This work built on work by Nobert Wiener. Kalman's early papers were extremely abstract, but researchers quickly realized that the papers described a very practical technique to filter noisy data. From then until now it has been an ongoing topic of research, and there are many books and papers devoted not only to the basics, but many specializations and extensions to the technique. If you are reading this, you have likely come across some of them.\n",
"\n",
@ -64,7 +65,14 @@
{
"cell_type": "code",
"collapsed": false,
"input": [],
"input": [
"#format the book\n",
"from IPython.core.display import HTML\n",
"def css_styling():\n",
" styles = open(\"./styles/custom.css\", \"r\").read()\n",
" return HTML(styles)\n",
"css_styling()"
],
"language": "python",
"metadata": {},
"outputs": [],

View File

@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
"signature": "sha256:0131b9ce88d9ff5da30b8c74995895cb6cb161758766f2e1e2599c4b05cebb6c"
"signature": "sha256:62f7253d6ad9a6031738dd1335cf418d484588da5b86c0e2b98b648ae1f20f38"
},
"nbformat": 3,
"nbformat_minor": 0,
@ -876,10 +876,18 @@
{
"cell_type": "code",
"collapsed": false,
"input": [],
"input": [
"#format the book\n",
"from IPython.core.display import HTML\n",
"def css_styling():\n",
" styles = open(\"./styles/custom.css\", \"r\").read()\n",
" return HTML(styles)\n",
"css_styling()"
],
"language": "python",
"metadata": {},
"outputs": []
"outputs": [],
"prompt_number": ""
}
],
"metadata": {}

View File

@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
"signature": "sha256:fff3501726b07546f648c399fe50981edcfb8d3db4ccb230e1d64757436ddf9d"
"signature": "sha256:43e6b1c0a6a87cad97442c35e4be82ce79c07ba2a6b41e95e576e4371959d448"
},
"nbformat": 3,
"nbformat_minor": 0,
@ -12,14 +12,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Multidimensional Kalman Filters\n",
"Multidimensional Kalman Filters\n",
"=====\n",
"\n",
"The techniques in the last chapter are very powerful, but they only work in one dimension. The gaussians represent a mean and variance that are scalars - real numbers. They provide no way to represent multidimensional data, such as the position of a dog in a field. You may retort that you could use two Kalman filters for that case, one tracks the x coordinate and the other tracks the y coordinate. That does work in some cases, but put that thought aside, because soon you will see some enormous benefits to implementing the multidimensional case.\n",
"\n",
"\n",
"## Multivariate Normal Distributions\n",
"###Multivariate Normal Distributions\n",
"\n",
"What might a multivariate (meaning multidimensional) normal distribution look like? Our goal is to be able to represent a normal distribution across multiple dimensions. Consider the 2 dimensional case. Let's say we believe that x = 2 and y = 7. Therefore we can see that for N dimensions, we need N means, like so:\n",
"\n",
"What might a *multivariate normal distribution* look like? In this context, multivariate just means multiple variables. Our goal is to be able to represent a normal distribution across multiple dimensions. Consider the 2 dimensional case. Let's say we believe that $x = 2$ and $y = 7$. Therefore we can see that for $N$ dimensions, we need $N$ means, like so:\n",
"$$ \\mu = \\begin{bmatrix}{\\mu}_1\\\\{\\mu}_2\\\\ \\vdots \\\\{\\mu}_n\\end{bmatrix} \n",
"$$\n",
"\n",
@ -28,38 +30,58 @@
"\\mu = \\begin{bmatrix}2\\\\7\\end{bmatrix} \n",
"$$\n",
"\n",
"The next step is representing our variances. At first blush we might think we would also need N variances for N dimensions. We might want to say the variance for x is 10 and the variance for y is 8. While this is possible, it does not consider the more general case. For example, suppose we were tracking house prices vs total $m^2$ of the floor plan. These numbers are *correlated*. It is not an exact correlation, but in general houses in the same neighborhood are more expensive if they have a larger floor plan. We want a way to express not only what we think the variance is in the price and the $m^2$, but also the degree to which they are correlated. It turns out that we use a matrix to denote this:\n",
"The next step is representing our variances. At first blush we might think we would also need N variances for N dimensions. We might want to say the variance for x is 10 and the variance for y is 8, like so. \n",
"\n",
"$$\\sigma^2 = \\begin{bmatrix}10\\\\8\\end{bmatrix}$$ \n",
"\n",
"While this is possible, it does not consider the more general case. For example, suppose we were tracking house prices vs total $m^2$ of the floor plan. These numbers are *correlated*. It is not an exact correlation, but in general houses in the same neighborhood are more expensive if they have a larger floor plan. We want a way to express not only what we think the variance is in the price and the $m^2$, but also the degree to which they are correlated. It turns out that we use the following matrix to denote *covariances* with multivariate normal distributions. You might guess, correctly, that *covariance* is short for *correlated variances*.\n",
"\n",
"$$\n",
"\\Sigma = \\begin{pmatrix}\n",
" {\\sigma}_{1,1} & {\\sigma}_{1,2} & \\cdots & {\\sigma}_{1,n} \\\\\n",
" {\\sigma}_{2,1} &{\\sigma}_{2,2} & \\cdots & {\\sigma}_{2,n} \\\\\n",
" {{\\sigma}_{1}}^2 & p{\\sigma}_{1}{\\sigma}_{2} & \\cdots & p{\\sigma}_{1}{\\sigma}_{n} \\\\\n",
" p{\\sigma}_{2}{\\sigma}_{1} &{{\\sigma}_{2}}^2 & \\cdots & p{\\sigma}_{2}{\\sigma}_{n} \\\\\n",
" \\vdots & \\vdots & \\ddots & \\vdots \\\\\n",
" {\\sigma}_{n,1} & {\\sigma}_{n,2} & \\cdots & {\\sigma}_{n,n}\n",
" p{\\sigma}_{n}{\\sigma}_{1} & p{\\sigma}_{n}{\\sigma}_{2} & \\cdots & {{\\sigma}_{n}}^2\n",
" \\end{pmatrix}\n",
"$$\n",
"\n",
"This is called the covariance matrix, and is probably a bit confusing at the moment. Rather than explain the math in detail at the moment, we will take our usual tactic of building our intuition first with various physical models. \n",
"\n",
"So here is the full equation for the multivarate normal distribution.\n",
"If you haven't seen this before it is probably a bit confusing at the moment. Rather than explain the math right now, we will take our usual tactic of building our intuition first with various physical models. At this point, note that the diagonal contains the variance for each state variable, and that all off-diagonal elements are a product of the $\\sigma$ corresponding to the $i$th (row) and $j$th (column) state variable multiplied by a constant $p$."
]
},
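{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a concrete sketch of that structure (the numbers here are invented purely for illustration), this builds the $2\\times 2$ case from two standard deviations and the constant $p$, assuming numpy:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import numpy as np\n",
"\n",
"sigma_1, sigma_2 = 2., 3.  # standard deviations of the two variables\n",
"p = 0.6                    # degree of correlation between them\n",
"\n",
"# diagonal holds the variances, off-diagonals hold p*sigma_i*sigma_j\n",
"cov = np.array([[sigma_1**2,        p*sigma_1*sigma_2],\n",
"                [p*sigma_2*sigma_1, sigma_2**2]])\n",
"print cov"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},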
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, without explanation, here is the full equation for the multivarate normal distribution.\n",
"\n",
"$$\\mathcal{N}(\\mu,\\,\\Sigma) = (2\\pi)^{-\\frac{n}{2}}|\\Sigma|^{-\\frac{1}{2}}\\, e^{ -\\frac{1}{2}(\\mathbf{x}-\\mu)'\\Sigma^{-1}(\\mathbf{x}-\\mu) }$$\n",
"\n",
"I urge you to not try to remember this function. We will program it once in a function and then call it when we need to compute a specific value. However, if you look at it briefly you will note that it looks quite similar to the univarate normal distribution except it uses matrices instead of scalar values. If you are reasonably well-versed in linear algebra this equation should look quite managable; if not, don't worry, the python is coming up next!\n"
"I urge you to not try to remember this function. We will program it once in a function and then call it when we need to compute a specific value. However, if you look at it briefly you will note that it looks quite similar to the *univarate normal distribution* except it uses matrices instead of scalar values, and the root of $\\pi$ is scaled by $N$.\n",
"\n",
"$$ \n",
"f(x, \\mu, \\sigma) = \\frac{1}{\\sigma\\sqrt{2\\pi}} e^{{-\\frac{1}{2}}{(x-\\mu)^2}/\\sigma^2 }\n",
"$$\n",
"\n",
"If you are reasonably well-versed in linear algebra this equation should look quite managable; if not, don't worry! If you want to learn the math we will cover it in detail in the next optional chapter. If you choose to skip that chapter the rest of this book should still be managable for you\n",
"\n",
"I have programmed it and saved it in the file gaussian.py with the function name multivariate_gaussian. I am not showing the code here because I have taken advantage of the linear algebra solving apparatus of numpy to efficiently compute a solution - the code does not correspond to the equation in a one to one manner. If you wish to view the code, I urge you to either load it in an editor, or load it into this worksheet by putting '%load gaussian.py' without the quotes in the next cell and executing it with ctrl-enter. "
]
},
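{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you do want to see the equation in code, here is my quick, unoptimized transcription of it - a sketch only, not the implementation in gaussian.py - assuming numpy:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import numpy as np\n",
"\n",
"def mvn_pdf(x, mu, cov):\n",
"    '''direct transcription of the multivariate normal equation'''\n",
"    x, mu, cov = np.asarray(x), np.asarray(mu), np.asarray(cov)\n",
"    n = mu.size\n",
"    coeff = (2*np.pi)**(-n/2.) * np.linalg.det(cov)**(-0.5)\n",
"    err = x - mu\n",
"    return coeff * np.exp(-0.5 * err.dot(np.linalg.inv(cov)).dot(err))\n",
"\n",
"print mvn_pdf([2., 7.], [2., 7.], [[2., 0.], [0., 2.]])"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},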
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Note**: As of version 0.14 scipy has implemented the multivariate normal equation with the function **scipy.stats.multivariate_normal()**. It is superior to my function in several ways. First, it is implemented in Fortran, and is therefore faster than mine. Second, it implements a 'frozen' form where you set the mean and covariance once, and then calculate the probability for any number of values for x over any arbitrary number of calls. This is much more efficient then recomputing everything in each call. So, if you have version 0.14 or later you may want to substitute my function for the built in version. Use **scipy.version.version** to get the version number. Note that I deliberately named my function **multivariate_gaussian()** to ensure it is never confused with the built in version.\n",
"\n",
"> If you intend to use Python for Kalman filters, you will want to read the tutorial for the stats module, which explains 'freezing' distributions and other very useful features. As of this date, it includes an example of using the multivariate_normal function, which does work a bit differently from my function.\n",
"http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html</div>"
]
},
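{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, the frozen form looks something like this sketch, assuming you have scipy 0.14 or later installed:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"from scipy.stats import multivariate_normal\n",
"\n",
"# freeze the mean and covariance once, then evaluate as often as needed\n",
"rv = multivariate_normal(mean=[2., 7.], cov=[[2., 0.], [0., 2.]])\n",
"print rv.pdf([2., 7.])\n",
"print rv.pdf([3., 8.])"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},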
{
"cell_type": "code",
"collapsed": false,
"input": [
"import numpy as np\n",
"import math\n",
"def multivariate_gaussian(x, mu, cov):\n",
" n = len(x)\n",
" det = np.sqrt(np.prod(np.diag(cov)))\n",
" frac = (2*math.pi)**(-n/2.) * (1./det)\n",
" fprime = (x - mu)**2\n",
" return frac * np.exp(-0.5*np.dot(fprime, 1./np.diag(cov)))\n"
"from gaussian import *"
],
"language": "python",
"metadata": {},
@ -222,9 +244,9 @@
"g.plot_sigma_ellipse(e, '|2 0|\\n|0 9|')\n",
"\n",
"subplot(133)\n",
"cov = array([[2,3],[1,2]])\n",
"cov = array([[2,1.2],[1.2,3]])\n",
"e = g.sigma_ellipse (cov, 2, 7)\n",
"g.plot_sigma_ellipse(e,'|2 3|\\n|1 2|')\n",
"g.plot_sigma_ellipse(e,'|2 1.2|\\n|1.2 2|')\n",
"show()\n",
"pylab.rcParams['figure.figsize'] = 6,4"
],
@ -240,21 +262,28 @@
"From a mathematical perspective these display the values that the multivariate gaussian takes for a specific sigma (in this case $\\sigma^2=1$. Think of it as taking a horizontal slice through the 3D surface plot we did above. However, thinking about the physical interpretation of these plots clarifies their meaning.\n",
"\n",
"The first plot uses mean and the covariance matrices $\n",
"\\mu =\\begin{bmatrix}2\\\\7\\end{bmatrix}, cov = \\begin{bmatrix}2&0\\\\0&2\\end{bmatrix}$. Let this be our current belief about the position of our dog in a field. In other words, we believe that he is positioned at (2,7) with a variance of $\\sigma^2=2$ for both x and y. The contour plot shows where we believe the dog is located with the '+' in the center of the ellipse. The ellipse shows the boundary for the $1\\sigma^2$ probability - points where the dog is quite likely to be based on our current knowledge. Of course, the dog might be very far from this point, as Gaussians allow the mean to be any value. For example, the dog could be at (3234.76,189989.62), but that has vanishing low probability of being true. Generally speaking displaying the $1\\sigma^2$ to $2\\sigma^2$ contour captures the most likely values for the distribution. An equivelent way of thinking about this is the circle/ellipse shows us the amount of error in our belief. A tiny circle would indicate that we have a very small error, and a very large circle indicates a lot of error in our belief. We will use this throughout the rest of the book to display and evaluate the accuracy of our filters at any point in time. \n",
"\n",
"\\mu =\\begin{bmatrix}2\\\\7\\end{bmatrix}, cov = \\begin{bmatrix}2&0\\\\0&2\\end{bmatrix}$. Let this be our current belief about the position of our dog in a field. In other words, we believe that he is positioned at (2,7) with a variance of $\\sigma^2=2$ for both x and y. The contour plot shows where we believe the dog is located with the '+' in the center of the ellipse. The ellipse shows the boundary for the $1\\sigma^2$ probability - points where the dog is quite likely to be based on our current knowledge. Of course, the dog might be very far from this point, as Gaussians allow the mean to be any value. For example, the dog could be at (3234.76,189989.62), but that has vanishing low probability of being true. Generally speaking displaying the $1\\sigma^2$ to $2\\sigma^2$ contour captures the most likely values for the distribution. An equivelent way of thinking about this is the circle/ellipse shows us the amount of error in our belief. A tiny circle would indicate that we have a very small error, and a very large circle indicates a lot of error in our belief. We will use this throughout the rest of the book to display and evaluate the accuracy of our filters at any point in time. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The second plot uses mean and the covariance matrices $\n",
"\\mu =\\begin{bmatrix}2\\\\7\\end{bmatrix}, cov = \\begin{bmatrix}2&0\\\\0&9\\end{bmatrix}$. This time we use a different variance for x (2) vs y (9). The result is an ellipse. When we look at it we can immediately tell that we have a lot more uncertainty in the y value vs the x value. Our belief that the value is (2,7) is the same in both cases, but errors are different. This sort of thing happens naturally as we track objects in the world - one sensor has a better view of the object, or is closer, than another sensor, and so we end up with different error rates in the different axis.\n",
"\n",
"\n",
"\\mu =\\begin{bmatrix}2\\\\7\\end{bmatrix}, cov = \\begin{bmatrix}2&0\\\\0&9\\end{bmatrix}$. This time we use a different variance for x (2) vs y (9). The result is an ellipse. When we look at it we can immediately tell that we have a lot more uncertainty in the y value vs the x value. Our belief that the value is (2,7) is the same in both cases, but errors are different. This sort of thing happens naturally as we track objects in the world - one sensor has a better view of the object, or is closer, than another sensor, and so we end up with different error rates in the different axis."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The third plot uses mean and the covariance matrices $\n",
"\\mu =\\begin{bmatrix}2\\\\7\\end{bmatrix}, cov = \\begin{bmatrix}2&3\\\\1&2\\end{bmatrix}$. This is the first contour that has values in the off-diagonal elements of $cov$, and this is the first contour plot with a slanted ellipse. This is not a coincidence. The two facts are telling use the same thing. A slanted ellipse tells us that the x and y values are somehow **correlated**. We denote that in the covariance matrix with values off the diagonal. What does this mean in physical terms? Think of trying to park your car in a parking spot. You can not pull up beside the spot and then move sideways into the space because most cars cannot go purely sideways. $x$ and $y$ are not independent. This is a consequence of the steering system in a car. When your tires are turned the car rotates around its rear axle while moving forward. Or think of a horse attached to a pivoting exercise bar in a corral. The horse can only walk in circles, he cannot vary $x$ and $y$ independently, which means he cannot walk straight forward to to the side. If $x$ changes, $y$ must also change in a defined way. \n",
"\\mu =\\begin{bmatrix}2\\\\7\\end{bmatrix}, cov = \\begin{bmatrix}2&1.2\\\\1.2&2\\end{bmatrix}$. This is the first contour that has values in the off-diagonal elements of $cov$, and this is the first contour plot with a slanted ellipse. This is not a coincidence. The two facts are telling use the same thing. A slanted ellipse tells us that the x and y values are somehow **correlated**. We denote that in the covariance matrix with values off the diagonal. What does this mean in physical terms? Think of trying to park your car in a parking spot. You can not pull up beside the spot and then move sideways into the space because most cars cannot go purely sideways. $x$ and $y$ are not independent. This is a consequence of the steering system in a car. When your tires are turned the car rotates around its rear axle while moving forward. Or think of a horse attached to a pivoting exercise bar in a corral. The horse can only walk in circles, he cannot vary $x$ and $y$ independently, which means he cannot walk straight forward to to the side. If $x$ changes, $y$ must also change in a defined way. \n",
"\n",
"So when we see this ellipse we know that $x$ and $y$ are correlated, and that the correlation is \"strong\". I will not prove it here, but a 45 $^{\\circ}$ angle denotes complete correlation between $x$ and $y$, whereas $0$ and $90$ denote no correlation at all. Those who are familiar with this math will be objecting quite strongly, as this is actually quite sloppy language that does not adress all of the mathematical issues. They are right, but for now this is a good first approximation to understanding these ellipses from a physical interpretation point of view. The size of the ellipse shows how much error we have in each axis, and the slant shows how strongly correlated the values are.\n",
"**IS THIS TRUE???**\n",
"\n",
"\n",
"\n",
"\n",
"\n"
"A word about **correlation** and **independence**. If variables are **independent** they can vary separately. If you walk in an open field, you can move in the $x$ direction (east-west), the $y$ direction(north-south), or any combination thereof. Independent variables are always also **uncorrelated**. Except in special cases, the reverse does not hold true. Variables can be uncorrelated, but dependent. For example, consider the pair$(x,y)$ where $y=x^2$. Correlation is a linear measurement, so $x$ and $y$ are uncorrelated. However, they are obviously dependent on each other. ** wikipedia article 'correlation and dependence' claims multivariate normals are a special case, where the correlation coeff $p$ completely defines the dependence. FIGURE THIS OUT!**"
]
},
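{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is a quick numerical sketch of that last point, assuming numpy: $x$ and $y=x^2$ show essentially zero linear correlation even though $y$ is completely determined by $x$."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import numpy as np\n",
"\n",
"x = np.linspace(-1, 1, 201)  # symmetric interval around zero\n",
"y = x**2                     # y is fully dependent on x\n",
"\n",
"# off-diagonal of the correlation matrix is ~0: uncorrelated, yet dependent\n",
"print np.corrcoef(x, y)[0, 1]"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},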
{
@ -390,7 +419,28 @@
{
"cell_type": "code",
"collapsed": false,
"input": [],
"input": [
"cov = array([[7,4],[4,7.]])\n",
"mu = array([0,0])\n",
"x = array([0,0])\n",
"print multivariate_gaussian(x,mu,cov)"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"#format the book\n",
"from IPython.core.display import HTML\n",
"def css_styling():\n",
" styles = open(\"./styles/custom.css\", \"r\").read()\n",
" return HTML(styles)\n",
"css_styling()"
],
"language": "python",
"metadata": {},
"outputs": [],

View File

@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
"signature": "sha256:590f0bf7162a3bfaae7a3f473dc035e859b9b29d9c8a7e0a72698e7589560fdc"
"signature": "sha256:385b6aaf050313285ac8e5f3b463ffb1b2157cd6102fee6defbc10838c4649d0"
},
"nbformat": 3,
"nbformat_minor": 0,
@ -59,8 +59,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "code",
@ -72,8 +71,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "code",
@ -152,7 +150,6 @@
" plot (xs, ys)\n",
" show()\n",
"\n",
" \n",
"interact (plt_g, mu=(0,10), gamma=(0.01,6))"
],
"language": "python",

View File

@ -2,7 +2,8 @@ import math
import numpy as np
import numpy.linalg as linalg
import matplotlib.pyplot as plt
import scipy.sparse as sp
import scipy.sparse.linalg as spln
_two_pi = 2*math.pi
@ -36,19 +37,25 @@ def multivariate_gaussian(x, mu, cov):
    The function gaussian() implements the 1D (univariate) case, and is much
    faster than this function.
    """
    # force all to numpy.array type
    x = np.asarray(x)
    mu = np.asarray(mu)
    x = np.array(x, copy=False, ndmin=1)
    mu = np.array(mu, copy=False, ndmin=1)
    n = mu.size
    cov = _to_cov(cov, n)
    nx = len(mu)
    cov = _to_cov(cov, nx)
    det = np.sqrt(np.prod(np.diag(cov)))
    frac = _two_pi**(-n/2.) * (1./det)
    fprime = (x - mu)**2
    return frac * np.exp(-0.5*np.dot(fprime, 1./np.diag(cov)))
    norm_coeff = nx*math.log(2*math.pi) + np.linalg.slogdet(cov)[1]
    err = x - mu
    if (sp.issparse(cov)):
        numerator = spln.spsolve(cov, err).T.dot(err)
    else:
        numerator = np.linalg.solve(cov, err).T.dot(err)
    return math.exp(-0.5*(norm_coeff + numerator))

def norm_plot(mean, var):
    min_x = mean - var * 1.5
    max_x = mean + var * 1.5

View File

@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
"signature": "sha256:1a6b28b567f1c5631b9b36052e0f65cf80787c42b18cc448bbc07857f94a982a"
"signature": "sha256:d0bfbbe322bb5a6c6b2c484f5f8da49814c5c8d8ec1120813d02a9511ddf94a1"
},
"nbformat": 3,
"nbformat_minor": 0,
@ -615,14 +615,26 @@
"cell_type": "code",
"collapsed": false,
"input": [
"Author notes:\n",
" Do I want to go to the multidimensional case? At least describe it, but why not implement it as well?\n",
" "
"#format the book\n",
"from IPython.core.display import HTML\n",
"def css_styling():\n",
" styles = open(\"./styles/custom.css\", \"r\").read()\n",
" return HTML(styles)\n",
"css_styling()"
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"**Author notes:**\n",
" Do I want to go to the multidimensional case? At least describe it, but why not implement it as well"
]
}
],
"metadata": {}

styles/custom.css Normal file (76 lines)
View File

@ -0,0 +1,76 @@
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunss.otf');
}
@font-face {
font-family: "Computer Modern";
font-weight: bold;
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunsx.otf');
}
@font-face {
font-family: "Computer Modern";
font-style: oblique;
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunsi.otf');
}
@font-face {
font-family: "Computer Modern";
font-weight: bold;
font-style: oblique;
src: url('http://9dbb143991406a7c655e-aa5fcb0a5a4ec34cff238a2d56ca4144.r56.cf5.rackcdn.com/cmunso.otf');
}
div.cell{
width:800px;
margin-left:16% !important;
margin-right:auto;
}
h1 {
font-family: Helvetica, serif;
}
h4{
margin-top:12px;
margin-bottom: 3px;
}
div.text_cell_render{
font-family: Computer Modern, "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif;
line-height: 145%;
font-size: 130%;
width:800px;
margin-left:auto;
margin-right:auto;
}
.CodeMirror{
font-family: "Source Code Pro", source-code-pro,Consolas, monospace;
}
.prompt{
display: none;
}
.text_cell_render h5 {
font-weight: 300;
font-size: 22pt;
color: #4057A1;
font-style: italic;
margin-bottom: .5em;
margin-top: 0.5em;
display: block;
}
.warning{
color: rgb( 240, 20, 20 )
}
</style>
<script>
MathJax.Hub.Config({
TeX: {
extensions: ["AMSmath.js"]
},
tex2jax: {
inlineMath: [ ['$','$'], ["\\(","\\)"] ],
displayMath: [ ['$$','$$'], ["\\[","\\]"] ]
},
displayAlign: 'center', // 'center' centers the displayed equations.
"HTML-CSS": {
styles: {'.MathJax_Display': {"margin": 4}}
}
});
</script>