More explanations about Gaussians

This commit is contained in:
Roger Labbe 2014-05-10 10:20:31 -07:00
parent 6e5c02ad36
commit a7d8371b36
4 changed files with 142 additions and 116 deletions

View File

@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
"signature": "sha256:5304ce00179761b51c04bc474726572fbd4057258b20454a864ab8a3baca398b"
"signature": "sha256:9184f3be6129a4cf39b259921633ec76099fb8ca364fd39a633da1920257669b"
},
"nbformat": 3,
"nbformat_minor": 0,
@ -38,8 +38,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -92,8 +91,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -132,8 +130,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -164,8 +161,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -204,8 +200,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -249,8 +244,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
}
],
"metadata": {}

View File

@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
"signature": "sha256:58455c078849c3b264b0536c1ad78c8c27109be5299e7d16fe969fca4f75c398"
"signature": "sha256:91af593ec32629bec5fa5a58b7f5ae29e79fc039fe09b20d37c26dca0ac20430"
},
"nbformat": 3,
"nbformat_minor": 0,
@ -52,8 +52,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -73,8 +72,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -91,6 +89,8 @@
"input": [
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.pylab as pylab\n",
"pylab.rcParams['figure.figsize'] = 10,6\n",
"\n",
"dog = DogSensor (noise=0.0)\n",
"xs = []\n",
@ -103,8 +103,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -139,8 +138,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -163,8 +161,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "code",
@ -174,14 +171,13 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You may not have a full understanding of the exact *meaning* of a noise value of 100.0, but as it turns out if you multiply *randn()* with a number $n$, the result is just a normal distribution with $\\sigma = \\sqrt{n}$. So the example with noise = 100 is using the normal distribution $N(0,100)$. Recall the notation for a normal distribution is $N(\\mu,\\sigma^2)$. If the square root is confusing, recall that normal distributions use $\\sigma^2$ for the variance, and $\\sigma$ is the standard deviation, which we do not use in this book. *dog_sensor.__init__()* takes the square root of the noise setting so that the *noise * randn()* call properly computes the normal distribution. "
"You may not have a full understanding of the exact *meaning* of a noise value of 100.0, but as it turns out if you multiply *randn()* with a number $n$, the result is just a normal distribution with $\\sigma = \\sqrt{n}$. So the example with noise = 100 is using the normal distribution $\\mathcal{N}(0,100)$. Recall the notation for a normal distribution is $\\mathcal{N}(\\mu,\\sigma^2)$. If the square root is confusing, recall that normal distributions use $\\sigma^2$ for the variance, and $\\sigma$ is the standard deviation, which we do not use in this book. *dog_sensor.__init__()* takes the square root of the noise setting so that the *noise * randn()* call properly computes the normal distribution. "
]
},
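The scaling claim above is easy to check empirically. A minimal sketch, assuming only numpy (the variable names are my own, not from the book's code):

```python
import numpy as np

np.random.seed(42)           # fixed seed for reproducible draws
noise = 100.0                # the desired variance, as in the example above

# scaling randn() by sqrt(noise) yields samples from N(0, noise)
samples = np.sqrt(noise) * np.random.randn(100_000)

print(np.mean(samples))      # close to 0
print(np.var(samples))       # close to 100
```

The sample variance comes out near 100, confirming that the `sqrt` inside `__init__()` is what makes the `noise` parameter behave as a variance.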
{
@ -190,7 +186,7 @@
"source": [
"#### Math with Gaussians\n",
"\n",
"Let's say we believe that our dog is at 23m, and the variance is 5 ($N(23,5)$). We can represent that in a plot:\n"
"Let's say we believe that our dog is at 23m, and the variance is 5, or $pos_{dog}=\\mathcal{N}(23,5)$). We can represent that in a plot:\n"
]
},
{
@ -202,18 +198,17 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This corresponds to a fairly inexact belief. While we believe that the dog is at 23, note that roughly 21 to 25 are quite likely as well. Let's assume for the moment our dog is standing still, and we query the sensor again. This time it returns 23.2 as the position. Can we use this additional information to improve our estimate of the dog's position.\n",
"This corresponds to a fairly inexact belief. While we believe that the dog is at 23, note that roughly 21 to 25 are quite likely as well. Let's assume for the moment our dog is standing still, and we query the sensor again. This time it returns 23.2 as the position. Can we use this additional information to improve our estimate of the dog's position?\n",
"\n",
"Intuition suggests 'yes'. Consider: if we read the sensor 100 times and each time it returned a value between 21 and 25, all centered around 23, we should be very confident that the dog is somewhere very near 23. Of course, a different physical interpertation is possible. Perhaps our dog was randomly wandering back and forth in a way that exactly emulated a normal distribution. But that seems extremely unlikely - I certainly have never seen a dog do that. So the only reasonable assumption is that the dog was mostly standing still at 23.0.\n",
"\n",
"Let's look at this in a plot:"
"Let's look at 100 sensor readings in a plot:"
]
},
{
@ -231,8 +226,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -263,17 +257,32 @@
"\n",
" new_gaussian = measurement * old_gaussian\n",
" \n",
"where measurement is a gaussian returned from the sensor. But does that make sense? Can we multiply gaussians? If we multiply a gaussian with a gaussing is the result another gaussian, or something else?\n",
"\n",
"Of course the answer is 'yes', or this chapter would be for naught. It is not particularly difficult to perform the algebra to derive the equation for multiplying two gaussians, but I will just present the result:\n",
"$$ N({\\mu}_1, {{\\sigma}_1}^2)*N({\\mu}_2, {{\\sigma}_2}^2) = N(\\frac{{\\sigma}_1 {\\mu}_2 + {\\sigma}_2 {\\mu}_1}{{\\sigma}_1 + {\\sigma}_2},\\frac{1}{\\frac{1}{{\\sigma}_1} + \\frac{1}{{\\sigma}_2}}) $$"
"where measurement is a Gaussian returned from the sensor. But does that make sense? Can we multiply gaussians? If we multiply a Gaussian with a Gaussian is the result another Gaussian, or something else?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's immediately look at some plots of this to inform our intuition about this result. First, let's look at the result of multiplying $N(23,5) $ to itself. This corresponds to getting 23.0 as the sensor value twice in a row."
"It is not particularly difficult to perform the algebra to derive the equation for multiplying two gaussians, but I will just present the result:\n",
"$$\n",
"N(\\mu_1, \\sigma_1^2)*N(\\mu_2, \\sigma_2^2) = N(\\frac{\\sigma_1^2 \\mu_2 + \\sigma_2^2 \\mu_1}{\\sigma_1^2 + \\sigma_2^2},\\frac{1}{\\frac{1}{\\sigma_1^2} + \\frac{1}{\\sigma_2^2}}) $$ \n",
"\n",
"In other words the result is a Gaussian with \n",
"\n",
"$$\\begin{align*}\n",
"\\mu &=\\frac{\\sigma_1^2 \\mu_2 + \\sigma_2^2 \\mu_1} {\\sigma_1^2 + \\sigma_2^2}, \\\\\n",
"\\sigma &= \\frac{1}{\\frac{1}{\\sigma_1^2} + \\frac{1}{\\sigma_2^2}}\n",
"\\end{align*}$$"
]
},
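The two equations above translate directly into a few lines of Python. The notebook calls a `multiply(m1,s1,m2,s2)` helper later on; its actual code is not shown in this diff, so the version below is my own sketch of what such a function computes (the signature is an assumption):

```python
def multiply(mu1, var1, mu2, var2):
    """Multiply N(mu1, var1) by N(mu2, var2); return (mean, variance)."""
    mean = (var1 * mu2 + var2 * mu1) / (var1 + var2)
    # algebraically identical to 1/(1/var1 + 1/var2)
    var = (var1 * var2) / (var1 + var2)
    return mean, var

# two identical measurements: same mean, half the variance
print(multiply(23.0, 5.0, 23.0, 5.0))  # (23.0, 2.5)
```

Note that the means never enter the variance computation, which is the "variance is unaffected by the means" observation made below.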
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Without doing a deep analysis we can immediately infer some things. First and most importantly the result of multiplying two Gaussians is another Gaussian. The expression for the mean is not particularly illuminating, except that it is a combination of the means and variances of the input. But the variance of the result is merely some combination of the variances of the variances of the input. We conclude from this that the variances are completely unaffected by the values of the mean!\n",
"\n",
"Let's immediately look at some plots of this. First, let's look at the result of multiplying $N(23,5)$ to itself. This corresponds to getting 23.0 as the sensor value twice in a row. But before you look at the result, what do you think the result will look like? What should the new mean be? Will the variance by wider, narrower, or the same?"
]
},
{
@ -305,16 +314,19 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The result is either amazing or what you would expect, depending on your state of mind. I must admit I vacillate freely between the two! Note that the result of the multiplation is taller and narrow than the original Gaussian. If we think of the Gaussians as two measurement, this makes sense. If I measure twice and get the same value, I should be more confident in my answer than if I just measured once. \"Measure twice, cut once\" is a useful saying and practice due to this fact! \n",
"The result is either amazing or what you would expect, depending on your state of mind. I must admit I vacillate freely between the two! Note that the result of the multiplation is taller and narrow than the original Gaussian but the mean is the same. Does this match your intuition of what the result should have been?\n",
"\n",
"Now let's multiply two gaussians (or equivelently, two measurements) that are partially separated. What do you think the result will be? Let's find out:"
"If we think of the Gaussians as two measurements, this makes sense. If I measure twice and get the same value, I should be more confident in my answer than if I just measured once. If I measure twice and get $23m$ each time, I should conclude that the length is close to $23m$. So the mean should be $23$. I am more confident with two measurements than with one, so the variance of the result should be smaller. \n",
"\n",
"\"Measure twice, cut once\" is a useful saying and practice due to this fact! The Gaussian is just a mathematical model of this physical fact, so we should expect the math to follow our physical process. \n",
"\n",
"Now let's multiply two gaussians (or equivelently, two measurements) that are partially separated. In other words, their means will be different, but their variances will be the same. What do you think the result will be? Think about it, and then look at the graph."
]
},
{
@ -340,8 +352,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -351,20 +362,56 @@
"\n",
"That is fairly counter-intuitive, so let's consider it further. Perhaps a more reasonable assumption would be that either you or your coworker just made a mistake, and the true distance is either 23 or 25, but certainly not 24. Surely that is possible. However, suppose the two measurements you reported as 24.01 and 23.99. In that case you would agree that in this case the best guess for the correct value is 24? Which interpretation we choose depends on the properties of the sensors we are using. Humans make galling mistakes, physical sensors do not. \n",
"\n",
"This topic is fairly deep, and I will explore it once we have completed our Kalman filter. For now I will merely say that the Kalman filter requires the interpretation that measurements are accurate, with Gaussian noise, and that a large error caused by misreading a measuring tape is not Gaussian noise. So perhaps you would be justified in thinking that a histogram filter will perform better for the human readings, and the Kalman filter will perform better with sensor readings that have gaussian noise.\n",
"This topic is fairly deep, and I will explore it once we have completed our Kalman filter. For now I will merely say that the Kalman filter requires the interpretation that measurements are accurate, with Gaussian noise, and that a large error caused by misreading a measuring tape is not Gaussian noise.\n",
"\n",
"For now I ask that you trust me. The math is correct, so we have no choice but to accept it and use it. We will see how the Kalman filter deals with movements vs error very soon. In the meantime, accept that 24 is the correct answer to this problem."
"For now I ask that you trust me. The math is correct, so we have no choice but to accept it and use it. We will see how the Kalman filter deals with movements vs error very soon. In the meantime, accept that 24 is the correct answer to this problem.\n",
"\n",
"One final test of your intuition. What if the two measurements are widely separated? "
]
},
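The claim that 24 is the correct answer can be checked numerically with the product formula derived earlier. This is a self-contained sketch (the helper is my own illustration, not the book's code):

```python
def gaussian_multiply(mu1, var1, mu2, var2):
    # product of N(mu1, var1) and N(mu2, var2), renormalized to a Gaussian
    mean = (var1 * mu2 + var2 * mu1) / (var1 + var2)
    var = (var1 * var2) / (var1 + var2)
    return mean, var

# equal variances, different means: the result lands exactly midway,
# and the variance shrinks, reflecting increased confidence
print(gaussian_multiply(23.0, 5.0, 25.0, 5.0))  # (24.0, 2.5)
```

With equal variances the mean weights are equal, so the result must sit halfway between the inputs regardless of how far apart they are.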
{
"cell_type": "code",
"collapsed": false,
"input": [
"xs = np.arange(0, 60, 0.1)\n",
"\n",
"m1,s1 = 10, 5\n",
"m2,s2 = 50, 5\n",
"m, s = multiply(m1,s1,m2,s2)\n",
"\n",
"ys = [gaussian.gaussian(x,m1,s1) for x in xs]\n",
"p1, = plt.plot (xs,ys)\n",
"\n",
"ys = [gaussian.gaussian(x,m2,s2) for x in xs]\n",
"p2, = plt.plot (xs,ys)\n",
"\n",
"ys = [gaussian.gaussian(x,m,s) for x in xs]\n",
"p3, = plt.plot(xs,ys)\n",
"plt.legend([p1,p2,p3],['measure 1', 'measure 2', 'multiply'])\n",
"plt.show()"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This result bothered me quite a bit when I first learned it. If my first measurement was 10, and the next one was 50, why would I choose 30 as a result? And why would I be *more* confident? Doesn't it make sense that either one of the measurements is wrong, or that I am measuring a moving object? Shouldn't the result be nearer 50? And, shouldn't the variance be larger, not smaller?\n",
"\n",
"Well, no. It will become clearer soon, but recall our discrete Bayesian filter. It had two steps: *sense*, which incorporated the new measurement, and then *update*, which incorporated the movement. In the chart above we don't have any movement information. For now, trust me that very shortly we will learn how to incorporate that information. In the meantime, reflect on the fact that if we have 2 measurements with known variance, this is the only possible result, because we do not have any information (yet) as to whether the object is moving or that one sensor might be malfunctioning. As with the discrete Bayes filter, this result just reflects our current knowledge, and this is all we know. Trust the math!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Implementing Sensing\n",
"#### Implementing the Sensing Step\n",
"\n",
"Recall the histogram filter uses a numpy array to encode our belief about the position of our dog at any time. That array stored our belief that the dog was in any position in the hallway using 10 positions. This was very crude, because with a 100m hallway that corresponded to positions 10m apart. It would have been trivial to expand the number of positions to say 1,000, and that is what we would do if using it for a real problem. But the problem remains that the distribution is discrete and multimodal - it can express strong belief that the dog is in two positions at the same time.\n",
"Recall the histogram filter uses a numpy array to encode our belief about the position of our dog at any time. That array stored our belief of our dog's position in the hallway using 10 discrete positions. This was very crude, because with a 100m hallway that corresponded to positions 10m apart. It would have been trivial to expand the number of positions to say 1,000, and that is what we would do if using it for a real problem. But the problem remains that the distribution is discrete and multimodal - it can express strong belief that the dog is in two positions at the same time.\n",
"\n",
"Therefore, we will use a single Gaussian to reflect our current belief of the dog's position. Gaussians extend to infinity on both sides of the mean, so the single Gaussian will cover the entire hallway. They are unimodal, and seem to reflect the behavior of real-world sensors - most errors are small and clustered around the mean. Here is the entire implementation of the sense function for a Kalman filter:"
"Therefore, we will use a single Gaussian to reflect our current belief of the dog's position. In other words, we will use $dog_{pos} = \\mathcal{N}(\\mu,\\sigma^2)$. Gaussians extend to infinity on both sides of the mean, so the single Gaussian will cover the entire hallway. They are unimodal, and seem to reflect the behavior of real-world sensors - most errors are small and clustered around the mean. Here is the entire implementation of the sense function for a Kalman filter:"
]
},
{
@ -376,8 +423,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
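The sense implementation itself is elided by this diff. Based on the text's description that sensing is just Gaussian multiplication, a plausible sketch might look like the following (function and parameter names are assumptions, not the book's actual code):

```python
def sense(pos, variance, measurement, measurement_variance):
    """Fuse belief N(pos, variance) with a measurement N(z, R)."""
    new_pos = ((variance * measurement + measurement_variance * pos) /
               (variance + measurement_variance))
    new_var = ((variance * measurement_variance) /
               (variance + measurement_variance))
    return new_pos, new_var

# belief N(23, 5), sensor reads 23.2 with variance 5:
# the estimate nudges toward the measurement and the variance drops
print(sense(23.0, 5.0, 23.2, 5.0))
```

This is the entire update: one multiplication of two Gaussians, which is why a unimodal Gaussian belief is such a convenient representation.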
{
"cell_type": "markdown",
@ -395,8 +441,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -420,8 +465,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -481,8 +525,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -526,8 +569,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -646,8 +688,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -696,8 +737,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -752,8 +792,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -777,8 +816,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -808,8 +846,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -862,8 +899,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -918,8 +954,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -966,8 +1001,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -1014,8 +1048,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -1057,8 +1090,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -1096,8 +1128,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -1142,8 +1173,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -1172,8 +1202,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -1211,8 +1240,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -1237,8 +1265,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -1276,8 +1303,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -1305,8 +1331,7 @@
"input": [],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "code",
@ -1342,8 +1367,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
},
{
"cell_type": "markdown",
@ -1368,8 +1392,7 @@
],
"language": "python",
"metadata": {},
"outputs": [],
"prompt_number": ""
"outputs": []
}
],
"metadata": {}

View File

@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
"signature": "sha256:3b71546948dd21caccb388a6147d443538b451fdb64f56494861aa34958fb04a"
"signature": "sha256:c85d72f3d31f37b3edfae8ecdc7e79438bde16b84b6d6d119c4edfc1239913d8"
},
"nbformat": 3,
"nbformat_minor": 0,
@ -81,7 +81,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
"from gaussian import *"
"from gaussian import gaussian, multivariate_gaussian"
],
"language": "python",
"metadata": {},
@ -102,6 +102,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
"import numpy as np\n",
"x = np.array([2.5, 7.3])"
],
"language": "python",
@ -183,7 +184,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"These numbers are not easy to interpret. Let's plot this in 3D, with the z coordinate being the probability."
"These numbers are not easy to interpret. Let's plot this in 3D, with the z (up) coordinate being the probability."
]
},
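For a concrete sense of where that number comes from, the multivariate normal density can be evaluated directly. A sketch of the standard formula, which I assume mirrors what `gaussian.multivariate_gaussian` computes (the book's own implementation is not shown in this diff):

```python
import numpy as np

def multivariate_gaussian_pdf(x, mu, cov):
    """Density of N(mu, cov) evaluated at x, for any dimension k."""
    x, mu, cov = np.asarray(x, float), np.asarray(mu, float), np.asarray(cov, float)
    k = len(mu)
    diff = x - mu
    norm = 1.0 / np.sqrt((2 * np.pi) ** k * np.linalg.det(cov))
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff)

# x = [2.5, 7.3] against mean [2, 7] and the diagonal covariance used below
print(multivariate_gaussian_pdf([2.5, 7.3], [2, 7], [[8., 0.], [0., 10.]]))  # ≈ 0.01744
```

With a diagonal covariance the exponent separates into the two one-dimensional quadratic terms, which is why the marginals drawn on the plot below are themselves Gaussians.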
{
@ -194,22 +195,34 @@
"\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.pylab as pylab\n",
"from matplotlib import cm\n",
"from mpl_toolkits.mplot3d import Axes3D\n",
"import numpy as np\n",
"\n",
"pylab.rcParams['figure.figsize'] = 12,6\n",
"cov = np.array([[8.,0],[0,10.]])\n",
"\n",
"mu = np.array([2,7])\n",
"\n",
"xs, ys = np.arange(-8, 13, .75), np.arange(-8, 20, .75)\n",
"xv, yv = np.meshgrid (xs, ys)\n",
"\n",
"zs = np.array([multivariate_gaussian(np.array([x,y]),mu,cov) \n",
"zs = np.array([100.* multivariate_gaussian(np.array([x,y]),mu,cov) \\\n",
" for x,y in zip(np.ravel(xv), np.ravel(yv))])\n",
"zv = zs.reshape(xv.shape)\n",
"\n",
"ax = plt.figure().add_subplot(111, projection='3d')\n",
"ax.plot_wireframe(xv, yv, zv)\n",
"plt.show()\n",
"pylab.rcParams['figure.figsize'] = 6,4"
"ax.plot_surface(xv, yv, zv)\n",
"\n",
"ax.set_xlabel('X')\n",
"ax.set_ylabel('Y')\n",
"\n",
"ax.contour(xv, yv, zv, zdir='x', offset=-9, cmap=cm.autumn)\n",
"ax.contour(xv, yv, zv, zdir='y', offset=20, cmap=cm.BuGn)\n",
"plt.xlim((-10,20)) \n",
"plt.ylim((-10,20)) \n",
"\n",
"plt.show()\n"
],
"language": "python",
"metadata": {},
@ -219,7 +232,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The result is clearly a 3D bell shaped curve. We can see that the gaussian is centered around (2,7), and that the probability quickly drops away in all directions.\n",
"The result is clearly a 3D bell shaped curve. We can see that the gaussian is centered around (2,7), and that the probability quickly drops away in all directions. On the sides of the plot I have drawn the Gaussians for x in greens and for y in orange.\n",
"\n",
"As beautiful as this is, it is perhaps a bit hard to get useful information. For example, it is not easy to tell if x and y both have the same variance or not. So for most of the rest of this book we will display multidimensional gaussian using contour plots. I will use some helper functions in gaussian.py to plot them. If you are interested in linear algebra go ahead and look at the code used to produce these contours, otherwise feel free to ignore it."
]
@ -229,7 +242,6 @@
"collapsed": false,
"input": [
"import gaussian as g\n",
"pylab.rcParams['figure.figsize'] = 12,4\n",
"\n",
"cov = np.array([[2,0],[0,2]])\n",
"e = g.sigma_ellipse (cov, 2, 7)\n",
@ -246,9 +258,7 @@
"cov = np.array([[2,1.2],[1.2,3]])\n",
"e = g.sigma_ellipse (cov, 2, 7)\n",
"g.plot_sigma_ellipse(e,'|2 1.2|\\n|1.2 2|')\n",
"plt.show()\n",
"\n",
"pylab.rcParams['figure.figsize'] = 6,4"
"plt.show()"
],
"language": "python",
"metadata": {},

View File

@ -1,7 +1,7 @@
{
"metadata": {
"name": "",
"signature": "sha256:385b6aaf050313285ac8e5f3b463ffb1b2157cd6102fee6defbc10838c4649d0"
"signature": "sha256:889ce3306bc4b69fe3ca9aafb5eec924ed230e1f19586a991b69ad5ef43b60fc"
},
"nbformat": 3,
"nbformat_minor": 0,
@ -157,12 +157,11 @@
"outputs": []
},
{
"cell_type": "code",
"collapsed": false,
"input": [],
"language": "python",
"cell_type": "markdown",
"metadata": {},
"outputs": []
"source": [
"$$\\sigma_1^2$$"
]
}
],
"metadata": {}