pset 3 problems 6, 7

Steven G. Johnson 2023-03-10 20:34:51 -05:00
parent 35b11d86e8
commit 56c7967f40


@@ -154,24 +154,50 @@
},
{
"cell_type": "markdown",
"id": "88ab5b5e",
"id": "55b01fd5",
"metadata": {},
"source": [
"## Problems 6, etc: coming soon"
"## Problem 6 (5+5 points)\n",
"\n",
"Suppose we are solving a generalized/weighted least-square problem:\n",
"$$\n",
"\\min_{x \\in \\mathbb{R}^n} \\Vert b - Ax \\Vert_W\n",
"$$\n",
"where $A$ is $m \\times n$, $W = W^T$ is an $m \\times m$ positive-definite \"weight\" matrix, and $\\Vert y \\Vert_W = \\sqrt{y^T W y}$ is a weighted $\\ell^2$ norm.\n",
"\n",
"**(a)** Show that this is equivalent to an ordinary least-square problem of minimizing $\\Vert d - Cx \\Vert_2 = \\Vert b - Ax \\Vert_W$ for some matrix $C$ and vector $d$. (Hint: use the fact that any positive-definite matrix $W$ can be factored as $W = $ \\_\\_\\_\\_\\_.)\n",
"\n",
"**(b)** Show that the normal equations $C^T C \\hat{x} = C^T d$ are equivalent to the generalized normal equations $A^T W A \\hat{x} = A^T W b$ given in class."
]
},
{
"cell_type": "markdown",
"id": "85e81fbe",
"id": "88ab5b5e",
"metadata": {},
"source": [
"More problems will be posted by Saturday, March 11."
"## Problem 7 (5+5 points)\n",
"\n",
"The most common form of least-squares is linear regression, i.e. fitting $m$ data points $(a_i, b_i)$ to a model of the form $a(b) = x_1 + b x_2$.\n",
"\n",
"Suppose the data points $b_i$ have independent random errors with equal variances $\\sigma^2$ (i.e. they are \"homoscedastic\"). (This is the case in which GaussMarkov says that ordinary least-squares minimizes the variance.) In this case, many sources give simple explicit formulas for the variances of the fit coefficients $x_1$ and $x_2$\n",
"\n",
"$$\n",
"\\text{variance of }\\hat{x}_1 = \\frac{\\sigma^2 \\sum_{i=1}^m a_i^2} {m \\sum_{j=1}^m (a_j - \\text{mean } a)^2}\n",
"$$\n",
"\n",
"$$\n",
"\\text{variance of }\\hat{x}_2 = \\frac{\\sigma^2 } {\\sum_{j=1}^m (a_j - \\text{mean } a)^2}\n",
"$$\n",
"\n",
"**Derive these formulas** from the more general equation $W = (A^T V^{-1} A)^{-1}$ for the covariance matrix of $\\hat{x}$ that we found in class for weighted least squares (where $V$ is the covariance matrix of $b$).\n",
"\n",
"(The formula for the inverse of a 2x2 matrix will be helpful; google it.)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Julia 1.8.3",
"display_name": "Julia 1.8.0",
"language": "julia",
"name": "julia-1.8"
},
@@ -179,7 +205,7 @@
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
"version": "1.8.3"
"version": "1.8.2"
}
},
"nbformat": 4,