use quarto, not Pluto, to render pages
@@ -8,6 +8,11 @@ using CalculusWithJulia
using Plots
using SymPy
using Roots
```

And the following from the `Contour` package:

```julia
import Contour: contours, levels, level, lines, coordinates
```

@@ -109,8 +114,8 @@ pt = [a, b, 0]
scatter!(unzip([pt])...)
arrow!(pt, [1,0,0], linestyle=:dash)
arrow!(pt, [0,1,0], linestyle=:dash)

```

#### Alternate forms

The equation for the tangent plane is often expressed in a more explicit form. For $n=2$, if we set $dx = x-a$ and $dy=y-b$, then the equation for the plane becomes:
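For reference, with these substitutions the explicit form being alluded to is presumably the standard tangent-plane formula (the equation itself falls outside this hunk):

```math
z = f(a,b) + \frac{\partial f}{\partial x}(a,b)\, dx + \frac{\partial f}{\partial y}(a,b)\, dy.
```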
@@ -915,8 +920,10 @@ Another might be the vertical squared distance to the line:


```math
d2(\alpha, \beta) = (y_1 - l(x_1))^2 + (y_2 - l(x_2))^2 + (y_3 - l(x_3))^2 =
(y_1 - (\alpha + \beta x_1))^2 + (y_2 - (\alpha + \beta x_2))^2 + (y_3 - (\alpha + \beta x_3))^2
\begin{align*}
d2(\alpha, \beta) &= (y_1 - l(x_1))^2 + (y_2 - l(x_2))^2 + (y_3 - l(x_3))^2 \\
&= (y_1 - (\alpha + \beta x_1))^2 + (y_2 - (\alpha + \beta x_2))^2 + (y_3 - (\alpha + \beta x_3))^2
\end{align*}
```

Another might be the *shortest* distance to the line:
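Minimizing $d2$ amounts to setting both partial derivatives to zero; though not shown in this hunk, that presumably leads to the standard least-squares normal equations:

```math
\frac{\partial d2}{\partial \alpha} = -2\sum_{i=1}^{3}\left(y_i - \alpha - \beta x_i\right) = 0, \quad
\frac{\partial d2}{\partial \beta} = -2\sum_{i=1}^{3} x_i\left(y_i - \alpha - \beta x_i\right) = 0.
```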
@@ -1011,8 +1018,8 @@ gammas₂ = [1.0]

for n in 1:5
    xn = xs₂[end]
    gamma = gammas₂[end]
    xn1 = xn - gamma * gradient(f₂)(xn)
    gamma₀ = gammas₂[end]
    xn1 = xn - gamma₀ * gradient(f₂)(xn)
    dx, dy = xn1 - xn, gradient(f₂)(xn1) - gradient(f₂)(xn)
    gamman1 = abs( (dx ⋅ dy) / (dy ⋅ dy) )

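The loop above uses a Barzilai-Borwein style step-size update (the `abs((dx ⋅ dy)/(dy ⋅ dy))` line). A self-contained sketch of the same idea, with a hypothetical quadratic objective and a hand-written gradient standing in for the chapter's `f₂` and `gradient`:

```julia
# Sketch of gradient descent with a Barzilai-Borwein step size.
# The gradient below is a hypothetical stand-in for gradient(f₂);
# it is the gradient of f(x, y) = x^2 + 5y^2.
∇f(x) = [2 * x[1], 10 * x[2]]

function bb_descent(∇f, x0; iters=20)
    x, γ = x0, 1.0                       # starting point and initial step size
    for _ in 1:iters
        x1 = x - γ * ∇f(x)               # ordinary gradient step
        dx, dy = x1 - x, ∇f(x1) - ∇f(x)  # change in position and in gradient
        denom = sum(dy .* dy)
        denom == 0 && break              # converged; avoid division by zero
        γ = abs(sum(dx .* dy) / denom)   # Barzilai-Borwein step-size update
        x = x1
    end
    x
end

xmin = bb_descent(∇f, [2.0, 1.0])        # should approach the minimizer [0, 0]
```

The adaptive step avoids hand-tuning a fixed learning rate, at the cost of non-monotone progress on some problems.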
@@ -1133,10 +1140,8 @@ fxx, d
Consequently we have a local maximum at this critical point.


```julia; echo=false
note(""" The `Optim.jl` package provides efficient implementations of these two numeric methods, and others. """)
```

!!! note
    The `Optim.jl` package provides efficient implementations of these two numeric methods, and others.

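For readers wanting to try `Optim.jl`, a minimal sketch of its basic interface (assuming the package is installed; the objective here is a made-up example, not one from the chapter):

```julia
using Optim   # external package: install with `] add Optim`

# Optim minimizes functions of a vector argument; with no method
# specified, `optimize` defaults to derivative-free Nelder-Mead.
f(v) = (v[1] - 1)^2 + 2 * (v[2] + 3)^2

res = optimize(f, [0.0, 0.0])
xmin = Optim.minimizer(res)    # close to [1.0, -3.0]
```

Gradient-based methods can be selected by passing a method as a third argument, e.g. `optimize(f, [0.0, 0.0], BFGS())`.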
## Constrained optimization, Lagrange multipliers

@@ -1557,11 +1562,13 @@ This theorem can be generalized to scalar functions, but the notation can be cum
Following [Folland](https://sites.math.washington.edu/~folland/Math425/taylor2.pdf) we use *multi-index* notation. Suppose $f:R^n \rightarrow R$, and let $\alpha=(\alpha_1, \alpha_2, \dots, \alpha_n)$. Then define the following notation:

```math
|\alpha| = \alpha_1 + \cdots + \alpha_n, \quad
\alpha! = \alpha_1!\alpha_2!\cdot\cdots\cdot\alpha_n!,\quad
\vec{x}^\alpha = x_1^{\alpha_1}x_2^{\alpha_2}\cdots x_n^{\alpha_n}, \quad
\partial^\alpha f = \partial_1^{\alpha_1}\partial_2^{\alpha_2}\cdots \partial_n^{\alpha_n} f =
\frac{\partial^{|\alpha|}f}{\partial x_1^{\alpha_1} \partial x_2^{\alpha_2} \cdots \partial x_n^{\alpha_n}}.
\begin{align*}
|\alpha| &= \alpha_1 + \cdots + \alpha_n, \\
\alpha! &= \alpha_1!\alpha_2!\cdot\cdots\cdot\alpha_n!, \\
\vec{x}^\alpha &= x_1^{\alpha_1}x_2^{\alpha_2}\cdots x_n^{\alpha_n}, \\
\partial^\alpha f &= \partial_1^{\alpha_1}\partial_2^{\alpha_2}\cdots \partial_n^{\alpha_n} f \\
& = \frac{\partial^{|\alpha|}f}{\partial x_1^{\alpha_1} \partial x_2^{\alpha_2} \cdots \partial x_n^{\alpha_n}}.
\end{align*}
```

This notation makes many formulas from one dimension carry over to higher dimensions. For example, the binomial theorem says:
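In multi-index notation the statement presumably intended is the multinomial generalization (the formula itself falls outside this hunk):

```math
(x_1 + x_2 + \cdots + x_n)^k = \sum_{|\alpha| = k} \frac{k!}{\alpha!}\, \vec{x}^\alpha.
```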
@@ -1781,8 +1788,8 @@ choices = [
raw"`` 2x + y - 2z = 1``",
raw"`` x + 2y + 3z = 6``"
]
ans = 1
radioq(choices, ans)
answ = 1
radioq(choices, answ)
```


@@ -1798,8 +1805,8 @@ choices = [
raw"`` y^2 + y, x^2 + x``",
raw"`` \langle 2y + y^2, 2x + x^2``"
]
ans = 1
radioq(choices, ans)
answ = 1
radioq(choices, answ)
```

Is this the Hessian of $f$?
@@ -1830,8 +1837,8 @@ choices = [
L"The function $f$ has a saddle point, as $d < 0$",
L"Nothing can be said, as $d=0$"
]
ans = 2
radioq(choices, ans, keep_order=true)
answ = 2
radioq(choices, answ, keep_order=true)
```


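These review questions rely on the second partials test; as a reminder, with $d = f_{xx}f_{yy} - f_{xy}^2$ evaluated at a critical point (standard statement, consistent with the choices above):

```math
d > 0, f_{xx} > 0 \Rightarrow \text{local minimum};\quad
d > 0, f_{xx} < 0 \Rightarrow \text{local maximum};\quad
d < 0 \Rightarrow \text{saddle point};\quad
d = 0 \Rightarrow \text{inconclusive}.
```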
@@ -1885,8 +1892,8 @@ choices = [
L"Nothing can be said, as $d=0$",
L"The test does not apply, as $\nabla{f}$ is not $0$ at this point."
]
ans = 3
radioq(choices, ans, keep_order=true)
answ = 3
radioq(choices, answ, keep_order=true)
```

Which is true of $f$ at $(0, -1/2)$:
@@ -1899,8 +1906,8 @@ choices = [
L"Nothing can be said, as $d=0$",
L"The test does not apply, as $\nabla{f}$ is not $0$ at this point."
]
ans = 1
radioq(choices, ans, keep_order=true)
answ = 1
radioq(choices, answ, keep_order=true)
```


@@ -1914,8 +1921,8 @@ choices = [
L"Nothing can be said, as $d=0$",
L"The test does not apply, as $\nabla{f}$ is not $0$ at this point."
]
ans = 5
radioq(choices, ans, keep_order=true)
answ = 5
radioq(choices, answ, keep_order=true)
```


@@ -1962,8 +1969,8 @@ choices =[
"It is the determinant of the Hessian",
L"It isn't, $b^2-4ac$ is from the quadratic formula"
]
ans = 1
radioq(choices, ans)
answ = 1
radioq(choices, answ)
```

Which condition on $a$, $b$, and $c$ will ensure a *local maximum*:
@@ -1974,8 +1981,8 @@ choices = [
L"That $a<0$ and $ac-b^2 > 0$",
L"That $ac-b^2 < 0$"
]
ans = 2
radioq(choices, ans, keep_order=true)
answ = 2
radioq(choices, answ, keep_order=true)
```

Which condition on $a$, $b$, and $c$ will ensure a saddle point?
@@ -1987,8 +1994,8 @@ choices = [
L"That $a<0$ and $ac-b^2 > 0$",
L"That $ac-b^2 < 0$"
]
ans = 3
radioq(choices, ans, keep_order=true)
answ = 3
radioq(choices, answ, keep_order=true)
```


@@ -2016,8 +2023,8 @@ choices = [
raw"`` \langle 2x, y^2\rangle``",
raw"`` \langle x^2, 2y \rangle``"
]
ans = 1
radioq(choices, ans)
answ = 1
radioq(choices, answ)
```

Due to the form of the gradient of the constraint, finding when $\nabla{f} = \lambda \nabla{g}$ is the same as identifying when the ratio $|f_x/f_y|$ equals $1$. The following solves for this by checking each point on the constraint:
