update scalar_functions_applications.qmd

This commit is contained in:
Fang Liu 2023-06-24 11:58:52 +08:00
parent 09b28eafcc
commit 4107d20bfa


@ -27,7 +27,7 @@ This section presents different applications of scalar functions.
## Tangent planes, linearization
Consider the case $f:R^2 \rightarrow R$. We visualize $z=f(x,y)$ through a surface. At a point $(a, b)$, this surface, if $f$ is sufficiently smooth, can be approximated by a flat area, or a plane. For example, the Northern hemisphere of the earth, might be modeled simplistically by $z = \sqrt{R^2 - (x^2 + y^2)}$ for some $R$ and with the origin at the earth's core. The ancient view of a "flat earth," can be more generously seen as identifying this tangent plane with the sphere. More apt for current times, is the use of GPS coordinates to describe location. The difference between any two coordinates is technically a distance on a curved, nearly spherical, surface. But if the two points are reasonably closes (miles, not tens of miles) and accuracy isn't of utmost importance (i.e., not used for self-driving cars), then the distance can be found from the Euclidean distance formula, $\sqrt{(\Delta\text{latitude})^2 + \Delta\text{longitude})^2}$. That is, as if the points were on a plane, not a curved surface.
Consider the case $f:R^2 \rightarrow R$. We visualize $z=f(x,y)$ through a surface. At a point $(a, b)$, this surface, if $f$ is sufficiently smooth, can be approximated by a flat area, or a plane. For example, the Northern hemisphere of the earth might be modeled simplistically by $z = \sqrt{R^2 - (x^2 + y^2)}$ for some $R$ and with the origin at the earth's core. The ancient view of a "flat earth" can be more generously seen as identifying this tangent plane with the sphere. More apt for current times is the use of GPS coordinates to describe location. The difference between any two coordinates is technically a distance on a curved, nearly spherical, surface. But if the two points are reasonably close (miles, not tens of miles) and accuracy isn't of utmost importance (i.e., not used for self-driving cars), then the distance can be found from the Euclidean distance formula, $\sqrt{(\Delta\text{latitude})^2 + (\Delta\text{longitude})^2}$. That is, as if the points were on a plane, not a curved surface.
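To get a feel for the size of the error, the flat approximation can be compared with the exact great-circle angle on a sphere. This self-contained Python sketch (the coordinates are illustrative, chosen near the equator where the longitude scaling is negligible) computes both:

```python
from math import radians, degrees, sin, cos, asin, sqrt

def central_angle(lat1, lon1, lat2, lon2):
    # exact great-circle angle between two points on a sphere (haversine)
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp/2)**2 + cos(p1)*cos(p2)*sin(dl/2)**2
    return degrees(2*asin(sqrt(a)))

def planar(lat1, lon1, lat2, lon2):
    # the "flat" tangent-plane formula from the text
    return sqrt((lat2 - lat1)**2 + (lon2 - lon1)**2)

# two nearby points, about 0.07 degrees of arc apart
exact  = central_angle(0.00, 10.00, 0.05, 10.05)
approx = planar(0.00, 10.00, 0.05, 10.05)
```

For points a few hundredths of a degree apart the two agree to several decimal places; over tens of degrees the planar formula degrades.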
For the univariate case, the tangent line has many different uses. Here we see the tangent plane also does.
@ -39,7 +39,7 @@ For the univariate case, the tangent line has many different uses. Here we see t
The partial derivatives have the geometric view of being the derivative of the univariate functions $f(\vec\gamma_x(t))$ and $f(\vec\gamma_y(t))$, where $\vec\gamma_x$ moves just parallel to the $x$ axis (e.g., $\langle t + a, b\rangle$), and $\vec\gamma_y$ moves just parallel to the $y$ axis. The partial derivatives then are slopes of tangent lines to each curve. The tangent plane, should it exist, should match both slopes at a given point. With this observation, we can identify it.
Consider $f(\vec\gamma_x)$ at a point $(a,b)$. The path has a tangent vector, which has "slope" $\frac{\partial f}{\partial x}$. and in the direction of the $x$ axis, but not the $y$ axis, as does this vector: $\langle 1, 0, \frac{\partial f}{\partial x} \rangle$. Similarly, this vector $\langle 0, 1, \frac{\partial f}{\partial y} \rangle$ describes the tangent line to $f(\vec\gamma_y)$ a the point.
Consider $f(\vec\gamma_x)$ at a point $(a,b)$. The path has a tangent vector, which has "slope" $\frac{\partial f}{\partial x}$ and is in the direction of the $x$ axis, but not the $y$ axis, as does this vector: $\langle 1, 0, \frac{\partial f}{\partial x} \rangle$. Similarly, this vector $\langle 0, 1, \frac{\partial f}{\partial y} \rangle$ describes the tangent line to $f(\vec\gamma_y)$ at the point.
These two vectors will lie in the plane. The normal vector is found by their cross product:
@ -50,7 +50,7 @@ These two vectors will lie in the plane. The normal vector is found by their cro
n = [1, 0, f_x] × [0, 1, f_y]
```
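As a numeric sanity check of this construction (a self-contained Python sketch with an assumed sample surface $f(x,y) = x^2 + y^2$ at $(a,b) = (1,2)$): the cross product gives $\vec{n} = \langle -f_x, -f_y, 1\rangle$, and the resulting plane matches $f$ to first order nearby:

```python
def cross(u, v):
    # cross product of two 3-vectors
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

f  = lambda x, y: x**2 + y**2     # assumed sample surface
fx = lambda x, y: 2*x             # its partial derivatives, by hand
fy = lambda x, y: 2*y

a, b = 1.0, 2.0
n = cross([1, 0, fx(a, b)], [0, 1, fy(a, b)])   # normal: [-f_x, -f_y, 1]

def tangent_plane(x, y):
    # solve n . ((x, y, z) - (a, b, f(a,b))) = 0 for z
    return f(a, b) - (n[0]*(x - a) + n[1]*(y - b)) / n[2]

err = abs(tangent_plane(1.01, 2.01) - f(1.01, 2.01))
```

The error at a point a distance $0.01$ away is of order $10^{-4}$, the expected second-order behavior.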
Let $\vec{x} = \langle a, b, f(a,b)$. The tangent plane at $\vec{x}$ then is described by all vectors $\vec{v}$ with $\vec{n}\cdot(\vec{v} - \vec{x}) = 0$. Using $\vec{v} = \langle x,y,z\rangle$, we have:
Let $\vec{x} = \langle a, b, f(a,b) \rangle$. The tangent plane at $\vec{x}$ then is described by all vectors $\vec{v}$ with $\vec{n}\cdot(\vec{v} - \vec{x}) = 0$. Using $\vec{v} = \langle x,y,z\rangle$, we have:
$$
@ -121,7 +121,7 @@ arrow!(pt, [0,1,0], linestyle=:dash)
#### Alternate forms
The equation for the tangent plane is often expressed in a more explicit form. For $n=2$, if we set $dx = x-a$ and $dy=y-a$, then the equation for the plane becomes:
The equation for the tangent plane is often expressed in a more explicit form. For $n=2$, if we set $dx = x-a$ and $dy=y-b$, then the equation for the plane becomes:
$$
@ -306,12 +306,12 @@ $$
f(1,1) + \nabla{f}(1,1) \cdot \langle 0.1, -0.1\rangle,
$$
where $f(1,1) = \sin(\pi) = 0$ and $\nabla{f} = \langle y^2\cos(\pi x y^2), \cos(\pi x y^2) 2y\rangle = \cos(\pi x y^2)\langle x,2y\rangle$. So, the answer is:
where $f(1,1) = \sin(\pi) = 0$ and $\nabla{f} = \langle \pi y^2\cos(\pi x y^2), \cos(\pi x y^2) 2\pi x y\rangle = \pi y \cos(\pi x y^2)\langle y,2x\rangle$. So, the answer is:
$$
0 + \cos(\pi) \langle 1,2\rangle\cdot \langle 0.1, -0.1 \rangle =
(-1)(0.1 - 2(0.1)) = 0.1.
0 + \pi \cos(\pi) \langle 1,2\rangle\cdot \langle 0.1, -0.1 \rangle =
(-\pi)(0.1 - 2(0.1)) = 0.1\pi.
$$
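A quick numeric check of this estimate (a self-contained Python sketch): the true value $f(1.1, 0.9)$ should be close to $0.1\pi \approx 0.314$:

```python
from math import sin, pi

f = lambda x, y: sin(pi * x * y**2)
# f(1,1) = sin(pi) = 0 and grad f(1,1) = pi*cos(pi)*<1, 2> = <-pi, -2pi>
approx = f(1, 1) + (-pi)*0.1 + (-2*pi)*(-0.1)   # the linearization at (1.1, 0.9)
exact  = f(1.1, 0.9)
```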
##### Example
@ -593,7 +593,7 @@ To find $\partial{z}/\partial{x}$ and $\partial{z}/\partial{y}$ we have:
#| hold: true
@syms x, y, Z()
∂x = solve(diff(x^4 -x^3 + y^2 + Z(x,y)^2, x), diff(Z(x,y),x))
∂y = solve(diff(x^4 -x^3 + y^2 + Z(x,y)^2, x), diff(Z(x,y),y))
∂y = solve(diff(x^4 -x^3 + y^2 + Z(x,y)^2, y), diff(Z(x,y),y))
∂x, ∂y
```
@ -648,7 +648,7 @@ F(p) = find_zero_derivative(f, (0, pi/2), p)
plot(F, 0.01, 5) # p > 0
```
This problem does not have a readily expressed value for $x^*$, but when $p \approx 0$ we should get similar behavior to the intersection of $y=px$ and $y=\pi/2 - x$ for $x^*$, or $x^* \approx \pi/(2(1-p))$ which has derivative of $-\pi/2$ at $p=0$, matching the above graph. For *large* $p$, the problem looks like the intersection of the line $y=1$ with $y=px$ or $x^* \approx 1/p$ which has derivative that goes to $0$ as $p$ goes to infinity, again matching this graph.
This problem does not have a readily expressed value for $x^*$, but when $p \approx 0$ we should get similar behavior to the intersection of $y=px$ and $y=\pi/2 - x$ for $x^*$, or $x^* \approx \pi/(2(1+p))$ which has derivative of $-\pi/2$ at $p=0$, matching the above graph. For *large* $p$, the problem looks like the intersection of the line $y=1$ with $y=px$ or $x^* \approx 1/p$ which has derivative that goes to $0$ as $p$ goes to infinity, again matching this graph.
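The definition of `f` is not shown in this hunk; the limiting behaviors described are those of the critical-point equation $\cos(x) = px$, so, under that assumption, a bisection sketch in Python can confirm both limits:

```python
from math import cos, pi

def xstar(p, lo=0.0, hi=pi/2, n=60):
    # bisection for cos(x) - p*x = 0 on (0, pi/2): positive at lo, negative at hi
    for _ in range(n):
        mid = (lo + hi) / 2
        if cos(mid) - p*mid > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

small_p, big_p = 0.01, 100.0
# near p = 0, x* should track pi/(2(1+p)); for large p, x* should track 1/p
```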
## Optimization
@ -707,7 +707,7 @@ $$
\nabla{f} = -\frac{2}{5}e^{-(x^2 + y^2)/5} (5\sin(x^2 + y^2) + \cos(x^2 + y^2)) \langle x, y \rangle.
$$
This is zero at the origin, or when $5\sin(x^2 + y^2) = -\cos(x^2 + y^2)$. The latter is $0$ on circles of radius $r$ where $5\sin(r) = \cos(r)$ or $r = \tan^{-1}(-1/5) + k\pi$ for $k = 1, 2, \dots$. This matches the graph, where the extrema are on circles by symmetry. Imagine now, picking a value where the function takes a maximum and adding the tangent plane. As the gradient is $\vec{0}$, this will be flat. The point at the origin will have the surface fall off from the tangent plane in each direction, whereas the other points, will have a circle where the tangent plane rests on the surface, but otherwise will fall off from the tangent plane. Characterizing this "falling off" will help to identify local maxima that are distinct.
This is zero at the origin, or when $5\sin(x^2 + y^2) = -\cos(x^2 + y^2)$. The latter holds on circles $x^2 + y^2 = r$ where $5\sin(r) = -\cos(r)$, or $r = \tan^{-1}(-1/5) + k\pi$ for $k = 1, 2, \dots$. This matches the graph, where the extrema are on circles by symmetry. Imagine now, picking a value where the function takes a maximum and adding the tangent plane. As the gradient is $\vec{0}$, this will be flat. The point at the origin will have the surface fall off from the tangent plane in each direction, whereas the other points will have a circle where the tangent plane rests on the surface, but otherwise will fall off from the tangent plane. Characterizing this "falling off" will help to identify local maxima that are distinct.
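The displayed gradient is that of $f(x,y) = e^{-(x^2+y^2)/5}\cos(x^2+y^2)$ (inferred here from the formula; the function's definition sits outside this hunk). A finite-difference check in Python confirms the gradient vanishes on the first such circle:

```python
from math import exp, cos, atan, pi, sqrt

f = lambda x, y: exp(-(x**2 + y**2)/5) * cos(x**2 + y**2)

u = atan(-1/5) + pi        # first positive solution of 5*sin(u) = -cos(u)
x0, y0 = sqrt(u), 0.0      # a point on the circle x^2 + y^2 = u

h = 1e-6                   # central finite differences for the gradient
gx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2*h)
gy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2*h)
```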
---
@ -825,7 +825,7 @@ There are $3$ real critical points. To classify them we need the sign of $f_{xx}
```{julia}
Hⱼ = sympy.hessian(fⱼ(x,y), (x,y))
function classify(H, pt)
Ha = subs.(H, x .=> pt[1], y .=> pt[2])
Ha = subs.(H, x => pt[1], y => pt[2])
(det=det(Ha), f_xx=Ha[1,1])
end
[classify(Hⱼ, pt) for pt in ptsⱼ]
@ -906,7 +906,8 @@ surface(xs, ys, hₗ)
ts = cpsₗ # 2pi/3 and 4pi/3 by above
xs, ys = cos.(ts), sin.(ts)
scatter!(xs, ys, fₗ)
zs = fₗ.(xs, ys)
scatter3d!(xs, ys, zs)
```
A contour plot also shows that one - and only one - extremum happens on the interior:
@ -961,7 +962,7 @@ Hₛ = subs.(hessian(exₛ, [x,y]), x=>xstarₛ[x], y=>xstarₛ[y])
As it occurs at $(\bar{x}, \bar{y})$ where $\bar{x} = (x_1 + x_2 + x_3)/3$ and $\bar{y} = (y_1+y_2+y_3)/3$ - the averages of the three values - the critical point is an interior point of the triangle.
As mentioned by Strang, the real problem is to minimize $d_1 + d_2 + d_3$. A direct approach with `SymPy` - just replacing `d2` above with the square root` fails. Consider instead the gradient of $d_1$, say. To avoid square roots, this is taken implicitly from $d_1^2$:
As mentioned by Strang, the real problem is to minimize $d_1 + d_2 + d_3$. A direct approach with `SymPy` - just replacing `d2` above with the square root - fails. Consider instead the gradient of $d_1$, say. To avoid square roots, this is taken implicitly from $d_1^2$:
$$
@ -1070,7 +1071,7 @@ Another might be the vertical squared distance to the line:
\begin{align*}
d2(\alpha, \beta) &= (y_1 - l(x_1))^2 + (y_2 - l(x_2))^2 + (y_3 - l(x_3))^2 \\
&= (y1 - (\alpha + \beta x_1))^2 + (y3 - (\alpha + \beta x_3))^2 + (y3 - (\alpha + \beta x_3))^2
&= (y_1 - (\alpha + \beta x_1))^2 + (y_2 - (\alpha + \beta x_2))^2 + (y_3 - (\alpha + \beta x_3))^2
\end{align*}
Another might be the *shortest* distance to the line:
@ -1115,12 +1116,12 @@ With this observation, the formulas can be re-expressed through:
$$
\beta = \frac{\sum{x_i - \bar{x}}(y_i - \bar{y})}{\sum(x_i-\bar{x})^2},
\beta = \frac{\sum{(x_i - \bar{x})(y_i - \bar{y})}}{\sum(x_i-\bar{x})^2},
\quad
\alpha = \bar{y} - \beta \bar{x}.
$$
Relative to the centered values, this may be viewed as a line through $(\bar{x}, \bar{y})$ with slope given by $(\vec{x}-\bar{x})\cdot(\vec{y}-\bar{y}) / \|\vec{x}-\bar{x}\|$.
Relative to the centered values, this may be viewed as a line through $(\bar{x}, \bar{y})$ with slope given by $(\vec{x}-\bar{x})\cdot(\vec{y}-\bar{y}) / \|\vec{x}-\bar{x}\|^2$.
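These closed-form values can be checked against the normal equations - the residuals should be orthogonal to both the constant vector and $\vec{x}$. A self-contained Python sketch with illustrative data:

```python
xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 4.0]    # illustrative data
n = len(xs)
xbar, ybar = sum(xs)/n, sum(ys)/n

# the centered formulas from the text
beta  = sum((x - xbar)*(y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar)**2 for x in xs)
alpha = ybar - beta*xbar

# normal equations: residuals orthogonal to the constant vector and to x
res = [y - (alpha + beta*x) for x, y in zip(xs, ys)]
```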
As an example, if the points are $(1,1), (2,3), (5,8)$ we get:
@ -1153,7 +1154,7 @@ where $\gamma$ is some scaling factor for the gradient. The above quantifies the
Let $\Delta_x =\vec{x}_{n}- \vec{x}_{n-1}$ and $\Delta_y = \nabla{f}(\vec{x}_{n}) - \nabla{f}(\vec{x}_{n-1})$. A variant of the Barzilai-Borwein method is to take $\gamma_n = |\Delta_x \cdot \Delta_y| / (\Delta_y \cdot \Delta_y)$.
To illustrate, take $f(x,y) = -(x^2 + y^2) \cdot e^{-(2x^2 + y^2)}$ and a starting point $\langle 1, 1 \rangle$. We have, starting with $\gamma_0 = 1$ there are $5$ steps taken:
To illustrate, take $f(x,y) = - e^{-((x-1)^2 + 2(y-1/2)^2)}$ and a starting point $\langle 0, 0 \rangle$. Starting with $\gamma_0 = 1$, there are $5$ steps taken:
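The update rule just described can be sketched in a few lines of self-contained Python (this is not the document's implementation, just the Barzilai-Borwein recipe above with a small-gradient stopping test):

```python
from math import exp, hypot

f    = lambda x, y: -exp(-((x - 1)**2 + 2*(y - 1/2)**2))
grad = lambda x, y: [-f(x, y)*2*(x - 1), -f(x, y)*4*(y - 1/2)]  # by hand

x = [0.0, 0.0]                   # starting point from the text
x_prev = g_prev = None
gamma = 1.0                      # gamma_0 = 1
for _ in range(50):
    g = grad(*x)
    if hypot(*g) < 1e-12:        # gradient (nearly) zero: done
        break
    if x_prev is not None:       # Barzilai-Borwein step size
        dx = [a - b for a, b in zip(x, x_prev)]
        dy = [a - b for a, b in zip(g, g_prev)]
        dydy = sum(a*a for a in dy)
        if dydy > 0:
            gamma = abs(sum(a*b for a, b in zip(dx, dy))) / dydy
    x_prev, g_prev = x, g
    x = [a - gamma*b for a, b in zip(x, g)]
```

The iterates head to the minimizer $(1, 1/2)$ of this basin.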
```{julia}
@ -1201,9 +1202,9 @@ end
offset = 0
us = vs = range(-1, 2, length=100)
surface_contour(vs, vs, f₂, offset=offset)
surface_contour(us, vs, f₂, offset=offset)
pts = [[pt..., offset] for pt in xs₂]
scatter!(unzip(pts)...)
scatter3d!(unzip(pts)...)
plot!(unzip(pts)..., linewidth=3)
```
@ -1221,7 +1222,7 @@ and had a step expressible in terms of the inverse of $M$ as $M^{-1} [g; h]$. In
$$
\vec{x}_{n+1} = \vec{x}_n - [H_f(\vec{x}_n]^{-1} \nabla(f)(\vec{x}_n).
\vec{x}_{n+1} = \vec{x}_n - [H_f(\vec{x}_n)]^{-1} \nabla(f)(\vec{x}_n).
$$
The Wikipedia page states that, where applicable, Newton's method converges much faster toward a local maximum or minimum than gradient descent.
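For a quadratic objective the Hessian is constant and a single Newton step lands exactly on the critical point, which makes for an easy check. A self-contained Python sketch with an assumed quadratic $f = (x-2)^2 + (x-2)(y-3)/2 + (y-3)^2$:

```python
# assumed quadratic: f = (x-2)^2 + (x-2)(y-3)/2 + (y-3)^2
fx = lambda x, y: 2*(x - 2) + 0.5*(y - 3)   # gradient components, by hand
fy = lambda x, y: 0.5*(x - 2) + 2*(y - 3)
H  = [[2.0, 0.5], [0.5, 2.0]]               # constant Hessian

x0, y0 = 0.0, 0.0
g = [fx(x0, y0), fy(x0, y0)]
det = H[0][0]*H[1][1] - H[0][1]*H[1][0]
# Newton step: subtract H^{-1} g (2x2 inverse written out)
s = [( H[1][1]*g[0] - H[0][1]*g[1]) / det,
     (-H[1][0]*g[0] + H[0][0]*g[1]) / det]
x1, y1 = x0 - s[0], y0 - s[1]
```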
@ -1252,7 +1253,7 @@ plot(Ps, Pc, layout=2) # combine plots
As we will solve for the critical points numerically, we consider the contour plot as well, as it shows better where the critical points are.
Over this region we see clearly 5 peaks or valleys: near $(0, 1.5)$, near $(1.2, 0)$, near $(0.2, -1.8)$, near $(-0.5, -0.8)$, and near $(-1.2, 0.2)$. To classify the $5$ critical points we need to first identify them, then compute the Hessian, and then, possibly compute $f_xx$ at the point. Here we do so for one of them using a numeric approach.
Over this region we see clearly 5 peaks or valleys: near $(0, 1.5)$, near $(1.2, 0)$, near $(0.2, -1.8)$, near $(-0.5, -0.8)$, and near $(-1.2, 0.2)$. To classify the $5$ critical points we need to first identify them, then compute the Hessian, and then, possibly compute $f_{xx}$ at the point. Here we do so for one of them using a numeric approach.
For concreteness, consider the peak or valley near $(0,1.5)$. We use Newton's method to numerically compute the critical point. The Newton step, specialized here is:
@ -1338,7 +1339,7 @@ end
p
```
From the plot we see the key property that $g$ is orthogonal to the level curve.
From the plot we see the key property that $\nabla g$ is orthogonal to the level curve.
Now consider $f(x,y)$, a function we wish to maximize. The gradient points in the direction of *greatest* increase, provided $f$ is smooth. We are interested in the value of this gradient along the level curve of $g$. Consider this figure representing a portion of the level curve, its tangent, normal, the gradient of $f$, and the contours of $f$:
@ -1432,7 +1433,7 @@ We consider [again](../derivatives/optimization.html) the problem of maximizin
$$
A(x,y) = xy, \quad P(x,y) = 2x + 2y = 25.
A(x,y) = xy, \quad P(x,y) = 2x + 2y = 20.
$$
We see $\nabla{A} = \lambda \nabla{P}$, or $\langle y, x \rangle = \lambda \langle 2, 2\rangle$. So the solution has $x = y$, and from the constraint, $x = y = 5$.
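This solution is easy to verify numerically - at $x=y=5$ the two gradients are proportional (with $\lambda = 5/2$) and no nearby feasible point does better. A self-contained Python sketch:

```python
x = y = 5.0
gradA = (y, x)          # gradient of the area A = x*y
gradP = (2.0, 2.0)      # gradient of the perimeter P = 2x + 2y
lam = gradA[0] / gradP[0]

# brute-force comparison along the constraint y = 10 - x
best = max(t*(10 - t) for t in [i/100 for i in range(1001)])
```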
@ -1637,14 +1638,14 @@ For Dido's problem, $f(x,y,y') = y$ and $g(x, y, y') = \sqrt{1 + y'^2}$, so $L =
$$
(y - \lambda\sqrt{1 + y'^2}) - \lambda y' \frac{2y'}{2\sqrt{1 + y'^2}} = C.
(y - \lambda\sqrt{1 + y'^2}) + \lambda y' \frac{2y'}{2\sqrt{1 + y'^2}} = C.
$$
By multiplying through by the denominator and squaring to remove the square root, a quadratic equation in $y'^2$ can be found. This can be solved to give:
$$
y' = \frac{dy}{dx} = \sqrt{\frac{\lambda^2 -(y + C)^2}{(y+C)^2}}.
y' = \frac{dy}{dx} = \sqrt{\frac{\lambda^2 -(y - C)^2}{(y-C)^2}}.
$$
Here is a snippet of `SymPy` code to verify the above:
@ -1653,23 +1654,23 @@ Here is a snippet of `SymPy` code to verify the above:
```{julia}
#| hold: true
@vars y λ C
ex = Eq(-λ*y^2/sqrt(1 + y^2) + λ*sqrt(1 + y^2), C + y)
Δ = sqrt(1 + y^2) / (C+y)
ex = Eq(-λ*y^2/sqrt(1 + y^2) + λ*sqrt(1 + y^2), y - C)
Δ = sqrt(1 + y^2) / (y - C)
ex1 = Eq(simplify(ex.lhs()*Δ), simplify(ex.rhs() * Δ))
ex2 = Eq(ex1.lhs()^2 - 1, simplify(ex1.rhs()^2) - 1)
```
Now $y'$ can be integrated using the substitution $y + C = \lambda \cos\theta$ to give: $-\lambda\int\cos\theta d\theta = x + D$, $D$ some constant. That is:
Now $y'$ can be integrated using the substitution $y - C = \lambda \cos\theta$ to give: $-\lambda\int\cos\theta d\theta = x + D$, $D$ some constant. That is:
\begin{align*}
x + D &= - \lambda \sin\theta\\
y + C &= \lambda\cos\theta.
y - C &= \lambda\cos\theta.
\end{align*}
Squaring gives the equation of a circle: $(x +D)^2 + (y+C)^2 = \lambda^2$.
Squaring gives the equation of a circle: $(x +D)^2 + (y-C)^2 = \lambda^2$.
We center and *rescale* the problem so that $x_0 = -1, x_1 = 1$. Then $L > 2$ as otherwise the rope is too short. From here, we describe the radius and center of the circle.
@ -1680,16 +1681,16 @@ We have $y=0$ at $x=1$ and $-1$ giving:
\begin{align*}
(-1 + D)^2 + (0 + C)^2 &= \lambda^2\\
(+1 + D)^2 + (0 + C)^2 &= \lambda^2.
(-1 + D)^2 + (0 - C)^2 &= \lambda^2\\
(+1 + D)^2 + (0 - C)^2 &= \lambda^2.
\end{align*}
Squaring out and solving gives $D=0$, $1 + C^2 = \lambda^2$. That is, an arc of circle with radius $1+C^2$ and centered at $(0, -C)$.
Squaring out and solving gives $D=0$, $1 + C^2 = \lambda^2$. That is, an arc of circle with radius $\sqrt{1+C^2}$ and centered at $(0, C)$.
$$
x^2 + (y + C)^2 = 1 + C^2.
x^2 + (y - C)^2 = 1 + C^2.
$$
Now to identify $C$ in terms of $L$. $L$ is the length of arc of circle of radius $r =\sqrt{1 + C^2}$ and angle $2\theta$, so $L = 2r\theta$. But using the boundary conditions in the equations for $x$ and $y$ gives $\tan\theta = 1/C$, so $L = 2\sqrt{1 + C^2}\tan^{-1}(1/C)$ which can be solved for $C$ provided $L > 2$.
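Solving $L = 2\sqrt{1+C^2}\tan^{-1}(1/C)$ for $C$ must be done numerically. A bisection sketch in Python (valid for $2 < L < \pi$, where this branch - at most a semicircle - applies):

```python
from math import sqrt, atan

def arclen(C):
    # L = 2*sqrt(1 + C^2)*atan(1/C), the arc of x^2 + (y - C)^2 = 1 + C^2
    return 2*sqrt(1 + C**2)*atan(1/C)

def solve_C(L, lo=1e-6, hi=100.0, n=80):
    # arclen decreases from pi (C -> 0) toward 2 (C -> infinity); bisect
    for _ in range(n):
        mid = (lo + hi) / 2
        if arclen(mid) > L:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

C = solve_C(2.5)   # e.g., a rope of length 2.5 over the span [-1, 1]
```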
@ -1721,7 +1722,7 @@ We have $f(x,y,z) = \text{distance}(\vec{x},\vec{0}) = \sqrt{x^2 + y^2 + z^2}$,
$$
\langle 2x, 2y ,2x \rangle = \lambda_1\langle 2x, 2y, -2z\rangle + \lambda_2 \langle 1, 0, -2 \rangle.
\langle 2x, 2y ,2z \rangle = \lambda_1\langle 2x, 2y, -2z\rangle + \lambda_2 \langle 1, 0, -2 \rangle.
$$
Here we use `SymPy`:
@ -1765,7 +1766,7 @@ Taylor's theorem for a univariate function states that if $f$ has $k+1$ derivati
$$
f(x) = \sum_{j=0}^k \frac{f^{j}(a)}{j!} (x-a)^k + R_k(x),
f(x) = \sum_{j=0}^k \frac{f^{(j)}(a)}{j!} (x-a)^j + R_k(x),
$$
where $R_k(x) = \frac{f^{(k+1)}(\xi)}{(k+1)!}(x-a)^{k+1}$ for some $\xi$ between $a$ and $x$.
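A numeric illustration of the theorem (a self-contained Python sketch using $f = e^x$, for which every derivative is $e^x$): the degree-$3$ polynomial's error at $x = 1/2$ is positive and within the stated bound:

```python
from math import exp, factorial

a, x, k = 0.0, 0.5, 3
# f = exp: every derivative of f is exp, so f^(j)(a) = exp(a)
poly  = sum(exp(a)/factorial(j)*(x - a)**j for j in range(k + 1))
R     = exp(x) - poly
bound = exp(x)/factorial(k + 1)*(x - a)**(k + 1)   # max of f^(k+1) on [a, x]
```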
@ -1965,9 +1966,9 @@ Finally, to see how compact the notation is, suppose $f:R^3 \rightarrow R$, w
```{julia}
#| hold: true
@syms F() a[1:3] dx[1:3]
@syms 𝐅() a[1:3] dx[1:3]
sum(partial(F(a...), α, a) / factorial(α) * dx^α for k in 0:3 for α in MultiIndex.(MultiIndices(3, k))) # 3rd order
sum(partial(𝐅(a...), α, a) / factorial(α) * dx^α for k in 0:3 for α in MultiIndex.(MultiIndices(3, k))) # 3rd order
```
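The multi-index helpers used above can be sketched in Python (hypothetical stand-ins for the `MultiIndex`/`MultiIndices` helpers; here $\alpha! = \alpha_1!\alpha_2!\cdots\alpha_n!$). The count of order-$k$ indices in $n$ variables is $\binom{n+k-1}{k}$, and the coefficients sum per the multinomial theorem:

```python
from itertools import product
from math import factorial

def multi_indices(n, k):
    # all n-tuples of nonnegative integers alpha with |alpha| = k
    return [a for a in product(range(k + 1), repeat=n) if sum(a) == k]

def mfactorial(alpha):
    # alpha! = alpha_1! * alpha_2! * ... * alpha_n!
    out = 1
    for a in alpha:
        out *= factorial(a)
    return out

m = multi_indices(3, 3)   # order-3 indices in 3 variables
```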
## Questions
@ -2076,7 +2077,7 @@ val = det(ForwardDiff.hessian(f, [-1/3, -1/3]))
numericq(val)
```
Which is true of $f$ at $(-1/3, 1/3)$:
Which is true of $f$ at $(-1/3, -1/3)$:
```{julia}
@ -2092,7 +2093,7 @@ answ = 2
radioq(choices, answ, keep_order=true)
```
##### Question
###### Question
([Knill](http://www.math.harvard.edu/~knill/teaching/summer2018/handouts/week4.pdf)) Let the Tutte polynomial be $f(x,y) = x + 2x^2 + x^3 + y + 2xy + y^2$.
@ -2146,7 +2147,7 @@ gradf = gradient(f(x,y), [x,y])
sympy.hessian(f(x,y), [x,y])
```
Which is true of $f$ at $(-1/3, 1/3)$:
Which is true of $f$ at $(-2/3, 1/6)$:
```{julia}
@ -2208,8 +2209,8 @@ Is this the Hessian of $f$?
$$
\begin{bmatrix}
2a & 2b\\
2b & 2c
2a & b\\
b & 2c
\end{bmatrix}
$$
@ -2235,7 +2236,7 @@ $$
yesnoq(false)
```
Explain why $ac - b^2$ is of any interest here:
Explain why $4ac - b^2$ is of any interest here:
```{julia}
@ -2256,9 +2257,9 @@ Which condition on $a$, $b$, and $c$ will ensure a *local maximum*:
#| hold: true
#| echo: false
choices = [
L"That $a>0$ and $ac-b^2 > 0$",
L"That $a<0$ and $ac-b^2 > 0$",
L"That $ac-b^2 < 0$"
L"That $a>0$ and $4ac-b^2 > 0$",
L"That $a<0$ and $4ac-b^2 > 0$",
L"That $4ac-b^2 < 0$"
]
answ = 2
radioq(choices, answ, keep_order=true)
@ -2271,9 +2272,9 @@ Which condition on $a$, $b$, and $c$ will ensure a saddle point?
#| hold: true
#| echo: false
choices = [
L"That $a>0$ and $ac-b^2 > 0$",
L"That $a<0$ and $ac-b^2 > 0$",
L"That $ac-b^2 < 0$"
L"That $a>0$ and $4ac-b^2 > 0$",
L"That $a<0$ and $4ac-b^2 > 0$",
L"That $4ac-b^2 < 0$"
]
answ = 3
radioq(choices, answ, keep_order=true)
@ -2320,7 +2321,7 @@ Due to the form of the gradient of the constraint, finding when $\nabla{f} = \la
#| hold: true
f(x,y) = exp(-x^2-y^2) * (2x^2 + y^2)
f(v) = f(v...)
r(t) = 3*[cos(t), sin(t)]
r(t) = sqrt(3)*[cos(t), sin(t)]
rat(x) = abs(x[1]/x[2]) - 1
fn = rat ∘ ∇(f) ∘ r
ts = fzeros(fn, 0, 2pi)
@ -2334,7 +2335,7 @@ Using these points, what is the largest value on the boundary?
#| echo: false
f(x,y) = exp(-x^2-y^2) * (2x^2 + y^2)
f(v) = f(v...)
r(t) = 3*[cos(t), sin(t)]
r(t) = sqrt(3)*[cos(t), sin(t)]
rat(x) = abs(x[1]/x[2]) - 1
fn = rat ∘ ∇(f) ∘ r
ts = fzeros(fn, 0, 2pi)