typos
@@ -118,7 +118,7 @@ Adding many arrows this way would be inefficient.
 
 ### Setting a viewing angle for 3D plots
 
-For 3D plots, the viewing angle can make the difference in visualizing the key features. In `Plots`, some backends allow the viewing angle to be set with the mouse by clicking and dragging. Not all do. For such, the `camera` argument is used, as in `camera(azimuthal, elevation)` where the angles are given in degrees. If the $x$-$y$-$z$ coorinates are given, then `elevation` or *inclination*, is the angle between the $z$ axis and the $x-y$ plane (so `90` is a top view) and `azimuthal` is the angle in the $x-y$ plane from the $x$ axes.
+For 3D plots, the viewing angle can make the difference in visualizing the key features. In `Plots`, some backends allow the viewing angle to be set with the mouse by clicking and dragging. Not all do. For such, the `camera` argument is used, as in `camera(azimuthal, elevation)` where the angles are given in degrees. If the $x$-$y$-$z$ coordinates are given, then `elevation`, or *inclination*, is the angle between the $z$ axis and the $x-y$ plane (so `90` is a top view) and `azimuthal` is the angle in the $x-y$ plane from the $x$ axis.
 
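A minimal sketch of the `camera` argument described above; the surface and angles here are stand-ins, not from the text, and the `gr` backend is assumed to be one that honors `camera`:

```{julia}
using Plots
gr()   # assumed backend; `camera` is respected here

f(x, y) = sin(x) * cos(y)             # a stand-in surface, for illustration only
xs = ys = range(-pi, pi, length=50)

# azimuthal angle of 30 degrees, elevation of 60 degrees
surface(xs, ys, f; camera=(30, 60))
```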
 
 ## Visualizing functions from $R^2 \rightarrow R$
 
@@ -739,7 +739,7 @@ To go from a function that takes a point to a function of that point, we have th
 
 ```{julia}
 #| eval: false
-FowardDiff.gradient(f::Function) = x -> ForwardDiff.gradient(f, x)
+ForwardDiff.gradient(f::Function) = x -> ForwardDiff.gradient(f, x)
 ```
 
 It works as follows, where a vector of values is passed in for the point in question:
 
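The corrected, curried form can be sanity-checked with a concrete function; `f` below is a stand-in, not from the text:

```{julia}
using ForwardDiff

f(v) = v[1]^2 + 2v[2]^2                 # f(x, y) = x² + 2y²
∇f = v -> ForwardDiff.gradient(f, v)    # same idea as the curried definition above
∇f([1.0, 2.0])                          # [2x, 4y] at (1, 2) is [2.0, 8.0]
```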
@@ -781,7 +781,7 @@ partial_x(f, y) = x -> ForwardDiff.derivative(u -> f(u,y), x)
 
 :::{.callout-note}
 ## Note
-For vector-valued functions, we can overide the syntax `'` using `Base.adjoint`, as `'` is treated as a postfix operator in `Julia` for the `adjoint` operation. The symbol `\\nabla` is also available in `Julia`, but it is not an operator, so can't be used as mathematically written `∇f` (this could be used as a name though). In `CalculusWithJulia` a definition is made so essentially `∇(f) = x -> ForwardDiff.gradient(f, x)`. It does require parentheses to be called, as in `∇(f)`.
+For vector-valued functions, we can override the syntax `'` using `Base.adjoint`, as `'` is treated as a postfix operator in `Julia` for the `adjoint` operation. The symbol `\\nabla` is also available in `Julia`, but it is not an operator, so can't be used as mathematically written `∇f` (this could be used as a name though). In `CalculusWithJulia` a definition is made so essentially `∇(f) = x -> ForwardDiff.gradient(f, x)`. It does require parentheses to be called, as in `∇(f)`.
 :::
 
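A sketch of the override the note describes; redefining `Base.adjoint` for every `Function` is type piracy, so this is for illustration only:

```{julia}
using ForwardDiff

# After this definition, f' is the gradient function for a scalar-valued f
Base.adjoint(f::Function) = x -> ForwardDiff.gradient(f, x)

f(v) = v[1]^2 + v[2]^2
f'([1.0, 2.0])     # gradient [2x, 2y] at (1, 2) is [2.0, 4.0]
```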
@@ -1471,7 +1471,7 @@ We see that `diff(ex, x, y)` and `diff(ex, y, x)` are identical. This is not a c
 
 
 
-For higher order mixed partials, something similar to Schwarz's theorem still holds. Say $f:R^n \rightarrow R$ is $C^k$ if $f$ is continuous and all partial derivatives of order $j \leq k$ are continous. If $f$ is $C^k$, and $k=k_1+k_2+\cdots+k_n$ ($k_i \geq 0$) then
+For higher order mixed partials, something similar to Schwarz's theorem still holds. Say $f:R^n \rightarrow R$ is $C^k$ if $f$ is continuous and all partial derivatives of order $j \leq k$ are continuous. If $f$ is $C^k$, and $k=k_1+k_2+\cdots+k_n$ ($k_i \geq 0$) then
 
 
 $$
@@ -1507,7 +1507,7 @@ hessian(ex, (x, y))
 When the mixed partials are continuous, this will be a symmetric matrix. The Hessian matrix plays the role of the second derivative in the multivariate Taylor theorem.
 
 
-For numeric use, `FowardDiff` has a `hessian` function. It expects a scalar function and a point and returns the Hessian matrix. We have for $f(x,y) = e^x\cos(y)$ at the point $(1,2)$, the Hessian matrix is:
+For numeric use, `ForwardDiff` has a `hessian` function. It expects a scalar function and a point and returns the Hessian matrix. We have for $f(x,y) = e^x\cos(y)$ at the point $(1,2)$, the Hessian matrix is:
 
 
 ```{julia}
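A sketch of that computation, assuming `ForwardDiff` is loaded and writing $f$ as a function of a point:

```{julia}
using ForwardDiff

f(v) = exp(v[1]) * cos(v[2])            # f(x, y) = eˣ cos(y)
H = ForwardDiff.hessian(f, [1.0, 2.0])  # 2×2 matrix of second partials at (1, 2)
H ≈ H'                                  # symmetric, as the mixed partials are continuous
```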
@@ -320,7 +320,7 @@ $$
 A [piriform](http://www.math.harvard.edu/~knill/teaching/summer2011/handouts/32-linearization.pdf) is described by the quartic surface $f(x,y,z) = x^4 -x^3 + y^2+z^2 = 0$. Find the tangent line at the point $\langle 2,2,2 \rangle$.
 
 
-Here, $\nabla{f}$ describes a *normal* to the tangent plane. The description of a plane may be described by $\hat{N}\cdot(\vec{x} - \vec{x}_0) = 0$, where $\vec{x}_0$ is identified with a point on the plane (the point $(2,2,2)$ here). With this, we have $\hat{N}\cdot\vec{x} = ax + by + cz = \hat{N}\cdot\langle 2,2,2\rangle = 2(a+b+c)$. For ths problem, $\nabla{f}(2,2,2) = \langle a, b, c\rangle$ is given by:
+Here, $\nabla{f}$ describes a *normal* to the tangent plane. The description of a plane may be described by $\hat{N}\cdot(\vec{x} - \vec{x}_0) = 0$, where $\vec{x}_0$ is identified with a point on the plane (the point $(2,2,2)$ here). With this, we have $\hat{N}\cdot\vec{x} = ax + by + cz = \hat{N}\cdot\langle 2,2,2\rangle = 2(a+b+c)$. For this problem, $\nabla{f}(2,2,2) = \langle a, b, c\rangle$ is given by:
 
 
 ```{julia}
@@ -1404,7 +1404,7 @@ ys = range(-0.5, 0.5, length=100)
 contour!(xs, ys, f, levels = [.7, .85, 1, 1.15, 1.3])
 ```
 
-We can still identify the tangent and normal directions. What is different about this point is that local movement on the constraint curve is also local movement on the contour line of $f$, so $f$ doesn't increase or decrease here, as it would if this point were an extrema along the contraint. The key to seeing this is the contour lines of $f$ are *tangent* to the constraint. The respective gradients are *orthogonal* to their tangent lines, and in dimension $2$, this implies they are parallel to each other.
+We can still identify the tangent and normal directions. What is different about this point is that local movement on the constraint curve is also local movement on the contour line of $f$, so $f$ doesn't increase or decrease here, as it would if this point were an extremum along the constraint. The key to seeing this is the contour lines of $f$ are *tangent* to the constraint. The respective gradients are *orthogonal* to their tangent lines, and in dimension $2$, this implies they are parallel to each other.
 
 
 > *The method of Lagrange multipliers*: To optimize $f(x,y)$ subject to a constraint $g(x,y) = k$ we solve for all *simultaneous* solutions to
@@ -1552,7 +1552,7 @@ Following Lagrange, we generalize the problem to the following: maximize $\int_{
 The starting point is a *perturbation*: $\hat{y}(x) = y(x) + \epsilon_1 \eta_1(x) + \epsilon_2 \eta_2(x)$. There are two perturbation terms, were only one term added, then the perturbation may make $\hat{y}$ not satisfy the constraint, the second term is used to ensure the constraint is not violated. If $\hat{y}$ is to be a possible solution to our problem, we would want $\hat{y}(x_0) = \hat{y}(x_1) = 0$, as it does for $y(x)$, so we *assume* $\eta_1$ and $\eta_2$ satisfy this boundary condition.
 
 
-With this notation, and fixing $y$ we can re-express the equations in terms ot $\epsilon_1$ and $\epsilon_2$:
+With this notation, and fixing $y$ we can re-express the equations in terms of $\epsilon_1$ and $\epsilon_2$:
 
 
 
@@ -1699,7 +1699,7 @@ Now to identify $C$ in terms of $L$. $L$ is the length of arc of circle of radiu
 ##### Example: more constraints
 
 
-Consider now the case of maximizing $f(x,y,z)$ subject to $g(x,y,z)=c$ and $h(x,y,z) = d$. Can something similar be said to characterize potential values for this to occur? Trying to describe where $g(x,y,z) = c$ and $h(x,y,z)=d$ in general will prove difficult. The easy case would be it the two equations were linear, in which case they would describe planes. Two non-parallel planes would intersect in a line. If the general case, imagine the surfaces locally replaced by their tangent planes, then their intersection would be a line, and this line would point in along the curve given by the intersection of the surfaces formed by the contraints. This line is similar to the tangent line in the $2$-variable case. Now if $\nabla{f}$, which points in the direction of greatest increase of $f$, had a non-zero projection onto this line, then moving the point in that direction along the line would increase $f$ and still leave the point following the contraints. That is, if there is a non-zero directional derivative the point is not a maximum.
+Consider now the case of maximizing $f(x,y,z)$ subject to $g(x,y,z)=c$ and $h(x,y,z) = d$. Can something similar be said to characterize potential values for this to occur? Trying to describe where $g(x,y,z) = c$ and $h(x,y,z)=d$ in general will prove difficult. The easy case would be if the two equations were linear, in which case they would describe planes. Two non-parallel planes would intersect in a line. In the general case, imagine the surfaces locally replaced by their tangent planes, then their intersection would be a line, and this line would point along the curve given by the intersection of the surfaces formed by the constraints. This line is similar to the tangent line in the $2$-variable case. Now if $\nabla{f}$, which points in the direction of greatest increase of $f$, had a non-zero projection onto this line, then moving the point in that direction along the line would increase $f$ and still leave the point following the constraints. That is, if there is a non-zero directional derivative the point is not a maximum.
 
 
 The tangent planes are *orthogonal* to the vectors $\nabla{g}$ and $\nabla{h}$, so in this case parallel to $\nabla{g} \times \nabla{h}$. The condition that $\nabla{f}$ be *orthogonal* to this vector, means that $\nabla{f}$ *must* sit in the plane described by $\nabla{g}$ and $\nabla{h}$ - the plane of orthogonal vectors to $\nabla{g} \times \nabla{h}$. That is, this condition is needed:
 
@@ -178,7 +178,7 @@ Instead of the comprehension, broadcasting can be used
 surface(X.(thetas, phis'), Y.(thetas, phis'), Z.(thetas, phis'))
 ```
 
-If the parameterization is presented as a function, broadcasting can be used to succintly plot
+If the parameterization is presented as a function, broadcasting can be used to succinctly plot
 
 
 ```{julia}
@@ -586,7 +586,7 @@ With this, the equation is simply $\vec{tl}(t) = \vec{f}(t_0) + \vec{f}'(t_0) \c
 ##### Example: parabolic motion
 
 
-In physics, we learn that the equation $F=ma$ can be used to derive a formula for postion, when acceleration, $a$, is a constant. The resulting equation of motion is $x = x_0 + v_0t + (1/2) at^2$. Similarly, if $x(t)$ is a vector-valued postion vector, and the *second* derivative, $x''(t) =\vec{a}$, a constant, then we have: $x(t) = \vec{x_0} + \vec{v_0}t + (1/2) \vec{a} t^2$.
+In physics, we learn that the equation $F=ma$ can be used to derive a formula for position, when acceleration, $a$, is a constant. The resulting equation of motion is $x = x_0 + v_0t + (1/2) at^2$. Similarly, if $x(t)$ is a vector-valued position vector, and the *second* derivative, $x''(t) =\vec{a}$, a constant, then we have: $x(t) = \vec{x_0} + \vec{v_0}t + (1/2) \vec{a} t^2$.
 
 
 For two dimensions, we have the force due to gravity acts downward, only in the $y$ direction. The acceleration is then $\vec{a} = \langle 0, -g \rangle$. If we start at the origin, with initial velocity $\vec{v_0} = \langle 2, 3\rangle$, then we can plot the trajectory until the object returns to ground ($y=0$) as follows:
@@ -1109,7 +1109,7 @@ $$
 0 = \frac{d(r^2 \dot{\theta})}{dt} = 2r\dot{r}\dot{\theta} + r^2 \ddot{\theta},
 $$
 
-which is the tranversal component of the acceleration times $r$, as decomposed above. This means, that the acceleration of the planet is completely towards the Sun at the origin.
+which is the transversal component of the acceleration times $r$, as decomposed above. This means that the acceleration of the planet is completely towards the Sun at the origin.
 
 
 Kepler's first law, relates $r$ and $\theta$ through the polar equation of an ellipse:
@@ -2020,7 +2020,7 @@ Let $r$ be the radius of a circle and for concreteness we position it at $(-r, 0
 
 * Between angles $0$ and $\pi/2$ the horse has unconstrained access, so they can graze a wedge of radius $R$.
 * Between angles $\pi/2$ and until the horse's $y$ position is $0$ when the tether is taut the boundary of what can be eaten is described by the involute.
-* The horse can't eat from withing the circle or radius $r$.
+* The horse can't eat from within the circle of radius $r$.
 
 
 ```{julia}
@@ -564,7 +564,7 @@ Vectors are defined similarly. As they are identified with *column* vectors, we
 𝒷 = [10, 11, 12] # not 𝒷 = [10 11 12], which would be a row vector.
 ```
 
-In `Julia`, entries in a matrix (or a vector) are stored in a container with a type wide enough accomodate each entry. In this example, the type is SymPy's `Sym` type:
+In `Julia`, entries in a matrix (or a vector) are stored in a container with a type wide enough to accommodate each entry. In this example, the type is SymPy's `Sym` type:
 
 
 ```{julia}