This commit is contained in:
jverzani 2022-05-24 13:51:49 -04:00
parent 5c0fd1b6fe
commit 244f492f9e
240 changed files with 65211 additions and 0 deletions

.github/workflows/documentation.yml vendored Normal file

@@ -0,0 +1,24 @@
name: Documentation
on:
push:
branches:
- main # update to match your development branch (master, main, dev, trunk, ...)
tags: '*'
pull_request:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: julia-actions/setup-julia@latest
with:
version: '1.7'
- name: Install dependencies
run: julia --project=docs/ -e 'using Pkg; Pkg.develop(PackageSpec(path=pwd())); Pkg.instantiate()'
- name: Build and deploy
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # If authenticating with GitHub Actions token
DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }} # If authenticating with SSH deploy key
run: julia --project=docs/ docs/make.jl

.gitignore vendored

@@ -1 +1,8 @@
/Manifest.toml
docs/Manifest.toml
docs/build
docs/site
/html
test/benchmarks.json
Manifest.toml
TODO.md

CwJ/ODEs/Project.toml Normal file

@@ -0,0 +1,10 @@
[deps]
DiffEqBase = "2b5f629d-d688-5b77-993f-72d75c75574e"
DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa"
ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
MonteCarloMeasurements = "0987c9cc-fe09-11e8-30f0-b96dd679fdca"
NLsolve = "2774e3e8-f4cf-5e23-947b-6d7e65073b56"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
QuadGK = "1fd47b50-473d-5c70-9696-f719f8f3bcdc"
Roots = "f2b01f46-fcfa-551c-844a-d8ac1e96c665"
SymPy = "24249f21-da20-56a4-8eb1-6a02cf4ae2e6"

CwJ/ODEs/cache/euler.cache vendored Normal file


CwJ/ODEs/cache/odes.cache vendored Normal file



@@ -0,0 +1,384 @@
# The `DifferentialEquations` suite
This section uses these add-on packages:
```julia
using OrdinaryDiffEq
using Plots
using ModelingToolkit
```
```julia; echo=false; results="hidden"
using CalculusWithJulia.WeaveSupport
const frontmatter = (
title = "The `DifferentialEquations` suite",
description = "Calculus with Julia: The `DifferentialEquations` suite",
tags = ["CalculusWithJulia", "odes", "the `differentialequations` suite"],
);
fig_size = (600, 400)
nothing
```
----
The
[`DifferentialEquations`](https://github.com/SciML/DifferentialEquations.jl)
suite of packages contains solvers for a wide range of
differential equations. This section just briefly touches on
ordinary differential equations (ODEs), and so relies only on the
`OrdinaryDiffEq` part of the suite. For more detail on this type and
many others covered by the suite of packages, there are many other
resources, including the
[documentation](https://diffeq.sciml.ai/stable/) and accompanying
[tutorials](https://github.com/SciML/SciMLTutorials.jl).
## SIR Model
We follow along with an introduction to the SIR model for the spread of disease by [Smith and Moore](https://www.maa.org/press/periodicals/loci/joma/the-sir-model-for-spread-of-disease-introduction). This model received a workout due to the COVID-19 pandemic.
The basic model breaks a population into three cohorts: The **susceptible** individuals, the **infected** individuals, and the **recovered** individuals. These add to the population size, ``N``, which is fixed, but the cohort sizes vary in time. We name these cohort sizes ``S(t)``, ``I(t)``, and ``R(t)`` and define ``s(t)=S(t)/N``, ``i(t) = I(t)/N`` and ``r(t) = R(t)/N`` to be the respective proportions.
The following *assumptions* are made about these cohorts by Smith and Moore:
> No one is added to the susceptible group, since we are ignoring births and immigration. The only way an individual leaves the susceptible group is by becoming infected.
This implies the rate of change in time of ``S(t)`` depends on the current number of susceptibles, and the amount of interaction with the infected cohorts. The model *assumes* each infected person has ``b`` contacts per day that are sufficient to spread the disease. Not all contacts will be with susceptible people, but if people are assumed to mix within the cohorts, then there will be on average ``b \cdot S(t)/N`` contacts with susceptible people per infected person. As each infected person is modeled identically, the time rate of change of ``S(t)`` is:
```math
\frac{dS}{dt} = - b \cdot \frac{S(t)}{N} \cdot I(t) = -b \cdot s(t) \cdot I(t)
```
It is negative, as no one is added, only taken off. After dividing by
``N``, this can also be expressed as ``s'(t) = -b s(t) i(t)``.
> assume that a fixed fraction ``k`` of the infected group will recover during any given day.
This means the change in time of the recovered depends on ``k`` and the number infected, giving rise to the equation
```math
\frac{dR}{dt} = k \cdot I(t)
```
which can also be expressed in proportions as ``r'(t) = k \cdot i(t)``.
Finally, from ``S(t) + I(t) + R(t) = N`` we have ``S'(t) + I'(t) + R'(t) = 0`` or ``s'(t) + i'(t) + r'(t) = 0``.
Combining, it is possible to express the rate of change of the infected population through:
```math
\frac{di}{dt} = b \cdot s(t) \cdot i(t) - k \cdot i(t)
```
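Before turning to a solver, the three rates can be stepped forward with a crude fixed-step scheme as a sanity check on the model. This is a sketch only: the helper names, the one-day step, and the parameter values (the ``k=1/3``, ``b=1/2`` estimates used below) are choices made here, not part of the presentation.

```julia
# Rates of change for the proportions (s, i, r), following
# ds/dt = -b*s*i, di/dt = b*s*i - k*i, dr/dt = k*i.
sir_rates(s, i, r; k=1/3, b=1/2) = (-b*s*i, b*s*i - k*i, k*i)

# Crude fixed-step updates: n days of size h.
function sir_sketch(s, i, r; h=1.0, n=150, k=1/3, b=1/2)
    for _ in 1:n
        ds, di, dr = sir_rates(s, i, r; k=k, b=b)
        s, i, r = s + h*ds, i + h*di, r + h*dr
    end
    (s, i, r)
end

s, i, r = sir_sketch(7_900_000/7_900_010, 10/7_900_010, 0.0)
s + i + r   # stays at 1, since the three rates sum to 0
```

Because the rates sum to ``0``, any such stepping scheme preserves ``s + i + r = 1`` exactly.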
The authors apply this model to flu statistics from Hong Kong, where:
```math
\begin{align*}
S(0) &= 7,900,000\\
I(0) &= 10\\
R(0) &= 0\\
\end{align*}
```
In `Julia` we define these, using `N` for the total population and `u0` for the initial proportions.
```julia
S0, I0, R0 = 7_900_000, 10, 0
N = S0 + I0 + R0
u0 = [S0, I0, R0]/N # initial proportions
```
An *estimated* set of values for ``k`` and ``b`` are ``k=1/3``, coming from the average period of infectiousness being estimated at three days and ``b=1/2``, which seems low in normal times, but not for an infected person who may be feeling quite ill and staying at home. (The model for COVID would certainly have a larger ``b`` value).
Okay, the mathematical modeling is done; now we try to solve for the unknown functions using `DifferentialEquations`.
To warm up, if ``b=0`` then ``i'(t) = -k \cdot i(t)`` describes the infected. (There is no circulation of people in this case.) The solution would be achieved through:
```julia; hold=true
k = 1/3
f(u,p,t) = -k * u # solving u'(t) = -k * u(t)
time_span = (0.0, 20.0)
prob = ODEProblem(f, I0/N, time_span)
sol = solve(prob, Tsit5(), reltol=1e-8, abstol=1e-8)
plot(sol)
```
The `sol` object is a set of numbers with a convenient `plot` method. As may have been expected, this graph shows exponential decay.
A few comments are in order. The problem we want to solve is
```math
\frac{di}{dt} = -k \cdot i(t) = F(i(t), k, t)
```
where ``F`` depends on the current value (``i``), a parameter (``k``), and the time (``t``). We did not utilize ``p`` above for the parameter, as it was easy not to, but could have, and will in the following. The time variable ``t`` does not appear by itself in our equation, so only `f(u, p, t) = -k * u` was used, `u` the generic name for a solution which in this case is ``i``.
The problem we set up needs an initial value (the ``u0``) and a time span to solve over. Here we want time to model real time, so use floating point values.
The plot shows steady decay, as there is no mixing of infected with others.
Adding in the interaction requires a bit more work. We now have what is known as a *system* of equations:
```math
\begin{align*}
\frac{ds}{dt} &= -b \cdot s(t) \cdot i(t)\\
\frac{di}{dt} &= b \cdot s(t) \cdot i(t) - k \cdot i(t)\\
\frac{dr}{dt} &= k \cdot i(t)\\
\end{align*}
```
Systems of equations can be solved in a similar manner as a single ordinary differential equation, though adjustments are made to accommodate the multiple functions.
We use a style that updates values in place, and note that `u` now holds ``3`` different functions at once:
```julia
function sir!(du, u, p, t)
    k, b = p
    s, i, r = u[1], u[2], u[3]
    ds = -b * s * i
    di = b * s * i - k * i
    dr = k * i
    du[1], du[2], du[3] = ds, di, dr
end
```
The notation `du` is suggestive of both the derivative and a small increment. The mathematical formulation follows the derivative, the numeric solution uses a time step and increments the solution over this time step. The `Tsit5()` solver, used here, adaptively chooses a time step, `dt`; were the `Euler` method used, this time step would need to be explicit.
```julia; echo=false
note("""
The `sir!` function has the trailing `!` indicating -- by convention -- it *mutates* its first value, `du`. In this case, through an assignment, as in `du[1]=ds`. This could use some explanation. The *binding* `du` refers to the *container* holding the ``3`` values, whereas `du[1]` refers to the first value in that container. So `du[1]=ds` changes the first value, but not the *binding* of `du` to the container. That is, `du` mutates. This would be quite different were the call `du = [ds,di,dr]` which would create a new *binding* to a new container and not mutate the values in the original container.
""", title="Mutation not re-binding")
```
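The distinction is easy to demonstrate outside the model (the names here are illustrative only):

```julia
mutate!(v) = (v[1] = 99; nothing)   # changes the caller's container in place
rebind(v)  = (v = [99]; nothing)    # rebinds a local name; the caller is unaffected

u = [1, 2, 3]
rebind(u)    # `u` is still [1, 2, 3]
mutate!(u)   # now `u` is [99, 2, 3]
u
```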
With the update function defined, the problem is set up and a solution found in the same manner:
```julia;
p = (k=1/3, b=1/2) # parameters
time_span = (0.0, 150.0) # time span to solve over, 5 months
prob = ODEProblem(sir!, u0, time_span, p)
sol = solve(prob, Tsit5())
plot(sol)
plot!(x -> 0.5, linewidth=2) # mark 50% line
```
The lowest curve shows the proportion infected on each day over the five-month period displayed. The peak is around 6-7% of the population at any one time. However, over time the recovered part of the population reaches over 50%, meaning more than half the population is modeled as getting sick.
Now we change the parameter ``b`` and observe the difference. We passed in a value `p` holding our two parameters, so we just need to change that and run the model again:
```julia; hold=true
p = (k=1/3, b=2) # change b from 1/2 to 2 -- more daily contact
prob = ODEProblem(sir!, u0, time_span, p)
sol = solve(prob, Tsit5())
plot(sol)
```
The graphs are somewhat similar, but the steady state is reached much more quickly and nearly everyone became infected.
What about if ``k`` were bigger?
```julia; hold=true
p = (k=2/3, b=1/2)
prob = ODEProblem(sir!, u0, time_span, p)
sol = solve(prob, Tsit5())
plot(sol)
```
The graphs show that under these conditions the infections never take off; we have ``i' = (b\cdot s-k)i = k\cdot((b/k) s - 1) i``, which is always negative, since ``(b/k)s \leq b/k = 3/4 < 1``, so infections will only decay.
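The sign claim can be checked directly from the formula for ``di/dt`` (a small numeric check; the helper name is ours):

```julia
# di/dt = b*s*i - k*i for the SIR proportions
di_dt(s, i; k, b) = b*s*i - k*i

# With k = 2/3 > b = 1/2, infections decay even when everyone is susceptible:
di_dt(1.0, 0.01; k=2/3, b=1/2)   # negative
# With the original k = 1/3 < b = 1/2, they initially grow:
di_dt(1.0, 0.01; k=1/3, b=1/2)   # positive
```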
The solution object can be indexed by step, with each entry holding the `s`, `i`, and `r` estimates at that time. We use this structure below to return the estimated proportion of recovered individuals at the end of the time span.
```julia
function recovered(k,b)
prob = ODEProblem(sir!, u0, time_span, (k,b));
sol = solve(prob, Tsit5());
s,i,r = last(sol)
r
end
```
This function makes it easy to see the impact of changing the parameters. For example, fixing ``k=1/3`` we have:
```julia
f(b) = recovered(1/3, b)
plot(f, 0, 2)
```
This very clearly shows the sharp dependence on the value of ``b``; below some level, the proportion of people who are ever infected (the recovered cohort) remains near ``0``; above that level it can climb quickly towards ``1``.
The function `recovered` is of two variables, returning a single value. In subsequent sections we will see a few ``3``-dimensional plots that are common for such functions; here we skip ahead and show how to visualize multiple function plots at once using "`z`" values in a graph.
```julia; hold=true
k, ks = 0.1, 0.2:0.1:0.9 # first `k` and then the rest
bs = range(0, 2, length=100)
zs = recovered.(k, bs) # find values for fixed k, each of bs
p = plot(bs, k*one.(bs), zs, legend=false) # k*one.(bs) is [k,k,...,k]
for k in ks
plot!(p, bs, k*one.(bs), recovered.(k, bs))
end
p
```
The 3-dimensional graph with `plotly` can have its viewing angle
adjusted with the mouse. When looking down on the ``x-y`` plane, whose
axes encode ``b`` and ``k``, we can see the rapid growth along a line
related to ``b/k``.
Smith and Moore point out that ``k`` is roughly the reciprocal of the number of days an individual is sick enough to infect others. This can be estimated during an outbreak. However, they go on to note that there is no direct way to observe ``b``, but there is an indirect way.
The ratio ``c = b/k`` is the number of close contacts per day times the number of days infected, which is the number of close contacts per infected individual.
This can be estimated from the curves once the steady state has been reached (at the end of the pandemic).
```math
\frac{di}{ds} = \frac{di/dt}{ds/dt} = \frac{b \cdot s(t) \cdot i(t) - k \cdot i(t)}{-b \cdot s(t) \cdot i(t)} = -1 + \frac{1}{c \cdot s}
```
This equation does not depend on ``t``; ``s`` is the independent variable. It could be solved numerically, but in this case it affords an algebraic solution: ``i = -s + (1/c) \log(s) + q``, where ``q`` is some constant. The quantity ``q = i + s - (1/c) \log(s)`` does not depend on time, so it is the same at time ``t=0`` as it is as ``t \rightarrow \infty``. At ``t=0`` we have ``s(0) \approx 1`` and ``i(0) \approx 0``, whereas as ``t \rightarrow \infty``, ``i(t) \rightarrow 0`` and ``s(t)`` goes to the steady state value, which can be estimated. Solving with ``t=0``, we see ``q = 0 + 1 - (1/c)\log(1) = 1``. In the limit then ``1 = 0 + s_{\infty} - (1/c)\log(s_\infty)``, or ``c = \log(s_\infty)/(s_\infty - 1)``.
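In code, then, an estimate of the steady state translates directly into an estimate of ``c``. The helper name and the ``0.42`` steady-state value below are made up for illustration:

```julia
# c = log(s_inf)/(s_inf - 1), from q = 1 evaluated at t = 0 and t -> infinity.
# Both log(s_inf) and s_inf - 1 are negative, so c is positive.
contact_number(s_inf) = log(s_inf) / (s_inf - 1)

contact_number(0.42)   # roughly 1.5 when about 42% never get sick
```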
## Trajectory with drag
We now solve numerically the problem of a trajectory with a drag force from air resistance.
The general model is:
```math
\begin{align*}
x''(t) &= - W(t,x(t), x'(t), y(t), y'(t)) \cdot x'(t)\\
y''(t) &= -g - W(t,x(t), x'(t), y(t), y'(t)) \cdot y'(t)\\
\end{align*}
```
with initial conditions: ``x(0) = y(0) = 0`` and ``x'(0) = v_0 \cos(\theta), y'(0) = v_0 \sin(\theta)``.
This is turned into a system of first-order ODEs by a standard trick. Here we define our function for updating a step. As can be seen, the vector `u` contains both ``\langle x,y \rangle``
and ``\langle x',y' \rangle``:
```julia
function xy!(du, u, p, t)
    g, γ = p.g, p.k
    x, y = u[1], u[2]
    x′, y′ = u[3], u[4]   # unicode \prime[tab]
    W = γ
    du[1] = x′
    du[2] = y′
    du[3] = 0 - W * x′
    du[4] = -g - W * y′
end
```
The function ``W`` is just a constant above, but can easily be modified as desired.
```julia; echo=false
note("""
The "standard" trick is to take a second order ODE like ``u''(t)=u`` and turn this into two coupled ODEs by using a new name: ``v=u'(t)`` and then ``v'(t) = u(t)``. In this application, there are ``4`` equations, as we have *both* ``x''`` and ``y''`` being so converted. The first and second components of ``du`` are new variables, the third and fourth show the original equation.
""", title="A second-order ODE is a coupled first-order ODE")
```
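The same trick can be exercised on a problem with a known answer. Dropping drag and the ``x`` direction, ``y'' = -g`` becomes the pair ``y' = v``, ``v' = -g``, which a few hand-rolled fixed steps reproduce. This is a sketch with a made-up step size, not the solver used in this section:

```julia
# Solve y'' = -g as the first-order system (y, v)' = (v, -g) with Euler steps.
function fall(y0, v0; g=9.8, h=0.001, T=1.0)
    y, v = float(y0), float(v0)
    for _ in 1:round(Int, T/h)
        y, v = y + h*v, v - h*g   # du[1] = v, du[2] = -g
    end
    y
end

y_numeric = fall(0.0, 20.0)   # numeric y(1)
y_exact = 20.0*1 - 9.8/2      # v0*t - g*t^2/2 at t = 1, i.e. 15.1
```

With ``h = 0.001`` the stepped value lands within about ``0.005`` of the closed form.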
The initial conditions are specified through:
```julia
θ = pi/4
v₀ = 200
xy₀ = [0.0, 0.0]
vxy₀ = v₀ * [cos(θ), sin(θ)]
INITIAL = vcat(xy₀, vxy₀)
```
The time span can be computed using an *upper* bound from the no-drag case, for which the classic physics formulas give (when ``y_0=0``) a flight time of ``2v_{y0}/g``, hence the span ``(0, 2v_{y0}/g)``:
```julia
g = 9.8
TSPAN = (0, 2*vxy₀[2] / g)
```
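The no-drag bound comes from the classic projectile formulas, which are easy to sanity-check for the values above (the helper names are ours; ``\theta = \pi/4`` and ``v_0 = 200``):

```julia
# Classic no-drag projectile formulas (with y0 = 0):
flight_time(v0, θ; g=9.8) = 2v0*sin(θ)/g      # time T with y(T) = 0
x_range(v0, θ; g=9.8)     = v0^2*sin(2θ)/g    # horizontal distance at time T

flight_time(200, pi/4)   # about 28.9 seconds, matching TSPAN's upper end
x_range(200, pi/4)       # about 4081.6 meters
```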
This allows us to define an `ODEProblem`:
```julia
trajectory_problem = ODEProblem(xy!, INITIAL, TSPAN)
```
When ``\gamma = 0`` there should be no drag, and we expect to see a parabola:
```julia; hold=true
ps = (g=9.8, k=0)
SOL = solve(trajectory_problem, Tsit5(); p = ps)
plot(t -> SOL(t)[1], t -> SOL(t)[2], TSPAN...; legend=false)
```
The plot is a parametric plot of the ``x`` and ``y`` parts of the solution over the time span. We can see the expected parabolic shape.
On a *windy* day, the value of ``k`` would be positive. Repeating the above with ``k=1/4`` gives:
```julia; hold=true
ps = (g=9.8, k=1/4)
SOL = solve(trajectory_problem, Tsit5(); p = ps)
plot(t -> SOL(t)[1], t -> SOL(t)[2], TSPAN...; legend=false)
```
We see that the ``y`` values have gone negative. The `DifferentialEquations` package can adjust for that with a *callback* which terminates the problem once ``y`` has gone negative. This can be implemented as follows:
```julia; hold=true
condition(u,t,integrator) = u[2] # triggers when `u[2]` crosses zero
affect!(integrator) = terminate!(integrator) # stop the process
cb = ContinuousCallback(condition, affect!)
ps = (g=9.8, k = 1/4)
SOL = solve(trajectory_problem, Tsit5(); p = ps, callback=cb)
plot(t -> SOL(t)[1], t -> SOL(t)[2], TSPAN...; legend=false)
```
Finally, we note that the `ModelingToolkit` package provides symbolic-numeric computing. This allows the equations to be set up symbolically, as in `SymPy` before being passed off to `DifferentialEquations` to solve numerically. The above example with no wind resistance could be translated into the following:
```julia; hold=true
@parameters t γ g
@variables x(t) y(t)
D = Differential(t)
eqs = [D(D(x)) ~ -γ * D(x),
D(D(y)) ~ -g - γ * D(y)]
@named sys = ODESystem(eqs)
sys = ode_order_lowering(sys) # turn 2nd order into 1st
u0 = [D(x) => vxy₀[1],
D(y) => vxy₀[2],
x => 0.0,
y => 0.0]
p = [γ => 0.0,
g => 9.8]
prob = ODEProblem(sys, u0, TSPAN, p, jac=true)
sol = solve(prob,Tsit5())
plot(t -> sol(t)[3], t -> sol(t)[4], TSPAN..., legend=false)
```
The toolkit will automatically generate fast functions and can perform transformations (such as is done by `ode_order_lowering`) before passing along to the numeric solvers.

CwJ/ODEs/euler.jmd Normal file

@@ -0,0 +1,834 @@
# Euler's method
This section uses these add-on packages:
```julia
using CalculusWithJulia
using Plots
using SymPy
using Roots
```
```julia; echo=false; results="hidden"
using CalculusWithJulia.WeaveSupport
const frontmatter = (
title = "Euler's method",
description = "Calculus with Julia: Euler's method",
tags = ["CalculusWithJulia", "odes", "euler's method"],
);
fig_size = (600, 400)
nothing
```
----
The following section takes up the task of numerically approximating solutions to differential equations. `Julia` has a huge set of state-of-the-art tools for this task starting with the [DifferentialEquations](https://github.com/SciML/DifferentialEquations.jl) package. We don't use that package in this section, focusing on simpler methods and implementations for pedagogical purposes, but any further exploration should utilize the tools provided therein. A brief introduction to the package follows in an upcoming [section](./differential_equations.html).
----
Consider the differential equation:
```math
y'(x) = y(x) \cdot x, \quad y(1)=1,
```
which can be solved with `SymPy`:
```julia;
@syms x, y, u()
D = Differential(x)
x0, y0 = 1, 1
F(y,x) = y*x
dsolve(D(u)(x) - F(u(x), x))
```
With the given initial condition, the solution becomes:
```julia;
out = dsolve(D(u)(x) - F(u(x),x), u(x), ics=Dict(u(x0) => y0))
```
Plotting this solution over the slope field
```julia;
p = plot(legend=false)
vectorfieldplot!((x,y) -> [1, F(x,y)], xlims=(0, 2.5), ylims=(0, 10))
plot!(rhs(out), linewidth=5)
```
we see that the vectors that are drawn seem to be tangent to the graph
of the solution. This is no coincidence, the tangent lines to integral
curves are in the direction of the slope field.
What if the graph of the solution were not there, could we use this
fact to *approximately* reconstruct the solution?
That is, if we stitched together pieces of the slope field, would we
get a curve that was close to the actual answer?
```julia; hold=true; echo=false; cache=true
## {{{euler_graph}}}
function make_euler_graph(n)
x, y = symbols("x, y")
F(y,x) = y*x
x0, y0 = 1, 1
h = (2-1)/5
xs = zeros(n+1)
ys = zeros(n+1)
xs[1] = x0 # index is off by 1
ys[1] = y0
for i in 1:n
xs[i + 1] = xs[i] + h
ys[i + 1] = ys[i] + h * F(ys[i], xs[i])
end
p = plot(legend=false)
vectorfieldplot!((x,y) -> [1, F(y,x)], xlims=(1,2), ylims=(0,6))
## Add Euler soln
plot!(p, xs, ys, linewidth=5)
scatter!(p, xs, ys)
## add function
out = dsolve(D(u)(x) - F(u(x), x), u(x), ics=Dict(u(x0) => y0))
plot!(p, rhs(out), x0, xs[end], linewidth=5)
p
end
n = 5
anim = @animate for i=1:n
make_euler_graph(i)
end
imgfile = tempname() * ".gif"
gif(anim, imgfile, fps = 1)
caption = """
Illustration of a function stitching together slope field lines to
approximate the answer to an initial-value problem. The other function drawn is the actual solution.
"""
ImageFile(imgfile, caption)
```
The illustration suggests the answer is yes; let's see. The solution
is drawn over $x$ values $1$ to $2$. Let's try piecing together $5$
pieces between $1$ and $2$ and see what we have.
The slope-field vectors are *scaled* versions of the vector `[1, F(y,x)]`. The `1`
is the part in the direction of the $x$ axis, so here we would like
that to be $0.2$ (which is $(2-1)/5$). So our vectors would be `0.2 *
[1, F(y,x)]`. To allow for generality, we use `h` in place of the
specific value $0.2$.
Then our first piece would be the line connecting $(x_0,y_0)$ to
```math
\langle x_0, y_0 \rangle + h \cdot \langle 1, F(y_0, x_0) \rangle.
```
The above uses vector notation to add the piece scaled by $h$ to the
starting point. Rather than continue with that notation, we will use
subscripts. Let $x_1$, $y_1$ be the position of the tip of the
vector. Then we have:
```math
x_1 = x_0 + h, \quad y_1 = y_0 + h F(y_0, x_0).
```
With this notation, it is easy to see what comes next:
```math
x_2 = x_1 + h, \quad y_2 = y_1 + h F(y_1, x_1).
```
We just shifted the indices forward by $1$. But graphically what is
this? It takes the tip of the first part of our "stitched" together
solution, finds the slope field there (`[1, F(y,x)]`), and then uses
this direction to stitch together one more piece.
Clearly, we can repeat. The $n$th piece will end at:
```math
x_{n+1} = x_n + h, \quad y_{n+1} = y_n + h F(y_n, x_n).
```
For our example, we can do some numerics. We want $h=0.2$ and $5$
pieces, so values of $y$ at $x_0=1, x_1=1.2, x_2=1.4, x_3=1.6,
x_4=1.8,$ and $x_5=2$.
Below we do this in a loop. We have to be a bit careful, as in `Julia`
the vector of zeros we create to store our answers begins indexing at
$1$, and not $0$.
```julia;
n = 5
h = (2-1)/n
xs = zeros(n+1)
ys = zeros(n+1)
xs[1] = x0 # index is off by 1
ys[1] = y0
for i in 1:n
    xs[i + 1] = xs[i] + h
    ys[i + 1] = ys[i] + h * F(ys[i], xs[i])
end
```
So how did we do? Let's look graphically:
```julia;
plot(exp(-1/2)*exp(x^2/2), x0, 2)
plot!(xs, ys)
```
Not bad. We wouldn't expect this to be exact, due to the concavity
of the solution; each step is an underestimate. However, we see it is
an okay approximation that would likely be better with a smaller $h$, a
topic we pursue in just a bit.
Rather than type in the above commands each time, we wrap it all up in
a function. The inputs are $n$, $a=x_0$, $b=x_n$, $y_0$, and, most
importantly, $F$. The output is massaged into a function through a
call to `linterp`, rather than two vectors. The `linterp` function we define below just
finds a function that linearly interpolates between the points and is
`NaN` outside of the range of the $x$ values:
```julia;
function linterp(xs, ys)
    function(x)
        ((x < xs[1]) || (x > xs[end])) && return NaN
        for i in 1:(length(xs) - 1)
            if xs[i] <= x < xs[i+1]
                l = (x - xs[i]) / (xs[i+1] - xs[i])
                return (1-l) * ys[i] + l * ys[i+1]
            end
        end
        ys[end]
    end
end
```
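A quick check of the interpolation behavior at, between, and outside the nodes, repeating the definition so the snippet stands alone:

```julia
function linterp(xs, ys)
    function(x)
        ((x < xs[1]) || (x > xs[end])) && return NaN
        for i in 1:(length(xs) - 1)
            if xs[i] <= x < xs[i+1]
                l = (x - xs[i]) / (xs[i+1] - xs[i])
                return (1-l) * ys[i] + l * ys[i+1]
            end
        end
        ys[end]
    end
end

f = linterp([0, 1, 2], [0, 1, 4])
f(0.5)   # 0.5, halfway between 0 and 1
f(1.5)   # 2.5, halfway between 1 and 4
f(2)     # 4, the right endpoint
f(-1)    # NaN, outside the x range
```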
With that, here is our function to find an approximate solution to $y'=F(y,x)$ with initial condition:
```julia;
function euler(F, x0, xn, y0, n)
    h = (xn - x0)/n
    xs = zeros(n+1)
    ys = zeros(n+1)
    xs[1] = x0
    ys[1] = y0
    for i in 1:n
        xs[i + 1] = xs[i] + h
        ys[i + 1] = ys[i] + h * F(ys[i], xs[i])
    end
    linterp(xs, ys)
end
```
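As a check on the iteration itself, a stripped-down variant returning just the final value can be compared against a known solution (`euler_last` is our name for this sketch, not from the text):

```julia
# Same iteration as `euler`, but returning only the last y value.
function euler_last(F, x0, xn, y0, n)
    h = (xn - x0)/n
    x, y = float(x0), float(y0)
    for _ in 1:n
        y += h * F(y, x)
        x += h
    end
    y
end

# y' = y, y(0) = 1 has solution e^x; here Euler's method computes (1 + h)^n.
euler_last((y, x) -> y, 0, 1, 1, 1000)   # just under e, as each step underestimates
```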
With `euler`, it becomes easy to explore different values.
For example, we thought the solution would look better with a smaller $h$ (or larger $n$). Instead of $n=5$, let's try $n=50$:
```julia;
u₁₂ = euler(F, 1, 2, 1, 50)
plot(exp(-1/2)*exp(x^2/2), x0, 2)
plot!(u₁₂, x0, 2)
```
It is more work for the computer, but not for us, and clearly a much better approximation to the actual answer is found.
## The Euler method
```julia; hold=true; echo=false
imgfile ="figures/euler.png"
caption = """
Figure from first publication of Euler's method. From [Gander and Wanner](http://www.unige.ch/~gander/Preprints/Ritz.pdf).
"""
ImageFile(:ODEs, imgfile, caption)
```
The name of our function reflects the [mathematician](https://en.wikipedia.org/wiki/Leonhard_Euler) associated with the iteration:
```math
x_{n+1} = x_n + h, \quad y_{n+1} = y_n + h \cdot F(y_n, x_n),
```
to approximate a solution to the first-order, ordinary differential
equation with initial values: $y'(x) = F(y,x)$.
[The Euler method](https://en.wikipedia.org/wiki/Euler_method) uses
linearization. Each "step" is just an approximation of the function
value $y(x_{n+1})$ with the value from the tangent line tangent to the
point $(x_n, y_n)$.
Each step introduces an error. The error in one step is known as the
*local truncation error* and can be shown to be about equal to $1/2
\cdot h^2 \cdot y''(x_{n})$, assuming $y$ has ``3`` or more derivatives.
The total error, or more commonly, *global truncation error*, is the
error between the actual answer and the approximate answer at the end
of the process. It reflects an accumulation of these local errors. This
error is *bounded* by a constant times $h$. Since it gets smaller as
$h$ gets smaller in direct proportion, the Euler method is called
*first order*.
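First-order behavior can be observed numerically: halving $h$ should roughly halve the global error. For $y' = y$, $y(0)=1$ on $[0,1]$, Euler's value at $x=1$ is exactly $(1+h)^n$, so the error needs no simulation (the `err` helper is ours):

```julia
# Global error at x = 1 for y' = y, y(0) = 1; the exact answer is e.
err(n) = exp(1) - (1 + 1/n)^n

err(100)             # about 0.0135
err(200)             # about 0.0068
err(100) / err(200)  # about 2, as expected for a first-order method
```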
Other, somewhat more complicated, methods have global truncation errors that
involve higher powers of $h$ - that is, for the same size $h$, the
error is smaller. An analogy is the fact that Riemann sums have
error that depends on $h$, whereas other methods of approximating the
integral have smaller errors. For example, Simpson's rule has error
related to $h^4$. So, while the Euler method may not be employed if there
is concern about total resources (time, computer, ...), it is
important for theoretical purposes, in a manner similar to the role of the Riemann
integral.
In the examples, we will see that for many problems the simple Euler
method is satisfactory, but not always so. The task of numerically
solving differential equations is not a one-size-fits-all one. In the
following, a few different modifications are presented to the basic
Euler method, but this just scratches the surface of the topic.
#### Examples
##### Example
Consider the initial value problem $y'(x) = x + y(x)$ with initial
condition $y(0)=1$. This problem can be solved exactly. Here we
approximate over $[0,2]$ using Euler's method.
```julia;
𝑭(y,x) = x + y
𝒙0, 𝒙n, 𝒚0 = 0, 2, 1
𝒇 = euler(𝑭, 𝒙0, 𝒙n, 𝒚0, 25)
𝒇(𝒙n)
```
We graphically compare our approximate answer with the exact one:
```julia;
plot(𝒇, 𝒙0, 𝒙n)
𝒐ut = dsolve(D(u)(x) - 𝑭(u(x),x), u(x), ics = Dict(u(𝒙0) => 𝒚0))
plot(rhs(𝒐ut), 𝒙0, 𝒙n)
plot!(𝒇, 𝒙0, 𝒙n)
```
From the graph it appears our value for `𝒇(𝒙n)` will slightly underestimate the
actual value of the solution.
##### Example
The equation $y'(x) = \sin(x \cdot y)$ is not separable, so need not have an
easy solution. The default method will fail. Looking at the available methods with `sympy.classify_ode(𝐞qn, u(x))` shows a power series method which
can return a power series *approximation* (a Taylor polynomial). Let's
look at comparing an approximate answer given by the Euler method to
that one returned by `SymPy`.
First, the `SymPy` solution:
```julia;
𝐅(y,x) = sin(x*y)
𝐞qn = D(u)(x) - 𝐅(u(x), x)
𝐨ut = dsolve(𝐞qn, hint="1st_power_series")
```
If we assume $y(0) = 1$, we can continue:
```julia;
𝐨ut1 = dsolve(𝐞qn, u(x), ics=Dict(u(0) => 1), hint="1st_power_series")
```
The approximate value given by the Euler method is
```julia;
𝐱0, 𝐱n, 𝐲0 = 0, 2, 1
plot(legend=false)
vectorfieldplot!((x,y) -> [1, 𝐅(y,x)], xlims=(𝐱0, 𝐱n), ylims=(0,5))
plot!(rhs(𝐨ut1).removeO(), linewidth=5)
𝐮 = euler(𝐅, 𝐱0, 𝐱n, 𝐲0, 10)
plot!(𝐮, linewidth=5)
```
We see that the answer found from using a polynomial series matches that of Euler's method for a bit, but as time evolves, the approximate solution given by Euler's method more closely tracks the slope field.
##### Example
The
[Brachistochrone problem](http://www.unige.ch/~gander/Preprints/Ritz.pdf)
was posed by Johann Bernoulli in 1696. It asked for the curve between
two points for which an object will fall faster along that curve than
any other. For an example, a bead sliding on a wire will take a certain amount of time to get from point $A$ to point $B$, the time depending on the shape of the wire. Which shape will take the least amount of time?
```julia; hold=true; echo=false
imgfile = "figures/bead-game.jpg"
caption = """
A child's bead game. What shape wire will produce the shortest time for a bead to slide from the top to the bottom?
"""
ImageFile(:ODEs, imgfile, caption)
```
Restrict our attention to the $x$-$y$ plane, and consider a path
between the point $(0,A)$ and $(B,0)$. Let $y(x)$ be the distance from
$A$, so $y(0)=0$ and at the end $y$ will be $A$.
[Galileo](http://www-history.mcs.st-and.ac.uk/HistTopics/Brachistochrone.html)
knew the straight line was not the curve, but incorrectly thought the
answer was a part of a circle.
```julia; hold=true; echo=false
imgfile = "figures/galileo.gif"
caption = """
As early as 1638, Galileo showed that an object falling along `AC` and then `CB` will fall faster than one traveling along `AB`, where `C` is on the arc of a circle.
From the [History of Math Archive](http://www-history.mcs.st-and.ac.uk/HistTopics/Brachistochrone.html).
"""
ImageFile(:ODEs, imgfile, caption)
```
This simulation also suggests that a curved path is better than the shorter straight one:
```julia; hold=true; echo=false; cache=true
##{{{brach_graph}}}
function brach(f, x0, vx0, y0, vy0, dt, n)
m = 1
g = 9.8
axs = Float64[0]
ays = Float64[-g]
vxs = Float64[vx0]
vys = Float64[vy0]
xs = Float64[x0]
ys = Float64[y0]
for i in 1:n
x = xs[end]
vx = vxs[end]
ax = -f'(x) * (f''(x) * vx^2 + g) / (1 + f'(x)^2)
ay = f''(x) * vx^2 + f'(x) * ax
push!(axs, ax)
push!(ays, ay)
push!(vxs, vx + ax * dt)
push!(vys, vys[end] + ay * dt)
push!(xs, x + vxs[end] * dt)# + (1/2) * ax * dt^2)
push!(ys, ys[end] + vys[end] * dt)# + (1/2) * ay * dt^2)
end
[xs ys vxs vys axs ays]
end
fs = [x -> 1 - x,
x -> (x-1)^2,
x -> 1 - sqrt(1 - (x-1)^2),
x -> - (x-1)*(x+1),
x -> 3*(x-1)*(x-1/3)
]
MS = [brach(f, 1/100, 0, 1, 0, 1/100, 100) for f in fs]
function make_brach_graph(n)
p = plot(xlim=(0,1), ylim=(-1/3, 1), legend=false)
for (i,f) in enumerate(fs)
plot!(f, 0, 1)
U = MS[i]
x = min(1.0, U[n,1])
scatter!(p, [x], [f(x)])
end
p
end
n = 4
anim = @animate for i=[1,5,10,15,20,25,30,35,40,45,50,55,60]
make_brach_graph(i)
end
imgfile = tempname() * ".gif"
gif(anim, imgfile, fps = 1)
caption = """
The race is on. An illustration of beads falling along a path, as can be seen, some paths are faster than others. The fastest path would follow a cycloid. See [Bensky and Moelter](https://pdfs.semanticscholar.org/66c1/4d8da6f2f5f2b93faf4deb77aafc7febb43a.pdf) for details on simulating a bead on a wire.
"""
ImageFile(imgfile, caption)
```
Now, the natural question is which path is best? The solution can be
[reduced](http://mathworld.wolfram.com/BrachistochroneProblem.html) to
solving this equation for a positive $c$:
```math
1 + (y'(x))^2 = \frac{C}{y}, \quad C > 0.
```
Reexpressing, this becomes:
```math
\frac{dy}{dx} = \sqrt{\frac{C-y}{y}}.
```
This is a separable equation and can be solved, but even `SymPy` has
trouble with this integral. However, the result has been known to be a piece of a cycloid since the insightful
Johann Bernoulli used an analogy with light bending to approach the problem. The answer is best described parametrically
through:
```math
x(u) = C\cdot u - \frac{C}{2}\sin(2u), \quad y(u) = \frac{C}{2}( 1- \cos(2u)), \quad 0 \leq u \leq U.
```
The values of $U$ and $C$ must satisfy $(x(U), y(U)) = (B, A)$.
Rather than pursue this, we will solve it numerically for a fixed
value of $C$ over a fixed interval to see the shape.
The equation can be written in terms of $y'=F(y,x)$, where
```math
F(y,x) = \sqrt{\frac{C-y}{y}}.
```
But as $y_0 = 0$, we immediately would have a problem with the first step, as there would be division by $0$.
This says that for the optimal solution, the bead picks up speed by first sliding straight down before heading off towards $B$. That's great for the physics, but runs roughshod over our Euler method, as the first step has an infinity.
For this, we can try the *backwards Euler* method which uses the slope at $(x_{n+1}, y_{n+1})$, rather than $(x_n, y_n)$. The update step becomes:
```math
y_{n+1} = y_n + h \cdot F(y_{n+1}, x_{n+1}).
```
Seems innocuous, but the value we are trying to find, $y_{n+1}$, is
now on both sides of the equation, so is only *implicitly* defined. In
this code, we use the `find_zero` function from the `Roots` package. The
caveat is, this function needs a good initial guess, and the one we
use below need not be widely applicable.
```julia;
function back_euler(F, x0, xn, y0, n)
h = (xn - x0)/n
xs = zeros(n+1)
ys = zeros(n+1)
xs[1] = x0
ys[1] = y0
for i in 1:n
xs[i + 1] = xs[i] + h
## solve y[i+1] = y[i] + h * F(y[i+1], x[i+1])
ys[i + 1] = find_zero(y -> ys[i] + h * F(y, xs[i + 1]) - y, ys[i]+h)
end
linterp(xs, ys)
end
```
We then have with $C=1$ over the interval $[0,1.2]$ the following:
```julia;
𝐹(y, x; C=1) = sqrt(C/y - 1)
𝑥0, 𝑥n, 𝑦0 = 0, 1.2, 0
cyc = back_euler(𝐹, 𝑥0, 𝑥n, 𝑦0, 50)
plot(x -> 1 - cyc(x), 𝑥0, 𝑥n)
```
Remember, $y$ is the displacement from the top, so it is
non-negative. Above we flipped the graph to match expectations. In
general, the trajectory may actually dip below the
ending point and come back up. The above won't see this, for as
written $dy/dx \geq 0$, which need not be the case, as the defining
equation is in terms of $(dy/dx)^2$, so the derivative could have any
sign.
##### Example: stiff equations
The Euler method is *convergent*, in that as $h$ goes to $0$, the
approximate solution will converge to the actual answer. However, this
does not say that for a fixed size $h$, the approximate value will be
good. For example, consider the differential equation $y'(x) =
-5y$. This has solution $y(x)=y_0 e^{-5x}$. However, if we try the
Euler method to get an answer over $[0,2]$ with $h=0.5$ we don't see
this:
```julia;
𝒻(y, x) = -5y
𝓍0, 𝓍n, 𝓎0 = 0, 2, 1
𝓊 = euler(𝒻, 𝓍0, 𝓍n, 𝓎0, 4)  # n = 4 => h = 2/4
vectorfieldplot((x,y) -> [1, 𝒻(y,x)], xlims=(0, 2), ylims=(-5, 5))
plot!(x -> 𝓎0 * exp(-5x), 0, 2, linewidth=5)
plot!(𝓊, 0, 2, linewidth=5)
```
What we see is that the value of $h$ is too big to capture the decay
scale of the solution. A smaller $h$, can do much better:
```julia;
𝓊₁ = euler(𝒻, 𝓍0, 𝓍n, 𝓎0, 50)  # n = 50 => h = 2/50
plot(x -> 𝓎0 * exp(-5x), 0, 2)
plot!(𝓊₁, 0, 2)
```
This is an example of a
[stiff equation](https://en.wikipedia.org/wiki/Stiff_equation). Such
equations cause problems for explicit methods like Euler's, as very small
values of $h$ are needed to get good results.
The implicit, backward Euler method does not have this issue, as we can see here:
```julia;
𝓊₂ = back_euler(𝒻, 𝓍0, 𝓍n, 𝓎0, 4)  # n = 4 => h = 2/4
vectorfieldplot((x,y) -> [1, 𝒻(y,x)], xlims=(0, 2), ylims=(-1, 1))
plot!(x -> 𝓎0 * exp(-5x), 0, 2, linewidth=5)
plot!(𝓊₂, 0, 2, linewidth=5)
```
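The contrast between the two methods on this problem can also be seen directly from their one-step update rules, with no plotting. This is a small sketch of our own; the factors follow from simple algebra applied to each update:

```julia; hold=true
h = 0.5
# forward Euler:  yᵢ₊₁ = yᵢ + h⋅(-5yᵢ)   = (1 - 5h)⋅yᵢ,  a factor of -1.5
# backward Euler: yᵢ₊₁ = yᵢ + h⋅(-5yᵢ₊₁)  ⟹  yᵢ₊₁ = yᵢ/(1 + 5h),  a factor of 1/3.5
forward  = [(1 - 5h)^i for i in 0:4]     # oscillates with growing magnitude
backward = [(1 + 5h)^(-i) for i in 0:4]  # decays, as the true solution e⁻⁵ˣ does
forward, backward
```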
##### Example: The pendulum
The differential equation describing the simple pendulum is
```math
\theta''(t) = - \frac{g}{l}\sin(\theta(t)).
```
The typical approach to solving for $\theta(t)$ is to use the small-angle approximation that $\sin(x) \approx x$, and then the differential equation simplifies to:
$\theta''(t) = -g/l \cdot \theta(t)$, which is easily solved.
Here we try to get an answer numerically. However, the problem, as stated, is not a first order equation due to the $\theta''(t)$ term. If we let $u(t) = \theta(t)$ and $v(t) = \theta'(t)$, then we get *two* coupled first order equations:
```math
v'(t) = -g/l \cdot \sin(u(t)), \quad u'(t) = v(t).
```
We can try the Euler method here. A simple approach might be this iteration scheme:
```math
\begin{align*}
x_{n+1} &= x_n + h,\\
u_{n+1} &= u_n + h v_n,\\
v_{n+1} &= v_n - h \cdot g/l \cdot \sin(u_n).
\end{align*}
```
Here we need *two* initial conditions: one for the initial value
$u(t_0)$ and one for the initial value of $u'(t_0)$. We have seen that if we start at an angle $a$ and release the bob from rest, so $u'(0)=0$, the linearized model gives a sinusoidal answer. What happens here, with $l=5$ and $g=9.8$?
We write a function to solve this starting from $(x_0, y_0)$ and ending at $x_n$:
```julia;
function euler2(x0, xn, y0, yp0, n; g=9.8, l = 5)
xs, us, vs = zeros(n+1), zeros(n+1), zeros(n+1)
xs[1], us[1], vs[1] = x0, y0, yp0
h = (xn - x0)/n
for i = 1:n
xs[i+1] = xs[i] + h
us[i+1] = us[i] + h * vs[i]
vs[i+1] = vs[i] + h * (-g / l) * sin(us[i])
end
linterp(xs, us)
end
```
Let's take $a = \pi/4$ as the initial angle; then the approximate
solution should be $\pi/4\cos(\sqrt{g/l}x)$ with period $T =
2\pi\sqrt{l/g}$. We first try to plot this over 4 periods:
```julia;
𝗅, 𝗀 = 5, 9.8
𝖳 = 2pi * sqrt(𝗅/𝗀)
𝗑0, 𝗑n, 𝗒0, 𝗒p0 = 0, 4𝖳, pi/4, 0
plot(euler2(𝗑0, 𝗑n, 𝗒0, 𝗒p0, 20), 0, 4𝖳)
```
Something looks terribly amiss. The issue is that the step size, $h$, is
too large to capture the oscillations: there are basically only $5$
steps for a full up-and-down motion. Instead, we aim for a step size of $h = 1/20$,
so over $[0, 4T]$ we need not $n = 20$, but $n = 4 \cdot 20 \cdot T \approx 360$ steps. To this
graph, we add the approximate one:
```julia;
plot(euler2(𝗑0, 𝗑n, 𝗒0, 𝗒p0, 360), 0, 4𝖳)
plot!(x -> pi/4*cos(sqrt(𝗀/𝗅)*x), 0, 4𝖳)
```
Even now, we still see that something seems amiss, though the issue is
not as dramatic as before. The oscillatory nature of the pendulum is
seen, but in the Euler solution, the amplitude grows, which would
necessarily mean energy is being put into the system. A familiar
instance of a pendulum would be a child on a swing. Without pumping
the legs - putting energy in the system - the height of the swing's
arc will not grow. Though we now have oscillatory motion, this growth
indicates the solution is still not quite right. The issue is likely
due to each step mildly overcorrecting and resulting in an overall
growth. One of the questions pursues this a bit further.
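This growth can be quantified by tracking the pendulum's mechanical energy, which the true dynamics conserve. The sketch below is our addition: it uses the mass-normalized energy $E = \frac{1}{2}l^2 v^2 + gl(1-\cos(u))$ and the same update rule as `euler2`:

```julia; hold=true
# track E = (1/2)l²v² + gl(1 - cos u) along explicit Euler steps of
# u' = v, v' = -(g/l)sin(u)
function pendulum_energy(h, nsteps; g=9.8, l=5, u=pi/4, v=0.0)
    E(u, v) = 1/2 * l^2 * v^2 + g * l * (1 - cos(u))
    E0 = E(u, v)
    for _ in 1:nsteps
        u, v = u + h*v, v + h * (-g/l) * sin(u)  # sin of the *old* angle, as in euler2
    end
    (E0, E(u, v))
end

pendulum_energy(0.05, 1000)  # the final energy exceeds the initial energy
```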
## Questions
##### Question
Use Euler's method with $n=5$ to approximate $u(1)$ where
```math
u'(x) = x - u(x), \quad u(0) = 1
```
```julia; hold=true; echo=false
F(y,x) = x - y
x0, xn, y0 = 0, 1, 1
val = euler(F, x0, xn, y0, 5)(1)
numericq(val)
```
##### Question
Consider the equation
```math
y' = x \cdot \sin(y), \quad y(0) = 1.
```
Use Euler's method with $n=50$ to find the value of $y(5)$.
```julia; hold=true; echo=false
F(y, x) = x * sin(y)
x0, xn, y0 = 0, 5, 1
n = 50
u = euler(F, x0, xn, y0, n)
numericq(u(xn))
```
##### Question
Consider the ordinary differential equation
```math
\frac{dy}{dx} = 1 - 2\frac{y}{x}, \quad y(1) = 0.
```
Use Euler's method to solve for $y(2)$ when $n=50$.
```julia; hold=true; echo=false
F(y, x) = 1 - 2y/x
x0, xn, y0 = 1, 2, 0
n = 50
u = euler(F, x0, xn, y0, n)
numericq(u(xn))
```
##### Question
Consider the ordinary differential equation
```math
\frac{dy}{dx} = \frac{y \cdot \log(y)}{x}, \quad y(2) = e.
```
Use Euler's method to solve for $y(3)$ when $n=25$.
```julia; hold=true; echo=false
F(y, x) = y*log(y)/x
x0, xn, y0 = 2, 3, exp(1)
n = 25
u = euler(F, x0, xn, y0, n)
numericq(u(xn))
```
##### Question
Consider the first-order non-linear ODE
```math
y' = y \cdot (1-2x), \quad y(0) = 1.
```
Use Euler's method with $n=50$ to approximate the solution $y$ over $[0,2]$.
What is the value at $x=1/2$?
```julia; hold=true; echo=false
F(y, x) = y * (1-2x)
x0, xn, y0 = 0, 2, 1
n = 50
u = euler(F, x0, xn, y0, n)
numericq(u(1/2))
```
What is the value at $x=3/2$?
```julia; hold=true; echo=false
F(y, x) = y * (1-2x)
x0, xn, y0 = 0, 2, 1
n = 50
u = euler(F, x0, xn, y0, n)
numericq(u(3/2))
```
##### Question: The pendulum revisited.
The issue with the pendulum's solution growing in amplitude can be
addressed using a modification to the Euler method attributed to
[Cromer](http://astro.physics.ncsu.edu/urca/course_files/Lesson14/index.html). The
fix is to replace the term `sin(us[i])` in the line `vs[i+1] = vs[i] + h * (-g / l) *
sin(us[i])` of the `euler2` function with `sin(us[i+1])`, which uses the updated
angle in the second step in place of its value before the step.
Modify the `euler2` function to implement the Euler-Cromer method. What do you see?
```julia; hold=true; echo=false
choices = [
"The same as before - the amplitude grows",
"The solution is identical to that of the approximation found by linearization of the sine term",
"The solution has a constant amplitude, but its period is slightly *shorter* than that of the approximate solution found by linearization",
"The solution has a constant amplitude, but its period is slightly *longer* than that of the approximate solution found by linearization"]
ans = 4
radioq(choices, ans, keep_order=true)
```

BIN
CwJ/ODEs/figures/euler.png Normal file

914
CwJ/ODEs/odes.jmd Normal file

@ -0,0 +1,914 @@
# ODEs
This section uses these add-on packages:
```julia
using CalculusWithJulia
using Plots
using SymPy
```
```julia; echo=false; results="hidden"
using CalculusWithJulia.WeaveSupport
const frontmatter = (
title = "ODEs",
description = "Calculus with Julia: ODEs",
    tags = ["CalculusWithJulia", "odes"],
);
nothing
```
----
Some relationships are easiest to describe in terms of rates or derivatives. For example:
* Knowing the speed of a car and how long it has been driving can
  determine the car's location.
* One of Newton's famous laws, $F=ma$, describes the force on an
object of mass $m$ in terms of the acceleration. The acceleration
is the derivative of velocity, which in turn is the derivative of
position. So if we know the rates of change of $v(t)$ or $x(t)$, we
can differentiate to find $F$.
* Newton's law of [cooling](http://tinyurl.com/z4lmetp). This
describes the temperature change in an object due to a difference in
temperature with the object's surroundings. The formula being,
$T'(t) = -r \left(T(t) - T_a \right)$, where $T(t)$ is temperature at time $t$
and $T_a$ the ambient temperature.
* [Hooke's law](http://tinyurl.com/kbz7r8l) relates force on an object
to the position on the object, through $F = k x$. This is
appropriate for many systems involving springs. Combined with
Newton's law $F=ma$, this leads to an equation that $x$ must
satisfy: $m x''(t) = k x(t)$.
## Motion with constant acceleration
Let's consider the case of constant acceleration. This describes how nearby objects fall to earth, as the force due to gravity is assumed to be a constant, so the acceleration is the constant force divided by the constant mass.
With constant acceleration, what is the velocity?
As mentioned, we have $dv/dt = a$ for any velocity function $v(t)$, but in this case, the right hand side is assumed to be constant. How does this restrict the possible functions, $v(t)$, that the velocity can be?
Here we can integrate to find that any answer must look like the following for some constant of integration:
```math
v(t) = \int \frac{dv}{dt} dt = \int a dt = at + C.
```
If we are given the velocity at a fixed time, say $v(t_0) = v_0$, then we can use the definite integral to get:
```math
v(t) - v(t_0) = \int_{t_0}^t a dt = at - a t_0.
```
Solving, gives:
```math
v(t) = v_0 + a (t - t_0).
```
This expresses the velocity at time $t$ in terms of the initial velocity, the constant acceleration and the time duration.
A natural question might be, is this the *only* possible answer? There are a few useful ways to think about this.
First, suppose there were another, say $u(t)$. Then define $w(t)$ to be the difference: $w(t) = v(t) - u(t)$. We would have that $w'(t) = v'(t) - u'(t) = a - a = 0$. But by the mean value theorem, a function whose derivative is identically $0$ will necessarily be a constant. So $v$ and $u$ can differ by at most a constant; but as both are equal at $t_0$, they are equal for all $t$.
Second, since the derivative of any solution is a continuous function, it is true by the fundamental theorem of calculus that it *must* satisfy the form for the antiderivative. The initial condition makes the answer unique, as the indeterminate $C$ can take only one value.
Summarizing, we have
> If ``v(t)`` satisfies the equation: ``v'(t) = a``, ``v(t_0) = v_0,``
> then the unique solution will be ``v(t) = v_0 + a (t - t_0)``.
Next, what about position? Here we know that the time derivative of position yields the velocity, so we should have that the unknown position function satisfies this equation and initial condition:
```math
x'(t) = v(t) = v_0 + a (t - t_0), \quad x(t_0) = x_0.
```
Again, we can integrate to get an answer for any value $t$:
```math
x(t) - x(t_0) = \int_{t_0}^t v(t) dt = (v_0t + \frac{1}{2}a t^2 - at_0 t) \Big|_{t_0}^t =
(v_0 - at_0)(t - t_0) + \frac{1}{2} a (t^2 - t_0^2).
```
There are three constants: the initial value for the independent variable, $t_0$, and the two initial values for the velocity and position, $v_0, x_0$. Assuming $t_0 = 0$, we can simplify the above to get a formula familiar from introductory physics:
```math
x(t) = x_0 + v_0 t + \frac{1}{2} at^2.
```
Again, the mean value theorem can show that with the initial value specified this is the only possible solution.
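A quick numeric check of this formula, with illustrative values of our choosing (an object dropped from rest, so $x_0 = 0$, $v_0 = 0$, and $a = -9.8$):

```julia; hold=true
pos(t; x0=0.0, v0=0.0, a=-9.8) = x0 + v0*t + 1/2 * a * t^2
pos(1.0)   # -4.9: the object falls 4.9 units in the first second
# the velocity v(t) = v₀ + a⋅t is recovered by a central difference:
(pos(1 + 1e-6) - pos(1 - 1e-6)) / 2e-6   # ≈ -9.8
```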
## First-order initial-value problems
The two problems just looked at can be summarized by the following. We are looking for solutions to an equation of the form (taking $y$ and $x$ as the variables, in place of $x$ and $t$):
```math
y'(x) = f(x), \quad y(x_0) = y_0.
```
This is called an *ordinary differential equation* (ODE), as it is an equation involving the ordinary derivative of an unknown function, $y$.
This is called a first-order, ordinary differential equation, as there is only the first derivative involved.
This is called an initial-value problem, as the value at the initial point $x_0$ is specified as part of the problem.
#### Examples
Let's look at a few more examples, and then generalize.
##### Example: Newton's law of cooling
Consider the ordinary differential equation given by Newton's law of cooling:
```math
T'(t) = -r (T(t) - T_a), \quad T(0) = T_0
```
This equation is also first order, as it involves just the first derivative, but notice that on the right hand side is the function $T$, not the variable being differentiated against, $t$.
As we have a difference on the right hand side, we rename the variable through $U(t) = T(t) - T_a$. Then, as $U'(t) = T'(t)$, we have the equation:
```math
U'(t) = -r U(t), \quad U(0) = U_0.
```
This shows that the rate of change of $U$ depends on $U$. Large positive values indicate a negative rate of change - a push back towards the origin - and large negative values of $U$ indicate a positive rate of change - again, a push back towards the origin. We shouldn't be surprised to see either a steady decay towards the origin or oscillations about the origin.
What will we find? This equation is different from the previous two
equations, as the function $U$ appears on both sides. However, we can
rearrange to get:
```math
\frac{dU}{dt}\frac{1}{U(t)} = -r.
```
This suggests integrating both sides, as before. Here we do the "$u$"-substitution $u = U(t)$, so $du = U'(t) dt$:
```math
-rt + C = \int \frac{dU}{dt}\frac{1}{U(t)} dt =
\int \frac{1}{u}du = \log(u).
```
Solving gives: $u = U(t) = e^C e^{-rt}$. Using the initial condition forces $e^C = U(0) = T(0) - T_a$ and so our solution in terms of $T(t)$ is:
```math
T(t) - T_a = (T_0 - T_a) e^{-rt}.
```
In words, the initial difference in temperature of the object and the environment exponentially decays to $0$.
That is, as $t > 0$ goes to $\infty$, the right hand will go to $0$ for $r > 0$, so $T(t) \rightarrow T_a$ - the temperature of the object will reach the ambient temperature. The rate of this is largest when the difference between $T(t)$ and $T_a$ is largest, so when objects are cooling the statement "hotter things cool faster" is appropriate.
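This solution can be checked numerically, using the values $T_0=200$, $T_a=72$, and $r=1/2$; a central difference (our check, not the text's) confirms that $T'(t) = -r(T(t) - T_a)$:

```julia; hold=true
T0, Ta, r = 200, 72, 1/2
T(t) = Ta + (T0 - Ta) * exp(-r*t)
Tp = (T(2 + 1e-6) - T(2 - 1e-6)) / 2e-6  # central difference for T'(2)
T(0), Tp, -r * (T(2) - Ta)               # 200.0; the last two agree
```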
A graph of the solution for $T_0=200$ and $T_a=72$ and $r=1/2$ is made
as follows. We've added a few line segments from the defining formula,
and see that they are indeed tangent to the solution found for the differential equation.
```julia; hold=true; echo=false
T0, Ta, r = 200, 72, 1/2
f(u, t) = -r*(u - Ta)
v(t) = Ta + (T0 - Ta) * exp(-r*t)
p = plot(v, 0, 6, linewidth=4, legend=false)
[plot!(p, x -> v(a) + f(v(a), a) * (x-a), 0, 6) for a in 1:2:5]
p
```
The above is implicitly assuming that there could be no other
solution, than the one we found. Is that really the case? We will see
that there is a theorem that can answer this, but in this case, the
trick of taking the difference of two equations satisfying the
equation leads to the equation $W'(t) = -r W(t), \text{ and } W(0) =
0$. This equation has a general solution of $W(t) = Ce^{-rt}$ and the
initial condition forces $C=0$, so $W(t) = 0$, as before. Hence, the
initial-value problem for Newton's law of cooling has a unique
solution.
In general, the equation could be written as (again using $y$ and $x$ as the variables):
```math
y'(x) = g(y), \quad y(x_0) = y_0
```
This is called an *autonomous*, first-order ODE, as the right-hand side does not depend on $x$ (except through ``y(x)``).
Let $F(y) = \int_{y_0}^y du/g(u)$, then a solution to the above is $F(y) = x - x_0$, assuming $1/g(u)$ is integrable.
##### Example: Toricelli's law
[Toricelli's Law](http://tinyurl.com/hxvf3qp) describes the speed a jet of water will leave a vessel through an opening below the surface of the water. The formula is $v=\sqrt{2gh}$, where $h$ is the height of the water above the hole and $g$ the gravitational constant. This arises from equating the kinetic energy gained, $1/2 mv^2$ and potential energy lost, $mgh$, for the exiting water.
An application of Torricelli's law is to describe the volume of water in a tank over time, $V(t)$. Imagine a cylinder of cross-sectional area $A$ with a hole of cross-sectional area $a$ at the bottom. Then $V(t) = A h(t)$, with $h$ giving the height. The change in volume over $\Delta t$ units of time must be given by the value $a v(t) \Delta t$, or
```math
V(t+\Delta t) - V(t) = -a v(t) \Delta t = -a\sqrt{2gh(t)}\Delta t
```
This suggests the following formula, written in terms of $h(t)$ should apply:
```math
A\frac{dh}{dt} = -a \sqrt{2gh(t)}.
```
Rearranging, this gives an equation
```math
\frac{dh}{dt} \frac{1}{\sqrt{h(t)}} = -\frac{a}{A}\sqrt{2g}.
```
Integrating both sides yields:
```math
2\sqrt{h(t)} = -\frac{a}{A}\sqrt{2g} t + C.
```
If $h(0) = h_0 = V(0)/A$, we can solve for $C = 2\sqrt{h_0}$, or
```math
\sqrt{h(t)} = \sqrt{h_0} -\frac{1}{2}\frac{a}{A}\sqrt{2g} t.
```
Setting $h(t)=0$ and solving for $t$ shows that the time to drain the tank would be $(2A)/(a\sqrt{2g})\sqrt{h_0}$.
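Putting illustrative numbers to this (the values are ours, not the text's): a cylindrical tank with cross-sectional area $A = 1$, a hole of area $a = 0.01$, and initial height $h_0 = 1$, in SI units:

```julia; hold=true
A, a, g, h0 = 1.0, 0.01, 9.8, 1.0
height(t) = (sqrt(h0) - 1/2 * (a/A) * sqrt(2g) * t)^2  # from 2√h = 2√h₀ - (a/A)√(2g)t
t_drain = 2A / (a * sqrt(2g)) * sqrt(h0)
t_drain, height(t_drain)   # ≈ 45.2 seconds to drain; the height is then 0
```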
##### Example
Consider now the equation
```math
y'(x) = y(x)^2, \quad y(x_0) = y_0.
```
This is called a *non-linear* ordinary differential equation, as the $y$ variable on the right hand side presents itself in a non-linear form (it is squared). These equations may have solutions that are not defined for all times.
This particular problem can be solved as before by moving the $y^2$ to the left hand side and integrating to yield:
```math
y(x) = - \frac{1}{C + x},
```
and with the initial condition:
```math
y(x) = \frac{y_0}{1 - y_0(x - x_0)}.
```
This answer can demonstrate *blow-up*. That is, in a finite range of $x$ values, the $y$ value can go to infinity. For example, if the initial conditions are $x_0=0$ and $y_0 = 1$, then $y(x) = 1/(1-x)$, which for $x \geq x_0$ is only defined on $[0,1)$, as at $x=1$ there is a vertical asymptote.
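A numeric illustration of the blow-up, using the stated initial conditions $x_0 = 0$ and $y_0 = 1$ (the central-difference check is our addition):

```julia; hold=true
yblow(x) = 1 / (1 - x)
# y' = y² holds; at x = 0.5, y(0.5)² = 4:
(yblow(0.5 + 1e-6) - yblow(0.5 - 1e-6)) / 2e-6   # ≈ 4
yblow(0.9), yblow(0.99), yblow(0.999)            # ≈ 10, 100, 1000: off to infinity as x → 1⁻
```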
## Separable equations
We've seen equations of the form $y'(x) = f(x)$ and $y'(x) = g(y)$ both solved by integrating. The same tricks will work for equations of the form $y'(x) = f(x) \cdot g(y)$. Such equations are called *separable*.
Basically, we equate up to constants
```math
\int \frac{dy}{g(y)} = \int f(x) dx.
```
For example, suppose we have the equation
```math
\frac{dy}{dx} = x \cdot y(x), \quad y(x_0) = y_0.
```
Then we can find a solution, $y(x)$ through:
```math
\int \frac{dy}{y} = \int x dx,
```
or
```math
\log(y) = \frac{x^2}{2} + C
```
Which yields:
```math
y(x) = e^C e^{\frac{1}{2}x^2}.
```
Substituting in $x_0$ yields a value for $C$ in terms of the initial information $y_0$ and $x_0$.
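Carrying out that substitution numerically, with illustrative initial data $x_0 = 1$ and $y_0 = 3$ (our choices): $e^C = y_0 e^{-x_0^2/2}$, and the resulting function satisfies both the initial condition and the equation:

```julia; hold=true
x0, y0 = 1.0, 3.0
eC = y0 * exp(-x0^2 / 2)    # the constant from y(x₀) = y₀
ysep(x) = eC * exp(x^2 / 2)
ysep(x0)                    # recovers y₀
# y'(x) = x⋅y(x), checked at x = 1.2 by a central difference:
(ysep(1.2 + 1e-6) - ysep(1.2 - 1e-6)) / 2e-6, 1.2 * ysep(1.2)
```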
## Symbolic solutions
Differential equations are classified according to their type. Different types have different methods for solution, when a solution exists.
The first-order initial value equations we have seen can be described generally by
```math
\begin{align*}
y'(x) &= F(y,x),\\
y(x_0) &= y_0.
\end{align*}
```
Special cases include:
* *linear* if the function $F$ is linear in $y$;
* *autonomous* if $F(y,x) = G(y)$ (a function of $y$ alone);
* *separable* if $F(y,x) = G(y)H(x)$.
As seen, separable equations are approached by moving the "$y$" terms to one side, the "$x$" terms to the other and integrating. This also applies to autonomous equations then. There are other families of equation types that have exact solutions, and techniques for solution, summarized at this [Wikipedia page](http://tinyurl.com/zywzz4q).
Rather than go over these various families, we demonstrate that `SymPy` can solve many of these equations symbolically.
The `solve` function in `SymPy` solves equations for unknown
*variables*. As a differential equation involves an unknown *function*
there is a different function, `dsolve`. The basic idea is to describe
the differential equation using a symbolic function and then call
`dsolve` to solve the expression.
Symbolic functions are defined by the `@syms` macro (also see `?symbols`) using parentheses to distinguish a function from a variable:
```julia;
@syms x u() # a symbolic variable and a symbolic function
```
We will solve the following, known as the *logistic equation*:
```math
u'(x) = a u(1-u), \quad a > 0
```
Before beginning, we look at the form of the equation. When $u=0$ or
$u=1$ the rate of change is $0$, so we expect the function might be
bounded within that range. If not, when $u$ gets bigger than $1$, then
the slope is negative and when $u$ gets less than $0$, the slope is
positive, so there will at least be a drift back to the range
$[0,1]$. Let's see exactly what happens. We define a parameter,
restricting `a` to be positive:
```julia;
@syms a::positive
```
To specify a derivative of `u` in our equation we can use `diff(u(x),x)` but here, for visual simplicity, use the `Differential` operator, as follows:
```julia;
D = Differential(x)
eqn = D(u)(x) ~ a * u(x) * (1 - u(x)) # use l \Equal[tab] r, Eq(l,r), or just l - r
```
In the above, we evaluate the symbolic function at the variable `x`
through the use of `u(x)` in the expression. The equation above uses `~` to combine the left- and right-hand sides as an equation in `SymPy`. (A unicode equals is also available for this task). This is a shortcut for `Eq(l,r)`, but even just using `l - r` would suffice, as the default assumption for an equation is that it is set to `0`.
The `Differential` operation is borrowed from the `ModelingToolkit` package, which will be introduced later.
To finish, we call `dsolve` to find a solution (if possible):
```julia;
out = dsolve(eqn)
```
This answer - to a first-order equation - has one free constant,
`C_1`, which can be solved for from an initial condition. We can see
that when $a > 0$, as $x$ goes to positive infinity the solution goes
to $1$, and when $x$ goes to negative infinity, the solution goes to $0$
and otherwise is trapped in between, as expected.
The limits are confirmed by investigating the limits of the right-hand:
```julia;
limit(rhs(out), x => oo), limit(rhs(out), x => -oo)
```
We can confirm that the solution is always increasing, hence trapped within ``[0,1]`` by observing that the derivative is positive when `C₁` is positive:
```julia;
diff(rhs(out),x)
```
Suppose that $u(0) = 1/2$. Can we solve for $C_1$ symbolically? We can use `solve`, but first we will need to get the symbol for `C_1`:
```julia;
eq = rhs(out) # just the right hand side
C1 = first(setdiff(free_symbols(eq), (x,a))) # fish out constant, it is not x or a
c1 = solve(eq(x=>0) - 1//2, C1)
```
And we plug in with:
```julia;
eq(C1 => c1[1])
```
That's a lot of work. The `dsolve` function in `SymPy` allows initial conditions to be specified for some equations. In this case, ours is $x_0=0$ and $y_0=1/2$. The extra arguments passed in through a dictionary to the `ics` argument:
```julia;
x0, y0 = 0, Sym(1//2)
dsolve(eqn, u(x), ics=Dict(u(x0) => y0))
```
(The one subtlety is the need to write the rational value as a symbolic expression, as otherwise it will get converted to a floating point value prior to being passed along.)
##### Example: Hooke's law
In the first example, we solved for position, $x(t)$, from an assumption of constant acceleration in two steps. The equation relating the two is a second-order equation: $x''(t) = a$, so two constants are generated. That a second-order equation could be reduced to two first-order equations is no happy accident; it can always be done. Rather than show the technique, though, we demonstrate that `SymPy` can also handle some second-order ODEs.
Hooke's law relates the force on an object to its position via $F=ma = -kx$, or $x''(t) = -(k/m)x(t)$.
Suppose $k > 0$. Then we can solve, similar to the above, with:
```julia;
@syms k::positive m::positive
D2 = D ∘ D # takes second derivative through composition
eqnh = D2(u)(x) ~ -(k/m)*u(x)
dsolve(eqnh)
```
Here we find two constants, as anticipated, for we would guess that
two integrations are needed in the solution.
Suppose the spring were started by pulling it down to a bottom and
releasing. The initial position at time $0$ would be $a$, say, and
initial velocity $0$. Here we get the solution specifying initial
conditions on the function and its derivative (expressed through
`u'`):
```julia;
dsolve(eqnh, u(x), ics = Dict(u(0) => -a, D(u)(0) => 0))
```
We get that the motion will follow
$u(x) = -a \cos(\sqrt{k/m}x)$. This is simple oscillatory behavior. As the spring stretches, the force gets large enough to pull it back, and as it compresses the force gets large enough to push it back. The amplitude of this oscillation is $a$ and the period $2\pi/\sqrt{k/m}$. Larger $k$ values mean shorter periods; larger $m$ values mean longer periods.
##### Example: the pendulum
The simple gravity [pendulum](http://tinyurl.com/h8ys6ts) is an idealization of a physical pendulum that models a "bob" with mass $m$ swinging on a massless rod of length $l$ in a frictionless world governed only by the gravitational constant $g$. The motion can be described by this differential equation for the angle, $\theta$, made from the vertical:
```math
\theta''(t) + \frac{g}{l}\sin(\theta(t)) = 0
```
Can this second-order equation be solved by `SymPy`?
```julia;
@syms g::positive l::positive theta()=>"θ"
eqnp = D2(theta)(x) + g/l*sin(theta(x))
```
Trying to do so can cause `SymPy` to hang or simply give up and repeat its input; no easy answer is forthcoming for this equation.
In general, for the first-order initial value problem characterized by
$y'(x) = F(y,x)$, there are conditions
([Peano](http://tinyurl.com/h663wba) and
[Picard-Lindelof](http://tinyurl.com/3rbde5e)) that can guarantee the
existence (and uniqueness) of a solution locally, but there may not be
an accompanying method to actually find it. This particular problem
has a solution, but it cannot be written in terms of elementary
functions.
However, as [Huygens](https://en.wikipedia.org/wiki/Christiaan_Huygens) first noted, if the angles involved are small, then we approximate the solution through the linearization $\sin(\theta(t)) \approx \theta(t)$. The resulting equation for an approximate answer is just that of Hooke:
```math
\theta''(t) + \frac{g}{l}\theta(t) = 0
```
Here, the solution is in terms of sines and cosines, with period given by $T = 2\pi/\sqrt{k} = 2\pi\cdot\sqrt{l/g}$. The answer does not depend on the mass, $m$, of the bob nor the amplitude of the motion, provided the small-angle approximation is valid.
If we pull the bob back an angle $a$ and release it, then the initial conditions are $\theta(0) = a$ and $\theta'(0) = 0$. This gives the solution:
```julia;
eqnp₁ = D2(u)(x) + g/l * u(x)
dsolve(eqnp₁, u(x), ics=Dict(u(0) => a, D(u)(0) => 0))
```
##### Example: hanging cables
A chain hangs between two supports a distance $L$ apart. What shape
will it take if there are no forces outside of gravity acting on it?
What about if the force is uniform along length of the chain, like a
suspension bridge? How will the shape differ then?
Let $y(x)$ describe the chain at position $x$, with $0 \leq x \leq L$,
say. We consider first the case of the chain with no force save
gravity. Let $w(x)$ be the density of the chain at $x$, taken below to be a constant.
The chain is in equilibrium, so tension, $T(x)$, in the chain will be
in the direction of the derivative. Let $V$ be the vertical component
and $H$ the horizontal component. With only gravity acting on the
chain, the value of $H$ will be a constant. The value of $V$ will vary
with position.
At a point $x$, there is $s(x)$ amount of chain with weight $w \cdot s(x)$. The tension is in the direction of the tangent line, so:
```math
\tan(\theta) = y'(x) = \frac{w s(x)}{H}.
```
In terms of an increment of chain, we have:
```math
\frac{w ds}{H} = d(y'(x)).
```
That is, the ratio of the vertical and horizontal tensions in the increment are in balance with the differential of the derivative.
But $ds = \sqrt{dx^2 + dy^2} = \sqrt{dx^2 + y'(x)^2 dx^2} = \sqrt{1 + y'(x)^2}dx$, so we can simplify to:
```math
\frac{w}{H}\sqrt{1 + y'(x)^2}dx =y''(x)dx.
```
This yields the second-order equation:
```math
y''(x) = \frac{w}{H} \sqrt{1 + y'(x)^2}.
```
We enter this into `Julia`:
```julia;
@syms w::positive H::positive y()
eqnc = D2(y)(x) ~ (w/H) * sqrt(1 + y'(x)^2)
```
Unfortunately, `SymPy` needs a bit of help with this problem, by breaking the problem into
steps.
For the first step we solve for the derivative. Let $u = y'$,
then we have $u'(x) = (w/H)\sqrt{1 + u(x)^2}$:
```julia;
eqnc₁ = subs(eqnc, D(y)(x) => u(x))
```
and can solve via:
```julia;
outc = dsolve(eqnc₁)
```
So $y'(x) = u(x) = \sinh(C_1 + w \cdot x/H)$. This can be solved by direct
integration as there is no $y(x)$ term on the right hand
side.
```julia;
D(y)(x) ~ rhs(outc)
```
We see a simple linear transformation involving the hyperbolic sine. To avoid `SymPy` struggling with the above equation, and knowing the hyperbolic sine is the derivative of the hyperbolic cosine, we anticipate an answer and verify it:
```julia;
yc = (H/w)*cosh(C1 + w*x/H)
diff(yc, x) == rhs(outc) # == not \Equal[tab]
```
The shape is a hyperbolic cosine, known as the catenary.
```julia; echo=false
imgfile = "figures/verrazano-narrows-bridge-anniversary-historic-photos-2.jpeg"
caption = """
The cables of an unloaded suspension bridge have a different shape than a loaded suspension bridge. As seen, the cables in this [figure](https://www.brownstoner.com/brooklyn-life/verrazano-narrows-bridge-anniversary-historic-photos/) would be modeled by a catenary.
"""
ImageFile(:ODEs, imgfile, caption)
```
----
If the chain has a uniform load -- like a suspension bridge with a deck -- sufficient to make the weight of the chain negligible, then how does the above change? Then the vertical tension comes from $Udx$ and not $w ds$, so the equation becomes instead:
```math
\frac{Udx}{H} = d(y'(x)).
```
Thus $y''(x) = U/H$, a constant, and so the answer will be a parabola.
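Integrating twice, with $y(0) = y_0$ and $y'(0) = y'_0$, makes the parabola explicit:

```math
y'(x) = y'_0 + \frac{U}{H}x, \quad y(x) = y_0 + y'_0 x + \frac{1}{2}\frac{U}{H}x^2.
```

So a uniformly loaded cable, as on a suspension bridge with a deck, hangs in a parabola, whereas the unloaded chain hangs in a catenary.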
##### Example: projectile motion in a medium
The first example describes projectile motion without air resistance. If we use $(x(t), y(t))$ to describe position at time $t$, the functions satisfy:
```math
x''(t) = 0, \quad y''(t) = -g.
```
That is, the $x$ position - where no forces act - has $0$ acceleration, and the $y$ position - where the force of gravity acts - has constant acceleration, $-g$, where $g=9.8m/s^2$ is the gravitational constant. These equations can be solved to give:
```math
x(t) = x_0 + v_0 \cos(\alpha) t, \quad y(t) = y_0 + v_0\sin(\alpha)t - \frac{1}{2}g \cdot t^2.
```
Furthermore, we can solve for $t$ from $x(t)$, to get an equation describing $y(x)$. Here are all the steps:
```julia; hold=true
@syms x0::real y0::real v0::real alpha::real g::real
@syms t x u()
a1 = dsolve(D2(u)(x) ~ 0, u(x), ics=Dict(u(0) => x0, D(u)(0) => v0 * cos(alpha)))
a2 = dsolve(D2(u)(x) ~ -g, u(x), ics=Dict(u(0) => y0, D(u)(0) => v0 * sin(alpha)))
ts = solve(t - rhs(a1), x)[1]
y = simplify(rhs(a2)(t => ts))
sympy.Poly(y, x).coeffs()
```
Though `y` is messy, it can be seen that the answer is a quadratic polynomial in $x$ yielding the familiar
parabolic motion for a trajectory. The output shows the coefficients.
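This conclusion can be double-checked numerically in plain `Julia`: eliminating $t$ and taking second differences in $x$ shows the trajectory is a parabola opening downward. The parameter values below are arbitrary:

```julia
# Verify numerically that y, as a function of x, is quadratic.
g, α, v0, x0, y0 = 9.8, pi/4, 20.0, 0.0, 0.0     # arbitrary test values
x(t) = x0 + v0*cos(α)*t
y(t) = y0 + v0*sin(α)*t - g*t^2/2
# eliminating t via t = (x - x0)/(v0*cos(α)) gives y as a function of x:
yx(xv) = y((xv - x0) / (v0*cos(α)))
h = 0.5
d2(xv) = (yx(xv + h) - 2yx(xv) + yx(xv - h)) / h^2   # second difference
d2(1.0) ≈ d2(7.0) ≈ -g / (v0*cos(α))^2               # constant ⇒ parabola
```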
In a resistive medium, there are drag forces at play. If this force is
proportional to the velocity, say, with proportion $\gamma$, then the
equations become:
```math
\begin{align*}
x''(t) &= -\gamma x'(t), & \quad y''(t) &= -\gamma y'(t) -g, \\
x(0) &= x_0, &\quad y(0) &= y_0,\\
x'(0) &= v_0\cos(\alpha),&\quad y'(0) &= v_0 \sin(\alpha).
\end{align*}
```
We now attempt to solve these.
```julia
@syms alpha::real, γ::positive, t::positive, g::real, u(), v()
@syms x_0::real y_0::real v_0::real
Dₜ = Differential(t)
eq₁ = Dₜ(Dₜ(u))(t) ~ - γ * Dₜ(u)(t)
eq₂ = Dₜ(Dₜ(v))(t) ~ -g - γ * Dₜ(v)(t)
a₁ = dsolve(eq₁, ics=Dict(u(0) => x_0, Dₜ(u)(0) => v_0 * cos(alpha)))
a₂ = dsolve(eq₂, ics=Dict(v(0) => y_0, Dₜ(v)(0) => v_0 * sin(alpha)))
ts = solve(x - rhs(a₁), t)[1]
yᵣ = rhs(a₂)(t => ts)
```
This gives $y$ as a function of $x$.
There are a lot of symbols. Let's simplify by setting the constants $x_0=y_0=0$:
```julia;
yᵣ₁ = yᵣ(x_0 => 0, y_0 => 0)
```
What is the trajectory? We see
that the `log` function part will have issues when
$-\gamma x + v_0 \cos(\alpha) = 0$.
If we fix some parameters, we can plot.
```julia;
v₀, γ₀, α = 200, 1/2, pi/4
soln = yᵣ₁(v_0=>v₀, γ=>γ₀, alpha=>α, g=>9.8)
plot(soln, 0, v₀ * cos(α) / γ₀ - 1/10, legend=false)
```
We can see that the resistance makes the path quite non-symmetric.
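A small Euler simulation in plain `Julia` (a numeric sketch, not the symbolic solution above) makes the asymmetry concrete: the apex of the flight occurs well past the midpoint of the range, as the horizontal velocity decays over time. The parameters match the plot above:

```julia
# Euler steps for x'' = -γx', y'' = -γy' - g, from launch until landing.
function simulate(; g = 9.8, γ = 1/2, α = pi/4, v0 = 200.0, dt = 1e-4)
    x, y = 0.0, 0.0
    vx, vy = v0*cos(α), v0*sin(α)
    xs, ys = Float64[], Float64[]
    while y >= 0.0
        push!(xs, x); push!(ys, y)
        x += vx*dt; y += vy*dt
        vx += -γ*vx*dt
        vy += (-g - γ*vy)*dt
    end
    xs, ys
end
xs, ys = simulate()
xs[argmax(ys)] > (first(xs) + last(xs))/2   # apex lies past the midpoint
```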
## Visualizing a first-order initial value problem
The solution, $y(x)$, is known through its derivative. A useful tool to visualize the solution to a first-order differential equation is the [slope field](http://tinyurl.com/jspzfok) (or direction field) plot, which, at different values of $(x,y)$, plots a vector whose slope is given by $y'(x)$. The `vectorfieldplot` of the `CalculusWithJulia` package can be used to produce these.
For example, in a previous example we found a solution to $y'(x) = x\cdot y(x)$, coded as
```julia
F(y, x) = y*x
```
Suppose $x_0=1$ and $y_0=1$. Then a direction field plot is drawn through:
```julia; hold=true
@syms x y
x0, y0 = 1, 1
plot(legend=false)
vectorfieldplot!((x,y) -> [1, F(y,x)], xlims=(x0, 2), ylims=(y0-5, y0+5))
f(x) = y0*exp(-x0^2/2) * exp(x^2/2)
plot!(f, linewidth=5)
```
In general, if the first-order equation is written as $y'(x) = F(y,x)$, then we plot a "function" that takes $(x,y)$ and returns an $x$ value of $1$ and a $y$ value of $F(y,x)$, so the slope is $F(y,x)$.
```julia; echo=false
note(L"""The order of variables in $F(y,x)$ is conventional with the equation $y'(x) = F(y(x),x)$.
""")
```
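The slope field is also the basis of simple numeric solvers: repeatedly stepping in the direction $\langle 1, F(y,x)\rangle$ traces out an integral curve. A minimal sketch in plain `Julia`, reusing $F(y,x) = yx$ and comparing against the known solution:

```julia
# Follow the slope field from (x0, y0) with small steps (Euler's method).
F(y, x) = y * x
function follow_field(F, x0, y0, x1; n = 10_000)
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in 1:n
        y += h * F(y, x)   # step in the slope-field direction
        x += h
    end
    y
end
x0, y0 = 1.0, 1.0
yn = follow_field(F, x0, y0, 2.0)
exact = y0 * exp(-x0^2/2) * exp(2.0^2/2)   # the known solution from above
abs(yn - exact) / exact < 1e-2             # close, for a small step size
```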
The plots are also useful for illustrating solutions for different initial conditions:
```julia; hold=true
p = plot(legend=false)
x0, y0 = 1, 1
vectorfieldplot!((x,y) -> [1,F(y,x)], xlims=(x0, 2), ylims=(y0-5, y0+5))
for y0 in -4:4
f(x) = y0*exp(-x0^2/2) * exp(x^2/2)
plot!(f, x0, 2, linewidth=5)
end
p
```
Such solutions are called [integral
curves](https://en.wikipedia.org/wiki/Integral_curve).
These graphs illustrate the fact that the slope field is tangent to the graph of any
integral curve.
## Questions
##### Question
Using `SymPy` to solve the differential equation
```math
u' = \frac{1-x}{u}
```
gives
```julia; hold=true
@syms x u()
dsolve(D(u)(x) - (1-x)/u(x))
```
The two answers track positive and negative solutions. For the initial condition, $u(-1)=1$, the second one is appropriate: $u(x) = \sqrt{C_1 - x^2 + 2x}$. At $-1$ this gives: $1 = \sqrt{C_1-3}$, so $C_1 = 4$.
This value is good for what values of $x$?
```julia; hold=true; echo=false
choices = [
"``[-1, \\infty)``",
"``[-1, 4]``",
"``[-1, 0]``",
"``[1-\\sqrt{5}, 1 + \\sqrt{5}]``"]
ans = 4
radioq(choices, ans)
```
##### Question
Suppose $y(x)$ satisfies
```math
y'(x) = y(x)^2, \quad y(1) = 1.
```
What is $y(3/2)$?
```julia; hold=true; echo=false
@syms x u()
out = dsolve(D(u)(x) - u(x)^2, u(x), ics=Dict(u(1) => 1))
val = N(rhs(out(3/2)))
numericq(val)
```
##### Question
Solve the initial value problem
```math
y' = 1 + x^2 + y(x)^2 + x^2 y(x)^2, \quad y(0) = 1.
```
Use your answer to find $y(1)$.
```julia; hold=true; echo=false
eqn = D(u)(x) - (1 + x^2 + u(x)^2 + x^2 * u(x)^2)
out = dsolve(eqn, u(x), ics=Dict(u(0) => 1))
val = N(rhs(out)(1).evalf())
numericq(val)
```
##### Question
A population is modeled by $y(x)$. The rate of population growth is generally proportional to the population ($k y(x)$), but as the population gets large, the rate is curtailed by the factor $(1 - y(x)/M)$.
Solve the initial value problem
```math
y'(x) = k\cdot y(x) \cdot (1 - \frac{y(x)}{M}),
```
when $k=1$, $M=100$, and $y(0) = 20$. Find the value of $y(5)$.
```julia; hold=true;echo=false
k, M = 1, 100
eqn = D(u)(x) - k * u(x) * (1 - u(x)/M)
out = dsolve(eqn, u(x), ics=Dict(u(0) => 20))
val = N(rhs(out)(5))
numericq(val)
```
##### Question
Solve the initial value problem
```math
y'(t) = \sin(t) - \frac{y(t)}{t}, \quad y(\pi) = 1
```
Find the value of the solution at $t=2\pi$.
```julia; hold=true; echo=false
eqn = D(u)(x) - (sin(x) - u(x)/x)
out = dsolve(eqn, u(x), ics=Dict(u(PI) => 1))
val = N(rhs(out(2PI)))
numericq(val)
```
##### Question
Suppose $u(x)$ satisfies:
```math
\frac{du}{dx} = e^{-x} \cdot u(x), \quad u(0) = 1.
```
Find $u(5)$ using `SymPy`.
```julia; hold=true; echo=false
eqn = D(u)(x) - exp(-x)*u(x)
out = dsolve(eqn, u(x), ics=Dict(u(0) => 1))
val = N(rhs(out)(5))
numericq(val)
```
##### Question
The differential equation with boundary values
```math
\frac{d}{dr}\left(r^2 \frac{dc}{dr}\right) = 0, \quad c(1)=2, \quad c(10)=1,
```
can be solved with `SymPy`. What is the value of $c(5)$?
```julia; hold=true; echo=false
@syms x u()
eqn = diff(x^2*D(u)(x), x)
out = dsolve(eqn, u(x), ics=Dict(u(1)=>2, u(10) => 1)) |> rhs
out(5) # 10/9
choices = ["``10/9``", "``3/2``", "``9/10``", "``8/9``"]
ans = 1
radioq(choices, ans)
```
##### Question
The example with projectile motion in a medium has a parameter
$\gamma$ modeling the effect of air resistance. If `y` is the
answer - as would be the case if the example were copy-and-pasted
in - what can be said about `limit(y, gamma=>0)`?
```julia; hold=true; echo=false
choices = [
"The limit is a quadratic polynomial in `x`, mirroring the first part of that example.",
"The limit does not exist, but the limit to `oo` gives a quadratic polynomial in `x`, mirroring the first part of that example.",
"The limit does not exist -- there is a singularity -- as seen by setting `gamma=0`."
]
ans = 1
radioq(choices, ans)
```

CwJ/ODEs/process.jl
using WeavePynb
using Mustache
mmd(fname) = mmd_to_html(fname, BRAND_HREF="../toc.html", BRAND_NAME="Calculus with Julia")
## uncomment to generate just .md files
mmd(fname) = mmd_to_md(fname, BRAND_HREF="../toc.html", BRAND_NAME="Calculus with Julia")
fnames = [
"odes",
"euler"
]
function process_file(nm, twice=false)
include("$nm.jl")
mmd_to_md("$nm.mmd")
markdownToHTML("$nm.md")
twice && markdownToHTML("$nm.md")
end
process_files(twice=false) = [process_file(nm, twice) for nm in fnames]
"""
## TODO ODEs
"""

CwJ/ODEs/solve.jmd
# The problem-algorithm-solve interface
This section uses these add-on packages:
```julia
using Plots
using MonteCarloMeasurements
```
```julia; echo=false; results="hidden"
using CalculusWithJulia.WeaveSupport
const frontmatter = (
title = "The problem-algorithm-solve interface",
description = "Calculus with Julia: The problem-algorithm-solve interface",
tags = ["CalculusWithJulia", "odes", "the problem-algorithm-solve interface"],
);
fig_size = (600, 400)
nothing
```
----
The [DifferentialEquations.jl](https://github.com/SciML) package is an entry point to a suite of `Julia` packages for numerically solving differential equations in `Julia` and other languages. A common interface is implemented that flexibly adjusts to the many different problems and algorithms covered by this suite of packages. In this section, we review a very informative [post](https://discourse.julialang.org/t/function-depending-on-the-global-variable-inside-module/64322/10) by discourse user `@genkuroki` which very nicely demonstrates the usefulness of the problem-algorithm-solve approach used with `DifferentialEquations.jl`. We slightly modify the presentation below for our needs, but suggest a perusal of the original post.
##### Example: FreeFall
The motion of an object under a uniform gravitational field is of interest.
The parameters that govern the equation of motions are the gravitational constant, `g`; the initial height, `y0`; and the initial velocity, `v0`. The time span for which a solution is sought is `tspan`.
A problem consists of these parameters. Typical `Julia` usage would be to create a structure to hold the parameters, which may be done as follows:
```julia
struct Problem{G, Y0, V0, TS}
g::G
y0::Y0
v0::V0
tspan::TS
end
Problem(;g=9.80665, y0=0.0, v0=30.0, tspan=(0.0,8.0)) = Problem(g, y0, v0, tspan)
```
The above creates a type, `Problem`, *and* a default constructor with default values. (The original uses a more sophisticated setup that allows the two things above to be combined.)
Just calling `Problem()` will create a problem suitable for the earth; passing different values for `g` would be appropriate for other planets.
To solve differential equations there are many different possible algorithms. Here is the construction of two types to indicate two algorithms:
```julia
struct EulerMethod{T}
dt::T
end
EulerMethod(; dt=0.1) = EulerMethod(dt)
struct ExactFormula{T}
dt::T
end
ExactFormula(; dt=0.1) = ExactFormula(dt)
```
The above just specifies a type for dispatch --- the directions indicating what code to use to solve the problem. As seen, each specifies a size for a time step with default of `0.1`.
A type for solutions is useful for different `show` methods or other methods. One can be created through:
```julia
struct Solution{Y, V, T, P<:Problem, A}
y::Y
v::V
t::T
prob::P
alg::A
end
```
The different algorithms then can be implemented as part of a generic `solve` function. Following the post we have:
```julia
solve(prob::Problem) = solve(prob, default_algorithm(prob))
default_algorithm(prob::Problem) = EulerMethod()
function solve(prob::Problem, alg::ExactFormula)
g, y0, v0, tspan = prob.g, prob.y0, prob.v0, prob.tspan
dt = alg.dt
t0, t1 = tspan
t = range(t0, t1 + dt/2; step = dt)
y(t) = y0 + v0*(t - t0) - g*(t - t0)^2/2
v(t) = v0 - g*(t - t0)
Solution(y.(t), v.(t), t, prob, alg)
end
function solve(prob::Problem, alg::EulerMethod)
g, y0, v0, tspan = prob.g, prob.y0, prob.v0, prob.tspan
dt = alg.dt
t0, t1 = tspan
t = range(t0, t1 + dt/2; step = dt)
n = length(t)
y = Vector{typeof(y0)}(undef, n)
v = Vector{typeof(v0)}(undef, n)
y[1] = y0
v[1] = v0
for i in 1:n-1
v[i+1] = v[i] - g*dt # F*h step of Euler
y[i+1] = y[i] + v[i]*dt # F*h step of Euler
end
Solution(y, v, t, prob, alg)
end
```
The post has a more elegant means to unpack the parameters from the structures, but for each of the above, the parameters are unpacked, and then the corresponding algorithm employed. As of version `v1.7` of `Julia`, the syntax `(;g,y0,v0,tspan) = prob` could also be employed.
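The `v1.7` property-destructuring syntax can be illustrated on its own with a hypothetical struct (a sketch, not the `Problem` type above):

```julia
# Property destructuring (Julia v1.7+): unpack fields by name.
struct Params
    g
    y0
end
p = Params(9.8, 0.0)
(; g, y0) = p          # binds local variables g and y0 from the fields
g == 9.8 && y0 == 0.0
```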
The exact formulas, ` y(t) = y0 + v0*(t - t0) - g*(t - t0)^2/2` and `v(t) = v0 - g*(t - t0)`, follow from well-known physics formulas. Each answer is wrapped in a `Solution` type so that the answers found can be easily extracted in a uniform manner.
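As a standalone check (a re-implementation of the update rule, not the package code above), halving `dt` roughly halves the final error of the Euler update against the exact formula, as expected for a first-order method:

```julia
# Final-height error of the Euler update y ← y + v·dt, v ← v - g·dt.
function euler_error(dt; g = 9.80665, y0 = 0.0, v0 = 30.0, t1 = 8.0)
    y, v, t = y0, v0, 0.0
    while t < t1 - dt/2
        y += v*dt     # uses the current v, as in the method above
        v -= g*dt
        t += dt
    end
    exact = y0 + v0*t1 - g*t1^2/2
    abs(y - exact)
end
e1, e2 = euler_error(0.1), euler_error(0.05)
1.5 < e1/e2 < 2.5     # error ratio near 2 ⇒ first-order accuracy
```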
For example, plots of each can be obtained through:
```julia
earth = Problem()
sol_euler = solve(earth)
sol_exact = solve(earth, ExactFormula())
plot(sol_euler.t, sol_euler.y;
label="Euler's method (dt = $(sol_euler.alg.dt))", ls=:auto)
plot!(sol_exact.t, sol_exact.y; label="exact solution", ls=:auto)
title!("On the Earth"; xlabel="t", legend=:bottomleft)
```
Following the post, since the time step `dt = 0.1` is not small enough, the error of the Euler method is rather large. Next we change the algorithm parameter, `dt`, to be smaller:
```julia
earth₂ = Problem()
sol_euler₂ = solve(earth₂, EulerMethod(dt = 0.01))
sol_exact₂ = solve(earth₂, ExactFormula())
plot(sol_euler₂.t, sol_euler₂.y;
label="Euler's method (dt = $(sol_euler₂.alg.dt))", ls=:auto)
plot!(sol_exact₂.t, sol_exact₂.y; label="exact solution", ls=:auto)
title!("On the Earth"; xlabel="t", legend=:bottomleft)
```
It is worth noting that only the first line is modified; only the choice of method requires a change.
Were the moon to be considered, the gravitational constant would need adjustment. This parameter is part of the problem, not the solution algorithm.
Such adjustments are made by passing different values to the `Problem`
constructor:
```julia
moon = Problem(g = 1.62, tspan = (0.0, 40.0))
sol_eulerₘ = solve(moon)
sol_exactₘ = solve(moon, ExactFormula(dt = sol_euler.alg.dt))
plot(sol_eulerₘ.t, sol_eulerₘ.y;
label="Euler's method (dt = $(sol_eulerₘ.alg.dt))", ls=:auto)
plot!(sol_exactₘ.t, sol_exactₘ.y; label="exact solution", ls=:auto)
title!("On the Moon"; xlabel="t", legend=:bottomleft)
```
The code above also adjusts the time span in addition to the
gravitational constant. The exact-formula algorithm is set to use
the `dt` value used in the Euler method, for easier
comparison. Otherwise, outside of the labels, the patterns are the
same. Only those things that need changing are changed; the rest comes
from defaults.
The above shows the benefits of using a common interface. Next, the post illustrates how *other* authors could extend this code, simply by adding a *new* `solve` method. For example,
```julia
struct Symplectic2ndOrder{T}
dt::T
end
Symplectic2ndOrder(;dt=0.1) = Symplectic2ndOrder(dt)
function solve(prob::Problem, alg::Symplectic2ndOrder)
g, y0, v0, tspan = prob.g, prob.y0, prob.v0, prob.tspan
dt = alg.dt
t0, t1 = tspan
t = range(t0, t1 + dt/2; step = dt)
n = length(t)
y = Vector{typeof(y0)}(undef, n)
v = Vector{typeof(v0)}(undef, n)
y[1] = y0
v[1] = v0
for i in 1:n-1
ytmp = y[i] + v[i]*dt/2
v[i+1] = v[i] - g*dt
y[i+1] = ytmp + v[i+1]*dt/2
end
Solution(y, v, t, prob, alg)
end
```
Had the two prior methods been in a package, the other user could still extend the interface, as above, with just a slight standard modification.
The same approach works for this new type:
```julia
earth₃ = Problem()
sol_sympl₃ = solve(earth₃, Symplectic2ndOrder(dt = 2.0))
sol_exact₃ = solve(earth₃, ExactFormula())
plot(sol_sympl₃.t, sol_sympl₃.y; label="2nd order symplectic (dt = $(sol_sympl₃.alg.dt))", ls=:auto)
plot!(sol_exact₃.t, sol_exact₃.y; label="exact solution", ls=:auto)
title!("On the Earth"; xlabel="t", legend=:bottomleft)
```
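Why does `dt = 2.0` suffice for the symplectic method here? The update advances `y` with the average of the old and new velocities, and that trapezoid rule is exact when the acceleration is constant. A standalone sketch of the same update rule confirms this:

```julia
# The 2nd-order symplectic update: y advances with (v_old + v_new)/2 · dt,
# which is exact for constant acceleration.
function symplectic_y(dt; g = 9.80665, y0 = 0.0, v0 = 30.0, t1 = 8.0)
    y, v, t = y0, v0, 0.0
    while t < t1 - dt/2
        ytmp = y + v*dt/2
        v   -= g*dt
        y    = ytmp + v*dt/2
        t   += dt
    end
    y
end
exact = 30.0*8.0 - 9.80665*8.0^2/2
abs(symplectic_y(2.0) - exact) < 1e-8   # exact even with a large step
```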
Finally, the author of the post shows how the interface can compose with other packages in the `Julia` package ecosystem. This example uses the external package `MonteCarloMeasurements` which plots the behavior of the system for perturbations of the initial value:
```julia
earth₄ = Problem(y0 = 0.0 ± 0.0, v0 = 30.0 ± 1.0)
sol_euler₄ = solve(earth₄)
sol_sympl₄ = solve(earth₄, Symplectic2ndOrder(dt = 2.0))
sol_exact₄ = solve(earth₄, ExactFormula())
ylim = (-100, 60)
P = plot(sol_euler₄.t, sol_euler₄.y;
label="Euler's method (dt = $(sol_euler₄.alg.dt))", ls=:auto)
title!("On the Earth"; xlabel="t", legend=:bottomleft, ylim)
Q = plot(sol_sympl₄.t, sol_sympl₄.y;
label="2nd order symplectic (dt = $(sol_sympl₄.alg.dt))", ls=:auto)
title!("On the Earth"; xlabel="t", legend=:bottomleft, ylim)
R = plot(sol_exact₄.t, sol_exact₄.y; label="exact solution", ls=:auto)
title!("On the Earth"; xlabel="t", legend=:bottomleft, ylim)
plot(P, Q, R; size=(720, 600))
```
The only change was in the problem, `Problem(y0 = 0.0 ± 0.0, v0 = 30.0 ± 1.0)`, where a different number type is used which accounts for uncertainty. The rest follows the same pattern.
This example shows the flexibility of the problem-algorithm-solve pattern while maintaining a consistent pattern for execution.

CwJ/Project.toml
[deps]

CwJ/TODO/AD.md
Good paper recommended here (https://discourse.julialang.org/t/learning-automatic-differentiation/56158/3)
https://www.jmlr.org/papers/volume18/17-468/17-468.pdf

CwJ/TODO/arrows.md
This is really just
plot!([0,cos(θ)],[0,sin(θ)], arrow=true)
https://stackoverflow.com/questions/58219191/drawing-an-arrow-with-specified-direction-on-a-point-in-scatter-plot-in-julia
https://github.com/m3g/CKP/blob/master/disciplina/codes/velocities.jl
using Plots
using LaTeXStrings
function arch(θ₁,θ₂;radius=1.,Δθ=1.)
θ₁ = π*θ₁/180
θ₂ = π*θ₂/180
Δθ = π*Δθ/180
l = round(Int,(θ₂-θ₁)/Δθ)
x = zeros(l)
y = zeros(l)
for i in 1:l
θ = θ₁ + i*Δθ
x[i] = radius*cos(θ)
y[i] = radius*sin(θ)
end
return x, y
end
plot()
x, y = arch(0,360)
plot(x,y,seriestype=:shape,label="",alpha=0.5)
x, y = arch(0,360,radius=0.95)
plot!(x,y,seriestype=:shape,label="",fillcolor=:white)
x, y = arch(0,360,radius=0.7)
plot!(x,y,seriestype=:shape,label="",alpha=0.5,fillcolor=:red)
x, y = arch(0,360,radius=0.65)
plot!(x,y,seriestype=:shape,label="",fillcolor=:white)
plot!([0,0],[0,1.1],arrow=true,color=:black,linewidth=2,label="")
plot!([0,1.1],[0,0],arrow=true,color=:black,linewidth=2,label="")
x, y = arch(15,16,radius=0.65)
plot!([0,x[1]],[0,y[1]],arrow=true,color=:black,linewidth=1,label="")
x, y = arch(35,36,radius=0.95)
plot!([0,x[1]],[0,y[1]],arrow=true,color=:black,linewidth=1,label="")
plot!(draw_arrow=true)
plot!(showaxis=:no,ticks=nothing,xlim=[-0.1,1.1],ylim=[-0.1,1.1],)
plot!(xlabel="x",ylabel="y",size=(400,400))
annotate!(0.58,-0.07,text(L"\Delta v_1",10))
annotate!(0.88,-0.07,text(L"\Delta v_2",10))
savefig("./velocities.pdf")

CwJ/TODO/earth.jl
# Calculate the temperature of the earth using the simplest model
# @jake
# https://discourse.julialang.org/t/seven-lines-of-julia-examples-sought/50416/121
using Unitful, Plots
p_sun = 386e24u"W" # power output of the sun
radius_a = 6378u"km" # semi-major axis of the earth
radius_b = 6357u"km" # semi-minor axis of the earth
orbit_a = 149.6e6u"km" # distance from the sun to earth
orbit_e = 0.017 # eccentricity of r = a(1-e^2)/(1+ecos(θ)) & time ≈ 365.25 * θ / 360 where θ is in degrees
a = 0.75 # absorptivity of the sun's radiation
e = 0.6 # emissivity of the earth (very dependent on cloud cover)
σ = 5.6703e-8u"W*m^-2*K^-4" # Stefan-Boltzmann constant
temp_sky = 3u"K" # sky temperature
t = (0:0.25:365.25)u"d" # day of year in 1/4 day increments
θ = 2*π/365.25u"d" .* t # approximate angle around the sun
r = orbit_a * (1-orbit_e^2) ./ (1 .+ orbit_e .* cos.(θ)) # distance from sun to earth
area_projected = π * radius_a * radius_b # area of earth facing the sun
ec = sqrt(1-radius_b^2/radius_a^2) # eccentricity of earth
area_surface = 2*π*radius_a^2*(1 + radius_b^2/(ec*radius_a^2)*atanh(ec)) # surface area of an oblate spheroid
q_in = p_sun * a * area_projected ./ (4 * π .* r.^2) # total heat impacting the earth
temp_earth = (q_in ./ (e*σ*area_surface) .+ temp_sky^4).^0.25 # temperature of the earth
plot(t*u"d^-1", temp_earth*u"K^-1" .- 273.15, label = false, title = "Temperature of Earth", xlabel = "Day", ylabel = "Temperature [C]")

###### Question (Ladder [questions](http://www.mathematische-basteleien.de/ladder.htm))
A ``7``-meter ladder leans against a wall with its base ``1.5`` meters from the wall. At which height does the ladder touch the wall?
```julia; hold=true; echo=false
l = 7
adj = 1.5
opp = sqrt(l^2 - adj^2)
numericq(opp, 1e-3)
```
----
A ``7``-meter ladder leans against a wall. Between the ladder and the wall is a ``1``m cube box. The ladder touches the wall, the box, and the ground. There are two such positions; what is the height of the ladder in the more upright position?
You might find this code of help:
```julia; eval=false
@syms x y
l, b = 7, 1
eq = (b+x)^2 + (b+y)^2
eq = subs(eq, x=> b*(b/y)) # x/b = b/y
solve(eq ~ l^2, y)
```
What is the value `b+y` in the above?
```julia; echo=false
radioq(("The height of the ladder",
"The height of the box plus ladder",
"The distance from the base of the ladder to the box",
"The distance from the base of the ladder to the base of the wall"
),1)
```
What is the height at which the ladder touches the wall?
```julia; hold=true; echo=false
numericq(6.90162289514212, 1e-3)
```
----
A ladder of length ``c`` is to be moved through a 2-dimensional hallway of width ``b`` which has a right-angled bend. If ``4b=c``, when will the ladder get stuck?
Consider this picture
```julia; hold=true; echo=false
p = plot(; axis=nothing, legend=false, aspect_ratio=:equal)
x,y=1,2
b = sqrt(x*y)
plot!(p, [0,0,b+x], [b+y,0,0], linestyle=:dot)
plot!(p, [0,b+x],[b,b], color=:black, linestyle=:dash)
plot!(p, [b,b],[0,b+y], color=:black, linestyle=:dash)
plot!(p, [b+x,0], [0, b+y], color=:black)
```
Suppose ``b=5``. Then with ``b+x`` and ``b+y`` being the lengths on the walls where the ladder is stuck, *and* by similar triangles ``b/x = y/b``, we can solve for ``x`` (in this case take the largest positive value). The answer would be the angle ``\theta`` with ``\tan(\theta) = (b+y)/(b+x)``.
```julia; hold=true; echo=false
b = 5
l = 4*b
@syms x y
eq = (b+x)^2 + (b+y)^2
eq =subs(eq, y=> b^2/x)
x₀ = N(maximum(filter(>(0), solve(eq ~ l^2, x))))
y₀ = b^2/x₀
θ₀ = Float64(atan((b+y₀)/(b+x₀)))
numericq(θ₀, 1e-2)
```
-----
Two ladders of length ``a`` and ``b`` criss-cross between two walls of width ``x``. They meet at a height of ``c``.
```julia; hold=true; echo=false
p = plot(; legend=false, axis=nothing, aspect_ratio=:equal)
ya,yb,x = 2,3,1
plot!(p, [0,x],[ya,0], color=:black)
plot!(p, [0,x],[0, yb], color=:black)
plot!(p, [0,0], [0,yb], color=:blue, linewidth=5)
plot!(p, [x,x], [0,yb], color=:blue, linewidth=5)
plot!(p, [0,x], [0,0], color=:blue, linewidth=5)
xc = ya/(ya+yb)
c = yb*xc
plot!(p, [xc,xc],[0,c])
p
```
Suppose ``c=1``, ``b=3``, and ``a=5``. Find ``x``.
Introduce ``x = z + y``, and ``h`` and ``k`` the heights of the ladders along the left wall and the right wall.
Then ``z/c = x/k`` and ``y/c = x/h`` by similar triangles. As ``z + y`` is ``x``, we can solve to get
```math
x = z + y = \frac{xc}{k} + \frac{xc}{h}
= \frac{xc}{\sqrt{b^2 - x^2}} + \frac{xc}{\sqrt{a^2 - x^2}}
```
With ``a,b,c`` as given, this can be solved with
```julia; hold=true; echo=false
a,b,c = 5, 3, 1
f(x) = x*c/sqrt(b^2 - x^2) + x*c/sqrt(a^2 - x^2) - x
find_zero(f, (0, b))
```
The answer is ``2.69\dots``.
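A simple bisection in plain `Julia` over the equation above also locates the root (a sketch; ``a, b, c = 5, 3, 1`` as given):

```julia
# Bisection on f(x) = xc/√(b²-x²) + xc/√(a²-x²) - x, for a,b,c = 5,3,1.
a, b, c = 5, 3, 1
f(x) = x*c/sqrt(b^2 - x^2) + x*c/sqrt(a^2 - x^2) - x
function bisection(f, lo, hi; n = 60)
    for _ in 1:n
        mid = (lo + hi) / 2
        f(lo) * f(mid) <= 0 ? (hi = mid) : (lo = mid)
    end
    (lo + hi) / 2
end
x = bisection(f, 1.0, 2.99)   # f changes sign on this bracket
2.69 < x < 2.71
```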

CwJ/TODO/ti-30-image.png (binary file, not shown)
[deps]
CairoMakie = "13f3f980-e62b-5c42-98c6-ff1f3baf88f0"
GLMakie = "e9467ef8-e4e7-5192-8a1a-b1aee30e663a"
IntervalArithmetic = "d1acc4aa-44c8-5952-acd4-ba5d80a2a253"
IntervalRootFinding = "d2bf35a9-74e0-55ec-b149-d360ff49b807"
LaTeXStrings = "b964fa9f-0449-5b57-a5c2-d3ea65f4040f"
MDBM = "dd61e66b-39ce-57b0-8813-509f78be4b4d"
Symbolics = "0c5d862f-8b57-4792-8d23-62f2024744c7"
Weave = "44d3d7a6-8a23-5bf8-98c5-b353f8df5ec9"

CwJ/alternatives/README
# Alternatives
There are many ways to do related things in `Julia`. This directory holds alternatives to some choices made within these notes:
## Symbolics
## Makie

# Calculus plots with Makie
## XXX This needs a total rewrite for the new Makie
```julia; echo=false; results="hidden"
using CalculusWithJulia
using CalculusWithJulia.WeaveSupport
using AbstractPlotting
Base.showable(m::MIME"image/png", p::AbstractPlotting.Scene) = true # instruct weave to make graphs
nothing
```
The [Makie.jl webpage](https://github.com/JuliaPlots/Makie.jl) says
> From the Japanese word Maki-e, which is a technique to sprinkle lacquer with gold and silver powder. Data is basically the gold and silver of our age, so let's spread it out beautifully on the screen!
`Makie` itself is a metapackage for a rich ecosystem. We show how to
use the interface provided by `AbstractPlotting` and the `GLMakie`
backend to produce the familiar graphics of calculus. We do not
discuss the `MakieLayout` package which provides a means to layout
multiple graphics and add widgets, such as sliders and buttons, to a
layout. We do not discuss `MakieRecipes`. For `Plots`, there are
"recipes" that make some of the plots more straightforward. We do not
discuss the
[`AlgebraOfGraphics`](https://github.com/JuliaPlots/AlgebraOfGraphics.jl)
which presents an interface for the familiar graphics of statistics.
## Scenes
Makie draws graphics onto a canvas termed a "scene" in the Makie documentation. There are `GLMakie`, `WGLMakie`, and `CairoMakie` backends for different types of canvases. In the following, we have used `GLMakie`. `WGLMakie` is useful for incorporating `Makie` plots into web-based technologies.
We begin by loading our two packages:
```julia
using AbstractPlotting
using GLMakie
#using WGLMakie; WGLMakie.activate!()
#AbstractPlotting.set_theme!(scale_figure=false, resolution = (480, 400))
```
The `Makie` developers have workarounds for the delayed time to first plot, but without utilizing these the time to load the package is lengthy.
A scene is produced with `Scene()` or through a plotting primitive:
```julia
scene = Scene()
```
We see next how to move beyond the blank canvas.
## Points (`scatter`)
The task of plotting the points, say $(1,2)$, $(2,3)$, $(3,2)$ can be done different ways. Most plotting packages, and `Makie` is no exception, allow the following: form vectors of the $x$ and $y$ values then plot those with `scatter`:
```julia
xs = [1,2,3]
ys = [2,3,2]
scatter(xs, ys)
```
The `scatter` function creates and returns a `Scene` object, which when displayed shows the plot.
The more generic `plot` function can also be used for this task.
### `Point2`, `Point3`
When learning about points on the Cartesian plane, a "`t`"-chart is often produced:
```
x | y
-----
1 | 2
2 | 3
3 | 2
```
The `scatter` usage above used the columns. The rows are associated with the points, and these too can be used to produce the same graphic.
Rather than make vectors of $x$ and $y$ (and optionally $z$) coordinates, it is more idiomatic to create a vector of "points." `Makie` utilizes a `Point` type to store a 2 or 3 dimensional point. The `Point2` and `Point3` constructors will be utilized.
`Makie` uses a GPU, when present, to accelerate the graphic rendering. GPUs employ 32-bit numbers. Julia uses an `f0` suffix to indicate 32-bit floating point numbers. Hence the alternate types `Point2f0`, to store 2D points as 32-bit numbers, and `Point3f0`, to store 3D points as 32-bit numbers, are seen in the documentation for Makie.
We can plot a vector of points in as direct a manner as vectors of their coordinates:
```julia
pts = [Point2(1,2), Point2(2,3), Point2(3,2)]
scatter(pts)
```
A typical usage is to generate points from some vector-valued
function. Say we have a parameterized function `r` taking $R$ into
$R^2$ defined by:
```julia
r(t) = [sin(t), cos(t)]
```
Then broadcasting values gives a vector of vectors, each identified with a point:
```julia
ts = [1,2,3]
r.(ts)
```
We can broadcast `Point2` over this to create a vector of `Point` objects:
```julia
pts = Point2.(r.(ts))
```
These then can be plotted directly:
```julia
scatter(pts)
```
The plotting of points in three dimensions is essentially the same, save the use of `Point3` instead of `Point2`.
```julia
r(t) = [sin(t), cos(t), t]
ts = range(0, 4pi, length=100)
pts = Point3.(r.(ts))
scatter(pts)
```
----
To plot points generated in terms of vectors of coordinates, the
component vectors must be created. The "`t`"-table shows how: simply
loop over the points and collect the corresponding $x$ or $y$ (or $z$)
value. This utility function does exactly that, returning the vectors
in a tuple.
```julia
unzip(vs) = Tuple([vs[j][i] for j in eachindex(vs)] for i in eachindex(vs[1]))
```
(The functionality is essentially a reverse of the `zip` function, hence the name.)
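For instance, with a small hypothetical set of points (the `unzip` definition is repeated so the block is self-contained):

```julia
# Collect the i-th coordinate of each point into its own vector.
unzip(vs) = Tuple([vs[j][i] for j in eachindex(vs)] for i in eachindex(vs[1]))
pts = [[1, 2], [3, 4], [5, 6]]       # three points in the plane
xs, ys = unzip(pts)
xs == [1, 3, 5] && ys == [2, 4, 6]   # coordinate vectors recovered from points
```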
We might have then:
```julia
scatter(unzip(r.(ts))...)
```
where splatting is used to specify the `xs`, `ys`, and `zs` to `scatter`.
(Compare to `scatter(Point3.(r.(ts)))` or `scatter((Point3∘r).(ts))`.)
### Attributes
A point is drawn with a "marker" with a certain size and color. These attributes can be adjusted, as in the following:
```julia
scatter(xs, ys, marker=[:x,:cross, :circle], markersize=25, color=:blue)
```
Marker attributes include
* `marker` a symbol, shape. A single value will be repeated. A vector of values of a matching size will specify a marker for each point.
* `marker_offset` offset coordinates
* `markersize` size (radius pixels) of marker
### Text (`text`)
Text can be placed at a point, as a marker is. To place text the desired text and a position need to be specified.
For example:
```julia
pts = Point2.(1:5, 1:5)
scene = scatter(pts)
[text!(scene, "text", position=pt, textsize=1/i, rotation=2pi/i) for (i,pt) in enumerate(pts)]
scene
```
The graphic shows that `position` positions the text, `textsize` adjusts the displayed size, and `rotation` adjusts the orientation.
Attributes for `text` include:
* `position` to indicate the position. Either a `Point` object, as above, or a tuple
* `align` Specify the text alignment through `(:pos, :pos)`, where `:pos` can be `:left`, `:center`, or `:right`.
* `rotation` to indicate how the text is to be rotated
* `textsize` the font point size for the text
* `font` to indicate the desired font
## Curves
### Plots of univariate functions
The basic plot of univariate calculus is the graph of a function $f$ over an interval $[a,b]$. This is implemented using a familiar strategy: produce a series of representative values between $a$ and $b$; produce the corresponding $f(x)$ values; plot these as points and connect the points with straight lines. The `lines` function of `AbstractPlotting` will do the last step.
By taking a sufficient number of points within $[a,b]$ the connect-the-dot figure will appear curved, when the function is.
To create regular values between `a` and `b` either the `range` function, the related `LinRange` function, or the range operator (`a:h:b`) are employed.
For example:
```julia
f(x) = sin(x)
a, b = 0, 2pi
xs = range(a, b, length=250)
lines(xs, f.(xs))
```
Or
```julia
f(x) = cos(x)
a, b = -pi, pi
xs = a:pi/100:b
lines(xs, f.(xs))
```
As with `scatter`, `lines` returns a `Scene` object that produces a graphic when displayed.
As with `scatter`, `lines` can also be drawn using a vector of points:
```julia
lines([Point2(x, fx) for (x,fx) in zip(xs, f.(xs))])
```
(Though the advantage isn't clear here, this will be useful when the points are more naturally generated.)
When a `y` value is `NaN` or infinite, the connecting lines are not drawn:
```
xs = 1:5
ys = [1,2,NaN, 4, 5]
lines(xs, ys)
```
As with other plotting packages, this is useful to represent discontinuous functions, such as what occurs at a vertical asymptote.
#### Adding to a scene (`lines!`, `scatter!`, ...)
To *add* or *modify* a scene can be done using a mutating version of a plotting primitive, such as `lines!` or `scatter!`. The names follow `Julia`'s convention of using an `!` to indicate that a function modifies an argument, in this case the scene.
Here is one way to show two plots at once:
```julia
xs = range(0, 2pi, length=100)
scene = lines(xs, sin.(xs))
lines!(scene, xs, cos.(xs))
```
We will see soon how to modify the line attributes so that the curves can be distinguished.
The following shows the construction details in the graphic, and that the initial scene argument is implicitly assumed:
```julia
xs = range(0, 2pi, length=10)
lines(xs, sin.(xs))
scatter!(xs, sin.(xs), markersize=10)
```
----
The current scene will have data limits that can be of interest. The following indicates how they can be manipulated to get the limits of the displayed `x` values.
```julia
xs = range(0, 2pi, length=200)
scene = plot(xs, sin.(xs))
rect = scene.data_limits[] # get limits for g from f
a, b = rect.origin[1], rect.origin[1] + rect.widths[1]
```
In the output it can be discerned that the values are 32-bit floating point numbers *and* yield a slightly larger interval than specified in `xs`.
As an example, this shows how to add the tangent line to a graph. The slope of the tangent line being computed by `ForwardDiff.derivative`.
```julia
using ForwardDiff
f(x) = x^x
a, b= 0, 2
c = 0.5
xs = range(a, b, length=200)
tl(x) = f(c) + ForwardDiff.derivative(f, c) * (x-c)
scene = lines(xs, f.(xs))
lines!(scene, xs, tl.(xs), color=:blue)
```
#### Attributes
In the last example, we added the argument `color=:blue` to the `lines!` call. This set an attribute for the line being drawn. Lines have other attributes that allow different ones to be distinguished, as above where colors indicate the different graphs.
Other attributes can be seen from the help page for `lines`, and include:
* `color` set with a symbol, as above, or a string
* `linestyle` available styles are set by a symbol, one of `:dash`, `:dot`, `:dashdot`, or `:dashdotdot`.
* `linewidth` width of line
* `transparency` the `alpha` value, a number between $0$ and $1$, smaller numbers for more transparent.
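Combining these attributes, the two curves from the earlier example can be distinguished. (This is a sketch assuming the same `lines`/`lines!` interface used above.)

```julia
xs = range(0, 2pi, length=100)
scene = lines(xs, sin.(xs), color=:blue, linewidth=3)            # solid blue sine curve
lines!(scene, xs, cos.(xs), color=:red, linestyle=:dash, linewidth=3)  # dashed red cosine curve
```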
A legend can also be used to help identify different curves on the same graphic, though this is not illustrated. There are examples in the Makie gallery.
#### Scene attributes
Attributes of the scene include any titles and labels, the limits that define the coordinates being displayed, the presentation of tick marks, etc.
The `title` function can be used to add a title to a scene. The calling syntax is `title(scene, text)`.
To set the labels of the graph, there are "shorthand" functions `xlabel!`, `ylabel!`, and `zlabel!`. The calling pattern would follow `xlabel!(scene, "x-axis")`.
The plotting ticks and their labels are returned by the unexported functions `tickranges` and `ticklabels`. The unexported `xtickrange`, `ytickrange`, and `ztickrange`; and `xticklabels`, `yticklabels`, and `zticklabels` return these for the indicated axes.
These can be dynamically adjusted using `xticks!`, `yticks!`, or `zticks!`.
```julia
pts = [Point2(1,2), Point2(2,3), Point2(3,2)]
scene = scatter(pts)
title(scene, "3 points")
ylabel!(scene, "y values")
xticks!(scene, xtickrange=[1,2,3], xticklabels=["a", "b", "c"])
```
To set the limits of the graph there are shorthand functions `xlims!`, `ylims!`, and `zlims!`. This might prove useful if vertical asymptotes are encountered, as in this example:
```julia
f(x) = 1/x
a,b = -1, 1
xs = range(-1, 1, length=200)
scene = lines(xs, f.(xs))
ylims!(scene, (-10, 10))
center!(scene)
```
### Plots of parametric functions
A space curve is a plot of a function $f:R \rightarrow R^2$ or $f:R \rightarrow R^3$.
To construct a curve from a set of points, we have a similar pattern in both $2$ and $3$ dimensions:
```julia
r(t) = [sin(2t), cos(3t)]
ts = range(0, 2pi, length=200)
pts = Point2.(r.(ts)) # or (Point2∘r).(ts)
lines(pts)
```
Or
```julia
r(t) = [sin(2t), cos(3t), t]
ts = range(0, 2pi, length=200)
pts = Point3.(r.(ts))
lines(pts)
```
Alternatively, vectors of the $x$, $y$, and $z$ components can be produced and then plotted using the pattern `lines(xs, ys)` or `lines(xs, ys, zs)`. For example, using `unzip`, as above, we might have done the prior example with:
```julia
xs, ys, zs = unzip(r.(ts))
lines(xs, ys, zs)
```
#### Tangent vectors (`arrows`)
A tangent vector along a curve can be drawn quite easily using the `arrows` function. There are different interfaces for `arrows`, but we show the one which uses a vector of positions and a vector of "vectors". For the latter, we utilize the `derivative` function from `ForwardDiff`:
```julia
using ForwardDiff
r(t) = [sin(t), cos(t)] # vector, not tuple
ts = range(0, 4pi, length=200)
scene = Scene()
lines!(scene, Point2.(r.(ts)))
nts = 0:pi/4:2pi
us = r.(nts)
dus = ForwardDiff.derivative.(r, nts)
arrows!(scene, Point2.(us), Point2.(dus))
```
In 3 dimensions the differences are minor:
```julia
r(t) = [sin(t), cos(t), t] # vector, not tuple
ts = range(0, 4pi, length=200)
scene = Scene()
lines!(scene, Point3.(r.(ts)))
nts = pi:pi/4:3pi
us = r.(nts)
dus = ForwardDiff.derivative.(r, nts)
arrows!(scene, Point3.(us), Point3.(dus))
```
##### Attributes
Attributes for `arrows` include
* `arrowsize` to adjust the size
* `lengthscale` to scale the size
* `arrowcolor` to set the color
* `arrowhead` to adjust the head
* `arrowtail` to adjust the tail
### Implicit equations (2D)
The graph of an equation is the collection of all $(x,y)$ values satisfying the equation. This is more general than the graph of a function, which can be viewed as the graph of the equation $y=f(x)$. An equation in $x$-$y$ can be graphed if the set of solutions to a related equation $f(x,y)=0$ can be identified, as one can move all terms to one side of an equation and define $f$ as the rule of the side with the terms.
The [MDBM](https://github.com/bachrathyd/MDBM.jl) (Multi-Dimensional Bisection Method) package can be used for the task of characterizing when $f(x,y)=0$. (Also `IntervalConstraintProgramming` can be used.) We first wrap its interface and then define a "`plot`" recipe (through method overloading, not through `MakieRecipes`).
```julia
using MDBM
```
```julia
function implicit_equation(f, axes...; iteration::Int=4, constraint=nothing)
axes = [axes...]
    if isnothing(constraint)
prob = MDBM_Problem(f, axes)
else
prob = MDBM_Problem(f, axes, constraint=constraint)
end
solve!(prob, iteration)
prob
end
```
The `implicit_equation` function is just a simplified wrapper for the `MDBM_Problem` interface. It creates an object to be plotted in a manner akin to:
```julia
f(x,y) = x^3 + x^2 + x + 1 - x*y # solve x^3 + x^2 + x + 1 = x*y
ie = implicit_equation(f, -5:5, -10:10)
```
The function definition is straightforward. The limits for `x` and `y` are specified in the above using ranges. This specifies the initial grid of points for the adaptive algorithm used by `MDBM` to identify solutions.
To visualize the output, we make a new method for `plot` and `plot!`. There is a distinction between 2 and 3 dimensions. Below in two dimensions curve(s) are drawn. In three dimensions, scaled cubes are used to indicate the surface.
```julia
AbstractPlotting.plot(m::MDBM_Problem; kwargs...) = plot!(Scene(), m; kwargs...)
AbstractPlotting.plot!(m::MDBM_Problem; kwargs...) = plot!(AbstractPlotting.current_scene(), m; kwargs...)
AbstractPlotting.plot!(scene::AbstractPlotting.Scene, m::MDBM_Problem; kwargs...) =
plot!(Val(_dim(m)), scene, m; kwargs...)
_dim(m::MDBM.MDBM_Problem{a,N,b,c,d,e,f,g,h}) where {a,N,b,c,d,e,f,g,h} = N
```
Dispatch is used for the two different dimensions, identified through `_dim`, defined above.
```julia
# 2D plot
function AbstractPlotting.plot!(::Val{2}, scene::AbstractPlotting.Scene,
m::MDBM_Problem; color=:black, kwargs...)
mdt=MDBM.connect(m)
for i in 1:length(mdt)
dt=mdt[i]
P1=getinterpolatedsolution(m.ncubes[dt[1]], m)
P2=getinterpolatedsolution(m.ncubes[dt[2]], m)
lines!(scene, [P1[1],P2[1]],[P1[2],P2[2]], color=color, kwargs...)
end
scene
end
```
```julia
# 3D plot
function AbstractPlotting.plot!(::Val{3}, scene::AbstractPlotting.Scene,
m::MDBM_Problem; color=:black, kwargs...)
positions = Point{3, Float32}[]
scales = Vec3[]
mdt=MDBM.connect(m)
for i in 1:length(mdt)
dt=mdt[i]
P1=getinterpolatedsolution(m.ncubes[dt[1]], m)
P2=getinterpolatedsolution(m.ncubes[dt[2]], m)
a, b = Vec3(P1), Vec3(P2)
push!(positions, Point3(P1))
push!(scales, b-a)
end
cube = Rect{3, Float32}(Vec3(-0.5, -0.5, -0.5), Vec3(1, 1, 1))
meshscatter!(scene, positions, marker=cube, scale = scales, color=color, transparency=true, kwargs...)
scene
end
```
We see that the equation `ie` has two pieces. (This is known as Newton's trident, as Newton was interested in this form of equation.)
```julia
plot(ie)
```
## Surfaces
Plots of surfaces in 3 dimensions are useful to help understand the behavior of multivariate functions.
#### Surfaces defined through $z=f(x,y)$
The "peaks" function generates the logo for MATLAB. Here we see how it can be plotted over the region $[-5,5]\times[-5,5]$.
```julia
peaks(x,y) = 3*(1-x)^2*exp(-x^2 - (y+1)^2) - 10(x/5-x^3-y^5)*exp(-x^2-y^2)- 1/3*exp(-(x+1)^2-y^2)
xs = ys = range(-5, 5, length=25)
surface(xs, ys, peaks)
```
The calling pattern `surface(xs, ys, f)` implies a rectangular grid over the $x$-$y$ plane defined by `xs` and `ys` with $z$ values given by $f(x,y)$.
Alternatively a "matrix" of $z$ values can be specified. For a function `f`, this is conveniently generated by the pattern `f.(xs, ys')`, the `'` being important to get a matrix of all $x$-$y$ pairs through `Julia`'s broadcasting syntax.
```julia
zs = peaks.(xs, ys')
surface(xs, ys, zs)
```
To see how this graph is constructed, the points $(x,y,f(x,y))$ are plotted over the grid and displayed.
Here we downsample to illustrate:
```julia
xs = ys = range(-5, 5, length=5)
pts = [Point3(x, y, peaks(x,y)) for x in xs for y in ys]
scatter(pts, markersize=25)
```
These points are connected. The `wireframe` function illustrates just the frame
```julia
wireframe(xs, ys, peaks.(xs, ys'), linewidth=5)
```
The `surface` call triangulates the frame and fills in the shading:
```julia
surface!(xs, ys, peaks.(xs, ys'))
```
#### Implicitly defined surfaces, $F(x,y,z)=0$
The set of points $(x,y,z)$ satisfying $F(x,y,z) = 0$ will form a surface that can be visualized using the `MDBM` package. We illustrate showing two nested surfaces.
```julia
r₂(x,y,z) = x^2 + y^2 + z^2 - 5/4 # a sphere
r₄(x,y,z) = x^4 + y^4 + z^4 - 1
xs = ys = zs = -2:2
m2,m4 = implicit_equation(r₂, xs, ys, zs), implicit_equation(r₄, xs, ys, zs)
plot(m4, color=:yellow)
plot!(m2, color=:red)
```
#### Parametrically defined surfaces
A surface may be parametrically defined through a function $r(u,v) = (x(u,v), y(u,v), z(u,v))$. For example, the surface generated by $z=f(x,y)$ is of the form with $r(u,v) = (u,v,f(u,v))$.
The `surface` function and the `wireframe` function can be used to display such surfaces. In previous usages, the `x` and `y` values were vectors from which a 2-dimensional grid is formed. For parametric surfaces, a grid for the `x` and `y` values must be generated. This function will do so:
```julia
function parametric_grid(us, vs, r)
n,m = length(us), length(vs)
xs, ys, zs = zeros(n,m), zeros(n,m), zeros(n,m)
for (i, uᵢ) in enumerate(us)
for (j, vⱼ) in enumerate(vs)
x,y,z = r(uᵢ, vⱼ)
xs[i,j] = x
ys[i,j] = y
zs[i,j] = z
end
end
(xs, ys, zs)
end
```
With the data suitably massaged, we can directly plot either a `surface` or `wireframe` plot.
----
As an aside, the above can be done more compactly with nested list comprehensions:
```
xs, ys, zs = [[pt[i] for pt in r.(us, vs')] for i in 1:3]
```
Or using the `unzip` function directly after broadcasting:
```
xs, ys, zs = unzip(r.(us, vs'))
```
----
For example, a sphere can be parameterized by $r(u,v) = (\sin(u)\cos(v), \sin(u)\sin(v), \cos(u))$ and visualized through:
```julia
r(u,v) = [sin(u)*cos(v), sin(u)*sin(v), cos(u)]
us = range(0, pi, length=25)
vs = range(0, pi/2, length=25)
xs, ys, zs = parametric_grid(us, vs, r)
scene = Scene()
surface!(scene, xs, ys, zs)
wireframe!(scene, xs, ys, zs)
```
A surface of revolution for $g(u)$ revolved about the $z$ axis can be visualized through:
```julia
g(u) = u^2 * exp(-u)
r(u,v) = (g(u)*sin(v), g(u)*cos(v), u)
us = range(0, 3, length=10)
vs = range(0, 2pi, length=10)
xs, ys, zs = parametric_grid(us, vs, r)
scene = Scene()
surface!(scene, xs, ys, zs)
wireframe!(scene, xs, ys, zs)
```
A torus with big radius $2$ and inner radius $1/2$ can be visualized as follows
```julia
r1, r2 = 2, 1/2
r(u,v) = ((r1 + r2*cos(v))*cos(u), (r1 + r2*cos(v))*sin(u), r2*sin(v))
us = vs = range(0, 2pi, length=25)
xs, ys, zs = parametric_grid(us, vs, r)
scene = Scene()
surface!(scene, xs, ys, zs)
wireframe!(scene, xs, ys, zs)
```
A Möbius strip can be produced with:
```julia
ws = range(-1/4, 1/4, length=8)
thetas = range(0, 2pi, length=30)
r(w, θ) = ((1+w*cos(θ/2))*cos(θ), (1+w*cos(θ/2))*sin(θ), w*sin(θ/2))
xs, ys, zs = parametric_grid(ws, thetas, r)
scene = Scene()
surface!(scene, xs, ys, zs)
wireframe!(scene, xs, ys, zs)
```
## Contour plots (`contour`, `heatmap`)
For a function $z = f(x,y)$ an alternative to a surface plot, is a contour plot. That is, for different values of $c$ the level curves $f(x,y)=c$ are drawn.
For a function $f(x,y)$, the syntax for generating a contour plot follows that for `surface`.
For example, using the `peaks` function, previously defined, a contour plot over the region $[-5,5]\times[-5,5]$ is generated through:
```julia
xs = ys = range(-5, 5, length=100)
contour(xs, ys, peaks)
```
The default of $5$ levels can be adjusted using the `levels` keyword:
```julia
contour(xs, ys, peaks.(xs, ys'), levels = 20)
```
(As a reminder, the above also shows how to generate values "`zs`" to pass to `contour` instead of a function.)
The contour graph makes identification of peaks and valleys easy as the limits of patterns of nested contour lines.
An alternative visualization using color to replace contour lines is a heatmap. The `heatmap` function produces these. The calling syntax is similar to `contour` and `surface`:
```julia
heatmap(xs, ys, peaks)
```
This graph shows peaks and valleys through "hotspots" on the graph.
The `MakieGallery` package includes an example of a surface plot with both a wireframe and 2D contour graph added. It is replicated here using the `peaks` function scaled by $5$.
The function and domain to plot are described by:
```julia
xs = ys = range(-5, 5, length=51)
zs = peaks.(xs, ys') / 5;
```
The `zs` were generated, as `wireframe` does not provide the interface for passing a function.
The `surface` and `wireframe` are produced as follows:
```julia
scene = surface(xs, ys, zs)
wireframe!(scene, xs, ys, zs, overdraw = true, transparency = true, color = (:black, 0.1))
```
To add the contour, a simple call via `contour!(scene, xs, ys, zs)` will place the contour at the $z=0$ level, which will make it hard to read. Rather, placing it at the "bottom" of the scene is desirable. To identify that position, the scene limits are queried and the argument `transformation = (:xy, zmin)` is passed to `contour!`:
```julia
xmin, ymin, zmin = minimum(scene_limits(scene))
contour!(scene, xs, ys, zs, levels = 15, linewidth = 2, transformation = (:xy, zmin))
center!(scene)
```
The `transformation` plot attribute sets the "plane" (one of `:xy`, `:yz`, or `:xz`) at a location, in this example `zmin`.
### Three dimensional contour plots
The `contour` function can also plot $3$-dimensional contour plots. Concentric spheres, contours of $x^2 + y^2 + z^2 = c$ for $c > 0$ are presented by the following:
```julia
f(x,y,z) = x^2 + y^2 + z^2
xs = ys = zs = range(-3, 3, length=100)
contour(xs, ys, zs, f)
```
## Vector fields. Visualizations of $f:R^2 \rightarrow R^2$
The vector field $f(x,y) = (y, -x)$ can be visualized as a set of vectors, $f(x,y)$, positioned at a grid. These can be produced with the `arrows` function. Below we pass a vector of points for the anchors and a vector of points representing the vectors.
We can generate these on a regular grid through:
```julia
f(x, y) = [y, -x]
xs = ys = -5:5
pts = vec(Point2.(xs, ys'))
dus = vec(Point2.(f.(xs, ys')))
```
Broadcasting over `(xs, ys')` ensures each pair of possible values is encountered. The `vec` call reshapes an array into a vector.
Calling `arrows` on the prepared data produces the graphic:
```julia
arrows(pts, dus)
```
The grid seems rotated at first glance. This is due to the length of the vectors as the $(x,y)$ values get farther from the origin. Plotting the *normalized* values (each will have length $1$) can be done easily using `norm` (which requires `LinearAlgebra` to be loaded):
```julia
using LinearAlgebra
dvs = dus ./ norm.(dus)
arrows(pts, dvs)
```
The rotational pattern becomes quite clear now.
The `streamplot` function also illustrates this phenomenon. This implements an "algorithm [that] puts an arrow somewhere and extends the streamline in both directions from there. Then, it chooses a new position (from the remaining ones), repeating the exercise until the streamline gets blocked, from which on a new starting point, the process repeats."
The `streamplot` function expects a `Point`, not a pair of values, so we adjust `f` slightly and call the function using the pattern `streamplot(g, xs, ys)`:
```julia
g(x,y) = Point2(f(x,y))
streamplot(g, xs, ys)
```
(The viewing range could also be adjusted with the `-5..5` notation from the `IntervalSets` package which is brought in when `AbstractPlotting` is loaded.)
## Interacting with a scene
[Interaction](http://makie.juliaplots.org/stable/interaction.html) with a scene is very much integrated into `Makie`, as the design has a "sophisticated referencing system" which allows sharing of attributes. Adjusting one attribute can then propagate to others.
In Makie, a `Node` is a structure that allows its value to be updated, similar to an array.
Nodes are `Observables`, which when changed can trigger an event. Nodes can rely on other nodes, so events can be cascaded.
A simple example is a means to dynamically adjust a label for a scene.
```
xs, ys = 1:5, rand(5)
scene = scatter(xs, ys)
```
We can create a "Node" through:
```
x = Node("x values")
```
The value of the node is retrieved by `x[]`, though the function call `to_value(x)` is recommended, as it will be defined even when `x` is not a node. This stored value could be used to set the $x$-label in our scene:
```
xlabel!(scene, x[])
```
We now set up an observer to update the label whenever the value of `x` is updated:
```
on(x) do val
xlabel!(scene, val)
end
```
Now setting the value of `x` will also update the label:
```
x[] = "The x axis"
```
A node can be more complicated. This shows how a node of $x$ values can be used to define dependent $y$ values; the plot will update when the $x$ values are updated:
```
f(x) = x^2  # an example function for producing the dependent values
xs = Node(1:10)
ys = lift(a -> f.(a), xs)
```
The `lift` function lifts the values of `xs` to the values of `ys`.
These can be plotted directly:
```
scene = lines(xs, ys)
```
Changes to the `xs` values will be reflected in the `scene`.
```
xs[] = 2:9
```
We can incorporate the two:
```
lab = lift(val -> "Showing from $(val.start) to $(val.stop)", xs)
on(lab) do val
xlabel!(scene, val)
update!(scene)
end
```
The `update!` call redraws the scene to adjust to increased or decreased range of $x$ values.
The mouse position can be recorded. An example in the gallery of examples shows how.
Here is a hint:
```
scene = lines(1:5, rand(5))
pos = lift(scene.events.mouseposition) do mpos
@show AbstractPlotting.to_world(scene, Point2f0(mpos))
end
```
This will display the coordinates of the mouse in the terminal, as the mouse is moved around.

# JavaScript based plotting libraries
This section uses this add-on package:
```julia
using PlotlyLight
```
To avoid a dependence on the `CalculusWithJulia` package, we load two utility packages:
```julia
using PlotUtils
using SplitApplyCombine
```
----
`Julia` has different interfaces to a few JavaScript plotting libraries, notably the [vega](https://vega.github.io/vega/) and [vega-lite](https://vega.github.io/vega-lite/) through the [VegaLite.jl](https://github.com/queryverse/VegaLite.jl) package, and [plotly](https://plotly.com/javascript/) through several interfaces: `Plots.jl`, `PlotlyJS.jl`, and `PlotlyLight.jl`. These all make web-based graphics, for display through a web browser.
The `Plots.jl` interface is a backend for the familiar `Plots` package, making the calling syntax familiar, as is used throughout these notes. The `plotly()` command, from `Plots`, switches to this backend.
The `PlotlyJS.jl` interface offers direct translation from `Julia` structures to the underlying `JSON` structures needed by plotly, and has mechanisms to call back into `Julia` from `JavaScript`. This allows complicated interfaces to be produced.
Here we discuss `PlotlyLight` which conveniently provides the translation from `Julia` structures to the
`JSON` structures needed in a light-weight package, which plots quickly, without the delays due to compilation of the more complicated interfaces. Minor modifications would be needed to adjust the examples to work with `PlotlyJS` or `PlotlyBase`. The documentation for the `JavaScript` [library](https://plotly.com/javascript/) provides numerous examples which can easily be translated. The [one-page-reference](https://plotly.com/javascript/reference/) gives specific details, and is quoted from below, at times.
This discussion covers the basic of graphing for calculus purposes. It does not cover, for example, the faceting common in statistical usages, or the chart types common in business and statistics uses. The `plotly` library is much more extensive than what is reviewed below.
## Julia dictionaries to JSON
`PlotlyLight` uses the `JavaScript` interface for the `plotly` libraries. Unlike more developed interfaces, like the one for `Python`, `PlotlyLight` only manages the translation from `Julia` structures to `JavaScript` structures and the display of the results.
The key to translation is the mapping for `Julia`'s dictionaries to
the nested `JSON` structures needed by the `JavaScript` library.
For example, an introductory [example](https://plotly.com/javascript/line-and-scatter/) for a scatter plot includes this `JSON` structure:
```julia; eval=false
var trace1 = {
x: [1, 2, 3, 4],
y: [10, 15, 13, 17],
mode: 'markers',
type: 'scatter'
};
```
The `{}` create a list, the `[]` an Array (or vector, as it does with `Julia`), the `name:` are keys. The above is simply translated via:
```julia
Config(x = [1,2,3,4],
y = [10, 15, 13, 17],
mode = "markers",
type = "scatter"
)
```
The `Config` constructor (from the `EasyConfig` package loaded with `PlotlyLight`) is an interface for a dictionary whose keys are symbols, which are produced by the named arguments passed to `Config`. By nesting `Config` statements, nested `JavaScript` structures can be built up. As well, these can be built on the fly using `.` notation, as in:
```julia
cfg = Config()
cfg.key1.key2.key3 = "value"
cfg
```
To produce a figure with `PlotlyLight` then is fairly straightforward: data and, optionally, a layout are created using `Config`, then passed along to the `Plot` command producing a `Plot` object which has `display` methods defined for it. This will be illustrated through the examples.
## Scatter plot
A basic scatter plot of points ``(x,y)`` is created as follows:
```julia; hold=true
xs = 1:5
ys = rand(5)
data = Config(x = xs,
y = ys,
type="scatter",
mode="markers"
)
Plot(data)
```
The symbols `x` and `y` (and later `z`) specify the data to `plotly`. Here the `mode` is specified to show markers.
The `type` key specifies the chart or trace type. The `mode` specification sets the drawing mode for the trace. Above it is "markers". It can be any combination of "lines", "markers", or "text" joined with a "+" if more than one is desired.
## Line plot
A line plot is very similar, save for a different `mode` specification:
```julia; hold=true
xs = 1:5
ys = rand(5)
data = Config(x = xs,
y = ys,
type="scatter",
mode="lines"
)
Plot(data)
```
The difference is solely the specification of the `mode` value: for a line plot it is "lines"; for a scatter plot it is "markers". The `mode` "lines+markers" will plot both. The default for the "scatter" types is to use "lines+markers" for small data sets, and "lines" for others, so for this example, `mode` could be left off.
### Nothing
The line graph plays connect-the-dots with the points specified by paired `x` and `y` values. *Typically*, when an `x` value is `NaN` that "dot" (or point) is skipped. However, `NaN` doesn't pass through the JSON conversion; `nothing` can be used instead.
```julia; hold=true
data = Config(
x=[0,1,nothing,3,4,5],
y = [0,1,2,3,4,5],
type="scatter", mode="markers+lines")
Plot(data)
```
## Multiple plots
More than one graph or layer can appear on a plot. The `data` argument can be a vector of `Config` values, each describing a plot. For example, here we make a scatter plot and a line plot:
```julia; hold=true
data = [Config(x = 1:5,
y = rand(5),
type = "scatter",
mode = "markers",
name = "scatter plot"),
Config(x = 1:5,
y = rand(5),
type = "scatter",
mode = "lines",
name = "line plot")
]
Plot(data)
```
The `name` argument adjusts the name shown in the legend for each trace. The legend is produced by default.
### Adding a layer
In `PlotlyLight`, the `Plot` object has a field `data` for storing a vector of configurations, as above. After a plot is made, this field can have values pushed onto it and the corresponding layers will be rendered when the plot is redisplayed.
For example, here we plot the graphs of both the ``\sin(x)`` and ``\cos(x)`` over ``[0,2\pi]``. We used the utility `PlotUtils.adapted_grid` to select the points to use for the graph.
```julia; hold=true
a, b = 0, 2pi
xs, ys = PlotUtils.adapted_grid(sin, (a,b))
p = Plot(Config(x=xs, y=ys, name="sin"))
xs, ys = PlotUtils.adapted_grid(cos, (a,b))
push!(p.data, Config(x=xs, y=ys, name="cos"))
p # to display the plot
```
The values for `a` and `b` are used to generate the ``x``- and ``y``-values. These can also be gathered from the existing plot object. Here is one way, where for each trace with an `x` key, the extrema are consulted to update a list of left and right ranges.
```julia
xs, ys = PlotUtils.adapted_grid(x -> x^5 - x - 1, (0, 2)) # answer is (0,2)
p = Plot([Config(x=xs, y=ys, name="Polynomial"),
Config(x=xs, y=0 .* ys, name="x-axis", mode="lines", line=Config(width=5))]
)
ds = filter(d -> !isnothing(get(d, :x, nothing)), p.data)
a=reduce(min, [minimum(d.x) for d ∈ ds]; init=Inf)
b=reduce(max, [maximum(d.x) for d ∈ ds]; init=-Inf)
(a, b)
```
## Interactivity
`JavaScript` allows interaction with a plot as it is presented within a browser. (Not the `Julia` process which produced the data or the plot. For that interaction, `PlotlyJS` may be used.) The basic *default* features are:
* The data producing a graphic are displayed on hover using flags.
* The legend may be clicked to toggle whether the corresponding graph is displayed.
* The viewing region can be narrowed using the mouse for selection.
* The toolbar has several features for panning and zooming, as well as adjusting the information shown on hover.
Later we will see that ``3``-dimensional surfaces can be rotated interactively.
## Plot attributes
Attributes of the markers and lines may be adjusted when the data configuration is specified. A selection is shown below. Consult the reference for the extensive list.
### Marker attributes
A marker's attributes can be adjusted by values passed to the `marker` key. Labels for each marker can be assigned through a `text` key and adding `text` to the `mode` key. For example:
```julia; hold=true
data = Config(x = 1:5,
y = rand(5),
mode="markers+text",
type="scatter",
name="scatter plot",
text = ["marker $i" for i in 1:5],
textposition = "top center",
marker = Config(size=12, color=:blue)
)
Plot(data)
```
The `text` mode specification is necessary to have text be displayed
on the chart, and not just appear on hover. The `size` and `color`
attributes are recycled; they can be specified using a vector for
per-marker styling. Here the symbol `:blue` is used to specify a
color, which could also be a name, such as `"blue"`.
#### RGB Colors
The `ColorTypes` package is the standard `Julia` package providing an
`RGB` type (among others) for specifying red-green-blue colors. To
make this work with `Config` and `JSON3` requires some type-piracy
(modifying `Base.string` for the `RGB` type) to get, say, `RGB(0.5,
0.5, 0.5)` to output as `"rgb(0.5, 0.5, 0.5)"`. (RGB values in
JavaScript are integers between ``0`` and ``255`` or floating point
values between ``0`` and ``1``.) A string with this content can be
specified. Otherwise, something like the following can be used to
avoid the type piracy:
```julia
struct rgb
r
g
b
end
PlotlyLight.JSON3.StructTypes.StructType(::Type{rgb}) = PlotlyLight.JSON3.StructTypes.StringType()
Base.string(x::rgb) = "rgb($(x.r), $(x.g), $(x.b))"
```
With these defined, red-green-blue values can be used for colors. For example to give a range of colors, we might have:
```julia; hold=true
cols = [rgb(i,i,i) for i in range(10, 245, length=5)]
sizes = [12, 16, 20, 24, 28]
data = Config(x = 1:5,
y = rand(5),
mode="markers+text",
type="scatter",
name="scatter plot",
text = ["marker $i" for i in 1:5],
textposition = "top center",
marker = Config(size=sizes, color=cols)
)
Plot(data)
```
The `opacity` key can be used to control the transparency, with a value between ``0`` and ``1``.
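For example, semi-transparent markers might be requested as follows (the particular `opacity` value and colors are arbitrary choices for illustration):

```julia; hold=true
data = Config(x = 1:5,
              y = rand(5),
              type = "scatter",
              mode = "markers",
              marker = Config(size = 20, color = :blue, opacity = 0.4)
              )
Plot(data)
```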
#### Marker symbols
The `marker_symbol` key can be used to set a marker shape, with the basic values being: `circle`, `square`, `diamond`, `cross`, `x`, `triangle`, `pentagon`, `hexagram`, `star`, `diamond`, `hourglass`, `bowtie`, `asterisk`, `hash`, `y`, and `line`. Adding `-open` or `-open-dot` modifies the basic shape.
```julia; hold=true
markers = ["circle", "square", "diamond", "cross", "x", "triangle", "pentagon",
"hexagram", "star", "diamond", "hourglass", "bowtie", "asterisk",
"hash", "y", "line"]
n = length(markers)
data = [Config(x=1:n, y=1:n, mode="markers",
marker = Config(symbol=markers, size=10)),
Config(x=1:n, y=2 .+ (1:n), mode="markers",
marker = Config(symbol=markers .* "-open", size=10)),
Config(x=1:n, y=4 .+ (1:n), mode="markers",
marker = Config(symbol=markers .* "-open-dot", size=10))
]
Plot(data)
```
### Line attributes
The `line` key can be used to specify line attributes, such as `width` (pixel width), `color`, or `dash`.
The `width` key specifies the line width in pixels.
The `color` key specifies the color of the line drawn.
The `dash` key specifies the style for the drawn line. Values can be set by string from "solid", "dot", "dash", "longdash", "dashdot", or "longdashdot" or set by specifying a pattern in pixels, e.g. "5px,10px,2px,2px".
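A sketch showing several of these dash styles on separate traces, following the same patterns as the examples above (the pixel pattern in the last entry is an arbitrary illustration):

```julia; hold=true
dashes = ["solid", "dot", "dash", "longdash", "dashdot", "5px,10px,2px,2px"]
data = [Config(x = 1:5, y = fill(i, 5),
               type = "scatter", mode = "lines",
               name = dash,
               line = Config(width = 3, dash = dash)
               ) for (i, dash) ∈ enumerate(dashes)]
Plot(data)
```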
The `shape` attribute determines how the points are connected. The default is `linear`, but other possibilities are `hv`, `vh`, `hvh`, `vhv`, `spline` for various patterns of connectivity. The following example, from the plotly documentation, shows the differences:
```julia; hold=true
shapes = ["linear", "hv", "vh", "hvh", "vhv", "spline"]
data = [Config(x = 1:5, y = 5*(i-1) .+ [1,3,2,3,1], mode="lines+markers", type="scatter",
name=shape,
line=Config(shape=shape)
) for (i, shape) ∈ enumerate(shapes)]
Plot(data)
```
### Text
The text associated with each point can be drawn on the chart, when "text" is included in the `mode` or shown on hover.
The onscreen text is passed to the `text` attribute. The [`texttemplate`](https://plotly.com/javascript/reference/scatter/#scatter-texttemplate) key can be used to format the text with details in the accompanying link.
Similarly, the `hovertext` key specifies the text shown on hover, with [`hovertemplate`](https://plotly.com/javascript/reference/scatter/#scatter-hovertemplate) used to format the displayed text.
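For instance, a hover template using plotly's `%{...}` placeholders might be specified as follows; formatting details are in the linked references:

```julia; hold=true
data = Config(x = 1:5,
              y = rand(5),
              type = "scatter",
              mode = "markers",
              hovertemplate = "x = %{x}, y = %{y:.2f}"  # show y rounded to 2 digits on hover
              )
Plot(data)
```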
### Filled regions
The `fill` key for a chart of mode `line` specifies how the area
around a chart should be colored, or filled. The specifications are
declarative, with values in "none", "tozeroy", "tozerox", "tonexty",
"tonextx", "toself", and "tonext". The value of "none" is the default, unless stacked traces are used.
In the following, to highlight the difference between ``f(x) = \cos(x)`` and ``p(x) = 1 - x^2/2`` the area from ``f`` to the next ``y`` is declared; for ``p``, the area to ``0`` is declared.
```julia; hold=true
xs = range(-1, 1, 100)
data = [
Config(
x=xs, y=cos.(xs),
fill = "tonexty",
fillcolor = "rgba(0,0,255,0.25)", # to get transparency
line = Config(color=:blue)
),
Config(
x=xs, y=[1 - x^2/2 for x ∈ xs ],
fill = "tozeroy",
fillcolor = "rgba(255,0,0,0.25)", # to get transparency
line = Config(color=:red)
)
]
Plot(data)
```
The `toself` declaration is used below to fill in a polygon:
```julia; hold=true
data = Config(
x=[-1,1,1,-1,-1], y = [-1,1,-1,1,-1],
fill="toself",
type="scatter")
Plot(data)
```
## Layout attributes
The `title` key sets the main title; the `title` key in the `xaxis` configuration sets the ``x``-axis title (similarly for the ``y`` axis).
The legend is shown, by default, when ``2`` or more charts are specified. This can be adjusted with the `showlegend` key, as below. The legend shows the corresponding `name` for each chart.
```julia; hold=true
data = Config(x=1:5, y=rand(5), type="scatter", mode="markers", name="legend label")
lyt = Config(title = "Main chart title",
xaxis = Config(title="x-axis label"),
yaxis = Config(title="y-axis label"),
showlegend=true
)
Plot(data, lyt)
```
The `xaxis` and `yaxis` keys have many customizations. For example: `nticks` specifies the maximum number of ticks; `range` sets the range of the axis; `type` specifies the axis type from "linear", "log", "date", "category", or "multicategory;" and `visible` shows or hides the axis.
The aspect ratio of the chart can be set to be equal through the `scaleanchor` key, which specifies another axis to take a value from. For example, here is a parametric plot of a circle:
```julia; hold=true
ts = range(0, 2pi, length=100)
data = Config(x = sin.(ts), y = cos.(ts), mode="lines", type="scatter")
lyt = Config(title = "A circle",
xaxis = Config(title = "x"),
yaxis = Config(title = "y",
scaleanchor = "x")
)
Plot(data, lyt)
```
#### Annotations
Text annotations may be specified as part of the layout object. Annotations may or may not show an arrow. Here is a simple example using a vector of annotations.
```julia; hold=true
data = Config(x = [0, 1], y = [0, 1], mode="markers", type="scatter")
layout = Config(title = "Annotations",
xaxis = Config(title="x",
range = (-0.5, 1.5)),
yaxis = Config(title="y",
range = (-0.5, 1.5)),
annotations = [
Config(x=0, y=0, text = "(0,0)"),
Config(x=1, y=1.2, text = "(1,1)", showarrow=false)
]
)
Plot(data, layout)
```
The following example is a more complicated use of the elements previously described. It comes from an image from [Wikipedia](https://en.wikipedia.org/wiki/List_of_trigonometric_identities) for trigonometric identities. The use of ``\LaTeX`` does not seem to be supported through the `JavaScript` interface; unicode symbols are used instead. The `xanchor` and `yanchor` keys are used to position annotations away from the default. The `textangle` key is used to rotate text, as desired.
```julia, hold=true
alpha = pi/6
beta = pi/5
xₘ = cos(alpha)*cos(beta)
yₘ = sin(alpha+beta)
r₀ = 0.1
data = [
Config(
x = [0,xₘ, xₘ, 0, 0],
y = [0, 0, yₘ, yₘ, 0],
type="scatter", mode="line"
),
Config(
x = [0, xₘ],
y = [0, sin(alpha)*cos(beta)],
fill = "tozeroy",
fillcolor = "rgba(100, 100, 100, 0.5)"
),
Config(
x = [0, cos(alpha+beta), xₘ],
y = [0, yₘ, sin(alpha)*cos(beta)],
fill = "tonexty",
fillcolor = "rgba(200, 0, 100, 0.5)",
),
Config(
x = [0, cos(alpha+beta)],
y = [0, yₘ],
line = Config(width=5, color=:black)
)
]
lyt = Config(
height=450,
showlegend=false,
xaxis=Config(visible=false),
yaxis = Config(visible=false, scaleanchor="x"),
annotations = [
Config(x = r₀*cos(alpha/2), y = r₀*sin(alpha/2),
text="α", showarrow=false),
Config(x = r₀*cos(alpha+beta/2), y = r₀*sin(alpha+beta/2),
text="β", showarrow=false),
Config(x = cos(alpha+beta) + r₀*cos(pi+(alpha+beta)/2),
y = yₘ + r₀*sin(pi+(alpha+beta)/2),
xanchor="center", yanchor="center",
text="α+β", showarrow=false),
Config(x = xₘ + r₀*cos(pi/2+alpha/2),
y = sin(alpha)*cos(beta) + r₀ * sin(pi/2 + alpha/2),
text="α", showarrow=false),
Config(x = 1/2 * cos(alpha+beta),
y = 1/2 * sin(alpha+beta),
text = "1"),
Config(x = xₘ/2*cos(alpha), y = xₘ/2*sin(alpha),
xanchor="center", yanchor="bottom",
text = "cos(β)",
textangle=-rad2deg(alpha),
showarrow=false),
Config(x = xₘ + sin(beta)/2*cos(pi/2 + alpha),
y = sin(alpha)*cos(beta) + sin(beta)/2*sin(pi/2 + alpha),
xanchor="center", yanchor="top",
text = "sin(β)",
textangle = rad2deg(pi/2-alpha),
showarrow=false),
Config(x = xₘ/2,
y = 0,
xanchor="center", yanchor="top",
text = "cos(α)⋅cos(β)", showarrow=false),
Config(x = 0,
y = yₘ/2,
xanchor="right", yanchor="center",
text = "sin(α+β)",
textangle=-90,
showarrow=false),
Config(x = cos(alpha+beta)/2,
y = yₘ,
xanchor="center", yanchor="bottom",
text = "cos(α+β)", showarrow=false),
Config(x = cos(alpha+beta) + (xₘ - cos(alpha+beta))/2,
y = yₘ,
xanchor="center", yanchor="bottom",
text = "sin(α)⋅sin(β)", showarrow=false),
Config(x = xₘ, y=sin(alpha)*cos(beta) + (yₘ - sin(alpha)*cos(beta))/2,
xanchor="left", yanchor="center",
text = "cos(α)⋅sin(β)",
textangle=90,
showarrow=false),
Config(x = xₘ,
y = sin(alpha)*cos(beta)/2,
xanchor="left", yanchor="center",
text = "sin(α)⋅cos(β)",
textangle=90,
showarrow=false)
]
)
Plot(data, lyt)
```
## Parameterized curves
In ``2``-dimensions, the plotting of a parameterized curve is similar to that of plotting a function. In ``3``-dimensions, an extra ``z``-coordinate is included.
To help, we define an `unzip` function as an interface to `SplitApplyCombine`'s `invert` function:
```julia
unzip(v) = SplitApplyCombine.invert(v)
```
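To illustrate, here is a minimal base-`Julia` sketch of what `unzip` does (a hypothetical stand-in, not the `SplitApplyCombine` implementation): it collects the ``i``-th components of each inner vector into separate vectors.

```julia
# A hypothetical base-Julia equivalent of `unzip`, for illustration;
# assumes a non-empty vector of equal-length vectors.
unzip_sketch(v) = Tuple([p[i] for p in v] for i in eachindex(first(v)))

pts = [[1, 4], [2, 5], [3, 6]]
xs, ys = unzip_sketch(pts)   # xs is [1, 2, 3]; ys is [4, 5, 6]
```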
Earlier we plotted a two-dimensional circle; here we plot the related helix.
```julia; hold=true
helix(t) = [cos(t), sin(t), t]
ts = range(0, 4pi, length=200)
xs, ys, zs = unzip(helix.(ts))
data = Config(x=xs, y=ys, z=zs,
type = "scatter3d", # <<- note the 3d
mode = "lines",
line=(width=2,
color=:red)
)
Plot(data)
```
The main difference is the chart type, as this is a ``3``-dimensional plot, "scatter3d" is used.
### Quiver plots
There is no `quiver` plot for `plotly` using JavaScript. In ``2``-dimensions a text-less annotation could be employed. In ``3``-dimensions, the following (from [stackoverflow.com](https://stackoverflow.com/questions/43164909/plotlypython-how-to-plot-arrows-in-3d)) is a possible workaround where a line segment is drawn and capped with a small cone. Somewhat opaquely, we use a `NamedTuple`, built from an iterator of key-value pairs, to create the keys for the data below:
```julia; hold=true
helix(t) = [cos(t), sin(t), t]
helix′(t) = [-sin(t), cos(t), 1]  # tangent direction
ts = range(0, 4pi, length=200)
xs, ys, zs = unzip(helix.(ts))
helix_trace = Config(;NamedTuple(zip((:x,:y,:z), unzip(helix.(ts))))...,
    type = "scatter3d", # <<- note the 3d
    mode = "lines",
    line=(width=2,
          color=:red)
)
tss = pi/2:pi/2:7pi/2
rs, r′s = helix.(tss), helix′.(tss)
arrows = [
    Config(x = [p[1], p[1]+v[1]],
           y = [p[2], p[2]+v[2]],
           z = [p[3], p[3]+v[3]],
           mode="lines", type="scatter3d")
    for (p, v) ∈ zip(rs, r′s)
]
tips = rs .+ r′s
lengths = 0.1 * r′s
caps = Config(;
    NamedTuple(zip([:x,:y,:z], unzip(tips)))...,
    NamedTuple(zip([:u,:v,:w], unzip(lengths)))...,
    type="cone", anchor="tail")
data = vcat(helix_trace, arrows, caps)
Plot(data)
```
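The `NamedTuple`-splat idiom used above can be seen in isolation: names and values are zipped into a `NamedTuple`, which is then splatted as keyword arguments with `;` and `...` (shown here with a plain function in place of `Config`):

```julia
# Build keyword pairs programmatically from parallel names and values.
nt = NamedTuple(zip((:x, :y, :z), ([1, 2], [3, 4], [5, 6])))
nt.x                      # [1, 2]

# Any keyword-accepting callable can consume the splatted NamedTuple:
g(; kwargs...) = Dict(kwargs)
g(; nt...)                # Dict(:x => [1,2], :y => [3,4], :z => [5,6])
```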
If several arrows are to be drawn, it might be more efficient to pass multiple values in for the `x`, `y`, ``\dots`` keys, which expect vectors; in the above, a separate trace with short vectors is created for each arrow.
## Contour plots
A contour plot is created by the "contour" trace type. The data is prepared as a vector of vectors, not a matrix. The following has the interior vector corresponding to slices ranging over ``x`` for a fixed ``y``. With this, the construction is straightforward using a comprehension:
```julia; hold=true
f(x,y) = x^2 - 2y^2
xs = range(0,2,length=25)
ys = range(0,2, length=50)
zs = [[f(x,y) for x in xs] for y in ys]
data = Config(
x=xs, y=ys, z=zs,
type="contour"
)
Plot(data)
```
The same `zs` data can be achieved by broadcasting and then collecting as follows:
```julia; hold=true
f(x,y) = x^2 - 2y^2
xs = range(0,2,length=25)
ys = range(0,2, length=50)
zs = collect(eachrow(f.(xs', ys)))
data = Config(
x=xs, y=ys, z=zs,
type="contour"
)
Plot(data)
```
The use of just `f.(xs', ys)` or `f.(xs, ys')`, as with other plotting packages, is not effective, as `JSON3` writes matrices as vectors (with linear indexing).
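The shapes involved can be checked directly. Broadcasting `f.(xs', ys)` yields a matrix whose rows run over `ys`; `eachrow` then produces the vector-of-vectors layout that serializes as intended:

```julia
f(x, y) = x^2 - 2y^2
xs, ys = 0:2, 0:3
M  = f.(xs', ys)          # 4×3 matrix; M[i, j] == f(xs[j], ys[i])
zs = collect(eachrow(M))  # 4 vectors, one per y value, each of length 3
```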
## Surface plots
The chart type "surface" allows surfaces in ``3`` dimensions to be plotted.
### Surfaces defined by ``z = f(x,y)``
Surfaces defined through a scalar-valued function are drawn quite naturally, save for needing to express the height data (``z`` axis) using a vector of vectors, and not a matrix.
```julia; hold=true
peaks(x,y) = 3 * (1-x)^2 * exp(-(x^2) - (y+1)^2) -
10*(x/5 - x^3 - y^5) * exp(-x^2-y^2) - 1/3 * exp(-(x+1)^2 - y^2)
xs = range(-3,3, length=50)
ys = range(-3,3, length=50)
zs = [[peaks(x,y) for x in xs] for y in ys]
data = Config(x=xs, y=ys, z=zs,
type="surface")
Plot(data)
```
### Parametrically defined surfaces
For parametrically defined surfaces, the ``x`` and ``y`` values are also specified as vectors of vectors. Here we see a pattern to plot a torus. The [`aspectmode`](https://plotly.com/javascript/reference/layout/scene/#layout-scene-aspectmode) instructs the scene's axes to be drawn in proportion with the axes' ranges.
```julia; hold=true
r, R = 1, 5
X(theta,phi) = [(r*cos(theta)+R)*cos(phi), (r*cos(theta)+R)*sin(phi), r*sin(theta)]
us = range(0, 2pi, length=25)
vs = range(0, pi, length=25)
xs = [[X(u,v)[1] for u in us] for v in vs]
ys = [[X(u,v)[2] for u in us] for v in vs]
zs = [[X(u,v)[3] for u in us] for v in vs]
data = Config(
    x = xs, y = ys, z = zs,
    type="surface"
)
lyt = Config(scene=Config(aspectmode="data"))
Plot(data, lyt)
```
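As a sanity check on the parameterization (pure base `Julia`, nothing plotted), every point ``X(\theta,\phi)`` satisfies the implicit torus equation ``(\sqrt{x^2+y^2} - R)^2 + z^2 = r^2``:

```julia
r, R = 1, 5
X(theta, phi) = [(r*cos(theta)+R)*cos(phi), (r*cos(theta)+R)*sin(phi), r*sin(theta)]
# residual of the implicit equation; should be 0 for points on the torus
residual(p) = (sqrt(p[1]^2 + p[2]^2) - R)^2 + p[3]^2 - r^2
maximum(abs(residual(X(u, v))) for u in 0:0.1:6.3, v in 0:0.1:6.3)  # ≈ 0
```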

# Symbolics.jl
The `Symbolics.jl` package is a Computer Algebra System (CAS) built entirely in `Julia`.
This package is under heavy development.
## Algebraic manipulations
### construction
@variables
SymbolicUtils.@syms assumptions
`x` is a `Num`; `Symbolics.value(x)` is of type `SymbolicUtils.Sym{Real, Nothing}`
relation to SymbolicUtils
Num wraps things; Term
### Substitute
### Simplify
simplify
expand
rewrite rules
### Solving equations
solve_for
## Expressions to functions
build_function
## Derivatives
1->1: Symbolics.derivative(x^2 + cos(x), x)
1->3: Symbolics.derivative.([x^2, x, cos(x)], x)
3 -> 1: Symbolics.gradient(x*y^z, [x,y,z])
2 -> 2: Symbolics.jacobian([x,y^z], [x,y])
# higher order
1 -> 1: D(ex, x, n=1) = foldl((ex,_) -> Symbolics.derivative(ex, x), 1:n, init=ex)
2 -> 1: (2nd) Hessian
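The `foldl` idiom behind the higher-order-derivative helper above, shown with an ordinary function in place of `Symbolics.derivative`:

```julia
# Apply f to x repeatedly, n times; the index from 1:n is ignored.
nfold(f, x, n) = foldl((acc, _) -> f(acc), 1:n; init=x)
nfold(z -> 2z, 1, 5)   # doubling applied 5 times: 32
```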
## Differential equations
## Integrals
WIP
## ----
# follow sympy tutorial
using Symbolics
import SymbolicUtils
@variables x y z
# substitution
ex = cos(x) + 1
substitute(ex, Dict(x=>y))
substitute(ex, Dict(x=>0)) # does eval
ex = x^y
substitute(ex, Dict(y=> x^y))
# expand trig
r1 = @rule sin(2 * ~x) => 2sin(~x)*cos(~x)
r2 = @rule cos(2 * ~x) => cos(~x)^2 - sin(~x)^2
expand_trig(ex) = simplify(ex, RuleSet([r1, r2]))
ex = sin(2x) + cos(2x)
expand_trig(ex)
## Multiple
@variables x y z
ex = x^3 + 4x*y -z
substitute(ex, Dict(x=>2, y=>4, z=>0))
# Converting Strings to Expressions
# what is sympify?
# evalf
# lambdify: symbolic expression -> function
ex = x^3 + 4x*y -z
λ = build_function(ex, x,y,z, expression=Val(false))
λ(2,4,0)
# pretty printing
using Latexify
latexify(ex)
# Simplify
@variables x y z t
simplify(sin(x)^2 + cos(x)^2)
simplify((x^3 + x^2 - x - 1) / (x^2 + 2x + 1)) # fails, no factor
simplify(((x+1)*(x^2-1))/((x+1)^2)) # works
import SpecialFunctions: gamma
simplify(gamma(x) / gamma(x-2)) # fails
# Polynomial
## expand
expand((x+1)^2)
expand((x+2)*(x-3))
expand((x+1)*(x-2) - (x-1)*x)
## factor
### not defined
## collect
COLLECT_RULES = [
@rule(~x*x^(~n::SymbolicUtils.isnonnegint) => (~x, ~n))
@rule(~x * x => (~x, 1))
]
function _collect(ex, x)
    d = Dict()
    ex = expand(ex)
    if SymbolicUtils.operation(Symbolics.value(ex)) != +
        d[0] = ex
    else
        for aᵢ ∈ SymbolicUtils.arguments(Symbolics.value(ex))
            u = simplify(aᵢ, RuleSet(COLLECT_RULES))
            if isa(u, Tuple)
                a, n = u
            else
                a, n = u, 0
            end
            d[n] = get(d, n, 0) + a
        end
    end
    d
end
## cancel -- no factor
## apart -- no factor
## Trignometric simplification
INVERSE_TRIG_RULES = [@rule(cos(acos(~x)) => ~x)
@rule(acos(cos(~x)) => abs(rem2pi(~x, RoundNearest)))
@rule(sin(asin(~x)) => ~x)
@rule(asin(sin(~x)) => abs(rem2pi(~x + pi/2, RoundNearest)) - pi/2)
]
@variables θ
simplify(cos(acos(θ)), RuleSet(INVERSE_TRIG_RULES))
# Copy from https://github.com/JuliaSymbolics/SymbolicUtils.jl/blob/master/src/simplify_rules.jl
# the TRIG_RULES are applied by simplify by default
HTRIG_RULES = [
@acrule(-sinh(~x)^2 + cosh(~x)^2 => one(~x))
@acrule(sinh(~x)^2 + 1 => cosh(~x)^2)
@acrule(cosh(~x)^2 + -1 => -sinh(~x)^2)
@acrule(tanh(~x)^2 + 1*sech(~x)^2 => one(~x))
@acrule(-tanh(~x)^2 + 1 => sech(~x)^2)
@acrule(sech(~x)^2 + -1 => -tanh(~x)^2)
@acrule(coth(~x)^2 + -1*csch(~x)^2 => one(~x))
@acrule(coth(~x)^2 + -1 => csch(~x)^2)
@acrule(csch(~x)^2 + 1 => coth(~x)^2)
@acrule(tanh(~x) => sinh(~x)/cosh(~x))
@acrule(sinh(-~x) => -sinh(~x))
@acrule(cosh(-~x) => cosh(~x))  # cosh is even
]
trigsimp(ex) = simplify(simplify(ex, RuleSet(HTRIG_RULES)))
trigsimp(sin(x)^2 + cos(x)^2)
trigsimp(sin(x)^4 -2cos(x)^2*sin(x)^2 + cos(x)^4) # no factor
trigsimp(cosh(x)^2 + sinh(x)^2)
trigsimp(sinh(x)/tanh(x))
EXPAND_TRIG_RULES = [
@acrule(sin(~x+~y) => sin(~x)*cos(~y) + cos(~x)*sin(~y))
@acrule(sinh(~x+~y) => sinh(~x)*cosh(~y) + cosh(~x)*sinh(~y))
@acrule(sin(2*~x) => 2sin(~x)*cos(~x))
@acrule(sinh(2*~x) => 2sinh(~x)*cosh(~x))
@acrule(cos(~x+~y) => cos(~x)*cos(~y) - sin(~x)*sin(~y))
@acrule(cosh(~x+~y) => cosh(~x)*cosh(~y) + sinh(~x)*sinh(~y))
@acrule(cos(2*~x) => cos(~x)^2 - sin(~x)^2)
@acrule(cosh(2*~x) => cosh(~x)^2 + sinh(~x)^2)
@acrule(tan(~x+~y) => (tan(~x) - tan(~y)) / (1 + tan(~x)*tan(~y)))
@acrule(tanh(~x+~y) => (tanh(~x) + tanh(~y)) / (1 + tanh(~x)*tanh(~y)))
@acrule(tan(2*~x) => 2*tan(~x)/(1 - tan(~x)^2))
@acrule(tanh(2*~x) => 2*tanh(~x)/(1 + tanh(~x)^2))
]
expandtrig(ex) = simplify(simplify(ex, RuleSet(EXPAND_TRIG_RULES)))
expandtrig(sin(x+y))
expandtrig(tan(2x))
# powers
# in general x^a*x^b = x^(a+b)
@variables x y a b
simplify(x^a*x^b - x^(a+b)) # 0
# x^a*y^a = (xy)^a When x,y >=0, a ∈ R
simplify(x^a*y^a - (x*y)^a)
## ??? How to specify such assumptions?
# (x^a)^b = x^(ab) only if b ∈ Int
@syms x a b
simplify((x^a)^b - x^(a*b))
@syms x a b::Int
simplify((x^a)^b - x^(a*b)) # nope
ispositive(x) = isa(x, Real) && x > 0
_isinteger(x) = isa(x, Integer)
_isinteger(x::SymbolicUtils.Sym{T,S}) where {T <: Integer, S} = true
POWSIMP_RULES = [
@acrule((~x::ispositive)^(~a::isreal) * (~y::ispositive)^(~a::isreal) => (~x*~y)^~a)
@rule(((~x)^(~a))^(~b::_isinteger) => ~x^(~a * ~b))
]
powsimp(ex) = simplify(simplify(ex, RuleSet(POWSIMP_RULES)))
@syms x a b::Int
simplify((x^a)^b - x^(a*b)) # nope
EXPAND_POWER_RULES = [
@rule((~x)^(~a + ~b) => (~x)^(~a) * (~x)^(~b))
@rule((~x*~y)^(~a) => (~x)^(~a) * (~y)^(~a))
]
## ... more on simplification...
## Calculus
@variables x y z
import Symbolics: derivative
derivative(cos(x), x)
derivative(exp(x^2), x)
# multiple derivative
Symbolics.derivative(ex, x, n::Int) = reduce((ex,_) -> derivative(ex, x), 1:n, init=ex) # helper
derivative(x^4, x, 3)
ex = exp(x*y*z)
using Chain
@chain ex begin
derivative(x, 3)
derivative(y, 3)
derivative(z, 3)
end
# using Differential operator
expr = exp(x*y*z)
expr |> Differential(x)^2 |> Differential(y)^3 |> expand_derivatives
# no integrate
# no limit
# Series
function series(ex, x, x0=0, n=5)
    Σ = substitute(ex, Dict(x => x0))
    for i ∈ 1:n
        ex = expand_derivatives(Differential(x)(ex))
        Σ += substitute(ex, Dict(x => x0)) * (x - x0)^i / factorial(i)
    end
    Σ
end
# finite differences
# Solvers
@variables x y z a
eq = x ~ a
Symbolics.solve_for(eq, x)
eqs = [x + y + z ~ 1
x + y + 2z ~ 3
x + 2y + 3z ~ 3
]
vars = [x,y,z]
xs = Symbolics.solve_for(eqs, vars)
[reduce((ex, r)->substitute(ex, r), Pair.(vars, xs), init=ex.lhs) for ex ∈ eqs] == [eq.rhs for eq ∈ eqs]
A = [1 1; 1 2]
b = [1, 3]
xs = Symbolics.solve_for(A*[x,y] .~ b, [x,y])
A*xs - b
A = [1 1 1; 1 1 2]
b = [1,3]
A*[x,y,z] - b
Symbolics.solve_for(A*[x,y,z] .~ b, [x,y,z]) # fails, singular
# nonlinear solve
# use `λ = mk_function(ex, args, expression=Val(false))`
# polynomial roots
# differential equations

[deps]
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
IntervalArithmetic = "d1acc4aa-44c8-5952-acd4-ba5d80a2a253"
IntervalConstraintProgramming = "138f1668-1576-5ad7-91b9-7425abbf3153"
LaTeXStrings = "b964fa9f-0449-5b57-a5c2-d3ea65f4040f"
MDBM = "dd61e66b-39ce-57b0-8813-509f78be4b4d"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
QuadGK = "1fd47b50-473d-5c70-9696-f719f8f3bcdc"
Roots = "f2b01f46-fcfa-551c-844a-d8ac1e96c665"
SymPy = "24249f21-da20-56a4-8eb1-6a02cf4ae2e6"
TaylorSeries = "6aa5eb33-94cf-58f4-a9d0-e4b2c4fc25ea"
TermInterface = "8ea1fca8-c5ef-4a55-8b96-4e9afe9c9a3c"
Unitful = "1986cc42-f94f-5a68-af5c-568840ba703d"

# Curve Sketching
This section uses the following add-on packages:
```julia
using CalculusWithJulia
using Plots
using SymPy
using Roots
using Polynomials # some name clash with SymPy
```
```julia; echo=false; results="hidden"
using CalculusWithJulia.WeaveSupport
fig_size=(600, 400)
const frontmatter = (
title = "Curve Sketching",
description = "Calculus with Julia: Curve Sketching",
tags = ["CalculusWithJulia", "derivatives", "curve sketching"],
);
nothing
```
----
The figure illustrates a means to *sketch* a sine curve: identify as
many of the following values as you can:
* asymptotic behaviour (as ``x \rightarrow \pm \infty``),
* periodic behaviour,
* vertical asymptotes,
* the $y$ intercept,
* any $x$ intercept(s),
* local peaks and valleys (relative extrema).
* concavity
With these, a sketch fills in between the
points/lines associated with these values.
```julia; hold=true; echo=false; cache=true
### {{{ sketch_sin_plot }}}
function sketch_sin_plot_graph(i)
f(x) = 10*sin(pi/2*x) # [0,4]
deltax = 1/10
deltay = 5/10
zs = find_zeros(f, 0-deltax, 4+deltax)
cps = find_zeros(D(f), 0-deltax, 4+deltax)
xs = range(0, stop=4*(i-2)/6, length=50)
if i == 1
## plot zeros
title = "Plot the zeros"
p = scatter(zs, 0*zs, title=title, xlim=(-deltax,4+deltax), ylim=(-10-deltay,10+deltay), legend=false)
elseif i == 2
## plot extrema
title = "Plot the local extrema"
p = scatter(zs, 0*zs, title=title, xlim=(-deltax,4+deltax), ylim=(-10-deltay,10+deltay), legend=false)
scatter!(p, cps, f.(cps))
else
## sketch graph
title = "sketch the graph"
p = scatter(zs, 0*zs, title=title, xlim=(-deltax,4+deltax), ylim=(-10-deltay,10+deltay), legend=false)
scatter!(p, cps, f.(cps))
plot!(p, xs, f.(xs))
end
p
end
caption = L"""
After identifying asymptotic behaviours,
a curve sketch involves identifying the $y$ intercept, if applicable; the $x$ intercepts, if possible; the local extrema; and changes in concavity. From there a sketch fills in between the points. In this example, the periodic function $f(x) = 10\cdot\sin(\pi/2\cdot x)$ is sketched over $[0,4]$.
"""
n = 8
anim = @animate for i=1:n
sketch_sin_plot_graph(i)
end
imgfile = tempname() * ".gif"
gif(anim, imgfile, fps = 1)
ImageFile(imgfile, caption)
```
Though this approach is most useful for hand-sketches, the underlying
concepts are important for properly framing graphs made with the
computer.
We can easily make a graph of a function over a specified
interval. What is not always so easy is to pick an interval that shows
off the features of interest. In the section on
[rational](../precalc/rational_functions.html) functions there was a
discussion about how to draw graphs for rational functions so that
horizontal and vertical asymptotes can be seen. These are properties
of the "large." In this section, we build on this, but concentrate now
on more local properties of a function.
##### Example
Produce a graph of the function $f(x) = x^4 -13x^3 + 56x^2-92x + 48$.
We identify this as a fourth-degree polynomial with positive leading
coefficient. Hence it will eventually look $U$-shaped. If we graph
over a too-wide interval, that is all we will see. Rather, we do some
work to produce a graph that shows the zeros, peaks, and valleys of
$f(x)$. To do so, we need to know the extent of the zeros. We can try
some theory, but instead we just guess and, if that fails, work harder:
```julia;
f(x) = x^4 - 13x^3 + 56x^2 -92x + 48
rts = find_zeros(f, -10, 10)
```
As we found $4$ roots, we know by the fundamental theorem of algebra we have them all. This means our graph need not focus on values much larger than $6$ or much smaller than $1$.
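In fact, direct evaluation confirms that the numerically found roots are exactly the integers ``1``, ``2``, ``4``, and ``6``:

```julia
f(x) = x^4 - 13x^3 + 56x^2 - 92x + 48
all(iszero, f.([1, 2, 4, 6]))   # true
```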
To know where the peaks and valleys are, we look for the critical points:
```julia;
cps = find_zeros(f', 1, 6)
```
Because we have the $4$ distinct zeros, we must have the peaks and
valleys appear in an interleaving manner, so a search over $[1,6]$
finds all three critical points and without checking, they must
correspond to relative extrema.
Next we identify the *inflection points* which are among the zeros of the second derivative (when defined):
```julia
ips = find_zeros(f'', 1, 6)
```
If there is no sign change for either ``f'`` or ``f''`` over ``[a,b]`` then the sketch of ``f`` on this interval must be one of:
* increasing and concave up (if ``f' > 0`` and ``f'' > 0``)
* increasing and concave down (if ``f' > 0`` and ``f'' < 0``)
* decreasing and concave up (if ``f' < 0`` and ``f'' > 0``)
* decreasing and concave down (if ``f' < 0`` and ``f'' < 0``)
This aids in sketching the graph between the critical points and inflection points.
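For this example, a small helper (hypothetical, not part of `CalculusWithJulia`) can classify an interval by the signs of ``f'`` and ``f''`` at a test point, using hand-computed derivatives:

```julia
fp(x)  = 4x^3 - 39x^2 + 112x - 92   # f′ for f(x) = x⁴ - 13x³ + 56x² - 92x + 48
fpp(x) = 12x^2 - 78x + 112          # f″
behavior(x) = (fp(x) > 0 ? "increasing" : "decreasing") *
              ", concave " * (fpp(x) > 0 ? "up" : "down")
behavior(1.5)   # classify the sketch near x = 1.5
```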
Finally, we check that if we were to use $[0,7]$ as the domain to
plot over, the function wouldn't get so large as to mask the
oscillations. This could happen if the $y$ values at the endpoints
are too much larger than the $y$ values at the peaks and valleys, as
only so many pixels can be used within a graph. For this we have:
```julia;
f.([0, cps..., 7])
```
The values at $0$ and at $7$ are a bit large, as compared to the
relative extrema, and since we know the graph is eventually
$U$-shaped, this offers no insight. So we narrow the range a bit for
the graph:
```julia;
plot(f, 0.5, 6.5)
```
----
This sort of analysis can be automated. The plot "recipe" for polynomials from the `Polynomials` package does similar considerations to choose a viewing window:
```julia
xₚ = variable(Polynomial)
plot(f(xₚ)) # f(xₚ) of Polynomial type
```
##### Example
Graph the function
```math
f(x) = \frac{(x-1)\cdot(x-3)^2}{x \cdot (x-2)}.
```
Not much to do here if you are satisfied with a graph that only gives insight into the asymptotes of this rational function:
```julia;
𝒇(x) = ( (x-1)*(x-3)^2 ) / (x * (x-2) )
plot(𝒇, -50, 50)
```
We can see the slant asymptote and hints of vertical asymptotes, but,
we'd like to see more of the basic features of the graph.
Previously, we have discussed rational functions and their
asymptotes. This function has numerator of degree ``3`` and denominator of
degree ``2``, so will have a slant asymptote. As well, the zeros of the
denominator, $0$ and $2$, will lead to vertical asymptotes.
To identify how wide a viewing window should be, for the rational
function the asymptotic behaviour is determined after the concavity is
done changing and we are past all relative extrema, so we should take
an interval that includes all potential inflection points and critical
points:
```julia;
𝒇cps = find_zeros(𝒇', -10, 10)
poss_ips = find_zeros(𝒇'', -10, 10)
extrema(union(𝒇cps, poss_ips))
```
So a range over $[-5,5]$ should display the key features including the slant asymptote.
Previously we used the `rangeclamp` function defined in `CalculusWithJulia` to avoid the distortion that vertical asymptotes can have:
```julia;
plot(rangeclamp(𝒇), -5, 5)
```
With this graphic, we can now clearly see in the graph the two zeros at $x=1$ and $x=3$, the vertical asymptotes at $x=0$ and $x=2$, and the slant asymptote.
----
Again, this sort of analysis can be systematized. The rational function type in the `Polynomials` package takes a stab at that, but isn't quite so good at capturing the slant asymptote:
```julia
xᵣ = variable(RationalFunction)
plot(𝒇(xᵣ)) # f(x) of RationalFunction type
```
##### Example
Consider the function ``V(t) = 170 \sin(2\pi\cdot 60 \cdot t)``, a model for the alternating current waveform for an outlet in the United States. Create a graph.
Blindly trying to graph this, we will see immediate issues:
```julia
V(t) = 170 * sin(2*pi*60*t)
plot(V, -2pi, 2pi)
```
Ahh, this periodic function is *too* rapidly oscillating to be plotted without care. We recognize this as being of the form ``V(t) = a\cdot\sin(c\cdot t)``, so where the sine function has a period of ``2\pi``, this will have a period of ``2\pi/c``, or ``1/60``. So instead of using ``(-2\pi, 2\pi)`` as the interval to plot over, we need something much smaller:
```julia
plot(V, -1/60, 1/60)
```
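The period computation itself is simple arithmetic:

```julia
# V(t) = a·sin(c·t) has period 2π/c; here c = 2π·60
c = 2pi * 60
period = 2pi / c    # == 1/60, about 0.0167 seconds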
##### Example
Plot the function ``f(x) = \ln(x/100)/x``.
We guess that this function has a *vertical* asymptote at ``x=0+`` and a horizontal asymptote as ``x \rightarrow \infty``, we verify through:
```julia
@syms x
ex = log(x/100)/x
limit(ex, x=>0, dir="+"), limit(ex, x=>oo)
```
The ``\ln(x/100)`` part of ``f`` goes to ``-\infty`` as ``x \rightarrow 0+``; yet ``f(x)`` is eventually positive as ``x \rightarrow \infty``. So a graph should
* not show too much of the vertical asymptote
* capture the point where ``f(x)`` must cross ``0``
* capture the point where ``f(x)`` has a relative maximum
* show enough past this maximum to indicate to the reader the eventual horizontal asymptote.
For that, we need to get the ``x`` intercepts and the critical points. The ``x/100`` means this graph has some scaling to it, so we first look between ``0`` and ``200``:
```julia
find_zeros(ex, 0, 200) # domain is (0, oo)
```
Trying the same for the critical points comes up empty. We know there is one, but it is past ``200``. Scanning wider, we see:
```julia
find_zeros(diff(ex,x), 0, 500)
```
So maybe graphing over ``[50, 300]`` will be a good start:
```julia
plot(ex, 50, 300)
```
But it isn't! The function takes its time getting back towards ``0``. We know that there must be a change of concavity as ``x \rightarrow \infty``, as there is a horizontal asymptote. We look for the anticipated inflection point to ensure our graph includes that:
```julia
find_zeros(diff(ex, x, x), 1, 5000)
```
So a better plot is found by going well beyond that inflection point:
```julia
plot(ex, 75, 1500)
```
## Questions
###### Question
Consider this graph
```julia; hold=true; echo=false
f(x) = (x-2)* (x-2.5)*(x-3) / ((x-1)*(x+1))
p = plot(f, -20, -1-.3, legend=false, xlim=(-15, 15), color=:blue)
plot!(p, f, -1 + .2, 1 - .02, color=:blue)
plot!(p, f, 1 + .05, 20, color=:blue)
```
What kind of *asymptotes* does it appear to have?
```julia; hold=true; echo=false
choices = [
L"Just a horizontal asymptote, $y=0$",
L"Just vertical asymptotes at $x=-1$ and $x=1$",
L"Vertical asymptotes at $x=-1$ and $x=1$ and a horizontal asymptote $y=1$",
L"Vertical asymptotes at $x=-1$ and $x=1$ and a slant asymptote"
]
ans = 4
radioq(choices, ans)
```
###### Question
Consider the function ``p(x) = x + 2x^2 + 3x^3 + 4x^4 + 5x^5 + 6x^6``. Which interval shows more than the ``U``-shaped graph that dominates for large ``x`` due to the leading term being ``6x^6``?
(Find an interval that contains the zeros, critical points, and inflection points.)
```julia; hold=true; echo=false
choices = ["``(-5,5)``, the default bounds of a calculator",
"``(-3.5, 3.5)``, the bounds given by Cauchy for the real roots of ``p``",
"``(-1, 1)``, as many special polynomials have their roots in this interval",
"``(-1.1, .25)``, as this contains all the roots, the critical points, and inflection points and just a bit more"
]
radioq(choices, 4, keep_order=true)
```
###### Question
Let ``f(x) = x^3/(9-x^2)``.
What points are *not* in the domain of ``f``?
```julia; echo=false
qchoices = [
"The values of `find_zeros(f, -10, 10)`: `[-3, 0, 3]`",
"The values of `find_zeros(f', -10, 10)`: `[-5.19615, 0, 5.19615]`",
"The values of `find_zeros(f'', -10, 10)`: `[-3, 0, 3]`",
"The zeros of the numerator: `[0]`",
"The zeros of the denominator: `[-3, 3]`",
"The value of `f(0)`: `0`",
"None of these choices"
]
radioq(qchoices, 5, keep_order=true)
```
The ``x``-intercepts are:
```julia; hold=true; echo=false
radioq(qchoices, 4, keep_order=true)
```
The ``y``-intercept is:
```julia; hold=true; echo=false
radioq(qchoices, 6, keep_order=true)
```
There are *vertical asymptotes* at ``x=\dots``?
```julia; hold=true; echo=false
radioq(qchoices, 5)
```
The *slant* asymptote has slope?
```julia; hold=true; echo=false
numericq(1)
```
The function has critical points at
```julia; hold=true,echo=false
radioq(qchoices, 2, keep_order=true)
```
The function has relative extrema at
```julia; hold=true;echo=false
radioq(qchoices, 7, keep_order=true)
```
The function has inflection points at
```julia; hold=true;echo=false
radioq(qchoices, 7, keep_order=true)
```
###### Question
A function ``f`` has
* zeros of ``\{-0.7548\dots, 2.0\}``,
* critical points at ``\{-0.17539\dots, 1.0, 1.42539\dots\}``,
* inflection points at ``\{0.2712\dots,1.2287\}``.
Is this a possible graph of ``f``?
```julia; hold=true;echo=false
f(x) = x^4 - 3x^3 + 2x^2 + x - 2
plot(f, -1, 2.5, legend=false)
```
```julia; hold=true;echo=false
yesnoq("yes")
```
###### Question
Two models for population growth are *exponential* growth: $P(t) = P_0 a^t$ and
[logistic growth](https://en.wikipedia.org/wiki/Logistic_function#In_ecology:_modeling_population_growth): $P(t) = K P_0 a^t / (K + P_0(a^t - 1))$. The exponential growth model has growth rate proportional to the current population. The logistic model has growth rate depending on the current population *and* the available resources (which can limit growth).
Letting $K=50$, $P_0=5$, and $a= e^{1/4}$, a plot over $[0,5]$ shows somewhat similar behaviour:
```julia;
K, P0, a = 50, 5, exp(1/4)
exponential_growth(t) = P0 * a^t
logistic_growth(t) = K * P0 * a^t / (K + P0*(a^t-1))
plot(exponential_growth, 0, 5)
plot!(logistic_growth)
```
Does a plot over $[0,50]$ show qualitatively similar behaviour?
```julia; hold=true; echo=false
yesnoq(true)
```
Exponential growth has $P''(t) = P_0 a^t \log(a)^2 > 0$, so has no inflection point. By plotting over a sufficiently wide interval, can you answer: does the logistic growth model have an inflection point?
```julia; hold=true; echo=false
yesnoq(true)
```
If yes, find it numerically:
```julia; hold=true; echo=false
val = find_zero(D(logistic_growth,2), (0, 20))
numericq(val)
```
The available resources are quantified by $K$. As $K \rightarrow \infty$ what is the limit of the logistic growth model:
```julia; hold=true; echo=false
choices = [
"The exponential growth model",
"The limit does not exist",
"The limit is ``P_0``"]
ans = 1
radioq(choices, ans)
```
###### Question
The plotting algorithm for plotting functions starts with a small
initial set of points over the specified interval ($21$) and then
refines those sub-intervals where the second derivative is determined
to be large.
Why are sub-intervals where the second derivative is large different than those where the second derivative is small?
```julia; hold=true; echo=false
choices = [
"The function will increase (or decrease) rapidly when the second derivative is large, so there needs to be more points to capture the shape",
"The function will have more curvature when the second derivative is large, so there needs to be more points to capture the shape",
"The function will be much larger (in absolute value) when the second derivative is large, so there needs to be more points to capture the shape",
]
ans = 2
radioq(choices, ans)
```
###### Question
Is there a nice algorithm to identify what domain a function should be
plotted over to produce an informative graph?
[Wilkinson](https://www.cs.uic.edu/~wilkinson/Publications/plotfunc.pdf)
has some suggestions. (Wilkinson is well known to the `R` community as
the specifier of the grammar of graphics.) It is mentioned that
"finding an informative domain for a given function depends on at least
three features: periodicity, asymptotics, and monotonicity."
Why would periodicity matter?
```julia; hold=true; echo=false
choices = [
"An informative graph only needs to show one or two periods, as others can be inferred.",
"An informative graph need only show a part of the period, as the rest can be inferred.",
L"An informative graph needs to show several periods, as that will allow proper computation for the $y$ axis range."]
ans = 1
radioq(choices, ans)
```
Why should asymptotics matter?
```julia; hold=true; echo=false
choices = [
L"A vertical asymptote can distort the $y$ range, so it is important to avoid too-large values",
L"A horizontal asymptote must be plotted from $-\infty$ to $\infty$",
"A slant asymptote must be plotted over a very wide domain so that it can be identified."
]
ans = 1
radioq(choices, ans)
```
Monotonicity means increasing or decreasing. This is important for what reason?
```julia; hold=true; echo=false
choices = [
"For monotonic regions, a large slope or very concave function might require more care to plot",
"For monotonic regions, a function is basically a straight line",
"For monotonic regions, the function will have a vertical asymptote, so the region should not be plotted"
]
ans = 1
radioq(choices, ans)
```

File diff suppressed because it is too large

Binary file not shown.

After

Width:  |  Height:  |  Size: 15 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 11 KiB

View File

@ -0,0 +1,17 @@
## Used to make ring figure. Redo in Julia??
plot.new()
plot.window(xlim=c(0,1), ylim=c(-5, 1.1))
x <- seq(.1, .9, length=9)
y <- c(-4.46262,-4.46866, -4.47268, -4.47469, -4.47468, -4.47267, -4.46864, -4.4626 , -4.45454)
lines(c(0, x[3], 1), c(0, y[3], 1))
points(c(0,1), c(0,1), pch=16, cex=2)
text(c(0,1), c(0,1), c("(0,0)", c("(a,b)")), pos=3)
lines(c(0, x[3], x[3]), c(0, 0, y[3]), cex=2, col="gray")
lines(c(1, x[3], x[3]), c(1, 1, y[3]), cex=2, col="gray")
text(x[3]/2, 0, "x", pos=1)
text(x[3], y[3]/2, "|y|", pos=2)
text(x[3], (1 + y[3])/2, "b-y", pos=4)
text((x[3] + 1)/2, 1, "a-x", pos=1)
text(x[3], y[3], "0", cex=4, col="gold")

Binary file not shown.

After

Width:  |  Height:  |  Size: 18 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 56 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 43 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 44 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 56 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 221 KiB

View File

@ -0,0 +1,986 @@
# The first and second derivatives
This section uses these add-on packages:
```julia
using CalculusWithJulia
using Plots
using SymPy
using Roots
```
```julia; echo=false; results="hidden"
using CalculusWithJulia.WeaveSupport
const frontmatter = (
title = "The first and second derivatives",
description = "Calculus with Julia: The first and second derivatives",
tags = ["CalculusWithJulia", "derivatives", "the first and second derivatives"],
);
nothing
```
----
This section explores properties of a function, ``f(x)``, that are described by properties of its first and second derivatives, ``f'(x)`` and ``f''(x)``. As part of the conversation two tests are discussed that characterize when a critical point is a relative maximum or minimum. (We know that any relative maximum or minimum occurs at a critical point, but it is not true that *any* critical point will be a relative maximum or minimum.)
## Positive or increasing on an interval
We start with some vocabulary:
> A function $f$ is **positive** on an interval $I$ if for any $a$ in $I$ it must be that $f(a) > 0$.
Of course, we define *negative* in a parallel manner. The intermediate value theorem says a continuous function can not change from positive to negative without crossing $0$. This is not the case for functions with jumps, of course.
Next,
> A function, $f$, is (strictly) **increasing** on an interval $I$ if for any $a < b$ it must be that $f(a) < f(b)$.
The word *strictly* refers to the use of $<$, which precludes the possibility of a function being flat over an interval, something the $\leq$ inequality would allow.
A parallel definition with $a < b$ implying $f(a) > f(b)$ would be used for a *strictly decreasing* function.
We can try and prove these properties for a function algebraically --
we'll see both are related to the zeros of some function. However,
before proceeding to that it is usually helpful to get an idea of
where the answer is using exploratory graphs.
We will use a helper function, `plotif(f, g, a, b)` that plots the function `f` over `[a,b]` coloring it red when `g` is positive (and blue otherwise).
Such a function is defined for us in the accompanying `CalculusWithJulia` package, which has previously been loaded.
To see where a function is positive, we simply pass the function
object in for *both* `f` and `g` above. For example, let's look at
where $f(x) = \sin(x)$ is positive:
```julia; hold=true;
f(x) = sin(x)
plotif(f, f, -2pi, 2pi)
```
Let's graph with `cos` in the masking spot and see what happens:
```julia;
plotif(sin, cos, -2pi, 2pi)
```
Maybe surprisingly, we see that the increasing parts of the sine curve are now
highlighted. Of course, the cosine is the derivative of the sine
function; we now discuss why this is no coincidence.
For the sequel, we will use `f'` notation to find numeric derivatives, with the notation being defined in the `CalculusWithJulia` package using the `ForwardDiff` package.
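The `'` notation is not defined for functions in base `Julia`; a minimal sketch of how such a definition might look, calling `ForwardDiff` directly (the package's actual definition may differ in details):

```julia
using ForwardDiff

# Make u' mean the numeric derivative of u -- a sketch of the package's approach.
# (Overloading Base.adjoint for all functions is what allows the prime notation.)
Base.adjoint(f::Function) = x -> ForwardDiff.derivative(f, x)

u(x) = x^3
u'(2), u''(2)   # the exact values are 3x^2 = 12 and 6x = 12
```

Nesting works because `ForwardDiff` supports differentiating through its own dual numbers.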
## The relationship of the derivative and increasing
The derivative, $f'(x)$, computes the slope of the tangent line to the
graph of $f(x)$ at the point $(x,f(x))$. If the derivative is
positive, the tangent line will have a positive slope. Clearly if
we see an increasing function and mentally layer on a tangent line, it will
have a positive slope. Intuitively then, increasing functions and
positive derivatives are related concepts. But there are some
technicalities.
Suppose $f(x)$ has a derivative on $I$. Then
> If $f'(x)$ is positive on an interval $I=(a,b)$, then $f(x)$ is strictly increasing on $I$.
Meanwhile,
> If a function $f(x)$ is increasing on $I$, then $f'(x) \geq 0$.
The technicality is the equality part. In the second statement, we
only have that the derivative is non-negative, as we can't guarantee it is
positive, even if we consider just strictly increasing functions.
We can see by the example of $f(x) = x^3$ that strictly increasing
functions can have a zero derivative, at a point.
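A quick numeric check of this example, using `ForwardDiff` directly for the derivative:

```julia
using ForwardDiff

w(x) = x^3
wp(x) = ForwardDiff.derivative(w, x)

wp(0)                     # 0.0 -- the derivative vanishes at 0 ...
w(-0.1) < w(0) < w(0.1)   # true -- ... yet w is still strictly increasing through 0
```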
The mean value theorem provides the reasoning behind the first statement: on
$I$, the slope of any secant line between $d < e$ (both in $I$) is matched by the slope of some
tangent line, which by assumption will always be positive. If the
secant line slope is written as $(f(e) - f(d))/(e - d)$ with $d < e$,
then it is clear then that $f(e) - f(d) > 0$, or $d < e$ implies $f(d) < f(e)$.
The second part follows from the secant line equation. The derivative
can be written as a limit of secant-line slopes, each of which is
positive. The limit of positive things can only be non-negative,
though there is no guarantee the limit will be positive.
So, to visualize where a function is increasing, we can just pass in
the derivative as the masking function in our `plotif` function, as long as we are wary about places with $0$ derivative (flat spots).
For example, here, with a more complicated function, the intervals where the function is
increasing are highlighted by passing in the function's derivative to `plotif`:
```julia; hold=true;
f(x) = sin(pi*x) * (x^3 - 4x^2 + 2)
plotif(f, f', -2, 2)
```
### First derivative test
When a function changes from increasing to decreasing, or decreasing to increasing, it will have a peak or a valley. More formally, such points are relative extrema.
When discussing the mean value theorem, we defined *relative
extrema*:
> * The function $f(x)$ has a *relative maximum* at $c$ if the value $f(c)$ is an *absolute maximum* for some *open* interval containing $c$.
> * Similarly, ``f(x)`` has a *relative minimum* at ``c`` if the value ``f(c)`` is an absolute minimum for *some* open interval about ``c``.
We know since [Fermat](http://tinyurl.com/nfgz8fz) that:
> Relative maxima and minima *must* occur at *critical* points.
Fermat says that *critical points* -- where the function is defined, but its derivative is either ``0`` or undefined -- are *interesting* points, however:
> A critical point need not indicate a relative maxima or minima.
Again, $f(x)=x^3$ provides the example at $x=0$. This is a critical point, but clearly not a
relative maximum or minimum - it is just a slight pause for a
strictly increasing function.
This leaves the question:
> When will a critical point correspond to a relative maximum or minimum?
This question can be answered by considering the first derivative.
> *The first derivative test*: If $c$ is a critical point for $f(x)$ and
> *if* $f'(x)$ changes sign at $x=c$, then $f(c)$ will be either a
> relative maximum or a relative minimum.
> * It will be a relative maximum if the derivative changes sign from $+$ to $-$.
> * It will be a relative minimum if the derivative changes sign from $-$ to $+$.
> * If $f'(x)$ does not change sign at $c$, then $f(c)$ is *not* a relative maximum or minimum.
The classification part should be clear: e.g., if the derivative is positive then
negative, the function $f$ will increase to $(c,f(c))$ then decrease
from $(c,f(c))$ -- so ``f`` will have a local maximum at ``c``.
Our definition of critical point *assumes* $f(c)$ exists, as $c$ is in
the domain of $f$. With this assumption, vertical asymptotes are
avoided. However, it need not be that $f'(c)$ exists. The absolute
value function at $x=0$ provides an example: this point is a critical
point where the derivative changes sign, but ``f'(x)`` is not defined at exactly
$x=0$. Regardless, it is guaranteed that $f(c)$ will be a relative
minimum by the first derivative test.
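For the absolute value function, the sign change can be seen numerically by evaluating the derivative on either side of ``0`` (mathematically ``f'(0)`` is undefined, though automatic differentiation may still assign a value there):

```julia
using ForwardDiff

v(x) = abs(x)
vp(x) = ForwardDiff.derivative(v, x)

vp(-0.5), vp(0.5)   # (-1.0, 1.0): sign change from - to +, so a relative minimum at 0
```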
##### Example
Consider the function $f(x) = e^{-\lvert x\rvert} \cos(\pi x)$ over $[-3,3]$:
```julia;
𝐟(x) = exp(-abs(x)) * cos(pi * x)
plotif(𝐟, 𝐟', -3, 3)
```
We can see the first derivative test in action: at the peaks and
valleys -- the relative extrema -- the color changes. This is because ``f'`` is changing sign as the function
changes from increasing to decreasing or vice versa.
This function has a critical point at ``0``, as can be seen. It corresponds to a point where the derivative does not exist. It is still identified through `find_zeros`, which picks up zeros and, in the case of discontinuous functions like `f'`, zero crossings:
```julia
find_zeros(𝐟', -3, 3)
```
##### Example
Find all the relative maxima and minima of the function $f(x) =
\sin(\pi \cdot x) \cdot (x^3 - 4x^2 + 2)$ over the interval $[-2, 2]$.
We will do so numerically. For
this task we first need to gather the critical points. As each of the
pieces of $f$ are everywhere differentiable and no quotients are
involved, the function $f$ will be everywhere differentiable. As such,
only zeros of $f'(x)$ can be critical points. We find these with
```julia;
𝒇(x) = sin(pi*x) * (x^3 - 4x^2 + 2)
𝒇cps = find_zeros(𝒇', -2, 2)
```
We should be careful though, as `find_zeros` may miss zeros that are not
simple or too close together. A critical point will correspond to a
relative maximum if the function crosses the axis, so these can not be
"pauses." As this is exactly the case we are screening for, we double
check that all the critical points are accounted for by graphing the
derivative:
```julia;
plot(𝒇', -2, 2, legend=false)
plot!(zero)
scatter!(𝒇cps, 0*𝒇cps)
```
We see the six zeros as stored in `𝒇cps` and note that at each the
function clearly crosses the $x$ axis.
From this last graph of the derivative we can also characterize the
graph of $f$: The left-most critical point coincides with a relative minimum
of $f$, as the derivative changes sign from negative to
positive. The critical points then alternate relative maximum,
relative minimum, relative maximum, relative minimum, and finally relative maximum.
##### Example
Consider the function $g(x) = \sqrt{\lvert x^2 - 1\rvert}$. Find the critical
points and characterize them as relative extrema or not.
We will apply the same approach, but need to get a handle on how large
the values can be. The function is a composition of three
functions. We should expect that the only critical points will occur
when the interior polynomial, $x^2-1$ has values of interest, which is
around the interval $(-1, 1)$. So we look to the slightly wider interval $[-2, 2]$:
```julia;
g(x) = sqrt(abs(x^2 - 1))
gcps = find_zeros(g', -2, 2)
```
We see the three values $-1$, $0$, $1$ that correspond to the two
zeros and the relative minimum of $x^2 - 1$. We could graph things,
but instead we characterize these values using a sign chart. A
piecewise continuous function can only change sign when it crosses $0$ or jumps over ``0``. The
derivative will be continuous, except possibly at the three values
above, so is piecewise continuous.
A sign chart picks convenient values between crossing points to test if the function is positive or negative over those intervals. When computing by hand, these would ideally be values for which the function is easily computed. On the computer, this isn't a concern; below the midpoint is chosen:
```julia;
pts = sort(union(-2, gcps, 2)) # this includes the endpoints (a, b) and the critical points
test_pts = pts[1:end-1] + diff(pts)/2 # midpoints of intervals between pts
[test_pts sign.(g'.(test_pts))]
```
Such values are often summarized graphically on a number line using a *sign chart*:
```julia; eval=false
- ∞ + 0 - ∞ + g'
<---- -1 ----- 0 ----- 1 ---->
```
(The values where the function is ``0`` or could jump over ``0`` are shown on the number line, and the sign between these points is indicated. So the first minus sign shows ``g'(x)`` is *negative* on ``(-\infty, -1)``, the second minus sign shows ``g'(x)`` is negative on ``(0,1)``.)
Reading this we have:
- the derivative changes sign from negative to positive at $x=-1$, so $g(x)$ will have a relative minimum.
- the derivative changes sign from positive to negative at $x=0$, so $g(x)$ will have a relative maximum.
- the derivative changes sign from negative to positive at $x=1$, so $g(x)$ will have a relative minimum.
In the `CalculusWithJulia` package there is a `sign_chart` function that will do such work for us, though with a different display:
```julia
sign_chart(g', -2, 2)
```
(This function numerically identifies ``x``-values for the specified function which are zeros, infinities, or points where the function jumps over ``0``. It then shows the resulting sign pattern of the function from left to right.)
We did this all without graphs. But, let's look at the graph of the derivative:
```julia;
plot(g', -2, 2)
```
We see asymptotes at $x=-1$ and $x=1$! These aren't zeros of $g'(x)$,
but rather where $g'(x)$ does not exist. The conclusion is correct -
each of $-1$, $0$ and $1$ are critical points with the identified characterization - but not for the
reason that they are all zeros.
```julia;
plot(g, -2, 2)
```
Finally, why does `find_zeros` find these values that are not zeros of
$g'(x)$? As discussed briefly above, it uses the bisection algorithm
on bracketing intervals to find zeros which are guaranteed by the
intermediate value theorem, but when applied to discontinuous functions, as `g'` is, it will also identify values where the function jumps over ``0``.
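A simpler illustration of this behavior uses the `Roots` package directly: ``h(x) = 1/x`` has no zeros at all, but it changes sign over ``(-1,1)``, so bisection over a bracketing interval "finds" the jump point:

```julia
using Roots

h(x) = 1/x                 # no zeros, but a sign change across x = 0
z = find_zero(h, (-1, 1))  # bisection, guaranteed by the sign change at the endpoints
abs(z) < 1e-10             # true: the "zero" found is where h jumps over 0
```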
##### Example
Consider the function $f(x) = \sin(x) - x$. Characterize the critical points.
We will work symbolically for this example.
```julia;
@syms x
fx = sin(x) - x
fp = diff(fx, x)
solve(fp)
```
We get values of $0$ and $2\pi$. Let's look at the derivative at these points:
At $x=0$, the signs to the left and right are found by
```julia;
fp(-pi/2), fp(pi/2)
```
Both are negative. The derivative does not change sign at $0$, so the critical point is neither a relative minimum nor a maximum.
What about at $2\pi$? We do something similar:
```julia;
fp(2pi - pi/2), fp(2pi + pi/2)
```
Again, both are negative. The function $f(x)$ is just decreasing near
$2\pi$, so again the critical point is neither a relative minimum nor a maximum.
A graph verifies this:
```julia;
plot(fx, -3pi, 3pi)
```
We see that at $0$ and $2\pi$ there are "pauses" as the function
decreases. We should also see that this pattern repeats. The critical
points found by `solve` are only those within a certain domain. Any
value that satisfies $\cos(x) - 1 = 0$ will be a critical point, and
there are infinitely many of these of the form $n \cdot 2\pi$ for $n$
an integer.
As a comment, the `solveset` function, which is replacing `solve`,
returns the entire collection of zeros:
```julia;
solveset(fp)
```
----
Of course, `sign_chart` also does this, only numerically. We just need to pick an interval wide enough to contain ``[0,2\pi]``:
```julia
sign_chart((x -> sin(x)-x)', -3pi, 3pi)
```
##### Example
Suppose you know $f'(x) = (x-1)\cdot(x-2)\cdot (x-3) = x^3 - 6x^2 +
11x - 6$ and $g'(x) = (x-1)\cdot(x-2)^2\cdot(x-3)^3 = x^6 -14x^5
+80x^4-238x^3+387x^2-324x+108$.
How would the graphs of $f(x)$ and $g(x)$ differ, as they share identical critical points?
The graph of $f(x)$ - a function we do not have a formula for - can have its critical points characterized by the first derivative test. As the derivative changes sign at each, all critical points correspond to relative extrema. The sign pattern is negative/positive/negative/positive so we have from left to right a relative minimum, a relative maximum, and then a relative minimum. This is consistent with a ``4``th degree polynomial with ``3`` relative extrema.
For the graph of $g(x)$ we can apply the same analysis. Thinking for a
moment, we see as the factor $(x-2)^2$ comes as a power of $2$, the
derivative of $g(x)$ will not change sign at $x=2$, so there is no
relative extreme value there. However, at $x=3$ the factor has an odd
power, so the derivative will change sign at $x=3$. So, as $g'(x)$ is
positive for large *negative* values, there will be a relative maximum
at $x=1$ and, as $g'(x)$ is positive for large *positive* values, a
relative minimum at $x=3$.
The latter is consistent with a $7$th degree polynomial with positive leading coefficient. It is intuitive that since $g'(x)$ is a $6$th degree polynomial, $g(x)$ will be a $7$th degree one, as the power rule applied to a polynomial results in a polynomial of lesser degree by one.
Here is a simple schematic that illustrates the above considerations.
```julia; eval=false
f' - 0 + 0 - 0 + f'-sign
↘ ↗ ↘ ↗ f-direction
f-shape
g' + 0 - 0 - 0 + g'-sign
↗ ↘ ↘ ↗ g-direction
∩ ~ g-shape
<------ 1 ----- 2 ----- 3 ------>
```
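The sign patterns in the schematic can be checked directly by evaluating the given derivatives at one test point per interval:

```julia
fp(x) = (x-1) * (x-2)   * (x-3)    # f'
gp(x) = (x-1) * (x-2)^2 * (x-3)^3  # g'

pts = [0.5, 1.5, 2.5, 3.5]         # one point between each pair of critical values
Int.(sign.(fp.(pts)))              # [-1, 1, -1, 1]: min, max, min for f
Int.(sign.(gp.(pts)))              # [1, -1, -1, 1]: max at 1, no change at 2, min at 3 for g
```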
## Concavity
Consider the function $f(x) = x^2$. Over this function we draw some
secant lines for a few pairs of $x$ values:
```julia; hold=true; echo=false
f(x) = x^2
seca(f,a,b) = x -> f(a) + (f(b) - f(a)) / (b-a) * (x-a)
p = plot(f, -2, 3, legend=false, linewidth=5, xlim=(-2,3), ylim=(-2, 9))
plot!(p,seca(f, -1, 2))
a,b = -1, 2; xs = range(a, stop=b, length=50)
plot!(xs, seca(f, a, b).(xs), linewidth=5)
plot!(p,seca(f, 0, 3/2))
a,b = 0, 3/2; xs = range(a, stop=b, length=50)
plot!(xs, seca(f, a, b).(xs), linewidth=5)
p
```
The graph attempts to illustrate that for this function the secant
line between any two points $a < b$ will lie above the graph over $[a,b]$.
This is a special property not shared by all functions. Let $I$ be an open interval.
> **Concave up**: A function $f(x)$ is concave up on $I$ if for any $a < b$ in $I$, the secant line between $a$ and $b$ lies above the graph of $f(x)$ over $[a,b]$.
A similar definition exists for *concave down* where the secant lines
lie below the graph. Notationally, concave up says for any $x$ in $[a,b]$:
```math
f(a) + \frac{f(b) - f(a)}{b-a} \cdot (x-a) \geq f(x) \quad\text{ (concave up) }
```
Replacing
$\geq$ with $\leq$ defines *concave down*, and with either $>$ or $<$
will add the prefix "strictly." These definitions are useful for a
general definition of
[convex functions](https://en.wikipedia.org/wiki/Convex_function).
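The defining inequality can be checked numerically for ``f(x) = x^2``; this is only a spot check at sample points, not a proof:

```julia
q(x) = x^2
a, b = -1.0, 2.0
secant(x) = q(a) + (q(b) - q(a)) / (b - a) * (x - a)

xs = range(a, stop=b, length=101)
all(secant(x) >= q(x) for x in xs)  # true: the secant line stays above the graph
```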
We won't work with these definitions in this section, rather we will characterize
concavity for functions which have either a first or second
derivative:
> * If $f'(x)$ exists and is *increasing* on $(a,b)$, then $f(x)$ is concave up on $(a,b)$.
> * If ``f'(x)`` is *decreasing* on ``(a,b)``, then ``f(x)`` is concave *down*.
A proof of this makes use of the same trick used to establish the mean
value theorem from Rolle's theorem. Assume ``f'`` is increasing and let
$g(x) = f(x) - (f(a) + M \cdot (x-a))$, where $M$ is the slope of
the secant line between $a$ and $b$. By construction $g(a) = g(b) =
0$. If $f'(x)$ is increasing, then so is $g'(x) = f'(x) - M$. By the
definition above, showing ``f`` is concave up is the same as showing $g(x) \leq
0$ on $[a,b]$. Suppose to the contrary that there is a value where $g(x) > 0$ in
$[a,b]$. We show this can't be. Assuming $g'(x)$ always exists, after
some work, Rolle's theorem will ensure there is a value $c$ where $g'(c) =
0$ and $(c,g(c))$ is a relative maximum, and as we know there is at
least one positive value, it must be that $g(c) > 0$. The first derivative
test then ensures that $g'(x)$ is positive to the left of $c$ and
negative to the right of $c$, since $c$ is at a critical point and not
an endpoint. But this can't happen, as $g'(x)$ is assumed to be
increasing on the interval.
The relationship between increasing functions and their derivatives -- if $f'(x) > 0 $ on $I$, then ``f`` is increasing on $I$ --
gives this second characterization of concavity when the second
derivative exists:
> * If $f''(x)$ exists and is positive on $I$, then $f(x)$ is concave up on $I$.
> * If $f''(x)$ exists and is negative on $I$, then $f(x)$ is concave down on $I$.
This follows, as we can think of $f''(x)$ as just the first derivative
of the function $f'(x)$, so the assumption will force $f'(x)$ to exist and be
increasing, and hence $f(x)$ to be concave up.
##### Example
Let's look at the function $x^2 \cdot e^{-x}$ for positive $x$. A
quick graph shows the function is concave up, then down, then up in
the region plotted:
```julia;
h(x) = x^2 * exp(-x)
plotif(h, h'', 0, 8)
```
From the graph, we would expect that the second derivative - which is continuous - would have two zeros on $[0,8]$:
```julia;
ips = find_zeros(h'', 0, 8)
```
As well, between the zeros we should have the sign pattern `+`, `-`, and `+`, as we verify:
```julia;
sign_chart(h'', 0, 8)
```
### Second derivative test
Concave up functions are "opening" up, and often clearly $U$-shaped, though that is not necessary. At a
relative minimum, where there is a ``U``-shape, the graph will be concave up; conversely
at a relative maximum, where the graph has a downward ``\cap``-shape, the function will be concave down. This observation becomes:
> The **second derivative test**: If $c$ is a critical point of $f(x)$
> with $f''(c)$ existing in a neighborhood of $c$, then
> * The value $f(c)$ will be a relative maximum if $f''(c) < 0$,
> * The value $f(c)$ will be a relative minimum if $f''(c) > 0$, and
> * *if* ``f''(c) = 0`` the test is *inconclusive*.
Why is this so? If $f''$ is positive in an interval about $c$, the function is
concave up near $x=c$. In turn, concave up implies the derivative is increasing,
so $f'$ must go from negative to positive at the critical point, making $f(c)$ a relative minimum.
The second derivative test is **inconclusive** when $f''(c)=0$. No such
general statement exists, as there isn't enough information. For
example, the function $f(x) = x^3$ has $0$ as a critical point,
$f''(0)=0$, and the value does not correspond to a relative maximum or minimum. On the
other hand, $f(x)=x^4$ has $0$ as a critical point with $f''(0)=0$, yet $0$ is a
relative minimum.
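The inconclusive case can be seen numerically, with second derivatives computed by nesting `ForwardDiff` (a sketch; the package's `f''` notation does the same thing):

```julia
using ForwardDiff

# second derivative of h at x, via nested forward-mode differentiation
d2(h, x) = ForwardDiff.derivative(y -> ForwardDiff.derivative(h, y), x)

d2(x -> x^3, 0), d2(x -> x^4, 0)  # both 0, yet x^3 has no extremum at 0 while x^4 has a minimum
```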
##### Example
Use the second derivative test to characterize the critical points of $j(x) = x^5 - 2x^4 + x^3$.
```julia;
j(x) = x^5 - 2x^4 + x^3
jcps = find_zeros(j', -3, 3)
```
We can check the sign of the second derivative for each critical point:
```julia;
[jcps j''.(jcps)]
```
That $j''(0.6) < 0$ implies that at $0.6$, $j(x)$ will have a relative
maximum. As $j''(1) > 0$, the second derivative test says at $x=1$
there will be a relative minimum. That $j''(0) = 0$ says only
that there **may** be a relative maximum or minimum at $x=0$, as the second
derivative test does not speak to this situation. (This last check, requiring a function evaluation to be `0`, is susceptible to floating point errors, so isn't very robust as a general tool.)
This should be consistent with this graph, where $-0.25$ and $1.25$ are chosen to capture the zero at
$0$ and the two relative extrema:
```julia;
plotif(j, j'', -0.25, 1.25)
```
From the graph we see that $0$ **is not** a relative maximum or minimum. We could have seen this numerically by checking the first derivative test and noting there is no sign change:
```julia;
sign_chart(j', -3, 3)
```
##### Example
One way to visualize the second derivative test is to *locally* overlay on a critical point a parabola. For example, consider ``f(x) = \sin(x) + \sin(2x) + \sin(3x)`` over ``[0,2\pi]``. It has ``6`` critical points over ``[0,2\pi]``. In this graphic, we *locally* layer on ``6`` parabolas:
```julia; hold=true;
f(x) = sin(x) + sin(2x) + sin(3x)
p = plot(f, 0, 2pi, legend=false, color=:blue, linewidth=3)
cps = fzeros(f', 0, 2pi)
h = 0.5
for c in cps
parabola(x) = f(c) + (f''(c)/2) * (x-c)^2
plot!(parabola, c-h, c+h, color=:red, linewidth=5, alpha=0.6)
end
p
```
The graphic shows that for this function near the relative extrema the parabolas *approximate* the function well, so that the relative extrema are characterized by the relative extrema of the parabolas.
At each critical point ``c``, the parabolas have the form
```math
f(c) + \frac{f''(c)}{2}(x-c)^2.
```
The ``2`` is a mystery to be answered in the section on [Taylor series](../taylor_series_polynomials.html), the focus here is on the *sign* of ``f''(c)``:
* if ``f''(c) > 0`` then the approximating parabola opens upward and the critical point is a point of relative minimum for ``f``,
* if ``f''(c) < 0`` then the approximating parabola opens downward and the critical point is a point of relative maximum for ``f``, and
* were ``f''(c) = 0`` then the approximating parabola is just a line -- the tangent line at a critical point -- and is non-informative about extrema.
That is, the parabola picture is just the second derivative test in this light.
### Inflection points
An inflection point is a value where the *second* derivative of $f$
changes sign. At an inflection point the derivative will change from
increasing to decreasing (or vice versa) and the function will change
from concave up to down (or vice versa).
We can use the `find_zeros` function to identify potential inflection
points by passing in the second derivative function. For example,
consider the bell-shaped function
```math
k(x) = e^{-x^2/2}.
```
A graph suggests a relative maximum at $x=0$, a horizontal asymptote of $y=0$,
and two inflection points:
```julia;
k(x) = exp(-x^2/2)
plotif(k, k'', -3, 3)
```
The inflection points can be found directly, if desired, or numerically with:
```julia;
find_zeros(k'', -3, 3)
```
(The `find_zeros` function may return points which are not inflection points. It primarily returns points where $k''(x)$ changes sign, but *may* also find points where $k''(x)$ is $0$ yet does not change sign at $x$.)
##### Example
A car travels from a stop for 1 mile in 2 minutes. A graph of its
position as a function of time might look like any of these graphs:
```julia; hold=true; echo=false
v(t) = 30/60*t
w(t) = t < 1/2 ? 0.0 : (t > 3/2 ? 1.0 : (t-1/2))
y(t) = 1 / (1 + exp(-t))
y1(t) = y(2(t-1))
y2(t) = y1(t) - y1(0)
y3(t) = 1/y2(2) * y2(t)
plot(v, 0, 2, label="f1")
plot!(w, label="f2")
plot!(y3, label="f3")
```
All three graphs have the same *average* velocity, which is just
$1/2$ mile per minute (``30`` miles an hour). But the instantaneous
velocity - which is given by the derivative of the position function -
varies.
The graph `f1` has constant velocity, so the position is a straight
line with slope $v_0$. The graph `f2` is similar, though for the first and
last 30 seconds the car does not move, so it must move faster during the
time it does move. A more realistic graph would be `f3`. The position
increases continuously, as do the others, but the velocity changes
more gradually. The initial velocity is less than $v_0$, but
eventually gets to be more than $v_0$, then the velocity starts to
increase less. And at no point does `f3` stop moving, the way `f2`
does after a minute and a half.
The rate of change of the velocity is the acceleration. For `f1` this
is zero, for `f2` it is zero as well - when it is defined. However,
for `f3` we see the increase in velocity is positive in the first
minute, but negative in the second minute. This fact relates to the
concavity of the graph. As acceleration is the derivative of velocity,
it is the second derivative of position - the graph we see. Where the
acceleration is *positive*, the position graph will be concave *up*,
where the acceleration is *negative* the graph will be concave
*down*. The point $t=1$ is an inflection point, and
would be felt by most riders.
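The concavity claims for a smooth position function like `f3` can be checked numerically. Here a rescaled logistic stands in for `f3` (a hypothetical stand-in matching the figure's construction), with acceleration computed as a nested second derivative:

```julia
using ForwardDiff

y(t)  = 1 / (1 + exp(-t))                      # logistic
f3(t) = (y(2(t-1)) - y(-2)) / (y(2) - y(-2))   # rescaled so f3(0) = 0 and f3(2) = 1

accel(t) = ForwardDiff.derivative(s -> ForwardDiff.derivative(f3, s), t)

sign(accel(0.5)), sign(accel(1.5))  # (1.0, -1.0): concave up then down, inflection at t = 1
```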
## Questions
###### Question
Consider this graph:
```julia;
plot(airyai, -5, 0) # airyai in `SpecialFunctions` loaded with `CalculusWithJulia`
```
On what intervals (roughly) is the function positive?
```julia; hold=true; echo=false
choices=[
"``(-3.2,-1)``",
"``(-5, -4.2)``",
"``(-5, -4.2)`` and ``(-2.5, 0)``",
"``(-4.2, -2.5)``"]
ans = 3
radioq(choices, ans)
```
###### Question
Consider this graph:
```julia; hold=true; echo=false
import SpecialFunctions: besselj
p = plot(x->besselj(x, 1), -5,-3)
```
On what intervals (roughly) is the function negative?
```julia; hold=true; echo=false
choices=[
"``(-5.0, -4.0)``",
"``(-25.0, 0.0)``",
"``(-5.0, -4.0)`` and ``(-4, -3)``",
"``(-4.0, -3.0)``"]
ans = 4
radioq(choices, ans)
```
###### Question
Consider this graph
```julia; hold=true; echo=false
plot(x->besselj(x, 21), -5,-3)
```
On what interval(s) is this function increasing?
```julia; hold=true; echo=false
choices=[
"``(-5.0, -3.8)``",
"``(-3.8, -3.0)``",
"``(-4.7, -3.0)``",
"``(-0.17, 0.17)``"
]
ans = 3
radioq(choices, ans)
```
###### Question
Consider this graph
```julia; hold=true; echo=false
p = plot(x -> 1 / (1+x^2), -3, 3)
```
On what interval(s) is this function concave up?
```julia; hold=true; echo=false
choices=[
"``(0.1, 1.0)``",
"``(-3.0, 3.0)``",
"``(-0.6, 0.6)``",
" ``(-3.0, -0.6)`` and ``(0.6, 3.0)``"
]
ans = 4
radioq(choices, ans)
```
###### Question
If it is known that:
* A function $f(x)$ has critical points at $x=-1, 0, 1$
* at $-2$ and $-1/2$ the values are: $f'(-2) = 1$ and $f'(-1/2) = -1$.
What can be concluded?
```julia; hold=true; echo=false
choices = [
"Nothing",
"That the critical point at ``-1`` is a relative maximum",
"That the critical point at ``-1`` is a relative minimum",
"That the critical point at ``0`` is a relative maximum",
"That the critical point at ``0`` is a relative minimum"
]
ans = 2
radioq(choices, ans, keep_order=true)
```
###### Question
Mystery function $f(x)$ has $f'(2) = 0$ and $f''(0) = 2$. What is the *most* you can say about $x=2$?
```julia; hold=true; echo=false
choices = [
" ``f(x)`` is continuous at ``2``",
" ``f(x)`` is continuous and differentiable at ``2``",
" ``f(x)`` is continuous and differentiable at ``2`` and has a critical point",
" ``f(x)`` is continuous and differentiable at ``2`` and has a critical point that is a relative minimum by the second derivative test"
]
ans = 3
radioq(choices, ans, keep_order=true)
```
###### Question
Find the smallest critical point of $f(x) = x^3 e^{-x}$.
```julia; hold=true; echo=false
f(x)= x^3*exp(-x)
cps = find_zeros(D(f), -5, 10)
val = minimum(cps)
numericq(val)
```
###### Question
How many critical points does $f(x) = x^5 - x + 1$ have?
```julia; hold=true; echo=false
f(x) = x^5 - x + 1
cps = find_zeros(D(f), -3, 3)
val = length(cps)
numericq(val)
```
###### Question
How many inflection points does $f(x) = x^5 - x + 1$ have?
```julia; hold=true; echo=false
f(x) = x^5 - x + 1
cps = find_zeros(D(f,2), -3, 3)
val = length(cps)
numericq(val)
```
###### Question
At $c$, $f'(c) = 0$ and $f''(c) = 1 + c^2$. Is $(c,f(c))$ a relative maximum? ($f$ is a "nice" function.)
```julia; hold=true; echo=false
choices = [
"No, it is a relative minimum",
"No, the second derivative test is possibly inconclusive",
"Yes"
]
ans = 1
radioq(choices, ans)
```
###### Question
At $c$, $f'(c) = 0$ and $f''(c) = c^2$. Is $(c,f(c))$ a relative minimum? ($f$ is a "nice" function.)
```julia; hold=true; echo=false
choices = [
"No, it is a relative maximum",
"No, the second derivative test is possibly inconclusive if ``c=0``, but otherwise yes",
"Yes"
]
ans = 2
radioq(choices, ans)
```
###### Question
```julia; hold=true; echo=false
f(x) = exp(-x) * sin(pi*x)
plot(D(f), 0, 3)
```
The graph shows $f'(x)$. Is it possible that $f(x) = e^{-x} \sin(\pi x)$?
```julia; hold=true; echo=false
yesnoq(true)
```
(Plot ``f(x)`` and compare features like critical points and increasing/decreasing behavior to those indicated by the graph of ``f'``.)
###### Question
```julia; hold=true; echo=false
f(x) = x^4 - 3x^3 - 2x + 4
plot(D(f,2), -2, 4)
```
The graph shows $f'(x)$. Is it possible that $f(x) = x^4 - 3x^3 - 2x + 4$?
```julia; hold=true; echo=false
yesnoq("no")
```
###### Question
```julia; hold=true; echo=false
f(x) = (1+x)^(-2)
plot(D(f,2), 0,2)
```
The graph shows $f''(x)$. Is it possible that $f(x) = (1+x)^{-2}$?
```julia; hold=true; echo=false
yesnoq("yes")
```
###### Question
```julia; hold=true; echo=false
f_p(x) = (x-1)*(x-2)^2*(x-3)^2
plot(f_p, 0.75, 3.5)
```
This plot shows the graph of $f'(x)$. What is true about the critical points and their characterization?
```julia; hold=true; echo=false
choices = [
"The critical points are at ``x=1`` (a relative minimum), ``x=2`` (not a relative extrema), and ``x=3`` (not a relative extrema).",
"The critical points are at ``x=1`` (a relative maximum), ``x=2`` (not a relative extrema), and ``x=3`` (not a relative extrema).",
"The critical points are at ``x=1`` (a relative minimum), ``x=2`` (not a relative extrema), and ``x=3`` (a relative minimum).",
"The critical points are at ``x=1`` (a relative minimum), ``x=2`` (a relative minimum), and ``x=3`` (a relative minimum).",
]
ans=1
radioq(choices, ans)
```
##### Question
You know $f''(x) = (x-1)^3$. What do you know about $f(x)$?
```julia; hold=true; echo=false
choices = [
"The function is concave down over ``(-\\infty, 1)`` and concave up over ``(1, \\infty)``",
"The function is decreasing over ``(-\\infty, 1)`` and increasing over ``(1, \\infty)``",
"The function is negative over ``(-\\infty, 1)`` and positive over ``(1, \\infty)``",
]
ans = 1
radioq(choices, ans)
```
##### Question
While driving we accelerate to get through a light before it turns red. However, at time $t_0$ a car cuts in front of us and we are forced to brake. If $s(t)$ represents position, what is $t_0$?
```julia; hold=true; echo=false
choices = ["A zero of the function",
"A critical point for the function",
"An inflection point for the function"]
ans = 3
radioq(choices, ans, keep_order=true)
```
###### Question
The [investopedia](https://www.investopedia.com/terms/i/inflectionpoint.asp) website describes:
"An **inflection point** is an event that results in a significant change in the progress of a company, industry, sector, economy, or geopolitical situation and can be considered a turning point after which a dramatic change, with either positive or negative results, is expected to result."
This accurately summarizes how the term is used outside of math books. Does it also describe how the term is used *inside* math books?
```julia; hold=true; echo=false
choices = ["Yes. Same words, same meaning",
"""No, but it is close. An inflection point is when the *acceleration* changes from positive to negative, so if "results" are about how a company's rate of change is changing, then it is in the ballpark."""]
radioq(choices, 2)
```
###### Question
The function ``f(x) = x^3 + x^4`` has a critical point at ``0`` and a second derivative of ``0`` at ``x=0``. Without resorting to the first derivative test, and only considering that *near* ``x=0`` the function ``f(x)`` is essentially ``x^3``, as ``f(x) = x^3(1+x)``, what can you say about whether the critical point is a relative extrema?
```julia; hold=true; echo=false
choices = ["As ``x^3`` has no extrema at ``x=0``, neither will ``f``",
"As ``x^4`` is of higher degree than ``x^3``, ``f`` will be ``U``-shaped, as ``x^4`` is."]
radioq(choices, 1)
```
# L'Hospital's Rule
This section uses these add-on packages:
```julia
using CalculusWithJulia
using Plots
using SymPy
```
```julia; echo=false; results="hidden"
using CalculusWithJulia.WeaveSupport
using Roots
fig_size=(600, 400)
const frontmatter = (
title = "L'Hospital's Rule",
description = "Calculus with Julia: L'Hospital's Rule",
tags = ["CalculusWithJulia", "derivatives", "l'hospital's rule"],
);
nothing
```
----
Let's return to limits of the form ``\lim_{x \rightarrow c}f(x)/g(x)`` which have an
indeterminate form of ``0/0`` if both are evaluated at ``c``. The typical
example being the limit considered by Euler:
```math
\lim_{x\rightarrow 0} \frac{\sin(x)}{x}.
```
We know this is ``1`` using a bound from geometry, but might also guess
this is one, as we know from linearization near ``0`` that we have ``\sin(x) \approx x`` or, more specifically:
```math
\sin(x) = x - \sin(\xi)x^2/2, \quad 0 < \xi < x.
```
This would yield:
```math
\lim_{x \rightarrow 0} \frac{\sin(x)}{x} = \lim_{x\rightarrow 0} \frac{x -\sin(\xi) x^2/2}{x} = \lim_{x\rightarrow 0} 1 + \sin(\xi) \cdot x/2 = 1.
```
This is because we know ``\sin(\xi) x/2`` has a limit of ``0``, when ``|\xi| \leq |x|``.
That doesn't look any easier, as we must worry about the error term, but
if we just mentally replace ``\sin(x)`` with ``x`` - which it basically is
near ``0`` - then we can see that the limit should be the same as that of ``x/x``,
which we know is ``1`` without thinking.
Basically, we found that in terms of limits, if both ``f(x)`` and ``g(x)``
are ``0`` at ``c``, then we *might* be able to just take the limit of
``(f(c) + f'(c) \cdot(x-c)) / (g(c) + g'(c) \cdot (x-c))``, which is just
``f'(c)/g'(c)``.
Wouldn't that be nice? We could find difficult limits just by
differentiating the top and the bottom at ``c`` (and not use the messy quotient rule).
Well, in fact that is more or less true, a fact that dates back to
[L'Hospital](http://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule) -
who wrote the first textbook on differential calculus - though this result is
likely due to one of the Bernoulli brothers.
> *L'Hospital's rule*: Suppose:
> * that ``\lim_{x\rightarrow c+} f(x) =0`` and ``\lim_{x\rightarrow c+} g(x) =0``,
> * that ``f`` and ``g`` are differentiable in ``(c,b)``, and
> * that ``g'(x)`` exists and is non-zero for *all* ``x`` in ``(c,b)``,
> then **if** the following limit exists:
> ``\lim_{x\rightarrow c+}f'(x)/g'(x)=L`` it follows that
> ``\lim_{x \rightarrow c+}f(x)/g(x) = L``.
That is *if* the right limit of ``f(x)/g(x)`` is indeterminate of the form ``0/0``,
but the right limit of ``f'(x)/g'(x)`` is known, possibly by simple
continuity, then the right limit of ``f(x)/g(x)`` exists and is equal to that
of ``f'(x)/g'(x)``.
The rule equally applies to *left limits* and *limits* at ``c``. Later we will see there are other generalizations.
To apply this rule to Euler's example, ``\sin(x)/x``, we just need to consider that:
```math
L = 1 = \lim_{x \rightarrow 0}\frac{\cos(x)}{1},
```
So, as well, ``\lim_{x \rightarrow 0} \sin(x)/x = 1``.
This is due to ``\cos(x)`` being continuous at ``0``, so this limit is
just ``\cos(0)/1``. (More informatively, the tangent-line expansion of
``\sin(x)`` at ``0`` is ``\sin(0) + \cos(0)x``, so ``\cos(0)`` is why
this answer is as it is; that expansion is just ``\sin(x) \approx x``,
with ``\cos(0)`` appearing as the coefficient.)
```julia; echo=false
note("""
In [Gruntz](http://www.cybertester.com/data/gruntz.pdf), in a reference attributed to Speiss, we learn that L'Hospital was a French Marquis who was taught in ``1692`` the calculus of Leibniz by Johann Bernoulli. They made a contract obliging Bernoulli to leave his mathematical inventions to L'Hospital in exchange for a regular compensation. This result was discovered in ``1694`` and appeared in L'Hospital's book of ``1696``.
"""; title="Bernoulli-de l'Hospital")
```
##### Examples
- Consider this limit at ``0``: ``(a^x - 1)/x``. We have ``f(x) =a^x-1`` has
``f(0) = 0``, so this limit is indeterminate of the form ``0/0``. The
derivative of ``f(x)`` is ``f'(x) = a^x \log(a)`` which has ``f'(0) = \log(a)``.
The derivative of the bottom is also ``1`` at ``0``, so we have:
```math
\log(a) = \frac{\log(a)}{1} = \frac{f'(0)}{g'(0)} = \lim_{x \rightarrow 0}\frac{f'(x)}{g'(x)} = \lim_{x \rightarrow 0}\frac{f(x)}{g(x)}
= \lim_{x \rightarrow 0}\frac{a^x - 1}{x}.
```
```julia; echo=false
note("""Why rewrite in the "opposite" direction? Because the theorem's result -- ``L`` is the limit -- is only true if the related limit involving the derivative exists. We don't do this in the following, but did so here to emphasize the need for the limit of the ratio of the derivatives to exist.
""")
```
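A numeric check of this example, with the hypothetical choice ``a=2``, shows the ratio approaching ``\log(2) \approx 0.6931``:

```julia
# The ratio (aˣ - 1)/x for a = 2 tends to log(2) as x → 0
a = 2
quot(x) = (a^x - 1) / x
vals = [quot(x) for x in (0.1, 0.01, 0.001)]
```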
- Consider this limit:
```math
\lim_{x \rightarrow 0} \frac{e^x - e^{-x}}{x}.
```
It too is of the indeterminate form ``0/0``. The derivative of the top
is ``e^x + e^{-x}``, which is ``2`` when ``x=0``, so the ratio of
``f'(0)/g'(0)`` is seen to be ``2``. By continuity, the limit of the ratio of the derivatives is ``2``. Then by L'Hospital's rule, the limit above is
``2``.
- Sometimes, L'Hospital's rule must be applied twice. Consider this
limit:
```math
\lim_{x \rightarrow 0} \frac{1 - \cos(x)}{x^2}
```
By L'Hospital's rule *if* the following limit exists, the two will be equal:
```math
\lim_{x \rightarrow 0} \frac{\sin(x)}{2x}.
```
But if we didn't guess the answer, we see that this new problem is *also* indeterminate
of the form ``0/0``. So, repeating the process, this new limit will exist and be equal to the following
limit, should it exist:
```math
\lim_{x \rightarrow 0} \frac{\cos(x)}{2} = 1/2.
```
As ``L = 1/2`` for this related limit, it must also be the limit of the original problem, by L'Hospital's rule.
- Our "intuitive" limits can bump into issues. Take for example the limit of ``(\sin(x)-x)/x^2`` as ``x`` goes to ``0``. Using ``\sin(x) \approx x`` makes this look like ``0/x^2`` which is still indeterminate. (Because the difference is higher order than ``x``.) Using L'Hospitals, says this limit will exist (and be equal) if the following one does:
```math
\lim_{x \rightarrow 0} \frac{\cos(x) - 1}{2x}.
```
This particular limit is indeterminate of the form ``0/0``, so we again try L'Hospital's rule and consider
```math
\lim_{x \rightarrow 0} \frac{-\sin(x)}{2} = 0
```
So as this limit exists, working backwards, the original limit in question will also be ``0``.
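As a numeric check, ``(\sin(x)-x)/x^2`` does head to ``0`` (roughly like ``-x/6``, from the error term):

```julia
# The ratio (sin(x) - x)/x² shrinks toward 0 as x → 0
g(x) = (sin(x) - x) / x^2
vals = [g(x) for x in (0.1, 0.01, 0.001)]
```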
- This example comes from the Wikipedia page. It "proves" a discrete approximation for the second derivative.
Show if ``f''(x)`` exists at ``c`` and is continuous at ``c``, then
```math
f''(c) = \lim_{h \rightarrow 0} \frac{f(c + h) - 2f(c) + f(c-h)}{h^2}.
```
This will follow from two applications of L'Hospital's rule to the
right-hand side. The first says, the limit on the right is equal to
this limit, should it exist:
```math
\lim_{h \rightarrow 0} \frac{f'(c+h) - 0 - f'(c-h)}{2h}.
```
We have to be careful, as we differentiate in the ``h`` variable, not
the ``c`` one, so the chain rule brings out the minus sign. But again,
as we still have an indeterminate form ``0/0``, this limit will equal the
following limit should it exist:
```math
\lim_{h \rightarrow 0} \frac{f''(c+h) - 0 - (-f''(c-h))}{2} =
\lim_{h \rightarrow 0}\frac{f''(c+h) + f''(c-h)}{2} = f''(c).
```
That last equality follows, as it is assumed that ``f''(x)`` exists at ``c`` and is continuous, that is, ``f''(c \pm h) \rightarrow f''(c)``.
The expression above finds use when second derivatives are numerically approximated. (The middle expression is the basis of the central-finite difference approximation to the derivative.)
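This approximation is easy to try numerically; here is a small sketch with ``f(x)=\sin(x)`` (so ``f''(x) = -\sin(x)``) at the hypothetical point ``c=1``:

```julia
# Central second-difference approximation to f''(c)
f(x) = sin(x)
c, h = 1.0, 1e-4
d2 = (f(c + h) - 2f(c) + f(c - h)) / h^2   # ≈ f''(1) = -sin(1)
```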
- L'Hospital himself was interested in this limit for ``a > 0`` ([math overflow](http://mathoverflow.net/questions/51685/how-did-bernoulli-prove-lh%C3%B4pitals-rule))
```math
\lim_{x \rightarrow a} \frac{\sqrt{2a^3\cdot x-x^4} - a\cdot(a^2\cdot x)^{1/3}}{ a - (a\cdot x^3)^{1/4}}.
```
These derivatives can be done by hand, but to avoid any minor mistakes
we utilize `SymPy` taking care to use rational numbers for the
fractional powers, so as not to lose precision through floating point
roundoff:
```julia;
@syms a::positive x::positive
f(x) = sqrt(2a^3*x - x^4) - a * (a^2*x)^(1//3)
g(x) = a - (a*x^3)^(1//4)
```
We can see that at ``x=a`` we have the indeterminate form ``0/0``:
```julia;
f(a), g(a)
```
What about the derivatives?
```julia;
fp, gp = diff(f(x),x), diff(g(x),x)
fp(x=>a), gp(x=>a)
```
Their ratio will not be indeterminate, so the limit in question is just the ratio:
```julia;
fp(x=>a) / gp(x=>a)
```
Of course, we could have just relied on `limit`, which knows about L'Hospital's rule:
```julia;
limit(f(x)/g(x), x, a)
```
## Idea behind L'Hospital's rule
A first proof of L'Hospital's rule takes advantage of Cauchy's
[generalization](http://en.wikipedia.org/wiki/Mean_value_theorem#Cauchy.27s_mean_value_theorem)
of the mean value theorem to two functions. Suppose ``f(x)`` and ``g(x)`` are
continuous on ``[c,b]`` and differentiable on ``(c,b)``. On
``(c,x)``, ``c < x < b``, there exists a ``\xi`` with ``f'(\xi) \cdot (g(x) - g(c)) =
g'(\xi) \cdot (f(x) - f(c))``. In our formulation, both ``f(c)`` and ``g(c)``
are zero, so we have, provided we know that ``g(x)`` is non-zero, that
``f(x)/g(x) = f'(\xi)/g'(\xi)`` for some ``\xi``, ``c < \xi < x``. That
the right-hand side has a limit as ``x \rightarrow c+`` is true by the
assumption that the limit of the ratio of the derivatives exists. (The ``\xi``
part can be removed by considering it as a composition of a function
going to ``c``.) Thus the right limit of the ratio ``f/g`` is
known.
----
```julia; hold=true; echo=false; cache=true
## {{{lhopitals_picture}}}
function lhopitals_picture_graph(n)
g = (x) -> sqrt(1 + x) - 1 - x^2
f = (x) -> x^2
ts = range(-1/2, stop=1/2, length=50)
a, b = 0, 1/2^n * 1/2
m = (f(b)-f(a)) / (g(b)-g(a))
## get bounds
tl = (x) -> g(0) + m * (x - f(0))
lx = max(fzero(x -> tl(x) - (-0.05),-1000, 1000), -0.6)
rx = min(fzero(x -> tl(x) - (0.25),-1000, 1000), 0.2)
xs = [lx, rx]
ys = map(tl, xs)
plt = plot(g, f, -1/2, 1/2, legend=false, size=fig_size, xlim=(-.6, .5), ylim=(-.1, .3))
plot!(plt, xs, ys, color=:orange)
scatter!(plt, [g(a),g(b)], [f(a),f(b)], markersize=5, color=:orange)
plt
end
caption = L"""
Geometric interpretation of ``L=\lim_{x \rightarrow 0} x^2 / (\sqrt{1 +
x} - 1 - x^2)``. At ``0`` this limit is indeterminate of the form
``0/0``. The value for a fixed ``x`` can be seen as the slope of a secant
line of a parametric plot of the two functions, plotted as ``(g,
f)``. In this figure, the limiting "tangent" line has ``0`` slope,
corresponding to the limit ``L``. In general, L'Hospital's rule is
nothing more than a statement about slopes of tangent lines.
"""
n = 6
anim = @animate for i=1:n
lhopitals_picture_graph(i)
end
imgfile = tempname() * ".gif"
gif(anim, imgfile, fps = 1)
plotly()
ImageFile(imgfile, caption)
```
## Generalizations
L'Hospital's rule generalizes to other indeterminate forms, in
particular the indeterminate form ``\infty/\infty`` can be proved at the same time as ``0/0``
with a more careful
[proof](http://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule#General_proof).
The value ``c`` in the limit can also be infinite. Consider this case with ``c=\infty``:
```math
\begin{align*}
\lim_{x \rightarrow \infty} \frac{f(x)}{g(x)} &=
\lim_{x \rightarrow 0} \frac{f(1/x)}{g(1/x)}
\end{align*}
```
L'Hospital's limit applies as ``x \rightarrow 0``, so we differentiate to get:
```math
\begin{align*}
\lim_{x \rightarrow 0} \frac{[f(1/x)]'}{[g(1/x)]'}
&= \lim_{x \rightarrow 0} \frac{f'(1/x)\cdot(-1/x^2)}{g'(1/x)\cdot(-1/x^2)}\\
&= \lim_{x \rightarrow 0} \frac{f'(1/x)}{g'(1/x)}\\
&= \lim_{x \rightarrow \infty} \frac{f'(x)}{g'(x)},
\end{align*}
```
*assuming* the latter limit exists, L'Hospital's rule assures the equality
```math
\lim_{x \rightarrow \infty} \frac{f(x)}{g(x)} =
\lim_{x \rightarrow \infty} \frac{f'(x)}{g'(x)}.
```
##### Examples
For example, consider
```math
\lim_{x \rightarrow \infty} \frac{x}{e^x}.
```
We see it is of the form ``\infty/\infty``. Taking advantage of the fact that L'Hospital's rule applies to limits
at ``\infty``, we have that this limit will exist and be equal to this one,
should it exist:
```math
\lim_{x \rightarrow \infty} \frac{1}{e^x}.
```
This limit is, of course, ``0``, as it is of the form ``1/\infty``. It is not
hard to build up from here to show that for any integer value of ``n>0``
that:
```math
\lim_{x \rightarrow \infty} \frac{x^n}{e^x} = 0.
```
This is an expression of the fact that exponential functions grow faster than polynomial functions.
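Numerically, for ``n=5`` the ratio peaks (at ``x=5``, where the derivative of ``x^5e^{-x}`` vanishes) and then collapses toward ``0``:

```julia
# x⁵/eˣ eventually decays to 0: the exponential dominates
h(x) = x^5 / exp(x)
h(10), h(50), h(100)   # rapidly decreasing toward 0
```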
Similarly, powers grow faster than logarithms, as this limit shows, which is indeterminate of the form ``\infty/\infty``:
```math
\lim_{x \rightarrow \infty} \frac{\log(x)}{x} =
\lim_{x \rightarrow \infty} \frac{1/x}{1} = 0,
```
the first equality by L'Hospital's rule, as the second limit exists.
## Other indeterminate forms
Indeterminate forms of the type ``0 \cdot \infty``, ``0^0``,
``\infty^0``, ``1^\infty``, and ``\infty - \infty`` can be re-expressed to be in the
form ``0/0`` or ``\infty/\infty`` and then L'Hospital's theorem can be
applied.
###### Example: rewriting ``0 \cdot \infty``
What is the limit ``x \log(x)`` as ``x \rightarrow 0+``? The form is ``0\cdot \infty``, rewriting, we see this is just:
```math
\lim_{x \rightarrow 0+}\frac{\log(x)}{1/x}.
```
L'Hospital's rule clearly applies to one-sided limits, as well as two
(our proof sketch used one-sided limits), so this limit will equal the
following, should it exist:
```math
\lim_{x \rightarrow 0+}\frac{1/x}{-1/x^2} = \lim_{x \rightarrow 0+} -x = 0.
```
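A numeric check of ``x\log(x)`` as ``x`` shrinks confirms the limit of ``0``:

```julia
# x·log(x) heads to 0 (from below) as x → 0+
k(x) = x * log(x)
vals = [k(x) for x in (0.1, 0.01, 0.001)]
```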
###### Example: rewriting ``0^0``
What is the limit ``x^x`` as ``x \rightarrow 0+``? The expression is of the form ``0^0``, which is indeterminate. (Even though floating point math defines the value as ``1``.) We can rewrite this by taking a log:
```math
x^x = \exp(\log(x^x)) = \exp(x \log(x)) = \exp(\log(x)/(1/x)).
```
We just saw that ``\lim_{x \rightarrow 0+}\log(x)/(1/x) = 0``. So by the
rules for limits of compositions and the fact that ``e^x`` is
continuous, we see ``\lim_{x \rightarrow 0+} x^x = e^0 = 1``.
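The same conclusion can be glimpsed numerically:

```julia
# xˣ approaches 1 as x → 0+
vals = [x^x for x in (0.1, 0.01, 0.001)]
```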
##### Example: rewriting ``\infty - \infty``
A limit ``\lim_{x \rightarrow c} f(x) - g(x)`` of indeterminate form ``\infty - \infty`` can be re-expressed to be of the form ``0/0`` through the transformation:
```math
\begin{align*}
f(x) - g(x) &= f(x)g(x) \cdot (\frac{1}{g(x)} - \frac{1}{f(x)}) \\
&= \frac{\frac{1}{g(x)} - \frac{1}{f(x)}}{\frac{1}{f(x)g(x)}}.
\end{align*}
```
Applying this to
```math
L = \lim_{x \rightarrow 1} \big(\frac{x}{x-1} - \frac{1}{\log(x)}\big)
```
We get that ``L`` is equal to the following limit:
```math
\lim_{x \rightarrow 1} \frac{\log(x) - \frac{x-1}{x}}{\frac{x-1}{x} \log(x)}
=
\lim_{x \rightarrow 1} \frac{x\log(x)-(x-1)}{(x-1)\log(x)}
```
In `SymPy` we have:
```julia
𝒇 = x*log(x) - (x-1)
𝒈 = (x-1)*log(x)
𝒇(1), 𝒈(1)
```
L'Hospital's rule applies to the form ``0/0``, so we try:
```julia
𝒇 = diff(𝒇, x)
𝒈 = diff(𝒈, x)
𝒇(1), 𝒈(1)
```
Again, we get the indeterminate form ``0/0``, so we try again with second derivatives:
```julia
𝒇 = diff(𝒇, x, x)
𝒈 = diff(𝒈, x, x)
𝒇(1), 𝒈(1)
```
From this we see the limit is ``1/2``, as could have been done directly:
```julia
limit(𝒇/𝒈, x=>1)
```
## The assumptions are necessary
##### Example: the limit existing is necessary
The following limit is *easily* seen by comparing terms of largest growth:
```math
1 = \lim_{x \rightarrow \infty} \frac{x - \sin(x)}{x}
```
However, the limit of the ratio of the derivatives *does* not exist:
```math
\lim_{x \rightarrow \infty} \frac{1 - \cos(x)}{1},
```
as the function just oscillates. This shows that L'Hospital's rule does not apply when the limit of the ratio of the derivatives does not exist.
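A numeric illustration in plain `Julia`: the original ratio settles near ``1``, while the ratio of derivatives keeps cycling through ``[0,2]``:

```julia
r(x)  = (x - sin(x)) / x     # original ratio: tends to 1
rp(x) = (1 - cos(x)) / 1     # ratio of derivatives: oscillates between 0 and 2
r(1e6), rp(pi), rp(2pi)      # rp takes the values 2 and 0 over and over
```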
##### Example: the assumptions matter
This example comes from the thesis of Gruntz to highlight possible issues when computer systems do simplifications.
Consider:
```math
\lim_{x \rightarrow \infty} \frac{1/2\sin(2x) +x}{\exp(\sin(x))\cdot(\cos(x)\sin(x)+x)}.
```
If we apply L'Hospital's rule using simplification we have:
```julia
u(x) = 1//2*sin(2x) + x
v(x) = exp(sin(x))*(cos(x)*sin(x) + x)
up, vp = diff(u(x),x), diff(v(x),x)
limit(simplify(up/vp), x => oo)
```
However, this answer is incorrect. The reason is subtle: the simplification cancels a factor of ``\cos(x)`` that appears in both the numerator and denominator. Before cancellation, the denominator `vp` has infinitely many zeros as ``x`` approaches ``\infty``, so L'Hospital's rule does not apply (the limit of the ratio of derivatives won't exist, as every ``2\pi`` the ratio is undefined, so the function is never eventually close to some ``L``).
This ratio has no limit, as it oscillates, as confirmed by `SymPy`:
```julia
limit(u(x)/v(x), x=> oo)
```
## Questions
###### Question
This function ``f(x) = \sin(5x)/x`` is *indeterminate* at ``x=0``. What type?
```julia; echo=false
lh_choices = [
"``0/0``",
"``\\infty/\\infty``",
"``0^0``",
"``\\infty - \\infty``",
"``0 \\cdot \\infty``"
]
nothing
```
```julia; hold=true; echo=false
ans = 1
radioq(lh_choices, ans, keep_order=true)
```
###### Question
This function ``f(x) = \sin(x)^{\sin(x)}`` is *indeterminate* at ``x=0``. What type?
```julia; hold=true; echo=false
ans =3
radioq(lh_choices, ans, keep_order=true)
```
###### Question
This function ``f(x) = (x-2)/(x^2 - 4)`` is *indeterminate* at ``x=2``. What type?
```julia; hold=true; echo=false
ans = 1
radioq(lh_choices, ans, keep_order=true)
```
###### Question
This function ``f(x) = (g(x+h) - g(x-h)) / (2h)`` (``g`` is continuous) is *indeterminate* at ``h=0``. What type?
```julia; hold=true; echo=false
ans = 1
radioq(lh_choices, ans, keep_order=true)
```
###### Question
This function ``f(x) = x \log(x)`` is *indeterminate* at ``x=0``. What type?
```julia; hold=true; echo=false
ans = 5
radioq(lh_choices, ans, keep_order=true)
```
###### Question
Does L'Hospital's rule apply to this limit:
```math
\lim_{x \rightarrow \pi} \frac{\sin(\pi x)}{\pi x}.
```
```julia; hold=true; echo=false
choices = [
"Yes. It is of the form ``0/0``",
"No. It is not indeterminate"
]
ans = 2
radioq(choices, ans)
```
###### Question
Use L'Hospital's rule to find the limit
```math
L = \lim_{x \rightarrow 0} \frac{4x - \sin(x)}{x}.
```
What is ``L``?
```julia; hold=true; echo=false
f(x) = (4x - sin(x))/x
L = float(N(limit(f, 0)))
numericq(L)
```
###### Question
Use L'Hospital's rule to find the limit
```math
L = \lim_{x \rightarrow 0} \frac{\sqrt{1+x} - 1}{x}.
```
What is ``L``?
```julia; hold=true; echo=false
f(x) = (sqrt(1+x) - 1)/x
L = float(N(limit(f, 0)))
numericq(L)
```
###### Question
Use L'Hospital's rule *one* or more times to find the limit
```math
L = \lim_{x \rightarrow 0} \frac{x - \sin(x)}{x^3}.
```
What is ``L``?
```julia; hold=true; echo=false
f(x) = (x - sin(x))/x^3
L = float(N(limit(f, 0)))
numericq(L)
```
###### Question
Use L'Hospital's rule *one* or more times to find the limit
```math
L = \lim_{x \rightarrow 0} \frac{1 - x^2/2 - \cos(x)}{x^3}.
```
What is ``L``?
```julia; hold=true; echo=false
f(x) = (1 - x^2/2 - cos(x))/x^3
L = float(N(limit(f, 0)))
numericq(L)
```
###### Question
Use L'Hospital's rule *one* or more times to find the limit
```math
L = \lim_{x \rightarrow \infty} \frac{\log(\log(x))}{\log(x)}.
```
What is ``L``?
```julia; hold=true; echo=false
f(x) = log(log(x))/log(x)
L = N(limit(f(x), x=> oo))
numericq(L)
```
###### Question
By using a common denominator to rewrite this expression, use L'Hospital's rule to find the limit
```math
L = \lim_{x \rightarrow 0} \frac{1}{x} - \frac{1}{\sin(x)}.
```
What is ``L``?
```julia; hold=true; echo=false
f(x) = 1/x - 1/sin(x)
L = float(N(limit(f, 0)))
numericq(L)
```
##### Question
Use L'Hospital's rule to find the limit
```math
L = \lim_{x \rightarrow \infty} \log(x)/x
```
What is ``L``?
```julia; hold=true; echo=false
L = float(N(limit(log(x)/x, x=>oo)))
numericq(L)
```
##### Question
Using L'Hospital's rule, does
```math
\lim_{x \rightarrow 0+} x^{\log(x)}
```
exist?
Consider ``x^{\log(x)} = e^{\log(x)\log(x)}``.
```julia; hold=true; echo=false
yesnoq(false)
```
##### Question
Using L'Hospital's rule, find the limit of
```math
\lim_{x \rightarrow 1} (2-x)^{\tan(\pi/2 \cdot x)}.
```
(Hint: express as ``\exp(\tan(\pi/2 \cdot x) \cdot \log(2-x))`` and take the limit of the resulting exponent.)
```julia; hold=true; echo=false
choices = [
"``e^{2/\\pi}``",
"``{2\\pi}``",
"``1``",
"``0``",
"It does not exist"
]
ans =1
radioq(choices, ans)
```

View File

@ -0,0 +1,806 @@
# Linearization
This section uses these add-on packages:
```julia
using CalculusWithJulia
using Plots
using SymPy
using TaylorSeries
using DualNumbers
```
```julia; echo=false; results="hidden"
using CalculusWithJulia.WeaveSupport
const frontmatter = (
title = "Linearization",
description = "Calculus with Julia: Linearization",
tags = ["CalculusWithJulia", "derivatives", "linearization"],
);
nothing
```
----
The derivative of $f(x)$ has the interpretation as the slope of the
tangent line. The tangent line is the line that best approximates the
function at the point.
Using the point-slope form of a line, we see that the tangent line to the graph of $f(x)$ at $(c,f(c))$ is given by:
```math
y = f(c) + f'(c) \cdot (x - c).
```
This is written as an equation, though we prefer to work with
functions within `Julia`. Here we write such a function as an
operator - it takes a function `f` and returns a function
representing the tangent line.
```julia; eval=false
tangent(f, c) = x -> f(c) + f'(c) * (x - c)
```
(Recall, the `->` indicates that an anonymous function is being generated.)
This function along with the `f'` notation for automatic derivatives is defined in the
`CalculusWithJulia` package.
We make some graphs with tangent lines:
```julia;hold=true
f(x) = x^2
plot(f, -3, 3)
plot!(tangent(f, -1))
plot!(tangent(f, 2))
```
The graph shows that near the point, the line and function are close,
but this need not be the case away from the point. We can express this informally as
```math
f(x) \approx f(c) + f'(c) \cdot (x-c)
```
with the understanding this applies for $x$ "close" to $c$.
Usually for the applications herein, instead of ``x`` and ``c`` the two points are ``x+\Delta_x`` and ``x``. This gives:
> *Linearization*: ``\Delta_y = f(x +\Delta_x) - f(x) \approx f'(x) \Delta_x``, for small ``\Delta_x``.
This section gives some implications of this fact and quantifies what
"close" can mean.
##### Example
There are several approximations that are well known in physics, due to their widespread usage:
* That $\sin(x) \approx x$ around $x=0$:
```julia;hold=true
plot(sin, -pi/2, pi/2)
plot!(tangent(sin, 0))
```
Symbolically:
```julia; hold=true
@syms x
c = 0
f(x) = sin(x)
f(c) + diff(f(x),x)(c) * (x - c)
```
* That $\log(1 + x) \approx x$ around $x=0$:
```julia; hold=true;
f(x) = log(1 + x)
plot(f, -1/2, 1/2)
plot!(tangent(f, 0))
```
Symbolically:
```julia; hold=true
@syms x
c = 0
f(x) = log(1 + x)
f(c) + diff(f(x),x)(c) * (x - c)
```
(The `log1p` function implements a more accurate version of this function when numeric values are needed.)
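For instance (a small sketch), with a tiny ``x`` the naive `log(1 + x)` suffers from the rounding incurred when computing ``1+x``, while `log1p` keeps full accuracy:

```julia
# log1p avoids the cancellation error in forming 1 + x for tiny x
x = 1e-12
log(1 + x), log1p(x)   # log1p agrees with x to near machine precision
```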
* That $1/(1-x) \approx 1 + x$ around $x=0$:
```julia; hold=true;
f(x) = 1/(1-x)
plot(f, -1/2, 1/2)
plot!(tangent(f, 0))
```
Symbolically:
```julia; hold=true
@syms x
c = 0
f(x) = 1 / (1 - x)
f(c) + diff(f(x),x)(c) * (x - c)
```
* That ``(1+x)^n \approx 1 + nx`` around ``x = 0``. For example, with ``n=5``
```julia; hold=true;
n = 5
f(x) = (1+x)^n # f'(x) = n(1+x)^(n-1), so f'(0) = n
plot(f, -1/2, 1/2)
plot!(tangent(f, 0))
```
Symbolically:
```julia; hold=true
@syms x, n::real
c = 0
f(x) = (1 + x)^n
f(c) + diff(f(x),x)(x=>c) * (x - c)
```
----
In each of these cases, a more complicated non-linear function
is well approximated in a region of interest by a simple linear
function.
## Numeric approximations
```julia; hold=true; echo=false
f(x) = sin(x)
a, b = -1/4, pi/2
p = plot(f, a, b, legend=false);
plot!(p, x->x, a, b);
plot!(p, [0,1,1], [0, 0, 1], color=:brown);
plot!(p, [1,1], [0, sin(1)], color=:green, linewidth=4);
annotate!(p, collect(zip([1/2, 1+.075, 1/2-1/8], [.05, sin(1)/2, .75], ["Δx", "Δy", "m=dy/dx"])));
p
```
The plot shows the tangent line with slope $dy/dx$ and the actual
change in $y$, $\Delta y$, for some specified $\Delta x$. The small
gap above the sine curve is the error that results were the value of the sine approximated using the drawn tangent line. We can see that approximating
the value of $\Delta y = \sin(c+\Delta x) - \sin(c)$ with the often
easier-to-compute $(dy/dx) \cdot \Delta x = f'(c)\Delta x$ is not going
to be too far off, provided $\Delta x$ is not too large.
This approximation is known as linearization. It can be used both in
theoretical computations and in practical applications. To see how
effective it is, we look at some examples.
##### Example
If $f(x) = \sin(x)$, $c=0$ and $\Delta x= 0.1$ then the values for the actual change in the function values and the value of $\Delta y$ are:
```julia;
f(x) = sin(x)
c, deltax = 0, 0.1
f(c + deltax) - f(c), f'(c) * deltax
```
The values are pretty close. But what is $0.1$ radians? Let's use degrees. Suppose we have $\Delta x = 10^\circ$:
```julia;
deltax⁰ = 10*pi/180
actual = f(c + deltax⁰) - f(c)
approx = f'(c) * deltax⁰
actual, approx
```
They agree until the third decimal value. The *percentage error* is just $1/2$ a percent:
```julia;
(approx - actual) / actual * 100
```
### Relative error or relative change
The relative error is defined by
```math
\big| \frac{\text{actual} - \text{approximate}}{\text{actual}} \big|.
```
However, typically with linearization, we talk about the *relative change*, not relative error, as the denominator is easier to compute. This is
```math
\frac{f(x + \Delta_x) - f(x)}{f(x)} = \frac{\Delta_y}{f(x)} \approx
\frac{f'(x) \cdot \Delta_x}{f(x)}
```
The *percentage change* multiplies by ``100``.
##### Example
What is the relative change in surface area of a sphere if the radius changes from ``r`` to ``r + dr``?
We have ``S = 4\pi r^2`` so the approximate relative change, ``dy/S`` is given, using the derivative ``dS/dr = 8\pi r``, by
```math
\frac{8\pi\cdot r\cdot dr}{4\pi r^2} = \frac{2\,dr}{r}.
```
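A quick numeric check, with the hypothetical values ``r=2`` and ``dr=0.01``, compares the exact relative change in surface area to the linearized one:

```julia
S(r) = 4pi * r^2
r, dr = 2.0, 0.01
actual = (S(r + dr) - S(r)) / S(r)   # exact relative change
approx = 2dr / r                     # linearized relative change
actual, approx
```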
##### Example
We are traveling ``60`` miles. At ``60`` miles an hour, we will take ``60`` minutes (or one hour). How long will it take at ``70`` miles an hour? (Assume you can't divide, but, instead, can only multiply!)
Well the answer is $60/70$ hours or $60/70 \cdot 60$ minutes. But we
can't divide, so we turn this into a multiplication problem via some algebra:
```math
\frac{60}{70} = \frac{60}{60 + 10} = \frac{1}{1 + 10/60} = \frac{1}{1 + 1/6}.
```
Okay, so far no calculator was needed. We wrote $70 = 60 + 10$, as we
know that $60/60$ is just $1$. This almost gets us there. If we really
don't want to divide, we can get an answer by using the tangent line
approximation for $1/(1+x)$ around $x=0$. This is $1/(1+x) \approx 1 -
x$. (You can check by finding that $f'(0) = -1$.) Thus, our answer is
approximately $5/6$ of an hour or 50 minutes.
How much in error are we?
```julia;
abs(50 - 60/70*60) / (60/70*60) * 100
```
That's about $3$ percent. Not bad considering we could have done all
the above in our head while driving without taking our eyes off the
road to use the calculator on our phone for a division.
##### Example
A ``10``cm by ``10``cm by ``10``cm cube will contain ``1`` liter
(``1000``cm``^3``). In manufacturing such a cube, the side lengths are
actually $10.1$ cm. What will be the volume in liters? Compute this
with a linear approximation to $(10.1)^3$.
Here $f(x) = x^3$ and we are asked to approximate $f(10.1)$. Letting $c=10$, we have:
```math
f(c + \Delta) \approx f(c) + f'(c) \cdot \Delta = 1000 + f'(c) \cdot (0.1)
```
Computing the derivative can be done easily, we get for our answer:
```julia;
fp(x) = 3*x^2
c₀, Delta = 10, 0.1
approx₀ = 1000 + fp(c₀) * Delta
```
This is a relative error as a percent of:
```julia;
actual₀ = 10.1^3
(actual₀ - approx₀)/actual₀ * 100
```
The manufacturer may be interested instead in comparing the volume of the actual object to the $1$ liter target. They might use the approximate value for this comparison, which would yield:
```julia;
(1000 - approx₀)/approx₀ * 100
```
This is off by about $3$ percent. Not so bad for some applications, devastating for others.
##### Example: Eratosthenes and the circumference of the earth
[Eratosthenes](https://en.wikipedia.org/wiki/Eratosthenes) is said to have been the first person to estimate the radius (or by relation the circumference) of the earth. The basic idea is based on the difference of shadows cast by the sun. Suppose Eratosthenes sized the circumference as ``252,000`` *stadia*. Taking ``1`` stadion as ``160`` meters and the actual radius of the earth as ``6378.137`` kilometers, we can convert to see that Eratosthenes estimated the radius as about ``6417`` kilometers.
If Eratosthenes were to have estimated the volume of a spherical earth, what would be his approximate percentage change between his estimate and the actual?
Using ``V = 4/3 \pi r^3`` we get ``V' = 4\pi r^2``:
```julia
rₑ = 6417
rₐ = 6378.137
Δᵣ = rₑ - rₐ
Vₛ(r) = 4/3 * pi * r^3
Δᵥ = Vₛ'(rₑ) * Δᵣ
Δᵥ / Vₛ(rₑ) * 100
```
##### Example: a simple pendulum
A *simple* pendulum is comprised of a massless "bob" on a rigid "rod"
of length $l$. The rod swings back and forth making an angle $\theta$
with the perpendicular. At rest, $\theta=0$; here we have $\theta$ swinging with $\lvert\theta\rvert \leq \theta_0$
for some $\theta_0$.
According to [Wikipedia](http://tinyurl.com/yz5sz7e) - and many
introductory physics books - while swinging, the angle $\theta$ varies
with time following this equation:
```math
\theta''(t) + \frac{g}{l} \sin(\theta(t)) = 0.
```
That is, the second derivative of $\theta$ is proportional to the sine
of $\theta$ where the proportionality constant involves $g$ from
gravity and the length of the "rod."
This would be much easier if the second derivative were proportional to the angle $\theta$ and not its sine.
[Huygens](http://en.wikipedia.org/wiki/Christiaan_Huygens) used the
approximation of $\sin(x) \approx x$, noted above, to say that when
the angle is not too big, we have the pendulum's swing obeying
$\theta''(t) = -g/l \cdot \theta(t)$. Without getting too involved in why,
we can verify by taking two derivatives that $\theta_0\sin(\sqrt{g/l}\cdot t)$ will be a solution to this
modified equation.
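We can let `SymPy` (used elsewhere in this section) verify that this proposed solution satisfies the modified equation ``\theta''(t) + (g/l)\theta(t) = 0``; the symbols below are introduced just for this check:

```julia
using SymPy
@syms t g l θ₀
θ = θ₀ * sin(sqrt(g/l) * t)
simplify(diff(θ, t, t) + g/l * θ)   # returns 0, so θ solves the modified equation
```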
With this solution, the motion is periodic with constant amplitude (assuming frictionless behaviour), as
the sine function is. More surprisingly, the period is found from $T =
2\pi/(\sqrt{g/l}) = 2\pi \sqrt{l/g}$. It depends on $l$ - longer
"rods" take more time to swing back and forth - but does not depend
on how widely the pendulum swings (provided $\theta_0$
is not so big the approximation of $\sin(x) \approx x$ fails). This
latter fact may be surprising, though not to Galileo who discovered
it.
## Differentials
The Leibniz notation for a derivative is ``dy/dx`` indicating the
change in ``y`` as ``x`` changes. It proves convenient to decouple
this using *differentials* ``dx`` and ``dy``. What do these notations
mean? They measure change along the tangent line in the same way
``\Delta_x`` and ``\Delta_y`` measure change for the function. The differential ``dy`` depends on both ``x`` and ``dx``, it being defined by ``dy=f'(x)dx``. As tangent lines locally represent a function, ``dy`` and ``dx`` are often associated with an *infinitesimal* difference.
Taking ``dx = \Delta_x``, as in the previous graphic, we can compare ``dy`` -- the change along the tangent line given by ``dy/dx \cdot dx`` -- and ``\Delta_y`` -- the change along the function given by ``f(x + \Delta_x) - f(x)``. The linear approximation, ``f(x + \Delta_x) - f(x)\approx f'(x)dx``, says that
```math
\Delta_y \approx dy; \quad \text{ when } \Delta_x = dx
```
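For example, for ``f(x) = x^3`` at ``x=2`` with ``dx = \Delta_x = 0.1`` we can compare the two quantities (the names below are just for illustration):

```julia
f3(x) = x^3
f3p(x) = 3x^2                 # the derivative
x0, dx = 2, 0.1
dy = f3p(x0) * dx             # change along the tangent line
Δy = f3(x0 + dx) - f3(x0)     # change along the function
dy, Δy
```

The difference, ``\Delta_y - dy``, is of quadratic order in ``dx``.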
## The error in approximation
How good is the approximation? Graphically we can see it is pretty
good for the graphs we choose, but are there graphs out there for
which the approximation is not so good? Of course. However, we can
say this (the
[Lagrange](http://en.wikipedia.org/wiki/Taylor%27s_theorem) form of a
more general Taylor remainder theorem):
> Let ``f(x)`` be twice differentiable on ``I=(a,b)``,
> ``f`` is continuous on ``[a,b]``, and
> ``a < c < b``. Then for any ``x`` in ``I``, there exists some value ``\xi`` between ``c`` and ``x`` such that
> ``f(x) = f(c) + f'(c)(x-c) + (f''(\xi)/2)\cdot(x-c)^2``.
That is, the error is basically a constant depending on the concavity
of $f$ times a quadratic function centered at $c$.
For $\sin(x)$ at $c=0$ we get $\lvert\sin(x) - x\rvert = \lvert-\sin(\xi)\cdot x^2/2\rvert$.
Since $\lvert\sin(\xi)\rvert \leq 1$, we must have this bound:
$\lvert\sin(x) - x\rvert \leq x^2/2$.
Can we verify? Let's do so graphically:
```julia; hold=true
h(x) = abs(sin(x) - x)
g(x) = x^2/2
plot(h, -2, 2, label="h")
plot!(g, -2, 2, label="g")
```
The graph shows a tight bound near ``0`` and then a bound over this viewing window.
Similarly, for $f(x) = \log(1 + x)$ we have the following at $c=0$:
```math
f'(x) = 1/(1+x), \quad f''(x) = -1/(1+x)^2.
```
So, as $f(c)=0$ and $f'(c) = 1$, we have
```math
\lvert f(x) - x\rvert \leq \lvert f''(\xi)\rvert \cdot \frac{x^2}{2}
```
We see that $\lvert f''(x)\rvert$ is decreasing for $x > -1$. So for $-1 < x < 0$ we have
```math
\lvert f(x) - x\rvert \leq \lvert f''(x)\rvert \cdot \frac{x^2}{2} = \frac{x^2}{2(1+x)^2}.
```
And for $c=0 < x$, we have
```math
\lvert f(x) - x\rvert \leq \lvert f''(0)\rvert \cdot \frac{x^2}{2} = x^2/2.
```
Plotting we verify the bound on ``|\log(1+x)-x|``:
```julia; hold=true
h(x) = abs(log(1+x) - x)
g(x) = x < 0 ? x^2/(2*(1+x)^2) : x^2/2
plot(h, -0.5, 2, label="h")
plot!(g, -0.5, 2, label="g")
```
Again, we see the very close bound near ``0``, which widens at the edges of the viewing window.
### Why is the remainder term as it is?
To see formally why the remainder is as it is, we recall the mean value
theorem in the extended form of Cauchy. Suppose $c=0$, $x > 0$, and let $h(x) = f(x) - (f(0) +
f'(0) x)$ and $g(x) = x^2$. Then we have that there exists an $e$ with
$0 < e < x$ such that
```math
\text{error} = h(x) - h(0) = (g(x) - g(0)) \frac{h'(e)}{g'(e)} = x^2 \cdot \frac{1}{2} \cdot \frac{f'(e) - f'(0)}{e} =
x^2 \cdot \frac{1}{2} \cdot f''(\xi).
```
The value of $\xi$, from the mean value theorem applied to $f'(x)$,
satisfies $0 < \xi < e < x$, so is in $[0,x].$
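This can be checked numerically. For ``f(x) = \sin(x)``, ``c=0``, and ``x=1/2``, we can solve for a ``\xi`` making the error exactly ``f''(\xi)/2 \cdot x^2``, using `find_zero` from the `Roots` package (loaded with `CalculusWithJulia`):

```julia
using Roots
x₁ = 1/2
err = sin(x₁) - x₁                                   # actual error of the approximation sin(x) ≈ x
ξ = find_zero(t -> -sin(t)/2 * x₁^2 - err, (0, x₁))  # f''(t) = -sin(t)
0 < ξ < x₁                                           # ξ lies in (0, x), as guaranteed
```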
### The big (and small) "oh"
`SymPy` can find the tangent line expression as a special case of its `series` function (which implements [Taylor series](../taylor_series_polynomials.html)). The `series` function needs an expression to approximate; a variable specified, as there may be parameters in the expression; a value ``c`` for *where* the expansion is taken, with default ``0``; and a number of terms, for this example ``2`` for a constant and linear term. (There is also an optional `dir` argument for one-sided expansions.)
Here we see the answer provided for $e^{\sin(x)}$:
```julia;
@syms x
series(exp(sin(x)), x, 0, 2)
```
The expression $1 + x$ comes from the fact that `exp(sin(0))` is $1$, and the derivative `exp(sin(0)) * cos(0)` is *also* $1$. But what is the $\mathcal{O}(x^2)$?
We know the answer is *precisely* $f''(\xi)/2 \cdot x^2$ for some ``\xi``, but were we only concerned about the scale of the error as $x$ goes to zero, we would note that when ``f''`` is continuous, the error divided by ``x^2`` goes to some finite value (``f''(0)/2``). More generally, if the error divided by ``x^2`` is *bounded* as ``x`` goes to ``0``, then we say the error is "big oh" of ``x^2``.
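Numerically, for ``f(x) = e^{\sin(x)}`` we have ``f''(0) = 1``, so the error in the approximation ``1+x``, divided by ``x^2``, should approach ``1/2``:

```julia
err2(x) = (exp(sin(x)) - (1 + x)) / x^2
[err2(x) for x in (0.1, 0.01, 0.001)]   # values approach 1/2
```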
The [big](http://en.wikipedia.org/wiki/Big_O_notation) "oh" notation,
``f(x) = \mathcal{O}(g(x))``, says that the ratio ``f(x)/g(x)`` is
bounded as ``x`` goes to ``0`` (or some other value ``c``, depending
on the context). A little "oh" (e.g., ``f(x) = \mathcal{o}(g(x))``)
would mean that the limit ``f(x)/g(x)`` would be ``0``, as
``x\rightarrow 0``, a much stronger assertion.
Big "oh" and little "oh" give us a sense of how good an approximation
is without being bogged down in the details of the exact value. As
such they are useful guides in focusing on what is primary and what is
secondary. Applying this to our case, we have this rough form of the
tangent line approximation valid for functions having a continuous second
derivative at ``c``:
```math
f(x) = f(c) + f'(c)(x-c) + \mathcal{O}((x-c)^2).
```
##### Example: the algebra of tangent line approximations
Suppose $f(x)$ and $g(x)$ are represented by their tangent lines about $c$, respectively:
```math
\begin{align*}
f(x) &= f(c) + f'(c)(x-c) + \mathcal{O}((x-c)^2), \\
g(x) &= g(c) + g'(c)(x-c) + \mathcal{O}((x-c)^2).
\end{align*}
```
Consider the sum, after rearranging we have:
```math
\begin{align*}
f(x) + g(x) &= \left(f(c) + f'(c)(x-c) + \mathcal{O}((x-c)^2)\right) + \left(g(c) + g'(c)(x-c) + \mathcal{O}((x-c)^2)\right)\\
&= \left(f(c) + g(c)\right) + \left(f'(c)+g'(c)\right)(x-c) + \mathcal{O}((x-c)^2).
\end{align*}
```
The two big "Oh" terms become just one, as the sum of a constant times $(x-c)^2$ plus a constant times $(x-c)^2$ is just some other constant times $(x-c)^2$. What we can read off from this is that the term multiplying $(x-c)$ is just the derivative of $f(x) + g(x)$ (from the sum rule), so this too is a tangent line approximation.
Is it a coincidence that a basic algebraic operation with tangent lines approximations produces a tangent line approximation? Let's try multiplication:
```math
\begin{align*}
f(x) \cdot g(x) &= [f(c) + f'(c)(x-c) + \mathcal{O}((x-c)^2)] \cdot [g(c) + g'(c)(x-c) + \mathcal{O}((x-c)^2)]\\
&= [f(c) + f'(c)(x-c)] \cdot [g(c) + g'(c)(x-c)] + [f(c) + f'(c)(x-c)] \cdot \mathcal{O}((x-c)^2) + [g(c) + g'(c)(x-c)] \cdot \mathcal{O}((x-c)^2) + [\mathcal{O}((x-c)^2)]^2\\
&= [f(c) + f'(c)(x-c)] \cdot [g(c) + g'(c)(x-c)] + \mathcal{O}((x-c)^2)\\
&= f(c) \cdot g(c) + [f'(c)\cdot g(c) + f(c)\cdot g'(c)] \cdot (x-c) + [f'(c)\cdot g'(c) \cdot (x-c)^2 + \mathcal{O}((x-c)^2)] \\
&= f(c) \cdot g(c) + [f'(c)\cdot g(c) + f(c)\cdot g'(c)] \cdot (x-c) + \mathcal{O}((x-c)^2)
\end{align*}
```
The big "oh" notation just sweeps up many things including any products of it *and* the term $f'(c)\cdot g'(c) \cdot (x-c)^2$. Again, we see from the product rule that this is just a tangent line approximation for $f(x) \cdot g(x)$.
The basic mathematical operations involving tangent lines can be computed just using the tangent lines when the desired accuracy is at the tangent line level. This is even true for composition, though there the outer and inner functions may have different "$c$"s.
Knowing this can simplify the task of finding tangent line approximations of compound expressions.
For example, suppose we know that at $c=0$ we have these formula where $a \approx b$ is a shorthand for the more formal $a=b + \mathcal{O}(x^2)$:
```math
\sin(x) \approx x, \quad e^x \approx 1 + x, \quad \text{and}\quad 1/(1+x) \approx 1 - x.
```
Then we can immediately see these tangent line approximations about $x=0$:
```math
e^x \cdot \sin(x) \approx (1+x) \cdot x = x + x^2 \approx x,
```
and
```math
\frac{\sin(x)}{e^x} \approx \frac{x}{1 + x} \approx x \cdot(1-x) = x-x^2 \approx x.
```
Since $\sin(0) = 0$, we can use these to find the tangent line approximation of
```math
e^{\sin(x)} \approx e^x \approx 1 + x.
```
Note that $\sin(\exp(x))$ is approximately $\sin(1+x)$ but not approximately $1+x$, as the expansion for $\sin$ about $1$ is not simply $x$.
### The TaylorSeries package
The `TaylorSeries` package will do these calculations in a manner similar to how `SymPy` transforms a function and a symbolic variable into a symbolic expression.
For example, we have
```julia
t = Taylor1(Float64, 1)
```
The number type and the order is specified to the constructor. Linearization is order ``1``, other orders will be discussed later. This variable can now be composed with mathematical functions and the linearization of the function will be returned:
```julia
sin(t), exp(t), 1/(1+t)
```
```julia
sin(t)/exp(t), exp(sin(t))
```
##### Example: Automatic differentiation
Automatic differentiation (forward mode) essentially uses this technique. A "dual" number is introduced which has terms ``a + b\epsilon`` where ``\epsilon^2 = 0``.
The ``\epsilon`` is like ``x`` in a linear expansion, so the ``a`` coefficient encodes the value and the ``b`` coefficient reflects the derivative at the value. The point of evaluation is treated like a variable, so its "``b`` coefficient" is a `1`. Here then is how the point `0` is encoded:
```julia;
Dual(0, 1)
```
Then what is ``\sin(x)``? It should reflect both ``(\sin(0), \cos(0))``, the latter being the derivative of ``\sin``. We can see this is *almost* what is computed behind the scenes through:
```julia; hold=true
x = Dual(0, 1)
@code_lowered sin(x)
```
This output of `@code_lowered` can be confusing, but this simple case needn't be. Working from the end we see an assignment to a variable named `%7` of `Dual(%3, %6)`. The value of `%3` is `sin(x)` where `x` is the value `0` above. The value of `%6` is `cos(x)` *times* the value `1` above (the `xp`), which reflects the *chain* rule being used. (The derivative of `sin(u)` is `cos(u)*du`.) So this dual number encodes both the function value at `0` and the derivative of the function at `0`.
Similarly, we can see what happens to `log(x)` at `1` (encoded by `Dual(1,1)`):
```julia; hold=true
x = Dual(1, 1)
@code_lowered log(x)
```
We can see the derivative again reflects the chain rule, it being given by `1/x * xp` where `xp` acts like `dx` (from assignments `%5` and `%4`). Comparing the two outputs, we see only the assignment to `%4` differs, it reflecting the derivative of the function.
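The mechanics can be demystified with a *minimal sketch* of a dual-number type; the type name `D` and these two methods are illustrative only, the real `Dual` (from `ForwardDiff`) being far more general:

```julia
struct D          # a toy dual number a + b⋅ϵ, with ϵ^2 = 0
    a::Float64    # the function value
    b::Float64    # the derivative value
end
import Base: sin, log
sin(x::D) = D(sin(x.a), cos(x.a) * x.b)   # chain rule: (sin u)' = cos(u)⋅u'
log(x::D) = D(log(x.a), 1/x.a * x.b)      # chain rule: (log u)' = (1/u)⋅u'
sin(D(0.0, 1.0)), log(D(1.0, 1.0))        # mirror the two examples above
```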
## Questions
###### Question
What is the right linear approximation for $\sqrt{1 + x}$ near $0$?
```julia; hold=true; echo=false
choices = [
"``1 + 1/2``",
"``1 + x^{1/2}``",
"``1 + (1/2) \\cdot x``",
"``1 - (1/2) \\cdot x``"]
ans = 3
radioq(choices, ans)
```
###### Question
What is the right linear approximation for $(1 + x)^k$ near $0$?
```julia; hold=true; echo=false
choices = [
"``1 + k``",
"``1 + x^k``",
"``1 + k \\cdot x``",
"``1 - k \\cdot x``"]
ans = 3
radioq(choices, ans)
```
###### Question
What is the right linear approximation for $\cos(\sin(x))$ near $0$?
```julia; hold=true; echo=false
choices = [
"``1``",
"``1 + x``",
"``x``",
"``1 - x^2/2``"
]
ans = 1
radioq(choices, ans)
```
###### Question
What is the right linear approximation for $\tan(x)$ near $0$?
```julia; hold=true; echo=false
choices = [
"``1``",
"``x``",
"``1 + x``",
"``1 - x``"
]
ans = 2
radioq(choices, ans)
```
###### Question
What is the right linear approximation of $\sqrt{25 + x}$ near $x=0$?
```julia; hold=true; echo=false
choices = [
"``5 \\cdot (1 + (1/2) \\cdot (x/25))``",
"``1 - (1/2) \\cdot x``",
"``1 + x``",
"``25``"
]
ans = 1
radioq(choices, ans)
```
###### Question
Let $f(x) = \sqrt{x}$. Find the actual error in approximating $f(26)$ by the
value of the tangent line at $(25, f(25))$ at $x=26$.
```julia; hold=true; echo=false
tgent(x) = 5 + x/10
ans = tgent(1) - sqrt(26)
numericq(ans)
```
###### Question
An estimate of some quantity was $12.34$ the actual value was $12$. What was the *percentage error*?
```julia; hold=true; echo=false
est = 12.34
act = 12.0
ans = (est -act)/act * 100
numericq(ans)
```
###### Question
Find the percentage error in estimating $\sin(5^\circ)$ by $5 \pi/180$.
```julia; hold=true; echo=false
tl(x) = x
x0 = 5 * pi/180
est = x0
act = sin(x0)
ans = (est -act)/act * 100
numericq(ans)
```
###### Question
The side length of a square is measured roughly to be $2.0$ cm. The actual length $2.2$ cm. What is the difference in area (in absolute values) as *estimated* by a tangent line approximation.
```julia; hold=true; echo=false
tl(x) = 4 + 4x
ans = tl(.2) - 4
numericq(abs(ans))
```
###### Question
The [Birthday problem](https://en.wikipedia.org/wiki/Birthday_problem) computes the probability that in a group of ``n`` people, under some assumptions, that no two share a birthday. Without trying to spoil the problem, we focus on the calculus specific part of the problem below:
```math
\begin{align*}
p
&= \frac{365 \cdot 364 \cdot \cdots (365-n+1)}{365^n} \\
&= \frac{365(1 - 0/365) \cdot 365(1 - 1/365) \cdot 365(1-2/365) \cdot \cdots \cdot 365(1-(n-1)/365)}{365^n}\\
&= (1 - \frac{0}{365})\cdot(1 -\frac{1}{365})\cdot \cdots \cdot (1-\frac{n-1}{365}).
\end{align*}
```
Taking logarithms, we have ``\log(p)`` is
```math
\log(1 - \frac{0}{365}) + \log(1 -\frac{1}{365})+ \cdots + \log(1-\frac{n-1}{365}).
```
Now, use the tangent line approximation for ``\log(1 - x)`` and the sum formula for ``0 + 1 + 2 + \dots + (n-1)`` to simplify the value of ``\log(p)``:
```julia; hold=true; echo=false
choices = ["``-n(n-1)/2/365``",
"``-n(n-1)/2\\cdot 365``",
"``-n^2/(2\\cdot 365)``",
"``-n^2 / 2 \\cdot 365``"]
radioq(choices, 1, keep_order=true)
```
If ``n = 10``, what is the approximation for ``p`` (not ``\log(p)``)?
```julia; hold=true; echo=false
n=10
val = exp(-n*(n-1)/2/365)
numericq(val)
```
If ``n=100``, what is the approximation for ``p`` (not ``\log(p)``)?
```julia; hold=true; echo=false
n=100
val = exp(-n*(n-1)/2/365)
numericq(val, 1e-2)
```

// https://jsxgraph.uni-bayreuth.de/wiki/index.php?title=Mean_Value_Theorem
var board = JXG.JSXGraph.initBoard('jsxgraph', {boundingbox: [-5, 10, 7, -6], axis:true});
board.suspendUpdate();
var p = [];
p[0] = board.create('point', [-1,-2], {size:2});
p[1] = board.create('point', [6,5], {size:2});
p[2] = board.create('point', [-0.5,1], {size:2});
p[3] = board.create('point', [3,3], {size:2});
var f = JXG.Math.Numerics.lagrangePolynomial(p);
var graph = board.create('functiongraph', [f,-10, 10]);
var g = function(x) {
return JXG.Math.Numerics.D(f)(x)-(p[1].Y()-p[0].Y())/(p[1].X()-p[0].X());
};
var r = board.create('glider', [
function() { return JXG.Math.Numerics.root(g,(p[0].X()+p[1].X())*0.5); },
function() { return f(JXG.Math.Numerics.root(g,(p[0].X()+p[1].X())*0.5)); },
graph], {name:' ',size:4,fixed:true});
board.create('tangent', [r], {strokeColor:'#ff0000'});
line = board.create('line',[p[0],p[1]],{strokeColor:'#ff0000',dash:1});
board.unsuspendUpdate();

# The mean value theorem for differentiable functions.
This section uses these add-on packages:
```julia
using CalculusWithJulia
using Plots
using Roots
```
```julia; echo=false; results="hidden"
using CalculusWithJulia.WeaveSupport
using Printf
using SymPy
fig_size = (600, 400)
const frontmatter = (
title = "The mean value theorem for differentiable functions.",
description = "Calculus with Julia: The mean value theorem for differentiable functions.",
tags = ["CalculusWithJulia", "derivatives", "the mean value theorem for differentiable functions."],
);
nothing
```
----
A function is *continuous* at $c$ if $f(c+h) - f(c) \rightarrow 0$ as $h$ goes to $0$. We can write that as ``f(c+h) - f(c) = \epsilon_h``, with ``\epsilon_h`` denoting a function going to ``0`` as ``h \rightarrow 0``. With this notion, differentiability could be written as ``f(c+h) - f(c) - f'(c)h = \epsilon_h \cdot h``. This is clearly a more demanding requirement than mere continuity at ``c``.
We defined a function to be *continuous* on an interval $I=(a,b)$ if
it was continuous at each point $c$ in $I$. Similarly, we define a
function to be *differentiable* on the interval $I$ if it is differentiable
at each point $c$ in $I$.
This section looks at properties of differentiable functions. As there is a more stringent definition, perhaps more properties are a consequence of the definition.
## Differentiable is more restrictive than continuous.
Let $f$ be a differentiable function on $I=(a,b)$. We see that
``f(c+h) - f(c) = f'(c)h + \epsilon_h\cdot h = h(f'(c) + \epsilon_h)``. The right hand side will clearly go to ``0`` as ``h\rightarrow 0``, so ``f`` will be continuous. In short:
> A differentiable function on $I=(a,b)$ is continuous on $I$.
Is it possible that all continuous functions are differentiable?
The fact that the derivative is related to the tangent line's slope
might give an indication that this won't be the case - we just need a
function which is continuous but has a point with no tangent line. The
usual suspect is $f(x) = \lvert x\rvert$ at $0$.
```julia; hold=true
f(x) = abs(x)
plot(f, -1,1)
```
We can see formally that the secant line expression will not have a
limit when $c=0$ (the left limit is $-1$, the right limit $1$). But
more insight is gained by looking at the shape of the graph. At the origin, the graph
is vee-shaped. There is no linear function that approximates this function
well. The function is just not smooth enough, as it has a kink.
There are other functions that have kinks. These are often associated
with powers. For example, at $x=0$ this function will not have a
derivative:
```julia; hold=true;
f(x) = (x^2)^(1/3)
plot(f, -1, 1)
```
Other functions have tangent lines that become vertical. The natural slope would be $\infty$, but this isn't a limiting answer (except in the extended sense we don't apply to the definition of derivatives). A candidate for this case is the cube root function:
```julia;
plot(cbrt, -1, 1)
```
The derivative at $0$ would need to be $+\infty$ to match the
graph. This is implied by the formula for the derivative from the
power rule: $f'(x) = 1/3 \cdot x^{-2/3}$, which has a vertical
asymptote at $x=0$.
```julia; echo=false
note("""
The `cbrt` function is used above, instead of `f(x) = x^(1/3)`, as the
latter is not defined for negative `x`. Though it can be for the exact
power `1/3`, it can't be for an exact power like `1/2`. This means the
value of the argument is important in determining the type of the
output - and not just the type of the argument. Having type-stable
functions is part of the magic to making `Julia` run fast, so `x^c` is
not defined for negative `x` and most floating point exponents.
""")
```
Lest you think that continuous functions always have derivatives
except perhaps at exceptional points, this isn't the case. The
functions used to
[model](http://tinyurl.com/cpdpheb) the
stock market are continuous but have no points where they are
differentiable.
## Derivatives and maxima.
We have defined an *absolute maximum* of $f(x)$ over an interval to be
a value $f(c)$ for a point $c$ in the interval that is as large as any
other value in the interval. Just specifying a function and an
interval does not guarantee an absolute maximum, but specifying a
*continuous* function and a *closed* interval does, by the extreme value theorem.
> *A relative maximum*: We say $f(x)$ has a *relative maximum* at $c$
> if there exists *some* interval $I=(a,b)$ with $a < c < b$ for which
> $f(c)$ is an absolute maximum for $f$ and $I$.
The difference is a bit subtle, for an absolute maximum the interval
must also be specified, for a relative maximum there just needs to
exist some interval, possibly really small, though it must be bigger
than a point.
```julia; echo=false
note("""
A hiker can appreciate the difference. A relative maximum would be the
crest of any hill, but an absolute maximum would be the summit.
""")
```
What does this have to do with derivatives?
[Fermat](http://science.larouchepac.com/fermat/fermat-maxmin.pdf),
perhaps with insight from Kepler, was interested in maxima of
polynomial functions. As a warm up, he considered a line segment $AC$ and a point $E$
with the task of choosing $E$ so that $(E-A) \times (C-E)$ is a maximum. We might recognize this as
finding the maximum of $f(x) = (x-A)\cdot(C-x)$ for some $A <
C$. Geometrically, we know this to be at the midpoint, as the equation
is a parabola, but Fermat was interested in an algebraic solution that
led to more generality.
He takes $b=AC$ and $a=AE$. Then the product is $a \cdot (b-a) =
ab - a^2$. He then perturbs this writing $AE=a+e$, then this new
product is $(a+e) \cdot (b - a - e)$. Equating the two, and canceling
like terms gives $be = 2ae + e^2$. He cancels the $e$ and basically
comments that this must be true for all $e$ even as $e$ goes to $0$,
so $b = 2a$ and the value is at the midpoint.
In a more modern approach, this would be the same as looking at this expression:
```math
\frac{f(x+e) - f(x)}{e} = 0.
```
Working on the left hand side, for non-zero $e$ we can cancel the
common $e$ terms, and then let $e$ become $0$. This becomes a problem
in solving $f'(x)=0$. Fermat could compute the derivative for any
polynomial by taking a limit, a task we would do now by the power
rule and the sum and difference of function rules.
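Carrying out Fermat's computation with `SymPy` (symbols as in the discussion), the lone critical point is the midpoint:

```julia
using SymPy
@syms x A C
fermat_expr = (x - A) * (C - x)
solve(diff(fermat_expr, x), x)     # the single solution is (A + C)/2
```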
This insight holds for other types of functions:
> If $f(c)$ is a relative maximum then either $f'(c) = 0$ or the
> derivative at $c$ does not exist.
When the derivative exists, this says the tangent line is flat. (If it
had a slope, then the function would increase by moving left or
right, as appropriate, a point we pursue later.)
For a continuous function $f(x)$, call a point $c$ in the domain of
$f$ where either $f'(c)=0$ or the derivative does not exist a **critical point**.
We can combine Bolzano's extreme value theorem with Fermat's insight to get the following:
> A continuous function on $[a,b]$ has an absolute maximum that occurs
> at a critical point $c$, $a < c < b$, or an endpoint, $a$ or $b$.
A similar statement holds for an absolute minimum. This gives a
restricted set of places to look for absolute maximum and minimum values - all the critical points and the endpoints.
It is also the case that all relative extrema occur at a critical point, *however* not all critical points correspond to relative extrema. We will see *derivative tests* that help characterize when that occurs.
```julia;hold=true; echo=false;
### {{{lhopital_32}}}
imgfile = "figures/lhopital-32.png"
caption = L"""
Image number ``32`` from L'Hopital's calculus book (the first) showing that
at a relative minimum, the tangent line is parallel to the
$x$-axis. This of course is true when the tangent line is well defined
by Fermat's observation.
"""
ImageFile(:derivatives, imgfile, caption)
```
### Numeric derivatives
The `ForwardDiff` package provides a means to numerically compute derivatives without approximations at a point. In `CalculusWithJulia` this is extended to find derivatives of functions and the `'` notation is overloaded for function objects. Hence these two give nearly identical answers, the difference being only the type of number used:
```julia;hold=true
f(x) = 3x^3 - 2x
fp(x) = 9x^2 - 2
f'(3), fp(3)
```
##### Example
For the function $f(x) = x^2 \cdot e^{-x}$ find the absolute maximum over the interval $[0, 5]$.
We have that $f(x)$ is continuous on the closed interval of the
question, and in fact differentiable on $(0,5)$, so any critical point
will be a zero of the derivative. We can check for these with:
```julia;
f(x) = x^2 * exp(-x)
cps = find_zeros(f', -1, 6) # find_zeros in `Roots`
```
We get $0$ and $2$ are critical points. The endpoints are $0$ and
$5$. So the absolute maximum over this interval is either at $0$, $2$,
or $5$:
```julia;
f(0), f(2), f(5)
```
We see that $f(2)$ is then the maximum.
A few things. First, `find_zeros` can miss some roots, in particular
endpoints and roots that just touch $0$. We should graph to verify it
didn't. Second, it can be easier sometimes to check the values using
the "dot" notation. If `f`, `a`,`b` are the function and the interval,
then this would typically follow this pattern:
```julia
a, b = 0, 5
critical_pts = find_zeros(f', a, b)
f.(critical_pts), f(a), f(b)
```
For this problem, we have the left endpoint repeated, but in general
this won't be a point where the derivative is zero.
As an aside, the output above is not a single container. To achieve that, the values can be combined before the broadcasting:
```julia
f.(vcat(a, critical_pts, b))
```
##### Example
For the function $g(x) = e^x\cdot(x^3 - x)$ find the absolute maximum over the interval $[0, 2]$.
We follow the same pattern. Since $f(x)$ is continuous on the closed interval and differentiable on the open interval we know that the absolute maximum must occur at an endpoint ($0$ or $2$) or a critical point where $f'(c)=0$. To solve for these, we have again:
```julia;
g(x) = exp(x) * (x^3 - x)
gcps = find_zeros(g', 0, 2)
```
And checking values gives:
```julia;
g.(vcat(0, gcps, 2))
```
Here the maximum occurs at an endpoint. The critical point $c=0.67\dots$
does not produce a maximum value. Rather $f(0.67\dots)$ is an absolute
minimum.
```julia; echo=false
note(L"""
**Absolute minimum** We haven't discussed the parallel problem of
absolute minima over a closed interval. By considering the function
$h(x) = - f(x)$, we see that anything true for an absolute
maximum should hold in a related manner for an absolute minimum, in
particular an absolute minimum on a closed interval will only occur
at a critical point or an end point.
""")
```
## Rolle's theorem
Let $f(x)$ be differentiable on $(a,b)$ and continuous on
$[a,b]$. Then the absolute maximum occurs at an endpoint or where the
derivative is ``0`` (as the derivative is always defined). This gives rise to:
> *[Rolle's](http://en.wikipedia.org/wiki/Rolle%27s_theorem) theorem*: For $f$ differentiable on ``(a,b)`` and continuous on ``[a,b]``, if $f(a)=f(b)$, then there exists some $c$ in $(a,b)$ with $f'(c) = 0$.
This modest observation opens the door to many relationships between a function and its derivative, as it ties the two together in one statement.
To see why Rolle's theorem is true, we assume that $f(a)=0$, otherwise
consider $g(x)=f(x)-f(a)$. By the extreme value theorem, there must be
an absolute maximum and minimum. If $f(x)$ is ever positive, then the
absolute maximum occurs in $(a,b)$ - not at an endpoint - so at a
critical point where the derivative is $0$. Similarly if $f(x)$ is
ever negative. Finally, if $f(x)$ is just $0$, then take any $c$ in
$(a,b)$.
The statement in Rolle's theorem speaks to existence. It doesn't give
a recipe to find $c$. It just guarantees that there is *one* or *more*
values in the interval $(a,b)$ where the derivative is $0$ if we
assume differentiability on $(a,b)$ and continuity on $[a,b]$.
##### Example
Let $j(x) = e^x \cdot x \cdot (x-1)$. We know $j(0)=0$ and $j(1)=0$,
so, on $[0,1]$, Rolle's theorem
guarantees that we can find *at least* one answer (unless numeric
issues arise):
```julia;
j(x) = exp(x) * x * (x-1)
find_zeros(j', 0, 1)
```
This graph illustrates the lone value for $c$ for this problem
```julia; echo=false
x0 = find_zero(j', (0, 1))
plot([j, x->j(x0) + 0*(x-x0)], 0, 1)
```
## The mean value theorem
We are driving south and in one hour cover 70 miles. If the speed
limit is 65 miles per hour, were we ever speeding? Well, we averaged
more than the speed limit so we know the answer is yes, but why?
Speeding would mean our instantaneous speed was more than the speed
limit, yet we only know for sure our *average* speed was more than the
speed limit. The mean value theorem tells us that if some conditions are met,
then at some point (possibly more than one) we must have that our
instantaneous speed is equal to our average speed.
The mean value theorem is a direct generalization of Rolle's theorem.
> *Mean value theorem*: Let $f(x)$ be differentiable on $(a,b)$ and
> continuous on $[a,b]$. Then there exists a value $c$ in $(a,b)$
> where $f'(c) = (f(b) - f(a)) / (b - a)$.
This says for any secant line between $a < b$ there will
be a parallel tangent line at some $c$ with $a < c < b$ (all provided $f$
is differentiable on $(a,b)$ and continuous on $[a,b]$).
This graph illustrates the theorem. The orange line is the secant
line. A parallel line tangent to the graph is guaranteed by the mean
value theorem. In this figure, there are two such lines, rendered
using red.
```julia; hold=true; echo=false
f(x) = x^3 - x
a, b = -2, 1.75
m = (f(b) - f(a)) / (b-a)
cps = find_zeros(x -> f'(x) - m, a, b)
p = plot(f, a-1, b+1, linewidth=3, legend=false)
plot!(x -> f(a) + m*(x-a), a-1, b+1, linewidth=3, color=:orange)
scatter!([a,b], [f(a), f(b)])
for cp in cps
plot!(x -> f(cp) + f'(cp)*(x-cp), a-1, b+1, color=:red)
end
p
```
Like Rolle's theorem this is a guarantee that something exists, not a
recipe to find it. In fact, the mean value theorem is just Rolle's
theorem applied to:
```math
g(x) = f(x) - (f(a) + (f(b) - f(a)) / (b-a) \cdot (x-a))
```
That is the function $f(x)$, minus the secant line between $(a,f(a))$ and $(b, f(b))$.
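This construction can be checked numerically. The following sketch uses only base `Julia` (the cubic, the interval, and the central-difference approximation to the derivative are all illustrative choices): it verifies that ``g`` vanishes at both endpoints and then locates a zero of ``g'`` by bisection, recovering a ``c`` where ``f'(c)`` equals the secant slope.

```julia
# f minus its secant line over [a,b] vanishes at both endpoints,
# so Rolle's theorem guarantees a zero of g' in (a,b)
f(x) = x^3 - x
a, b = -2.0, 1.75
m = (f(b) - f(a)) / (b - a)              # slope of the secant line
g(x) = f(x) - (f(a) + m * (x - a))       # f minus the secant line

dx = 1e-6
gp(x) = (g(x + dx) - g(x - dx)) / (2dx)  # central-difference derivative

# locate a sign change of gp by bisection
function bisect(h, lo, hi; n = 60)
    for _ in 1:n
        mid = (lo + hi) / 2
        h(lo) * h(mid) <= 0 ? (hi = mid) : (lo = mid)
    end
    (lo + hi) / 2
end

c = bisect(gp, 0.0, b)    # gp changes sign on [0, b] for these choices
# at this c, f'(c) = 3c^2 - 1 agrees with the secant slope m
```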
```julia; hold=true; echo=false
# Need to bring jsxgraph into PLUTO
#caption = """
#Illustration of the mean value theorem from
#[jsxgraph](https://jsxgraph.uni-bayreuth.de/).
#The polynomial function interpolates the points ``A``,``B``,``C``, and ``D``.
#Adjusting these creates different functions. Regardless of the
#function -- which as a polynomial will always be continuous and
#differentiable -- the slope of the secant line between ``A`` and ``B`` is always matched by **some** tangent line between the points ``A`` and ``B``.
#"""
#JSXGraph(:derivatives, "mean-value.js", caption)
nothing
```
An interactive example can be found at [jsxgraph](http://jsxgraph.uni-bayreuth.de/wiki/index.php?title=Mean_Value_Theorem).
##### Example
The mean value theorem is an extremely useful tool to relate properties of a function with properties of its derivative, as, like Rolle's theorem, it includes both ``f`` and ``f'`` in its statement.
For example, suppose we have a function $f(x)$ and we know that the
derivative is **always** $0$. What can we say about the function?
Well, constant functions have derivatives that are constantly $0$.
But do others? We will see the answer is no: if a function has a zero derivative in ``(a,b)`` it must be a constant. We can readily see this is the case when ``f`` is a polynomial function, as differentiating a polynomial produces the zero polynomial only when every coefficient of a non-constant term is ``0``, that is, only when the polynomial is constant. But polynomials are not representative of all functions, and so a proof requires a bit more effort.
Suppose it is known that $f'(x)=0$ on some interval ``I`` and we take any ``a < b`` in ``I``. Since $f'(x)$ always exists, $f(x)$ is always differentiable, and
hence always continuous. So on $[a,b]$ the conditions of the mean
value theorem apply. That is, there is a $c$ in ``(a,b)`` with $(f(b) - f(a)) / (b-a) =
f'(c) = 0$. But this would imply $f(b) - f(a)=0$. That is $f(x)$ is a
constant, as for any $a$ and $b$, we see $f(a)=f(b)$.
### The Cauchy mean value theorem
[Cauchy](http://en.wikipedia.org/wiki/Mean_value_theorem#Cauchy.27s_mean_value_theorem)
offered an extension to the mean value theorem above. Suppose both $f$
and $g$ satisfy the conditions of the mean value theorem on $[a,b]$ with $g(b)-g(a) \neq 0$,
then there exists at least one $c$ with $a < c < b$ such that
```math
f'(c) = g'(c) \cdot \frac{f(b) - f(a)}{g(b) - g(a)}.
```
The proof follows by considering $h(x) = f(x) - r\cdot g(x)$, with $r$ chosen so that $h(a)=h(b)$. Then Rolle's theorem applies so that there is a $c$ with $h'(c)=0$, so $f'(c) = r g'(c)$, but $r$ can be seen to be $(f(b)-f(a))/(g(b)-g(a))$, which proves the theorem.
Letting $g(x) = x$ demonstrates that the mean value theorem is a special case.
##### Example
Suppose $f(x)$ and $g(x)$ satisfy the Cauchy mean value theorem on
$[0,x]$, $g'(x)$ is non-zero on $(0,x)$, and $f(0)=g(0)=0$. Then we have:
```math
\frac{f(x) - f(0)}{g(x) - g(0)} = \frac{f(x)}{g(x)} = \frac{f'(c)}{g'(c)},
```
For some $c$ in $[0,x]$. If $\lim_{x \rightarrow 0} f'(x)/g'(x) = L$,
then the right hand side will have a limit of $L$, and hence the left
hand side will too. That is, when the limit exists, we have under
these conditions that $\lim_{x\rightarrow 0}f(x)/g(x) =
\lim_{x\rightarrow 0}f'(x)/g'(x)$.
This could be used to prove the limit of $\sin(x)/x$ as $x$ goes to
$0$ just by showing the limit of $\cos(x)/1$ is $1$, as is known by
continuity.
### Visualizing the Cauchy mean value theorem
The Cauchy mean value theorem can be visualized in terms of a tangent
line and a *parallel* secant line in a similar manner as the mean
value theorem as long as a *parametric* graph is used. A parametric
graph plots the points $(g(t), f(t))$ for some range of $t$. That is,
it graphs *both* functions at the same time. The following illustrates
the construction of such a graph:
```julia; hold=true; echo=false; cache=true
### {{{parametric_fns}}}
function parametric_fns_graph(n)
f = (x) -> sin(x)
g = (x) -> x
ns = (1:10)/10
ts = range(-pi/2, stop=-pi/2 + ns[n] * pi, length=100)
plt = plot(f, g, -pi/2, -pi/2 + ns[n] * pi, legend=false, size=fig_size,
xlim=(-1.1,1.1), ylim=(-pi/2-.1, pi/2+.1))
scatter!(plt, [f(ts[end])], [g(ts[end])], color=:orange, markersize=5)
val = @sprintf("% 0.2f", ts[end])
annotate!(plt, [(0, 1, "t = $val")])
end
caption = L"""
Illustration of parametric graph of $(g(t), f(t))$ for $-\pi/2 \leq t
\leq \pi/2$ with $g(x) = \sin(x)$ and $f(x) = x$. Each point on the
graph is from some value $t$ in the interval. We can see that the
graph goes through $(0,0)$ as that is when $t=0$. As well, it must go
through $(1, \pi/2)$ as that is when $t=\pi/2$
"""
n = 10
anim = @animate for i=1:n
parametric_fns_graph(i)
end
imgfile = tempname() * ".gif"
gif(anim, imgfile, fps = 1)
ImageFile(imgfile, caption)
```
With $g(x) = \sin(x)$ and $f(x) = x$, we can take $I=[a,b] =
[0, \pi/2]$. In the figure below, the *secant line* is drawn in red which
connects $(g(a), f(a))$ with the point $(g(b), f(b))$, and hence
has slope $\Delta f/\Delta g$. The parallel lines drawn show the *tangent* lines with slope $f'(c)/g'(c)$. Two exist for this problem; the Cauchy mean value theorem guarantees at least one will.
```julia; hold=true; echo=false
g(x) = sin(x)
f(x) = x
ts = range(-pi/2, stop=pi/2, length=50)
a,b = 0, pi/2
m = (f(b) - f(a))/(g(b) - g(a))
cps = find_zeros(x -> f'(x)/g'(x) - m, -pi/2, pi/2)
c = cps[1]
Delta = (0 + m * (c - 0)) - (g(c))
p = plot(g, f, -pi/2, pi/2, linewidth=3, legend=false)
plot!(x -> f(a) + m * (x - g(a)), -1, 1, linewidth=3, color=:red)
scatter!([g(a),g(b)], [f(a), f(b)])
for c in cps
plot!(x -> f(c) + m * (x - g(c)), -1, 1, color=:orange)
end
p
```
## Questions
###### Question
Rolle's theorem is a guarantee of a value, but does not provide a recipe to find it. For the function $1 - x^2$ over the interval $[-5,5]$, find a value $c$ that satisfies the result.
```julia; hold=true; echo=false
c = 0
numericq(c)
```
###### Question
The extreme value theorem is a guarantee of a value, but does not provide a recipe to find it. For the function $f(x) = \sin(x)$ on $I=[0, \pi]$ find a value $c$ satisfying the theorem for an absolute maximum.
```julia; hold=true; echo=false
c = pi/2
numericq(c)
```
###### Question
The extreme value theorem is a guarantee of a value, but does not provide a recipe to find it. For the function $f(x) = \sin(x)$ on $I=[\pi, 3\pi/2]$ find a value $c$ satisfying the theorem for an absolute maximum.
```julia; hold=true; echo=false
c = pi
numericq(c)
```
###### Question
The mean value theorem is a guarantee of a value, but does not provide a recipe to find it. For $f(x) = x^2$ on $[0,2]$ find a value of $c$ satisfying the theorem.
```julia; hold=true; echo=false
c = 1
numericq(c)
```
###### Question
The Cauchy mean value theorem is a guarantee of a value, but does not provide a recipe to find it. For $f(x) = x^3$ and $g(x) = x^2$ find a value $c$ in the interval $[1, 2]$
```julia; hold=true; echo=false
c,x = symbols("c, x", real=true)
val = solve(3c^2 / (2c) - (2^3 - 1^3) / (2^2 - 1^2), c)[1]
numericq(float(val))
```
###### Question
Will the function $f(x) = x + 1/x$ satisfy the conditions of the mean value theorem over $[-1/2, 1/2]$?
```julia; hold=true; echo=false
radioq(["Yes", "No"], 2)
```
###### Question
Just as it is a fact that $f'(x) = 0$ (for all $x$ in $I$) implies
$f(x)$ is a constant, so too is it a fact that if $f'(x) = g'(x)$ that
$f(x) - g(x)$ is a constant. What function would you consider, if you
wanted to prove this with the mean value theorem?
```julia; hold=true; echo=false
choices = [
"``h(x) = f(x) - (f(b) - f(a)) / (b - a)``",
"``h(x) = f(x) - (f(b) - f(a)) / (b - a) \\cdot g(x)``",
"``h(x) = f(x) - g(x)``",
"``h(x) = f'(x) - g'(x)``"
]
ans = 3
radioq(choices, ans)
```
###### Question
Suppose $f''(x) > 0$ on $I$. Why is it impossible that $f'(x) = 0$ at more than one value in $I$?
```julia; hold=true; echo=false
choices = [
L"It isn't. The function $f(x) = x^2$ has two zeros and $f''(x) = 2 > 0$",
"By the Rolle's theorem, there is at least one, and perhaps more",
L"By the mean value theorem, we must have $f'(b) - f'(a) > 0$ when ever $b > a$. This means $f'(x)$ is increasing and can't double back to have more than one zero."
]
ans = 3
radioq(choices, ans)
```
###### Question
Let $f(x) = 1/x$. For $0 < a < b$, find $c$ so that $f'(c) = (f(b) - f(a)) / (b-a)$.
```julia; hold=true; echo=false
choices = [
"``c = (a+b)/2``",
"``c = \\sqrt{ab}``",
"``c = 1 / (1/a + 1/b)``",
"``c = a + (\\sqrt{5} - 1)/2 \\cdot (b-a)``"
]
ans = 2
radioq(choices, ans)
```
###### Question
Let $f(x) = x^2$. For $0 < a < b$, find $c$ so that $f'(c) = (f(b) - f(a)) / (b-a)$.
```julia; hold=true; echo=false
choices = [
"``c = (a+b)/2``",
"``c = \\sqrt{ab}``",
"``c = 1 / (1/a + 1/b)``",
"``c = a + (\\sqrt{5} - 1)/2 \\cdot (b-a)``"
]
ans = 1
radioq(choices, ans)
```
###### Question
In an example, we used the fact that if $0 < c < x$, for some $c$ given by the mean value theorem, and $f(x)$ goes to $0$ as $x$ goes to zero, then $f(c)$ will also go to zero. Suppose we write $c=g(x)$ for some function $g$.
Why is it known that $g(x)$ goes to $0$ as $x$ goes to zero (from the right)?
```julia; hold=true; echo=false
choices = [L"The squeeze theorem applies, as $0 < g(x) < x$.",
L"As $f(x)$ goes to zero by Rolle's theorem it must be that $g(x)$ goes to $0$.",
L"This follows by the extreme value theorem, as there must be some $c$ in $[0,x]$."]
ans = 1
radioq(choices, ans)
```
Since $g(x)$ goes to zero, why is it true that if $f(x)$ goes to $L$ as $x$ goes to zero that $f(g(x))$ must also have a limit $L$?
```julia; hold=true; echo=false
choices = ["It isn't true. The limit must be 0",
L"The squeeze theorem applies, as $0 < g(x) < x$",
"This follows from the limit rules for composition of functions"]
ans = 3
radioq(choices, ans)
```

# Derivative-free alternatives to Newton's method
This section uses these add-on packages:
```julia
using CalculusWithJulia
using Plots
using ImplicitEquations
using Roots
using SymPy
```
```julia; echo=false; results="hidden"
using CalculusWithJulia.WeaveSupport
const frontmatter = (
title = "Derivative-free alternatives to Newton's method",
description = "Calculus with Julia: Derivative-free alternatives to Newton's method",
tags = ["CalculusWithJulia", "derivatives", "derivative-free alternatives to newton's method"],
);
nothing
```
----
Newton's method is not the only algorithm of its kind for identifying zeros of a function. In this section we discuss some alternatives.
## The `find_zero(f, x0)` function
The function `find_zero` from the `Roots` package provides several different algorithms for finding a zero of a function, including some derivative-free
algorithms that need only an initial
guess. The default method is similar to Newton's method in that only a good initial
guess is needed. However, the algorithm, while possibly slower in terms of
function evaluations and steps, is engineered to be a bit more
robust to the choice of initial estimate than Newton's method. (If it
finds a bracket, it will use a bisection algorithm which is guaranteed to
converge, but can be slower to do so.) Here we see how to call the
function:
```julia;
f(x) = cos(x) - x
x₀ = 1
find_zero(f, x₀)
```
Compare to this related call which uses the bisection method:
```julia;
find_zero(f, (0, 1)) ## [0,1] must be a bracketing interval
```
For this example both give the same answer, but the bisection method
is a bit less convenient as a bracketing interval must be pre-specified.
## The secant method
The default `find_zero` method above uses a secant-like method unless a bracketing method is found. The secant method is historic, dating back over ``3000`` years. Here we discuss the secant method in a more general framework.
One way to view Newton's method is through the inverse of ``f`` (assuming it exists): if ``f(\alpha) = 0`` then ``\alpha = f^{-1}(0)``.
If ``f`` has a simple zero at ``\alpha`` and is locally invertible (that is some ``f^{-1}`` exists) then the update step for Newton's method can be identified with:
* fitting a polynomial to the local inverse function of ``f`` going through the point ``(f(x_0),x_0)``,
* and matching the slope of ``f`` at the same point.
That is, we can write ``g(y) = h_0 + h_1 (y-f(x_0))``. Then ``g(f(x_0)) = x_0 = h_0``, so ``h_0 = x_0``. From ``g'(f(x_0)) = 1/f'(x_0)``, we get ``h_1 = 1/f'(x_0)``. That is, ``g(y) = x_0 + (y-f(x_0))/f'(x_0)``. At ``y=0,`` we get the update step ``x_1 = g(0) = x_0 - f(x_0)/f'(x_0)``.
A similar viewpoint can be used to create derivative-free methods.
For example, the [secant method](https://en.wikipedia.org/wiki/Secant_method) can be seen as the result of fitting a degree-``1`` polynomial approximation for ``f^{-1}`` through two points ``(f(x_0),x_0)`` and ``(f(x_1), x_1)``.
Again, expressing this approximation as ``g(y) = h_0 + h_1(y-f(x_1))`` leads to ``g(f(x_1)) = x_1 = h_0``.
Substituting ``f(x_0)`` gives ``g(f(x_0)) = x_0 = x_1 + h_1(f(x_0)-f(x_1))``. Solving for ``h_1`` leads to ``h_1=(x_1-x_0)/(f(x_1)-f(x_0))``. Then ``x_2 = g(0) = x_1 - (x_1-x_0)/(f(x_1)-f(x_0)) \cdot f(x_1)``. This is the first step of the secant method:
```math
x_{n+1} = x_n - f(x_n) \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}.
```
That is, where the next step of Newton's method comes from the intersection of the tangent line at ``x_n`` with the ``x``-axis, the next step of the secant method comes from the intersection of the secant line defined by ``x_n`` and ``x_{n-1}`` with the ``x`` axis. That is, the secant method simply replaces ``f'(x_n)`` with the slope of the secant line between ``x_n`` and ``x_{n-1}``.
We code the update step as `λ2`:
```julia;
λ2(f0,f1,x0,x1) = x1 - f1 * (x1-x0) / (f1-f0)
```
Then we can run a few steps to identify the zero of sine starting at ``3`` and ``4``:
```julia; hold=true; term=true
x0,x1 = 4,3
f0,f1 = sin.((x0,x1))
@show x1,f1
x0,x1 = x1, λ2(f0,f1,x0,x1)
f0,f1 = f1, sin(x1)
@show x1,f1
x0,x1 = x1, λ2(f0,f1,x0,x1)
f0,f1 = f1, sin(x1)
@show x1,f1
x0,x1 = x1, λ2(f0,f1,x0,x1)
f0,f1 = f1, sin(x1)
@show x1,f1
x0,x1 = x1, λ2(f0,f1,x0,x1)
f0,f1 = f1, sin(x1)
x1,f1
```
Like Newton's method, the secant method converges quickly for this problem (though its rate is less than the quadratic rate of Newton's method).
This method is included in `Roots` as `Secant()` (or `Order1()`):
```julia;
find_zero(sin, (4,3), Secant())
```
Though the slope of the secant line approaches the derivative in the limit, the convergence of the secant method is not as fast as Newton's method. However, each step of the secant method requires only one new function evaluation, so it can be more efficient for functions that are expensive to compute or differentiate.
Let ``\epsilon_{n+1} = x_{n+1}-\alpha``, where ``\alpha`` is assumed to be the *simple* zero of ``f(x)`` that the secant method converges to. A [calculation](https://math.okstate.edu/people/binegar/4513-F98/4513-l08.pdf) shows that
```math
\begin{align*}
\epsilon_{n+1} &\approx \frac{x_n-x_{n-1}}{f(x_n)-f(x_{n-1})} \frac{(1/2)f''(\alpha)(\epsilon_n-\epsilon_{n-1})}{x_n-x_{n-1}} \epsilon_n \epsilon_{n-1}\\
& \approx \frac{f''(\alpha)}{2f'(\alpha)} \epsilon_n \epsilon_{n-1}\\
&= C \epsilon_n \epsilon_{n-1}.
\end{align*}
```
The constant `C` is similar to that for Newton's method, and reveals potential troubles for the secant method similar to those of Newton's method: a poor initial guess (the initial error too big), a second derivative too large, or a first derivative too flat near the answer.
Assuming the error term has the form ``\epsilon_{n+1} = A|\epsilon_n|^\phi`` and substituting into the above leads to the equation
```math
\frac{A^{1+1/\phi}}{C} = |\epsilon_n|^{1 - \phi +1/\phi}.
```
The left side being a constant suggests ``\phi`` solves: ``1 - \phi + 1/\phi = 0`` or ``\phi^2 -\phi - 1 = 0``. The solution is the golden ratio, ``(1 + \sqrt{5})/2 \approx 1.618\dots``.
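This prediction can be checked empirically. The sketch below (base `Julia` only; the test function ``f(x)=x^2-2``, whose second derivative at the zero ``\sqrt{2}`` is nonzero, and the starting values are arbitrary choices) runs the secant recursion in `BigFloat` precision and estimates ``\phi`` from ratios of successive errors:

```julia
# If ϵ_{n+1} ≈ A|ϵ_n|^ϕ then ϕ ≈ log(ϵ_{n+1}/ϵ_n) / log(ϵ_n/ϵ_{n-1})
function secant_errors(f, x0, x1, α; n = 8)
    errs = BigFloat[]
    for _ in 1:n
        x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        push!(errs, abs(x1 - α))
    end
    errs
end

setprecision(512)   # extra precision so rounding doesn't mask the rate
errs = secant_errors(x -> x^2 - 2, big(1.0), big(2.0), sqrt(big(2)))
ϕs = [log(errs[i+1] / errs[i]) / log(errs[i] / errs[i-1]) for i in 2:length(errs)-1]
# the later entries of ϕs approach (1 + sqrt(5))/2 ≈ 1.618...
```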
### Steffensen's method
Steffensen's method is a secant-like method that converges with ``|\epsilon_{n+1}| \approx C |\epsilon_n|^2``. The secant is taken between the points ``(x_n,f(x_n))`` and ``(x_n + f(x_n), f(x_n + f(x_n)))``. Like Newton's method this requires ``2`` function evaluations per step. Steffensen's method is implemented through `Roots.Steffensen()`. It is more sensitive to the initial guess than other methods, so in practice it must be used with care, though it is a starting point for many higher-order derivative-free methods.
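A minimal sketch of the update step (an illustration only, with a simplistic stopping rule; not the `Roots.Steffensen()` implementation):

```julia
# Steffensen's method: a secant step between x and x + f(x).
# Both f(x) and f(x + f(x)) are needed, so 2 evaluations per step.
function steffensen(f, x; maxsteps = 50, atol = 1e-12)
    for _ in 1:maxsteps
        fx = f(x)
        abs(fx) <= atol && return x
        x -= fx^2 / (f(x + fx) - fx)  # x - f(x) / (secant slope over [x, x+f(x)])
    end
    x
end

steffensen(x -> cos(x) - x, 1.0)   # converges to about 0.7390851
```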
## Inverse quadratic interpolation
Inverse quadratic interpolation fits a quadratic polynomial through three points, not just two as with the secant method; the third is ``(f(x_2), x_2)``.
For example, here is the inverse quadratic function, ``g(y)``, going through three points marked with red dots. The blue dot is found from ``(g(0), 0)``.
```julia; hold=true; echo=false
a,b,c = 1,2,3
fa,fb,fc = -1,1/4,1
g(y) = (y-fb)*(y-fa)/(fc-fb)/(fc-fa)*c + (y-fc)*(y-fa)/(fb-fc)/(fb-fa)*b + (y-fc)*(y-fb)/(fa-fc)/(fa-fb)*a
ys = range(-2,2, length=100)
xs = g.(ys)
plot(xs, ys, legend=false)
scatter!([a,b,c],[fa,fb,fc], color=:red, markersize=5)
scatter!([g(0)],[0], color=:blue, markersize=5)
plot!(zero, color=:blue)
```
Here we use `SymPy` to identify the degree-``2`` polynomial as a function of ``y``, then evaluate it at ``y=0`` to find the next step:
```julia
@syms y hs[0:2] xs[0:2] fs[0:2]
H(y) = sum(hᵢ*(y - fs[end])^i for (hᵢ,i) ∈ zip(hs, 0:2))
eqs = [H(fᵢ) ~ xᵢ for (xᵢ, fᵢ) ∈ zip(xs, fs)]
ϕ = solve(eqs, hs)
hy = subs(H(y), ϕ)
```
The value of `hy` at ``y=0`` yields the next guess based on the past three, and is given by:
```julia;
q⁻¹ = hy(y => 0)
```
Though the above can be simplified quite a bit when computed by hand, here we simply make this a function with `lambdify` which we will use below.
```julia;
λ3 = lambdify(q⁻¹) # fs, then xs
```
(`SymPy`'s `lambdify` function, by default, orders its arguments lexicographically; in this case they will be the `f` values then the `x` values.)
An inverse quadratic step is utilized by Brent's method, when possible, to yield a rapidly convergent bracketing algorithm implemented as a default zero finder in many software languages. `Julia`'s `Roots` package implements the method in `Roots.Brent()`. An inverse cubic interpolation is utilized by [Alefeld, Potra, and Shi](https://dl.acm.org/doi/10.1145/210089.210111), which gives an asymptotically even more rapidly convergent algorithm than Brent's (implemented in `Roots.AlefeldPotraShi()` and also `Roots.A42()`). This is used as a finishing step in many cases by the default hybrid `Order0()` method of `find_zero`.
In a bracketing algorithm, the next step should reduce the size of the bracket, so the next iterate should be inside the current bracket. However, an inverse quadratic step does not guarantee its result lands inside the bracket. As such, sometimes a substitute step must be chosen.
[Chandrapatla's](https://www.google.com/books/edition/Computational_Physics/cC-8BAAAQBAJ?hl=en&gbpv=1&pg=PA95&printsec=frontcover) method is a bracketing method utilizing an inverse quadratic step as the centerpiece. The key insight is the test used to choose between this inverse quadratic step and a bisection step. This is done in the following based on values of ``\xi`` and ``\Phi`` defined within:
```julia;
function chandrapatla(f, u, v, λ; verbose=false)
a,b = promote(float(u), float(v))
fa,fb = f(a),f(b)
@assert fa * fb < 0
if abs(fa) < abs(fb)
a,b,fa,fb = b,a,fb,fa
end
c, fc = a, fa
maxsteps = 100
for ns in 1:maxsteps
Δ = abs(b-a)
m, fm = (abs(fa) < abs(fb)) ? (a, fa) : (b, fb)
ϵ = eps(m)
if Δ ≤ 2ϵ
return m
end
verbose && @show m, fm
iszero(fm) && return m
ξ = (a-b)/(c-b)
Φ = (fa-fb)/(fc-fb)
if Φ^2 < ξ < 1 - (1-Φ)^2
xt = λ(fa,fc,fb, a,c,b) # inverse quadratic
else
xt = a + (b-a)/2
end
ft = f(xt)
isnan(ft) && break
if sign(fa) == sign(ft)
c,fc = a,fa
a,fa = xt,ft
else
c,b,a = b,a,xt
fc,fb,fa = fb,fa,ft
end
verbose && @show ns, a, fa
end
error("no convergence: [a,b] = $(sort([a,b]))")
end
```
Like bisection, this method ensures that ``a`` and ``b`` always form a bracket, but it moves ``a`` to the newest estimate, so it does not maintain ``a < b`` throughout.
We can see it in action on the sine function. Here we pass in ``\lambda``, but in a real implementation (as in `Roots.Chandrapatla()`) we would have programmed the algorithm to compute the inverse quadratic value.
```julia; term=true
chandrapatla(sin, 3, 4, λ3, verbose=true)
```
The condition `Φ^2 < ξ < 1 - (1-Φ)^2` can be visualized. Assume `a,b = 0,1` and `fa,fb = -1/2,1`. Then `c < a < b`, and `fc` has the same sign as `fa`, but what values of `fc` will satisfy the inequality?
```julia;
ξ(c,fc) = (a-b)/(c-b)
Φ(c,fc) = (fa-fb)/(fc-fb)
Φl(c,fc) = Φ(c,fc)^2
Φr(c,fc) = 1 - (1-Φ(c,fc))^2
a,b = 0, 1
fa,fb = -1/2, 1
region = Lt(Φl, ξ) & Lt(ξ,Φr)
plot(region, xlims=(-2,a), ylims=(-3,0))
```
When `(c,fc)` is in the shaded area, the inverse quadratic step is chosen. We can see that `fc < fa` is needed.
For these values, this area is within the area where an inverse quadratic step will result in a value between `a` and `b`:
```julia;
l(c,fc) = λ3(fa,fb,fc,a,b,c)
region₃ = ImplicitEquations.Lt(l,b) & ImplicitEquations.Gt(l,a)
plot(region₃, xlims=(-2,0), ylims=(-3,0))
```
There are values in the parameter space where this does not occur.
## Tolerances
The `chandrapatla` algorithm typically waits until `abs(b-a) <= 2eps(m)` (where ``m`` is either ``b`` or ``a`` depending on the size of ``f(a)`` and ``f(b)``) is satisfied. Informally this means the algorithm stops when the two bracketing values are no more than a small amount apart. What is a "small amount?"
To understand, we start with the fact that floating point numbers are an approximation to real numbers.
Floating point numbers effectively represent a number in scientific
notation in terms of
* a sign (plus or minus),
* a *mantissa* (a number in ``[1,2)``, in binary), and
* an exponent (to represent a power of ``2``).
The mantissa is of the form `1.xxxxx...xxx` where there are ``m``
different `x`s each possibly a `0` or `1`. The `i`th `x` indicates if the term `1/2^i` should be
included in the value. The mantissa is the sum of `1` plus the
indicated values of `1/2^i` for `i` in `1` to `m`. So the last `x` represents if `1/2^m` should be
included in the sum. As such, the
mantissa represents a discrete set of values, separated by `1/2^m`, as
that is the smallest difference possible.
For example, if `m=2` then the possible values for the mantissa are `11 => 1 + 1/2 + 1/4 = 7/4`,
`10 => 1 + 1/2 = 6/4`, `01 => 1 + 1/4 = 5/4`, and `00 => 1 = 4/4`, values separated by `1/4 = 1/2^m`.
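The `m=2` case can be enumerated directly (a small sketch using exact rationals):

```julia
# All mantissas 1.b₁b₂ with a 2-bit fractional part
vals = sort([1 + b1//2 + b2//4 for b1 in 0:1 for b2 in 0:1])
diff(vals)    # consecutive values differ by 1//4 = 1/2^m
```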
For ``64``-bit floating point numbers `m=52`, so the values in the mantissa differ by `1/2^52 = 2.220446049250313e-16`. This is the value of `eps()`.
However, this "gap" between numbers holds for values where the exponent is `0`, that is, numbers in `[1,2)`. For values in `[2,4)` the gap is twice as large; for values in `[1/2,1)` it is half as large. That is, the gap depends on the size of the number. The gap between `x` and the next largest floating point number is given by `eps(x)`, and it always satisfies `eps(x) <= eps() * abs(x)`.
One way to think about this is the difference between `x` and the next largest floating point values is *basically* `x*(1+eps()) - x` or `x*eps()`.
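These properties of the gap can be verified directly in base `Julia`:

```julia
# eps(x) is the gap between x and the next largest Float64
x = 1e6
gap = nextfloat(x) - x
gap == eps(x)               # the gap above x is exactly eps(x)
eps(x) <= eps() * abs(x)    # and it is bounded by x * eps()
eps(2.5) == 2 * eps(1.25)   # doubling the binade doubles the gap
```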
For the specific example, `abs(b-a) <= 2eps(m)` means that the gap between `a` and `b` is essentially two floating point values from the ``x`` value with the smaller ``|f(x)|`` value.
For bracketing methods, that is about as good as you can get. However, once floating point values are understood, the absolute best you can get for a bracketing interval would be:
* along the way, a value `f(c)` is found which is *exactly* `0.0`
* the endpoints of the bracketing interval are *adjacent* floating point values, meaning the interval can not be bisected and `f` changes sign between the two values.
There can be problems when the stopping criterion is `abs(b-a) <= 2eps(m)` and the answer is `0.0`; these require engineering around. For example, the algorithm above for the function `f(x) = -40*x*exp(-x)` does not converge when started with `[-9,1]`, even though `0.0` is an obvious zero.
```julia; hold=true
fu(x) = -40*x*exp(-x)
chandrapatla(fu, -9, 1, λ3)
```
Here the issue is `abs(b-a)` is tiny (of the order `1e-119`) but `eps(m)` is even smaller.
For non-bracketing methods, like Newton's method or the secant method, different criteria are useful.
There may not be a bracketing interval for `f` (for example, `f(x) = (x-1)^2`), so the second criterion above might need to be restated in terms of the last two iterates, ``x_n`` and ``x_{n-1}``. Calling this difference ``\Delta = |x_n - x_{n-1}|``, we might stop if ``\Delta`` is small enough. As there are scenarios where this can happen even though the function is not at a zero, a check on the size of ``f`` is needed.
However, there may be no floating point value where ``f`` is exactly `0.0`, so checking the size of `f(x_n)` requires agreeing on a tolerance.
First, if `f(x_n)` is `0.0` then it makes sense to call `x_n` an *exact zero* of ``f``, even though this may hold when `x_n`, a floating point value, is not mathematically an *exact* zero of ``f``. (Consider `f(x) = x^2 - 2x + 1`. Mathematically this is identical to `g(x) = (x-1)^2`, but `f(1 + eps())` is zero, while `g(1+eps())` is `4.930380657631324e-32`.)
However, there may never be a value with `f(x_n)` exactly `0.0`. (The value of `sin(pi)` is not zero, for example, as `pi` is an approximation to ``\pi``, as well the `sin` of values adjacent to `float(pi)` do not produce `0.0` exactly.)
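Both parenthetical observations can be checked at the REPL:

```julia
# Mathematically identical functions need not agree in floating point
f(x) = x^2 - 2x + 1      # expanded form
g(x) = (x - 1)^2         # factored form
f(1 + eps())             # 0.0: rounding cancels the terms exactly
g(1 + eps())             # eps()^2, about 4.93e-32

sin(pi)                  # about 1.22e-16, not 0.0, as pi only approximates π
```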
Suppose `x_n` is the closest floating point number to ``\alpha``, the zero. Then `x_n` ``= \alpha \cdot (1 + \delta)``, where the relative error ``\delta`` is bounded in magnitude by `eps()`.
How far then can `f(x_n)` be from ``0 = f(\alpha)``?
```math
f(x_n) = f(x_n - \alpha + \alpha) = f(\alpha + \alpha \cdot \delta) = f(\alpha \cdot (1 + \delta)),
```
Assuming ``f`` has a derivative, the linear approximation gives:
```math
f(x_n) \approx f(\alpha) + f'(\alpha) \cdot (\alpha\delta) = f'(\alpha) \cdot \alpha \delta
```
So we should consider `f(x_n)` an *approximate zero* when it is on the scale of
``f'(\alpha) \cdot \alpha \delta``.
That ``\alpha`` factor means we consider a *relative* tolerance for `f`.
Also important, when `x_n` is close to `0`, is the need for an *absolute* tolerance, one not dependent on the size of `x`.
So a good condition to check if `f(x_n)` is small is
`abs(f(x_n)) <= abs(x_n) * rtol + atol`, or `abs(f(x_n)) <= max(abs(x_n) * rtol, atol)`
where the relative tolerance, `rtol`, would absorb an estimate for ``f'(\alpha)``.
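Such a check might be sketched as follows (the name `isapproxzero` and the tolerance values are illustrative, not the defaults used by `Roots`):

```julia
# Approximate-zero test mixing a relative and an absolute tolerance
isapproxzero(x, fx; rtol = sqrt(eps()), atol = eps()) =
    abs(fx) <= max(abs(x) * rtol, atol)

isapproxzero(1.0, 1e-9)    # true: within abs(x) * rtol
isapproxzero(0.0, 1e-9)    # false: near 0 only the absolute tolerance applies
```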
Now, in Newton's method the update step is ``f(x_n)/f'(x_n)``. Naturally when ``f(x_n)`` is close to ``0``, the update step is small and ``\Delta`` will be close to ``0``. *However*, should ``f'(x_n)`` be large, then ``\Delta`` can also be small and the algorithm will possibly stop, as ``x_{n+1} \approx x_n`` -- but not necessarily ``x_{n+1} \approx \alpha``. So termination on ``\Delta`` alone can be off. Checking if ``f(x_{n+1})`` is an approximate zero is also useful to include in a stopping criteria.
One thing to keep in mind is that the right-hand side of the rule `abs(f(x_n)) <= abs(x_n) * rtol + atol`, as a function of `x_n`, goes to `Inf` as `x_n` increases. So if `f` has `0` as an asymptote (like `e^(-x)`) for large enough `x_n`, the rule will be `true` and `x_n` could be counted as an approximate zero, despite it not being one.
So a modified criteria for convergence might look like:
* stop if ``\Delta`` is small and `f` is an approximate zero with some tolerances
* stop if `f` is an approximate zero with some tolerances, but be mindful that this rule can identify mathematically erroneous answers.
It is not uncommon to assign `rtol` to have a value like `sqrt(eps())` to account for accumulated floating point errors and the factor of ``f'(\alpha)``, though in the `Roots` package it is set smaller by default.
## Questions
###### Question
Let `f(x) = tanh(x)` (the hyperbolic tangent) and `fp(x) = sech(x)^2`, its derivative.
Does *Newton's* method (using `Roots.Newton()`) converge starting at `1.0`?
```julia; hold=true; echo=false
yesnoq("yes")
```
Does *Newton's* method (using `Roots.Newton()`) converge starting at `1.3`?
```julia; hold=true; echo=false
yesnoq("no")
```
Does the secant method (using `Roots.Secant()`) converge starting at `1.3`? (a second starting value will automatically be chosen, if not directly passed in.)
```julia; hold=true; echo=false
yesnoq("yes")
```
###### Question
For the function `f(x) = x^5 - x - 1` both Newton's method and the secant method will converge to the one root when started from `1.0`. Using `verbose=true` as an argument to `find_zero`, (e.g., `find_zero(f, x0, Roots.Secant(), verbose=true)`) how many *more* steps does the secant method need to converge?
```julia; hold=true; echo=false
numericq(2)
```
Do the two methods converge to the exact same value?
```julia; hold=true; echo=false
yesnoq("yes")
```
###### Question
Let `f(x) = exp(x) - x^4` and `x0=8.0`. How many steps (iterations) does it take for the secant method to converge using the default tolerances?
```julia; hold=true; echo=false
numericq(10, 1)
```
###### Question
Let `f(x) = exp(x) - x^4` and a starting bracket be `x0 = [8, 9]`. Then calling `find_zero(f, x0, verbose=true)` will show that 49 steps are needed for exact bisection to converge. What about with the `Roots.Brent()` algorithm, which uses inverse quadratic steps when it can?
It takes how many steps?
```julia; hold=true; echo=false
numericq(36, 1)
```
The `Roots.A42()` method uses inverse cubic interpolation, when possible. How many steps does this method take to converge?
```julia; hold=true; echo=false
numericq(3, 1)
```
The large difference is due to how the tolerances are set within `Roots`. The `Brent()` method gets pretty close in a few steps, but takes many more to get close enough for the default tolerances.
###### Question
Consider this crazy function defined by:
```julia; eval=false
f(x) = cos(100*x)-4*erf(30*x-10)
```
(The `erf` function is the [error function](https://en.wikipedia.org/wiki/Error_function) and is in the `SpecialFunctions` package loaded with `CalculusWithJulia`.)
Make a plot over the interval $[-3,3]$ to see why it is called "crazy".
Does `find_zero` find a zero to this function starting from $0$?
```julia; hold=true; echo=false
yesnoq("yes")
```
If so, what is the value?
```julia; hold=true; echo=false
f(x) = cos(100*x)-4*erf(30*x-10)
val = find_zero(f, 0)
numericq(val)
```
If not, what is the reason?
```julia; hold=true; echo=false
choices = [
"The zero is a simple zero",
"The zero is not a simple zero",
"The function oscillates too much to rely on the tangent line approximation far from the zero",
"We can find an answer"
]
ans = 4
radioq(choices, ans, keep_order=true)
```
Does `find_zero` find a zero to this function starting from $1$?
```julia; hold=true; echo=false
yesnoq(false)
```
If so, what is the value?
```julia; hold=true; echo=false
numericq(-999.999)
```
If not, what is the reason?
```julia; hold=true; echo=false
choices = [
"The zero is a simple zero",
"The zero is not a simple zero",
"The function oscillates too much to rely on the tangent line approximations far from the zero",
"We can find an answer"
]
ans = 3
radioq(choices, ans, keep_order=true)
```

// newton's method
const b = JXG.JSXGraph.initBoard('jsxgraph', {
boundingbox: [-3,5,3,-5], axis:true
});
var f = function(x) {return x*x*x*x*x - x - 1};
var fp = function(x) { return 5*x*x*x*x - 1};  // derivative of x^5 - x - 1
var x0 = 0.85;
var nm = function(x) { return x - f(x)/fp(x);};
var l = b.create('point', [-1.5,0], {name:'', size:0});
var r = b.create('point', [1.5,0], {name:'', size:0});
var xaxis = b.create('line', [l,r])
var P0 = b.create('glider', [x0,0,xaxis], {name:'x0'});
var P0a = b.create('point', [function() {return P0.X();},
function() {return f(P0.X());}], {name:''});
var P1 = b.create('point', [function() {return nm(P0.X());},
0], {name:''});
var P1a = b.create('point', [function() {return P1.X();},
function() {return f(P1.X());}], {name:''});
var P2 = b.create('point', [function() {return nm(P1.X());},
0], {name:''});
var P2a = b.create('point', [function() {return P2.X();},
function() {return f(P2.X());}], {name:''});
var P3 = b.create('point', [function() {return nm(P2.X());},
0], {name:''});
var P3a = b.create('point', [function() {return P3.X();},
function() {return f(P3.X());}], {name:''});
var P4 = b.create('point', [function() {return nm(P3.X());},
0], {name:''});
var P4a = b.create('point', [function() {return P4.X();},
function() {return f(P4.X());}], {name:''});
var P5 = b.create('point', [function() {return nm(P4.X());},
0], {name:'x5', strokeColor:'black'});
P0a.setAttribute({fixed:true});
P1.setAttribute({fixed:true});
P1a.setAttribute({fixed:true});
P2.setAttribute({fixed:true});
P2a.setAttribute({fixed:true});
P3.setAttribute({fixed:true});
P3a.setAttribute({fixed:true});
P4.setAttribute({fixed:true});
P4a.setAttribute({fixed:true});
P5.setAttribute({fixed:true});
var sc = '#000000';
b.create('segment', [P0,P0a], {strokeColor:sc, strokeWidth:1});
b.create('segment', [P0a, P1], {strokeColor:sc, strokeWidth:1});
b.create('segment', [P1,P1a], {strokeColor:sc, strokeWidth:1});
b.create('segment', [P1a, P2], {strokeColor:sc, strokeWidth:1});
b.create('segment', [P2,P2a], {strokeColor:sc, strokeWidth:1});
b.create('segment', [P2a, P3], {strokeColor:sc, strokeWidth:1});
b.create('segment', [P3,P3a], {strokeColor:sc, strokeWidth:1});
b.create('segment', [P3a, P4], {strokeColor:sc, strokeWidth:1});
b.create('segment', [P4,P4a], {strokeColor:sc, strokeWidth:1});
b.create('segment', [P4a, P5], {strokeColor:sc, strokeWidth:1});
b.create('functiongraph', [f, -1.5, 1.5])

# Numeric derivatives
This section uses these add-on packages:
```julia
using CalculusWithJulia
using Plots
using ForwardDiff
using SymPy
using Roots
```
```julia; echo=false; results="hidden"
using CalculusWithJulia.WeaveSupport
const frontmatter = (
title = "Numeric derivatives",
description = "Calculus with Julia: Numeric derivatives",
tags = ["CalculusWithJulia", "derivatives", "numeric derivatives"],
);
nothing
```
----
`SymPy` returns symbolic derivatives. Up to choices of simplification, these answers match those that would be derived by hand. This is useful when comparing with known answers and for seeing the structure of the answer. However, there are times we just want to work with the answer numerically. For that we have other options within `Julia`. We discuss approximate derivatives and automatic derivatives. The latter will find wide usage in these notes.
### Approximate derivatives
By approximating the limit of the secant line with a value for a small, but positive, $h$, we get an approximation to the derivative. That is
```math
f'(x) \approx \frac{f(x+h) - f(x)}{h}.
```
This is the forward-difference approximation. The central difference approximation looks both ways:
```math
f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}.
```
Though in general they are different, they are both
approximations. The central difference is usually more accurate for the
same size $h$. However, both are susceptible to round-off errors. The
numerator is a subtraction of like-size numbers - a perfect
opportunity to lose precision.
As such there is a balancing act:
* if $h$ is too small the round-off errors are problematic,
* if $h$ is too big, the approximation to the limit is not good.
For the forward
difference $h$ values around $10^{-8}$ are typically good, for the central
difference, values around $10^{-6}$ are typically good.
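This balancing act can be seen numerically. A small sketch (the function and point here are arbitrary choices) computes the forward-difference error for several values of $h$:

```julia; hold=true
g(x) = sin(x); c₀ = 1.0
err(h) = abs((g(c₀ + h) - g(c₀))/h - cos(c₀))   # error in the forward difference
[(h, err(h)) for h in (1e-4, 1e-6, 1e-8, 1e-10, 1e-12)]
```

The error shrinks as $h$ decreases toward $10^{-8}$, then grows again as round-off takes over.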
##### Example
Let's verify that the forward difference isn't too far off.
```julia;
f(x) = exp(-x^2/2)
c = 1
h = 1e-8
fapprox = (f(c+h) - f(c)) / h
```
We can compare to the actual with:
```julia;
@syms x
df = diff(f(x), x)
factual = N(df(c))
abs(factual - fapprox)
```
The error is about ``1`` part in ``100`` million.
The central difference is better here:
```julia; hold=true
h = 1e-6
cdapprox = (f(c+h) - f(c-h)) / (2h)
abs(factual - cdapprox)
```
----
The [FiniteDifferences](https://github.com/JuliaDiff/FiniteDifferences.jl) and [FiniteDiff](https://github.com/JuliaDiff/FiniteDiff.jl) packages provide performant interfaces for differentiation based on finite differences.
### Automatic derivatives
There are some other ways to compute derivatives numerically that give
much more accuracy at the expense of slightly increased computing
time. Automatic differentiation is the general name for a few
different approaches. These approaches promise less complexity - in
some cases - than symbolic derivatives and more accuracy than
approximate derivatives; the accuracy is on the order of
machine precision.
The `ForwardDiff` package provides one of [several](https://juliadiff.org/) ways for `Julia` to compute automatic derivatives. `ForwardDiff` is well suited for functions encountered in these notes, which depend on at most a few variables and output no more than a few values at once.
The `ForwardDiff` package was loaded in this section; in general its features are available when the `CalculusWithJulia` package is loaded, as that package provides a more convenient interface.
The `derivative` function is not exported by `ForwardDiff`, so its usage requires qualification. To illustrate, to find the derivative of $f(x)$ at a *point* we have this syntax:
```julia
ForwardDiff.derivative(f, c) # derivative is qualified by a module name
```
The `CalculusWithJulia` package defines an operator `D` which goes from finding a derivative at a point with `ForwardDiff.derivative` to defining a function which evaluates the derivative at each point. It is defined along the lines of `D(f) = x -> ForwardDiff.derivative(f,x)` in parallel to how the derivative operation for a function is defined mathematically from the definition for its value at a point.
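To demystify `D`, a version can be sketched in a few lines. (The name `Dsketch` is made up here to avoid clashing with the `D` that `CalculusWithJulia` provides; the package's actual definition may differ. Like `D`, a second argument gives higher-order derivatives.)

```julia; hold=true
import ForwardDiff
# a sketch of a `D`-like operator built on ForwardDiff.derivative
Dsketch(f, n=1) = n == 0 ? f : Dsketch(x -> ForwardDiff.derivative(f, x), n - 1)
Dsketch(sin)(0.0), Dsketch(sin, 2)(0.0)   # cos(0) = 1 and -sin(0) = 0
```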
Here we see the error in estimating $f'(1)$:
```julia;
fauto = D(f)(c) # D(f) is a function, D(f)(c) is the function called on c
abs(factual - fauto)
```
In this case, it is exact.
The `D` operator is defined for most all functions in `Julia`, though, like the `diff` operator in `SymPy` there are some for which it won't work.
##### Example
For $f(x) = \sqrt{1 + \sin(\cos(x))}$ compare the difference between the forward derivative with $h=1e-8$ and that computed by `D` at $x=\pi/4$.
The forward derivative is found with:
```julia;
𝒇(x) = sqrt(1 + sin(cos(x)))
𝒄, 𝒉 = pi/4, 1e-8
fwd = (𝒇(𝒄+𝒉) - 𝒇(𝒄))/𝒉
```
That given by `D` is:
```julia;
ds_value = D(𝒇)(𝒄)
ds_value, fwd, ds_value - fwd
```
Finally, `SymPy` gives an exact value we use to compare:
```julia;
𝒇𝒑 = diff(𝒇(x), x)
```
```julia
actual = N(𝒇𝒑(PI/4))
actual - ds_value, actual - fwd
```
#### Convenient notation
`Julia` allows the possibility of extending functions to different
types. Out of the box, the `'` notation is not employed for functions,
but is used for matrices. It is used in postfix position, as with
`A'`. We can define it to do the same thing as `D` for functions and
then, we can evaluate derivatives with the familiar `f'(x)`.
This is done in `CalculusWithJulia` along the lines of `Base.adjoint(f::Function) = D(f)`.
Then, we have, for example:
```julia; hold=true;
f(x) = sin(x)
f'(pi), f''(pi)
```
##### Example
Suppose our task is to find a zero of the second derivative of $k(x) =
e^{-x^2/2}$ in $[0, 10]$, a known bracket. The `D` function takes a second argument to indicate the order of the derivative (e.g., `D(f,2)`), but we use the more familiar notation:
```julia; hold=true
k(x) = exp(-x^2/2)
find_zero(k'', 0..10)
```
We pass in the function object, `k''`, and not the evaluated function.
## Recap on derivatives in Julia
A quick summary for finding derivatives in `Julia`, as there are $3$ different manners:
* Symbolic derivatives are found using `diff` from `SymPy`
* Automatic derivatives are found using the notation `f'` using `ForwardDiff.derivative`
* approximate derivatives at a point, `c`, for a given `h` are found with `(f(c+h)-f(c))/h`.
For example, here all three are computed and compared:
```julia; hold=true
f(x) = exp(-x)*sin(x)
c = pi
h = 1e-8
fp = diff(f(x),x)
fp, fp(c), f'(c), (f(c+h) - f(c))/h
```
```julia;echo=false
note("""
The use of `'` to find derivatives provided by `CalculusWithJulia` is convenient, and used extensively in these notes, but it needs to be noted that it does **not conform** with the generic meaning of `'` within `Julia`'s wider package ecosystem and may cause issue with linear algebra operations; the symbol is meant for the adjoint of a matrix.
""")
```
## Questions
##### Question
Find the derivative using a forward difference approximation of $f(x) = x^x$ at the point $x=2$ using `h=0.1`:
```julia; hold=true; echo=false
f(x) = x^x
c, h = 2, 0.1
val = (f(c+h) - f(c))/h
numericq(val)
```
Using `D` or `f'` find the value using automatic differentiation
```julia; hold=true; echo=false
f(x) = x^x
c = 2
val = f'(c)
numericq(val)
```
##### Question
Mathematically, as the value of `h` in the forward difference gets
smaller the forward difference approximation gets better. On the
computer, this is thwarted by floating point representation issues (in
particular the error in subtracting two like-sized numbers in forming
$f(x+h)-f(x)$.)
For `1e-16` what is the error (in absolute value) in finding the forward difference
approximation for the derivative of $\sin(x)$ at $x=0$?
```julia; hold=true; echo=false
f(x) = sin(x)
h = 1e-16
c = 0
approx = (f(c+h)-f(c))/h
val = abs(cos(0) - approx)
numericq(val)
```
Repeat for $x=\pi/4$:
```julia; hold=true; echo=false
f(x) = sin(x)
h = 1e-16
c = pi/4
approx = (f(c+h)-f(c))/h
val = abs(cos(c) - approx)
numericq(val)
```
###### Question
Let $f(x) = x^x$. Using `D`, find $f'(3)$.
```julia; hold=true; echo=false
f(x) = x^x
val = D(f)(3)
numericq(val)
```
###### Question
Let $f(x) = \lvert 1 - \sqrt{1 + x}\rvert$. Using `D`, find $f'(3)$.
```julia; hold=true; echo=false
f(x) = abs(1 - sqrt(1 + x))
val = D(f)(3)
numericq(val)
```
###### Question
Let $f(x) = e^{\sin(x)}$. Using `D`, find $f'(3)$.
```julia; hold=true; echo=false
f(x) = exp(sin(x))
val = D(f)(3)
numericq(val)
```
###### Question
For `Julia`'s `airyai` function (from the `SpecialFunctions` package), find a numeric derivative using the forward difference: with $c=3$ and $h=10^{-8}$, compute the forward-difference approximation to $f'(3)$.
```julia; hold=true; echo=false
h = 1e-8
c = 3
val = (airyai(c+h) - airyai(c))/h
numericq(val)
```
###### Question
Find the rate of change with respect to time of the function $f(t)= 64 - 16t^2$ at $t=1$.
```julia; hold=true; echo=false
fp_(t) = -16*2*t
c = 1
numericq(fp_(c))
```
###### Question
Find the rate of change with respect to height, $h$, of $f(h) = 32h^3 - 62 h + 12$ at $h=2$.
```julia; hold=true; echo=false
fp_(h) = 3*32h^2 - 62
c = 2
numericq(fp_(c))
```

// inscribe trapezoid
var R = 5;
var Delta = 0.5
const b = JXG.JSXGraph.initBoard('jsxgraph', {
boundingbox: [-R-Delta,R+Delta,R+Delta,-1], axis:true
});
var xax = b.create("segment", [[0,0],[R,0]]);
var P4 = b.create("glider", [R/2,0, xax], {name: "P_4=(r,0)"});
var CL = b.create('point', [function() {return -P4.X()},0], {name:''});
var CR = b.create('point', [function() {return P4.X()},0], {name:''});
var C = b.create('semicircle', [CL,CR]);
var Crestricted = b.create("functiongraph",
[function(x) {
r = P4.X();
y = Math.sqrt(r*r - x*x);
return y;
}, 0, function() {return P4.X()}]);
var P3 = b.create("glider", [
P4.X()/2,
Math.sqrt(P4.X()*P4.X()*(1 - 1/4)),
Crestricted], {name:"P_3=(x,y)"});
var P1 = b.create('point', [function() {return -Math.abs(P4.X());},
function() {return P4.Y();}], {name:'P_1'});
var P2 = b.create('point', [function() {return -P3.X();},
function() {return P3.Y();}], {name:'P_2'});
var poly = b.create('polygon',[P1, P2, P3, P4], { borders:{strokeColor:'black'} });
b.create('text',[-1.5,.25, function(){ return 'Area='+ poly.Area().toFixed(1); }]);

using WeavePynb
using CwJWeaveTpl
fnames = [
"derivatives", ## more questions
"numeric_derivatives",
"mean_value_theorem",
"optimization",
"curve_sketching",
"linearization",
"newtons_method",
"lhopitals_rule", ## Okay - -but could beef up questions..
"implicit_differentiation", ## add more questions?
"related_rates",
"taylor_series_polynomials"
]
process_file(nm; cache=:off) = CwJWeaveTpl.mmd(nm * ".jmd", cache=cache)
function process_files(;cache=:user)
for f in fnames
@show f
process_file(f, cache=cache)
end
end
"""
## TODO derivatives
tangent lines intersect at the average for a parabola
Should we have derivative results: inverse functions, logarithmic differentiation...
"""

# Related rates
This section uses these add-on packages:
```julia
using CalculusWithJulia
using Plots
using Roots
using SymPy
```
```julia; echo=false; results="hidden"
using CalculusWithJulia.WeaveSupport
fig_size=(600, 400)
const frontmatter = (
title = "Related rates",
description = "Calculus with Julia: Related rates",
tags = ["CalculusWithJulia", "derivatives", "related rates"],
);
nothing
```
----
Related rates problems involve two (or more) unknown quantities that are related through an equation. As the two variables depend on each other, so too do their rates of change with respect to some other variable, often time, though exactly how remains to be discovered. Hence the name "related rates."
#### Examples
The following is a typical "book" problem:
> A screen saver displays the outline of a ``3`` cm by ``2`` cm rectangle and
> then expands the rectangle in such a way that the ``2`` cm side is
> expanding at the rate of ``4`` cm/sec and the proportions of the
> rectangle never change. How fast is the area of the rectangle
> increasing when its dimensions are ``12`` cm by ``8`` cm?
> [Source.](http://oregonstate.edu/instruct/mth251/cq/Stage9/Practice/ratesProblems.html)
```julia; hold=true; echo=false; cache=true
### {{{growing_rects}}}
## Secant line approaches tangent line...
function growing_rects_graph(n)
w = (t) -> 2 + 4t
h = (t) -> 3/2 * w(t)
t = n - 1
w_2 = w(t)/2
h_2 = h(t)/2
w_n = w(5)/2
h_n = h(5)/2
plt = plot(w_2 * [-1, -1, 1, 1, -1], h_2 * [-1, 1, 1, -1, -1], xlim=(-17,17), ylim=(-17,17),
legend=false, size=fig_size)
annotate!(plt, [(-1.5, 1, "Area = $(round(Int, 4*w_2*h_2))")])
plt
end
caption = L"""
As $t$ increases, the size of the rectangle grows. The ratio of width to height is fixed. If we know the rate of change in time for the width ($dw/dt$) and the height ($dh/dt$) can we tell the rate of change of *area* with respect to time ($dA/dt$)?
"""
n=6
anim = @animate for i=1:n
growing_rects_graph(i)
end
imgfile = tempname() * ".gif"
gif(anim, imgfile, fps = 1)
ImageFile(imgfile, caption)
```
Here we know $A = w \cdot h$ and we know some things about how $w$ and
$h$ are related *and* about the rate of how both $w$ and $h$ grow in
time $t$. That means that we could express this growth in terms of
some functions $w(t)$ and $h(t)$, then we can figure out that the area - as a function of $t$ - will be expressed as:
```math
A(t) = w(t) \cdot h(t).
```
We would get by the product rule that the *rate of change* of area with respect to time, $A'(t)$ is just:
```math
A'(t) = w'(t) h(t) + w(t) h'(t).
```
As an aside, it is fairly conventional to suppress the $(t)$ part of
the notation $A=wh$ and to use the Leibniz notation for derivatives:
```math
\frac{dA}{dt} = \frac{dw}{dt} h + w \frac{dh}{dt}.
```
This relationship is true for all $t$, but the problem discusses a
certain value of $t$ - when $w(t)=8$ and $h(t) = 12$. At this same
value of $t$, we have $w'(t) = 4$ and so $h'(t) = 6$. Substituting these 4 values into the 4 unknowns in the formula for $A'(t)$ gives:
```math
A'(t) = 4 \cdot 12 + 8 \cdot 6 = 96.
```
Summarizing, from the relationship between $A$, $w$, and $h$, there is a relationship between their rates of growth with respect to $t$, a time variable. Using this and known values, we can compute $A'$ at the specific $t$.
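As a quick check of the arithmetic, the four known values can be substituted directly (the variable names are made up here to avoid clobbering the functions defined below):

```julia; hold=true
dw, dh = 4, 6       # w'(t) and h'(t) at this t
wv, hv = 8, 12      # w(t) and h(t) at this t
dA = dw*hv + wv*dh  # A'(t) = w'h + wh' = 96
```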
We could also have done this differently. We would recognize the following:
- The area of a rectangle is just:
```julia;
A(w,h) = w * h
```
- The width - expanding at a constant rate of $4$ from a starting value of $2$ - must satisfy:
```julia;
w(t) = 2 + 4*t
```
- The height is a constant proportion of the width:
```julia;
h(t) = 3/2 * w(t)
```
This means again that area depends on $t$ through this formula:
```julia;
A(t) = A(w(t), h(t))
```
This is why the rates of change are related: as $w$ and $h$ change in
time, the functional relationship with $A$ means $A$ also changes in
time.
Now to answer the question, when the width is 8, we must have that $t$ is:
```julia;
tstar = find_zero(x -> w(x) - 8, [0, 4]) # or solve by hand to get 3/2
```
The question is to find the rate the area is increasing at the given
time $t$, which is $A'(t)$ or $dA/dt$. We get this by performing the
differentiation, then substituting in the value.
Here we do so with the aid of `Julia`, though this problem could readily be done "by hand."
We have expressed $A$ as a function of $t$ by composition, so can differentiate that:
```julia;
A'(tstar)
```
----
Now what? Why is ``96`` of any interest? It is if the value at a specific
time is needed. But in general, a better question might be to
understand if there is some pattern to the numbers in the figure,
these being $6, 54, 150, 294, 486, 726$. Their differences are the
*average* rate of change:
```julia;
xs = [6, 54, 150, 294, 486, 726]
ds = diff(xs)
```
Those seem to be increasing by a fixed amount each time, which we can see by one more application of `diff`:
```julia;
diff(ds)
```
How can this relationship be summarized? Well, let's go back to what we know, though this time using symbolic math:
```julia;
@syms t
diff(A(t), t)
```
This should be clear: the rate of change, $dA/dt$, is increasing
linearly, hence the second derivative, $dA^2/dt^2$ would be constant,
just as we saw for the average rate of change.
So, for this problem, a constant rate of change in width and height
leads to a linear rate of change in area, put otherwise, linear growth
in both width and height leads to quadratic growth in area.
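The claimed quadratic growth can also be checked numerically by recomputing the areas shown in the figure and differencing twice (a sketch; the functions are renamed to avoid clashing with the definitions above):

```julia; hold=true
w₁(t) = 2 + 4t                     # width, as above
h₁(t) = 3/2 * w₁(t)                # height is proportional
A₁(t) = w₁(t) * h₁(t)
areas = [A₁(t) for t in 0:5]       # 6.0, 54.0, 150.0, 294.0, 486.0, 726.0
second_diffs = diff(diff(areas))   # constant, as expected for quadratic growth
```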
##### Example
A ladder, with length $l$, is leaning against a wall. We parameterize
this problem so that the top of the ladder is at $(0,h)$ and the
bottom at $(b, 0)$. Then $l^2 = h^2 + b^2$ is a constant.
If the ladder starts to slip away at the base, but remains in contact
with the wall, express the rate of change of $h$ with respect to $t$
in terms of $db/dt$.
We have from implicitly differentiating in $t$ the equation $l^2 = h^2 + b^2$, noting that $l$ is a constant, that:
```math
0 = 2h \frac{dh}{dt} + 2b \frac{db}{dt}.
```
Solving, yields:
```math
\frac{dh}{dt} = -\frac{b}{h} \cdot \frac{db}{dt}.
```
* If $l = 12$ and $db/dt = 2$ when $b=4$, find $dh/dt$.
We just need to find $h$ for this value of $b$, as the other two quantities in the last equation are known.
But $h = \sqrt{l^2 - b^2}$, so the answer is:
```julia;
length, bottom, dbdt = 12, 4, 2
height = sqrt(length^2 - bottom^2)
-bottom/height * dbdt
```
* What happens to the rate as $b$ goes to $l$?
As $b$ goes to $l$, $h$ goes to ``0``, so $b/h$ blows up. Unless $db/dt$
goes to $0$, the expression will become $-\infty$.
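A quick numeric illustration of this blow-up, reusing $l=12$ and $db/dt = 2$ from above:

```julia; hold=true
ℓ, dbdt = 12, 2
dhdt(b) = -b / sqrt(ℓ^2 - b^2) * dbdt
dhdt.([4, 11, 11.9, 11.99])        # the rate grows without bound as b nears ℓ
```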
##### Example
```julia; hold=true; echo=false
caption = "A man and woman walk towards the light."
imgfile = "figures/long-shadow-noir.png"
ImageFile(:derivatives, imgfile, caption)
```
Shadows are a staple of film noir. In the photo, suppose a man and a woman walk towards a street light. As they approach the light the length of their shadow changes.
Suppose we focus on the ``5`` foot tall woman. Her shadow comes from a streetlight ``12`` feet high. She is walking at ``3`` feet per second towards the light. What is the rate of change of the length of her shadow?
The setup for this problem involves drawing a right triangle with height ``12`` and base given by the distance ``x`` of the woman from the light *plus* the length ``l`` of the shadow. There is a similar triangle formed by the woman's height and the length ``l``. Equating the ratios of the sides gives:
```math
\frac{5}{l} = \frac{12}{x + l}
```
As we need to take derivatives, we work with the reciprocal relationship:
```math
\frac{l}{5} = \frac{x + l}{12}
```
Differentiating in ``t`` gives:
```math
\frac{l'}{5} = \frac{x' + l'}{12}
```
Or
```math
l' \cdot (\frac{1}{5} - \frac{1}{12}) = \frac{x'}{12}
```
Solving for ``l'`` gives an answer in terms of ``x'`` the rate the woman is walking. In this description ``x`` is getting shorter, so ``x'`` would be ``-3`` feet per second and the shadow length would be decreasing at a rate proportional to the walking speed.
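For instance, with the numbers above (``x' = -3`` feet per second), the last equation can be solved for ``l'`` directly:

```julia; hold=true
xp = -3                          # woman walks toward the light at 3 ft/sec
lp = (xp/12) / (1/5 - 1/12)      # solve l'⋅(1/5 - 1/12) = x'/12 for l'
```

This gives ``l' = -15/7 \approx -2.14`` feet per second: the shadow shortens at a bit over two-thirds her walking speed.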
##### Example
```julia; hold=true; echo=false
p = plot(; axis=nothing, border=:none, legend=false, aspect_ratio=:equal)
scatter!(p, [0],[50], color=:yellow, markersize=50)
plot!(p, [0, 50], [0,0], linestyle=:dash)
plot!(p, [0,50], [50,0], linestyle=:dot)
plot!(p, [25,25],[25,0], linewidth=5, color=:black)
plot!(p, [25,50], [0,0], linewidth=2, color=:black)
```
The sun is setting at the rate of ``1/20`` radian/min, and appears to be dropping perpendicular to the horizon, as depicted in the figure. How fast is the shadow of a ``25`` meter wall lengthening at the moment when the shadow is ``25`` meters long?
Let the shadow length be labeled ``x``, as it appears on the ``x`` axis above. Then we have by right-angle trigonometry:
```math
\tan(\theta) = \frac{25}{x}
```
or ``x\tan(\theta) = 25``.
As ``t`` evolves, we know ``d\theta/dt`` but what is ``dx/dt``? Using implicit differentiation yields:
```math
\frac{dx}{dt} \cdot \tan(\theta) + x \cdot (\sec^2(\theta)\cdot \frac{d\theta}{dt}) = 0
```
Substituting known values and identifying ``\theta=\pi/4`` when the shadow length, ``x``, is ``25`` gives:
```math
\frac{dx}{dt} \cdot \tan(\pi/4) + 25 \cdot \left(2 \cdot \frac{-1}{20}\right) = 0
```
This can be solved for the unknown: ``dx/dt = 50/20``.
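The arithmetic can be verified directly (``\sec^2(\pi/4) = 2``; `sec` is available in base `Julia`):

```julia; hold=true
dθdt = -1//20                  # sun dropping at 1/20 radian per minute
x₀, θ₀ = 25, pi/4              # shadow length and sun angle at this instant
dxdt = -x₀ * sec(θ₀)^2 * dθdt / tan(θ₀)   # ≈ 2.5 meters per minute
```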
##### Example
A batter hits a ball toward third base at ``75`` ft/sec and runs toward first base at a rate of ``24`` ft/sec. At what rate does the distance between the ball and the batter change when ``2`` seconds have passed?
We will answer this with `SymPy`. First we create some symbols for the movement of the ball towards third base, `b(t)`, the runner toward first base, `r(t)`, and the two velocities. We use symbolic functions for the movements, as we will be differentiating them in time:
```julia
@syms b() r() v_b v_r
d = sqrt(b(t)^2 + r(t)^2)
```
The distance formula applies to give ``d``. As the ball and runner are moving in a perpendicular direction, the formula is easy to apply.
We can differentiate `d` in terms of `t` and in process we also find the derivatives of `b` and `r`:
```julia
db, dr = diff(b(t),t), diff(r(t),t) # b(t), r(t) -- symbolic functions
dd = diff(d,t) # d -- not d(t) -- an expression
```
The slight difference in the commands is due to `b` and `r` being symbolic functions, whereas `d` is a symbolic expression. Now we begin substituting. First, from the problem `db` is just the velocity in the ball's direction, or `v_b`. Similarly for `v_r`:
```julia
ddt = subs(dd, db => v_b, dr => v_r)
```
Now, we can substitute in for `b(t)`, as it is `v_b*t`, etc.:
```julia
ddt₁ = subs(ddt, b(t) => v_b * t, r(t) => v_r * t)
```
This finds the rate of change of time for any `t` with symbolic values of the velocities. (And shows how the answer doesn't actually depend on ``t``.) The problem's answer comes from a last substitution:
```julia
ddt₁(t => 2, v_b => 75, v_r => 24)
```
Were this done by "hand," it would be better to work with distance squared to avoid the expansion of complexity from the square root. That is, using implicit differentiation:
```math
\begin{align*}
d^2 &= b^2 + r^2\\
2d\cdot d' &= 2b\cdot b' + 2r\cdot r'\\
d' &= (b\cdot b' + r \cdot r')/d\\
d' &= (tb'\cdot b' + tr' \cdot r')/d\\
d' &= \left((b')^2 + (r')^2\right) \cdot \frac{t}{d}.
\end{align*}
```
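The last line of the derivation can be checked against the problem's values; since ``b = b't`` and ``r = r't``, the ``t/d`` factor makes the answer independent of ``t``:

```julia; hold=true
vb, vr, tₛ = 75, 24, 2
dist = sqrt((vb*tₛ)^2 + (vr*tₛ)^2)
dprime = (vb^2 + vr^2) * tₛ / dist   # ((b')² + (r')²)⋅t/d ≈ 78.75
```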
##### Example
```julia; hold=true; echo=false; cache=true
###{{{baseball_been_berry_good}}}
## Secant line approaches tangent line...
function baseball_been_berry_good_graph(n)
v0 = 15
x = (t) -> 50t
y = (t) -> v0*t - 5 * t^2
ns = range(.25, stop=3, length=8)
t = ns[n]
ts = range(0, stop=t, length=50)
xs = map(x, ts)
ys = map(y, ts)
degrees = atand(y(t)/(100-x(t)))
degrees = degrees < 0 ? 180 + degrees : degrees
plt = plot(xs, ys, legend=false, size=fig_size, xlim=(0,150), ylim=(0,15))
plot!(plt, [x(t), 100], [y(t), 0.0], color=:orange)
annotate!(plt, [(55, 4,"θ = $(round(Int, degrees)) degrees"),
(x(t), y(t), "($(round(Int, x(t))), $(round(Int, y(t))))")])
end
caption = L"""
The flight of the ball as being tracked by a stationary outfielder. This ball will go over the head of the player. What can the player tell from the quantity $d\theta/dt$?
"""
n = 8
anim = @animate for i=1:n
baseball_been_berry_good_graph(i)
end
imgfile = tempname() * ".gif"
gif(anim, imgfile, fps = 1)
ImageFile(imgfile, caption)
```
A baseball player stands ``100`` meters from home base. A batter hits the
ball directly at the player so that the distance from home plate is
$x(t)$ and the height is $y(t)$.
The player tracks the flight of the ball in terms of the angle
$\theta$ made between the ball and the player. This will satisfy:
```math
\tan(\theta) = \frac{y(t)}{100 - x(t)}.
```
What is the rate of change of $\theta$ with respect to $t$ in terms of that of $x$ and $y$?
We have by the chain rule and quotient rule:
```math
\sec^2(\theta) \theta'(t) = \frac{y'(t) \cdot (100 - x(t)) - y(t) \cdot (-x'(t))}{(100 - x(t))^2}.
```
If we have $x(t) = 50t$ and $y(t)=v_{0y} t - 5 t^2$ when is the rate of change of the angle happening most quickly?
The formula for $\theta'(t)$ is
```math
\theta'(t) = \cos^2(\theta) \cdot \frac{y'(t) \cdot (100 - x(t)) - y(t) \cdot (-x'(t))}{(100 - x(t))^2}.
```
This question requires us to differentiate *again* in $t$. Since we
have fairly explicit function for $x$ and $y$, we will use `SymPy` to
do this.
```julia;
@syms theta()
v0 = 5
x(t) = 50t
y(t) = v0*t - 5 * t^2
eqn = tan(theta(t)) - y(t) / (100 - x(t))
```
```julia;
thetap = diff(theta(t),t)
dtheta = solve(diff(eqn, t), thetap)[1]
```
We could proceed directly by evaluating:
```julia;
d2theta = diff(dtheta, t)(thetap => dtheta)
```
That is not so tractable, however.
It helps to simplify
$\cos^2(\theta(t))$ using basic right-triangle trigonometry. Recall, $\theta$ comes from a right triangle with
height $y(t)$ and length $(100 - x(t))$. The cosine of this angle will
be $100 - x(t)$ divided by the length of the hypotenuse. So we can
substitute:
```julia;
dtheta₁ = dtheta(cos(theta(t))^2 => (100 -x(t))^2/(y(t)^2 + (100-x(t))^2))
```
Plotting reveals some interesting things. For $v_{0y} < 10$ we have graphs that look like:
```julia;
plot(dtheta₁, 0, v0/5)
```
The ball will drop in front of the player, and the change in $d\theta/dt$ is monotonic.
But let's rerun the code with $v_{0y} > 10$:
```julia; hold=true
v0 = 15
x(t) = 50t
y(t) = v0*t - 5 * t^2
eqn = tan(theta(t)) - y(t) / (100 - x(t))
thetap = diff(theta(t),t)
dtheta = solve(diff(eqn, t), thetap)[1]
dtheta₁ = subs(dtheta, cos(theta(t))^2, (100 - x(t))^2/(y(t)^2 + (100 - x(t))^2))
plot(dtheta₁, 0, v0/5)
```
In the second case we have a different shape. The graph is not
monotonic, and before the peak there is an inflection point. Without
thinking too hard, we can see that the greatest change in the angle is
when it is just above the head ($t=2$ has $x(t)=100$).
That these two graphs differ so, means that the player may be able to
read if the ball is going to go over his or her head by paying
attention to the how the ball is being tracked.
##### Example
Hipster pour-over coffee is made with a conical coffee filter. The
cone is actually a [frustum](http://en.wikipedia.org/wiki/Frustum) of
a cone with small diameter, say $r_0$, chopped off. We will parameterize
our cone by a value $h \geq 0$ on the $y$ axis and an angle $\theta$
formed by a side and the $y$ axis. Then the coffee filter is the part
of the cone between some $h_0$ (related by $r_0 = h_0\tan(\theta)$) and $h$.
The volume of a cone of height $h$ is $V(h) = \pi/3 h \cdot
R^2$. From the geometry, $R = h\tan(\theta)$. The volume of the
filter then is:
```math
V = V(h) - V(h_0).
```
What is $dV/dh$ in terms of $dR/dh$?
Differentiating implicitly gives:
```math
\frac{dV}{dh} = \frac{\pi}{3} ( R(h)^2 + h \cdot 2 R \frac{dR}{dh}).
```
We see that it depends on $R$ and the change in $R$ with respect to $h$. However, we visualize $h$ - the height - so it is better to re-express. Clearly, $dR/dh = \tan\theta$ and using $R(h) = h \tan(\theta)$ we get:
```math
\frac{dV}{dh} = \pi h^2 \tan^2(\theta).
```
The rate of change goes down as $h$ gets smaller ($h \geq h_0$) and gets bigger for bigger $\theta$.
How do the quantities vary in time?
For an incompressible fluid, by balancing the volume leaving with how
it leaves we will have $dh/dt$ is the ratio of the cross-sectional
area at bottom over that at the height of the fluid $(\pi \cdot (h_0\tan(\theta))^2) /
(\pi \cdot ((h\tan\theta))^2)$ times the outward velocity of the fluid.
That is $dh/dt = (h_0/h)^2 \cdot v$. Which makes sense - larger openings
($h_0$) mean more fluid lost per unit time so the height change
follows, higher levels ($h$) means the change in height is slower, as
the cross-sections have more volume.
By [Torricelli's](http://en.wikipedia.org/wiki/Torricelli's_law) law,
the out velocity follows the law $v = \sqrt{2g(h-h_0)}$. This gives:
```math
\frac{dh}{dt} = \frac{h_0^2}{h^2} \cdot v = \frac{h_0^2}{h^2} \sqrt{2g(h-h_0)}.
```
If $h >> h_0$, then $\sqrt{h-h_0} = \sqrt{h}\sqrt{1 - h_0/h} \approx \sqrt{h}(1 - (1/2)(h_0/h)) \approx \sqrt{h}$. So the rate of change of height in time is like $1/h^{3/2}$.
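A quick numeric check of this approximation, with assumed values $h_0 = 1$ and $h = 100$:

```julia; hold=true
hlo, hhi = 1.0, 100.0                    # h₀ and h, with h >> h₀
exact   = sqrt(hhi - hlo)
approx₁ = sqrt(hhi) * (1 - hlo/(2hhi))   # first-order expansion
approx₀ = sqrt(hhi)                      # crude approximation
exact, approx₁, approx₀                  # ≈ 9.9499, 9.95, 10.0
```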
Now, by the chain rule, we have then the rate of change of volume with respect to time, $dV/dt$, is:
```math
\begin{align*}
\frac{dV}{dt} &=
\frac{dV}{dh} \cdot \frac{dh}{dt}\\
&= \pi h^2 \tan^2(\theta) \cdot \frac{h_0^2}{h^2} \sqrt{2g(h-h_0)} \\
&= \pi \sqrt{2g} \cdot (r_0)^2 \cdot \sqrt{h-h_0} \\
&\approx \pi \sqrt{2g} \cdot r_0^2 \cdot \sqrt{h}.
\end{align*}
```
This rate depends on the square of the size of the
opening ($r_0^2$) and the square root of the height ($h$), but not the
angle of the cone.
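As a sanity check of the algebra, the two forms of $dV/dt$ above can be compared for arbitrary (assumed) values; the angle enters only through $r_0 = h_0\tan(\theta)$:

```julia; hold=true
grav, θc, h0c, hc = 9.8, 0.5, 1.0, 4.0
lhs = pi * hc^2 * tan(θc)^2 * (h0c^2/hc^2) * sqrt(2grav*(hc - h0c))
rhs = pi * sqrt(2grav) * (h0c*tan(θc))^2 * sqrt(hc - h0c)
lhs, rhs      # the two forms agree
```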
## Questions
###### Question
Supply and demand. Suppose demand for product $XYZ$ is $d(x)$ and supply
is $s(x)$. The excess demand is $d(x) - s(x)$. Suppose this is positive. How does this influence
price? Guess the "law" of economics that applies:
```julia; hold=true; echo=false
choices = [
"The rate of change of price will be ``0``",
"The rate of change of price will increase",
"The rate of change of price will be positive and will depend on the rate of change of excess demand."
]
ans = 3
radioq(choices, ans, keep_order=true)
```
(Theoretically, when demand exceeds supply, prices increase.)
###### Question
Which makes more sense from an economic viewpoint?
```julia; hold=true; echo=false
choices = [
"If the rate of change of unemployment is negative, the rate of change of wages will be negative.",
"If the rate of change of unemployment is negative, the rate of change of wages will be positive."
]
ans = 2
radioq(choices, ans, keep_order=true)
```
(Colloquially, "the rate of change of unemployment is negative" means the unemployment rate is going down, so there are fewer workers available to fill new jobs.)
###### Question
In chemistry there is a fundamental relationship between pressure
($P$), temperature ($T)$ and volume ($V$) given by $PV=cT$ where $c$
is a constant. Which of the following would be true with respect to time?
```julia; hold=true; echo=false
choices = [
L"The rate of change of pressure is always increasing by $c$",
"If volume is constant, the rate of change of pressure is proportional to the temperature",
"If volume is constant, the rate of change of pressure is proportional to the rate of change of temperature",
"If pressure is held constant, the rate of change of pressure is proportional to the rate of change of temperature"]
ans = 3
radioq(choices, ans, keep_order=true)
```
###### Question
A pebble is thrown into a lake causing ripples to form expanding
circles. Suppose one of the circles expands at a rate of ``1`` foot per second and
the radius of the circle is ``10`` feet, what is the rate of change of
the area enclosed by the circle?
```julia; hold=true; echo=false
# a = pi*r^2
# da/dt = pi * 2r * drdt
r = 10; drdt = 1
val = pi * 2r * drdt
numericq(val, units=L"feet$^2$/second")
```
###### Question
A pizza maker tosses some dough in the air. The dough is formed in a
circle with radius ``10``. As it rotates, its area increases at a rate of
``1`` inch$^2$ per second. What is the rate of change of the radius?
```julia; hold=true; echo=false
# a = pi*r^2
# da/dt = pi * 2r * drdt
r = 10; dadt = 1
val = dadt /( pi * 2r)
numericq(val, units="inches/second")
```
###### Question
An FBI agent with a powerful spyglass is located in a boat anchored
400 meters offshore. A gangster under surveillance is driving along
the shore. Assume the shoreline is straight and that the gangster is 1
km from the point on the shore nearest to the boat. If the spyglasses
must rotate at a rate of $\pi/4$ radians per minute to track
the gangster, how fast is the gangster moving? (In kilometers per minute.)
[Source.](http://oregonstate.edu/instruct/mth251/cq/Stage9/Practice/ratesProblems.html)
```julia; hold=true; echo=false
## tan(theta) = x/y
## sec^2(theta) dtheta/dt = 1/y dx/dt (y is constant)
## dxdt = y sec^2(theta) dtheta/dt
dthetadt = pi/4
y0 = .4; x0 = 1.0
theta = atan(x0/y0)
val = y0 * sec(theta)^2 * dthetadt
numericq(val, units="kilometers/minute")
```
###### Question
A flood lamp is installed on the ground 200 feet from a vertical
wall. A six foot tall man is walking towards the wall at the rate of
4 feet per second. How fast is the tip of his shadow moving down the
wall when he is 50 feet from the wall?
[Source.](http://oregonstate.edu/instruct/mth251/cq/Stage9/Practice/ratesProblems.html)
(As the question is written the answer should be positive.)
```julia; hold=true; echo=false
## y/200 = 6/x
## dydt = 200 * 6 * -1/x^2 dxdt
x0 = 200 - 50
dxdt = 4
val = 200 * 6 * (1/x0^2) * dxdt
numericq(val, units="feet/second")
```
###### Question
Consider the hyperbola $y = 1/x$ and think of it as a slide. A
particle slides along the hyperbola so that its x-coordinate is
increasing at a rate of $f(x)$ units/sec. If its $y$-coordinate is
decreasing at a constant rate of $1$ unit/sec, what is $f(x)$?
[Source.](http://oregonstate.edu/instruct/mth251/cq/Stage9/Practice/ratesProblems.html)
```julia; hold=true; echo=false
choices = [
"``f(x) = 1/x``",
"``f(x) = x^0``",
"``f(x) = x``",
"``f(x) = x^2``"
]
ans = 4
radioq(choices, ans, keep_order=true)
```
###### Question
A balloon is in the shape of a sphere, fortunately, as this gives
a known formula, $V=4/3 \pi r^3$, for the volume. If the balloon is being filled so that the rate of
change of volume per unit time is $2$ and the radius is $3$, what is the
rate of change of radius per unit time?
```julia; hold=true; echo=false
r, dVdt = 3, 2
drdt = dVdt / (4 * pi * r^2)
numericq(drdt, units="units per unit time")
```
###### Question
Consider the curve $f(x) = x^2 - \log(x)$. For a given $x$, the tangent line intersects the $y$ axis. Where?
```julia; hold=true; echo=false
choices = [
"``y = 1 - x^2 - \\log(x)``",
"``y = 1 - x^2``",
"``y = 1 - \\log(x)``",
"``y = x(2x - 1/x)``"
]
ans = 1
radioq(choices, ans)
```
If $dx/dt = -1$, what is $dy/dt$?
```julia; hold=true; echo=false
choices = [
"``dy/dt = 2x + 1/x``",
"``dy/dt = 1 - x^2 - \\log(x)``",
"``dy/dt = -2x - 1/x``",
"``dy/dt = 1``"
]
ans=1
radioq(choices, ans)
```


@ -0,0 +1,218 @@
# Symbolic derivatives
This section uses this add-on package:
```julia
using TermInterface
```
```julia; echo=false
const frontmatter = (
title = "Symbolic derivatives",
description = "Calculus with Julia: Symbolic derivatives",
tags = ["CalculusWithJulia", "derivatives", "symbolic derivatives"],
);
```
----
The ability to break down an expression into operations and their
arguments is necessary when trying to apply the differentiation
rules. Such rules are applied from the outside in. Identifying
the proper "outside" function is usually most of the battle when finding derivatives.
In the following example, we provide a sketch of a framework to differentiate expressions by a chosen symbol to illustrate how the outer function drives the task of differentiation.
The `Symbolics` package provides native symbolic manipulation abilities for `Julia`, similar to `SymPy`, though without the dependence on `Python`. The `TermInterface` package, used by `Symbolics`, provides a generic interface for expression manipulation for this package that *also* is implemented for `Julia`'s expressions and symbols.
An expression is an unevaluated portion of code that for our purposes
below contains other expressions, symbols, and numeric literals. They
are held in the `Expr` type. A symbol, such as `:x`, is distinct from
a string (e.g. `"x"`) and is useful to the programmer for distinguishing
the contents a variable points to from the name of the
variable. Symbols are fundamental to metaprogramming in `Julia`. An
expression is a specification of some set of statements to execute. A
numeric literal is just a number.
The three main functions from `TermInterface` we leverage are `istree`, `operation`, and `arguments`. The `operation` function returns the "outside" function of an expression. For example:
```julia
operation(:(sin(x)))
```
We see the `sin` function, referred to by a symbol (`:sin`).
The `:(...)` above *quotes* the argument, and does not evaluate it, hence `x` need not be defined above. (The `:` notation is used to create both symbols and expressions.)
The arguments are the terms that the outside function is called on. For our purposes there may be ``1`` (*unary*), ``2`` (*binary*), or more than ``2`` (*nary*) arguments. (We ignore zero-argument functions.) For example:
```julia
arguments(:(-x)), arguments(:(pi^2)), arguments(:(1 + x + x^2))
```
(The last one may be surprising, but all three arguments are passed to the `+` function.)
Here we define a function to decide the *arity* of an expression based on the number of arguments it is called with:
```julia
function arity(ex)
n = length(arguments(ex))
n == 1 ? Val(:unary) :
n == 2 ? Val(:binary) : Val(:nary)
end
```
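The same classification can be checked without the package, using `Julia`'s raw `Expr` fields: for a `:call` expression, `ex.args` holds the operation followed by its arguments, so `TermInterface`'s `arguments(ex)` corresponds to `ex.args[2:end]`. A quick standalone sketch:

```julia
# Count arguments of a :call expression directly from its fields;
# the operation occupies args[1], the arguments follow.
nargs(ex::Expr) = length(ex.args) - 1
nargs(:(-x)), nargs(:(pi^2)), nargs(:(1 + x + x^2))   # (1, 2, 3)
```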
Differentiation must distinguish between expressions, variables, and
numbers. Mathematically expressions have an "outer" function, whereas variables and numbers can be directly differentiated. The `istree`
function in `TermInterface` returns `true` when passed an expression,
and `false` when passed a symbol or numeric literal. The latter two
may be distinguished by `isa(..., Symbol)`.
Here we create a function, `D`, that when it encounters an expression it *dispatches* to a specific method of `D` based on the outer operation and arity, otherwise if it encounters a symbol or a numeric literal it does the differentiation:
```julia
function D(ex, var=:x)
if istree(ex)
op, args = operation(ex), arguments(ex)
D(Val(op), arity(ex), args, var)
    elseif isa(ex, Symbol) && ex == var
1
else
0
end
end
```
Now to develop methods for `D` for different "outside" functions and arities.
Addition can be unary (`:(+x)` is a valid quoting, even if it might simplify to the symbol `:x` when evaluated), *binary*, or *nary*. Here we implement the *sum rule*:
```julia
D(::Val{:+}, ::Val{:unary}, args, var) = D(first(args), var)
function D(::Val{:+}, ::Val{:binary}, args, var)
a, b = D.(args, var)
:($a + $b)
end
function D(::Val{:+}, ::Val{:nary}, args, var)
as = D.(args, var)
:(+($as...))
end
```
The `args` are always held in a container, so the unary method must pull out the first one. The binary case reads as: apply `D` to each of the two arguments, then create a quoted expression containing the sum of the results. The dollar signs interpolate into the quoted expression. (Primed names such as `a′`, entered as `\prime[tab]`, are ordinary unicode identifiers, not derivative operations.) The *nary* case does something similar, only it uses splatting to produce the sum.
Subtraction must also be implemented in a similar manner, but not for the *nary* case:
```julia
function D(::Val{:-}, ::Val{:unary}, args, var)
a = D(first(args), var)
:(-$a)
end
function D(::Val{:-}, ::Val{:binary}, args, var)
a, b = D.(args, var)
:($a - $b)
end
```
The *product rule* is similar to addition, in that ``3`` cases are considered:
```julia
D(op::Val{:*}, ::Val{:unary}, args, var) = D(first(args), var)
function D(::Val{:*}, ::Val{:binary}, args, var)
    a, b = args
    a′, b′ = D.(args, var)
    :($a′ * $b + $a * $b′)
end
function D(op::Val{:*}, ::Val{:nary}, args, var)
    a, bs... = args
    b = :(*($(bs...)))
    a′ = D(a, var)
    b′ = D(b, var)
    :($a′ * $b + $a * $b′)
end
```
The *nary* case above just peels off the first factor and then uses the binary product rule.
Division is only a binary operation, so here we have the *quotient rule*:
```julia
function D(::Val{:/}, ::Val{:binary}, args, var)
    u, v = args
    u′, v′ = D(u, var), D(v, var)
    :( ($u′*$v - $u*$v′) / $v^2 )
end
```
Powers are handled a bit differently. The power rule would require checking that the exponent does not contain the variable of differentiation; exponential derivatives would require checking that the base does not contain the variable of differentiation. Trying to implement both would be tedious, so we use the fact that ``x = \exp(\log(x))`` (for `x` in the domain of `log`; more care is necessary if `x` is negative) to differentiate:
```julia
function D(::Val{:^}, ::Val{:binary}, args, var)
a, b = args
D(:(exp($b*log($a))), var) # a > 0 assumed here
end
```
That leaves the task of defining a rule to differentiate both `exp` and `log`.
We do so with *unary* definitions. In the following we also implement `sin` and `cos` rules:
```julia
function D(::Val{:exp}, ::Val{:unary}, args, var)
    a = first(args)
    a′ = D(a, var)
    :(exp($a) * $a′)
end
function D(::Val{:log}, ::Val{:unary}, args, var)
    a = first(args)
    a′ = D(a, var)
    :(1/$a * $a′)
end
function D(::Val{:sin}, ::Val{:unary}, args, var)
    a = first(args)
    a′ = D(a, var)
    :(cos($a) * $a′)
end
function D(::Val{:cos}, ::Val{:unary}, args, var)
    a = first(args)
    a′ = D(a, var)
    :(-sin($a) * $a′)
end
```
The pattern is similar for each. The trailing derivative factor in each returned expression is needed due to the *chain rule*. The above illustrates the simple pattern necessary to add a derivative rule for a function. More rules could be added, but for this example the above will suffice, as now the system is ready to be put to work.
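For instance, a `tan` rule follows the same unary pattern. (This sketch includes a one-line stand-in for the full dispatcher, so it can run on its own; `tan` was not among the original rules.)

```julia
# A `tan` rule in the same style; the base case below is a toy stand-in
# for the full dispatcher defined earlier (symbols and literals only).
D(ex, var) = (isa(ex, Symbol) && ex == var) ? 1 : 0
function D(::Val{:tan}, ::Val{:unary}, args, var)
    a = first(args)
    a′ = D(a, var)
    :(sec($a)^2 * $a′)    # (tan u)' = sec(u)^2 * u'
end
D(Val(:tan), Val(:unary), [:x], :x)   # :(sec(x) ^ 2 * 1)
```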
```julia
ex₁ = :(x + 2/x)
D(ex₁, :x)
```
The output does not simplify, so some work is needed to identify `1 - 2/x^2` as the answer.
```julia
ex₂ = :( (x + sin(x))/sin(x))
D(ex₂, :x)
```
Again, simplification is not performed.
Finally, we have a second derivative taken below:
```julia
ex₃ = :(sin(x) - x - x^3/6)
D(D(ex₃, :x), :x)
```
The length of the expression should lead to further appreciation for simplification steps taken when doing such a computation by hand.


@ -0,0 +1,15 @@
[deps]
CSV = "336ed68f-0bac-5ca0-87d4-7b16caf5d00b"
Contour = "d38c429a-6771-53c6-b99e-75d170b6e991"
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa"
ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
IntervalSets = "8197267c-284f-5f27-9208-e0e47529a953"
JSON = "682c06a0-de6a-54ab-a142-c8b1cf79cde6"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
MDBM = "dd61e66b-39ce-57b0-8813-509f78be4b4d"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
PyPlot = "d330b81b-6aea-500a-939a-2ce795aea3ee"
QuadGK = "1fd47b50-473d-5c70-9696-f719f8f3bcdc"
Roots = "f2b01f46-fcfa-551c-844a-d8ac1e96c665"
SymPy = "24249f21-da20-56a4-8eb1-6a02cf4ae2e6"


@ -0,0 +1,29 @@
From [bennedich](https://discourse.julialang.org/t/love-in-245-characters-code-golf/20771)
```
0:2e-3:2π .|>d->(P=
fill(5<<11,64 ,25);z=8cis(
d)sin(.46d);P[ 64,:].=10;for
r=0:98,c=0 :5^3 x,y=@.mod(2-
$reim((.016c-r/49im-1-im)z),
4)-2;4-x^2>√2(y+.5-√√x^2)^
2&&(P[c÷2+1,r÷4+1]|=Int(
")*,h08H¨"[4&4c+1+r&
3])-40)end;print(
"\e[H\e[1;31m",
join(Char.(
P)))
);
```
[New York Times](https://www.nytimes.com/2019/02/14/science/math-algorithm-valentine.html)
Süss — German for “sweet” — is an interactive widget that allows you to tweak the algebra and customize the heart to your soul's delight. It was created for Valentine's Day by Imaginary, a nonprofit organization in Berlin that designs open-source mathematics programs and exhibitions.
You can stretch and squeeze the heart by moving the two left-most sliders, which change the “a” and “b” parameters; the right-most slider zooms in and out. Better yet, canoodle directly with Süss's equation and engage in the dialectical interplay between algebra and geometry. (Change that final z³ to a z² to see the heart in its underwear.)
```
(x^2+((1+b)*y)^2+z^2-1)^3-x^2*z^3-a*y^2*z^3
```


@ -0,0 +1,72 @@
"","elevation","elev_units","longitude","latitude"
"1",126.85,"meters",-74.2986363,40.7541939
"2",125.19,"meters",-74.298561,40.754122
"3",123.52,"meters",-74.298505,40.754049
"4",121.92,"meters",-74.298435,40.753972
"5",119.86,"meters",-74.298402,40.753872
"6",119.86,"meters",-74.298416,40.753818
"7",119.86,"meters",-74.298393,40.753805
"8",118.32,"meters",-74.298233,40.753717
"9",118.48,"meters",-74.298113,40.753706
"10",118.48,"meters",-74.298079,40.753714
"11",110.65,"meters",-74.297548,40.753434
"12",108.68,"meters",-74.297364,40.753392
"13",108.68,"meters",-74.2973338,40.7533463
"14",107.67,"meters",-74.2972265,40.7533169
"15",107.54,"meters",-74.297087,40.753356
"16",107.54,"meters",-74.2970438,40.7533584
"17",106.74,"meters",-74.296979,40.753397
"18",107.69,"meters",-74.29689,40.753533
"19",108.01,"meters",-74.296812,40.753661
"20",108.34,"meters",-74.296718,40.753785
"21",108.93,"meters",-74.296627,40.753874
"22",109.26,"meters",-74.296514,40.753973
"23",109.44,"meters",-74.296377,40.754026
"24",107.8,"meters",-74.296184,40.754049
"25",108.14,"meters",-74.29596,40.754119
"26",108.31,"meters",-74.295761,40.754191
"27",107.08,"meters",-74.295542,40.754277
"28",106.54,"meters",-74.295345,40.754276
"29",105.18,"meters",-74.295177,40.754295
"30",104.93,"meters",-74.2951,40.754358
"31",103.79,"meters",-74.294976,40.754381
"32",103.79,"meters",-74.294943,40.754379
"33",103.62,"meters",-74.294873,40.754362
"34",103.46,"meters",-74.294805,40.754359
"35",102.68,"meters",-74.294687,40.754349
"36",102.78,"meters",-74.294537,40.754269
"37",100.91,"meters",-74.294341,40.754248
"38",101.24,"meters",-74.294228,40.754249
"39",101.15,"meters",-74.294146,40.75427
"40",100.73,"meters",-74.294043,40.754277
"41",100.77,"meters",-74.293997,40.75418
"42",97.54,"meters",-74.293672,40.75418
"43",97.58,"meters",-74.293539,40.754324
"44",97.41,"meters",-74.293442,40.754447
"45",97.02,"meters",-74.29342,40.754555
"46",96.78,"meters",-74.293397,40.754677
"47",96.72,"meters",-74.293319,40.754787
"48",96.98,"meters",-74.2933093,40.7549621
"49",97.04,"meters",-74.2931914,40.7550903
"50",95.89,"meters",-74.2931359,40.7552002
"51",95.48,"meters",-74.293124,40.75528
"52",95.43,"meters",-74.293142,40.755375
"53",95.58,"meters",-74.293163,40.7554692
"54",95.58,"meters",-74.2931806,40.7555174
"55",95.31,"meters",-74.2930826,40.7555402
"56",95.45,"meters",-74.2930283,40.7555572
"57",94.19,"meters",-74.2929292,40.7555853
"58",93.57,"meters",-74.2928114,40.7556067
"59",92.9,"meters",-74.2927408,40.7556127
"60",92.9,"meters",-74.2926921,40.7556257
"61",91.46,"meters",-74.2926528,40.7556602
"62",91.46,"meters",-74.2926104,40.7556888
"63",88.42,"meters",-74.2925696,40.7557042
"64",88.42,"meters",-74.2925272,40.7556876
"65",85.62,"meters",-74.2924927,40.7556674
"66",85.32,"meters",-74.2924503,40.755646
"67",85.32,"meters",-74.2924377,40.7556222
"68",85.32,"meters",-74.2924377,40.7555877
"69",84.49,"meters",-74.2924346,40.7555365
"70",84.49,"meters",-74.2924236,40.755502
"71",84.36,"meters",-74.2923562,40.7554961


@ -0,0 +1,294 @@
## container of points into vectors n vectors of length N
## N points, each of size n
## Lesson learned -- this is a very bad idea!
## better to handle the T a different way
evec(T,n) = Tuple(T[] for _ in 1:n)
evec(T, N, n) = Tuple(Vector{T}(undef, N) for _ in 1:n)
## julia> @btime xs_ys1(vs) setup=(vs=[randn(1000) for i in 1:3]);
## 83.308 μs (1013 allocations: 172.67 KiB)
## julia> @btime xs_ys2(vs) setup=(vs=[randn(1000) for i in 1:3]);
## 222.371 μs (2016 allocations: 180.72 KiB)
## julia> @btime xs_ys3(vs) setup=(vs=[randn(1000) for i in 1:3]);
## 1.003 ms (1019 allocations: 165.20 KiB)
## julia> @btime xs_ys4(vs) setup=(vs=[randn(1000) for i in 1:3]);
## 1.115 ms (5474 allocations: 210.95 KiB)
## julia> @btime xs_ys5(vs) setup=(vs=[randn(1000) for i in 1:3]);
## 1.120 ms (5474 allocations: 210.95 KiB)
## julia> @btime xs_ys6(vs) setup=(vs=[randn(1000) for i in 1:3]);
## 76.604 μs (1008 allocations: 164.63 KiB)
## julia> @btime xs_ys7(vs) setup=(vs=[randn(1000) for i in 1:3]);
## 74.306 μs (1008 allocations: 164.63 KiB)
## julia> @btime xs_ys8(vs) setup=(vs=[randn(1000) for i in 1:3]);
## 36.098 μs (2006 allocations: 94.25 KiB)
## julia> @btime xs_ys9(vs) setup=(vs=[randn(1000) for i in 1:3]);
## 85.732 μs (3006 allocations: 203.63 KiB)
## ....
## THE WINNER, but we would use one with keywords
## julia> @btime xs_ys13a(vs) setup=(vs=[randn(1000) for i in 1:3]);
## 62.768 μs (1003 allocations: 117.28 KiB)
## julia> @btime xs_ys13akw(vs) setup=(vs=[randn(1000) for i in 1:3]);
## 65.905 μs (1003 allocations: 117.28 KiB)
## make a matrix n x N, then go down 1:n
function xs_ys1(vs)
A=hcat(vs...)
Tuple([A[i,:] for i in eachindex(first(vs))])
end
## broadcast push!
function xs_ys2(vs)
u = first(vs); N = length(vs)
T = eltype(u); n = length(u)
v0 = evec(T,n)
for v in vs
push!.(v0, v)
end
v0
end
## broadcast push!
function xs_ys2a(vs)
u = first(vs); N = length(vs)
n = length(u)
v0 = Tuple(eltype(u)[] for _ in eachindex(u))
for v in vs
push!.(v0, v)
end
v0
end
## broadcast setindex!
function xs_ys3(vs)
u = first(vs); N = length(vs)
T = eltype(u); n = length(u)
v0 = evec(T,N,n)
for (i,v) in enumerate(vs)
setindex!.(v0, v, i)
end
v0
end
## 10 times faster ~77mus avoiding passing T
function xs_ys3a(vs)
u = first(vs); N = length(vs)
n = length(u)
v0 = Tuple(Vector{eltype(u)}(undef, N) for _ in eachindex(u))
for (i,v) in enumerate(vs)
setindex!.(v0, v, i)
end
v0
end
function xs_ys3b(vs)
u = first(vs); N = length(vs)
n = length(u)
v0 = ntuple(_ -> Vector{eltype(u)}(undef, N), n)
for (i,v) in enumerate(vs)
setindex!.(v0, v, i)
end
v0
end
## loop N n
function xs_ys4(vs)
u = first(vs); N = length(vs)
T = eltype(u); n = length(u)
v0 = evec(T,N,n)
for i in 1:N
for j in 1:n
v0[j][i] = vs[i][j]
end
end
v0
end
## loop N n
function xs_ys4a(vs)
u = first(vs); N = length(vs)
T = eltype(u); n = length(u)
v0 = evec(T,N,n)
for (i,v) in enumerate(vs)
for j in 1:n
v0[j][i] = v[j]
end
end
v0
end
## fast 67mus
function xs_ys4b(vs)
u = first(vs); N = length(vs)
n = length(u)
v0 = Tuple(Vector{eltype(u)}(undef, N) for _ in eachindex(u))
for (i,v) in enumerate(vs)
for j in 1:n
v0[j][i] = v[j]
end
end
v0
end
## loop n N
function xs_ys5(vs)
u = first(vs); N = length(vs)
T = eltype(u); n = length(u)
v0 = evec(T,N,n)
for j in 1:n
for i in 1:N
v0[j][i] = vs[i][j]
end
end
v0
end
function xs_ys6(vs)
u = first(vs); N = length(vs)
T = eltype(u); n = length(u)
A = Matrix{T}(undef, (N,n))
for (i,v) in enumerate(vs)
A[i,:] = v
end
Tuple(A[:,i] for i in 1:n)
end
function xs_ys7(vs)
u = first(vs); N = length(vs)
T = eltype(u); n = length(u)
A = Matrix{T}(undef, (n, N))
for (i,v) in enumerate(vs)
A[:,i] = v
end
Tuple(A[i, :] for i in 1:n)
end
# faster but doesn't work with plot recipes
# and may be slower once realized
function xs_ys8(vs)
N = length(vs)
u = first(vs); T = eltype(u); n = length(u)
Tuple((vs[j][i] for j in 1:N) for i in 1:n)
end
function xs_ys9(vs)
N = length(vs)
u = first(vs); T = eltype(u); n = length(u)
Tuple(collect(vs[j][i] for j in 1:N) for i in 1:n)
end
function xs_ys10(vs)
N = length(vs)
u = first(vs); T = eltype(u); n = length(u)
v0 = evec(T,N, n)
for j in 1:n
v0[j][:] .= (v[j] for v in vs)
end
v0
end
# mauro3 https://github.com/JuliaDiffEq/ODE.jl/issues/80
_pluck(y,i) = eltype(first(y))[el[i] for el in y]
xs_ys11(vs) = Tuple(_pluck(vs, i) for i in eachindex(first(vs)))
# slower
xs_ys11a(vs) = ntuple(i->_pluck(vs, i), length(first(vs)))
# one liner
xs_ys11b(vs) = Tuple(eltype(first(vs))[el[i] for el in vs] for i in eachindex(first(vs)))
function xs_ys11c(vs)
u = first(vs)
Tuple(eltype(u)[el[i] for el in vs] for i in eachindex(u))
end
xs_ys11d(vs) = (u=first(vs); Tuple(eltype(u)[el[i] for el in vs] for i in eachindex(u)))
xs_ys11e(vs) = (u=first(vs); ntuple(i->eltype(u)[v[i] for v in vs], length(u)))
xs_ys11f(vs) = (u=first(vs);n::Int=length(u);T::DataType=eltype(u);ntuple(i->eltype(u)[v[i] for v in vs], n))
xs_ys11g(vs::Vector{Vector{T}}) where {T} = (u=first(vs);n::Int=length(u);ntuple(i->T[v[i] for v in vs], n))
@inline _pluck(T, y, i) = T[el[i] for el in y]
function xs_ys11b(vs)
T = eltype(first(vs))
Tuple(_pluck(T, vs, i) for i in eachindex(first(vs)))
end
function xs_ys12(vs)
N = length(vs)
u = first(vs); T = eltype(u); n = length(u)
Tuple(T[el[i] for el in vs] for i in eachindex(first(vs)))
end
function xs_ys12a(vs)
N = length(vs)
u = first(vs); T = eltype(u); n = length(u)
ntuple( i -> T[el[i] for el in vs], n)
end
function xs_ys11h(vs)
u = first(vs)
T = eltype(u)
Tuple(T[el[i] for el in vs] for i in eachindex(u))
end
function xs_ys11i(vs)
u = first(vs)
Tuple(eltype(u)[el[i] for el in vs] for i in eachindex(u))
end
function _xs_ys12(vs, u::Vector{T}) where {T}
Tuple(T[el[i] for el in vs] for i in eachindex(u))
end
xs_ys13(vs, u::Vector{T}=first(vs)) where {T} = Tuple(T[el[i] for el in vs] for i in eachindex(u))
xs_ys13a(vs, u::Vector{T}=first(vs), n::Val{N}=Val(length(u))) where {T,N} = ntuple(i -> T[el[i] for el in vs], n)
## cleaned up
function xs_ys13a(vs, u::Vector{T}=first(vs), n::Val{N}=Val(length(u))) where {T,N}
plucki = i -> T[el[i] for el in vs]
ntuple(plucki, n)
end
function xs_ys13akw(vs; u::Vector{T}=first(vs), n::Val{N}=Val(length(u))) where {T,N}
plucki = i -> T[el[i] for el in vs]
ntuple(plucki, n)
end
function xs_ys13b(vs, u::Vector{T}=first(vs), n::Val{N}=Val(length(u))) where {T,N}
Tuple(T[el[i] for el in vs] for i in eachindex(u))
end
xs_ys14(vs) = Tuple(eltype(vs[1])[vs[i][j] for i in 1:length(vs)] for j in 1:length(vs[1]))
xs_ys14a(vs) = Tuple([vs[i][j] for i in 1:length(vs)] for j in 1:length(first(vs)))


@ -0,0 +1,370 @@
# 2D and 3D plots in Julia with Plots
This section uses these add-on packages:
```julia
using CalculusWithJulia
using Plots
import Contour: contours, levels, level, lines, coordinates
using LinearAlgebra
using ForwardDiff
```
```julia; echo=false; results="hidden"
using CalculusWithJulia.WeaveSupport
const frontmatter = (
title = "2D and 3D plots in Julia with Plots",
description = "Calculus with Julia: 2D and 3D plots in Julia with Plots",
tags = ["CalculusWithJulia", "differentiable_vector_calculus", "2d and 3d plots in julia with plots"],
);
nothing
```
----
This covers plotting the typical 2D and 3D plots in Julia with the `Plots` package.
We will make use of some helper functions that simplify plotting, provided by the `CalculusWithJulia` package. As well, we will need to manipulate contours directly, so we pull in the `Contour` package, using `import` to avoid name collisions and explicitly listing the methods we will use.
## Parametrically described curves in space
Let $r(t)$ be a vector-valued function with values in $R^d$, $d$ being $2$ or $3$. A familiar example is the equation for a line that travels in the direction of $\vec{v}$ and goes through the point $P$: $r(t) = P + t \cdot \vec{v}$.
A *parametric plot* over $[a,b]$ is the collection of all points $r(t)$ for $a \leq t \leq b$.
In `Plots`, parameterized curves can be plotted through two interfaces, here illustrated for $d=2$: `plot(f1, f2, a, b)` or `plot(xs, ys)`. The former is convenient for some cases, but typically we will have a function `r(t)` which is vector-valued, as opposed to a vector of functions. As such, we only discuss the latter.
An example helps illustrate. Suppose $r(t) = \langle \sin(t), 2\cos(t) \rangle$ and the goal is to plot the full ellipse by plotting over $0 \leq t \leq 2\pi$. As with plotting of curves, the goal would be to take many points between `a` and `b` and from there generate the $x$ values and $y$ values.
Let's see this with ``5`` points, the first and last being identical since the curve is closed:
```julia
r₂(t) = [sin(t), 2cos(t)]
ts = range(0, stop=2pi, length=5)
```
Then we can create the $5$ points easily through broadcasting:
```julia
vs = r₂.(ts)
```
This returns a vector of points (stored as vectors). The plotting function wants two collections: the set of $x$ values for the points and the set of $y$ values. The data needs to be generated differently or reshaped. The `unzip` function provided by `CalculusWithJulia` takes data in this style and returns the desired format, returning a tuple with the $x$ values and $y$ values pulled out:
```julia
unzip(vs)
```
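A minimal version of `unzip` might be sketched as follows (the package's actual implementation is more involved; `unzip_sketch` is a hypothetical name used here to avoid clashing with the real function):

```julia
# Pull the i-th coordinate out of each point, giving one vector per coordinate
unzip_sketch(vs) = Tuple([v[i] for v in vs] for i in eachindex(first(vs)))
unzip_sketch([[1, 2], [3, 4], [5, 6]])   # ([1, 3, 5], [2, 4, 6])
```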
To plot this, we "splat" the tuple so that `plot` gets the arguments separately:
```julia
plot(unzip(vs)...)
```
This basic plot is lacking, of course, as there are not enough points. Using more initially is a remedy.
```julia; hold=true
ts = range(0, 2pi, length=100)
plot(unzip(r₂.(ts))...)
```
As a convenience, `CalculusWithJulia` provides `plot_parametric` to produce this plot. The interval is specified with the `a..b` notation, the points to plot are adaptively chosen:
```julia
plot_parametric(0..2pi, r₂) # interval first
```
### Plotting a space curve in 3 dimensions
A parametrically described curve in 3D is similarly created. For example, a helix is described mathematically by $r(t) = \langle \sin(t), \cos(t), t \rangle$. Here we graph two turns:
```julia;
r₃(t) = [sin(t), cos(t), t]
plot_parametric(0..4pi, r₃)
```
### Adding a vector
The tangent vector indicates the instantaneous direction one would travel were they walking along the space curve. We can add a tangent vector to the graph. The `quiver!` function would be used to add a 2D vector, but `Plots` does not currently have a `3D` analog. In addition, `quiver!` has a somewhat cumbersome calling pattern when adding just one vector. The `CalculusWithJulia` package defines an `arrow!` function that uses `quiver` for 2D arrows and a simple line for 3D arrows. As a vector incorporates magnitude and direction, but not a position, `arrow!` needs both a point for the position and a vector.
Here is how we can visualize the tangent vector at a few points on the helix:
```julia; hold=true
plot_parametric(0..4pi, r₃, legend=false)
ts = range(0, 4pi, length=5)
for t in ts
arrow!(r₃(t), r₃'(t))
end
```
```julia; echo=false
note("""Adding many arrows this way would be inefficient.""")
```
### Setting a viewing angle for 3D plots
For 3D plots, the viewing angle can make the difference in visualizing the key features. In `Plots`, some backends allow the viewing angle to be set with the mouse by clicking and dragging; not all do. For those that do not, the `camera` argument is used, as in `camera=(azimuthal, elevation)` where the angles are given in degrees. If the $x$-$y$-$z$ coordinates are given, then `elevation`, or *inclination*, is the angle between the $z$ axis and the $x$-$y$ plane (so `90` is a top view) and `azimuthal` is the angle in the $x$-$y$ plane from the $x$ axis.
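For example, the following sketch (the surface and the specific angles are merely illustrative) shows how `camera` is passed as a keyword argument:

```julia
using Plots

g(x, y) = x * sin(y)            # an illustrative surface
xs = range(-2, 2, length=50)
ys = range(-pi, pi, length=50)

# azimuthal 30°, elevation 15°: a low, oblique view
p = surface(xs, ys, g, camera=(30, 15))
```

Increasing the elevation toward `90` moves the view toward a top-down one.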
## Visualizing functions from $R^2 \rightarrow R$
If a function $f: R^2 \rightarrow R$ then a graph of $(x,y,f(x,y))$ can be represented in 3D. It will form a surface. Such graphs can be most simply made by specifying a set of $x$ values, a set of $y$ values and a function $f$, as with:
```julia
xs = range(-2, stop=2, length=100)
ys = range(-pi, stop=pi, length=100)
f(x,y) = x*sin(y)
surface(xs, ys, f)
```
Rather than pass in a function, values can be passed in. Here they are generated with a list comprehension. The `y` values are listed first so that rows correspond to `y`, matching the graphic produced when passing in a function object:
```julia; hold=true
zs = [f(x,y) for y in ys, x in xs]
surface(xs, ys, zs)
```
Remembering if the `ys` or `xs` go first in the above can be
hard. Alternatively, broadcasting can be used. The command `f.(xs,ys)`
would return a vector, as the `xs` and `ys` match in shape--they are both column vectors. But the
*transpose* of `xs` looks like a *row* vector and `ys` looks like a
column vector, so broadcasting will create a matrix of values, as
desired here:
```julia
surface(xs, ys, f.(xs', ys))
```
This graph shows the tessellation algorithm; here the grid in the $x$-$y$ plane consists of just one cell:
```julia; hold=true
xs = ys = range(-1, 1, length=2)
f(x,y) = x*y
surface(xs, ys, f)
```
A more accurate graph can be seen here:
```julia; hold=true
xs = ys = range(-1, 1, length=100)
f(x,y) = x*y
surface(xs, ys, f)
```
### Contour plots
The contour plot of $f:R^2 \rightarrow R$ draws level curves, $f(x,y)=c$, for different values of $c$ in the $x$-$y$ plane.
They are produced in a similar manner as the surface plots:
```julia; hold=true
xs = ys = range(-2,2, length=100)
f(x,y) = x*y
contour(xs, ys, f)
```
The cross in the middle corresponds to $c=0$, as when $x=0$ or $y=0$ then $f(x,y)=0$.
Similarly, computed values for $f(x,y)$ can be passed in. Here we change the function:
```julia; hold=true
f(x,y) = 2 - (x^2 + y^2)
xs = ys = range(-2,2, length=100)
zs = [f(x,y) for y in ys, x in xs]
contour(xs, ys, zs)
```
The chosen levels can be specified by the user through the `levels` argument, as in:
```julia; hold=true
f(x,y) = 2 - (x^2 + y^2)
xs = ys = range(-2,2, length=100)
zs = [f(x,y) for y in ys, x in xs]
contour(xs, ys, zs, levels = [-1.0, 0.0, 1.0])
```
If only a single level is desired, a scalar value can be specified, though not with all backends for `Plots`. For example, this next graphic shows the $0$-level of the [devil](http://www-groups.dcs.st-and.ac.uk/~history/Curves/Devils.html)'s curve.
```julia; hold=true
a, b = -1, 2
f(x,y) = y^4 - x^4 + a*y^2 + b*x^2
xs = ys = range(-5, stop=5, length=100)
contour(xs, ys, f, levels=[0.0])
```
Contour plots are well known from the presence of contour lines on many maps. Contour lines indicate constant elevations. A peak is characterized by a series of nested closed paths. The following graph shows this for the peak at $(x,y)=(0,0)$.
```julia; hold=true
xs = ys = range(-pi/2, stop=pi/2, length=100)
f(x,y) = sinc(sqrt(x^2 + y^2)) # sinc(x) is sin(x)/x
contour(xs, ys, f)
```
Contour plots can be filled with colors through the `contourf` function:
```julia; hold=true
xs = ys = range(-pi/2, stop=pi/2, length=100)
f(x,y) = sinc(sqrt(x^2 + y^2))
contourf(xs, ys, f)
```
### Combining surface plots and contour plots
In `PyPlot` it is possible to add contour lines to the surface, or project them onto an axis.
To replicate something similar, though not as satisfying, in `Plots` we use the `Contour` package.
```julia; hold=true
f(x,y) = 2 + x^2 + y^2
xs = ys = range(-2, stop=2, length=100)
zs = [f(x,y) for y in ys, x in xs]
p = surface(xs, ys, zs, legend=false, fillalpha=0.5)
## we add to the graphic p, then plot
for cl in levels(contours(xs, ys, zs))
lvl = level(cl) # the z-value of this contour level
for line in lines(cl)
_xs, _ys = coordinates(line) # coordinates of this line segment
_zs = 0 * _xs
plot!(p, _xs, _ys, lvl .+ _zs, alpha=0.5) # add on surface
plot!(p, _xs, _ys, _zs, alpha=0.5) # add on x-y plane
end
end
p
```
There is no hidden-line calculation; in its place we give the contour lines transparency through the argument `alpha=0.5`.
### Gradient and surface plots
The surface plot of $f: R^2 \rightarrow R$ plots $(x, y, f(x,y))$ as a surface. The *gradient* of $f$ is $\langle \partial f/\partial x, \partial f/\partial y\rangle$. It is a two-dimensional object indicating the direction at a point $(x,y)$ where the surface has the greatest ascent. Illustrating the gradient and the surface on the same plot requires embedding the 2D gradient into the 3D surface. This can be done by adding a constant $z$ value to the gradient, such as $0$.
```julia; hold=true
f(x,y) = 2 - (x^2 + y^2)
xs = ys = range(-2, stop=2, length=100)
zs = [f(x,y) for y in ys, x in xs]
surface(xs, ys, zs, camera=(40, 25), legend=false)
p = [-1, 1] # in the region graphed, [-2,2] × [-2, 2]
f(x) = f(x...)
v = ForwardDiff.gradient(f, p)
# append a third, z, component to p and v (two styles)
push!(p, -15)
scatter!(unzip([p])..., markersize=3)
v = vcat(v, 0)
arrow!(p, v)
```
### The tangent plane
Let $z = f(x,y)$ describe a surface, and $F(x,y,z) = f(x,y) - z$. Then the gradient of $F$ at a point $p$ on the surface, $\nabla F(p)$, will be normal to the surface, and the function $f(p) + \nabla f(p) \cdot (x-p)$ describes the tangent plane. We can visualize each, as follows:
```julia; hold=true
f(x,y) = 2 - x^2 - y^2
f(v) = f(v...)
F(x,y,z) = z - f(x,y)
F(v) = F(v...)
p = [1/10, -1/10]
global p1 = vcat(p, f(p...)) # note F(p1) == 0
global n⃗ = ForwardDiff.gradient(F, p1)
global tl(x) = f(p) + ForwardDiff.gradient(f, p) ⋅ (x - p)
tl(x,y) = tl([x,y])
xs = ys = range(-2, stop=2, length=100)
surface(xs, ys, f)
surface!(xs, ys, tl)
arrow!(p1, 5n⃗)
```
From some viewing angles, the normal does not look perpendicular to the tangent plane. This is a quick verification for a randomly chosen point in the $x-y$ plane:
```julia
a, b = randn(2)
dot(n⃗, (p1 - [a,b, tl(a,b)]))
```
### Parameterized surface plots
As illustrated, we can plot surfaces of the form $(x,y,f(x,y))$. However, not all surfaces are so readily described. For example, if $F(x,y,z)$ is a function from $R^3 \rightarrow R$, then $F(x,y,z)=c$ is a surface of interest. For example, the sphere of radius one is a solution to $F(x,y,z)=1$ where $F(x,y,z) = x^2 + y^2 + z^2$.
Plotting such generally described surfaces is not so easy, but *parameterized* surfaces can be represented. For example, the sphere as a surface is not represented as a surface of a function, but can be represented in spherical coordinates as parameterized by two angles, essentially an "azimuth" and an "elevation", as used with the `camera` argument.
Here we define functions that represent $(x,y,z)$ coordinates in terms of the corresponding spherical coordinates $(r, \theta, \phi)$.
```julia
# spherical: (radius r, inclination θ, azimuth φ)
X(r,theta,phi) = r * sin(theta) * sin(phi)
Y(r,theta,phi) = r * sin(theta) * cos(phi)
Z(r,theta,phi) = r * cos(theta)
```
We can parameterize the sphere by plotting values for $x$, $y$, and $z$ produced by a sequence of values for $\theta$ and $\phi$, holding $r=1$:
```julia; hold=true
thetas = range(0, stop=pi, length=50)
phis = range(0, stop=pi/2, length=50)
xs = [X(1, theta, phi) for theta in thetas, phi in phis]
ys = [Y(1, theta, phi) for theta in thetas, phi in phis]
zs = [Z(1, theta, phi) for theta in thetas, phi in phis]
surface(xs, ys, zs)
```
```julia; echo=false
note("The above may not work with all backends for `Plots`, even among those that support 3D graphics.")
```
For convenience, the `plot_parametric` function from `CalculusWithJulia` can produce these plots using interval notation and a function:
```julia; hold=true
F(theta, phi) = [X(1, theta, phi), Y(1, theta, phi), Z(1, theta, phi)]
plot_parametric(0..pi, 0..pi/2, F)
```
### Plotting $F(x,y,z) = c$
There is no built-in functionality in `Plots` to create a surface described by $F(x,y,z) = c$. An example of how to provide such functionality for `PyPlot` appears [here](https://stackoverflow.com/questions/4680525/plotting-implicit-equations-in-3d). The non-exported `plot_implicit_surface` function can be used to approximate this.
To use it, we see what happens when a sphere is rendered:
```julia; hold=true
f(x,y,z) = x^2 + y^2 + z^2 - 25
CalculusWithJulia.plot_implicit_surface(f)
```
This figure comes from a February 14, 2019 article in the [New York Times](https://www.nytimes.com/2019/02/14/science/math-algorithm-valentine.html). It shows an equation for a "heart," as the graphic will illustrate:
```julia; hold=true
a,b = 1,3
f(x,y,z) = (x^2+((1+b)*y)^2+z^2-1)^3-x^2*z^3-a*y^2*z^3
CalculusWithJulia.plot_implicit_surface(f, xlim=-2..2, ylim=-1..1, zlim=-1..2)
```
# Polar Coordinates and Curves
This section uses these add-on packages:
```julia;
using CalculusWithJulia
using Plots
using SymPy
using Roots
using QuadGK
```
```julia; echo=false; results="hidden"
using CalculusWithJulia.WeaveSupport
const frontmatter = (
title = "Polar Coordinates and Curves",
description = "Calculus with Julia: Polar Coordinates and Curves",
tags = ["CalculusWithJulia", "differentiable_vector_calculus", "polar coordinates and curves"],
);
using LaTeXStrings
nothing
```
----
The description of the $x$-$y$ plane via Cartesian coordinates is not the only possible one, though it is the most familiar. Here we discuss a different means: instead of talking about over and up from an origin, we focus on a direction and a distance from the origin.
## Definition of polar coordinates
Polar coordinates parameterize the plane though an angle $\theta$ made from the positive ray of the $x$ axis and a radius $r$.
```julia; hold=true; echo=false
theta = pi/6
rr = 1
p = plot(xticks=nothing, yticks=nothing, border=:none, aspect_ratio=:equal, xlim=(-.1,1), ylim=(-.1,3/4))
plot!([0,rr*cos(theta)], [0, rr*sin(theta)], legend=false, color=:blue, linewidth=2)
scatter!([rr*cos(theta)],[rr*sin(theta)], markersize=3, color=:blue)
arrow!([0,0], [0,3/4], color=:black)
arrow!([0,0], [1,0], color=:black)
ts = range(0, theta, length=50)
rr = 1/6
plot!(rr*cos.(ts), rr*sin.(ts), color=:black)
plot!([cos(theta),cos(theta)],[0, sin(theta)], linestyle=:dash, color=:gray)
plot!([0,cos(theta)],[sin(theta), sin(theta)], linestyle=:dash, color=:gray)
annotate!([
(1/5*cos(theta/2), 1/5*sin(theta/2), L"\theta"),
(1/2*cos(theta*1.2), 1/2*sin(theta*1.2), L"r"),
(cos(theta), sin(theta)+.05, L"(x,y)"),
(cos(theta),-.05, L"x"),
(-.05, sin(theta),L"y")
])
```
To recover the Cartesian coordinates from the pair $(r,\theta)$, we have these formulas from [right](http://en.wikipedia.org/wiki/Polar_coordinate_system#Converting_between_polar_and_Cartesian_coordinates) triangle geometry:
```math
x = r \cos(\theta),~ y = r \sin(\theta).
```
Each point $(x,y)$ corresponds to several possible values of
$(r,\theta)$, as any integer multiple of $2\pi$ added to $\theta$ will
describe the same point. Except for the origin, there is only one pair
when we restrict to $r > 0$ and $0 \leq \theta < 2\pi$.
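This restriction is easy to carry out in `Julia`: the two-argument `atan` returns an angle in $(-\pi, \pi]$, and `mod2pi` shifts it into $[0, 2\pi)$. (The helper name `polar_form` below is our own, not from a package.)

```julia
# (x, y) -> (r, θ) with r ≥ 0 and 0 ≤ θ < 2π
polar_form(x, y) = (sqrt(x^2 + y^2), mod2pi(atan(y, x)))

r, θ = polar_form(-3, 4)
r * cos(θ), r * sin(θ)   # recovers (-3, 4)
```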
For values in the first and fourth quadrants (the range of
$\tan^{-1}(x)$), we have:
```math
r = \sqrt{x^2 + y^2},~ \theta=\tan^{-1}(y/x).
```
For the other two quadrants, the signs of $y$ and $x$ must be
considered. This is done with the function `atan` when two arguments are used.
For example, $(-3, 4)$ would have polar coordinates:
```julia;
x,y = -3, 4
rad, theta = sqrt(x^2 + y^2), atan(y, x)
```
And reversing
```julia;
rad*cos(theta), rad*sin(theta)
```
This figure illustrates:
```julia; hold=true; echo=false
p = plot([-5,5], [0,0], color=:blue, legend=false)
plot!([0,0], [-5,5], color=:blue)
plot!([-3,0], [4,0])
scatter!([-3], [4])
title!("(-3,4) Cartesian or (5, 2.21...) polar")
p
```
The case where $r < 0$ is handled by going ``180`` degrees in the opposite direction; in other words, the point $(r, \theta)$ can be described as well by $(-r,\theta+\pi)$.
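This equivalence can be checked directly; here is a quick numeric sketch for one arbitrarily chosen pair:

```julia
r, θ = 2.0, pi/3

# the Cartesian point for (r, θ) ...
p1 = (r * cos(θ), r * sin(θ))
# ... and for (-r, θ + π)
p2 = (-r * cos(θ + pi), -r * sin(θ + pi))

p1 .≈ p2   # both coordinates agree
```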
## Parameterizing curves using polar coordinates
If $r=r(\theta)$, then the parameterized curve $(r(\theta), \theta)$
is just the set of points generated as $\theta$ ranges over some set
of values. There are many examples of parameterized curves that
simplify what might be a complicated presentation in Cartesian coordinates.
For example, a circle has the form $x^2 + y^2 = R^2$. Whereas
parameterized by polar coordinates it is just $r(\theta) = R$, or a
constant function.
The circle centered at $(r_0, \gamma)$ (in polar coordinates) with
radius $R$ has a more involved description in polar coordinates:
```math
r(\theta) = r_0 \cos(\theta - \gamma) + \sqrt{R^2 - r_0^2\sin^2(\theta - \gamma)}.
```
The case where $r_0 > R$ will not be defined for all values of $\theta$, only when $|\sin(\theta-\gamma)| \leq R/r_0$.
#### Examples
The `Plots.jl` package provides a means to visualize polar plots through `plot(thetas, rs, proj=:polar)`. For example, to plot a circle with $R=1$, $r_0=1/2$, and $\gamma=\pi/6$ we would have:
```julia; hold=true
R, r0, gamma = 1, 1/2, pi/6
r(theta) = r0 * cos(theta-gamma) + sqrt(R^2 - r0^2*sin(theta-gamma)^2)
ts = range(0, 2pi, length=100)
rs = r.(ts)
plot(ts, rs, proj=:polar, legend=false)
```
To avoid having to create values for $\theta$ and values for $r$, the `CalculusWithJulia` package provides a helper function, `plot_polar`. To distinguish it from other functions provided by `Plots`, the calling pattern is different. It specifies an interval to plot over by `a..b` and puts that first, followed by `r`. Other keyword arguments are passed onto a `plot` call.
We will use this in the following, as the graphs are a bit more familiar and the calling pattern similar to how we have plotted functions.
As `Plots` will make a parametric plot when called as `plot(function, function, a,b)`, the above
function creates two such functions using the relationship $x=r\cos(\theta)$ and $y=r\sin(\theta)$.
Using `plot_polar`, we can plot circles with the following. We have to be a bit careful for the general circle, as when the center is farther away from the origin than the radius ($R$), not all angles will be acceptable and two functions are needed to describe the radius, as it comes from a quadratic equation and both the "plus" and "minus" roots are used.
```julia; hold=true
R=4; r(t) = R;
function plot_general_circle!(r0, gamma, R)
# law of cosines has if gamma=0, |theta| <= asin(R/r0)
# R^2 = a^2 + r^2 - 2a*r*cos(theta); solve for a
r(t) = r0 * cos(t - gamma) + sqrt(R^2 - r0^2*sin(t-gamma)^2)
l(t) = r0 * cos(t - gamma) - sqrt(R^2 - r0^2*sin(t-gamma)^2)
if R < r0
theta = asin(R/r0)-1e-6 # avoid round off issues
plot_polar!((gamma-theta)..(gamma+theta), r)
plot_polar!((gamma-theta)..(gamma+theta), l)
else
plot_polar!(0..2pi, r)
end
end
plot_polar(0..2pi, r, aspect_ratio=:equal, legend=false)
plot_general_circle!(2, 0, 2)
plot_general_circle!(3, 0, 1)
```
There are many interesting examples of curves described by polar coordinates. An interesting [compilation](http://www-history.mcs.st-and.ac.uk/Curves/Curves.html) of famous curves is found at the MacTutor History of Mathematics archive, many of which have formulas in polar coordinates.
##### Example
The [rhodonea](http://www-history.mcs.st-and.ac.uk/Curves/Rhodonea.html) curve has
```math
r(\theta) = a \sin(k\theta)
```
```julia; hold=true
a, k = 4, 5
r(theta) = a * sin(k * theta)
plot_polar(0..pi, r)
```
This graph has radius $0$ whenever $\sin(k\theta) = 0$, that is when $k\theta = n\pi$. Solving shows the radius is $0$ at integer multiples of $\pi/k$. In the above, with $k=5$, there will be $5$ zeroes in $[0,\pi]$. The entire curve is traced out over this interval; the values from $\pi$ to $2\pi$ yield negative values of $r$, so are related to values within $0$ to $\pi$ via the relation $(r,\pi+\theta) = (-r, \theta)$.
##### Example
The [folium](http://www-history.mcs.st-and.ac.uk/Curves/Folium.html)
is a somewhat similar looking curve, but has this description:
```math
r(\theta) = -b \cos(\theta) + 4a \cos(\theta) \sin(2\theta)
```
```julia;
𝒂, 𝒃 = 4, 2
𝒓(theta) = -𝒃 * cos(theta) + 4𝒂 * cos(theta) * sin(2theta)
plot_polar(0..2pi, 𝒓)
```
The folium has radial part $0$ when $\cos(\theta) = 0$ or
$\sin(2\theta) = b/(4a)$. This can be used to find out which values of $\theta$
correspond to which loop. For our choice of $a$ and $b$ the first condition gives $\pi/2$ and $3\pi/2$; as
$b/(4a) = 1/8$, the second holds when $\sin(2\theta) = 1/8$, which happens at
$a_0=\sin^{-1}(1/8)/2=0.0626...$, $\pi/2 - a_0$, $\pi+a_0$, and $3\pi/2 - a_0$. The first loop can be plotted with:
```julia;
𝒂0 = (1/2) * asin(1/8)
plot_polar(𝒂0..(pi/2-𝒂0), 𝒓)
```
The second - which is too small to appear in the initial plot without zooming in - with
```julia;
plot_polar((pi/2 - 𝒂0)..(pi/2), 𝒓)
```
The third with
```julia;
plot_polar((pi/2)..(pi + 𝒂0), 𝒓)
```
The plot repeats from there, so the initial plot could have been made over $[0, \pi + a_0]$.
##### Example
The [Limacon of Pascal](http://www-history.mcs.st-and.ac.uk/Curves/Limacon.html) has
```math
r(\theta) = b + 2a\cos(\theta)
```
```julia; hold=true
a,b = 4, 2
r(theta) = b + 2a*cos(theta)
plot_polar(0..2pi, r)
```
##### Example
Some curves require a longer parameterization, such as this where we
plot over $[0, 8\pi]$ so that the cosine term can range over an entire
half period:
```julia; hold=true
r(theta) = sqrt(abs(cos(theta/8)))
plot_polar(0..8pi, r)
```
## Area of polar graphs
Consider the [cardioid](http://www-history.mcs.st-and.ac.uk/Curves/Cardioid.html) described by $r(\theta) = 2(1 + \cos(\theta))$:
```julia; hold=true
r(theta) = 2(1 + cos(theta))
plot_polar(0..2pi, r)
```
How much area is contained in the graph?
In some cases it might be possible to translate back into Cartesian
coordinates and compute from there. In practice, this is not usually the best
solution.
The area can be approximated by wedges (not rectangles). For example, here we see that the area over a given angle is well approximated by the wedge for each of the sectors:
```julia; hold=true; echo=false
r(theta) = 1/(1 + (1/3)cos(theta))
p = plot_polar(0..pi/2, r, legend=false, linewidth=3, aspect_ratio=:equal)
t0, t1, t2, t3 = collect(range(pi/12, pi/2 - pi/12, length=4))
for s in (t0,t1,t2,t3)
plot!(p, [0, r(s)*cos(s)], [0, r(s)*sin(s)], linewidth=3)
end
for (s0,s1) in ((t0,t1), (t1, t2), (t2,t3))
s = (s0 + s1)/2
plot!(p, [0,r(s)*cos(s)], [0, r(s)*sin(s)])
ts = range(s0, s1, length=25)
xs, ys = r(s)*cos.(ts), r(s)*sin.(ts)
plot!(p, xs, ys)
plot!(p, [0,xs[1]],[0,ys[1]])
end
p
```
As well, see this part of a
[Wikipedia](http://en.wikipedia.org/wiki/Polar_coordinate_system#Integral_calculus_.28area.29)
page for a figure.
Imagine we have $a < b$ and a partition $a=t_0 < t_1 < \cdots < t_n =
b$. Let $\phi_i = (1/2)(t_{i-1} + t_{i})$ be the midpoint.
Then the wedge of radius $r(\phi_i)$ with angle between $t_{i-1}$ and $t_i$ will have area $\pi r(\phi_i)^2 \cdot (t_i-t_{i-1}) / (2\pi) = (1/2) r(\phi_i)^2(t_i-t_{i-1})$, the ratio $(t_i-t_{i-1}) / (2\pi)$ being the proportion of a full revolution that the wedge's angle represents.
Summing the areas of these wedges over the partition gives a Riemann sum approximation for the integral $(1/2)\int_a^b r(\theta)^2 d\theta$. The limit of this sum defines the area in polar coordinates.
> *Area of polar regions*. Let $R$ denote the region bounded by the curve $r(\theta)$ and bounded by the rays
> $\theta=a$ and $\theta=b$ with $b-a \leq 2\pi$, then the area of $R$ is given by:
>
> ``A = \frac{1}{2}\int_a^b r(\theta)^2 d\theta.``
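Before computing areas symbolically, the wedge construction itself can be checked numerically. This sketch, using only base `Julia`, forms the midpoint sum of the wedge areas for the cardioid $r(\theta) = 2(1+\cos(\theta))$; it agrees with the exact area, $6\pi$:

```julia
r(theta) = 2(1 + cos(theta))

n = 10_000
ts = range(0, 2pi, length=n+1)
mids = (ts[1:end-1] .+ ts[2:end]) ./ 2
Δt = step(ts)

# sum of the wedge areas (1/2)⋅r(ϕᵢ)²⋅Δt
A = sum((1/2) * r(m)^2 * Δt for m in mids)
A ≈ 6pi   # true
```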
So the area of the cardioid, which is parameterized over $[0, 2\pi]$ is found by
```julia; hold=true
r(theta) = 2(1 + cos(theta))
@syms theta
(1//2) * integrate(r(theta)^2, (theta, 0, 2PI))
```
##### Example
The folium has general formula $r(\theta) = -b \cos(\theta)
+4a\cos(\theta)\sin(\theta)^2$. When $a=1$ and $b=1$ a leaf of the
folium is traced out between $\pi/6$ and $\pi/2$. What is the area of
that leaf?
An antiderivative exists for arbitrary $a$ and $b$:
```julia;
@syms 𝐚 𝐛 𝐭heta
𝐫(theta) = -𝐛*cos(theta) + 4𝐚*cos(theta)*sin(theta)^2
integrate(𝐫(𝐭heta)^2, 𝐭heta) / 2
```
For our specific values, the answer can be computed with:
```julia;
ex = integrate(𝐫(𝐭heta)^2, (𝐭heta, PI/6, PI/2)) / 2
ex(𝐚 => 1, 𝐛=>1)
```
##### Example
Pascal's
[limacon](http://www-history.mcs.st-and.ac.uk/Curves/Limacon.html) is
like the cardioid, but contains an extra loop. When $a=1$ and $b=1$ we
have this graph.
```julia; hold=true; echo=false
a,b = 1,1
r(theta) = b + 2a*cos(theta)
p = plot(t->r(t)*cos(t), t->r(t)*sin(t), 0, pi/2 + pi/6, legend=false, color=:blue)
plot!(p, t->r(t)*cos(t), t->r(t)*sin(t), 3pi/2 - pi/6, pi/2 + pi/6, color=:orange)
plot!(p, t->r(t)*cos(t), t->r(t)*sin(t), 3pi/2 - pi/6, 2pi, color=:blue)
p
```
What is the area contained in the outer loop, that is not in the inner loop?
To answer, we need to find out what range of values in $[0, 2\pi]$ the
inner and outer loops are traced. This will be when $r(\theta) = 0$,
which for the choice of $a$ and $b$ solves $1 + 2\cos(\theta) = 0$, or
$\cos(\theta) = -1/2$. This is $\pi/2 + \pi/6$ and $3\pi/2 -
\pi/6$. The inner loop is traversed between those values and has area:
```julia;
@syms 𝖺 𝖻 𝗍heta
𝗋(theta) = 𝖻 + 2𝖺*cos(𝗍heta)
𝖾x = integrate(𝗋(𝗍heta)^2 / 2, (𝗍heta, PI/2 + PI/6, 3PI/2 - PI/6))
𝗂nner = 𝖾x(𝖺=>1, 𝖻=>1)
```
The outer area (including the inner loop) is the integral from $0$ to $\pi/2 + \pi/6$ plus that from $3\pi/2 - \pi/6$ to $2\pi$. These areas are equal, so we double the first:
```julia;
𝖾x1 = 2 * integrate(𝗋(𝗍heta)^2 / 2, (𝗍heta, 0, PI/2 + PI/6))
𝗈uter = 𝖾x1(𝖺=>1, 𝖻=>1)
```
The answer is the difference:
```julia;
𝗈uter - 𝗂nner
```
## Arc length
The length of the arc traced by a polar graph can also be expressed
using an integral. Again, we partition the interval $[a,b]$ and
consider the wedge from $(r(t_{i-1}), t_{i-1})$ to $(r(t_i),
t_i)$. The curve this wedge approximates will have its arc length
approximated by the line segment connecting the points. Expressing the
points in Cartesian coordinates and simplifying gives the distance
squared as:
```math
\begin{align}
d_i^2 &= (r(t_i) \cos(t_i) - r(t_{i-1})\cos(t_{i-1}))^2 + (r(t_i) \sin(t_i) - r(t_{i-1})\sin(t_{i-1}))^2\\
&= r(t_i)^2 - 2r(t_i)r(t_{i-1}) \cos(t_i - t_{i-1}) + r(t_{i-1})^2 \\
&\approx r(t_i)^2 - 2r(t_i)r(t_{i-1}) (1 - \frac{(t_i - t_{i-1})^2}{2})+ r(t_{i-1})^2 \quad(\text{as} \cos(x) \approx 1 - x^2/2)\\
&= (r(t_i) - r(t_{i-1}))^2 + r(t_i)r(t_{i-1}) (t_i - t_{i-1})^2.
\end{align}
```
As was done when deriving arc length, we multiply $d_i$ by $(t_i - t_{i-1})/(t_i - t_{i-1})$
and move the denominator under the square root:
```math
\begin{align}
d_i
&= d_i \frac{t_i - t_{i-1}}{t_i - t_{i-1}} \\
&\approx \sqrt{\frac{(r(t_i) - r(t_{i-1}))^2}{(t_i - t_{i-1})^2} +
\frac{r(t_i)r(t_{i-1}) (t_i - t_{i-1})^2}{(t_i - t_{i-1})^2}} \cdot (t_i - t_{i-1})\\
&= \sqrt{(r'(\xi_i))^2 + r(t_i)r(t_{i-1})} \cdot (t_i - t_{i-1}).\quad(\text{the mean value theorem})
\end{align}
```
Summing these approximations for the $d_i$ gives a Riemann sum approximation to the
integral $\int_a^b \sqrt{r'(\theta)^2 + r(\theta)^2} d\theta$ (with
the extension to the Riemann sum formula needed to derive the arc
length of a parameterized curve). That is:
> *Arc length of a polar curve*. The arc length of the curve described in polar coordinates by $r(\theta)$ for $a \leq \theta \leq b$ is given by:
>
> ``\int_a^b \sqrt{r'(\theta)^2 + r(\theta)^2} d\theta.``
We test this out on a circle with $r(\theta) = R$, a constant. The
integrand simplifies to just $\sqrt{R^2}$ and the integral is from $0$
to $2\pi$, so the arc length is $2\pi R$, precisely the formula for
the circumference.
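A quick numeric confirmation of this with `QuadGK` (taking $R=3$, an arbitrary choice):

```julia
using QuadGK

R = 3
r(theta) = R
rp(theta) = 0                 # r′(θ) for a constant radius

len, err = quadgk(t -> sqrt(rp(t)^2 + r(t)^2), 0, 2pi)
len ≈ 2pi * R   # true; both are the circumference
```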
##### Example
A cardioid is described by $r(\theta) = 2(1 + \cos(\theta))$. What is the arc length from $0$ to $2\pi$?
The integrand is integrable with antiderivative $4\sqrt{2\cos(\theta) + 2} \cdot \tan(\theta/2)$,
but `SymPy` isn't able to find the integral. Instead we give a numeric answer:
```julia; hold=true
r(theta) = 2*(1 + cos(theta))
quadgk(t -> sqrt(r'(t)^2 + r(t)^2), 0, 2pi)[1]
```
##### Example
The [equiangular](http://www-history.mcs.st-and.ac.uk/Curves/Equiangular.html) spiral has polar representation
```math
r(\theta) = a e^{\theta \cot(b)}
```
With $a=1$ and $b=\pi/4$, find the arc length traced out from $\theta=0$ to $\theta=1$.
```julia; hold=true
a, b = 1, PI/4
@syms θ
r(theta) = a * exp(theta * cot(b))
ds = sqrt(diff(r(θ), θ)^2 + r(θ)^2)
integrate(ds, (θ, 0, 1))
```
##### Example
An Archimedean [spiral](http://en.wikipedia.org/wiki/Archimedean_spiral) is defined in polar form by
```math
r(\theta) = a + b \theta
```
That is, the radius increases linearly. The crossings of the positive $x$ axis occur at $a + b n 2\pi$, so are evenly spaced out by $2\pi b$. These could be a model for such things as coils of materials of uniform thickness.
For example, a roll of toilet paper promises ``1000`` sheets with the
[smaller](http://www.phlmetropolis.com/2011/03/the-incredible-shrinking-toilet-paper.php)
$4.1 \times 3.7$ inch size. This $3700$ inch long connected sheet of
paper is wrapped around a paper tube in an Archimedean spiral with
$r(\theta) = d_{\text{inner}}/2 + b\theta$. The entire roll must fit in a standard
dimension, so the outer diameter will be $d_{\text{outer}} = 5~1/4$ inches. Can we figure out
$b$?
Let $n$ be the number of windings and assume the starting and ending point is on the positive $x$ axis,
$r(2\pi n) = d_{\text{outer}}/2 = d_{\text{inner}}/2 + b (2\pi n)$. Solving for $n$ in terms of $b$ we get:
$n = ( d_{\text{outer}} - d_{\text{inner}})/2 / (2\pi b)$. With this, the following must hold as the total arc length is $3700$ inches.
```math
\int_0^{n\cdot 2\pi} \sqrt{r(\theta)^2 + r'(\theta)^2} d\theta = 3700
```
Numerically then we have:
```julia; hold=true
dinner = 1 + 5/8
douter = 5 + 1/4
r(b,t) = dinner/2 + b*t
rp(b,t) = b
integrand(b,t) = sqrt((r(b,t))^2 + rp(b,t)^2) # sqrt(r^2 + r'^2)
n(b) = (douter - dinner)/2/(2*pi*b)
b = find_zero(b -> quadgk(t->integrand(b,t), 0, n(b)*2*pi)[1] - 3700, (1/100000, 1/100))
b, b*25.4
```
The first returned value is `b` in inches; the second converts this to millimeters.
## Questions
###### Question
Let $r=3$ and $\theta=\pi/8$. In Cartesian coordinates what is $x$?
```julia; hold=true; echo=false
x,y = 3 * [cos(pi/8), sin(pi/8)]
numericq(x)
```
What is $y$?
```julia; hold=true; echo=false
numericq(y)
```
###### Question
A point in Cartesian coordinates is given by $(-12, -5)$. It has a polar coordinate representation with $r > 0$ and an angle $\theta$ in $(-\pi, \pi]$, the range of the two-argument `atan`. What is $r$?
```julia; hold=true; echo=false
x,y = -12, -5
r1, theta1 = sqrt(x^2 + y^2), atan(y,x)
numericq(r1)
```
What is $\theta$?
```julia; hold=true; echo=false
x,y = -12, -5
r1, theta1 = sqrt(x^2 + y^2), atan(y,x)
numericq(theta1)
```
###### Question
Does $r(\theta) = a \sec(\theta - \gamma)$ describe a line when $a=3$ and $\gamma=\pi/4$?
```julia; hold=true; echo=false
yesnoq("yes")
```
If yes, what is the $y$ intercept?
```julia; hold=true; echo=false
r(theta) = 3 * sec(theta -pi/4)
val = r(pi/2)
numericq(val)
```
What is the slope of the line?
```julia; hold=true; echo=false
r(theta) = 3 * sec(theta -pi/4)
val = (r(pi/2)*sin(pi/2) - r(pi/4)*sin(pi/4)) / (r(pi/2)*cos(pi/2) - r(pi/4)*cos(pi/4))
numericq(val)
```
Does this seem likely: the slope is $-1/\tan(\gamma)$?
```julia; hold=true; echo=false
yesnoq("yes")
```
###### Question
The polar curve $r(\theta) = 2\cos(\theta)$ has tangent lines at most points. This differential representation of the chain rule
```math
\frac{dy}{dx} = \frac{dy}{d\theta} / \frac{dx}{d\theta},
```
allows the slope to be computed when $y$ and $x$ are the Cartesian
form of the polar curve. For this curve, we have
```math
\frac{dy}{d\theta} = \frac{d}{d\theta}(2\cos(\theta) \cdot \sin(\theta)),~ \text{ and }
\frac{dx}{d\theta} = \frac{d}{d\theta}(2\cos(\theta) \cdot \cos(\theta)).
```
Numerically, what is the slope of the tangent line when $\theta = \pi/4$?
```julia; hold=true; echo=false
r(theta) = 2cos(theta)
g(theta) = r(theta)*cos(theta)
f(theta) = r(theta)*sin(theta)
c = pi/4
val = D(f)(c) / D(g)(c)  # dy/dθ over dx/dθ
numericq(val)
```
###### Question
For different values $k > 0$ and $e > 0$ the polar equation
```math
r(\theta) = \frac{ke}{1 + e\cos(\theta)}
```
has a familiar form. The value of $k$ is just a scale factor, but different values of $e$ yield different shapes.
When $0 < e < 1$ what is the shape of the curve? (Answer by making a plot and guessing.)
```julia; hold=true; echo=false
choices = [
"an ellipse",
"a parabola",
"a hyperbola",
"a circle",
"a line"
]
ans = 1
radioq(choices, ans, keep_order=true)
```
When $e = 1$ what is the shape of the curve?
```julia; hold=true; echo=false
choices = [
"an ellipse",
"a parabola",
"a hyperbola",
"a circle",
"a line"
]
ans = 2
radioq(choices, ans, keep_order=true)
```
When $1 < e$ what is the shape of the curve?
```julia; hold=true; echo=false
choices = [
"an ellipse",
"a parabola",
"a hyperbola",
"a circle",
"a line"
]
ans = 3
radioq(choices, ans, keep_order=true)
```
###### Question
Find the area of a lobe of the
[lemniscate](http://www-history.mcs.st-and.ac.uk/Curves/Lemniscate.html)
curve traced out by $r(\theta) = \sqrt{\cos(2\theta)}$ between
$-\pi/4$ and $\pi/4$. What is the answer?
```julia; hold=true; echo=false
choices = [
"``1/2``",
"``\\pi/2``",
"``1``"
]
ans=1
radioq(choices, ans)
```
###### Question
Find the area of a lobe of the [eight](http://www-history.mcs.st-and.ac.uk/Curves/Eight.html) curve traced out by $r(\theta) = \sqrt{\cos(2\theta)\sec(\theta)^4}$ from $-\pi/4$ to $\pi/4$. Do this numerically.
```julia; hold=true; echo=false
r(theta) = sqrt(cos(2theta) * sec(theta)^4)
val, _ = quadgk(t -> r(t)^2/2, -pi/4, pi/4)
numericq(val)
```
###### Question
Find the arc length of a lobe of the
[lemniscate](http://www-history.mcs.st-and.ac.uk/Curves/Lemniscate.html)
curve traced out by $r(\theta) = \sqrt{\cos(2\theta)}$ between
$-\pi/4$ and $\pi/4$. What is the answer (numerically)?
```julia; hold=true; echo=false
r(theta) = sqrt(cos(2theta))
val, _ = quadgk(t -> sqrt(D(r)(t)^2 + r(t)^2), -pi/4, pi/4)
numericq(val)
```
###### Question
Find the arc length of a lobe of the [eight](http://www-history.mcs.st-and.ac.uk/Curves/Eight.html) curve traced out by $r(\theta) = \sqrt{\cos(2\theta)\sec(\theta)^4}$ from $-\pi/4$ to $\pi/4$. Do this numerically.
```julia; hold=true; echo=false
r(theta) = sqrt(cos(2theta) * sec(theta)^4)
val, _ = quadgk(t -> sqrt(D(r)(t)^2 + r(t)^2), -pi/4, pi/4)
numericq(val)
```
using WeavePynb
using Mustache
mmd(fname) = mmd_to_html(fname, BRAND_HREF="../toc.html", BRAND_NAME="Calculus with Julia")
## uncomment to generate just .md files
#mmd(fname) = mmd_to_md(fname, BRAND_HREF="../toc.html", BRAND_NAME="Calculus with Julia")
fnames = ["polar_coordinates",
"vectors",
"vector_valued_functions",
"scalar_functions",
"scalar_functions_applications",
"vector_fields"
]
function process_file(nm, twice=false)
include("$nm.jl")
mmd_to_md("$nm.mmd")
markdownToHTML("$nm.md")
twice && markdownToHTML("$nm.md")
end
process_files(twice=false) = [process_file(nm, twice) for nm in fnames]
"""
## TODO differential_vector_calcululs
### Add questions for scalar_function_applications
* Newton's method??
* optimization. Find least squares for perpendicular distance using the same 3 points...??
"""
[deps]
ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
HCubature = "19dc6840-f33b-545b-b366-655c7e3ffd49"
LaTeXStrings = "b964fa9f-0449-5b57-a5c2-d3ea65f4040f"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
QuadGK = "1fd47b50-473d-5c70-9696-f719f8f3bcdc"
Roots = "f2b01f46-fcfa-551c-844a-d8ac1e96c665"
SymPy = "24249f21-da20-56a4-8eb1-6a02cf4ae2e6"