many edits

This commit is contained in:
jverzani 2024-04-26 18:26:12 -04:00
parent 6e807edb46
commit 4f924557ad
45 changed files with 326 additions and 296 deletions

3
.gitignore vendored

@ -5,3 +5,6 @@ docs/site
test/benchmarks.json
Manifest.toml
TODO.md
default.profraw
/quarto/default.profraw
/*/*/default.profraw

1
Project.toml Normal file

@ -0,0 +1 @@
[deps]


@ -16,7 +16,7 @@ using ModelingToolkit
---
The [`DifferentialEquations`](https://github.com/SciML/DifferentialEquations.jl) suite of packages contains solvers for a wide range of various differential equations. This section just briefly touches on ordinary differential equations (ODEs), and so relies only on `OrdinaryDiffEq` part of the suite. For more detail on this type and many others covered by the suite of packages, there are many other resources, including the [documentation](https://diffeq.sciml.ai/stable/) and accompanying [tutorials](https://github.com/SciML/SciMLTutorials.jl).
The [`DifferentialEquations`](https://github.com/SciML/DifferentialEquations.jl) suite of packages contains solvers for a wide range of various differential equations. This section just briefly touches on ordinary differential equations (ODEs), and so relies only on `OrdinaryDiffEq`, a small part of the suite. For more detail on this type and many others covered by the suite of packages, there are many other resources, including the [documentation](https://diffeq.sciml.ai/stable/) and accompanying [tutorials](https://github.com/SciML/SciMLTutorials.jl).
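As a quick orientation to that interface, a minimal sketch (the toy equation $u' = -2u$ and the `Tsit5` solver below are just illustrative choices) looks something like:
```{julia}
using OrdinaryDiffEq
f(u, p, t) = -2u                       # the toy ODE u' = -2u
prob = ODEProblem(f, 1.0, (0.0, 1.0))  # initial value u(0) = 1 over t in [0, 1]
sol = solve(prob, Tsit5())             # Tsit5 is a reasonable general-purpose choice
sol(1.0)                               # the solution object interpolates; compare with exp(-2)
```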
## SIR Model


@ -94,7 +94,7 @@ function make_euler_graph(n)
scatter!(p, xs, ys)
## add function
out = dsolve(u'(x) - F(u(x), x), u(x), ics=(u, x0, y0))
out = dsolve(D(u)(x) - F(u(x), x), u(x), ics=Dict(u(x0) => y0))
plot!(p, rhs(out), x0, xs[end], linewidth=5)
p


@ -567,7 +567,7 @@ We enter this into `Julia`:
```{julia}
@syms w::positive H::positive y()
eqnc = D2(y)(x) ~ (w/H) * sqrt(1 + y'(x)^2)
eqnc = D2(y)(x) ~ (w/H) * sqrt(1 + D(y(x))^2)
```
Unfortunately, `SymPy` needs a bit of help with this problem, so we break the problem into steps.


@ -227,7 +227,6 @@ Finally, the author of the post shows how the interface can compose with other p
```{julia}
earth₄ = Problem(y0 = 0.0 ± 0.0, v0 = 30.0 ± 1.0)
sol_euler₄ = solve(earth₄)
sol_sympl₄ = solve(earth₄, Symplectic2ndOrder(dt = 2.0))


@ -19,10 +19,12 @@ LaTeXStrings = "b964fa9f-0449-5b57-a5c2-d3ea65f4040f"
Measures = "442fdcdd-2543-5da2-b0f3-8c86c306513e"
Meshing = "e6723b4c-ebff-59f1-b4b7-d97aa5274f73"
ModelingToolkit = "961ee093-0014-501f-94e3-6117800e7a78"
MonteCarloMeasurements = "0987c9cc-fe09-11e8-30f0-b96dd679fdca"
Mustache = "ffc61752-8dc7-55ee-8c37-f3e9cdd09e70"
NonlinearSolve = "8913a72c-1f9b-4ce2-8d82-65094dcecaec"
Optimization = "7f7a1694-90dd-40f0-9382-eb1efda571ba"
OptimizationOptimJL = "36348300-93cb-4f02-beb5-3c3902f8871e"
OrdinaryDiffEq = "1dea7af3-3e70-54e6-95c3-0bf5283fa5ed"
PlotUtils = "995b91a9-d308-5afd-9ec6-746e21dbc043"
PlotlyLight = "ca7969ec-10b3-423e-8d99-40f33abb42bf"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
@ -31,6 +33,7 @@ Primes = "27ebfcd6-29c5-5fa9-bf4b-fb8fc14df3ae"
QuadGK = "1fd47b50-473d-5c70-9696-f719f8f3bcdc"
QuizQuestions = "612c44de-1021-4a21-84fb-7261cf5eb2d4"
RealPolynomialRoots = "87be438c-38ae-47c4-9398-763eabe5c3be"
RecipesBase = "3cdcf5f2-1ef4-517c-9805-6587b60abb01"
Revise = "295af30f-e4ad-537b-8983-00126c2a3abe"
Richardson = "708f8203-808e-40c0-ba2d-98a6953ed40d"
Roots = "f2b01f46-fcfa-551c-844a-d8ac1e96c665"


@ -1,22 +1,18 @@
# CalculusWithJulia via quarto
Shortcut. Run the first command until happy, then run the second to publish:
```
quarto render
julia adjust_plotly.jl; quarto publish gh-pages --no-render
#julia adjust_plotly.jl # <-- no longer needed
quarto publish gh-pages --no-render
```
To compile the pages through quarto
* author in `.jmd` files (to be run through pluto)
* run `julia make_qmd.jl` to convert to `.qmd` files
- The files in subdirectories are generated, they should not be edited
- The files in this main directory are quarto specific.
- `_book` and `_freeze` are conveniences
* author in `.qmd` files (to be run through pluto)
* run `quarto preview` to develop interactively (kinda slow!)
* run `quarto render` to render pages (not too bad)
@ -28,9 +24,7 @@ To compile the pages through quarto
* should also push project to github
* no need to push `_freeze` to the repo, as files are locally rendered for now.
* XXX to get `PlotlyLight` to work the plotly library needs loading **before** require.min.js. This is accomplished by **editing** the .html file and moving up this line:
<script src="https://cdn.plot.ly/plotly-2.11.0.min.js"></script>
* NO LONGER needed: to get `PlotlyLight` to work, the plotly library needs loading **before** require.min.js. This is accomplished by **editing** the .html file and moving up this line: <script src="https://cdn.plot.ly/plotly-2.11.0.min.js"></script>
This can be done with this commandline call: julia adjust_plotly.jl


@ -19,6 +19,11 @@ import Logging
Logging.disable_logging(Logging.Info) # or e.g. Logging.Info
Logging.disable_logging(Logging.Warn)
```
```{julia}
#| eval: false
#| echo: false
import SymPy
function Base.show(io::IO, ::MIME"text/html", x::T) where {T <: SymPy.SymbolicObject}
println(io, "<span class=\"math-left-align\" style=\"padding-left: 4px; width:0; float:left;\"> ")
@ -215,5 +220,4 @@ function Base.show(io::IO, m::MIME"text/plain", x::HTMLoutput)
println(io, caption)
return nothing
end
```


@ -1,5 +1,5 @@
version: "0.18"
jupyter: julia-1.9
engine: julia
project:
type: book
@ -153,5 +153,5 @@ format:
# colorlinks: true
execute:
error: true
error: false
freeze: auto


@ -147,8 +147,8 @@ Incorporating parameters is readily done. For example to solve $f(x) = \cos(x) -
```{julia}
f(x, p) = @. cos(x) - x/p
u0 = (0, pi/2)
p = 2
u0 = (0.0, pi/2)
p = 2.0
prob = IntervalNonlinearProblem(f, u0, p)
solve(prob, Bisection())
```
@ -410,6 +410,7 @@ could be similarly approached:
y = Area/x # from A = xy
P = 2x + 2y
@named sys = OptimizationSystem(P, [x], [Area]);
sys = structural_simplify(sys)
u0 = [x => 4.0]
p = [Area => 25.0]
@ -730,6 +731,6 @@ For a trivial example, we have:
```{julia}
f(x, p) = [x[1], x[2]^2]
prob = IntegralProblem(f, [0,0],[3,4], nout=2)
prob = IntegralProblem(f, [0,0],[3,4])
solve(prob, HCubatureJL())
```


@ -34,7 +34,6 @@ We begin by loading the main package and the `norm` function from the standard `
```{julia}
using GLMakie
import LinearAlgebra: norm
```
The `Makie` developers have workarounds for the delayed time to first plot, but without utilizing these the time to load the package is lengthy.
@ -317,13 +316,13 @@ xs = 1:5
pts = Point2.(xs, xs)
scatter(pts)
annotations!("Point " .* string.(xs), pts;
textsize = 50 .- 2*xs,
fontsize = 50 .- 2*xs,
rotation = 2pi ./ xs)
current_figure()
```
The graphic shows that `textsize` adjusts the displayed size and `rotation` adjusts the orientation. (The graphic also shows a need to manually override the limits of the `y` axis, as the `Point 5` is chopped off; the `ylims!` function to do so will be shown later.)
The graphic shows that `fontsize` adjusts the displayed size and `rotation` adjusts the orientation. (The graphic also shows a need to manually override the limits of the `y` axis, as the `Point 5` is chopped off; the `ylims!` function to do so will be shown later.)
Attributes for `text`, among many others, include:
@ -331,7 +330,7 @@ Attributes for `text`, among many others, include:
* `align` Specify the text alignment through `(:pos, :pos)`, where `:pos` can be `:left`, `:center`, or `:right`.
* `rotation` to indicate how the text is to be rotated
* `textsize` the font point size for the text
* `fontsize` the font point size for the text
* `font` to indicate the desired font
@ -1151,5 +1150,3 @@ current_figure()
```
The slider value is "lifted" by its `value` component, as shown. Otherwise, the above is fairly similar to just using an observable for `h`.


@ -1,21 +1,8 @@
---
format:
html:
header-includes: |
<script src="https://cdn.plot.ly/plotly-2.11.0.min.js"></script>
---
# JavaScript based plotting libraries
{{< include ../_common_code.qmd >}}
:::{.callout-note}
## Plotly and Quarto
There are some oddities working with `Plotly`, `Julia`, and Quarto that require some hand editing of an HTML file. If the images are not shown below, there was an oversight. A reported issue would be welcome.
:::
This section uses this add-on package:
@ -23,12 +10,6 @@ This section uses this add-on package:
using PlotlyLight
```
```{julia}
#| echo: false
PlotlyLight.src!(:none)
nothing
```
To avoid a dependence on the `CalculusWithJulia` package, we load two utility packages:


@ -501,10 +501,11 @@ d, r = polynomial_coeffs(ex, (x,))
r
```
To find the degree of a monomial expression, the `degree` function is available. Here it is applied to each monomial in `d`:
To find the degree of a monomial expression, the `degree` function is available, though not exported. Here it is applied to each monomial in `d`:
```{julia}
import Symbolics: degree
[degree(k) for (k,v) ∈ d]
```
@ -773,6 +774,11 @@ Symbolics.jacobian(eqs, [x,y])
### Integration
::: {.callout-note}
#### This area is very much WIP
The use of `SymbolicNumericIntegration` below is currently broken.
:::
The `SymbolicNumericIntegration` package provides a means to integrate *univariate* expressions through its `integrate` function.
@ -904,18 +910,27 @@ v = factor_rational(u)
As such, the integrals have numeric differences from their mathematical counterparts:
::: {.callout-note}
#### Errors ahead
These last commands are not being executed, as there are errors.
:::
```{julia}
a,b,c = integrate(u)
#| eval: false
a,b,c = integrate(u) # not
```
We can see a bit of how much they differ through the following, which needs a tolerance set to correctly identify the rational numbers of the mathematical factorization:
```{julia}
#| eval: false
cs = [first(arguments(term)) for term ∈ arguments(a)] # pick off coefficients
```
```{julia}
rationalize.(cs; tol=1e-8)
#| eval: false
rationalize.(cs[2:end]; tol=1e-8)
```


@ -179,8 +179,8 @@ Not much to do here if you are satisfied with a graph that only gives insight in
```{julia}
𝒇(x) = ( (x-1)*(x-3)^2 ) / (x * (x-2) )
plot(𝒇, -50, 50)
f(x) = ( (x-1)*(x-3)^2 ) / (x * (x-2) )
plot(f, -50, 50)
```
We can see the slant asymptote and hints of vertical asymptotes, but, we'd like to see more of the basic features of the graph.
@ -193,9 +193,9 @@ To identify how wide a viewing window should be, for the rational function the a
```{julia}
𝒇cps = find_zeros(𝒇', -10, 10)
poss_ips = find_zero(𝒇'', (-10, 10))
extrema(union(𝒇cps, poss_ips))
cps = find_zeros(f', -10, 10)
poss_ips = find_zero(f'', (-10, 10))
extrema(union(cps, poss_ips))
```
So a range over $[-5,5]$ should display the key features including the slant asymptote.
@ -205,7 +205,7 @@ Previously we used the `rangeclamp` function defined in `CalculusWithJulia` to a
```{julia}
plot(rangeclamp(𝒇), -5, 5)
plot(rangeclamp(f), -5, 5)
```
With this graphic, we can now clearly see in the graph the two zeros at $x=1$ and $x=3$, the vertical asymptotes at $x=0$ and $x=2$, and the slant asymptote.
@ -219,7 +219,7 @@ Again, this sort of analysis can be systematized. The rational function type in
```{julia}
xᵣ = variable(RationalFunction)
plot(𝒇(xᵣ)) # f(x) of RationalFunction type
plot(f(xᵣ)) # f(x) of RationalFunction type
```
##### Example


@ -454,7 +454,7 @@ The notation - $\big|$ - on the right-hand side separates the tasks of finding t
* Euler used `D` for the operator `D(f)`. This was initially used by [Argobast](http://jeff560.tripod.com/calculus.html). The notation `D(f)(c)` would be needed to evaluate the derivative at a point.
* Newton used a "dot" above the variable, $\dot{x}(t)$, which is still widely used in physics to indicate a derivative in time. This inidicates take the derivative and then plug in $t$.
* Newton used a "dot" above the variable, $\dot{x}(t)$, which is still widely used in physics to indicate a derivative in time. This indicates first taking the derivative and then plugging in $t$.
* The notation $[expr]'(c)$ or $[expr]'\big|_{x=c}$would similarly mean, take the derivative of the expression and **then** evaluate at $c$.
@ -493,8 +493,10 @@ We can rearrange $(k(x+h) - k(x))$ as follows:
$$
(a\cdot f(x+h) + b\cdot g(x+h)) - (a\cdot f(x) + b \cdot g(x)) =
a\cdot (f(x+h) - f(x)) + b \cdot (g(x+h) - g(x)).
\begin{align*}
(a\cdot f(x+h) + b\cdot g(x+h)) - (a\cdot f(x) + b \cdot g(x)) &=\\
\quad a\cdot (f(x+h) - f(x)) + b \cdot (g(x+h) - g(x)). &
\end{align*}
$$
Dividing by $h$, we see that this becomes
@ -645,8 +647,12 @@ Let's first treat the case of $3$ products:
$$
[u\cdot v\cdot w]' =[ u \cdot (vw)]' = u' (vw) + u [vw]' = u'(vw) + u[v' w + v w'] =
u' vw + u v' w + uvw'.
\begin{align*}
[u\cdot v\cdot w]' &=[ u \cdot (vw)]'\\
&= u' (vw) + u [vw]'\\
&= u'(vw) + u[v' w + v w'] \\
&=u' vw + u v' w + uvw'.
\end{align*}
$$
This pattern generalizes, clearly, to:


@ -180,8 +180,8 @@ Consider the function $f(x) = e^{-\lvert x\rvert} \cos(\pi x)$ over $[-3,3]$:
```{julia}
𝐟(x) = exp(-abs(x)) * cos(pi * x)
plotif(𝐟, 𝐟', -3, 3)
f(x) = exp(-abs(x)) * cos(pi * x)
plotif(f, f', -3, 3)
```
We can see the first derivative test in action: at the peaks and valleys the relative extrema the color changes. This is because $f'$ is changing sign as the function changes from increasing to decreasing or vice versa.
@ -191,7 +191,7 @@ This function has a critical point at $0$, as can be seen. It corresponds to a p
```{julia}
find_zeros(𝐟', -3, 3)
find_zeros(f', -3, 3)
```
##### Example
@ -204,17 +204,17 @@ We will do so numerically. For this task we first need to gather the critical po
```{julia}
𝒇(x) = sin(pi*x) * (x^3 - 4x^2 + 2)
𝒇cps = find_zeros(𝒇', -2, 2)
f(x) = sin(pi*x) * (x^3 - 4x^2 + 2)
cps = find_zeros(f', -2, 2)
```
We should be careful though, as `find_zeros` may miss zeros that are not simple or too close together. A critical point will correspond to a relative maximum if the function crosses the axis, so these can not be "pauses." As this is exactly the case we are screening for, we double check that all the critical points are accounted for by graphing the derivative:
```{julia}
plot(𝒇', -2, 2, legend=false)
plot(f', -2, 2, legend=false)
plot!(zero)
scatter!(𝒇cps, 0*𝒇cps)
scatter!(cps, 0*cps)
```
We see the six zeros as stored in `cps` and note that at each the function clearly crosses the $x$ axis.
@ -234,7 +234,7 @@ We will apply the same approach, but need to get a handle on how large the value
```{julia}
g(x) = sqrt(abs(x^2 - 1))
gcps = find_zeros(g', -2, 2)
cps = find_zeros(g', -2, 2)
```
We see the three values $-1$, $0$, $1$ that correspond to the two zeros and the relative minimum of $x^2 - 1$. We could graph things, but instead we characterize these values using a sign chart. A piecewise continuous function can only change sign when it crosses $0$ or jumps over $0$. The derivative will be continuous, except possibly at the three values above, so is piecewise continuous.
@ -244,7 +244,7 @@ A sign chart picks convenient values between crossing points to test if the func
```{julia}
pts = sort(union(-2, gcps, 2)) # this includes the endpoints (a, b) and the critical points
pts = sort(union(-2, cps, 2)) # this includes the endpoints (a, b) and the critical points
test_pts = pts[1:end-1] + diff(pts)/2 # midpoints of intervals between pts
[test_pts sign.(g'.(test_pts))]
```
@ -512,14 +512,14 @@ Use the second derivative test to characterize the critical points of $j(x) = x^
```{julia}
j(x) = x^5 - 2x^4 + x^3
jcps = find_zeros(j', -3, 3)
cps = find_zeros(j', -3, 3)
```
We can check the sign of the second derivative for each critical point:
```{julia}
[jcps j''.(jcps)]
[cps j''.(cps)]
```
That $j''(0.6) < 0$ implies that at $0.6$, $j(x)$ will have a relative maximum. As $j''(1) > 0$, the second derivative test says at $x=1$ there will be a relative minimum. That $j''(0) = 0$ says that only that there **may** be a relative maximum or minimum at $x=0$, as the second derivative test does not speak to this situation. (This last check, requiring a function evaluation to be `0`, is susceptible to floating point errors, so isn't very robust as a general tool.)
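For the undecided case at $x=0$, a quick first derivative test sketch (reusing `j` and the `'` notation from above) shows that `j'` does not change sign there, so there is no relative extremum at $0$:
```{julia}
j(x) = x^5 - 2x^4 + x^3
sign(j'(-0.1)), sign(j'(0.1))   # both positive: j is increasing on either side of 0
```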


@ -268,8 +268,8 @@ Setting $a=1$ we have the graph:
```{julia}
𝒂 = 1
G(x,y) = x^3 + y^3 - 3*𝒂*x*y
a = 1
G(x,y) = x^3 + y^3 - 3*a*x*y
implicit_plot(G)
```
@ -573,15 +573,15 @@ To illustrate, we need specific values of $a$, $b$, and $L$:
```{julia}
𝐚, 𝐛, 𝐋 = 3, 3, 10 # L > sqrt{a^2 + b^2}
F₀(x, y) = F₀(x, y, 𝐚, 𝐛)
a, b, L = 3, 3, 10 # L > sqrt{a^2 + b^2}
F₀(x, y) = F₀(x, y, a, b)
```
Our values $(x,y)$ must satisfy $f(x,y) = L$. Let's graph:
```{julia}
implicit_plot((x,y) -> F₀(x,y) - 𝐋, xlims=(-5, 7), ylims=(-5, 7))
implicit_plot((x,y) -> F₀(x,y) - L, xlims=(-5, 7), ylims=(-5, 7))
```
The graph is an ellipse, though slightly tilted.


@ -229,7 +229,7 @@ Of course, we could have just relied on `limit`, which knows about L'Hospital's
```{julia}
limit(f(x)/g(x), x, a)
limit(f(x)/g(x), x => a)
```
## Idea behind L'Hospital's rule
@ -272,13 +272,13 @@ function lhopitals_picture_graph(n)
plt
end
caption = L"""
Geometric interpretation of ``L=\lim_{x \rightarrow 0} x^2 / (\sqrt{1 +
x} - 1 - x^2)``. At ``0`` this limit is indeterminate of the form
caption = raw"""
Geometric interpretation of
``L=\lim_{x \rightarrow 0} x^2 / (\sqrt{1 + x} - 1 - x^2)``.
At ``0`` this limit is indeterminate of the form
``0/0``. The value for a fixed ``x`` can be seen as the slope of a secant
line of a parametric plot of the two functions, plotted as ``(g,
f)``. In this figure, the limiting "tangent" line has ``0`` slope,
line of a parametric plot of the two functions, plotted as
``(g, f)``. In this figure, the limiting "tangent" line has ``0`` slope,
corresponding to the limit ``L``. In general, L'Hospital's rule is
nothing more than a statement about slopes of tangent lines.
@ -437,38 +437,38 @@ $$
\lim_{x \rightarrow 1} \frac{x\log(x)-(x-1)}{(x-1)\log(x)}
$$
In `SymPy` we have:
In `SymPy` we have (using italic variable names to avoid a problem with the earlier use of `f` as a function):
```{julia}
𝒇 = x*log(x) - (x-1)
𝒈 = (x-1)*log(x)
𝒇(1), 𝒈(1)
𝑓 = x * log(x) - (x-1)
𝑔 = (x-1) * log(x)
𝑓(1), 𝑔(1)
```
L'Hospital's rule applies to the form $0/0$, so we try:
```{julia}
𝒇 = diff(𝒇, x)
𝒈 = diff(𝒈, x)
𝒇(1), 𝒈(1)
𝑓 = diff(𝑓, x)
𝑔 = diff(𝑔, x)
𝑓(1), 𝑔(1)
```
Again, we get the indeterminate form $0/0$, so we try again with second derivatives:
```{julia}
𝒇 = diff(𝒇, x, x)
𝒈 = diff(𝒈, x, x)
𝒇(1), 𝒈(1)
𝑓 = diff(𝑓, x, x)
𝑔 = diff(𝑔, x, x)
𝑓(1), 𝑔(1)
```
From this we see the limit is $1/2$, as could have been done directly:
```{julia}
limit(𝒇/𝒈, x=>1)
limit(𝑓/𝑔, x=>1)
```
## The assumptions are necessary
@ -524,7 +524,7 @@ This ratio has no limit, as it oscillates, as confirmed by `SymPy`:
```{julia}
limit(u(x)/v(x), x=> oo)
limit(u(x)/v(x), x => oo)
```
## Questions
@ -645,8 +645,8 @@ What is $L$?
#| hold: true
#| echo: false
f(x) = (4x - sin(x))/x
L = float(N(limit(f, 0)))
numericq(L)
L = limit(f(x), x=>0)
numericq(float(L))
```
###### Question
@ -666,8 +666,8 @@ What is $L$?
#| hold: true
#| echo: false
f(x) = (sqrt(1+x) - 1)/x
L = float(N(limit(f, 0)))
numericq(L)
L = limit(f(x), x => 0)
numericq(float(L))
```
###### Question
@ -687,8 +687,8 @@ What is $L$?
#| hold: true
#| echo: false
f(x) = (x - sin(x))/x^3
L = float(N(limit(f, 0)))
numericq(L)
L = limit(f(x), x=>0)
numericq(float(L))
```
###### Question
@ -708,8 +708,8 @@ What is $L$?
#| hold: true
#| echo: false
f(x) = (1 - x^2/2 - cos(x))/x^3
L = float(N(limit(f, 0)))
numericq(L)
L = limit(f(x), x=>0)
numericq(float(L))
```
###### Question
@ -729,8 +729,8 @@ What is $L$?
#| hold: true
#| echo: false
f(x) = log(log(x))/log(x)
L = N(limit(f(x), x=> oo))
numericq(L)
L = limit(f(x), x=> oo)
numericq(float(L))
```
###### Question
@ -750,8 +750,8 @@ What is $L$?
#| hold: true
#| echo: false
f(x) = 1/x - 1/sin(x)
L = float(N(limit(f, 0)))
numericq(L)
L = limit(f(x), x => 0)
numericq(float(L))
```
###### Question
@ -770,8 +770,8 @@ What is $L$?
```{julia}
#| hold: true
#| echo: false
L = float(N(limit(log(x)/x, x=>oo)))
numericq(L)
L = limit(log(x)/x, x=>oo)
numericq(float(L))
```
###### Question


@ -355,6 +355,7 @@ There can be problems when the stopping criteria is `abs(b-a) <= 2eps(m))` and t
```{julia}
#| hold: true
#| error: true
fu(x) = -40*x*exp(-x)
chandrapatla(fu, -9, 1, λ3)
```


@ -403,8 +403,8 @@ A plot shows us roughly where the value lies:
#| hold: true
f(x) = exp(x)
g(x) = x^6
plot(f, 0, 25, label="f")
plot!(g, label="g")
plot(f, 0, 25; label="f")
plot!(g; label="g")
```
Clearly by $20$ the two paths diverge. We know exponentials eventually grow faster than powers, and this is seen in the graph.
@ -858,8 +858,10 @@ The calculation that produces the quadratic convergence now becomes:
$$
x_{i+1} - \alpha = (x_i - \alpha) - \frac{1}{k}(x_i-\alpha - \frac{f''(\xi)}{2f'(x_i)}(x_i-\alpha)^2) =
\frac{k-1}{k} (x_i-\alpha) + \frac{f''(\xi)}{2kf'(x_i)}(x_i-\alpha)^2.
\begin{align*}
x_{i+1} - \alpha &= (x_i - \alpha) - \frac{1}{k}(x_i-\alpha - \frac{f''(\xi)}{2f'(x_i)}(x_i-\alpha)^2) \\
&= \frac{k-1}{k} (x_i-\alpha) + \frac{f''(\xi)}{2kf'(x_i)}(x_i-\alpha)^2.
\end{align*}
$$
As $k > 1$, the $(x_i - \alpha)$ term dominates, and we see the convergence is linear with $\lvert e_{i+1}\rvert \approx (k-1)/k \lvert e_i\rvert$.


@ -1475,8 +1475,10 @@ Numerically find the value of $a$ that minimizes the length of the line seqment
```{julia}
#| hold: true
#| echo: false
x(a) = -a - 1/(2a)
d(a) = (a-x(a))^2 + (a^2 - x(a)^2)^2
a = find_zero(d', 1)
numericq(a)
let
x(a) = -a - 1/(2a)
d(a) = (a-x(a))^2 + (a^2 - x(a)^2)^2
a = find_zero(d', 1)
numericq(a)
end
```


@ -34,7 +34,7 @@ The `Symbolics` package provides native symbolic manipulation abilities for `Jul
An expression is an unevaluated portion of code that, for our purposes below, contains other expressions, symbols, and numeric literals. They are held in the `Expr` type. A symbol, such as `:x`, is distinct from a string (e.g. `"x"`) and is useful to the programmer to distinguish the contents a variable points to from the name of the variable. Symbols are fundamental to metaprogramming in `Julia`. An expression is a specification of some set of statements to execute. A numeric literal is just a number.
The three main functions from `TermInterface` we leverage are `istree`, `operation`, and `arguments`. The `operation` function returns the "outside" function of an expression. For example:
The three main functions from `TermInterface` we leverage are `isexpr`, `operation`, and `arguments`. The `operation` function returns the "outside" function of an expression. For example:
```{julia}
@ -65,7 +65,7 @@ function arity(ex)
end
```
Differentiation must distinguish between expressions, variables, and numbers. Mathematically expressions have an "outer" function, whereas variables and numbers can be directly differentiated. The `istree` function in `TermInterface` returns `true` when passed an expression, and `false` when passed a symbol or numeric literal. The latter two may be distinguished by `isa(..., Symbol)`.
Differentiation must distinguish between expressions, variables, and numbers. Mathematically expressions have an "outer" function, whereas variables and numbers can be directly differentiated. The `isexpr` function in `TermInterface` returns `true` when passed an expression, and `false` when passed a symbol or numeric literal. The latter two may be distinguished by `isa(..., Symbol)`.
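A small sketch of these distinctions, following the calls used below (return values may print differently across `TermInterface` versions):
```{julia}
ex = :(sin(x) + x^2)                 # an expression; :x is a symbol, 2 is a numeric literal
isexpr(ex), isexpr(:x), isexpr(2)    # true for the expression, false for the other two
operation(ex), arguments(ex)         # the "outside" operation and its arguments
```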
Here we create a function, `D`, that when it encounters an expression it *dispatches* to a specific method of `D` based on the outer operation and arity, otherwise if it encounters a symbol or a numeric literal it does the differentiation:
@ -73,7 +73,7 @@ Here we create a function, `D`, that when it encounters an expression it *dispat
```{julia}
function D(ex, var=:x)
if istree(ex)
if isexpr(ex)
op, args = operation(ex), arguments(ex)
D(Val(op), arity(ex), args, var)
elseif isa(ex, Symbol) && ex == :x


@ -592,12 +592,12 @@ To see a plot, we have
```{julia}
𝒇(x) = sin(x)
𝒄, 𝒉, 𝒏 = 0, 1/4, 4
int_poly = newton_form(𝒇, [𝒄 + i*𝒉 for i in 0:𝒏])
tp = taylor_poly(𝒇, 𝒄, 𝒏)
𝒂, 𝒃 = -pi, pi
plot(𝒇, 𝒂, 𝒃; linewidth=5, label="f")
f(x) = sin(x)
c, h, n = 0, 1/4, 4
int_poly = newton_form(f, [c + i*h for i in 0:n])
tp = taylor_poly(f, c, n)
a, b = -pi, pi
plot(f, a, b; linewidth=5, label="f")
plot!(int_poly; color=:green, label="interpolating")
plot!(tp; color=:red, label="Taylor")
```
@ -606,10 +606,10 @@ To get a better sense, we plot the residual differences here:
```{julia}
d1(x) = 𝒇(x) - int_poly(x)
d2(x) = 𝒇(x) - tp(x)
plot(d1, 𝒂, 𝒃; color=:blue, label="interpolating")
plot!(d2; color=:green, label="Taylor")
d1(x) = f(x) - int_poly(x)
d2(x) = f(x) - tp(x)
plot(d1, a, b; linecolor=:blue, label="interpolating")
plot!(d2; linecolor=:green, label="Taylor")
```
The graph should be $0$ at each of the points in `xs`, which we can verify in the graph above. Plotting over a wider region shows a common phenomenon that these polynomials approximate the function near the values, but quickly deviate away:
@ -825,10 +825,10 @@ To try this out to compute $\log(5)$. We have $5 = 2^2(1+0.25)$, so $k=2$ and $m
```{julia}
k, m = 2, 0.25
𝒔 = m / (2+m)
pₗ = 2 * sum(𝒔^(2i)/(2i+1) for i in 1:8) # where the polynomial approximates the logarithm...
s = m / (2+m)
pₗ = 2 * sum(s^(2i)/(2i+1) for i in 1:8) # where the polynomial approximates the logarithm...
log(1 + m), m - 𝒔*(m-pₗ), log(1 + m) - ( m - 𝒔*(m-pₗ))
log(1 + m), m - s*(m-pₗ), log(1 + m) - ( m - s*(m-pₗ))
```
@ -836,7 +836,7 @@ The two values differ by less than $10^{-16}$, as advertised. Re-assembling then
```{julia}
Δ = k * log(2) + (m - 𝒔*(m-pₗ)) - log(5)
Δ = k * log(2) + (m - s*(m-pₗ)) - log(5)
```
The actual code is different, as the Taylor polynomial isn't used. The Taylor polynomial is a great approximation near a point, but there might be better polynomial approximations for all values in an interval. In this case there is, and that polynomial is used in the production setting. This makes things a bit more efficient, but the basic idea remains - for a prescribed accuracy, a polynomial approximation can be found over a given interval, which can be cleverly utilized to solve for all applicable values.
@ -1166,7 +1166,7 @@ Here is one way to get all the values bigger than 1:
ex = (1 - x + x^2)*exp(x)
Tn = series(ex, x, 0, 100).removeO()
ps = sympy.Poly(Tn, x).coeffs()
qs = numer.(ps)
qs = numerator.(ps)
qs[qs .> 1] |> Tuple # format better for output
```


@ -1653,7 +1653,7 @@ Here is a snippet of `SymPy` code to verify the above:
```{julia}
#| hold: true
@vars y y λ C
@syms y y λ C
ex = Eq(-λ*y^2/sqrt(1 + y^2) + λ*sqrt(1 + y^2), y - C)
Δ = sqrt(1 + y^2) / (y - C)
ex1 = Eq(simplify(ex.lhs()*Δ), simplify(ex.rhs() * Δ))


@ -1345,6 +1345,7 @@ e₃(t) = sol(t)[7:9]
We fix a small time range and show the trace of the spine and the frame at a single point in time:
```{julia}
a_0, b_0 = 50, 60
ts_0 = range(a_0, b_0, length=251)
@ -1995,10 +1996,10 @@ t0, t1, a = 2PI, PI, PI
rp = diff.(r(t), t)
speed = 2sin(t/2)
ex = r(t) - rp/speed * integrate(speed, a, t)
ex = r(t) - rp/speed * integrate(speed, t)
plot_parametric(0..4pi, r, legend=false)
plot_parametric!(0..4pi, u -> SymPy.N.(subs.(ex, t .=> u)))
plot_parametric!(0..4pi, u -> float.(subs.(ex, t .=> u)))
```
The expression `ex` is secretly `[t + sin(t), 3 + cos(t)]`, another cycloid.


@ -201,13 +201,13 @@ For example, if `xs` is a vector and `ys` a scalar, then the value in `ys` is re
```{julia}
𝐟(x,y) = x + y
f(x,y) = x + y
```
```{julia}
#| hold: true
xs = ys = [0, 1]
𝐟.(xs, ys)
f.(xs, ys)
```
This matches `xs` and `ys` to pass `(0,0)` and then `(1,1)` to `f`, returning `0` and `2`. Now consider
@ -216,7 +216,7 @@ This matches `xs` and `ys` to pass `(0,0)` and then `(1,1)` to `f`, returning `0
```{julia}
#| hold: true
xs = [0, 1]; ys = [0 1] # xs is a column vector, ys a row vector
𝐟.(xs, ys)
f.(xs, ys)
```
The two dimensions are different so for each value of `xs` the vector of `ys` is broadcast. This returns a matrix now. This will be important for some plotting usages where a grid (matrix) of values is needed.
@ -255,9 +255,9 @@ The dot product is computed in `Julia` by the `dot` function, which is in the `L
```{julia}
𝒖 = [1, 2]
𝒗 = [2, 1]
dot(𝒖, 𝒗)
u = [1, 2]
v = [2, 1]
dot(u, v)
```
:::{.callout-note}
@ -267,15 +267,15 @@ In `Julia`, the unicode operator entered by `\cdot[tab]` can also be used to mir
:::
```{julia}
𝒖 ⋅ 𝒗 # u \cdot[tab] v
u ⋅ v # u \cdot[tab] v
```
Continuing, to find the angle between $\vec{u}$ and $\vec{v}$, we might do this:
```{julia}
𝒄theta = dot(𝒖/norm(𝒖), 𝒗/norm(𝒗))
acos(𝒄theta)
cos_theta = dot(u/norm(u), v/norm(v))
acos(cos_theta)
```
The cosine of $\pi/2$ is $0$, so two vectors which are at right angles to each other will have a dot product of $0$:
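For instance, a quick sketch with two perpendicular vectors:
```{julia}
dot([1, 0], [0, 1])   # right angles, so the dot product is 0
```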
@ -350,16 +350,16 @@ A [force diagram](https://en.wikipedia.org/wiki/Free_body_diagram) is a useful v
```{julia}
𝗍heta = pi/12
𝗆ass, 𝗀ravity = 1/9.8, 9.8
theta = pi/12
mass, gravity = 1/9.8, 9.8
𝗅 = [-sin(𝗍heta), cos(𝗍heta)]
𝗉 = -𝗅
𝖥g = [0, -𝗆ass * 𝗀ravity]
R = [-sin(theta), cos(theta)]
p = -R
Fg = [0, -mass * gravity]
plot(legend=false)
arrow!(𝗉, 𝗅)
arrow!(𝗉, 𝖥g)
scatter!(𝗉[1:1], 𝗉[2:2], markersize=5)
arrow!(p, R)
arrow!(p, Fg)
scatter!(p[1:1], p[2:2], markersize=5)
```
The magnitude of the tension force is exactly that of the force of gravity projected onto $\vec{l}$, as the bob is not accelerating in that direction. The component of the gravity force in the perpendicular direction is the part of the gravitational force that causes acceleration in the pendulum. Here we find the projection onto $\vec{l}$ and visualize the two components of the gravitational force.
@ -367,15 +367,15 @@ The magnitude of the tension force is exactly that of the force of gravity proje
```{julia}
plot(legend=false, aspect_ratio=:equal)
arrow!(𝗉, 𝗅)
arrow!(𝗉, 𝖥g)
scatter!(𝗉[1:1], 𝗉[2:2], markersize=5)
arrow!(p, R)
arrow!(p, Fg)
scatter!(p[1:1], p[2:2], markersize=5)
𝗉roj = (𝖥g ⋅ 𝗅) / (𝗅 ⋅ 𝗅) * 𝗅 # force of gravity in direction of tension
𝗉orth = 𝖥g - 𝗉roj # force of gravity perpendicular to tension
proj = (Fg ⋅ R) / (R ⋅ R) * R # force of gravity in direction of tension
porth = Fg - proj # force of gravity perpendicular to tension
arrow!(𝗉, 𝗉roj)
arrow!(𝗉, 𝗉orth, linewidth=3)
arrow!(p, proj)
arrow!(p, porth, linewidth=3)
```
##### Example
@ -551,7 +551,7 @@ As mentioned previously, a matrix in `Julia` is defined component by component w
```{julia}
ℳ = [3 4 -5; 5 -5 7; -3 6 9]
M = [3 4 -5; 5 -5 7; -3 6 9]
```
Space is the separator, which means computing a component during definition (i.e., writing `2 + 3` in place of `5`) can be problematic, as no space can be used in the computation, lest it be parsed as a separator.
@ -568,88 +568,87 @@ In `Julia`, entries in a matrix (or a vector) are stored in a container with a t
```{julia}
@syms x1 x2 x3
𝓍 = [x1, x2, x3]
@syms xs[1:3]
```
Matrices may also be defined from blocks. This example shows how to make two column vectors into a matrix:
```{julia}
𝓊 = [10, 11, 12]
𝓋 = [13, 14, 15]
[𝓊 𝓋] # horizontally combine
u = [10, 11, 12]
v = [13, 14, 15]
[u v] # horizontally combine
```
Vertically combining the two will stack them:
```{julia}
[𝓊; 𝓋]
[u; v]
```
Scalar multiplication will just work as expected:
```{julia}
2 * ℳ
2 * M
```
Matrix addition is also straightforward:
```{julia}
ℳ + ℳ
M + M
```
Matrix addition expects matrices of the same size. An error will otherwise be thrown. However, if addition is *broadcasted* then the sizes need only be commensurate. For example, this will add `1` to each entry of `M`:
```{julia}
ℳ .+ 1
M .+ 1
```
Matrix multiplication is defined by `*`:
```{julia}
ℳ * ℳ
M * M
```
We can then see how the system of equations is represented with matrices:
```{julia}
ℳ * 𝓍 - 𝒷
M * xs - 𝒷
```
Here we use `SymPy` to verify the above:
```{julia}
𝒜 = [symbols("A$i$j", real=true) for i in 1:3, j in 1:2]
ℬ = [symbols("B$i$j", real=true) for i in 1:2, j in 1:2]
A = [symbols("A$i$j", real=true) for i in 1:3, j in 1:2]
B = [symbols("B$i$j", real=true) for i in 1:2, j in 1:2]
```
The matrix product has the expected size: the number of rows of `A` ($3$) by the number of columns of `B` ($2$):
```{julia}
𝒜 * ℬ
A * B
```
This confirms how each entry (`(A*B)[i,j]`) is from a dot product (`A[i,:] ⋅ B[:,j]`):
```{julia}
[ (𝒜 * ℬ)[i,j] == 𝒜[i,:] ⋅ ℬ[:,j] for i in 1:3, j in 1:2]
[ (A * B)[i,j] == A[i,:] ⋅ B[:,j] for i in 1:3, j in 1:2]
```
When the multiplication is broadcasted though, with `.*`, the operation will be component wise:
```{julia}
ℳ .* ℳ # component wise (Hadamard product)
M .* M # component wise (Hadamard product)
```
---
@ -659,7 +658,7 @@ The determinant is found by `det` provided by the `LinearAlgebra` package:
```{julia}
det(ℳ)
det(M)
```
---
@ -669,14 +668,14 @@ The transpose of a matrix is found through `transpose` which doesn't create a ne
```{julia}
transpose(ℳ)
transpose(M)
```
For matrices with *real* numbers, the transpose can be performed with the postfix operation `'`:
```{julia}
ℳ'
M'
```
(However, this is not true for matrices with complex numbers as `'` is the "adjoint," that is, the transpose of the matrix *after* taking complex conjugates.)
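A small sketch with a made-up complex matrix shows the difference:
```{julia}
A = [1+2im 3; 4 5im]
transpose(A), A'      # `'` (the adjoint) also conjugates each entry; `transpose` does not
```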
@ -686,14 +685,14 @@ With `u` and `v`, vectors from above, we have:
```{julia}
[𝓊' 𝓋'] # [𝓊 𝓋] was a 3 × 2 matrix, above
[u' v'] # [u v] was a 3 × 2 matrix, above
```
and
```{julia}
[𝓊'; 𝓋']
[u'; v']
```
:::{.callout-note}
## Note
@ -782,29 +781,30 @@ In `Julia`, the `cross` function from the `LinearAlgebra` package implements the
```{julia}
𝓪 = [1, 2, 3]
𝓫 = [4, 2, 1]
cross(𝓪, 𝓫)
a = [1, 2, 3]
b = [4, 2, 1]
cross(a, b)
```
There is also the *infix* unicode operator `\times[tab]` that can be used for similarity to traditional mathematical syntax.
```{julia}
𝓪 × 𝓫
a × b
```
We can see the cross product is anti-commutative by comparing the last answer with:
```{julia}
𝓫 × 𝓪
b × a
```
Using vectors of size different than $n=3$ produces a dimension mismatch error:
```{julia}
#| error: true
[1, 2] × [3, 4]
```
@ -816,15 +816,15 @@ Let's see that the matrix definition will be identical (after identifications) t
```{julia}
@syms î ĵ k̂
𝓜 = [î ĵ k̂; 3 4 5; 3 6 7]
det(𝓜) |> simplify
M = [î ĵ k̂; 3 4 5; 3 6 7]
det(M) |> simplify
```
Compare with
```{julia}
𝓜[2,:] × 𝓜[3,:]
M[2,:] × M[3,:]
```
---


@ -69,7 +69,7 @@ $$
It has the interpretation of pointing out the direction of greatest ascent for the surface $z=f(x,y)$.
We move now to two other operations, the divergence and the curl, which combine to give a language to describe vector fields in $R^3$.
We move now to two other operations, the *divergence* and the *curl*, which combine to give a language to describe vector fields in $R^3$.
## The divergence
@ -680,7 +680,7 @@ V(v) = V(v...)
p = plot([NaN],[NaN],[NaN], legend=false)
ys = xs = range(-2,2, length=10 )
zs = range(0, 4, length=3)
CalculusWithJulia.vectorfieldplot3d!(p, V, xs, ys, zs, nz=3)
vectorfieldplot3d!(p, V, xs, ys, zs, nz=3)
plot!(p, [0,0], [0,0],[-1,5], linewidth=3)
p
```


@ -210,8 +210,8 @@ Identifying a formula for this is a bit tricky. Here we use a brute force approa
```{julia}
𝒅(x, y) = sqrt(x^2 + y^2)
function 𝒍(x, y, a)
d(x, y) = sqrt(x^2 + y^2)
function l(x, y, a)
theta = atan(y,x)
atheta = abs(theta)
if (pi/4 <= atheta < 3pi/4) # this is the y=a or y=-a case
@ -226,10 +226,10 @@ And then
```{julia}
𝒇(x,y,a,h) = h * (𝒍(x,y,a) - 𝒅(x,y))/𝒍(x,y,a)
f(x,y,a,h) = h * (l(x,y,a) - d(x,y))/l(x,y,a)
𝒂, 𝒉 = 2, 3
𝒇(x,y) = 𝒇(x, y, 𝒂, 𝒉) # fix a and h
𝒇(v) = 𝒇(v...)
f(x,y) = f(x, y, 𝒂, 𝒉) # fix a and h
f(v) = f(v...)
```
We can visualize the volume to be computed, as follows:
@ -238,14 +238,14 @@ We can visualize the volume to be computed, as follows:
```{julia}
#| hold: true
xs = ys = range(-1, 1, length=20)
surface(xs, ys, 𝒇)
surface(xs, ys, f)
```
Trying this, we have:
```{julia}
hcubature(𝒇, (-𝒂/2, -𝒂/2), (𝒂/2, 𝒂/2))
hcubature(f, (-𝒂/2, -𝒂/2), (𝒂/2, 𝒂/2))
```
The answer agrees with that known from the formula, $4 = (1/3)a^2 h$, but the answer takes a long time to produce. The `hcubature` function is slow with functions defined in terms of conditions. For this problem, volumes by [slicing](../integrals/volumes_slice.html) are more direct. But symmetry can also be used, were we able to compute the volume above the triangular region formed by the $x$-axis, the line $x=a/2$, and the line $y=x$, which would be $1/8$th the total volume. (As then $l(x,y,a) = (a/2)/\sin(\tan^{-1}(y,x))$.)
@ -691,8 +691,7 @@ The computationally efficient way to perform multiple integrals numerically woul
However, for simple problems, where ease of expressing a region is preferred to computational efficiency, something can be implemented using repeated uses of `quadgk`. Again, this isn't recommended, save for its relationship to how iteration is approached algebraically.
In the `CalculusWithJulia` package, the `fubini` function is provided. For these notes, we define three operations using Unicode operators entered with `\int[tab]`, `\iint[tab]`, `\iiint[tab]`. (Using this, better shows the mechanics involved.)
For these notes, we define three operations using Unicode operators entered with `\int[tab]`, `\iint[tab]`, `\iiint[tab]`.
```{julia}
# adjust endpoints when expressed as a functions of outer variables
@ -1404,11 +1403,11 @@ partition(u,v) = [ u^2-v^2, u*v ]
push!(ps, showG(partition))
xlabel!(ps[end], "partition")
l = @layout [a b c;
lyt = @layout [a b c;
d e f;
g h i]
plot(ps..., layout=l)
plot(ps..., layout=lyt)
```
### Examples


@ -906,8 +906,8 @@ Let $S$ be the closed surface bounded by the cylinder $x^2 + y^2 = 1$, the plane
```{julia}
𝐅(x,y,z) = [1, y, z]
𝐅(v) = 𝐅(v...)
F(x,y,z) = [1, y, z]
F(v) = F(v...)
```
The surface has three faces, with different outward pointing normals for each. Let $S_1$ be the unit disk in the $x-y$ plane with normal $-\hat{k}$; $S_2$ be the top part, with normal $\langle-1, 0, 1\rangle$ (as the plane is $-1x + 0y + 1z = 1$); and $S_3$ be the cylindrical part with outward pointing normal $\vec{r}$.
@ -917,32 +917,32 @@ Integrating over $S_1$, we have the parameterization $\Phi(r,\theta) = \langle r
```{julia}
@syms 𝐑::positive 𝐭heta::positive
𝐏hi₁(r,theta) = [r*cos(theta), r*sin(theta), 0]
𝐉ac₁ = 𝐏hi₁(𝐑, 𝐭heta).jacobian([𝐑, 𝐭heta])
𝐯₁, 𝐰₁ = 𝐉ac₁[:,1], 𝐉ac₁[:,2]
𝐍ormal₁ = 𝐯₁ × 𝐰₁ .|> simplify
@syms R::positive theta::positive
Phi₁(r,theta) = [r*cos(theta), r*sin(theta), 0]
Jac₁ = Phi₁(R, theta).jacobian([R, theta])
v₁, w₁ = Jac₁[:,1], Jac₁[:,2]
𝐍ormal₁ = v₁ × w₁ .|> simplify
```
```{julia}
A₁ = integrate(𝐅(𝐏hi₁(𝐑, 𝐭heta)) ⋅ (-𝐍ormal₁), (𝐭heta, 0, 2PI), (𝐑, 0, 1)) # use -Normal for outward pointing
A₁ = integrate(F(Phi₁(R, theta)) ⋅ (-𝐍ormal₁), (theta, 0, 2PI), (R, 0, 1)) # use -Normal for outward pointing
```
Integrating over $S_2$ we use the parameterization $\Phi(r, \theta) = \langle r\cos(\theta), r\sin(\theta), 1 + r\cos(\theta)\rangle$.
```{julia}
𝐏hi₂(r, theta) = [r*cos(theta), r*sin(theta), 1 + r*cos(theta)]
𝐉ac₂ = 𝐏hi₂(𝐑, 𝐭heta).jacobian([𝐑, 𝐭heta])
𝐯₂, 𝐰₂ = 𝐉ac₂[:,1], 𝐉ac₂[:,2]
𝐍ormal₂ = 𝐯₂ × 𝐰₂ .|> simplify # has correct orientation
Phi₂(r, theta) = [r*cos(theta), r*sin(theta), 1 + r*cos(theta)]
Jac₂ = Phi₂(R, theta).jacobian([R, theta])
v₂, w₂ = Jac₂[:,1], Jac₂[:,2]
Normal₂ = v₂ × w₂ .|> simplify # has correct orientation
```
With this, the contribution for $S_2$ is:
```{julia}
A₂ = integrate(𝐅(𝐏hi₂(𝐑, 𝐭heta)) ⋅ (𝐍ormal₂), (𝐭heta, 0, 2PI), (𝐑, 0, 1))
A₂ = integrate(F(Phi₂(R, theta)) ⋅ (Normal₂), (theta, 0, 2PI), (R, 0, 1))
```
Finally for $S_3$, the parameterization used is $\Phi(z, \theta) = \langle \cos(\theta), \sin(\theta), z\rangle$, but this is over a non-rectangular region, as $z$ is between $0$ and $1 + x$.
@ -953,17 +953,17 @@ This parameterization gives a normal computed through:
```{julia}
@syms 𝐳::positive
𝐏hi₃(z, theta) = [cos(theta), sin(theta), 𝐳]
𝐉ac₃ = 𝐏hi₃(𝐳, 𝐭heta).jacobian([𝐳, 𝐭heta])
𝐯₃, 𝐰₃ = 𝐉ac₃[:,1], 𝐉ac₃[:,2]
𝐍ormal₃ = 𝐯₃ × 𝐰₃ .|> simplify # wrong orientation, so we change sign below
Phi₃(z, theta) = [cos(theta), sin(theta), 𝐳]
Jac₃ = Phi₃(𝐳, theta).jacobian([𝐳, theta])
v₃, w₃ = Jac₃[:,1], Jac₃[:,2]
Normal₃ = v₃ × w₃ .|> simplify # wrong orientation, so we change sign below
```
The contribution is
```{julia}
A₃ = integrate(𝐅(𝐏hi₃(𝐳, 𝐭heta)) ⋅ (-𝐍ormal₃), (𝐳, 0, 1 + cos(𝐭heta)), (𝐭heta, 0, 2PI))
A₃ = integrate(F(Phi₃(𝐳, theta)) ⋅ (-Normal₃), (𝐳, 0, 1 + cos(theta)), (theta, 0, 2PI))
```
In total, the surface integral is


@ -685,7 +685,6 @@ The constant $A$ just sets the scale, the parameter $n$ has a qualitative effect
```{julia}
#| hold: true
gr() # pyplot doesn't like the color as specified below.
n = 2
f(r,theta) = r^n * cos(n*theta)
g(r, theta) = r^n * sin(n*theta)
@ -698,7 +697,6 @@ f(v) = f(v...); g(v)= g(v...)
xs = ys = range(-2,2, length=50)
p = contour(xs, ys, f∘Φ, color=:red, legend=false, aspect_ratio=:equal)
contour!(p, xs, ys, g∘Φ, color=:blue, linewidth=3)
#pyplot()
p
```


@ -369,8 +369,10 @@ Any partition $a =x_0 < x_1 < \cdots < x_n=b$ is related to a partition of $[a-c
\begin{align*}
f(c_1 -c) \cdot (x_1 - x_0) &+ f(c_2 -c) \cdot (x_2 - x_1) + \cdots + f(c_n -c) \cdot (x_n - x_{n-1}) = \\
& f(d_1) \cdot(x_1-c - (x_0-c)) + f(d_2) \cdot(x_2-c - (x_1-c)) + \cdots + f(d_n) \cdot(x_n-c - (x_{n-1}-c)).
f(c_1 -c) \cdot (x_1 - x_0) &+ f(c_2 -c) \cdot (x_2 - x_1) + \cdots\\
&\quad + f(c_n -c) \cdot (x_n - x_{n-1})\\
&= f(d_1) \cdot(x_1-c - (x_0-c)) + f(d_2) \cdot(x_2-c - (x_1-c)) + \cdots\\
&\quad + f(d_n) \cdot(x_n-c - (x_{n-1}-c)).
\end{align*}
@ -636,15 +638,15 @@ With this, we can easily find an approximate answer. We wrote the function to us
```{julia}
𝒇(x) = exp(x)
riemann(𝒇, 0, 5, 10) # S_10
f(x) = exp(x)
riemann(f, 0, 5, 10) # S_10
```
Or with more intervals in the partition
```{julia}
riemann(𝒇, 0, 5, 50_000)
riemann(f, 0, 5, 50_000)
```
(The answer is $e^5 - e^0 = 147.4131591025766\dots$, which shows that even $50,000$ partitions is not enough to guarantee many digits of accuracy.)
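Comparing against the exact value makes the slow convergence concrete:
```{julia}
riemann(f, 0, 5, 50_000) - (exp(5) - exp(0))   # the error is still far from machine precision
```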
@ -719,7 +721,7 @@ We have to be a bit careful with the Riemann sum, as the left Riemann sum will h
```{julia}
𝒉(x) = x > 0 ? x * log(x) : 0.0
h(x) = x > 0 ? x * log(x) : 0.0
```
This is actually inefficient, as the check for the size of `x` will slow things down a bit. Since we will call this function 50,000 times, we would like to avoid this, if we can. In this case just using the right sum will work:
@ -910,7 +912,8 @@ $$
$$
```{julia}
quadgk(x -> x^x, 0, 2)
u(x) = x^x
quadgk(u, 0, 2)
```
$$


@ -79,14 +79,15 @@ For this problem we need to identify $a$ and $b$. These are found numerically th
```{julia}
a,b = find_zeros(x -> f(x) - g(x), -3, 3)
h(x) = f(x) - g(x)
a,b = find_zeros(h, -3, 3)
```
The answer then can be found numerically:
```{julia}
quadgk(x -> f(x) - g(x), a, b)[1]
first(quadgk(h, a, b))
```
##### Example
@ -99,18 +100,18 @@ A plot shows the areas:
```{julia}
𝒇(x) = sin(x)
𝒈(x) = cos(x)
plot(𝒇, 0, 2pi)
plot!(𝒈)
f(x) = sin(x)
g(x) = cos(x)
plot(f, 0, 2pi)
plot!(g)
```
There is a single interval when $f \geq g$ and this can be found algebraically using basic trigonometry, or numerically:
```{julia}
𝒂,𝒃 = find_zeros(x -> 𝒇(x) - 𝒈(x), 0, 2pi) # pi/4, 5pi/4
quadgk(x -> 𝒇(x) - 𝒈(x), 𝒂, 𝒃)[1]
a, b = find_zeros(x -> f(x) - g(x), 0, 2pi) # pi/4, 5pi/4
quadgk(x -> f(x) - g(x), a, b)[1]
```
##### Example
@ -177,15 +178,15 @@ For concreteness, let $f(x) = 2-x^2$ and $[a,b] = [-1, 1/2]$, as in the figure.
```{julia}
𝐟(x) = 2 - x^2
𝐚, 𝐛 = -1, 1/2
𝐜 = (𝐚 + 𝐛)/2
f(x) = 2 - x^2
a, b = -1, 1/2
𝐜 = (a + b)/2
sac, sab, scb = secant(𝐟, 𝐚, 𝐜), secant(𝐟, 𝐚, 𝐛), secant(𝐟, 𝐜, 𝐛)
sac, sab, scb = secant(f, a, 𝐜), secant(f, a, b), secant(f, 𝐜, b)
f1(x) = min(sac(x), scb(x))
f2(x) = sab(x)
A1 = quadgk(x -> f1(x) - f2(x), 𝐚, 𝐛)[1]
A1 = quadgk(x -> f1(x) - f2(x), a, b)[1]
```
As we needed three secant lines, we used the `secant` function from `CalculusWithJulia` to create functions representing each. Once that was done, we used the `min` function to facilitate integrating over the top bounding curve; alternatively, we could break the integral over $[a,c]$ and $[c,b]$.
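A sketch of that alternative, splitting the integral at $c$ and reusing the secant functions from above:
```{julia}
# the same area as A1, computed piecewise over [a,c] and [c,b]
quadgk(x -> sac(x) - sab(x), a, 𝐜)[1] + quadgk(x -> scb(x) - sab(x), 𝐜, b)[1]
```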
@ -195,7 +196,7 @@ The area of the parabolic segment is more straightforward.
```{julia}
A2 = quadgk(x -> 𝐟(x) - f2(x), 𝐚, 𝐛)[1]
A2 = quadgk(x -> f(x) - f2(x), a, b)[1]
```
Finally, if Archimedes was right, this relationship should bring about $0$ (or something within round-off error):


@ -202,7 +202,7 @@ $$
\int_{-\pi/2}^{\pi/2} \cos(x) dx = F(\pi/2) - F(-\pi/2) = 1 - (-1) = 2.
$$
### An alternate notation for $F(b) - F(a)$
### An alternate notation
The expression $F(b) - F(a)$ is often written in this more compact form:
@ -617,7 +617,7 @@ We need to figure out when this is $0$. For that, we use some numeric math.
```{julia}
F(x) = exp(-x^2) * quadgk(t -> exp(t^2), 0, x)[1]
Fp(x) = -2x*F(x) + 1
Fp(x) = -2x * F(x) + 1
cps = find_zeros(Fp, -4, 4)
```


@ -34,8 +34,9 @@ function make_sqrt_x_graph(n)
a = 1/2^n
xs = range(1/2^8, stop=b, length=250)
x1s = range(a, stop=b, length=50)
@syms x
f(x) = 1/sqrt(x)
val = N(integrate(f, 1/2^n, b))
val = N(integrate(f(x), (x, 1/2^n, b)))
title = "area under f over [1/$(2^n), $b] is $(rpad(round(val, digits=2), 4))"
plt = plot(f, range(a, stop=b, length=251), xlim=(0,b), ylim=(0, 15), legend=false, size=fig_size, title=title)


@ -440,10 +440,10 @@ Apply the formula to a parameterized circle to ensure, the signed area is proper
```{julia}
#| hold: true
@syms 𝒓 t
𝒙 = 𝒓 * cos(t)
𝒚 = 𝒓 * sin(t)
-integrate(𝒚 * diff(𝒙, t), (t, 0, 2PI))
@syms r t
x = r * cos(t)
y = r * sin(t)
-integrate(y * diff(x, t), (t, 0, 2PI))
```
We see the expected answer for the area of a circle.
@ -461,9 +461,9 @@ Working symbolically, we have one arch given by the following described in a *cl
```{julia}
#| hold: true
@syms t
𝒙 = t - sin(t)
𝒚 = 1 - cos(t)
integrate(𝒚 * diff(𝒙, t), (t, 0, 2PI))
x = t - sin(t)
y = 1 - cos(t)
integrate(y * diff(x, t), (t, 0, 2PI))
```
([Galileo](https://mathshistory.st-andrews.ac.uk/Curves/Cycloid/) was thwarted in finding this answer exactly and resorted to constructing one from metal to *estimate* the value.)


@ -261,15 +261,15 @@ We again just let `SymPy` do the work. A partial fraction decomposition is given
```{julia}
𝒒 = (x^2 - 2x - 3)
apart(1/𝒒)
q = (x^2 - 2x - 3)
apart(1/q)
```
We see what should yield two logarithmic terms:
```{julia}
integrate(1/𝒒, x)
integrate(1/q, x)
```
:::{.callout-note}


@ -300,10 +300,10 @@ The equation $\cos(x) = x$ has just one solution, as can be seen in this plot:
```{julia}
𝒇(x) = cos(x)
𝒈(x) = x
plot(𝒇, -pi, pi)
plot!(𝒈)
f(x) = cos(x)
g(x) = x
plot(f, -pi, pi)
plot!(g)
```
Find it.
@ -313,8 +313,8 @@ We see from the graph that it is clearly between $0$ and $2$, so all we need is
```{julia}
𝒉(x) = 𝒇(x) - 𝒈(x)
find_zero(𝒉, (0, 2))
h(x) = f(x) - g(x)
find_zero(h, (0, 2))
```
##### Example: Inverse functions


@ -517,6 +517,24 @@ ys = f.(xs)
Same story. The numeric evidence supports a limit of $L=0.6$.
::: {.callout-note}
### The `lim` function
The `CalculusWithJulia` package provides a convenience function `lim(f, c)` to create tables to showcase limits. The `dir` keyword can be `"+-"` (the default) to show values from both the left and the right; `"+"` to only show values to the right of `c`; and `"-"` to only show values to the left of `c`:
For example:
```{julia}
lim(f, c)
```
The numbers are displayed in decreasing order so the values on the left are read from top to bottom:
```{julia}
lim(f, c; dir="-") # or even lim(f, c, -)
```
:::
##### Example: the secant line
@ -567,18 +585,18 @@ What is the value of $L$, if it exists? A quick attempt numerically yields:
```{julia}
𝒙s = 0 .+ hs
𝒚s = [g(x) for x in 𝒙s]
[𝒙s 𝒚s]
xs = 0 .+ hs
ys = [g(x) for x in xs]
[xs ys]
```
Hmm, the values in `ys` appear to be going to $0.5$, but then end up at $0$. Is the limit $0$ or $1/2$? The answer is $1/2$. The last $0$ is an artifact of floating point arithmetic, and the last few deviations from `0.5` are due to loss of precision in subtraction. To investigate, we look more carefully at the two ratios:
```{julia}
y1s = [1 - cos(x) for x in 𝒙s]
y2s = [x^2 for x in 𝒙s]
[𝒙s y1s y2s]
y1s = [1 - cos(x) for x in xs]
y2s = [x^2 for x in xs]
[xs y1s y2s]
```
Looking at the bottom of the second column reveals the error. The value of `1 - cos(1.0e-8)` is `0` and not a value around `5e-17`, as would be expected from the pattern above it. This is because the smallest floating point value less than `1.0` is more than `5e-17` units away, so `cos(1e-8)` is evaluated to be `1.0`. There just isn't enough granularity to get this close to $0$.
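The spacing of floating point values near `1.0` makes this concrete:
```{julia}
eps(1.0)               # the gap from 1.0 up to the next representable float (about 2.2e-16)
1.0 - prevfloat(1.0)   # the gap down to the previous float (about 1.1e-16)
cos(1e-8) == 1.0       # true, as 1 - 5e-17 is not representable and rounds to 1.0
```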
@ -928,7 +946,7 @@ We know the first factor has a limit found by evaluation: $2/\pi$, so it is real
```{julia}
l(x) = cos(PI*x) / (1 - (2x)^2)
limit(l, 1//2)
limit(l(x), x => 1//2)
```
Putting together, we would get $1/2$. Which we could have done directly in this case:


@ -153,7 +153,7 @@ Some functions only have one-sided limits as they are not defined in an interval
```{julia}
limit(x^x, x, 0, dir="+")
limit(x^x, x=>0, dir="+")
```
This agrees with the IEEE convention of assigning `0^0` to be `1`.
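A quick check of that convention within `Julia` itself:
```{julia}
0^0, 0.0^0.0   # both are defined to be one
```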


@ -570,16 +570,7 @@ In `Julia`, there is a richer set of error types. The value `0/0` will in fact n
```{julia}
sqrt(-1)
```
For integer or real-valued inputs, the `sqrt` function expects non-negative values, so that the output will always be a real number.
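If a complex-valued answer is actually wanted, passing a complex argument sidesteps the domain restriction (a small sketch):
```{julia}
sqrt(complex(-1))   # 0.0 + 1.0im
```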
There are other types of errors. Overflow is a common one on most calculators. The value of $1000!$ is actually *very* large (over 2500 digits large). On the Google calculator it returns `Infinity`, a slight stretch. For `factorial(1000)` `Julia` returns an `OverflowError`. This means that the answer is too large to be represented as a regular integer.
```{julia}
#| error: true
factorial(1000)
```
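Should the exact value actually be needed, arbitrary precision integers offer a workaround (a sketch):
```{julia}
factorial(big(1000))   # a BigInt with over 2500 digits, so no overflow
```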


@ -95,6 +95,7 @@ Functions in `Julia` have an implicit domain, just as they do mathematically. In
```{julia}
#| error: true
h(-1)
```


@ -112,6 +112,7 @@ This could be worked around, as it is with some programming languages, but it is
```{julia}
#| error: true
sqrt(-1.0)
```


@ -427,24 +427,26 @@ SymPy is phasing in the `solveset` function to replace `solve`. The main reason
```{julia}
𝒑 = 8x^4 - 8x^2 + 1
𝒑_rts = solveset(𝒑)
p = 8x^4 - 8x^2 + 1
p_rts = solveset(p)
```
The `𝒑_rts` object, a `FiniteSet`, does not allow immediate access to its elements. For that `elements` will work to return a vector:
The `p_rts` object, a `Set`, does not allow indexed access to its elements. For that `collect` will work to return a vector:
```{julia}
elements(𝒑_rts)
collect(p_rts)
```
To get the numeric approximation, we compose these function calls:
To get the numeric approximation, we can broadcast:
```{julia}
N.(elements(solveset(𝒑)))
N.(solveset(p))
```
(There is no need to call `collect` -- though you can -- as broadcasting over a set falls back to broadcasting over the iteration of the set and in this case returns a vector.)
## Do numeric methods matter when you can just graph?
@ -455,8 +457,8 @@ For another example, consider the polynomial $(x-20)^5 - (x-20) + 1$. In this fo
```{julia}
𝐩 = x^5 - 100x^4 + 4000x^3 - 80000x^2 + 799999x - 3199979
plot(𝐩, -10, 10)
p = x^5 - 100x^4 + 4000x^3 - 80000x^2 + 799999x - 3199979
plot(p, -10, 10)
```
This seems to indicate a root near $10$. But look at the scale of the $y$ axis. The value at $-10$ is around $-25,000,000$ so it is really hard to tell if $f$ is near $0$ when $x=10$, as the range is too large.
@ -466,7 +468,7 @@ A graph over $[10,20]$ is still unclear:
```{julia}
plot(𝐩, 10,20)
plot(p, 10,20)
```
We see that what looked like a zero near $10$, was actually a number around $-100,000$.
@ -476,7 +478,7 @@ Continuing, a plot over $[15, 20]$ still isn't that useful. It isn't until we ge
```{julia}
plot(𝐩, 18, 22)
plot(p, 18, 22)
```
Not that it can't be done, but graphically solving for a root here can require some judicious choice of viewing window. Even worse is the case where something might graphically look like a root, but in fact not be a root. Something like $(x-100)^2 + 0.1$ will demonstrate.
@ -578,6 +580,10 @@ from which it follows that $|x| \leq h$, as desired.
For our polynomial $x^5 -x + 1$ we have the sum above is $3$. The lone real root is approximately $-1.1673$ which satisfies $|-1.1673| \leq 3$.
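A quick numeric check of that bound, using `find_zeros` as elsewhere in these notes:
```{julia}
find_zeros(x -> x^5 - x + 1, -3, 3)   # the lone real root, about -1.1673, lies within [-3, 3]
```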
## Questions
@ -883,10 +889,10 @@ function mag()
p = Permutation(0,2)
q = Permutation(1,2)
m = 0
for perm in (p, q, q*p, p*q, p*q*p, p^2)
as = perm([2,3,4])
for perm in (p, q, q*p, p*q, p*q*p)#, p^2)
as = N.(collect(perm([2,3,4])))
fn = x -> x^3 - as[1]*x^2 + as[2]*x - as[3]
rts_ = find_zeros(fn, -10..10)
rts_ = find_zeros(fn, -10,10)
a1 = maximum(abs.(rts_))
m = a1 > m ? a1 : m
end


@ -141,6 +141,7 @@ As `r` uses "`x`", and `s` a "`t`" the two can not be added, say:
```{julia}
#| error: true
r + s
```