typos
@@ -100,7 +100,7 @@ We can easily make a graph of a function over a specified interval. What is not
Produce a graph of the function $f(x) = x^4 -13x^3 + 56x^2-92x + 48$.


-We identify this as a fourth-degree polynomial with postive leading coefficient. Hence it will eventually look $U$-shaped. If we graph over a too-wide interval, that is all we will see. Rather, we do some work to produce a graph that shows the zeros, peaks, and valleys of $f(x)$. To do so, we need to know the extent of the zeros. We can try some theory, but instead we just guess and if that fails, will work harder:
+We identify this as a fourth-degree polynomial with positive leading coefficient. Hence it will eventually look $U$-shaped. If we graph over a too-wide interval, that is all we will see. Rather, we do some work to produce a graph that shows the zeros, peaks, and valleys of $f(x)$. To do so, we need to know the extent of the zeros. We can try some theory, but instead we just guess and if that fails, will work harder:
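The guess-and-check approach described here can be done numerically: sample the polynomial over a candidate interval and look for sign changes between neighboring samples, each of which brackets a real zero. A minimal sketch (Python used for illustration; the document itself works in Julia):

```python
def f(x):
    return x**4 - 13*x**3 + 56*x**2 - 92*x + 48

# sample a guessed interval; a sign change between neighbors brackets a zero
xs = [0.25 + 0.5 * k for k in range(16)]          # 0.25, 0.75, ..., 7.75
brackets = [(a, b) for a, b in zip(xs, xs[1:]) if f(a) * f(b) < 0]
# four brackets, so an interval of roughly (0, 8) shows all the zeros
```

If no sign changes turn up, widen the interval and rescan — that is the "work harder" step.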


```{julia}
@@ -351,7 +351,7 @@ Consider the function $p(x) = x + 2x^3 + 3x^3 + 4x^4 + 5x^5 +6x^6$. Which interv
choices = ["``(-5,5)``, the default bounds of a calculator",
"``(-3.5, 3.5)``, the bounds given by Cauchy for the real roots of ``p``",
"``(-1, 1)``, as many special polynomials have their roots in this interval",
-"``(-1.1, .25)``, as this constains all the roots, the critical points, and inflection points and just a bit more"
+"``(-1.1, .25)``, as this contains all the roots, the critical points, and inflection points and just a bit more"
]
radioq(choices, 4, keep_order=true)
```
@@ -577,7 +577,7 @@ Why should asymptotics matter?
#| hold: true
#| echo: false
choices = [
-L"A vertical asymptote can distory the $y$ range, so it is important to avoid too-large values",
+L"A vertical asymptote can distort the $y$ range, so it is important to avoid too-large values",
L"A horizontal asymptote must be plotted from $-\infty$ to $\infty$",
"A slant asymptote must be plotted over a very wide domain so that it can be identified."
]

@@ -1278,7 +1278,7 @@ Consider the graph of the `airyai` function (from `SpecialFunctions`) over $[-5,
plot(airyai, -5, 5)
```

-At $x = -2.5$ the derivative is postive or negative?
+At $x = -2.5$ the derivative is positive or negative?


```{julia}
@@ -1289,7 +1289,7 @@ answ = 1
radioq(choices, answ, keep_order=true)
```

-At $x=0$ the derivative is postive or negative?
+At $x=0$ the derivative is positive or negative?


```{julia}
@@ -1300,7 +1300,7 @@ answ = 2
radioq(choices, answ, keep_order=true)
```

-At $x = 2.5$ the derivative is postive or negative?
+At $x = 2.5$ the derivative is positive or negative?


```{julia}

@@ -124,7 +124,7 @@ plotif(f, f', -2, 2)
When a function changes from increasing to decreasing, or decreasing to increasing, it will have a peak or a valley. More formally, such points are relative extrema.


-When discussing the mean value thereom, we defined *relative extrema* :
+When discussing the mean value theorem, we defined *relative extrema* :


> * The function $f(x)$ has a *relative maximum* at $c$ if the value $f(c)$ is an *absolute maximum* for some *open* interval containing $c$.
@@ -264,9 +264,9 @@ Such values are often summarized graphically on a number line using a *sign char
Reading this we have:


-* the derivative changes sign from negative to postive at $x=-1$, so $g(x)$ will have a relative minimum.
+* the derivative changes sign from negative to positive at $x=-1$, so $g(x)$ will have a relative minimum.
* the derivative changes sign from positive to negative at $x=0$, so $g(x)$ will have a relative maximum.
-* the derivative changes sign from negative to postive at $x=1$, so $g(x)$ will have a relative minimum.
+* the derivative changes sign from negative to positive at $x=1$, so $g(x)$ will have a relative minimum.


In the `CalculusWithJulia` package there is `sign_chart` function that will do such work for us, though with a different display:
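A minimal version of such a sign chart can be assembled by hand: evaluate the derivative midway between consecutive zeros and record the sign on each side. A sketch (Python for illustration; `gp` below is a hypothetical derivative with the same sign pattern as the text's $g$, not the document's actual function):

```python
# hypothetical derivative with zeros at -1, 0, 1 and sign pattern -, +, -, +
def gp(x):
    return x * (x - 1) * (x + 1)

def sign(v):
    return "+" if v > 0 else "-"

zeros = [-1, 0, 1]
pts = [-2] + zeros + [2]                  # pad with points outside the zeros
chart = [(z, sign(gp((lo + z) / 2)), sign(gp((z + hi) / 2)))
         for lo, z, hi in zip(pts, pts[1:], pts[2:])]
# each tuple is (zero, sign before, sign after):
# a ("-", "+") pair marks a relative minimum, ("+", "-") a relative maximum
```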
@@ -192,7 +192,7 @@ p
The plot shows the tangent line with slope $dy/dx$ and the actual change in $y$, $\Delta y$, for some specified $\Delta x$. The small gap above the sine curve is the error were the value of the sine approximated using the drawn tangent line. We can see that approximating the value of $\Delta y = \sin(c+\Delta x) - \sin(c)$ with the often easier to compute $(dy/dx) \cdot \Delta x = f'(c)\Delta x$ - for small enough values of $\Delta x$ - is not going to be too far off provided $\Delta x$ is not too large.


-This approximation is known as linearization. It can be used both in theoretical computations and in pratical applications. To see how effective it is, we look at some examples.
+This approximation is known as linearization. It can be used both in theoretical computations and in practical applications. To see how effective it is, we look at some examples.
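The size of the gap between $\Delta y$ and $f'(c)\Delta x$ is easy to check numerically for the sine function above; it shrinks like $(\Delta x)^2$. A quick sketch (Python for illustration):

```python
import math

c, dx = math.pi / 4, 0.05
dy = math.sin(c + dx) - math.sin(c)   # actual change, Δy
lin = math.cos(c) * dx                # tangent-line (linearized) change, f'(c)·Δx
err = abs(dy - lin)                   # error is on the order of dx**2
```

Here the error is roughly $|f''(c)|/2 \cdot (\Delta x)^2 \approx 9\times 10^{-4}$, tiny compared with the change itself.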


##### Example
@@ -354,7 +354,8 @@ That is the function $f(x)$, minus the secant line between $(a,f(a))$ and $(b, f
#The polynomial function interpolates the points ``A``,``B``,``C``, and ``D``.
#Adjusting these creates different functions. Regardless of the
#function -- which as a polynomial will always be continuous and
-#differentiable -- the slope of the secant line between ``A`` and ``B`` is alway#s matched by **some** tangent line between the points ``A`` and ``B``.
+#differentiable -- the slope of the secant line between ``A`` and ``B`` is
+#always matched by **some** tangent line between the points ``A`` and ``B``.
#"""
#JSXGraph(:derivatives, "mean-value.js", caption)
nothing

@@ -111,7 +111,7 @@ f0,f1 = f1, sin(x1)
x1,f1
```

-Like Newton's method, the secant method coverges quickly for this problem (though its rate is less than the quadratic rate of Newton's method).
+Like Newton's method, the secant method converges quickly for this problem (though its rate is less than the quadratic rate of Newton's method).


This method is included in `Roots` as `Secant()` (or `Order1()`):
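For reference, the secant iteration itself fits in a few lines: each step replaces the older point with the zero of the secant line through the two current points. A sketch (Python for illustration; this is not the `Roots.Secant()` implementation):

```python
import math

def secant(f, x0, x1, xtol=1e-12, maxiter=50):
    f0, f1 = f(x0), f(x1)
    for _ in range(maxiter):
        if abs(x1 - x0) < xtol:
            break
        # replace the older point with the zero of the secant line
        x0, x1, f0 = x1, x1 - f1 * (x1 - x0) / (f1 - f0), f1
        f1 = f(x1)
    return x1

root = secant(math.sin, 3.0, 4.0)   # converges to pi
```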
@@ -210,7 +210,7 @@ Though the above can be simplified quite a bit when computed by hand, here we si
An inverse quadratic step is utilized by Brent's method, as possible, to yield a rapidly convergent bracketing algorithm implemented as a default zero finder in many software languages. `Julia`'s `Roots` package implements the method in `Roots.Brent()`. An inverse cubic interpolation is utilized by [Alefeld, Potra, and Shi](https://dl.acm.org/doi/10.1145/210089.210111) which gives an asymptotically even more rapidly convergent algorithm than Brent's (implemented in `Roots.AlefeldPotraShi()` and also `Roots.A42()`). This is used as a finishing step in many cases by the default hybrid `Order0()` method of `find_zero`.
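A single inverse quadratic step is just Lagrange interpolation of $x$ as a quadratic function of $y$ through three points, evaluated at $y = 0$. A sketch (Python for illustration; a hypothetical helper, not the `Roots` implementation):

```python
import math

def inverse_quadratic_step(f, a, b, c):
    # interpolate x as a quadratic in y through (f(a),a), (f(b),b), (f(c),c),
    # then evaluate that quadratic at y = 0
    fa, fb, fc = f(a), f(b), f(c)
    return (a * fb * fc / ((fa - fb) * (fa - fc))
            + b * fa * fc / ((fb - fa) * (fb - fc))
            + c * fa * fb / ((fc - fa) * (fc - fb)))

x = inverse_quadratic_step(math.sin, 3.0, 3.2, 3.5)   # one step lands near pi
```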


-In a bracketing algorithm, the next step should reduce the size of the bracket, so the next iterate should be inside the current bracket. However, quadratic convergence does not guarantee this to happen. As such, sometimes a subsitute method must be chosen.
+In a bracketing algorithm, the next step should reduce the size of the bracket, so the next iterate should be inside the current bracket. However, quadratic convergence does not guarantee this to happen. As such, sometimes a substitute method must be chosen.


[Chandrapatla's](https://www.google.com/books/edition/Computational_Physics/cC-8BAAAQBAJ?hl=en&gbpv=1&pg=PA95&printsec=frontcover) method, is a bracketing method utilizing an inverse quadratic step as the centerpiece. The key insight is the test to choose between this inverse quadratic step and a bisection step. This is done in the following based on values of $\xi$ and $\Phi$ defined within:

@@ -758,7 +758,7 @@ ImageFile(imgfile, caption)
# {{{newtons_method_flat}}}
caption = L"""

-Illustration of Newton's method failing to coverge as for some $x_i$,
+Illustration of Newton's method failing to converge as for some $x_i$,
$f'(x_i)$ is too close to ``0``. In this instance after a few steps, the
algorithm just cycles around the local minimum near $0.66$. The values
of $x_i$ repeat in the pattern: $1.0002, 0.7503, -0.0833, 1.0002,
@@ -1105,7 +1105,7 @@ If $x_0$ is $1$ what occurs?
nm_choices = [
"The algorithm converges very quickly. A good initial point was chosen.",
"The algorithm converges, but slowly. The initial point is close enough to the answer to ensure decreasing errors.",
-"The algrithm fails to converge, as it cycles about"
+"The algorithm fails to converge, as it cycles about"
]
radioq(nm_choices, 1, keep_order=true)
```

@@ -157,7 +157,7 @@ find_zeros(A', 0, 10) # find_zeros in `Roots`,

:::{.callout-note}
## Note
-Look at the last definition of `A`. The function `A` appears on both sides, though on the left side with one argument and on the right with two. These are two "methods" of a *generic* function, `A`. `Julia` allows multiple definitions for the same name as long as the arguments (their number and type) can disambiguate which to use. In this instance, when one argument is passed in then the last defintion is used (`A(b,h(b))`), whereas if two are passed in, then the method that multiplies both arguments is used. The advantage of multiple dispatch is illustrated: the same concept - area - has one function name, though there may be different ways to compute the area, so there is more than one implementation.
+Look at the last definition of `A`. The function `A` appears on both sides, though on the left side with one argument and on the right with two. These are two "methods" of a *generic* function, `A`. `Julia` allows multiple definitions for the same name as long as the arguments (their number and type) can disambiguate which to use. In this instance, when one argument is passed in then the last definition is used (`A(b,h(b))`), whereas if two are passed in, then the method that multiplies both arguments is used. The advantage of multiple dispatch is illustrated: the same concept - area - has one function name, though there may be different ways to compute the area, so there is more than one implementation.

:::
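For comparison, a language without arity-based multiple dispatch has to express the same pattern inside one function. A rough Python analogue of the two `A` methods (the names `A` and `h` here are hypothetical stand-ins mirroring the pattern described, not the document's actual code):

```python
def h(b):
    # hypothetical height-as-a-function-of-base relation
    return 10 - b

def A(b, height=None):
    if height is None:
        return A(b, h(b))   # plays the role of the one-argument method A(b)
    return b * height       # the two-argument method: area = base * height
```

In Julia the two definitions are separate methods of one generic function and the runtime selects between them by argument count; here a single function must branch by hand.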
@@ -1143,7 +1143,7 @@ numericq(Prim(val), 1e-3) ## a square!
###### Question


-A running track is in the shape of two straight aways and two half circles. The total distance (perimeter) is 400 meters. Suppose $w$ is the width (twice the radius of the circles) and $h$ is the height. What dimensions minimize the sum $w + h$?
+A running track is in the shape of two straightaways and two half circles. The total distance (perimeter) is 400 meters. Suppose $w$ is the width (twice the radius of the circles) and $h$ is the height. What dimensions minimize the sum $w + h$?


You have $P(w, h) = 2\pi \cdot (w/2) + 2\cdot(h-w)$.
@@ -1306,7 +1306,7 @@ Let $f(x) = (a/x)^x$ for $a,x > 0$. When is this maximized? The following might

```{julia}
#| hold: true
-@syms x::positive a::postive
+@syms x::positive a::positive
diff((a/x)^x, x)
```

@@ -1377,7 +1377,7 @@ Why is the following set of commands useful in this task:
#| eval: false
c2 = a^2*(1 + 1/p)^2 + b^2*(1+p)^2
c2p = diff(c2, p)
-eq = numer(together(c2p))
+eq = numerator(together(c2p))
solve(eq ~ 0, p)
```


@@ -64,7 +64,7 @@ gif(anim, imgfile, fps = 1)

caption = L"""

-Illustration of the Taylor polynomial of degree $k$, $T_k(x)$, at $c=0$ and its graph overlayed on that of the function $1 - \cos(x)$.
+Illustration of the Taylor polynomial of degree $k$, $T_k(x)$, at $c=0$ and its graph overlaid on that of the function $1 - \cos(x)$.

"""

@@ -921,7 +921,7 @@ end
gᵏs
```

-We can see the expected `g' = 1/f'` (where the point of evalution is $g'(y) = 1/f'(f^{-1}(y))$ is not written). In addition, we get 3 more formulas, hinting that the answers grow rapidly in terms of their complexity.
+We can see the expected `g' = 1/f'` (where the point of evaluation is $g'(y) = 1/f'(f^{-1}(y))$ is not written). In addition, we get 3 more formulas, hinting that the answers grow rapidly in terms of their complexity.
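That leading formula can be sanity-checked numerically with a concrete pair, say $f = \exp$ and $g = \log$, where $1/f'(f^{-1}(y)) = 1/y$. A sketch (Python for illustration):

```python
import math

y, h = 2.0, 1e-6
# central-difference estimate of g'(y) for g = log
g_prime = (math.log(y + h) - math.log(y - h)) / (2 * h)
expected = 1 / math.exp(math.log(y))   # 1/f'(f⁻¹(y)) with f = exp, i.e. 1/y
```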


In the above, for each `n`, the code above sets up the two sides, `left` and `right`, of an equation involving the higher-order derivatives of $g$. For example, when `n=2` we have: