Compare commits

...

2 Commits

Author | SHA1 | Message | Date
john verzani | 9dcafd7d7d | Merge pull request #145 from fangliu-tju/main (some typos.) | 2025-05-29 07:47:35 -04:00
Fang Liu | 4d0a9e9a72 | some typos. | 2025-05-23 16:20:13 +08:00
10 changed files with 54 additions and 60 deletions

View File

@ -195,7 +195,7 @@ To identify how wide a viewing window should be, for the rational function the a
```{julia}
cps = find_zeros(f', -10, 10)
-poss_ips = find_zero(f'', (-10, 10))
+poss_ips = find_zeros(f'', (-10, 10))
extrema(union(cps, poss_ips))
```
@ -340,7 +340,7 @@ radioq(choices, answ)
###### Question
-Consider the function $p(x) = x + 2x^3 + 3x^3 + 4x^4 + 5x^5 +6x^6$. Which interval shows more than a $U$-shaped graph that dominates for large $x$ due to the leading term being $6x^6$?
+Consider the function $p(x) = x + 2x^2 + 3x^3 + 4x^4 + 5x^5 +6x^6$. Which interval shows more than a $U$-shaped graph that dominates for large $x$ due to the leading term being $6x^6$?
(Find an interval that contains the zeros, critical points, and inflection points.)
@ -494,7 +494,7 @@ Does a plot over $[0,50]$ show qualitatively similar behaviour?
```{julia}
#| hold: true
#| echo: false
-yesnoq(true)
+yesnoq("no")
```
Exponential growth has $P''(t) = P_0 a^t \log(a)^2 > 0$, so has no inflection point. By plotting over a sufficiently wide interval, can you answer: does the logistic growth model have an inflection point?
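A quick numeric check of this claim (the values of `P0`, `a`, and the step in the difference quotient are illustrative choices, not from the source):

```julia
# Exponential growth P(t) = P0 * a^t has P''(t) = P0 * a^t * log(a)^2 > 0, so no
# inflection point. Check with a central second difference; P0 and a are illustrative.
P0, a = 5.0, 2.0
P(t) = P0 * a^t
d2(f, t; h=1e-4) = (f(t + h) - 2f(t) + f(t - h)) / h^2
all(d2(P, t) > 0 for t in 0:0.5:3)   # no sign change, hence no inflection point
```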

View File

@ -432,7 +432,7 @@ $$
\frac{\log(x+h) - \log(x)}{h} = \frac{1}{h}\log(\frac{x+h}{x}) = \log((1+h/x)^{1/h}).
$$
-As noted earlier, Cauchy saw the limit as $u$ goes to $0$ of $f(u) = (1 + u)^{1/u}$ is $e$. Re-expressing the above we can get $1/h \cdot \log(f(h/x))$. The limit as $h$ goes to $0$ of this is found from the composition rules for limits: as $\lim_{h \rightarrow 0} f(h/x) = e^{1/x}$, and since $\log(x)$ is continuous at $e^{1/x}$ we get this expression has a limit of $1/x$.
+As noted earlier, Cauchy saw the limit as $u$ goes to $0$ of $f(u) = (1 + u)^{1/u}$ is $e$. Re-expressing the above we can get $1/x \cdot \log(f(h/x))$. The limit as $h$ goes to $0$ of this is found from the composition rules for limits: as $\lim_{h \rightarrow 0} f(h/x) = e$, and since $\log(x)$ is continuous at $e$ we get this expression has a limit of $1/x$.
We verify through:
@ -775,7 +775,7 @@ f'(\square) &= 2(\square) & g'(x) &= \frac{1}{2}x^{-1/2}
$$
-We use $\square$ for the argument of `f'` to emphasize that $g(x)$ is the needed value, not just $x$:
+We use $\square$ for the argument of $f'$ to emphasize that $g(x)$ is the needed value, not just $x$:
$$
@ -1651,7 +1651,7 @@ $$
\frac{d(f\circ g)}{dx}\mid_{x=1}
&= \lim_{h\rightarrow 0} \frac{f(g(1) + g'(1)h)-f(g(1))}{h}\\
&= \lim_{h\rightarrow 0} \frac{f(g(1) + g'(1)h)-f(g(1))}{g'(1)h} \cdot g'(1)\\
-&= \lim_{h\rightarrow 0} (f\circ g)'(g(1)) \cdot g'(1).
+&= \lim_{h\rightarrow 0} f'(g(1)) \cdot g'(1).
\end{align*}
$$
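The corrected last step, $f'(g(1)) \cdot g'(1)$, can be checked numerically with the $f(x) = x^2$, $g(x) = \sqrt{x}$ pair from the earlier hunk (the finite-difference step `h` is an illustrative choice):

```julia
# Chain rule at x = 1 for f(x) = x^2, g(x) = sqrt(x): the secant slope of f∘g
# should match f'(g(1)) * g'(1). The step h is an illustrative choice.
f(x) = x^2
g(x) = sqrt(x)
fp(x) = 2x
gp(x) = 1 / (2sqrt(x))
h = 1e-6
secant = (f(g(1 + h)) - f(g(1))) / h
(secant, fp(g(1)) * gp(1))   # both approximately 1
```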

View File

@ -243,7 +243,7 @@ g(x) = sqrt(abs(x^2 - 1))
cps = find_zeros(g', -2, 2)
```
-We see the three values $-1$, $0$, $1$ that correspond to the two zeros and the relative minimum of $x^2 - 1$. We could graph things, but instead we characterize these values using a sign chart. A piecewise continuous function can only change sign when it crosses $0$ or jumps over $0$. The derivative will be continuous, except possibly at the three values above, so is piecewise continuous.
+We see the three values $-1$, $0$, $1$ that correspond to the two zeros and the relative maximum of $|x^2 - 1|$. We could graph things, but instead we characterize these values using a sign chart. A piecewise continuous function can only change sign when it crosses $0$ or jumps over $0$. The derivative will be continuous, except possibly at the three values above, so is piecewise continuous.
A sign chart picks convenient values between crossing points to test if the function is positive or negative over those intervals. When computing by hand, these would ideally be values for which the function is easily computed. On the computer, this isn't a concern; below the midpoint is chosen:
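A hypothetical version of that midpoint test for this example, with a finite-difference stand-in for `g'` since the original's derivative operator is not part of this hunk:

```julia
# Sign chart for g(x) = sqrt(abs(x^2 - 1)): test a midpoint of each interval
# between the crossing points -1, 0, 1; dg is a finite-difference stand-in for g'.
g(x) = sqrt(abs(x^2 - 1))
dg(x; h=1e-6) = (g(x + h) - g(x - h)) / 2h
midpoints = [-1.5, -0.5, 0.5, 1.5]
sign.(dg.(midpoints))   # [-1, 1, -1, 1]: g alternates decreasing/increasing
```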
@ -328,7 +328,7 @@ At $x=0$ we have to the left and right signs found by
fp(-pi/2), fp(pi/2)
```
-Both are negative. The derivative does not change sign at $0$, so the critical point is neither a relative minimum or maximum.
+Both are negative. The derivative does not change sign at $0$, so the critical point is neither a relative minimum nor maximum.
What about at $2\pi$? We do something similar:
@ -338,7 +338,7 @@ What about at $2\pi$? We do something similar:
fp(2pi - pi/2), fp(2pi + pi/2)
```
-Again, both negative. The function $f(x)$ is just decreasing near $2\pi$, so again the critical point is neither a relative minimum or maximum.
+Again, both negative. The function $f(x)$ is just decreasing near $2\pi$, so again the critical point is neither a relative minimum nor maximum.
A graph verifies this:
@ -454,7 +454,7 @@ We won't work with these definitions in this section, rather we will characteriz
-A proof of this makes use of the same trick used to establish the mean value theorem from Rolle's theorem. Assume $f'$ is increasing and let $g(x) = f(x) - (f(a) + M \cdot (x-a))$, where $M$ is the slope of the secant line between $a$ and $b$. By construction $g(a) = g(b) = 0$. If $f'(x)$ is increasing, then so is $g'(x) = f'(x) + M$. By its definition above, showing $f$ is concave up is the same as showing $g(x) \leq 0$. Suppose to the contrary that there is a value where $g(x) > 0$ in $[a,b]$. We show this can't be. Assuming $g'(x)$ always exists, after some work, Rolle's theorem will ensure there is a value where $g'(c) = 0$ and $(c,g(c))$ is a relative maximum, and as we know there is at least one positive value, it must be $g(c) > 0$. The first derivative test then ensures that $g'(x)$ will increase to the left of $c$ and decrease to the right of $c$, since $c$ is at a critical point and not an endpoint. But this can't happen as $g'(x)$ is assumed to be increasing on the interval.
+A proof of this makes use of the same trick used to establish the mean value theorem from Rolle's theorem. Assume $f'$ is increasing and let $g(x) = f(x) - (f(a) + M \cdot (x-a))$, where $M$ is the slope of the secant line between $a$ and $b$. By construction $g(a) = g(b) = 0$. If $f'(x)$ is increasing, then so is $g'(x) = f'(x) + M$. By its definition above, showing $f$ is concave up is the same as showing $g(x) \leq 0$. Suppose to the contrary that there is a value where $g(x) > 0$ in $[a,b]$. We show this can't be. Assuming $g'(x)$ always exists, after some work, Rolle's theorem will ensure there is a value where $g'(c) = 0$ and $(c,g(c))$ is a relative maximum, and as we know there is at least one positive value, it must be $g(c) > 0$. The first derivative test then ensures that $g'(x)$ will be positive to the left of $c$ and negative to the right of $c$, since $c$ is at a critical point and not an endpoint. But this can't happen as $g'(x)$ is assumed to be increasing on the interval.
The relationship between increasing functions and their derivatives (if $f'(x) > 0$ on $I$, then $f$ is increasing on $I$) gives this second characterization of concavity when the second derivative exists:
@ -503,8 +503,8 @@ Concave up functions are "opening" up, and often clearly $U$-shaped, though that
If $c$ is a critical point of $f(x)$ with $f''(c)$ existing in a neighborhood of $c$, then
-* $f$ will have a relative maximum at the critical point $c$ if $f''(c) > 0$,
-* $f$ will have a relative minimum at the critical point $c$ if $f''(c) < 0$, and
+* $f$ will have a relative minimum at the critical point $c$ if $f''(c) > 0$,
+* $f$ will have a relative maximum at the critical point $c$ if $f''(c) < 0$, and
* *if* $f''(c) = 0$ the test is *inconclusive*.
:::
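The corrected bullets can be illustrated with $f(x) = x^3 - 3x$ (an illustrative example, not the document's): $f'(x) = 3x^2 - 3$ vanishes at $\pm 1$, and $f''(x) = 6x$ classifies each point.

```julia
# Second-derivative test for f(x) = x^3 - 3x at its critical points ±1.
fpp(x) = 6x
fpp(1) > 0    # true: relative minimum at x = 1
fpp(-1) < 0   # true: relative maximum at x = -1
```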
@ -799,6 +799,8 @@ scatter!(ips, ex.(ips), marker=(5, :brown3, :star5))
The black circle denotes what?
```{julia}
+#| hold: true
+#| echo: false
choices = [raw"A zero of $f$",
raw"A critical point of $f$",
raw"An inflection point of $f$"]
@ -806,21 +808,11 @@ answ = 1
radioq(choices, answ)
```
-The black circle denotes what?
-```{julia}
-choices = [raw"A zero of $f$",
-raw"A critical point of $f$",
-raw"An inflection point of $f$"]
-answ = 1
-radioq(choices, answ)
-```
The green diamond denotes what?
```{julia}
#| hold: true
#| echo: false
choices = [raw"A zero of $f$",
raw"A critical point of $f$",
raw"An inflection point of $f$"]
@ -832,6 +824,8 @@ radioq(choices, answ)
The red stars denotes what?
```{julia}
+#| hold: true
++#| echo: false
choices = [raw"Zeros of $f$",
raw"Critical points of $f$",
raw"Inflection points of $f$"]

View File

@ -416,7 +416,7 @@ $$
6x - (6y \frac{dy}{dx} \cdot \frac{dy}{dx} + 3y^2 \frac{d^2y}{dx^2}) = 0.
$$
-Again, if must be that $d^2y/dx^2$ appears as a linear factor, so we can solve for it:
+Again, it must be that $d^2y/dx^2$ appears as a linear factor, so we can solve for it:
$$
@ -456,7 +456,7 @@ eqn = K(x,y)
eqn1 = eqn(y => u(x))
dydx = solve(diff(eqn1,x), diff(u(x), x))[1] # 1 solution
d2ydx2 = solve(diff(eqn1, x, 2), diff(u(x),x, 2))[1] # 1 solution
-eqn2 = d2ydx2(diff(u(x), x) => dydx, u(x) => y)
+eqn2 = subs(d2ydx2, diff(u(x), x) => dydx, u(x) => y)
simplify(eqn2)
```
@ -637,7 +637,7 @@ Okay, now we need to put this value back into our expression for the `x` value a
```{julia}
-xstar = N(cps[2](y => ystar, a =>3, b => 3, L => 3))
+xstar = N(cps[2](y => ystar, a =>3, b => 3))
```
Our minimum is at `(xstar, ystar)`, as this graphic shows:

View File

@ -115,7 +115,7 @@ $$
\lim_{x \rightarrow 0} \frac{e^x - e^{-x}}{x}.
$$
-It too is of the indeterminate form $0/0$. The derivative of the top is $e^x + e^{-x}$, which is $2$ when $x=0$, so the ratio of $f'(0)/g'(0)$ is seen to be $2$ By continuity, the limit of the ratio of the derivatives is $2$. Then by L'Hospital's rule, the limit above is $2$.
+It too is of the indeterminate form $0/0$. The derivative of the top is $e^x + e^{-x}$, which is $2$ when $x=0$, so the ratio of $f'(0)/g'(0)$ is seen to be $2$. By continuity, the limit of the ratio of the derivatives is $2$. Then by L'Hospital's rule, the limit above is $2$.
* Sometimes, L'Hospital's rule must be applied twice. Consider this limit:
@ -820,7 +820,7 @@ $$
#| hold: true
#| echo: false
choices = [
"``e^{-2/\\pi}``",
"``e^{2/\\pi}``",
"``{2\\pi}``",
"``1``",
"``0``",

View File

@ -642,7 +642,7 @@ x = Dual(0, 1)
@code_lowered sin(x)
```
-This output of `@code_lowered` can be confusing, but this simple case needn't be. Working from the end we see an assignment to a variable named `%7` of `Dual(%3, %6)`. The value of `%3` is `sin(x)` where `x` is the value `0` above. The value of `%6` is `cos(x)` *times* the value `1` above (the `xp`), which reflects the *chain* rule being used. (The derivative of `sin(u)` is `cos(u)*du`.) So this dual number encodes both the function value at `0` and the derivative of the function at `0`.)
+This output of `@code_lowered` can be confusing, but this simple case needn't be. Working from the end we see an assignment to a variable named `%3` of `Dual(%6, %12)`. The value of `%6` is `sin(x)` where `x` is the value `0` above. The value of `%12` is `cos(x)` *times* the value `1` above (the `xp`), which reflects the *chain* rule being used. (The derivative of `sin(u)` is `cos(u)*du`.) So this dual number encodes both the function value at `0` and the derivative of the function at `0`.
Similarly, we can see what happens to `log(x)` at `1` (encoded by `Dual(1,1)`):
@ -654,14 +654,14 @@ x = Dual(1, 1)
@code_lowered log(x)
```
-We can see the derivative again reflects the chain rule, it being given by `1/x * xp` where `xp` acts like `dx` (from assignments `%5` and `%4`). Comparing the two outputs, we see only the assignment to `%5` differs, it reflecting the derivative of the function.
+We can see the derivative again reflects the chain rule, it being given by `1/x * xp` where `xp` acts like `dx` (from assignments `%9` and `%8`). Comparing the two outputs, we see only the assignment to `%9` differs, it reflecting the derivative of the function.
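The rules read off the lowered code can be mimicked by a small stand-in type (an illustrative sketch, not the `Dual` implementation the file actually uses):

```julia
# Minimal forward-mode sketch: a value/derivative pair with chain-rule methods
# for sin and log, mirroring the rules read off the lowered code above.
struct D
    x::Float64    # function value
    xp::Float64   # derivative part; acts like dx
end
Base.sin(d::D) = D(sin(d.x), cos(d.x) * d.xp)  # (sin u)' = cos(u) * du
Base.log(d::D) = D(log(d.x), 1 / d.x * d.xp)   # (log u)' = (1/u) * du
s = sin(D(0.0, 1.0))   # encodes sin(0) = 0 and cos(0)*1 = 1
l = log(D(1.0, 1.0))   # encodes log(1) = 0 and (1/1)*1 = 1
(s.xp, l.xp)
```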
## Curvature
The curvature of a function will be a topic in a later section on differentiable vector calculus, but the concept of linearization can be used to give an earlier introduction.
-The tangent line linearizes the function, it begin the best linear approximation to the graph of the function at the point. The slope of the tangent line is the limi of the slopes of different secant lines. Consider now, the orthogonal concept, the *normal line* at a point. This is a line perpendicular to the tangent line that goes through the point on the curve.
+The tangent line linearizes the function, it being the best linear approximation to the graph of the function at the point. The slope of the tangent line is the limit of the slopes of different secant lines. Consider now the orthogonal concept, the *normal line* at a point. This is a line perpendicular to the tangent line that goes through the point on the curve.
At a point $(c,f(c))$ the slope of the normal line is $-1/f'(c)$.
@ -680,7 +680,7 @@ Rearranging, we have
$$
\begin{align*}
--f'(c)(y-f(c) &= x-c\\
+-f'(c)(y-f(c)) &= x-c\\
-f'(c+h)(y-f(c+h)) &= x-(c+h)
\end{align*}
$$

View File

@ -330,7 +330,7 @@ Let $f(x)$ be differentiable on $(a,b)$ and continuous on $[a,b]$. Then there ex
This says for any secant line between $a < b$ there will be a parallel tangent line at some $c$ with $a < c < b$ (all provided $f$ is differentiable on $(a,b)$ and continuous on $[a,b]$).
-Figure @fig-mean-value-theorem illustrates the theorem. The orange line is the secant line. A parallel line tangent to the graph is guaranteed by the mean value theorem. In this figure, there are two such lines, rendered using red.
+Figure @fig-mean-value-theorem illustrates the theorem. The blue line is the secant line. A parallel line tangent to the graph is guaranteed by the mean value theorem. In this figure, there are two such lines, rendered using brown.
```{julia}

View File

@ -69,7 +69,7 @@ x₃ = (babylon ∘ babylon ∘ babylon)(2//1)
x₃, x₃^2.0
```
-This is now accurate to the sixth decimal point. That is about as far as we, or the Bablyonians, would want to go by hand. Using rational numbers quickly grows out of hand. The next step shows the explosion.
+This is now accurate to the sixth decimal point. That is about as far as we, or the Babylonians, would want to go by hand. Using rational numbers quickly grows out of hand. The next step shows the explosion.
```{julia}
@ -212,7 +212,7 @@ In practice, the algorithm is implemented not by repeating the update step a fix
Newton looked at this same example in 1699 (B.T. Polyak, *Newton's method and its use in optimization*, European Journal of Operational Research. 02/2007; 181(3):1086-1096.; and Deuflhard *Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms*) though his technique was slightly different as he did not use the derivative, *per se*, but rather an approximation based on the fact that his function was a polynomial.
-We can read that he guessed the answer was ``2 + p``, as there is a sign change between $2$ and $3$. Newton put this guess into the polynomial to get after simplification ``p^3 + 6p^2 + 10p - 1``. This has an **approximate** zero found by solving the linear part ``10p-1 = 0``. Taking ``p = 0.1`` he then can say the answer looks like ``2 + p + q`` and repeat to get ``q^3 + 6.3\cdot q^2 + 11.23 \cdot q + 0.061 = 0``. Again taking just the linear part estimates `q = 0.005431...`. After two steps the estimate is `2.105431...`. This can be continued by expressing the answer as ``2 + p + q + r`` and then solving for an estimate for ``r``.
+We can read that he guessed the answer was ``2 + p``, as there is a sign change between $2$ and $3$. Newton put this guess into the polynomial to get after simplification ``p^3 + 6p^2 + 10p - 1``. This has an **approximate** zero found by solving the linear part ``10p-1 = 0``. Taking ``p = 0.1`` he then can say the answer looks like ``2 + p + q`` and repeat to get ``q^3 + 6.3q^2 + 11.23q + 0.061 = 0``. Again taking just the linear part estimates `q = -0.005431...`. After two steps the estimate is `2.094568...`. This can be continued by expressing the answer as ``2 + p + q + r`` and then solving for an estimate for ``r``.
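The corrected numbers can be reproduced mechanically; the helper below is an illustrative sketch that, as described, solves only the linear part of each shifted polynomial:

```julia
# Newton's successive substitutions for x^3 - 2x - 5 = 0: at each stage solve
# just the linear part of f(c + p), i.e. f'(c) * p + f(c) = 0.
f(c)  = c^3 - 2c - 5
fp(c) = 3c^2 - 2            # the linear coefficient of the shifted polynomial
linear_step(c) = -f(c) / fp(c)
p = linear_step(2)          # 10p - 1 = 0, so p = 0.1
q = linear_step(2 + p)      # ≈ -0.005431
(2 + p, 2 + p + q)          # (2.1, 2.094568...)
```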
Raphson (1690) proposed a simplification avoiding the computation of new polynomials, hence the usual name of the Newton-Raphson method. Simpson introduced derivatives into the formulation and systems of equations.
@ -696,7 +696,7 @@ If $M$ were just a constant and we suppose $e_0 = 10^{-1}$ then $e_1$ would be l
To identify $M$, let $\alpha$ be the zero of $f$ to be approximated. Assume
-* The function $f$ has at continuous second derivative in a neighborhood of $\alpha$.
+* The function $f$ has a continuous second derivative in a neighborhood of $\alpha$.
* The value $f'(\alpha)$ is *non-zero* in the neighborhood of $\alpha$.
@ -865,7 +865,7 @@ The function $f(x) = x^{20} - 1$ has two bad behaviours for Newton's
method: for $x < 1$ the derivative is nearly $0$ and for $x>1$ the
second derivative is very big. In this illustration, we have an
initial guess of $x_0=8/9$. As the tangent line is fairly flat, the
-next approximation is far away, $x_1 = 1.313\dots$. As this guess is
+next approximation is far away, $x_1 = 1.313\dots$. As this guess
is much bigger than $1$, the ratio $f(x)/f'(x) \approx
x^{20}/(20x^{19}) = x/20$, so $x_i - f(x_i)/f'(x_i) \approx (19/20)x_i$
yielding slow, linear convergence until $f''(x_i)$ is moderate. For
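A few iterations make the slow phase visible (the iteration helper is an illustrative sketch):

```julia
# Newton iterations for f(x) = x^20 - 1 from x0 = 8/9: one faraway jump past 1,
# then roughly x -> (19/20) x per step while the guess stays well above 1.
function newton_steps(f, fp, x0, n)
    xs = [float(x0)]
    for _ in 1:n
        x = xs[end]
        push!(xs, x - f(x) / fp(x))
    end
    xs
end
xs = newton_steps(x -> x^20 - 1, x -> 20x^19, 8/9, 4)
xs[2]           # ≈ 1.313, the faraway first step
xs[3] / xs[2]   # ≈ 19/20, the slow linear phase
```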
@ -1033,7 +1033,7 @@ Let $f(x) = x^2 - 3^x$. This has derivative $2x - 3^x \cdot \log(3)$. Starting w
f(x) = x^2 - 3^x;
fp(x) = 2x - 3^x*log(3);
val = Roots.newton(f, fp, 0);
-numericq(val, 1e-14)
+numericq(val, 1e-1)
```
###### Question
@ -1424,7 +1424,7 @@ yesnoq("no")
###### Question
-Quadratic convergence of Newton's method only applies to *simple* roots. For example, we can see (using the `verbose=true` argument to the `Roots` package's `newton` method, that it only takes $4$ steps to find a zero to $f(x) = \cos(x) - x$ starting at $x_0 = 1$. But it takes many more steps to find the same zero for $f(x) = (\cos(x) - x)^2$.
+Quadratic convergence of Newton's method only applies to *simple* roots. For example, we can see (using the `verbose=true` argument to the `Roots` package's `newton` method) that it only takes $4$ steps to find a zero to $f(x) = \cos(x) - x$ starting at $x_0 = 1$. But it takes many more steps to find the same zero for $f(x) = (\cos(x) - x)^2$.
How many?
@ -1454,7 +1454,7 @@ implicit_plot(f, xlims=(-2,2), ylims=(-2,2), legend=false)
Can we find which point on its graph has the largest $y$ value?
-This would be straightforward *if* we could write $y(x) = \dots$, for then we would simply find the critical points and investiate. But we can't so easily solve for $y$ interms of $x$. However, we can use Newton's method to do so:
+This would be straightforward *if* we could write $y(x) = \dots$, for then we would simply find the critical points and investigate. But we can't so easily solve for $y$ in terms of $x$. However, we can use Newton's method to do so:
```{julia}

View File

@ -639,21 +639,21 @@ It is hard to tell which would minimize time without more work. To check a case
```{julia}
-x_straight = t(r1 =>2r0, b=>0, x=>out[1], a=>1, L=>2, r0 => 1) # for x=L
+x_straight = subs(t, r1 =>2r0, b=>0, x=>out[1], a=>1, L=>2, r0 => 1) # for x=L
```
Compared to the smaller ($x=\sqrt{3}a/3$):
```{julia}
-x_angle = t(r1 =>2r0, b=>0, x=>out[2], a=>1, L=>2, r0 => 1)
+x_angle = subs(t, r1 =>2r0, b=>0, x=>out[2], a=>1, L=>2, r0 => 1)
```
What about $x=0$?
```{julia}
-x_bent = t(r1 =>2r0, b=>0, x=>0, a=>1, L=>2, r0 => 1)
+x_bent = subs(t, r1 =>2r0, b=>0, x=>0, a=>1, L=>2, r0 => 1)
```
The value of $x=\sqrt{3}a/3$ minimizes time:
@ -671,7 +671,7 @@ Will this approach always be true? Consider different parameters, say we switch
```{julia}
pts = [0, out...]
-m,i = findmin([t(r1 =>2r0, b=>0, x=>u, a=>2, L=>1, r0 => 1) for u in pts]) # min, index
+m,i = findmin([subs(t, r1 =>2r0, b=>0, x=>u, a=>2, L=>1, r0 => 1) for u in pts]) # min, index
m, pts[i]
```
@ -997,7 +997,7 @@ A rain gutter is constructed from a 30" wide sheet of tin by bending it into thi
2 * (1/2 * 10*cos(pi/4) * 10 * sin(pi/4)) + 10*sin(pi/4) * 10
```
-Find a value in degrees that gives the maximum. (The first task is to write the area in terms of $\theta$.
+Find a value in degrees that gives the maximum. (The first task is to write the area in terms of $\theta$.)
```{julia}
@ -1049,7 +1049,7 @@ plot!(p, [0, 30,30,0], [0,10,30,0], color=:orange)
annotate!(p, [(x,y,l) for (x,y,l) in zip([15, 5, 31, 31], [1.5, 3.5, 5, 20], ["x=30", "θ", "10", "20"])])
```
-What value of $x$ gives the largest angle $\theta$? (In degrees.)
+What is the value of the largest angle $\theta$ that $x$ gives? (In degrees.)
```{julia}
@ -1094,7 +1094,7 @@ radioq(choices, answ)
##### Question
-Let $x_1$, $x_2$, $x_n$ be a set of unspecified numbers in a data set. Form the expression $s(x) = (x-x_1)^2 + \cdots (x-x_n)^2$. What is the smallest this can be (in $x$)?
+Let $x_1$, $x_2$, $\dots, x_n$ be a set of unspecified numbers in a data set. Form the expression $s(x) = (x-x_1)^2 + \cdots + (x-x_n)^2$. What is the smallest this can be (in $x$)?
We approach this using `SymPy` and $n=10$
@ -1108,7 +1108,7 @@ s(x) = sum((x-xi)^2 for xi in xs)
cps = solve(diff(s(x), x), x)
```
-Run the above code. Baseed on the critical points found, what do you guess will be the minimum value in terms of the values $x_1$, $x_2, \dots$?
+Run the above code. Based on the critical points found, what do you guess will be the minimum value in terms of the values $x_1$, $x_2, \dots$?
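For a concrete, illustrative data set, the derivative of $s$ vanishes exactly at the mean:

```julia
# s(x) = sum((x - xi)^2) has s'(x) = 2*sum(x - xi), which is zero exactly at the
# mean of the data. The data below are an illustrative choice.
xs = [1.0, 3.0, 4.0, 7.0, 10.0]
s(x) = sum((x - xi)^2 for xi in xs)
sp(x) = 2 * sum(x - xi for xi in xs)
xbar = sum(xs) / length(xs)
(xbar, sp(xbar))   # (5.0, 0.0)
```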
```{julia}
@ -1117,7 +1117,7 @@ Run the above code. Baseed on the critical points found, what do you guess will
choices=[
"The mean, or average, of the values",
"The median, or middle number, of the values",
L"The square roots of the values squared, $(x_1^2 + \cdots x_n^2)^2$"
L"The square roots of the values squared, $(x_1^2 + \cdots + x_n^2)^2$"
]
answ = 1
radioq(choices, answ)
@ -1126,7 +1126,7 @@ radioq(choices, answ)
###### Question
-Minimize the function $f(x) = 2x + 3/x$ over $(0, \infty)$.
+Find $x$ to minimize the function $f(x) = 2x + 3/x$ over $(0, \infty)$.
```{julia}
@ -1190,7 +1190,7 @@ The width is:
w(h) = 12_000 / h
S(w, h) = (w- 2*8) * (h - 2*32)
S(h) = S(w(h), h)
-hstar =find_zero(D(S), 500)
+hstar = find_zero(D(S), 200)
wstar = w(hstar)
numericq(wstar)
```
@ -1204,7 +1204,7 @@ The height is?
w(h) = 12_000 / h
S(w, h) = (w- 2*8) * (h - 2*32)
S(h) = S(w(h), h)
-hstar =find_zero(D(S), 500)
+hstar = find_zero(D(S), 200)
numericq(hstar)
```

View File

@ -118,7 +118,7 @@ The term "best" is deserved, as any other straight line will differ at least in
$$
\begin{align*}
\frac{F'(\xi)}{G'(\xi)} &=
-\frac{f'(\xi) - f''(\xi)(\xi-x) - f(\xi)\cdot 1}{2(\xi-x)} \\
+\frac{f'(\xi) - f''(\xi)(\xi-x) - f'(\xi)\cdot 1}{2(\xi-x)} \\
&= -f''(\xi)/2\\
&= \frac{F(c) - F(x)}{G(c) - G(x)}\\
&= \frac{f(c) - f'(c)(c-x) - (f(x) - f'(x)(x-x))}{(c-x)^2 - (x-x)^2} \\
@ -445,7 +445,7 @@ This can be solved to give this relationship:
$$
-\frac{d^2\theta}{dt^2} = - \frac{g}{R}\theta.
+\frac{d^2\theta}{dt^2} = \frac{g}{R}\theta.
$$
The solution to this "equation" can be written (in some parameterization) as $\theta(t)=A\cos \left(\omega t+\phi \right)$. This motion is the well-studied simple [harmonic oscillator](https://en.wikipedia.org/wiki/Harmonic_oscillator), a model for a simple pendulum.
@ -721,7 +721,7 @@ The height of a [GPS satellite](http://www.gps.gov/systems/gps/space/) is about
```{julia}
-Hₛ = 12250 * 1609.34 # 1609 meters per mile
+Hₛ = 12550 * 1609.34 # 1609 meters per mile
HRₛ = Hₛ/R
Prealₛ = P0 * (1 + HRₛ)^(3/2)
@ -753,7 +753,7 @@ Finally, we show how to use the `Unitful` package. This package allows us to def
m, mi, kg, s, hr = u"m", u"mi", u"kg", u"s", u"hr"
G = 6.67408e-11 * m^3 / kg / s^2
-H = uconvert(m, 12250 * mi) # unit convert miles to meter
+H = uconvert(m, 12550 * mi) # unit convert miles to meter
R = uconvert(m, 3959 * mi)
M = 5.972e24 * kg
@ -858,8 +858,8 @@ For notational purposes, let $g(x)$ be the inverse function for $f(x)$. Assume *
$$
\begin{align*}
-f(x_0 + \Delta_x) &= f(x_0) + a_1 \Delta_x + a_2 (\Delta_x)^2 + \cdots a_n (\Delta_x)^n + \dots\\
-g(y_0 + \Delta_y) &= g(y_0) + b_1 \Delta_y + b_2 (\Delta_y)^2 + \cdots b_n (\Delta_y)^n + \dots
+f(x_0 + \Delta_x) &= f(x_0) + a_1 \Delta_x + a_2 (\Delta_x)^2 + \cdots + a_n (\Delta_x)^n + \dots\\
+g(y_0 + \Delta_y) &= g(y_0) + b_1 \Delta_y + b_2 (\Delta_y)^2 + \cdots + b_n (\Delta_y)^n + \dots
\end{align*}
$$
@ -897,7 +897,7 @@ $$
(This is following [Liptaj](https://vixra.org/pdf/1703.0295v1.pdf)).
-We will use `SymPy` to take this limit for the first `4` derivatives. Here is some code that expands $x + \Delta_x = g(f(x_0 + \Delta_x))$ and then uses `SymPy` to solve:
+We will use `SymPy` to take this limit for the first `4` derivatives. Here is some code that expands $x_0 + \Delta_x = g(f(x_0 + \Delta_x))$ and then uses `SymPy` to solve:
```{julia}