em dash; sentence case

This commit is contained in:
jverzani
2025-07-27 15:26:00 -04:00
parent c3b221cd29
commit 33c6e62d68
59 changed files with 385 additions and 243 deletions


@@ -81,7 +81,7 @@ $$
\| \vec{v} \| = \sqrt{ v_1^2 + v_2^2 + \cdots + v_n^2}.
$$
-The definition of a norm leads to a few properties. First, if $c$ is a scalar, $\| c\vec{v} \| = |c| \| \vec{v} \|$ - which says scalar multiplication by $c$ changes the length by $|c|$. (Sometimes, scalar multiplication is described as "scaling by....") The other property is an analog of the triangle inequality, in which for any two vectors $\| \vec{v} + \vec{w} \| \leq \| \vec{v} \| + \| \vec{w} \|$. The right hand side is equal only when the two vectors are parallel.
+The definition of a norm leads to a few properties. First, if $c$ is a scalar, $\| c\vec{v} \| = |c| \| \vec{v} \|$---which says scalar multiplication by $c$ changes the length by $|c|$. (Sometimes, scalar multiplication is described as "scaling by....") The other property is an analog of the triangle inequality, in which for any two vectors $\| \vec{v} + \vec{w} \| \leq \| \vec{v} \| + \| \vec{w} \|$. The right hand side is equal only when the two vectors are parallel.
A vector with length $1$ is called a *unit* vector. Dividing a non-zero vector by its norm will yield a unit vector, a consequence of the first property above. Unit vectors are often written with a "hat:" $\hat{v}$.
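A quick check of these properties, as a minimal sketch with assumed values (the `norm` function is in the standard `LinearAlgebra` library):

```{julia}
using LinearAlgebra
v = [3, 4]
norm(v)                  # 5.0, as sqrt(3^2 + 4^2) = 5
norm(2v) == 2 * norm(v)  # the scaling property: true
v̂ = v / norm(v)          # a unit vector
norm(v̂) ≈ 1              # true, up to floating point
```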
@@ -234,7 +234,7 @@ A simple example might be to add up a sequence of numbers. A direct way might be
x1, x2, x3, x4, x5, x6 = 1, 2, 3, 4, 5, 6
x1 + x2 + x3 + x4 + x5 + x6
```
One doesn't need to know `Julia`'s syntax to guess what this computes, save for the idiosyncratic tuple assignment used, which could have been bypassed at the cost of even more typing.
A more efficient means to do this, as each component need not be named, would be to store the data in a container:
@@ -267,7 +267,7 @@ These two functions are *reductions*. There are others, such as `maximum` and `m
reduce(+, xs; init=0) # sum(xs)
```
or
```{julia}
reduce(*, xs; init=1) # prod(xs)
@@ -289,9 +289,9 @@ and
foldr(=>, xs)
```
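The difference in grouping between the two folds can be seen in a small sketch (the values here are assumed for illustration):

```{julia}
xs = [1, 2, 3]
foldl(=>, xs)  # (1 => 2) => 3, grouping from the left
foldr(=>, xs)  # 1 => (2 => 3), grouping from the right
```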
Next, we do a slightly more complicated problem.
Recall the distance formula between two points, also called the *norm*. It is written here with the square root on the other side: $d^2 = (x_1-y_1)^2 + (x_0 - y_0)^2$. This computation can be usefully generalized to higher dimensional points (with $n$ components each).
This first example shows how the value for $d^2$ can be found using broadcasting and `sum`:
@@ -309,10 +309,10 @@ This formula is a sum after applying an operation to the paired off values. Usin
sum((xi - yi)^2 for (xi, yi) in zip(xs, ys))
```
The `zip` function, used above, produces an iterator over tuples of the paired off values in the two (or more) containers passed to it.
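For example, `collect` can realize the iterator; note that `zip` stops with the shortest of its arguments:

```{julia}
collect(zip([1, 2, 3], [4, 5, 6]))  # [(1, 4), (2, 5), (3, 6)]
collect(zip([1, 2, 3], [4, 5]))     # [(1, 4), (2, 5)]; stops at the shorter
```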
-This pattern -- where a reduction follows a function's application to the components -- is implemented in `mapreduce`.
+This pattern---where a reduction follows a function's application to the components---is implemented in `mapreduce`.
```{julia}
@@ -337,7 +337,7 @@ mapreduce((xi,yi) -> (xi-yi)^2, +, xs, ys)
At times, extracting all but the first or last value can be of interest. For example, a polygon with $n$ points (the vertices) might be stored using vectors for the $x$ and $y$ values, with an additional point that mirrors the first. Here are the points:
```{julia}
xs = [1, 3, 4, 2]
ys = [1, 1, 2, 3]
pts = zip(xs, ys) # recipe for [(x1,y1), (x2,y2), (x3,y3), (x4,y4)]
```
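A sketch of how the mirrored point lets consecutive vertices be paired into edges (the padded values below are assumptions for illustration):

```{julia}
xs = [1, 3, 4, 2, 1]  # padded: the last point mirrors the first
ys = [1, 1, 2, 3, 1]
# zip the points with their successors to form the polygon's edges
edges = zip(zip(xs, ys), zip(xs[2:end], ys[2:end]))
length(collect(edges))  # 4 edges for 4 vertices
```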
@@ -392,7 +392,7 @@ The `take` method could be used to remove the padded value from the `xs` and `ys
##### Example: Riemann sums
In the computation of a Riemann sum, the interval $[a,b]$ is partitioned using $n+1$ points $a=x_0 < x_1 < \cdots < x_{n-1} < x_n = b$.
```{julia}
a, b, n = 0, 1, 4
@@ -414,7 +414,7 @@ sum(f ∘ first, partitions)
```
This uses a few things: like `mapreduce`, `sum` allows a function to
be applied to each element in the `partitions` collection. (Indeed, the default method to compute `sum(xs)` for an arbitrary container resolves to `mapreduce(identity, add_sum, xs)` where `add_sum` is basically `+`.)
In this case, the
values are passed as tuples to the function applied to each element.
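A minimal check of this equivalence, with assumed values:

```{julia}
xs = [1, 2, 3, 4]
sum(x -> x^2, xs)           # 30
mapreduce(x -> x^2, +, xs)  # also 30
```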
@@ -636,7 +636,7 @@ But the associative property does not make sense, as $(\vec{u} \cdot \vec{v}) \c
## Matrices
-Algebraically, the dot product of two vectors - pair off by components, multiply these, then add - is a common operation. Take for example, the general equation of a line, or a plane:
+Algebraically, the dot product of two vectors---pair off by components, multiply these, then add---is a common operation. Take for example, the general equation of a line, or a plane:
$$
@@ -764,7 +764,7 @@ Vectors are defined similarly. As they are identified with *column* vectors, we
```{julia}
-𝒷 = [10, 11, 12] # not 𝒷 = [10 11 12], which would be a row vector.
+a = [10, 11, 12] # not a = [10 11 12], which would be a row vector.
```
In `Julia`, entries in a matrix (or a vector) are stored in a container with a type wide enough to accommodate each entry. In this example, the type is SymPy's `Sym` type:
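The promotion to a wide-enough container type can be seen with base types (a sketch with assumed values; the surrounding example uses SymPy's `Sym` type instead):

```{julia}
eltype([1, 2, 3])     # Int64 (on 64-bit systems)
eltype([1, 2.0, 3])   # Float64; the integers are promoted
eltype([1, "two"])    # Any; no common promotion exists
```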
@@ -822,7 +822,7 @@ We can then see how the system of equations is represented with matrices:
```{julia}
-M * xs - 𝒷
+M * xs - a
```
Here we use `SymPy` to verify the above:
@@ -899,7 +899,7 @@ and
```
:::{.callout-note}
## Note
-The adjoint is defined *recursively* in `Julia`. In the `CalculusWithJulia` package, we overload the `'` notation for *functions* to yield a univariate derivative found with automatic differentiation. This can lead to problems: if we have a matrix of functions, `M`, and took the transpose with `M'`, then the entries of `M'` would be the derivatives of the functions in `M` - not the original functions. This is very much likely to not be what is desired. The `CalculusWithJulia` package commits **type piracy** here *and* abuses the generic idea for `'` in Julia. In general type piracy is very much frowned upon, as it can change expected behaviour. It is defined in `CalculusWithJulia`, as that package is intended only to act as a means to ease users into the wider package ecosystem of `Julia`.
+The adjoint is defined *recursively* in `Julia`. In the `CalculusWithJulia` package, we overload the `'` notation for *functions* to yield a univariate derivative found with automatic differentiation. This can lead to problems: if we have a matrix of functions, `M`, and took the transpose with `M'`, then the entries of `M'` would be the derivatives of the functions in `M`---not the original functions. This is very much likely to not be what is desired. The `CalculusWithJulia` package commits **type piracy** here *and* abuses the generic idea for `'` in Julia. In general type piracy is very much frowned upon, as it can change expected behaviour. It is defined in `CalculusWithJulia`, as that package is intended only to act as a means to ease users into the wider package ecosystem of `Julia`.
:::
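For a numeric matrix the recursive definition means each entry is itself adjointed (conjugated, for complex numbers), as this sketch with assumed values shows:

```{julia}
M = [1 + 2im  3im;
     4        5 + 0im]
M'                         # the adjoint: transpose with conjugated entries
M'[1, 2] == conj(M[2, 1])  # true
```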
---
@@ -1081,7 +1081,7 @@ norm(u₂ × v₂)
---
-This analysis can be extended to the case of 3 vectors, which - when not co-planar - will form a *parallelepiped*.
+This analysis can be extended to the case of 3 vectors, which---when not co-planar---will form a *parallelepiped*.
```{julia}