edits in alternatives
This commit is contained in:
parent 93dc010a3c
commit 9ded7207ff
@@ -1,7 +1,7 @@
name = "CalculusWithJuliaNotes"
uuid = "8cd3c377-0a30-4ec5-b85a-75291d749efe"
authors = ["jverzani <jverzani@gmail.com> and contributors"]
-version = "0.1.3"
+version = "0.1.4"

[compat]
julia = "1"
@@ -1,4 +1,4 @@
-version: 0.6
+version: 0.7

project:
  type: book
@@ -2,7 +2,7 @@

These notes use a particular selection of packages. This selection could have been different. For example:

-* The symbolic math is provided by `SymPy`. [Symbolics](./alternatives/Symbolics.html) (along with `SymbolicUtils` and `ModelingToolkit`) provides an alternative.
+* The symbolic math is provided by `SymPy`. [Symbolics](./alternatives/symbolics.html) (along with `SymbolicUtils` and `ModelingToolkit`) provides an alternative.

* The finding of zeros of scalar-valued, univariate functions is done with `Roots`. The [NonlinearSolve](./alternatives/SciML.html#nonlinearsolve) package provides an alternative for univariate and multi-variate functions.
@@ -5,22 +5,17 @@
The `Julia` ecosystem advances rapidly. For much of it, the driving force is the [SciML](https://github.com/SciML) organization (Scientific Machine Learning).


-In this section we describe some packages provided by this organization that could be used as alternatives to the ones utilized in these notes. Members of this organization created many packages for solving different types of differential equations, and have branched out from there. Many newer efforts of this organization have been to write uniform interfaces to other packages in the ecosystem, some of which are discussed below. We don't discuss the promise of SCIML: "Performance is considered a priority, and performance issues are considered bugs," as we don't pursue features like in-place modification, sparsity, etc. Interested readers should consult the relevant packages documentation.
+In this section we describe some packages provided by this organization that could be used as alternatives to the ones utilized in these notes. Members of this organization created many packages for solving different types of differential equations and have branched out from there. Many of the organization's newer efforts provide uniform interfaces to other packages in the ecosystem, some of which are discussed below. We don't discuss this key promise of SciML: "Performance is considered a priority, and performance issues are considered bugs," as we don't pursue features like in-place modification, sparsity, etc. Interested readers should consult the relevant packages' documentation.


-The basic structure to use these packages is the "problem-algorithm-solve" interface described in [The problem-algorithm-solve interface](../ODEs/solve.html). We also discussed this interface a bit in [ODEs](../ODEs/differential_equations.html).
+The basic structure to use these packages is the "problem-algorithm-solve" interface described in the section [The problem-algorithm-solve interface](../ODEs/solve.html). We also discussed this interface a bit in [ODEs](../ODEs/differential_equations.html).
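For orientation, the pattern looks roughly like the following minimal sketch, which mirrors the `NonlinearSolve` example added later in this commit (the function name, function, and starting value here are illustrative only):

```julia
using NonlinearSolve

# 1. the problem: a function of (unknown, parameters) plus an initial guess
fn(u, p) = u^2 - 2
prob = NonlinearProblem(fn, 1.0)

# 2. an algorithm, and 3. solve
soln = solve(prob, NewtonRaphson())
```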
:::{.callout-note}
## Note
-These packages are in a process of rapid development and change to them is expected. These notes were written using the following versions:
+These packages are in a process of rapid development, and changes to them are expected.

:::

-```{julia}
-pkgs = ["Symbolics", "NonlinearSolve", "Optimization", "Integrals"]
-import Pkg; Pkg.status(pkgs)
-```

## Symbolic math (`Symbolics`)
@@ -31,7 +26,7 @@ The `Symbolics`, `SymbolicUtils`, and `ModelingToolkit` packages are provided by
## Solving equations


-Solving one or more equations (simultaneously) is different in the linear case (where solutions are readily found – though performance can distinguish approaches – and the nonlinear case – where for most situations, numeric approaches are required.
+Solving one or more equations (simultaneously) differs between the linear case, where solutions are readily found (though performance can distinguish approaches), and the nonlinear case, where for most situations numeric approaches are required.


### `LinearSolve`
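The body of this subsection falls outside the hunks shown here. For orientation, a minimal, hedged sketch of the `LinearSolve` problem-algorithm-solve pattern (the matrix and vector are illustrative, not taken from the notes):

```julia
using LinearSolve

A = [1.0 2.0; 3.0 4.0]      # illustrative coefficient matrix
b = [5.0, 6.0]              # illustrative right-hand side
prob = LinearProblem(A, b)  # 1. the problem
sol = solve(prob)           # 2./3. default algorithm, then solve; comparable to A \ b
sol.u                       # the solution vector
```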
@@ -53,14 +48,22 @@ The package is loaded through the following command:
using NonlinearSolve
```

-Unlike `Roots`, the package handles problems beyond the univariate case, as such the simplest problems have a little extra setup required.
+Unlike `Roots`, the package handles problems beyond the univariate case; as such, the simplest problems require a little extra setup.


For example, suppose we want to use this package to solve for zeros of $f(x) = x^5 - x - 1$. We could do so a few different ways.

+An approach that most closely mirrors that of `Roots` would be:

-First, we need to define a `Julia` function representing `f`. We do so with:
+```{julia}
+f(x, p) = x^5 - x - 1
+prob = NonlinearProblem(f, 1.0)
+solve(prob, NewtonRaphson())
+```

+The `NewtonRaphson` method uses a derivative of `f`, which `NonlinearSolve` computes using automatic differentiation.

+However, it is more performant and not much more work to allow for a vector of starting values. For this, `f` can be defined as:

```{julia}
f(u, p) = @. (u^5 - u - 1)
@@ -78,14 +81,16 @@ u0 = @SVector[1.0]
prob = NonlinearProblem(f, u0)
```

-The problem is solved by calling `solve` with an appropriate method specified. Here we use Newton's method. The derivative of `f` is computed automatically.
+The problem is solved by calling `solve` with an appropriate method specified. Here we use Newton's method.


```{julia}
soln = solve(prob, NewtonRaphson())
```

-The basic interface for retrieving the solution from the solution object is to use indexing:
+Again, the derivative of `f` is computed automatically.

+The basic interface for retrieving the numeric solution from the solution object is to use indexing:


```{julia}
@@ -94,12 +99,10 @@ soln[]

---


:::{.callout-note}
## Note
-This interface is more performant than `Roots`, though it isn't an apples to oranges comparison as different stopping criteria are used by the two. In order to be so, we need to help out the call to `NonlinearProblem` to indicate the problem is non-mutating by adding a "`false`", as follows:
+This interface is more performant than `Roots`, though it isn't an apples-to-apples comparison, as the two use different stopping criteria. To make the comparison, we help out the call to `NonlinearProblem` by adding a "`false`" to indicate the problem is non-mutating, as follows:

-:::

```{julia}
using BenchmarkTools
@@ -117,7 +120,8 @@ gp(x) = ForwardDiff.derivative(g, x)
@btime solve(Roots.ZeroProblem((g, gp), 1.0), Roots.Newton())
```

----
+:::


This problem can also be solved using a bracketing method. The package has both `Bisection` and `Falsi` as possible methods. To use a bracketing method, the initial bracket must be specified.
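The setup for that call sits outside this hunk. A hedged sketch, mirroring the `NonlinearProblem{false}(f, u0, p)`/`Bisection()` call that appears in the next hunk (a fresh name is used so as not to clash with the vector-valued `f` above; the bracket endpoints are illustrative):

```julia
g(u, p) = u^5 - u - 1                       # the same zero problem, scalar form
bracket = (1.0, 2.0)                        # g(1) < 0 < g(2), so a zero lies in [1, 2]
prob = NonlinearProblem{false}(g, bracket)  # non-mutating problem over a bracketing interval
solve(prob, Bisection())
```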
@@ -153,7 +157,6 @@ solve(prob, Bisection())
## Note
The *insignificant* difference in stopping criteria used by `NonlinearSolve` and `Roots` is illustrated in this example, where the value returned by `NonlinearSolve` differs by one floating point value:

-:::

```{julia}
an = solve(NonlinearProblem{false}(f, u0, p), Bisection())
@@ -161,10 +164,11 @@ ar = solve(Roots.ZeroProblem(f, u0), Roots.Bisection(); p=p)
nextfloat(an[]) == ar, f(an[], p), f(ar, p)
```

----
+:::


-We can solve for several parameters at once, by using an equal number of initial positions as follows:
+We can solve for several parameters at once by using a matching number of initial positions, as follows:


```{julia}
@@ -208,7 +212,7 @@ We can see that this identified value is a "zero" through:
∇peaks(u.u)
```

-### Using Modeling toolkit to model the non-linear problem
+### Using `ModelingToolkit` to model a non-linear problem


Nonlinear problems can also be approached symbolically using the `ModelingToolkit` package. There is one additional step necessary.
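A hedged sketch of what such a symbolic setup can look like; the exact API, and the "additional step" referred to here, depend on the `ModelingToolkit` version (the names below follow the package's documented `NonlinearSystem` interface and are not taken from the notes; newer versions may also require a `complete` or `structural_simplify` call):

```julia
using ModelingToolkit, NonlinearSolve

@variables x
eqs = [0 ~ x^5 - x - 1]                   # the equation, written symbolically
@named ns = NonlinearSystem(eqs, [x], []) # symbolic system: equations, unknowns, parameters
prob = NonlinearProblem(ns, [x => 1.0])   # lower the symbolic system to a numeric problem
solve(prob, NewtonRaphson())
```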
@@ -260,7 +264,7 @@ solve(prob, NewtonRaphson())
We describe briefly the `Optimization` package which provides a common interface to *numerous* optimization packages in the `Julia` ecosystem. We discuss only the interface for `Optim.jl` defined in `OptimizationOptimJL`.


-We begin with a simple example from first semester calculus:
+We begin with a standard example from first semester calculus:


> Among all rectangles of fixed perimeter, find the one with the *maximum* area.
@@ -270,9 +274,16 @@ We begin with a simple example from first semester calculus:
If the perimeter is taken to be $25$, the mathematical setup has a constraint ($P=25=2x+2y$) and an objective ($A=xy$) to maximize. In this case, the function to *maximize* is $A(x) = x \cdot (25-2x)/2$. This is easily done different ways, such as finding the one critical point and identifying this as the point of maximum.
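For reference, the critical-point computation is short:

$$
A(x) = \frac{x(25-2x)}{2} = \frac{25}{2}x - x^2, \qquad A'(x) = \frac{25}{2} - 2x = 0 \implies x = \frac{25}{4},\ y = \frac{25-2x}{2} = \frac{25}{4},
$$

so the rectangle of maximal area is a square.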
-To do this last step using `Optimization` we would have.
+To do this last step using the `Optimization` package, we must first load the package **and** the underlying backend glue code we intend to use:


+```{julia}
+using Optimization
+using OptimizationOptimJL
+```

+Our objective function is defined using an intermediate function derived from the constraint:

```{julia}
height(x) = @. (25 - 2x)/2
A(x, p=nothing) = @.(- x * height(x))
@@ -281,13 +292,6 @@ A(x, p=nothing) = @.(- x * height(x))
The minus sign is needed here as optimization routines find *minimums*, not maximums.


-To use `Optimization` we must load the package **and** the underlying backend glue code we intend to use:


-```{julia}
-using Optimization
-using OptimizationOptimJL
-```

Next, we define an optimization function with information on how its derivatives will be taken. The following uses `ForwardDiff`, which is a good choice in the typical calculus setting, where there are a small number of inputs (just $1$ here.)
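The notes' actual block is not part of these hunks; the next hunk picks up at the `solve` call. As a hedged, self-contained illustration of the same pattern (using a plain scalar objective rather than the `A` above, and an arbitrary starting guess):

```julia
using Optimization, OptimizationOptimJL, ForwardDiff

area(x, p) = -(x[1] * (25 - 2x[1]) / 2)                         # negative area, returned as a scalar
F = OptimizationFunction(area, Optimization.AutoForwardDiff())  # derivatives (and Hessian) via ForwardDiff
prob = OptimizationProblem(F, [4.0])                            # [4.0] is an arbitrary starting guess
soln = solve(prob, Newton())                                    # Newton() is provided by OptimizationOptimJL
soln.u                                                          # ≈ [6.25]
```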
@@ -307,7 +311,7 @@ soln = solve(prob, Newton())

:::{.callout-note}
## Note
-We use `Newton` not `NewtonRaphson` as above. Both methods are similar, but they come from different uses – for latter for solving non-linear equation(s), the former for solving optimization problems.
+We use the method `Newton`, not `NewtonRaphson` as above. Both methods are similar, but they come from different packages: the latter is for solving non-linear equation(s), the former for solving optimization problems.

:::

@@ -697,7 +701,6 @@ So we have $\iint_{G(R)} x^2 dA$ is computed by the following with $\alpha=\pi/4
```{julia}
import LinearAlgebra: det

𝑓(uv) = uv[1]^2

function G(uv)
@@ -706,6 +709,7 @@ function G(uv)
    u,v = uv
    [cos(α)*u - sin(α)*v, sin(α)*u + cos(α)*v]
end

f(u, p) = (𝑓∘G)(u) * det(ForwardDiff.jacobian(G, u))
@@ -731,4 +735,3 @@ f(x, p) = [x[1], x[2]^2]
prob = IntegralProblem(f, [0,0],[3,4], nout=2)
solve(prob, HCubatureJL())
```
-
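For reference, the two components returned by this last calculation are $\int_0^4\int_0^3 x \, dx \, dy = 18$ and $\int_0^4\int_0^3 y^2 \, dx \, dy = 64$, one for each component of the integrand (`nout=2` indicates that the integrand returns two components).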
@@ -25,7 +25,7 @@ using Symbolics
Symbolic math at its core involves symbolic variables, which essentially defer evaluation until requested. The creation of symbolic variables differs between the two packages discussed here.


-`SymbolicUtils` creates variables which carry `Julia` type information (e.g. `Int`, `Float64`, ...). This type information carries through operations involving these variables. Symbolic variables can be created with the `@syms` macro. For example
+`SymbolicUtils` creates variables which carry `Julia` type information (e.g. `Int`, `Float64`, ...). This type information carries through operations involving these variables. Symbolic variables can be created with the `@syms` macro. For example:


```{julia}
@@ -104,7 +104,7 @@ typeof(x), symtype(x), typeof(Symbolics.value(x))
## Symbolic expressions


-Symbolic expressions are built up from symbolic variables through natural `Julia` idioms. `SymbolicUtils` privileges a few key operations: `Add`, `Mul`, `Pow`, and `Div`. For examples:
+Symbolic expressions are built up from symbolic variables through natural `Julia` idioms. `SymbolicUtils` privileges a few key operations: `Add`, `Mul`, `Pow`, and `Div`. For example:


```{julia}
@@ -287,7 +287,7 @@ ex = sin(x)^2 + cos(x)^2
ex, simplify(ex)
```

-The `simplify` function applies a series of rewriting rule until the expression stabilizes. The rewrite rules can be user generated, if desired. For example, the Pythagorean identity of trigonometry, just used, can be implement with this rule:
+The `simplify` function applies a series of rewriting rules until the expression stabilizes. The rewrite rules can be user-generated, if desired. For example, the Pythagorean identity of trigonometry, just used, can be implemented with this rule:


```{julia}
@@ -341,7 +341,7 @@ The notation `~x` is called a "slot variable" in the [documentation](https://sym
### Creating functions


-By utilizing the tree-like nature of a symbolic expression, a `Julia` expression can be built from an symbolic expression easily enough. The `Symbolics.toexpr` function does this:
+By utilizing the tree-like nature of a symbolic expression, a `Julia` expression can be built from a symbolic expression easily enough. The `Symbolics.toexpr` function does this:


```{julia}
@@ -443,7 +443,7 @@ Symbolics.get_variables(ex)
There are some facilities for manipulating polynomial expressions in `Symbolics`. A polynomial, mathematically, is an expression involving one or more symbols with coefficients from a collection that has, at a minimum, addition and multiplication defined. The basic building blocks of polynomials are *monomials*, which are comprised of products of powers of the symbols. Mathematically, monomials are often allowed to have a multiplying coefficient and may be just a coefficient (if each symbol is taken to the power $0$), but here we consider just expressions of the type $x_1^{a_1} \cdot x_2^{a_2} \cdots x_k^{a_k}$ with the $a_i > 0$ as monomials.


-With this understanding, then an expression can be broken up into monomials with a possible leading coefficient (possibly $1$) *and* terms which are not monomials (such as a constant or a more complicated function of the symbols). This is what is returned by the `polynomial_coeffs` function.
+With this understanding, an expression can be broken up into monomials with a possible coefficient (possibly just $1$) *and* terms which are not monomials (such as a constant or a more complicated function of the symbols). This is what is returned by the `polynomial_coeffs` function.


For example
@@ -454,7 +454,7 @@ For example
d, r = polynomial_coeffs(a*x^2 + b*x + c, (x,))
```

-The first term output is dictionary with keys which are the monomials and with values which are the coefficients. The second term, the residual, is all the remaining parts of the expression, in this case just the constant `c`.
+The first output is a dictionary whose keys are the monomials and whose values are the coefficients. The second, the residual, is all the remaining parts of the expression, in this case just the constant `c`.


The expression can then be reconstructed through
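The reconstruction idiom itself falls outside this hunk. One hedged possibility (the notes may use a different idiom), relying on the `d` and `r` just computed:

```julia
# sum coefficient * monomial over the dictionary, then add back the residual
r + sum(v * k for (k, v) in d)    # recovers a*x^2 + b*x + c
```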
@@ -513,7 +513,7 @@ Mathematically the degree of the $0$ polynomial may be $-1$ or undefined, but he
degree(0), degree(1), degree(x), degree(x^a)
```

-The coefficients are returned as *values* of a dictionary, and dictionaries are unsorted. To have a natural map between polynomials of a single symbol in the standard basis and a vector, we could use a pattern like this:
+The coefficients are returned as *values* of a dictionary, and dictionaries are unsorted.


```{julia}
@@ -523,7 +523,7 @@ d, r = polynomial_coeffs(p, (x,))
d
```

-To sort the values we can use a pattern like the following:
+To have a natural map between polynomials of a single symbol in the standard basis and a vector, we could use a pattern like this to sort the values:


```{julia}
@@ -533,7 +533,7 @@ vcat(r, [d[k] for k ∈ sort(collect(keys(d)), by=degree)])
---


-Rational expressions can be decomposed into a numerator and denominator using the following idiom, which ensures the outer operation is division (a binary operation):
+Rational expressions can be decomposed into a numerator and denominator using the following idiom, which assumes the outer operation is division (a binary operation):


```{julia}
@@ -591,7 +591,7 @@ and
norm(collect(v))
```

-Matrix multiplication is also deferred, but the size compatability of the matrices and vectors is considered early:
+Matrix multiplication is also deferred, but the size compatibility of the matrices and vectors is checked immediately:


```{julia}
@@ -775,7 +775,7 @@ The `SymbolicNumericIntegration` package includes many more predicates for doing
If $f(x)$ is to be integrated, a set of *candidate* answers is generated. The following is **proposed** as an answer: $\sum q_i \Theta_i(x)$. Differentiating the proposed answer leads to a *linear system of equations* that can be solved.


-The example in the [paper](https://arxiv.org/pdf/2201.12468v2.pdf) describing the method is with $f(x) = x \sin(x)$ and the candidate thetas are ${x, \sin(x), \cos(x), x\sin(x), x\cos(x)}$ so that the propose answer is:
+The example in the [paper](https://arxiv.org/pdf/2201.12468v2.pdf) describing the method is with $f(x) = x \sin(x)$ and the candidate thetas are $\{x, \sin(x), \cos(x), x\sin(x), x\cos(x)\}$ so that the proposed answer is:


$$
@@ -896,4 +896,3 @@ cs = [first(arguments(term)) for term ∈ arguments(a)] # pick off coefficients
```{julia}
rationalize.(cs; tol=1e-8)
```
-