Linearization. We write \(f(x+\Delta x) - f(x) \approx f'(x)\Delta x\), where \(\Delta x\) is a small displacement from \(x\). The reason there isn't equality is the unwritten higher-order terms that vanish in a limit.
Alternate limits. Another way of writing this is with the higher-order terms made explicit through little-\(\mathscr{o}\) notation:
\[
(f(x+h) - f(x)) - f'(x)h = \mathscr{o}(h),
\]
which means if we divide both sides by \(h\) and take the limit, we will get \(0\) on the right and the relationship on the left.
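A quick numeric illustration (an assumed example, with \(f = \sin\)): the quotient \((f(x+h) - f(x) - f'(x)h)/h\) shrinks with \(h\):

```julia
# Illustrative check: for f(x) = sin(x) the remainder
# f(x+h) - f(x) - f'(x)h, divided by h, tends to 0 as h shrinks.
f(x)  = sin(x)
fp(x) = cos(x)
x = 1.0
for h in (0.1, 0.01, 0.001)
    println((f(x + h) - f(x) - fp(x) * h) / h)  # roughly h * f''(x) / 2
end
```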
Differential notation simply writes this as \(dy = f'(x)dx\). More verbosely, we might write
\[
df = f(x+dx) - f(x) = f'(x) dx.
\]
Here \(dx\) is a differential, made rigorous by a limit, which hides the higher order terms.
In these notes the limit has been defined, with suitable modification, for functions of vectors (multiple inputs) with scalar or vector outputs.
For example, when \(f: R \rightarrow R^m\) was a vector-valued function the derivative was defined similarly through a limit of \((f(t + \Delta t) - f(t))/{\Delta t}\), where each component needed to have a limit. This can be rewritten through \(f(t + dt) - f(t) = f'(t) dt\), again using differentials to avoid the higher order terms.
When \(f: R^n \rightarrow R\) is a scalar-valued function of a vector, differentiability was defined by a gradient existing with \(f(c+h) - f(c) - \nabla{f}(c) \cdot h\) being \(\mathscr{o}(\|h\|)\). In other words \(df = f(c + dh) - f(c) = \nabla{f}(c) \cdot dh\). The gradient has the same shape as \(c\), a column vector. If we take the row vector (e.g. \(f'(c) = \nabla{f}(c)^T\)) then again we see \(df = f(c+dh) - f(c) = f'(c) dh\), where the last term uses matrix multiplication of a row vector times a column vector.
Finally, when \(f:R^n \rightarrow R^m\), the Jacobian was defined and characterized by \(\| f(x + dx) - f(x) - J_f(x)dx \|\) being \(\mathscr{o}(\|dx\|)\). Again, we can express this through \(df = f(x + dx) - f(x) = f'(x)dx\) where \(f'(x) = J_f(x)\).
In writing \(df = f(x + dx) - f(x) = f'(x) dx\) generically, some underlying facts are left implicit: \(dx\) has the same shape as \(x\) (so can be added); \(f'(x) dx\) may mean usual multiplication or matrix multiplication; and there is an underlying concept of distance and size that allows the above to be rigorous. This may be an absolute value or a norm.
Further, various differentiation rules apply such as the sum, product, and chain rule.
The @BrightEdelmanJohnson notes cover differentiation of functions in this uniform manner and then extend the form by treating derivatives as linear operators. A linear operator \(L\) is one satisfying
\[
L[\alpha v + \beta w] = \alpha L[v] + \beta L[w],
\]
where the \(\alpha\) and \(\beta\) are scalars, and \(v\) and \(w\), possibly not scalars, come from a vector space. Regular multiplication and matrix multiplication are familiar linear operations, but there are many others.
The referenced notes identify \(f'(x) dx\) with \(f'(x)[dx]\), the latter emphasizing \(f'(x)\) acts on \(dx\) and the notation is not commutative (e.g., it is not \(dx f'(x)\)).
Linear operators are related to vector spaces.
A vector space is a set of mathematical objects which can be added together and also multiplied by a scalar. Vectors of the same size, as previously discussed, are the typical example, with vector addition and scalar multiplication the previously discussed operations. Matrices of the same size (and some subclasses) also form a vector space. Additionally, many other sets of objects form vector spaces; examples include polynomial functions of degree \(n\) or less, continuous functions, or functions with a certain number of derivatives.
Taking differentiable functions as an example, the simplest derivative rules, \([af(x) + bg(x)]' = a[f(x)]' + b[g(x)]'\), show the linearity of the derivative in this setting. This linearity is different from how the derivative is a linear operator on \(dx\).
A vector space is described by a basis – a minimal set of vectors needed to describe the space through linear combinations. For \(R^n\), this is typically the set of special vectors with \(1\) as one of the entries, and \(0\) otherwise.
A key fact about a basis is every vector in the vector space can be expressed uniquely as a linear combination of the basis vectors.
Vectors and matrices have properties that are generalizations of the real numbers. As vectors and matrices form vector spaces, the concept of addition of vectors and matrices is defined, as is scalar multiplication. Additionally, we have seen:
The dot product between two vectors of the same length is defined easily (\(v\cdot w = \sum_i v_i w_i\)). It is coupled with the length as \(\|v\|^2 = v\cdot v\).
Matrix multiplication is defined for two properly sized matrices. If \(A\) is \(m \times k\) and \(B\) is \(k \times n\) then \(AB\) is an \(m\times n\) matrix with \((i,j)\) term given by the dot product of the \(i\)th row of \(A\) (viewed as a vector) and the \(j\)th column of \(B\) (viewed as a vector). Matrix multiplication is associative but not commutative: \((AB)C = A(BC)\), but \(AB\) and \(BA\) need not be equal (or even defined, as the shapes may not match up).
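A quick check of these facts in Julia (with illustrative sizes):

```julia
# Associativity holds; commutativity generally does not.
A, B, C = rand(2, 3), rand(3, 3), rand(3, 2)
(A * B) * C ≈ A * (B * C)   # true
M, N = rand(2, 2), rand(2, 2)
M * N ≈ N * M               # false for generic M, N
```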
A square matrix \(A\) has an inverse \(A^{-1}\) if \(AA^{-1} = A^{-1}A = I\), where \(I\) is the identity matrix (a matrix which is zero except for its diagonal entries, which are all \(1\)). Square matrices may or may not have an inverse; when they don't, the matrix is called singular.
Viewing a vector as a matrix is possible. The association is typically through a column vector.
The transpose of a matrix comes from interchanging the rows and columns. The transpose of a column vector is a row vector, so \(v\cdot w = v^T w\), where we use a superscript \(T\) for the transpose. The transpose of a product is the product of the transposes – reversed: \((AB)^T = B^T A^T\); the transpose of a transpose is an identity operation: \((A^T)^T = A\); the inverse of a transpose is the transpose of the inverse: \((A^{-1})^T = (A^T)^{-1}\).
Matrices for which \(A = A^T\) are called symmetric.
A few of the operations on matrices are the transpose and the inverse. These return a matrix, when defined. There is also the determinant and the trace, which return a scalar from a matrix. The trace is just the sum of the diagonal; the determinant is more involved to compute, but was previously seen to have a relationship to the volume of a certain parallelepiped. There are a few other operations described in the following.
Scalar-valued functions of a vector
Suppose \(f: R^n \rightarrow R\), a scalar-valued function of a vector. Then the directional derivative at \(x\) in the direction \(v\) was defined for a scalar \(\alpha\) by:
\[
\lim_{\alpha \rightarrow 0} \frac{f(x + \alpha v) - f(x)}{\alpha} = f'(x)[v].
\]
Not only does this give a connection in notation with the derivative, it naturally illustrates how the derivative as a linear operator can act on non-infinitesimal values.
Previously, we wrote \(\nabla f \cdot v\) for the directional derivative, where the gradient is a column vector. The above uses the identification \(f' = (\nabla f)^T\).
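A quick numeric check, with an assumed example \(f(x) = x_1^2 + 3x_2\), that the difference quotient matches \(\nabla f \cdot v\):

```julia
# Assumed example: f(x) = x₁² + 3x₂, so ∇f(x) = [2x₁, 3].
f(x) = x[1]^2 + 3x[2]
x, v = [1.0, 2.0], [0.5, -1.0]
α = 1e-6
(f(x + α * v) - f(x)) / α   # ≈ ∇f(x) ⋅ v = 2⋅1⋅0.5 + 3⋅(−1) = −2
```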
For \(f: R^n \rightarrow R\) we have
\[
df = f(x + dx) - f(x) = f'(x) [dx]
\]
is a scalar, so if \(dx\) is a column vector, \(f'(x)\) is a row vector with the same number of components (just as \(\nabla f\) is a column vector with the same number of components).
Examples
@BrightEdelmanJohnson include this example to show that the computation of derivatives using components can be avoided. Consider \(f(x) = x^T A x\) where \(x\) is a vector in \(R^n\) and \(A\) is an \(n\times n\) matrix. Then \(f: R^n \rightarrow R\) and its derivative can be computed:
\[
\begin{align*}
df &= f(x + dx) - f(x)\\
&= (x + dx)^T A (x + dx) - x^TAx \\
&= x^TAx + dx^TA x + x^TA\,dx + dx^T A dx - x^TAx\\
&= dx^TA x + x^TAdx \\
&= (dx^TAx)^T + x^TAdx \\
&= x^T A^T dx + x^T A dx\\
&= x^T(A^T + A) dx
\end{align*}
\]
The term \(dx^T A dx\) is dropped, as it is higher order (goes to zero faster), containing two \(dx\) factors. In the second to last step, an identity operation (taking the transpose of the scalar quantity \(dx^TAx\)) is used to simplify the algebra. Finally, as \(df = f'(x)[dx]\), the identification \(f'(x) = x^T(A^T+A)\) is made, or, taking transposes, \(\nabla f = (A + A^T)x\).
Compare the elegance above with the component version; even though simplified, it still requires a specification of the size to carry out:
```julia
using SymPy
@syms x[1:3]::real A[1:3, 1:3]::real
u = x' * A * x
grad_u = [diff(u, xi) for xi in x]
```
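A numeric spot check of the identification \(\nabla f = (A + A^T)x\) (an illustrative sketch with random data):

```julia
# Finite differences agree with (A + A')x up to O(h).
using LinearAlgebra
A, x = rand(3, 3), rand(3)
f(x) = x' * A * x
e(i) = (v = zeros(3); v[i] = 1.0; v)
h = 1e-6
g_fd = [(f(x + h * e(i)) - f(x)) / h for i in 1:3]
maximum(abs.(g_fd - (A + A') * x))   # of order h
```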
Again, taking the transpose of the scalar quantity \(x^TAdx\) simplifies the expression. When \(A^T = A\) (\(A\) is symmetric) this simplifies to the more familiar looking \(2x^TA\), but we see that this requires assumptions not needed in the scalar case.

For \(f: R^n \rightarrow R^m\), @BrightEdelmanJohnson give an example of computing the Jacobian without resorting to component-wise computations. If \(f(x) = Ax\) with \(A\) a constant \(m \times n\) matrix, then
\[
df = (dA)x + A(dx) = 0x + A dx = A dx,
\]
\(A\) being a constant here.
Example
@BrightEdelmanJohnson consider what in Julia is .*. That is the operation of elementwise multiplication: \(v .* w\) has components \(v_i w_i\), which can also be written as \(\text{diag}(v)\, w\), where \(\text{diag}(v)\) is the diagonal matrix with diagonal \(v\).
They compute the derivative of \(f(x) = A(x .* x)\) for some fixed matrix \(A\) of the proper size.
We can see that \(d(\text{diag}(v)w) = d(\text{diag}(v))\, w + \text{diag}(v)\, dw = (dv) .* w + v .* dw\). So
\(df = A(dx .* x + x .* dx) = 2A(x .* dx)\), as \(.*\) is commutative by its definition. Writing this as \(df = 2A(x .* dx) = 2A(\text{diag}(x) dx) = (2A\text{diag}(x)) dx\), we identify \(f'(x) = 2A\text{diag}(x)\).
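A numeric check of the identification \(f'(x) = 2A\,\text{diag}(x)\) (an illustrative sketch):

```julia
# For f(x) = A(x .* x), f(x + dx) - f(x) ≈ 2A * Diagonal(x) * dx.
using LinearAlgebra
A, x = rand(3, 3), rand(3)
dx = 1e-6 * rand(3)
lhs = A * ((x + dx) .* (x + dx)) - A * (x .* x)
rhs = 2A * Diagonal(x) * dx
norm(lhs - rhs)   # of order ‖dx‖²
```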
This operation is called the Hadamard product and it extends to matrices and arrays.
The chain rule
Like the product rule, the chain rule is shown by @BrightEdelmanJohnson in this notation. With \(f(x) = g(h(x))\):
\[
df = g(h(x + dx)) - g(h(x)) = g'(h(x))[dh] = g'(h(x))[h'(x)[dx]].
\]
The operator \(f'(x)= g'(h(x)) h'(x)\) is a product of matrices.
Computational differences with expressions from the chain rule
Of note here is the application of the chain rule to three (or more) compositions. The derivative of \(f(x) = a(b(c(x)))\) is the matrix product \(f' = a'b'c'\), each factor evaluated at the appropriate point. By associativity, this can be grouped as
\[
f' = (a'b')c' \text{ or } f' = a'(b'c')
\]
Multiplying left to right (the first) is called reverse mode; multiplying right to left (the second) is called forward mode. The distinction becomes important when considering the computational cost of the multiplications.
If \(f: R^n \rightarrow R^m\) has \(n\) much bigger than \(1\) and \(m=1\), then it is much faster to do left to right multiplication;
if \(f:R^n \rightarrow R^m\) has \(n=1\) and \(m\) much bigger than one, then it is faster to do right to left multiplication.
The basic idea comes down to the shapes of the matrices. Multiplying an \(m \times q\) matrix by a \(q \times n\) matrix takes on the order of \(mqn\) operations. Consider the case where the derivative is a product of matrices of sizes \(n\times j\), \(j\times k\), and \(k \times 1\) – the rightmost factor a column vector, as happens when the innermost input is one-dimensional – yielding an \(n \times 1\) result.

Multiplying left to right, the first product takes \(njk\) operations and leaves an \(n\times k\) matrix; the next multiplication takes another \(nk \cdot 1\) operations, or \(njk + nk = nk \cdot (j + 1)\) together. Computing right to left, the first product takes \(jk \cdot 1\) operations and leaves a \(j \times 1\) vector; the next takes another \(nj \cdot 1\) operations, or \(jk + nj = j \cdot (k + n)\) in total.

When \(n=j=k\), say, left to right costs about \(n^3\) operations while right to left costs about \(2n^2\), a factor of order \(n\) fewer. This can be quite significant in higher dimensions, whereas in the dimensions of calculus (where \(n\) and \(m\) are \(3\) or less) it is not an issue.
Example
Using the BenchmarkTools package, we can check the time to compute various products:
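A minimal sketch (sizes and data are illustrative):

```julia
# Compare (A*B)*C (left to right) with A*(B*C) (right to left)
# when C is a vector; the right-to-left grouping avoids the O(n³) product.
using BenchmarkTools

n = 500
A, B = rand(n, n), rand(n, n)
C = rand(n)

@btime ($A * $B) * $C   # ~n³ + n² operations
@btime $A * ($B * $C)   # ~2n² operations
```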
In calculus, \(n\) and \(m\) are \(1\), \(2\), or \(3\). But that need not be the case, especially if differentiation is over a parameter space.
Derivatives of matrix functions
What is the derivative of \(f(A) = A^2\)?
The function \(f\) takes an \(n\times n\) matrix and returns a matrix of the same size. This innocuous question isn't directly handled by the Jacobian, which is defined for vector-valued functions \(f:R^n \rightarrow R^m\).
This derivative can be derived directly from the product rule:
\[
\begin{align*}
df &= d(AA)\\
&= A\, dA + dA\, A
\end{align*}
\]
That is, \(f'(A)\) is the operator \(f'(A)[\delta A] = A\, \delta A + \delta A\, A\) and not \(2A\delta A\), as \(A\) may not commute with \(\delta A\).
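A quick numeric check (an illustrative sketch) that this operator captures the first-order change:

```julia
# (A + δA)² - A² agrees with A*δA + δA*A up to O(‖δA‖²).
using LinearAlgebra
A  = rand(3, 3)
δA = 1e-6 * rand(3, 3)
lhs = (A + δA)^2 - A^2
rhs = A * δA + δA * A
norm(lhs - rhs)   # of order ‖δA‖²
```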
Vectorization of a matrix
Alternatively, we can identify \(A\) through its components, as a vector in \(R^{n^2}\) and then leverage the Jacobian.
One such identification is vectorization – consecutively stacking the column vectors into a vector. In Julia the vec function does this operation:
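For example:

```julia
A = [1 2; 3 4]
vec(A)   # 4-element Vector{Int64}: [1, 3, 2, 4] — the columns, stacked
```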
We do this via linear algebra first, then see a more elegant manner following the notes.
A basic course in linear algebra shows that any linear operator on a finite vector space can be represented as a matrix. The basic idea is to represent what the operator does to each basis element and put these values as columns of the matrix.
In this \(3 \times 3\) case, the linear operator works on an object with \(9\) slots and returns an object with \(9\) slots, so the matrix will be \(9 \times 9\).
The basis elements are simply the matrices with a \(1\) in spot \((i,j)\) and zero elsewhere. Here we generate them through a function:
```julia
basis(i, j, A) = (b = zeros(Int, size(A)...); b[i, j] = 1; b)
JJ = [vec(basis(i, j, A) * A + A * basis(i, j, A)) for j in 1:3 for i in 1:3]
```
But how can we see the Jacobian, \(J\), from the linear operator \(f'(A)[\delta A] = \delta A A + A \delta A\)?
To make this less magical, an operation related to \(\text{vec}\) is introduced. The \(\text{vec}\) function can turn a matrix into a vector, so it can be used for finding the Jacobian, as above. However, the shape of the matrix is lost, as are the fundamental matrix operations, like multiplication. The Kronecker product helps recover these.
The Kronecker product replicates values making a bigger matrix. That is, if \(A\) and \(B\) are matrices, the Kronecker product replaces each value in \(A\) with that value times \(B\), making a bigger matrix, as each entry in \(A\) is replaced by an entry with size \(B\).
Some useful properties of the Kronecker product:

- transposes: \((A\otimes B)^T = A^T \otimes B^T\); consequently \(A\otimes B\) is symmetric (or orthogonal) when both \(A\) and \(B\) have that property.
- determinants: \(\det(A\otimes B) = \det(A)^m \det(B)^n\), where \(A\) is \(n\times n\) and \(B\) is \(m \times m\).
- trace (sum of diagonal): \(\text{tr}(A \otimes B) = \text{tr}(A)\,\text{tr}(B)\).
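These can be checked numerically (an illustrative sketch; here \(A\) is \(2\times 2\), so \(n=2\), and \(B\) is \(3\times 3\), so \(m=3\)):

```julia
using LinearAlgebra
A, B = rand(2, 2), rand(3, 3)
kron(A, B)' ≈ kron(A', B')                # transpose rule
det(kron(A, B)) ≈ det(A)^3 * det(B)^2     # determinant rule (m = 3, n = 2)
tr(kron(A, B)) ≈ tr(A) * tr(B)            # trace rule
```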
The main equation coupling vec and kron is the fact that if \(A\), \(B\), and \(C\) have appropriate sizes, then:
\[
(A \otimes B) \text{vec}(C) = \text{vec}(B C A^T).
\]
Appropriate sizes for \(A\), \(B\), and \(C\) are determined by the various products in \(BCA^T\).
If \(A\) is \(m \times n\) and \(B\) is \(r \times s\), then since \(BC\) is defined, \(C\) has \(s\) rows, and since \(CA^T\) is defined and \(A^T\) is \(n \times m\), \(C\) must have \(n\) columns; so \(C\) must be \(s\times n\). Checking this is correct on the other side, \(A \otimes B\) has size \(mr \times ns\) and \(\text{vec}(C)\) has length \(sn\), so that product works, size-wise.
The referred-to notes have an explanation for this formula, but we confirm with an example with \(m=n=2\), \(r=s=3\):
```julia
@syms A[1:2, 1:2]::real B[1:3, 1:3]::real C[1:3, 1:2]::real
L, R = kron(A, B) * vec(C), vec(B * C * A')
all(l == r for (l, r) ∈ zip(L, R))
```
true
Now to use this relationship to recognize \(df = A\, dA + dA\, A\) with the Jacobian computed from \(\text{vec}(f(A))\).
We have \(\text{vec}(A dA + dA A) = \text{vec}(A dA) + \text{vec}(dA A)\), by the obvious linearity of \(\text{vec}\). Now inserting an identity matrix, \(I\), which is symmetric, we have:
\[
\text{vec}(A dA) = \text{vec}(A dA I^T) = (I \otimes A) \text{vec}(dA),
\]
and
\[
\text{vec}(dA A) = \text{vec}(I dA (A^T)^T) = (A^T \otimes I) \text{vec}(dA)
\]
This leaves
\[
\text{vec}(A dA + dA A) =
\left((I \otimes A) + (A^T \otimes I)\right) \text{vec}(dA)
\]
We should then get the Jacobian we computed from the following:
```julia
@syms A[1:3, 1:3]::real
using LinearAlgebra: I
J = vec(A^2).jacobian(vec(A))
JJ = kron(I(3), A) + kron(A', I(3))
all(j == jj for (j, jj) in zip(J, JJ))
```
true
This technique can also be used with other powers, say \(f(A) = A^3\), where the resulting \(df = A^2 dA + A dA A + dA A^2\) is one answer that can be compared to a Jacobian through
\[
\begin{align*}
df &= \text{vec}(A^2 dA I^T) + \text{vec}(A dA A) + \text{vec}(I dA A^2)\\
&= (I \otimes A^2)\text{vec}(dA) + (A^T \otimes A) \text{vec}(dA) + ((A^T)^2 \otimes I) \text{vec}(dA)
\end{align*}
\]
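This, too, can be confirmed symbolically, mirroring the \(A^2\) computation above (a sketch; the call to simplify guards against differences in how the expressions happen to be arranged):

```julia
# Compare the Jacobian of vec(A³) with the sum of Kronecker products.
using SymPy
using LinearAlgebra: I
@syms A[1:3, 1:3]::real
J  = vec(A^3).jacobian(vec(A))
JJ = kron(I(3), A^2) + kron(A', A) + kron((A')^2, I(3))
all(simplify(j - jj) == 0 for (j, jj) in zip(J, JJ))
```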
The above shows how to relate the derivative of a matrix function to the Jacobian of a vectorized function, but only for illustration. It is decidedly not necessary to express the derivative of \(f\) in terms of the derivative of its vectorized counterpart.
Example: derivative of the inverse
What is the derivative of \(f(A) = A^{-1}\)? When \(A\) is a scalar, this is the reciprocal function, with derivative \(-1/A^2\); a similar technique is available for matrices. Starting with \(I = AA^{-1}\) and noting \(dI\) is \(0\), we have
\[
0 = dI = d(AA^{-1}) = dA\, A^{-1} + A\, d(A^{-1}),
\]
so that
\[
d(A^{-1}) = f'(A)[dA] = -A^{-1}\, dA\, A^{-1}.
\]
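A quick numeric check (illustrative; the shift by \(3I\) keeps \(A\) comfortably invertible):

```julia
using LinearAlgebra
A  = rand(3, 3) + 3I
dA = 1e-6 * rand(3, 3)
lhs = inv(A + dA) - inv(A)
rhs = -inv(A) * dA * inv(A)
norm(lhs - rhs)   # of order ‖dA‖²
```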
Example: derivative of the determinant
Let \(f(A) = \text{det}(A)\). What is the derivative?
First, the determinant of a square, \(n\times n\), matrix \(A\) is a scalar summary of \(A\) with different means to compute it; one recursive one in particular is helpful here, the cofactor expansion
\[
\det(A) = \sum_{i=1}^n A_{ij} C_{ij}
\]
for any \(j\). The cofactor \(C_{ij}\) is \((-1)^{i+j}\) times the determinant of the \((n-1)\times(n-1)\) matrix with the \(i\)th row and \(j\)th column deleted.
To find the gradient of \(f\), we differentiate by each of the \(A_{ij}\) variables. As \(C_{ij}\) does not involve \(A_{ij}\), expanding along column \(j\) gives
\[
\frac{\partial f}{\partial A_{ij}} = C_{ij},
\]
so the gradient is the matrix of cofactors. Since \(A^{-1} = \text{adj}(A)/\det(A)\), where the adjugate \(\text{adj}(A)\) has entries \(C_{ji}\), this can be repackaged as
\[
df = f'(A)[dA] = \text{det}(A)\,\text{tr}(A^{-1}\, dA).
\]
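A numeric check of this formula (an illustrative sketch):

```julia
using LinearAlgebra
A  = rand(3, 3) + 3I
dA = 1e-6 * rand(3, 3)
lhs = det(A + dA) - det(A)
rhs = det(A) * tr(inv(A) * dA)
abs(lhs - rhs)   # of order ‖dA‖²
```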
The chain rule brings about a series of products. The adjoint method, illustrated below, shows how to approach the computation of the series in a direction that minimizes the computational cost, illustrating why reverse mode is preferred to forward mode when a scalar function of several variables is considered.
@BrightEdelmanJohnson consider the derivative of
\[
g(p) = f(A(p)^{-1} b)
\]
This might arise from applying a scalar-valued \(f\) to the solution of \(Ax = b\), where \(A\) is parameterized by \(p\).
The chain rule gives the following computation to find the derivative (or gradient):
\[
\begin{align*}
dg
&= f'(x)[dx]\\
&= f'(x) [d(A(p)^{-1} b)]\\
&= f'(x)[-A(p)^{-1} dA A(p)^{-1} b + 0]\\
&= -f'(x) A(p)^{-1} dA A(p)^{-1} b.
\end{align*}
\]
By writing \(dA = A'(p)[dp]\) and setting \(v^T = f'(x)A(p)^{-1}\) this becomes
\[
dg = -v^T dA A(p)^{-1} b = -v^T dA x
\]
This product of three terms can be computed in two directions:
From left to right:
First, \(v\) is found from \(v^T = f'(x) A^{-1}\) by solving \(v = (A^{-1})^T (f'(x))^T = (A^T)^{-1} \nabla f\), that is, by solving \(A^T v = \nabla f\). This is called the adjoint equation.
Each partial derivative of \(g\) is then related to a partial derivative of \(A\) through
\[
\frac{\partial g}{\partial p_k} = -v^T \frac{\partial A}{\partial p_k} x,
\]
as the scalar factor commutes through. With \(v\) and \(x\) solved for (via the adjoint equation and from solving \(Ax=b\)) the partials in \(p_k\) are computed with dot products. There are just two costly operations (the two solves).
From right to left:
The value of \(x\) can be solved for, as above, but computing the value of
\[
\frac{\partial x}{\partial p_k} = -A^{-1} \frac{\partial A}{\partial p_k} x
\]
requires a costly solve for each \(p_k\), and \(p\) may have many components. As mentioned above, the reverse mode offers advantages when there are many input parameters (\(p\)) and a single output.
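A sketch of the adjoint computation, under assumed, concrete choices: \(A(p) = A_0 + p_1 M_1 + p_2 M_2\) and \(f(x) = \text{sum}(x)\) (all names here are hypothetical):

```julia
using LinearAlgebra
n = 4
A0, M1, M2 = rand(n, n) + 5I, rand(n, n), rand(n, n)  # A(p) = A0 + p₁M₁ + p₂M₂
b, p = rand(n), [0.3, 0.7]
A = A0 + p[1] * M1 + p[2] * M2

x  = A \ b          # first costly solve: x = A⁻¹ b
∇f = ones(n)        # gradient of f(x) = sum(x)
v  = A' \ ∇f        # second costly solve: the adjoint equation Aᵀv = ∇f

grad_g = [-v' * M * x for M in (M1, M2)]   # cheap dot products per parameter

# Finite-difference check of ∂g/∂p₁:
g(p) = sum((A0 + p[1] * M1 + p[2] * M2) \ b)
(g(p + [1e-6, 0]) - g(p)) / 1e-6   # ≈ grad_g[1]
```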
Example
Suppose \(x(p)\) solves some system of equations \(h(x(p),p) = 0\) in \(R^n\) (\(n\) possibly just \(1\)) and \(g(p) = f(x(p))\) is some non-linear transformation of \(x\). What is the derivative of \(g\) in \(p\)?
Suppose the implicit function theorem applies to \(h(x,p) = 0\); that is, locally the response \(x(p)\) has a derivative. Differentiating both sides of \(h(x(p), p) = 0\) gives
\[
\frac{\partial h}{\partial x} dx + \frac{\partial h}{\partial p} dp = 0,
\]
so, by the chain rule,
\[
dg = f'(x)[dx] = -f'(x) \left(\frac{\partial h}{\partial x}\right)^{-1} \frac{\partial h}{\partial p}\, dp = v^T \frac{\partial h}{\partial p}\, dp,
\]
and \(v\) can be solved from taking adjoints (as before). Let \(A = \partial h/\partial x\); then \(v^T = -f'(x) A^{-1}\), or \(v = -(A^{-1})^T (f'(x))^T = -(A^T)^{-1} \nabla f\), found by solving \(A^T v = -\nabla f\). As before, it takes two solves to get both \(g\) and its gradient.
Second derivatives
Theorem 1. Let \(f:X \rightarrow Y\), where \(X,Y\) are finite dimensional inner product spaces with elements in \(R\). Suppose \(f\) is smooth (has a certain number of derivatives). Then for each \(x\) in \(X\) there exists a unique linear operator \(f'(x)\) and a unique bilinear, symmetric operator \(f'': X \oplus X \rightarrow Y\) such that
\[
f(x + dx) = f(x) + f'(x)[dx] + \frac{1}{2} f''(x)[dx, dx] + \mathscr{o}(\|dx\|^2).
\]
New terms include bilinear, symmetric, and inner product. An operator (\(X\oplus X \rightarrow Y\)) is bilinear if it is a linear operator in each of its two arguments. Such an operator is symmetric if interchanging its two arguments makes no difference in its output. Finally, an inner product space is one with a generalization of the dot product. An inner product takes two vectors \(x\) and \(y\) and returns a scalar; it is denoted \(\langle x,y\rangle\); and has properties of symmetry, linearity, and non-negativity (\(\langle x,x\rangle \geq 0\), and equal \(0\) only if \(x\) is the zero vector.) Inner products can be used to form a norm (or length) for a vector through \(||x||^2 = \langle x,x\rangle\).
We reference this, as the values denoted \(f'\) and \(f''\) are unique. So if we identify them one way, we have identified them.
Specializing to \(X=R^n\) and \(Y=R\), we have \(f'=(\nabla f)^T\) and \(f''\) is the Hessian.
Take \(n=2\). Previously we wrote a formula for Taylor's theorem for \(f:R^n \rightarrow R\), which with \(n=2\) and \(x=\langle x_1,x_2\rangle\) reads
\[
f(x + dx) - f(x) \approx \frac{\partial f}{\partial x_1} dx_1 + \frac{\partial f}{\partial x_2} dx_2 + \frac{1}{2}\left(\frac{\partial^2 f}{\partial x_1^2} dx_1^2 + 2\frac{\partial^2 f}{\partial x_1 \partial x_2} dx_1 dx_2 + \frac{\partial^2 f}{\partial x_2^2} dx_2^2\right) = \nabla f \cdot dx + \frac{1}{2} dx^T H\, dx,
\]
\(H\) being the Hessian with entries \(H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}\).
This formula – \(f(x+dx)-f(x) \approx f'(x)dx + \frac{1}{2}dx^T H dx\) – is valid for any \(n\), showing \(n=2\) was just for ease of notation when expressing in coordinates and not as matrices.
By uniqueness, we have under these assumptions that the Hessian is symmetric and the expression \(dx^T H dx\) is a bilinear form, which we can identify as \(f''(x)[dx,dx]\).
That the Hessian is symmetric could also be derived under these assumptions by directly computing that the mixed partials can have their order exchanged. But in this framework, as explained by @BrightEdelmanJohnson it is a result of the underlying vector space having an addition that is commutative (e.g. \(u+v = v+u\)).
The mapping \((u,v) \rightarrow u^T A v\) for a matrix \(A\) is bilinear. For a fixed \(u\), it is linear as it can be viewed as \((u^TA)[v]\) and matrix multiplication is linear. Similarly for a fixed \(v\).
@BrightEdelmanJohnson extend this characterization to a broader setting. The second derivative can be viewed as expressing the first-order change in \(f'(x)\), a linear operator. The value \(df'\) has the same shape as \(f'\), a linear operator, so \(df'\) acts on vectors, say \(dx\); then:
\[
df'[dx] = f''(x)[dx'][dx] = f''(x)[dx', dx]
\]
The prime in \(dx'\) is just notation, not a derivative operation for \(dx\).
With this view, we can see that \(f''(x)\) has two vectors it acts on. By definition it is linear in \(dx\). However, as \(f'(x)\) is a linear operator and the sum and product rules apply to derivatives, this operator is linear in \(dx'\) as well. So \(f''(x)\) is bilinear and as mentioned earlier symmetric.
Polarization
@BrightEdelmanJohnson interpret \(f''\) by looking at the image under \(f\) of \(x + dx + dx'\). If \(x\) is a vector, then this has a geometric picture, from vector addition, relating \(x + dx\), \(x+dx'\), and \(x + dx + dx'\).
The image of \(x + dx\) is to second order \(f(x) + f'(x)[dx] + (1/2)f''(x)[dx, dx]\); similarly, the image of \(x + dx'\) is to second order \(f(x) + f'(x)[dx'] + (1/2)f''(x)[dx', dx']\). The key formula for \(f''(x)\) is the second difference
\[
f(x + dx + dx') - f(x + dx) - f(x + dx') + f(x) \approx f''(x)[dx, dx'],
\]
up to higher-order terms.
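A numeric illustration of this second-difference formula, with an assumed example \(f(x) = x_1^3 + x_1 x_2\), whose Hessian at \(x\) is \(H = \begin{pmatrix} 6x_1 & 1\\ 1 & 0\end{pmatrix}\):

```julia
f(x) = x[1]^3 + x[1] * x[2]
x = [1.0, 2.0]
dx, dxp = 1e-3 * [1.0, 0.0], 1e-3 * [0.0, 1.0]
lhs = f(x + dx + dxp) - f(x + dx) - f(x + dxp) + f(x)
H = [6x[1] 1.0; 1.0 0.0]   # Hessian of f at x
rhs = dx' * H * dxp        # f''(x)[dx, dx']
lhs ≈ rhs                  # agrees up to higher-order terms
```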
Consider an expression from earlier, \(f(x) = x^T A x\) for some constant \(A\). Then \(f''\) is found by noting that \(f' = (\nabla f)^T = x^T(A + A^T)\), or \(\nabla f = (A^T + A)x\), and \(f'' = H = A^T + A\) is the Jacobian of the gradient.
By rearranging terms, it can be shown that \(f(x) = 1/2 x^THx = 1/2 f''[x,x]\).
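A one-line numeric confirmation (illustrative):

```julia
using LinearAlgebra
A, x = rand(3, 3), rand(3)
H = A + A'
x' * A * x ≈ 0.5 * x' * H * x   # true: f(x) = ½ f''[x, x]
```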
Example: second derivative of \(\text{det}(A)\)
Consider \(f(A) = \text{det}(A)\). We saw previously that \(df = \text{det}(A)\,\text{tr}(A^{-1}\, dA)\). Differentiating this expression again, in a direction \(dA'\), uses the product rule together with \(d(A^{-1}) = -A^{-1}\, dA'\, A^{-1}\).
So, after dropping the third-order term, we see: \[
\begin{align*}
f''(A)&[dA,dA'] \\
&= \text{det}(A)\text{tr}(A^{-1}dA')\text{tr}(A^{-1}dA)
- \text{det}(A)\text{tr}(A^{-1}dA' A^{-1}dA).
\end{align*}
\]