1.26 Determinants

\(\def\Real{\mathbb{R}}\def\Comp{\mathbb{C}}\def\Rat{\mathbb{Q}}\def\Field{\mathbb{F}}\def\Fun{\mathbf{Fun}}\def\e{\mathbf{e}}
\def\f{\mathbf{f}}\def\bv{\mathbf{v}}\def\i{\mathbf{i}}
\def\eye{\left(\begin{array}{cc}1&0\\0&1\end{array}\right)}
\def\bra#1{\langle #1|}\def\ket#1{|#1\rangle}\def\j{\mathbf{j}}\def\dim{\mathrm{dim}}
\def\ker{\mathbf{ker}}\def\im{\mathbf{im}}
\)

1. The determinant is a function of square matrices (it can be defined for any operator \(A:U\to U\), but we’ll avoid this abstract detour). It is characterized by the following properties:

  1. It is linear in each column (i.e., if in a matrix \(A\) some column, say the \(k\)-th, equals \(\lambda_1 e_1 +\lambda_2 e_2\), then
    \[
    f(A)=\lambda_1f(A_1)+\lambda_2f(A_2),
    \]
    where \(A_i\) is obtained by replacing the \(k\)-th column with \(e_i\), \(i=1,2\)).
  2. It is zero if two columns are the same, and
  3. \(f(E)=1\).

2. It is quite easy to see that these properties have the following important implication:
\[
\det(AB)=\det(A)\det(B).
\]
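This multiplicativity is easy to check numerically; here is a quick sketch with NumPy (the matrices are arbitrary choices of mine, not from the text):

```python
import numpy as np

# Multiplicativity of the determinant: det(AB) = det(A) det(B).
# Two hypothetical 3x3 matrices, chosen only for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])

lhs = np.linalg.det(A @ B)
rhs = np.linalg.det(A) * np.linalg.det(B)
print(lhs, rhs)  # the two values agree up to rounding
```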

In particular, this means that if \(A\) is invertible, then its determinant is non-zero: just apply this identity to \(AA^{-1}=E\).

Conversely, if \(A\) is not invertible, there exists a vector \(x\neq 0\) with \(Ax=0\), and hence one of the columns can be represented as a linear combination of the others. By the properties of the determinant, this immediately implies \(\det A=0\). Altogether,

\[
A \mathrm{\ is\ invertible\ }\Leftrightarrow \det A\neq 0.
\]

3. One can also derive Cramer's rule:
If the vector \(x\) with coordinates \(x_1,\ldots, x_n\) solves the system of linear equations
\(Ax=b\) (with square matrix \(A\)), that is if

\[
\begin{array}{cccc}
a_{1,1}x_1+&\ldots&+a_{1,n}x_n&=b_1\\
\vdots&&\vdots&\vdots\\
a_{n,1}x_1+&\ldots&+a_{n,n}x_n&=b_n\\
\end{array}
\]

then for any \(k=1,\ldots,n\),
\[
x_k\det(A)=\det(a_1,\ldots,a_{k-1},b,a_{k+1},\ldots,a_n),
\]
where \((a_1,\ldots,a_{k-1},b,a_{k+1},\ldots,a_n)\) is the matrix \(A\) with \(k\)-th column \(a_k\) replaced by the column-vector \(b\).
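Cramer's rule can be verified numerically; a minimal sketch with NumPy, using a hypothetical system of my choosing:

```python
import numpy as np

# Cramer's rule: x_k = det(A with k-th column replaced by b) / det(A).
A = np.array([[3.0, 1.0, 2.0],
              [1.0, 4.0, 0.0],
              [2.0, 0.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.linalg.solve(A, b)

detA = np.linalg.det(A)
cramer = []
for k in range(3):
    Ak = A.copy()
    Ak[:, k] = b          # replace the k-th column by b
    cramer.append(np.linalg.det(Ak) / detA)
print(x, np.array(cramer))  # the two vectors agree up to rounding
```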

4. This is an awfully useful result. In particular, it implies a formula for the inverse matrix: if one denotes by \(M_{kl}\) the \(k,l\)-th minor, i.e. the determinant of the matrix obtained from \(A\) by deleting its \(k\)-th row and \(l\)-th column, then the matrix with coefficients
\[
B_{kl}=(-1)^{k+l}M_{lk} \quad\mathrm{(notice\ switched\ indices!)}
\]
satisfies
\[
AB=BA=\det(A)E.
\]

In other words, the inverse matrix to \(A\) is a polynomial of its coefficients, divided by \(\det(A)\)…
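The cofactor formula above can be checked directly; a sketch in NumPy with a hypothetical matrix (0-based indices in code, which does not change the parity of \(k+l\)):

```python
import numpy as np

# Inverse via cofactors: B_kl = (-1)^{k+l} M_lk (note the switched indices),
# so that A B = B A = det(A) E.
A = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])
n = A.shape[0]
B = np.zeros_like(A)
for k in range(n):
    for l in range(n):
        # minor M_{l,k}: delete row l and column k of A
        minor = np.delete(np.delete(A, l, axis=0), k, axis=1)
        B[k, l] = (-1) ** (k + l) * np.linalg.det(minor)
print(A @ B)  # approximately det(A) times the identity
```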

5. One can also obtain the explicit (Leibniz) formula
\[
\det(A)=\sum_\sigma (-1)^{s(\sigma)}a_{1\sigma_1}\cdot\ldots\cdot a_{n\sigma_n},
\]
where the sum runs over all permutations \(\sigma\) of \(\{1,\ldots,n\}\), and \(s(\sigma)\) is the number of inversions of \(\sigma\).
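The permutation sum can be evaluated by brute force for small matrices and compared against a library determinant; a sketch in Python with a hypothetical matrix:

```python
import itertools
import numpy as np

# Leibniz formula: det(A) = sum over permutations sigma of
# (-1)^{inversions(sigma)} * a_{1,sigma(1)} * ... * a_{n,sigma(n)}.
def det_by_permutations(A):
    n = A.shape[0]
    total = 0.0
    for sigma in itertools.permutations(range(n)):
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if sigma[i] > sigma[j])
        prod = 1.0
        for i in range(n):
            prod *= A[i, sigma[i]]
        total += (-1) ** inversions * prod
    return total

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 1.0],
              [0.0, 2.0, 2.0]])
print(det_by_permutations(A), np.linalg.det(A))  # agree up to rounding
```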

From here we can obtain that the determinant of a block-triangular matrix is
\[
\det\left(
\begin{array}{cc}
A&B\\
0&D\\
\end{array}
\right)=\det(A)\det(D)
\]
for square-sized \(A,D\).
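A quick numerical check of the block-triangular formula, with hypothetical \(2\times 2\) blocks:

```python
import numpy as np

# Block-triangular determinant: det([[A, B], [0, D]]) = det(A) det(D).
A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = np.array([[5.0, -1.0], [4.0, 2.0]])
D = np.array([[1.0, 2.0], [1.0, 4.0]])
M = np.block([[A, B], [np.zeros((2, 2)), D]])
print(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(D))
```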

6. There is an important way to reduce large determinants to smaller ones, using the Schur complement:

\[
\mathrm{If\ }\det(A)\neq 0, \det\left(
\begin{array}{cc}
A&B\\
C&D\\
\end{array}
\right)=\det(A)\det(D-CA^{-1}B).
\]
(Notice that all the matrix products make sense!)
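The Schur-complement identity can also be verified numerically; a sketch with hypothetical blocks (chosen so that \(\det(A)\neq 0\)):

```python
import numpy as np

# Schur complement: if det(A) != 0,
# det([[A, B], [C, D]]) = det(A) * det(D - C A^{-1} B).
A = np.array([[4.0, 1.0], [2.0, 3.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0]])
C = np.array([[2.0, 1.0], [1.0, 1.0]])
D = np.array([[5.0, 2.0], [1.0, 3.0]])
M = np.block([[A, B], [C, D]])
schur = D - C @ np.linalg.inv(A) @ B
print(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(schur))
```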

7. One more general result (the Binet–Cauchy formula) about determinants:

Consider two collections of functions, \(f_i,g_i, i=1,\ldots,n\), and the matrix whose coefficients are integrals of products of these functions:
\[
A_{ij}=\int_a^b f_i(x)g_j(x)dx.
\]
Then
\[
\det(A)=\int\ldots\int_{a<x_1<x_2\ldots<x_n} \det(F(x_1,\ldots,x_n))\det(G(x_1,\ldots,x_n))dx_1\ldots dx_n.
\]
(Here \(F(x_1,\ldots,x_n)\) is the matrix with the entries
\[
F_{ij}=f_i(x_j),
\]
and similarly for \(G\).)
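For \(n=2\) the identity can be checked by hand and by numerical integration. Below is a sketch with the hypothetical choices \(f_1=1\), \(f_2=x\), \(g_1=x\), \(g_2=x^2\) on \([a,b]=[0,1]\); both sides come out equal to \(1/72\):

```python
import numpy as np

# Continuous Binet-Cauchy identity, checked for n = 2 on [0, 1]
# with f1(x) = 1, f2(x) = x, g1(x) = x, g2(x) = x^2.
# Left-hand side: A_ij = integral of f_i * g_j, computed exactly:
# A = [[1/2, 1/3], [1/3, 1/4]], det(A) = 1/8 - 1/9 = 1/72.
A = np.array([[1 / 2, 1 / 3], [1 / 3, 1 / 4]])
lhs = np.linalg.det(A)

# Right-hand side: integral over 0 < x1 < x2 < 1 of det(F) det(G), where
# det(F) = x2 - x1 and det(G) = x1*x2*(x2 - x1); midpoint rule on a grid.
N = 400
t = (np.arange(N) + 0.5) / N
x1, x2 = np.meshgrid(t, t, indexing="ij")
mask = x1 < x2
integrand = (x2 - x1) * (x1 * x2 * (x2 - x1))
rhs = np.sum(integrand[mask]) / N ** 2
print(lhs, rhs)  # both close to 1/72
```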

8. There are way too many interesting and useful determinantal formulae… We’ll cover just a few.

A great compendium of results and methods can be found in the survey “Advanced Determinant Calculus” by Krattenthaler.

The Vandermonde determinant is well known. It appears when one solves the Lagrange interpolation problem:
to find a polynomial \(p=a_0x^{n}+a_1x^{n-1}+\ldots+a_n\) of degree \(n\) which takes given values \(y_0, y_1,\ldots,y_n\) at given points \(x_0, x_1,\ldots,x_n\).

The result is easy to obtain in different ways (it is
\[
\sum_k y_k\frac{(x-x_0)\ldots(x-x_{k-1})(x-x_{k+1})\ldots(x-x_n)}{(x_k-x_0)\ldots(x_k-x_{k-1})(x_k-x_{k+1})\ldots(x_k-x_n)},
\]
as one can verify easily), and this also gives many interesting identities…
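The interpolation formula is easy to test in plain Python; the nodes and values below are hypothetical, chosen only to illustrate that the polynomial reproduces the prescribed data:

```python
# Lagrange interpolation: p(x) = sum_k y_k * prod_{j != k} (x - x_j)/(x_k - x_j).
def lagrange(x, xs, ys):
    total = 0.0
    for k, (xk, yk) in enumerate(zip(xs, ys)):
        term = yk
        for j, xj in enumerate(xs):
            if j != k:
                term *= (x - xj) / (xk - xj)
        total += term
    return total

xs = [0.0, 1.0, 2.0, 3.0]   # hypothetical nodes x_0, ..., x_3
ys = [1.0, 2.0, 0.0, 5.0]   # prescribed values y_0, ..., y_3
vals = [lagrange(x, xs, ys) for x in xs]
print(vals)  # recovers the prescribed values up to rounding
```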

Also very useful is the Cauchy determinant, for \(A_{kl}=\frac{1}{x_k+y_l}\):
\[
\det A=\frac{\prod_{k<l}(x_{k}-x_{l})(y_{k}-y_{l})}{\prod_{k,l}(x_k+y_l)}.
\]
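A numerical check of the Cauchy determinant, with hypothetical points \(x_k\), \(y_l\) (chosen so that all \(x_k+y_l\neq 0\)):

```python
import numpy as np

# Cauchy determinant: A_kl = 1/(x_k + y_l),
# det(A) = prod_{k<l} (x_k - x_l)(y_k - y_l) / prod_{k,l} (x_k + y_l).
x = np.array([1.0, 2.0, 4.0])
y = np.array([0.5, 3.0, 5.0])
A = 1.0 / (x[:, None] + y[None, :])

num = 1.0
for k in range(3):
    for l in range(k + 1, 3):
        num *= (x[k] - x[l]) * (y[k] - y[l])
den = np.prod(x[:, None] + y[None, :])
print(np.linalg.det(A), num / den)  # agree up to rounding
```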

Jacobi (tridiagonal) matrices appear in many problems:

\[
J_k=\left(
\begin{array}{ccccc}
a&b&0&\ldots&0\\
c&a&b&\ldots&0\\
0&c&a&\ldots&0\\
\vdots&\vdots&\vdots&\ddots&\vdots\\
0&0&0&\ldots&a\\
\end{array}
\right)
\]
Their determinants \(j_k=\det J_k\) satisfy the recursion
\[
j_{k+1}=aj_k-bc\,j_{k-1},
\]
with \(j_1=a\) and the convention \(j_0=1\).
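The recursion can be compared against directly computed determinants; a sketch with hypothetical values of \(a\), \(b\), \(c\):

```python
import numpy as np

# Tridiagonal (Jacobi) matrix with a on the diagonal, b above, c below;
# its determinants j_k satisfy j_{k+1} = a*j_k - b*c*j_{k-1}.
a, b, c = 2.0, 1.0, 3.0

def jacobi_matrix(k):
    J = np.zeros((k, k))
    for i in range(k):
        J[i, i] = a
        if i + 1 < k:
            J[i, i + 1] = b
            J[i + 1, i] = c
    return J

# Recursion with j_0 = 1, j_1 = a
j = [1.0, a]
for k in range(1, 6):
    j.append(a * j[k] - b * c * j[k - 1])

dets = [np.linalg.det(jacobi_matrix(k)) for k in range(1, 7)]
print(j[1:], dets)  # the two sequences agree up to rounding
```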


Exercises:

  • Find the determinants of Jacobi matrices with \(a=1, bc=-1\) and \(a=-1,bc=-1\).
  • Find
    \[\left|
    \begin{array}{ccccc}
    1&1&1&\ldots&1\\
    x_1&x_2&x_3&\ldots&x_n\\
    x_1^2&x_2^2&x_3^2&\ldots&x_n^2\\
    \vdots&\vdots&\vdots&\ddots&\vdots\\
    x_1^{n-2}&x_2^{n-2}&x_3^{n-2}&\ldots&x_n^{n-2}\\
    x_1^n&x_2^n&x_3^n&\ldots&x_n^n\\
    \end{array}
    \right|
    \]