\(\def\Real{\mathbb{R}}\def\Comp{\mathbb{C}}\def\Rat{\mathbb{Q}}\def\Field{\mathbb{F}}\def\Fun{\mathbf{Fun}}\def\e{\mathbf{e}}

\def\f{\mathbf{f}}\def\bv{\mathbf{v}}\def\i{\mathbf{i}}

\def\eye{\left(\begin{array}{cc}1&0\\0&1\end{array}\right)}

\def\bra#1{\langle #1|}\def\ket#1{|#1\rangle}\def\j{\mathbf{j}}\def\dim{\mathrm{dim}}

\def\ker{\mathbf{ker}}\def\im{\mathbf{im}}

\def\tr{\mathrm{tr\,}}

\def\braket#1#2{\langle #1|#2\rangle}

\)

**1.** We know that an operator can be represented as a matrix once a basis is fixed. Changing the basis changes the matrix, so one can try to make the matrix simpler, bringing it to one of the normal forms: if the operator is given by a matrix (in some basis), a change of basis (in the source, or in the target space) replaces that matrix by an equivalent one.

The *normal forms* depend on the type of the linear operator. The simplest one occurs when the operator is between different spaces:

\[

A:U\to V,

\]

so that one can choose the bases in \(U\) and \(V\) independently.

**2.** *Changing the basis* (in \(U\) or in \(V\)) leads to multiplications of the matrix \(A_{ij}\) on the right and on the left by the *change of the basis* matrices:

\[

A'_{i'j'}=\sum_{i,j}B_{i'i}A_{ij}C_{jj'}=\sum_{i,j}\braket{e'_{i'}}{e_i}A_{ij}\braket{f_{j}}{f'_{j'}}

\]

(with \(B\) and \(C\) the invertible change-of-basis matrices).

**Theorem**: With appropriate choice of bases in \(U\) and \(V\), the matrix for an operator \(A\) can be reduced to the normal form

\[

\left(\begin{array}{c|c}

E_r&0\\

\hline

0&0\\

\end{array}\right),

\]

where \(r\) is the rank of \(A\).

*Exercise*: Consider the mapping that takes a quadratic polynomial \(q\) to its values at \(0,1,2\) and \(3\). Find the normal form of this operator.

There are many ways to think about this normal form. Essentially it says that any linear mapping is glued out of zero mappings (which collapse everything to \(0\)), isomorphisms, and trivial embeddings.
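As a numerical sketch of the theorem (the matrix below is a hypothetical example; the SVD is used merely as a convenient way to manufacture invertible change-of-basis matrices \(B\) and \(C\)):

```python
import numpy as np

# A hypothetical 3x4 matrix of rank 2: the third row is the sum of the first two.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))            # numerical rank

# Invertible changes of basis: B in the target, C in the source.
# From A = U @ diag(s) @ Vt one gets  B @ A @ C = [[E_r, 0], [0, 0]].
B = np.diag(np.concatenate([1 / s[:r], np.ones(A.shape[0] - r)])) @ U.T
C = Vt.T
N = B @ A @ C                          # the normal form, up to rounding noise
print(np.round(N, 10))
```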

**3.** The situation in the case when \(U=V\), so that the bases in the source and the target must be the same, is much more involved. In this case a change of basis acts on \(A\) by *conjugation*:

\[

A\mapsto B^{-1}AB.

\]

One important observation: recall that to each endomorphism \(A:U\to U\) one can associate its characteristic polynomial:

\[

P_A(z)=\det(A-zE).

\]

It is an invariant of the operator \(A\), not just of the matrix: a change of basis leaves the characteristic polynomial unchanged.
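A quick numerical check of this invariance (with a random matrix \(A\) and a random, hence generically invertible, matrix \(B\)):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))        # generically invertible

p_A = np.poly(A)                       # characteristic polynomial coefficients of A
p_conj = np.poly(np.linalg.inv(B) @ A @ B)
print(np.allclose(p_A, p_conj))        # True
```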

**4.** The roots of the characteristic polynomial are called *eigenvalues*. The set of eigenvalues of an operator \(A\) is called its *spectrum* and denoted \(\sigma(A)\).

*Exercise*: Find eigenvalues of the matrix of rotation by \(\phi\) in \(\Real^2\).

The coefficients of the characteristic polynomial have intrinsic meaning: if \(d=\dim U\),

\[

P_A(z)=(-z)^d+\tr(A) (-z)^{d-1}+\ldots+\det(A),

\]

where \(\tr=\sum_i \bra{f_i}A\ket{f_i}\), the sum of the diagonal elements of the matrix for \(A\) (in any basis). The coefficients are the symmetric functions of the eigenvalues of \(A\).
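For instance, one can check the two extreme coefficients numerically on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
lam = np.linalg.eigvals(A)

# The coefficient of (-z)^{d-1} is the trace; the constant term is the determinant:
print(np.isclose(lam.sum(), np.trace(A)))        # sum of eigenvalues = tr(A)
print(np.isclose(lam.prod(), np.linalg.det(A)))  # product of eigenvalues = det(A)
```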

**5.** If \(f\) is a polynomial, and \(A\) is an operator, then eigenvalues of \(f(A)\) are \(f(\lambda), \lambda\in\sigma(A)\).

*Example*: A circulant matrix is a polynomial \(f\) of the cyclic shift matrix, whose eigenvalues are the roots of unity; hence the eigenvalues of the circulant are the values of \(f\) at the roots of unity.
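To spell this out numerically (a sketch with the \(5\times 5\) cyclic shift \(S\), whose eigenvalues are the fifth roots of unity, and a sample polynomial \(f(z)=2+3z+z^2\)):

```python
import numpy as np

n = 5
S = np.roll(np.eye(n), 1, axis=1)      # cyclic shift permutation matrix

F = 2 * np.eye(n) + 3 * S + S @ S      # the circulant f(S) for f(z) = 2 + 3z + z^2

ev_S = np.linalg.eigvals(S)            # the 5th roots of unity
target = 2 + 3 * ev_S + ev_S**2        # f applied to sigma(S)
ev_F = np.linalg.eigvals(F)

# Spectral mapping: every eigenvalue of f(S) is f(lambda), lambda in sigma(S).
print(all(np.min(np.abs(target - mu)) < 1e-8 for mu in ev_F))  # True
```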

**6.** Now, we are ready to discuss the normal form for endomorphisms.

Define a *Jordan block* of size \(r\) (with eigenvalue \(\lambda\)) to be the \(r\times r\) matrix

\[

J_r=\left(

\begin{array}{ccccc}

\lambda&1&0&\ldots&0\\

0&\lambda&1&\ldots&0\\

\ldots&\ldots&\ldots&\ldots&\ldots\\

0&0&0&\ldots&\lambda\\

\end{array}

\right).

\]

**Theorem**: For any operator (over \(\Comp\)) there exists a basis in which the operator is represented by a block-diagonal matrix consisting of Jordan blocks.

The total size of the blocks corresponding to an eigenvalue \(\lambda\) is the multiplicity of \(\lambda\) in the complete factorization of the characteristic polynomial: in

\[

P_A(z)=\prod_{\lambda\in\sigma(A)}(\lambda-z)^{m_\lambda},

\]

\(m_\lambda\) is the total size of the Jordan blocks corresponding to \(\lambda\).
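A symbolic sketch (the matrix below is a hypothetical example; `sympy`'s `jordan_form` returns \(P\) and \(J\) with \(A=PJP^{-1}\)):

```python
import sympy as sp

# Hypothetical example: the characteristic polynomial is (z - 2)^2, but
# A - 2E has rank 1, so the Jordan form is a single block of size 2.
A = sp.Matrix([[3, 1],
               [-1, 1]])
P, J = A.jordan_form()
print(J)        # Matrix([[2, 1], [0, 2]])
```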

**7.** If all the blocks are of size 1, the matrix (and the corresponding operator) is called diagonalizable. If the characteristic polynomial of an operator in \(d\)-dimensional space has \(d\) distinct roots, the operator is necessarily diagonalizable.

In particular, a *generic* operator is diagonalizable.

Diagonalizable operators (by definition) have a basis consisting of the eigenvectors of \(A\): \(v\in U\) is called an eigenvector corresponding to the eigenvalue \(\lambda\) if

\[

Av=\lambda v.

\]
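Numerically, `numpy.linalg.eig` returns the eigenvalues together with a matrix whose columns are eigenvectors (a sketch on a hypothetical symmetric matrix):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])
lam, V = np.linalg.eig(A)              # columns of V are eigenvectors
for k in range(len(lam)):
    v = V[:, k]
    assert np.allclose(A @ v, lam[k] * v)   # A v = lambda v
print(sorted(lam))                     # the eigenvalues are 1 and 3
```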

*Exercises*.

- Find the Jordan normal form for the operator of differentiation acting on the polynomials of degree at most \(d\). Find all eigenvectors.
- Find eigenvalues and eigenvectors for the matrix

\[

\left(\begin{array}{cc}

0&1\\

.01&0\\

\end{array}\right).

\]
