\(\def\Real{\mathbb{R}}

\def\Comp{\mathbb{C}}\def\Rat{\mathbb{Q}}\def\Field{\mathbb{F}}\def\Fun{\mathbf{Fun}}\def\e{\mathbf{e}}

\def\f{\mathbf{f}}\def\bv{\mathbf{v}}\def\i{\mathbf{i}}

\def\eye{\left(\begin{array}{cc}1&0\\0&1\end{array}\right)}

\def\bra#1{\langle #1|}\def\ket#1{|#1\rangle}\def\j{\mathbf{j}}\def\dim{\mathrm{dim}}

\def\ker{\mathbf{ker}}\def\im{\mathbf{im}}

\def\tr{\mathrm{tr\,}}

\def\braket#1#2{\langle #1|#2\rangle}

\)

**1.**

Given a bilinear form \(Q(\cdot,\cdot)\), or, equivalently, a mapping \(Q:U\to U^*\), one can easily bake *new* bilinear forms from linear operators \(U\to U\): just take \(Q_A(u,v):=Q(u,Av)\).

This opens the door to exploiting the interplay between the normal forms of operators and the properties of the corresponding bilinear forms. One problem, though, is that the normal forms of endomorphisms involve complex eigenpairs, and the most natural bilinear form on a Euclidean space, the scalar product, is not positive definite if one extends the ground field to the complex numbers: for a quadratic form, \(q(\lambda v)=\lambda^2q(v)\), which cannot be nonnegative for all \(\lambda\in\Comp\).
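As a quick numerical sanity check (a sketch with arbitrary illustrative matrices, using NumPy), take \(Q\) to be the standard scalar product on \(\Real^2\) and build \(Q_A\) from an operator \(A\):

```python
import numpy as np

# Illustrative choice: Q is the standard scalar product on R^2,
# and A is an arbitrary operator on R^2.
A = np.array([[0.0, 1.0],
              [2.0, 3.0]])

def Q(u, v):
    return u @ v           # the original bilinear form

def Q_A(u, v):
    return Q(u, A @ v)     # the new bilinear form Q_A(u, v) = Q(u, A v)

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

# Q_A is again bilinear, but generally not symmetric: Q_A(u, v) != Q_A(v, u).
```

Note that even when \(Q\) is symmetric, \(Q_A\) need not be; the asymmetry is exactly what the adjoint operator, introduced below, measures.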

**2.**

The solution is to redefine the notion of bilinearity so that it does not affect the definitions on the real space, but remains positive definite when we allow complex coefficients. This leads to the notion of a sesquilinear form, one that satisfies

\[

Q(u_1+u_2,v)=Q(u_1,v)+Q(u_2,v), \mathrm{\ same\ for\ } Q(u,v_1+v_2), \mathrm{\ and\ }

Q(\lambda u, \mu v)=\bar{\lambda}\mu\, Q(u,v).

\]

A (complex) vector space with a positive definite sesquilinear (a.k.a. *Hermitian*) form is called a *Hermitian space*. The standard example is the space of vectors (columns) with the Hermitian form

\[

(u,v):=\sum \bar{u}_kv_k.

\]

In general, this is how one makes a Hermitian space out of a Euclidean one: allow complex scalars, but make the scalar product sesquilinear rather than bilinear.
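A minimal NumPy sketch of this standard Hermitian form (the function name is illustrative):

```python
import numpy as np

def hermitian(u, v):
    # The standard Hermitian form: conjugate the first argument.
    return np.sum(np.conj(u) * v)      # same as np.vdot(u, v)

v = np.array([1.0 + 1.0j, 2.0j])
lam = 3.0 - 4.0j

# (v, v) is real and positive, and scaling v by lam multiplies it by
# |lam|^2, so positive definiteness survives complex coefficients.
```

This is exactly the point of the conjugation in the first slot: \((\lambda v,\lambda v)=\bar\lambda\lambda\,(v,v)=|\lambda|^2(v,v)\geq 0\).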

**3.**

As mentioned, any endomorphism \(A:U\to U\) of a Hermitian space engenders a sesquilinear form

\(Q_A(u,v):=(u,Av)\). One can, of course, apply \(A\) to the first argument instead; the result, in general, will be different:

\[

(Au,v)\neq (u,Av).

\]

But one can always, for a given operator \(A\), *define* an operator \(B\) such that

\[

(Bu,v)=(u,Av).

\]

Such a \(B\) is called the adjoint operator, denoted \(A^*\): in other words, by definition,

\[

(A^*u,v)=(u,Av) \mathrm{\ for\ all\ } u,v.

\]

**4.**

One can easily see that in an orthonormal basis, the matrix of the operator adjoint to \(A\) is obtained from the matrix of \(A\) by transposing and complex conjugating.
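One can check this numerically; here a sketch with a random complex matrix (NumPy's `vdot` conjugates its first argument, matching our convention for the Hermitian form):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A_star = A.conj().T                    # transpose + complex conjugate

u = rng.normal(size=3) + 1j * rng.normal(size=3)
v = rng.normal(size=3) + 1j * rng.normal(size=3)

lhs = np.vdot(A_star @ u, v)           # (A* u, v)
rhs = np.vdot(u, A @ v)                # (u, A v)
```

Both sides agree (up to floating-point error) for all \(u,v\), which is the defining identity of the adjoint in the standard orthonormal basis.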

One can make an analogue of quadratic form out of an operator \(A\) by

\[

q(v):=(v,Av);

\]

the result will be real for all \(v\) if \(A=A^*\), i.e. if \(A\) is self-adjoint.
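A sketch of this reality check: symmetrizing a random complex matrix produces a self-adjoint \(A\), and \(q(v)=(v,Av)\) then comes out real.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = M + M.conj().T                     # self-adjoint by construction: A = A*

def q(v):
    return np.vdot(v, A @ v)           # (v, A v); vdot conjugates the first slot

v = rng.normal(size=3) + 1j * rng.normal(size=3)
```

Indeed, \(\overline{q(v)}=\overline{(v,Av)}=(Av,v)=(v,A^*v)=(v,Av)=q(v)\), so \(q(v)\) equals its own conjugate.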

**5.**

The fundamental fact about self-adjoint operators is the following

Theorem: Any self-adjoint operator \(A\) in a Hermitian space has real spectrum, and the space admits an orthonormal basis consisting of eigenvectors of \(A\). In particular, the Jordan normal form of a self-adjoint operator has no cells of size 2 or more.

We note that if the operator \(A\) is self-adjoint and real (that is, it is interchangeable with complex conjugation: \(\overline{Av}=A\bar{v}\)), then all the eigenvectors can be chosen to be real as well.
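This theorem is what `numpy.linalg.eigh` implements: for a self-adjoint input it returns real eigenvalues and an orthonormal basis of eigenvectors. A sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = M + M.conj().T                     # self-adjoint

eigvals, U = np.linalg.eigh(A)         # eigh assumes its input satisfies A = A*

# eigvals is a real array, and the columns of U form an orthonormal basis
# of eigenvectors: U* U = I and A U = U diag(eigvals).
```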

**6.**

A similar result can be proven about the skew-adjoint operators \(A^*=-A\): again, going to the complex domain, we notice that \(iA\) is self-adjoint, and therefore there exists an orthonormal basis consisting of the eigenvectors of \(A\), and all eigenvalues of \(A\) are purely imaginary.

If \(A\) is skew-adjoint and real, then the non-vanishing eigenvalues split into complex-conjugate pairs, \(\pm\lambda_k\). In particular, a real skew-adjoint operator is necessarily degenerate in an odd-dimensional space.
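A sketch with a real skew-adjoint (antisymmetric) matrix in odd dimension: the eigenvalues are purely imaginary, sum to zero (they pair up as \(\pm\lambda_k\)), and the determinant vanishes.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(3, 3))
A = M - M.T                            # real skew-adjoint: A^T = -A

eigvals = np.linalg.eigvals(A)
# In dimension 3 the spectrum is {i*t, -i*t, 0} for some real t,
# so det(A) = 0: the operator is degenerate.
```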
