1.24 Linear operators

\(\def\Real{\mathbb{R}}\def\Comp{\mathbb{C}}\def\Rat{\mathbb{Q}}\def\Field{\mathbb{F}}\def\Fun{\mathbf{Fun}}\def\e{\mathbf{e}}
\def\f{\mathbf{f}}\def\bv{\mathbf{v}}\def\i{\mathbf{i}}
\def\eye{\left(\begin{array}{cc}1&0\\0&1\end{array}\right)}
\def\bra#1{\langle #1|}\def\ket#1{|#1\rangle}\def\j{\mathbf{j}}\def\dim{\mathrm{dim}}
\def\ker{\mathbf{ker}}\def\im{\mathbf{im}}
\)

1. Linear mappings, or linear operators, are just linear functions taking values in a linear space:
\[
A:U\to V,\ \mathrm{where\ both\ } U\ \mathrm{and\ } V\ \mathrm{ are\ linear\ spaces.}
\]
The same properties (additivity and multiplication by a constant) are assumed:
\[
A(\lambda_1u_1+\lambda_2u_2)=\lambda_1A(u_1)+\lambda_2A(u_2), \mathrm{etc.}
\]
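For example, differentiation of polynomials is a linear operator: taking, say, \(U\) to be the polynomials of degree at most 2 and \(V\) those of degree at most 1,
\[
D:U\to V,\quad DP=\frac{dP}{dx},\qquad D(\lambda_1P_1+\lambda_2P_2)=\lambda_1DP_1+\lambda_2DP_2.
\]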

2. We will denote the space of linear operators from \(U\) to \(V\) as \(L(U,V)\). It is again a linear space (this is a routine exercise to verify).

If bases \(e_j,\ j=1,\ldots,\dim U\), and \(f_i,\ i=1,\ldots,\dim V\), are fixed in the linear spaces \(U\) and \(V\), then one can represent the operator as a matrix whose coefficient \(A_{ij}=\bra{f_i}A\ket{e_j}\) comes from expanding \(Ae_j\) in terms of the \(f\)’s. The easiest mnemonic rule uses the bra-ket representations of unity in \(U\) and \(V\):
\[
A=E_VAE_U=\sum_i\ket{f_i}\bra{f_i}A\sum_j\ket{e_j}\bra{e_j}=\sum_{ij}A_{ij}\ket{f_i}\bra{e_j}.
\]
If \(U=V\) is the same space (i.e. \(A\) is a self-mapping), one typically takes the same basis, \(f_i=e_i\).
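For instance, the differentiation operator \(D\) above can be viewed as a self-mapping of the polynomials of degree at most 2. In the basis \(e_1=1,\ e_2=x,\ e_3=x^2\) one has \(De_1=0\), \(De_2=e_1\), \(De_3=2e_2\), so the columns of the matrix record these expansions:
\[
D=\left(\begin{array}{ccc}0&1&0\\0&0&2\\0&0&0\end{array}\right).
\]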

3. So, given bases \(e_j,\ j=1,\ldots,\dim U\), and \(f_i,\ i=1,\ldots,\dim V\), one obtains a basis in \(L(U,V)\) given by \(\ket{f_i}\bra{e_j},\ j=1,\ldots,\dim U,\ i=1,\ldots,\dim V\) (this is, again, an easy theorem). Hence
\(\dim L(U,V)=\dim(U)\dim(V)\).
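For example, if \(\dim U=2\) and \(\dim V=3\), the six basis operators \(\ket{f_i}\bra{e_j}\) are just the "matrix units": \(\ket{f_2}\bra{e_1}\) sends \(e_1\) to \(f_2\) and kills \(e_2\), so its matrix has a single 1 in position \((2,1)\):
\[
\ket{f_2}\bra{e_1}=\left(\begin{array}{cc}0&0\\1&0\\0&0\end{array}\right),\qquad \dim L(U,V)=2\cdot3=6.
\]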


Exercise: Let \(U\) be the space of polynomials in \(x,y\) of degree at most 2. Find a basis for this space. Consider the operator
\[
AP=\left(x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}\right)P.
\]
Check that it maps \(U\) into \(U\). Find its matrix in your basis.


4. If one is given operators \(B:U\to V\), \(A:V\to W\), one can define their composition \(AB\) (bizarrely, we first apply \(B\)…); the matrix of this composition can be easily recovered from the matrices of \(A\) and \(B\).
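Explicitly, fixing a basis \(g_i\) in \(W\) as well and inserting the bra-ket representation of unity in \(V\) between \(A\) and \(B\) recovers the usual matrix product:
\[
AB=\sum_{i,k}\ket{g_i}\bra{g_i}A\Big(\sum_j\ket{f_j}\bra{f_j}\Big)B\ket{e_k}\bra{e_k}=\sum_{i,k}\Big(\sum_jA_{ij}B_{jk}\Big)\ket{g_i}\bra{e_k},
\]
so that \((AB)_{ik}=\sum_jA_{ij}B_{jk}\).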

5. The null-space and the range of a linear operator \(A:U\to V\) are defined as follows:
\[
\ker A=\{u\in U: Au=0\},\qquad \im A=\{v\in V:\ v=Au\ \mathrm{for\ some\ }u\in U\}.
\]

An easy check: both kernel and image of an operator \(A\in L(U,V)\) are linear subspaces (of \(U\) and \(V\), respectively).
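For the kernel, say: if \(Au_1=Au_2=0\), then
\[
A(\lambda_1u_1+\lambda_2u_2)=\lambda_1Au_1+\lambda_2Au_2=0,
\]
so \(\lambda_1u_1+\lambda_2u_2\in\ker A\) as well; the check for the image is similar.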


Exercise: Let \(U=\{a_0z^4+a_1z^3+\ldots+a_4\}\) be the space of all polynomials in one variable of degree at most 4. Let \(A:U\to U\) be given by
\[
AP=Q \Leftrightarrow Q(z)=P(z)+P(-z).
\]
Find \(\ker A\) and \(\im A\).


One often says that \(\im A\) is isomorphic to the factorspace \(U/\ker A\). One can see this as the definition of the factorspace.
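The isomorphism is explicit: a class modulo \(\ker A\) goes to the common value of \(A\) on it,
\[
u+\ker A\ \mapsto\ Au,
\]
which is well defined since replacing \(u\) by \(u+w\), \(w\in\ker A\), does not change \(Au\).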

6. There exists a fundamental relation
\[
\dim U=\dim\ker A+\dim\im A
\]
for any \(A\in L(U,V)\).
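For the differentiation example above (as a self-mapping of the polynomials of degree at most 2), \(\ker D\) consists of the constants and \(\im D\) of the polynomials of degree at most 1, so indeed
\[
\dim U=3=1+2=\dim\ker D+\dim\im D.
\]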

7. An important class of sub- and factorspaces of a linear space \(U\) comes from duality: if \(V\subset U\) is a linear subspace, then any linear function on \(U\) restricts to a linear function on \(V\). Hence we automatically get a mapping
\[
U^*\to V^*.
\]
The image is all of \(V^*\) (why?).
The kernel of this mapping consists of exactly those functionals that vanish on \(V\) (there is a special name for these: the annulator of \(V\), denoted \(V^\perp\)). So,
\[
V^*=U^*/V^\perp.
\]
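Counting dimensions here (and using \(\dim U^*=\dim U\), \(\dim V^*=\dim V\)) gives the dimension of the annulator:
\[
\dim V^\perp=\dim U^*-\dim V^*=\dim U-\dim V.
\]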

8. More generally, if we have a linear operator
\[
A:U\to V,
\]
we automatically obtain a linear operator
\[
A^*:V^*\to U^*,\qquad (A^*\xi)(u)=\xi(Au),\quad \xi\in V^*,\ u\in U.
\]

The matrix of \(A^*\) is the transpose of the matrix of \(A\) (if one uses the dual bases). In terms of bra-ket notations, applying \(A^*\) just boils down to multiplying by \(A\) from the left: \(\bra{\xi}\mapsto\bra{\xi}A\).
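Indeed, applying \(A^*\) to a dual basis covector \(\bra{f_i}\) and evaluating on \(e_j\) gives
\[
(A^*\bra{f_i})(e_j)=\bra{f_i}A\ket{e_j}=A_{ij},
\]
so the \((j,i)\) coefficient of the matrix of \(A^*\) is the \((i,j)\) coefficient of the matrix of \(A\).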

Generalizing the relation between the annulator and the image (note that \((\im A)^\perp=\ker A^*\): a functional vanishes on \(\im A\) exactly when it is killed by \(A^*\)), we have
\[
(\im A)^*=V^*/(\ker A^*).
\]
In particular, \(\dim U-\dim\ker A=\dim\im A=\dim(\im A)^*=\dim V^*-\dim\ker A^*\).


Exercise: Let \(U\) be the space of polynomials in one variable of degree at most 4. Consider the subspace of polynomials divisible by \(z^2-1\). Describe its annulator.


9. The rank of an operator (or of a matrix) is the dimension of its image. Equivalently, it is the maximal number of linearly independent columns; by the duality above (\(\dim\im A=\dim\im A^*\)), this also equals the maximal number of linearly independent rows.

10. Finding preimages under linear operators is the same as solving systems of linear equations. Here we have a simple but important

Theorem (Fredholm alternative):
If \(\dim U=\dim V\) and \(A:U\to V\) is linear, then either both systems \(Ax=b\), \(A^*y=c\) have unique solutions for any \(b\in V\), \(c\in U^*\), or the null-spaces of both \(A\) and \(A^*\) are nontrivial (and have the same dimension, \(\dim\ker A=\dim\ker A^*>0\)).
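A minimal illustration of the second case, in coordinates:
\[
A=\left(\begin{array}{cc}1&1\\1&1\end{array}\right),\qquad\ker A=\ker A^*=\mathrm{span}\left(\begin{array}{c}1\\-1\end{array}\right),
\]
and \(Ax=b\) is solvable only when \(b_1=b_2\), i.e. exactly when \(b\) is annihilated by \(\ker A^*\).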

One remarkable property of linear mappings is that if \(A:U\to V\) is one-to-one and onto, then the inverse mapping is also linear. This means that in this case there exists a linear operator \(B:V\to U\) such that \(BA=E_U\) (and, therefore, \(AB=E_V\)).
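A sketch of why the inverse \(B\) is linear: applying \(A\) to both sides of the claimed identity,
\[
A\big(\lambda_1Bv_1+\lambda_2Bv_2\big)=\lambda_1v_1+\lambda_2v_2=A\big(B(\lambda_1v_1+\lambda_2v_2)\big),
\]
and since \(A\) is one-to-one, \(B(\lambda_1v_1+\lambda_2v_2)=\lambda_1Bv_1+\lambda_2Bv_2\).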

This leads immediately to determinants.
