\(\def\Real{\mathbb{R}}\def\Comp{\mathbb{C}}\def\Rat{\mathbb{Q}}\def\Field{\mathbb{F}}\def\Fun{\mathbf{Fun}}\def\e{\mathbf{e}}

\def\f{\mathbf{f}}\def\bv{\mathbf{v}}\def\i{\mathbf{i}}

\def\eye{\left(\begin{array}{cc}1&0\\0&1\end{array}\right)}

\def\bra#1{\langle #1|}\def\ket#1{|#1\rangle}\def\j{\mathbf{j}}\def\dim{\mathrm{dim}}

\def\ker{\mathbf{ker}}\def\im{\mathbf{im}}

\def\tr{\mathrm{tr\,}}

\def\braket#1#2{\langle #1|#2\rangle}

\)

**1. **_Bilinear forms_ are functions \(Q:U\times U\to k\) that depend linearly on each of the arguments.

Alternatively, one can think of them as the linear operators

\[

A:U\to U^*, \mathrm{ \ with\ } Q(u,v)=A(u) (v).

\]

Once a basis of \(U\) is chosen, the bilinear form can be identified with the matrix of its coefficients:

\[

Q_{ij}=Q(e_i,e_j).

\]

(Notice that the order matters!)
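In coordinates, evaluating \(Q(u,v)\) amounts to the matrix product \(u^\top Q v\). A minimal numerical sketch (the matrix here is a made-up example, deliberately non-symmetric so the order of arguments shows):

```python
import numpy as np

# A made-up 2x2 coefficient matrix, Q_ij = Q(e_i, e_j); non-symmetric on purpose
Q = np.array([[1.0, 2.0],
              [0.0, 3.0]])

u = np.array([1.0, 1.0])
v = np.array([1.0, -1.0])

# Q(u, v) = sum_ij Q_ij u_i v_j = u^T Q v
quv = u @ Q @ v
qvu = v @ Q @ u
print(quv, qvu)  # -4.0 0.0 -- the order of the arguments matters
```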

Bilinear forms of rank 1 are just products of linear functions, \(\bra{u}\bra{v}\).

**2. **In the space of continuous functions on the interval \([a,b]\), a kernel \(K: [a,b]\times[a,b]\to\Real\) defines a bilinear form

\[

Q(f,g)=\int_a^b \int_a^b K(s,t)f(s)g(t) ds dt.

\]
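A discretized sketch of this integral bilinear form, using trapezoid quadrature weights and an assumed sample kernel \(K(s,t)=st\) on \([0,1]\):

```python
import numpy as np

a, b, n = 0.0, 1.0, 401
s = np.linspace(a, b, n)
w = np.full(n, (b - a) / (n - 1))   # trapezoid quadrature weights
w[0] /= 2
w[-1] /= 2

K = np.outer(s, s)                  # assumed sample kernel K(s,t) = s*t
f = np.ones(n)                      # f(s) = 1
g = np.ones(n)                      # g(t) = 1

# Q(f,g) ~ sum_{i,j} w_i f(s_i) K(s_i, s_j) g(s_j) w_j
Qfg = (w * f) @ K @ (w * g)
print(Qfg)  # close to (int_0^1 s ds)^2 = 0.25
```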

**3. **A bilinear form can be symmetric, \(Q(u,v)=Q(v,u)\), or skew-symmetric, \(Q(u,v)=-Q(v,u)\), and every bilinear form is the sum of a symmetric and a skew-symmetric one. (Skew-)symmetric forms correspond to (skew-)symmetric matrices.
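The decomposition is explicit on the level of matrices: \(Q=\tfrac12(Q+Q^\top)+\tfrac12(Q-Q^\top)\). A minimal sketch with a made-up matrix:

```python
import numpy as np

Q = np.array([[1.0, 2.0],
              [0.0, 3.0]])   # a made-up, non-symmetric coefficient matrix

S = (Q + Q.T) / 2            # symmetric part
A = (Q - Q.T) / 2            # skew-symmetric part

assert np.allclose(S, S.T) and np.allclose(A, -A.T)
print(S + A - Q)             # zero matrix: the two parts sum back to Q
```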

**4. **Passing to a new basis is straightforward:

\[

Q\mapsto C^\top Q C,

\]

where \(C \) is the matrix of passage from the new basis to the old one, and \(C^\top\) is its transpose.
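A quick numerical check (matrices are made-up examples): the value \(Q(u,v)\) does not depend on the basis, because coordinates transform as \(x_{\mathrm{old}}=Cx_{\mathrm{new}}\):

```python
import numpy as np

Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # Gram matrix in the old basis (assumed example)

C = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # columns = new basis vectors in old coordinates

Q_new = C.T @ Q @ C          # Gram matrix in the new basis

x_new = np.array([1.0, 2.0])
y_new = np.array([3.0, -1.0])
lhs = x_new @ Q_new @ y_new              # computed in the new basis
rhs = (C @ x_new) @ Q @ (C @ y_new)      # computed in the old basis
print(np.isclose(lhs, rhs))  # True
```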

**5. **Given a bilinear form \(Q\), one can derive a quadratic form \(q(x):=Q(x,x)\). Adding a skew-symmetric form to a bilinear form leaves the associated quadratic form unchanged. Hence one can assume that a quadratic form comes from a symmetric bilinear form, which can be recovered by polarization:

\[

Q(x,y)=\frac12(q(x+y)-q(x)-q(y)).

\]
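A sketch of polarization in coordinates, with an assumed symmetric Gram matrix \(S\):

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # an assumed symmetric Gram matrix

def q(x):
    """The quadratic form q(x) = Q(x, x)."""
    return x @ S @ x

def Q_polarized(x, y):
    """Recover the symmetric bilinear form from q by polarization."""
    return (q(x + y) - q(x) - q(y)) / 2

x = np.array([1.0, -2.0])
y = np.array([0.5, 4.0])
print(np.isclose(Q_polarized(x, y), x @ S @ y))  # True
```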

**6. **A quadratic form on a real vector space is called positive definite if

\[

q(x)>0 \mathrm{\ for\ any\ } x\neq 0.

\]

An example of a positive definite quadratic form is the standard Euclidean norm (in a basis \(\{e_k\}_{k=1}^n\)):

\[

q(x)=\sum x_k^2.

\]

One can use any positive definite quadratic form as the defining building block for a Euclidean space (define the norm of a vector as \(\|x\|^2:=q(x)\), the scalar product as \((x,y):=Q(x,y)\), etc.). Any theorem about Euclidean spaces can be reformulated and proven in terms of a positive-definite quadratic form without any loss (say, the Cauchy-Schwarz inequality

\[

(x,y)\leq \|x\|\|y\|

\]

translates into

\[

Q(x,y)\leq \sqrt{q(x)q(y)}.

\]

Still, one needs convenient coordinates. In the case of positive-definite forms, the normal form is obtained by the familiar process of Gram-Schmidt orthogonalization. The process is straightforward: given a basis \(f_k, k=1,\ldots, n\), set \(e_1:=f_1\), and then iterate for \(k\gt 1\):

\[

e_k=f_k+\sum_{l\lt k}c_{kl}e_l,

\]

where the coefficients \(c_{kl}\) are chosen so that \(Q(e_k,e_l)=0\) for \(l<k\), i.e.

\[

c_{kl}=-\frac{Q(f_k,e_l)}{Q(e_l,e_l)}.

\]

(Here we use the fact that \(e_l\neq 0\), thanks to the linear independence of the \(f\)'s, and the positive definiteness of \(Q\).)
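The iteration above can be sketched numerically; here \(Q\) is an assumed positive definite Gram matrix and the columns of \(F\) are the initial basis vectors:

```python
import numpy as np

def gram_schmidt(F, Q):
    """Q-orthogonalize the columns of F, where Q(u, v) = u^T Q v is
    assumed positive definite (so the denominators never vanish)."""
    E = F.astype(float).copy()
    for k in range(F.shape[1]):
        for l in range(k):
            # c_{kl} = -Q(f_k, e_l) / Q(e_l, e_l), as in the formula above
            c = (F[:, k] @ Q @ E[:, l]) / (E[:, l] @ Q @ E[:, l])
            E[:, k] -= c * E[:, l]
    return E

Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # an assumed positive definite Gram matrix
E = gram_schmidt(np.eye(2), Q)
print(np.round(E.T @ Q @ E, 10))  # diagonal: the new basis is Q-orthogonal
```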

Exercise: Consider the space of real polynomial functions of degree at most 3. Consider the form

\[

q_1(f):=|f(-1)|^2+|f(0)|^2+|f(1)|^2.

\]

Is this form positive definite? Consider another form,

\[

q(f)=\int_0^\infty e^{-x}|f(x)|^2dx.

\]

Diagonalize the form using Gram-Schmidt procedure starting with the standard monomial basis \(\{1,x,x^2,x^3\}\).

**7. **Of course, not only positive definite quadratic forms can be brought to a simple normal form; the Jacobi diagonalization process always works. It creates a sequence of linear functions \(e^*_k, k=1,\ldots,n\) (that is, elements of the dual space) such that

\[

q(v)=\sum_k a_k e^*_k(v)^2.

\]

The procedure works like this: in the original basis, the form is

\[

\sum_{k,l} Q_{kl}x_kx_l.

\]

If \(Q_{11}\neq 0\), one can represent

\[

q(x)=Q_{11}\left(x_1+\sum_{l\geq 2}\frac{Q_{1l}}{Q_{11}}\, x_l\right)^2+q^{(1)},

\]

where \(q^{(1)}\) depends only on coordinates \(x_2,\ldots,x_n\). Set \(y_1=x_1+\sum_{l\geq 2}Q_{1l}/Q_{11} x_l\); continuing inductively, we get a diagonalized quadratic form.

If at some step all diagonal elements vanish, \(Q_{ll}=0\) for all \(l\), pick a pair with \(Q_{lm}\neq 0\) and pass to the coordinates \(x_l-x_m, x_l+x_m\); this makes the corresponding diagonal entries nonzero.
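The completion-of-squares step can be sketched as follows; a minimal implementation assuming every pivot encountered is nonzero (the degenerate case needs the extra \(x_l\pm x_m\) substitution just described):

```python
import numpy as np

def lagrange_diagonalize(Q):
    """Diagonal coefficients a_k via completion of squares.
    Assumes every pivot Q[0, 0] encountered is nonzero."""
    Q = Q.astype(float)
    coeffs = []
    while Q.shape[0] > 0:
        a = Q[0, 0]
        coeffs.append(a)
        if Q.shape[0] == 1:
            break
        r = Q[0, 1:]
        # q^(1) has Gram matrix equal to the Schur complement of Q_11
        Q = Q[1:, 1:] - np.outer(r, r) / a
    return coeffs

# q = x1^2 + 4 x1 x2 + x2^2 = (x1 + 2 x2)^2 - 3 x2^2
Q = np.array([[1.0, 2.0],
              [2.0, 1.0]])
print(lagrange_diagonalize(Q))  # [1.0, -3.0]
```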

**8. **The numbers \(n_+, n_-\) of positive and negative coefficients in a diagonal representation of the quadratic form are called its signature. The signature is an invariant of the quadratic form (that is, no matter which diagonalization process is used, the signature of the result is always the same). In fact, \(n_+\) (respectively, \(n_-\)) is the largest dimension of a subspace on which the restriction of the quadratic form is positive (respectively, negative) definite. At the same time, \(n_0:=n-n_+-n_-\) is the dimension of \(\ker Q\).
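By this invariance (Sylvester's law of inertia), any diagonalization gives the same \(n_+, n_-, n_0\); in particular, counting the eigenvalue signs of the symmetric Gram matrix is a convenient alternative route. A sketch, with a made-up degenerate example:

```python
import numpy as np

def signature(Q, tol=1e-10):
    """(n_+, n_-, n_0) of a symmetric matrix Q, computed from the signs
    of its eigenvalues; agrees with any diagonalization by the invariance
    of the signature."""
    eig = np.linalg.eigvalsh(Q)
    return (int((eig > tol).sum()),
            int((eig < -tol).sum()),
            int((np.abs(eig) <= tol).sum()))

Q = np.array([[1.0, 2.0, 0.0],
              [2.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])   # a made-up degenerate example
print(signature(Q))  # (1, 1, 1): eigenvalues are 3, -1, 0
```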

**9. **The diagonalization process can be made perfectly efficient if the leading principal minors of the Gram matrix, that is, the determinants

\[

\Delta_0:=1, \Delta_1:=Q_{11}, \Delta_2=\left|\begin{array}{cc}Q_{11}& Q_{12}\\Q_{21}&Q_{22}\\\end{array}\right|,\ldots

\]

are non-vanishing. In this case, the diagonalization results in the quadratic form

\[

\sum_{k=1}^n\frac{\Delta_k}{\Delta_{k-1}}y_k^2,

\]

implying the Sylvester criterion for a quadratic form to be positive definite: all leading principal minors of its Gram matrix (with respect to any basis) should be positive.
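A sketch of this criterion and of the diagonal coefficients \(\Delta_k/\Delta_{k-1}\), for an assumed tridiagonal example:

```python
import numpy as np

def leading_minors(Q):
    """Delta_1, ..., Delta_n: determinants of the top-left k x k blocks."""
    return [np.linalg.det(Q[:k, :k]) for k in range(1, Q.shape[0] + 1)]

def is_positive_definite(Q):
    # Sylvester criterion: all leading principal minors are positive
    return all(d > 0 for d in leading_minors(Q))

Q = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])   # an assumed example Gram matrix
d = [1.0] + leading_minors(Q)     # prepend Delta_0 = 1
ratios = [d[k] / d[k - 1] for k in range(1, len(d))]
print(is_positive_definite(Q))    # True
print(ratios)                     # the diagonal coefficients Delta_k / Delta_{k-1}
```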

Exercise. Find the signature of the quadratic form \(q_1\) above.
