
solutions for problems on vector analysis and differential forms

Solutions

Exercise:
Assume that four vectors in \(\mathbb{R}^3\) satisfy \(A+B+C+D=0\).
Simplify
$$ A\times B-B\times C+C \times D-D\times A$$

Solution: By anti-symmetry, \(-B\times C = C\times B\)
and \(-D\times A = A\times D\), so we can rewrite the expression as
$$ A\times B+C\times B+C \times D+A\times D. $$
Using the distributive law of the cross product, we have
$$ (A+C)\times B+(A+C) \times D, $$
and again
$$ (A+C)\times (B+D). $$
However, since \(A+B+C+D=0\) gives \(F:=A+C = -(B+D)\), this equals
$$ F\times (-F) = -(F\times F) = 0. $$
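A quick numerical sanity check of this identity (throwaway code; the helper name is mine):

```python
def cross(a, b):
    # cross product in R^3, componentwise
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

A, B, C = [1, 2, 3], [-4, 0, 5], [2, 2, -1]
D = [-(A[i] + B[i] + C[i]) for i in range(3)]  # so that A + B + C + D = 0

# A x B - B x C + C x D - D x A should vanish identically
expr = [cross(A, B)[i] - cross(B, C)[i] + cross(C, D)[i] - cross(D, A)[i]
        for i in range(3)]
print(expr)  # [0, 0, 0]
```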


Exercise: Integrate \(dx/y\) over the circle (oriented counterclockwise)
\(C:=\{x^2+y^2=R^2\}\).…


March 28

  • Consider the \(k\)-form \(\omega \) defined at \(x\in \Omega\subset \mathbb{R}^n\), given by the sum
    $$ \omega(x)= \sum_I c_I(x)\, dx_I $$
    where the multi-index
    \( I=\{1\leq i_1 < i_2 < \ldots < i_k \leq n\} \) ranges over strictly increasing \(k\)-tuples.
    Each basis element
    \(dx_I:= dx_{i_1} \wedge dx_{i_2} \wedge \ldots \wedge dx_{i_k} \)
    is a \(k\)-form taking the (tangent) vectors \((v_1,\ldots,v_k)\), and its value
    is determined by the \(k\times k\) determinant of the \(I\)-rows sliced from
    the \(n\times k\) matrix whose columns are the vectors \(v_i\), i.e.
    $$ dx_I(v_1,\ldots,v_k) = \text{det}\left[\begin{array}{c|c|c|c|c} ~&v_1 & v_2 & \ldots & v_k \\
    \hline
    i_1 & & & & \\
    \vdots & & & & \\
    i_k & & & &
    \end{array} \right]$$
  • Integration of forms: we can integrate \(k\)-forms over \(k\)-dimensional patches,
    generalizing the idea behind Riemann integration in elementary calculus.
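The determinant rule above is easy to sketch in code (a throwaway implementation; the function names are my own):

```python
from itertools import permutations

def det(m):
    """Determinant via the Leibniz permutation formula (fine for small k)."""
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        # count inversions to get the sign of the permutation
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= m[i][perm[i]]
        total += sign * prod
    return total

def dx_I(I, vectors):
    """Evaluate dx_I on tangent vectors v_1, ..., v_k in R^n.

    I is a list of 1-based indices i_1 < ... < i_k; each vector is a
    length-n list. Slice out the I-rows of the n x k matrix whose
    columns are the vectors, then take the k x k determinant."""
    rows = [[v[i - 1] for v in vectors] for i in I]
    return det(rows)

# dx_1 ^ dx_2 in R^3 on (e_1, e_2): the area element of the xy-plane
print(dx_I([1, 2], [[1, 0, 0], [0, 1, 0]]))  # 1
```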

    3.14 Vector Analysis And Classical Identities in 3D

    \(\def\pd{\partial}\def\Real{\mathbb{R}}
    \)

    1. There are two main types of products between vectors in \(\mathbb{R}^3\):
      The inner/scalar/dot product
      $$ A\cdot B = A_xB_x + A_yB_y + A_z B_z \in \mathbb{R} $$
      is commutative, distributive, and homogeneous.
      The vector (cross) product:
      $$ A\times B = \begin{pmatrix}
      A_yB_z-A_zB_y \\
      A_zB_x-A_xB_z \\
      A_xB_y-A_yB_x\end{pmatrix} \in \mathbb{R}^3 $$
      is homogeneous, not commutative, not associative, but linear with
      respect to each entry.
    2. The cross product \(A\times B\) is always perpendicular to \(A\) and \(B\),
      and its length equals \(|A||B|\sin(\phi)\), where \(\phi\) is the angle
      between the vectors.
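Both properties are easy to sanity-check numerically (a throwaway sketch; the helper names are mine):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def norm(a):
    return math.sqrt(dot(a, a))

A, B = [1.0, 2.0, 0.0], [0.0, 1.0, 3.0]
C = cross(A, B)

# C is perpendicular to both A and B
print(dot(C, A), dot(C, B))  # 0.0 0.0

# |A x B| = |A||B| sin(phi), with phi the angle between A and B
phi = math.acos(dot(A, B) / (norm(A) * norm(B)))
print(abs(norm(C) - norm(A) * norm(B) * math.sin(phi)) < 1e-12)  # True
```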

    2.28 operators in Hermitian spaces and their spectra.

    \(\def\Real{\mathbb{R}}
    \def\Comp{\mathbb{C}}\def\Rat{\mathbb{Q}}\def\Field{\mathbb{F}}\def\Fun{\mathbf{Fun}}\def\e{\mathbf{e}}
    \def\f{\mathbf{f}}\def\bv{\mathbf{v}}\def\i{\mathbf{i}}
    \def\eye{\left(\begin{array}{cc}1&0\\0&1\end{array}\right)}
    \def\bra#1{\langle #1|}\def\ket#1{|#1\rangle}\def\j{\mathbf{j}}\def\dim{\mathrm{dim}}
    \def\ker{\mathbf{ker}}\def\im{\mathbf{im}}
    \def\tr{\mathrm{tr\,}}
    \def\braket#1#2{\langle #1|#2\rangle}
    \)

    1.
    Given a bilinear form \(Q(\cdot,\cdot)\), or, equivalently, a mapping \(Q:U\to U^*\), one can easily bake new bilinear forms from linear operators \(A:U\to U\): just take \(Q_A(u,v):=Q(u,Av)\).

    This opens the door to exploiting the interplay between the normal forms of operators and the properties of the corresponding bilinear forms. One problem, though, is that the normal forms of endomorphisms involve complex eigenpairs, and the most natural bilinear form on Euclidean space, the scalar product, is not positive definite if one extends the ground field to the complex numbers: for a quadratic form, \(q(\lambda v)=\lambda^2q(v)\), which cannot be nonnegative for all \(\lambda\in\Comp\).…
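In coordinates the recipe \(Q_A(u,v):=Q(u,Av)\) is just a matrix product: if \(Q\) and \(A\) have matrices \((Q_{ij})\) and \((A_{ij})\), then \(Q_A\) has matrix \(QA\), since \(Q_A(u,v)=u^\top Q A v\). A minimal pure-Python check (all names are mine):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

def bilinear(M, u, v):
    """Q(u, v) = u^T M v for the coefficient matrix M."""
    return sum(u[i] * M[i][j] * v[j]
               for i in range(len(u)) for j in range(len(v)))

def apply(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

Q = [[2, 1], [0, 3]]
A = [[1, 4], [5, 6]]
u, v = [1, 2], [3, -1]

# Q_A(u, v) := Q(u, Av) agrees with the bilinear form of the matrix Q A
print(bilinear(Q, u, apply(A, v)) == bilinear(matmul(Q, A), u, v))  # True
```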


    midterm

    problems and solutions here.…


    2.14 quadratic forms

    \(\def\Real{\mathbb{R}}\def\Comp{\mathbb{C}}\def\Rat{\mathbb{Q}}\def\Field{\mathbb{F}}\def\Fun{\mathbf{Fun}}\def\e{\mathbf{e}}
    \def\f{\mathbf{f}}\def\bv{\mathbf{v}}\def\i{\mathbf{i}}
    \def\eye{\left(\begin{array}{cc}1&0\\0&1\end{array}\right)}
    \def\bra#1{\langle #1|}\def\ket#1{|#1\rangle}\def\j{\mathbf{j}}\def\dim{\mathrm{dim}}
    \def\ker{\mathbf{ker}}\def\im{\mathbf{im}}
    \def\tr{\mathrm{tr\,}}
    \def\braket#1#2{\langle #1|#2\rangle}
    \)

    1. Bilinear forms are functions \(Q:U\times U\to k \) that depend on each of the arguments linearly.

    Alternatively, one can think of them as the linear operators
    \[
    A:U\to U^*, \mathrm{ \ with\ } Q(u,v)=A(u) (v).
    \]
    If \(U\) has a basis, the bilinear form can be identified with the matrix of its coefficients:
    \[
    Q_{ij}=Q(e_i,e_j).
    \]

    (Notice that the order matters!)

    Bilinear forms of rank 1 are just products of linear functions, \(\bra{u}\bra{v}\).…
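For instance, the coefficient matrix \(Q_{ij}=Q(e_i,e_j)\) of a rank-1 form \(Q(u,v)=f(u)g(v)\) is the outer product of the coefficient vectors of \(f\) and \(g\). A small sketch (names are made up):

```python
def rank1_form(f, g):
    """Bilinear form Q(u, v) = f(u) * g(v), where the linear functionals
    are given by their coefficient vectors f and g."""
    def Q(u, v):
        return sum(fi * ui for fi, ui in zip(f, u)) * \
               sum(gj * vj for gj, vj in zip(g, v))
    return Q

def matrix_of(Q, n):
    """Q_ij = Q(e_i, e_j) -- note that the order of the arguments matters."""
    e = [[1 if k == i else 0 for k in range(n)] for i in range(n)]
    return [[Q(e[i], e[j]) for j in range(n)] for i in range(n)]

f, g = [1, 2], [3, -1]
Q = rank1_form(f, g)
print(matrix_of(Q, 2))  # [[3, -1], [6, -2]] -- the outer product f g^T
```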


    solutions to 2.7

    Exercise: find the product of rotations by \(\frac{\pi}{2}\) around \(x, y\) and then \(z\) axes.
    Solution: \( \newcommand{\Rot}[1]{\overset{R_#1}{\longrightarrow}} \) Let \(R_x,R_y\) and \(R_z\) denote the matrices of rotation by \(\frac{\pi}{2}\) about the \(x,y\) and \(z\) axes respectively, as defined in the lecture notes: $$ R_x = \left[\begin{array}{c|cc} 1 & 0 & 0\\ \hline 0 & 0 & 1 \\ 0 & -1 & 0 \end{array}\right],~~~ R_y = \left[\begin{array}{c|c|c} 0 & 0 & -1 \\ \hline 0 & 1 & 0 \\ \hline 1 & 0& 0 \end{array}\right],~~~ R_z = \left[\begin{array}{cc|c} 0 & 1 & 0 \\ -1 & 0 & 0 \\ \hline 0 & 0 & 1 \end{array}\right] $$ Our goal is to compute \(R_zR_yR_x\), and although this is a very simple product, we can also deduce the overall rotation by pure geometrical means.…
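Using the matrices above, the product is quick to verify (a throwaway pure-Python check):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

# Matrices of rotation by pi/2 about the x, y, z axes, as given above
Rx = [[1, 0, 0], [0, 0, 1], [0, -1, 0]]
Ry = [[0, 0, -1], [0, 1, 0], [1, 0, 0]]
Rz = [[0, 1, 0], [-1, 0, 0], [0, 0, 1]]

M = matmul(Rz, matmul(Ry, Rx))  # R_x acts first
print(M)  # [[0, 0, 1], [0, -1, 0], [1, 0, 0]]
# trace(M) = -1 = 1 + 2 cos(theta), so the composition rotates by pi
```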


    2.7 functions of operators

    \(\def\Real{\mathbb{R}}\def\Comp{\mathbb{C}}\def\Rat{\mathbb{Q}}\def\Field{\mathbb{F}}\def\Fun{\mathbf{Fun}}\def\e{\mathbf{e}}\def\f{\mathbf{f}}\def\bv{\mathbf{v}}\def\i{\mathbf{i}}
    \def\eye{\left(\begin{array}{cc}1&0\\0&1\end{array}\right)}
    \def\bra#1{\langle #1|}\def\ket#1{|#1\rangle}\def\j{\mathbf{j}}\def\dim{\mathrm{dim}}
    \def\braket#1#2{\langle #1|#2\rangle}
    \def\ker{\mathbf{ker}}\def\im{\mathbf{im}}
    \def\e{\mathcal{E}}
    \def\tr{\mathrm{tr}}
    \)

    1.
    A linear operator \(A:U\to U\) that maps a space into itself is called an endomorphism. Such operators can be composed with impunity; in elevated language, they form an algebra. (It is a generalization of our representation of complex numbers as \(2\times 2\) matrices.)


    2.
    This means we can form some functions of operators. Polynomials are the easiest ones: if \(P=a_0x^n+\ldots+a_n\), then
    \[
    P(A)=a_0A^n+\ldots+a_n E.
    \]

    Other functions can be defined as well, if they can be approximated by polynomials: the exponential is a familiar example:
    \[
    \exp(A)=\sum_{k\geq 0} A^k/k!
    \]…
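A minimal sketch of both constructions (helper names are mine): a polynomial of a matrix via Horner's scheme, and the exponential as a truncated power series.

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def poly_of(coeffs, A):
    """P(A) = a_0 A^n + ... + a_n E, evaluated by Horner's scheme."""
    n = len(A)
    result = [[coeffs[0] * (1.0 if i == j else 0.0) for j in range(n)]
              for i in range(n)]
    for a in coeffs[1:]:
        result = matmul(result, A)
        for i in range(n):
            result[i][i] += a
    return result

def exp_of(A, terms=20):
    """exp(A) approximated by the partial sum of A^k / k!."""
    n = len(A)
    total, power, fact = identity(n), identity(n), 1.0
    for k in range(1, terms):
        power = matmul(power, A)
        fact *= k
        for i in range(n):
            for j in range(n):
                total[i][j] += power[i][j] / fact
    return total

# exp of t * [[0,-1],[1,0]] is the rotation matrix by angle t
t = 1.0
E = exp_of([[0.0, -t], [t, 0.0]])
print(abs(E[0][0] - math.cos(t)) < 1e-9)  # True
```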


    solutions to exercises 1.26

    Exercise: Find the determinants of Jacobi matrices with \(a=1,bc=-1\) and \(a=-1,bc=-1\).
    Solution: First assume \(a=1,bc=-1\), and use the recursion obtained in the lecture notes, i.e. $$j_{k+1}=aj_k-bcj_{k-1} = j_k+j_{k-1}$$

    Those are Fibonacci numbers with initial conditions obtained by the first two determinants: $$ j_1 = \text{det}(a) = a = 1,~~~ ~~~ j_2 = \left| \begin{matrix} a & b \\ c & a \end{matrix} \right| = a^2-bc = 1+1 = 2, $$ and the rest follows the standard Fibonacci sequence (\(1,2,3,5,8,13,\ldots\)). For the second case, \(a=-1,bc=-1\), we similarly get $$ j_{k+1}=aj_k-bcj_{k-1} = -j_k+j_{k-1}$$ with initial conditions \(j_1 = -1\) and \(j_2 = 1+1 = 2 \), which is the alternating Fibonacci sequence (\(-1,2,-3,5,-8,13,\ldots\)).…
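The recursion can be confirmed against a direct determinant computation (a quick sketch, taking \(b=1,\ c=-1\) so that \(bc=-1\)):

```python
from itertools import permutations

def det(m):
    """Leibniz-formula determinant -- fine for the small sizes here."""
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= m[i][p[i]]
        total += sign * prod
    return total

def jacobi(n, a, b, c):
    """n x n tridiagonal matrix: a on the diagonal, b above, c below."""
    return [[a if i == j else b if j == i + 1 else c if j == i - 1 else 0
             for j in range(n)] for i in range(n)]

# a = 1, bc = -1: determinants follow j_{k+1} = j_k + j_{k-1} (Fibonacci)
print([det(jacobi(n, 1, 1, -1)) for n in range(1, 7)])   # [1, 2, 3, 5, 8, 13]
# a = -1, bc = -1: the alternating version
print([det(jacobi(n, -1, 1, -1)) for n in range(1, 7)])  # [-1, 2, -3, 5, -8, 13]
```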


    1.26 determinants

    \(\def\Real{\mathbb{R}}\def\Comp{\mathbb{C}}\def\Rat{\mathbb{Q}}\def\Field{\mathbb{F}}\def\Fun{\mathbf{Fun}}\def\e{\mathbf{e}}
    \def\f{\mathbf{f}}\def\bv{\mathbf{v}}\def\i{\mathbf{i}}
    \def\eye{\left(\begin{array}{cc}1&0\\0&1\end{array}\right)}
    \def\bra#1{\langle #1|}\def\ket#1{|#1\rangle}\def\j{\mathbf{j}}\def\dim{\mathrm{dim}}
    \def\ker{\mathbf{ker}}\def\im{\mathbf{im}}
    \)

    1. The determinant is a function of square matrices (it can be defined for any operator \(A:U\to U\), but we’ll avoid this abstract detour). It can be characterized as the function with the following properties:

    1. It is linear in columns: if in a matrix \(A\) a column (say, the \(k\)-th) is \(\lambda_1 e_1 +\lambda_2 e_2\), then
      \[
      f(A)=\lambda_1f(A_1)+\lambda_2f(A_2),
      \]
      where \(A_i\) is obtained by replacing the \(k\)-th column with \(e_i\), \(i=1,2\).
    2. It is zero if two columns are the same, and
    3. \(f(E)=1\).
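All three properties are easy to test numerically for the usual determinant (a quick sketch with ad-hoc names):

```python
from itertools import permutations

def det(m):
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= m[i][p[i]]
        total += sign * prod
    return total

def replace_col(A, k, col):
    return [[col[i] if j == k else A[i][j] for j in range(len(A))]
            for i in range(len(A))]

A = [[2, 7, 1], [0, 3, 4], [5, 5, 6]]
e1, e2 = [1, 0, 0], [0, 1, 0]
l1, l2 = 3, -2

# 1. linearity in a column (here the k = 1 column)
combo = [l1 * x + l2 * y for x, y in zip(e1, e2)]
lhs = det(replace_col(A, 1, combo))
rhs = l1 * det(replace_col(A, 1, e1)) + l2 * det(replace_col(A, 1, e2))
print(lhs == rhs)  # True

# 2. zero when two columns coincide (copy column 0 into column 1)
print(det(replace_col(A, 1, [A[i][0] for i in range(3)])) == 0)  # True

# 3. normalization: det(E) = 1
print(det([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1)  # True
```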