# Archive | research


## 1.26 determinants

$$\def\Real{\mathbb{R}}\def\Comp{\mathbb{C}}\def\Rat{\mathbb{Q}}\def\Field{\mathbb{F}}\def\Fun{\mathbf{Fun}}\def\e{\mathbf{e}} \def\f{\mathbf{f}}\def\bv{\mathbf{v}}\def\i{\mathbf{i}} \def\eye{\left(\begin{array}{cc}1&0\\0&1\end{array}\right)} \def\bra#1{\langle #1|}\def\ket#1{|#1\rangle}\def\j{\mathbf{j}}\def\dim{\mathrm{dim}} \def\ker{\mathbf{ker}}\def\im{\mathbf{im}}$$

1. The determinant is a function of square matrices (it can be defined for any operator $$A:U\to U$$, but we’ll avoid this abstract detour). It can be defined as the function with the following properties:

1. It is linear in columns (i.e. if in a matrix $$A$$ a column, say the $$k$$-th, is $$\lambda_1 e_1 +\lambda_2 e_2$$, then
$f(A)=\lambda_1f(A_1)+\lambda_2f(A_2),$
where $$A_i$$ is obtained by replacing the $$k$$-th column with $$e_i,\ i=1,2$$),
2. It is zero if two columns are the same, and
3. $$f(E)=1$$, where $$E$$ is the identity matrix.
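These three defining properties can be checked numerically against `numpy.linalg.det` (a sketch; the particular matrices and coefficients are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
e1, e2 = rng.standard_normal(3), rng.standard_normal(3)
l1, l2 = 2.0, -3.0

# Property 1: linearity in the k-th column (here k = 1).
A1, A2, A12 = A.copy(), A.copy(), A.copy()
A1[:, 1], A2[:, 1], A12[:, 1] = e1, e2, l1*e1 + l2*e2
assert np.isclose(np.linalg.det(A12),
                  l1*np.linalg.det(A1) + l2*np.linalg.det(A2))

# Property 2: the determinant vanishes if two columns coincide.
B = A.copy()
B[:, 2] = B[:, 0]
assert np.isclose(np.linalg.det(B), 0.0)

# Property 3: det(E) = 1 for the identity matrix E.
assert np.isclose(np.linalg.det(np.eye(3)), 1.0)
```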

## 1.24 linear operators

$$\def\Real{\mathbb{R}}\def\Comp{\mathbb{C}}\def\Rat{\mathbb{Q}}\def\Field{\mathbb{F}}\def\Fun{\mathbf{Fun}}\def\e{\mathbf{e}} \def\f{\mathbf{f}}\def\bv{\mathbf{v}}\def\i{\mathbf{i}} \def\eye{\left(\begin{array}{cc}1&0\\0&1\end{array}\right)} \def\bra#1{\langle #1|}\def\ket#1{|#1\rangle}\def\j{\mathbf{j}}\def\dim{\mathrm{dim}} \def\ker{\mathbf{ker}}\def\im{\mathbf{im}}$$

1. Linear mappings, or linear operators, are just linear functions taking values in a linear space:
$A:U\to V,\ \mathrm{where\ both\ } U\ \mathrm{and\ } V\ \mathrm{ are\ linear\ spaces.}$
The same properties (with respect to the additivity and multiplication by a constant) are assumed:
$A(\lambda_1u_1+\lambda_2u_2)=\lambda_1A(u_1)+\lambda_2A(u_2), \mathrm{etc.}$

2. We will denote the space of linear operators from $$U$$ to $$V$$ as $$L(U,V)$$. It is again a linear space (this is a routine exercise to verify).…
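In coordinates, an operator $$A:\Real^3\to\Real^2$$ is just a $$2\times 3$$ matrix, and the linear-space structure of $$L(U,V)$$ corresponds to entrywise addition and scaling of matrices. A quick numerical check (the particular matrices and vectors are arbitrary):

```python
import numpy as np

# A linear operator A: R^3 -> R^2 is given by a 2x3 matrix.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])
B = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, -1.0]])

u1 = np.array([1.0, 0.0, 2.0])
u2 = np.array([0.0, 1.0, -1.0])
l1, l2 = 2.0, 5.0

# Linearity: A(l1*u1 + l2*u2) = l1*A(u1) + l2*A(u2).
assert np.allclose(A @ (l1*u1 + l2*u2), l1*(A @ u1) + l2*(A @ u2))

# L(U,V) is itself a linear space: (A + B)(u) = A(u) + B(u).
assert np.allclose((A + B) @ u1, A @ u1 + B @ u1)
```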

## 1.19 Linear spaces

$$\def\Real{\mathbb{R}}\def\Comp{\mathbb{C}}\def\Rat{\mathbb{Q}}\def\Field{\mathbb{F}}\def\Fun{\mathbf{Fun}}\def\e{\mathbf{e}} \def\f{\mathbf{f}}\def\bv{\mathbf{v}}\def\i{\mathbf{i}} \def\eye{\left(\begin{array}{cc}1&0\\0&1\end{array}\right)} \def\bra#1{\langle #1|}\def\ket#1{|#1\rangle}\def\j{\mathbf{j}}\def\dim{\mathrm{dim}} \def\ker{\mathbf{ker}}\def\im{\mathbf{im}}$$

1. Linear (or vector) spaces (over a field $$k$$ – which in our case will be almost always the complex numbers) are sets that satisfy the following standard list of properties:

• There exist (associative and commutative) addition and subtraction operations, along with a neutral element, the origin,
• …and a multiplication by scalar, again associative,
• …and distributivity holds.

Examples: The complex numbers are a vector space over the real numbers. The real numbers are a vector space over the rational numbers.…

## solutions to exercises 1.17

Exercise 1: describe the curve $$1/z$$, where $$z=1+ti, ~~t\in (-\infty,\infty)$$.

Solution: apply $$\frac{1}{z} = \frac{\bar z}{|z|^2}$$ and get $$\frac{1}{1+ti} = \frac{1-ti}{1+t^2}$$, which is a parametric curve  of the form $$p(t) = x(t)+iy(t)$$ where $$x(t) = \frac{1}{1+t^2},~~~ y(t) = \frac{-t}{1+t^2}$$.

This is a circle (without the origin), of radius $$1/2$$ and center at $$(1/2,0)$$.
Indeed, the distance of $$\frac{1}{z}$$ from the center $$\frac{1}{2}$$ is constant: $$\left|\frac{1}{z}-\frac{1}{2}\right|^2= \left|\frac{2-z}{2z}\right|^2=\frac{1}{4}\left|\frac{2-(1+it)}{1+it}\right|^2 = \frac{1}{4}\frac{|1-it|^2}{|1+it|^2} = \frac{1}{4}\frac{1+t^2}{1+t^2} = \frac{1}{4},$$ so the distance equals $$\frac{1}{2}$$.

How to arrive at that:

• Smoothness: both $$x(t),y(t)$$ are rational functions of $$t$$ that have no poles (their denominator is always positive), hence the curve is smooth at all parameter values.
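The circle claim is also easy to verify numerically (a sketch; the parameter range is an arbitrary sample):

```python
import numpy as np

t = np.linspace(-10.0, 10.0, 201)
w = 1.0/(1.0 + 1j*t)   # the image curve 1/z for z = 1 + ti
# Every point of the image lies at distance 1/2 from the center 1/2 + 0i.
assert np.allclose(np.abs(w - 0.5), 0.5)
```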

## 1.17 Complex numbers

$$\def\Real{\mathbb{R}}\def\Comp{\mathbb{C}}\def\Rat{\mathbb{Q}}\def\Field{\mathbb{F}}\def\Fun{\mathbf{Fun}}\def\e{\mathbf{e}}\def\f{\mathbf{f}}\def\bv{\mathbf{v}}\def\i{\mathbf{i}} \def\eye{\left(\begin{array}{cc}1&0\\0&1\end{array}\right)}$$

Complex numbers are expressions $$z=x+\i y$$, where $$\i$$ is the imaginary unit, defined by $$\i^2=-1$$. Note the ambiguity ($$-\i$$ would work just as well).

Complex numbers can be added, subtracted, and multiplied the usual way, with the usual properties of associativity, commutativity and distributivity. There is a zero, and a norm, $$\|x+y\i\|^2=x^2+y^2$$.

Flipping $$\i$$ gives a conjugate complex number:
$\overline{x+\i y}=x-\i y.$

The norm satisfies $$\|z\|^2=z\bar{z}$$.

Complex conjugacy is multiplicative, $$\overline{z_1z_2}=\bar{z}_1\bar{z}_2$$, and so is the norm.…
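Both identities are easy to confirm with Python’s built-in complex type (the sample values are arbitrary):

```python
import math

z1, z2 = 1 + 2j, 3 - 1j

# Conjugation is multiplicative.
assert (z1*z2).conjugate() == z1.conjugate()*z2.conjugate()

# ||z||^2 = z * conj(z) (the product is real).
assert math.isclose((z1*z1.conjugate()).real, abs(z1)**2)

# The norm is multiplicative (up to floating-point rounding).
assert math.isclose(abs(z1*z2), abs(z1)*abs(z2))
```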

## mock final

Here. (Disregard the points for the problems.)

## week 13

$$\def\Real{\mathbb{R}} \def\Comp{\mathbb{C}} \def\Rat{\mathbb{Q}} \def\Field{\mathbb{F}} \def\Fun{\mathbf{Fun}} \def\e{\mathbf{e}} \def\f{\mathbf{f}} \def\bv{\mathbf{v}}$$

#### Interior point methods

Logarithmic barriers. Central path. Convergence and quality-of-solution estimates from duality. (Boyd, chapter 11.)

#### Ellipsoid method

(Use Boyd’s notes).

#### Sample problems

• Find the central path for the problem
$\begin{array}{rl} ax+by\to\min, & \mathrm{subject\ to}\\ x\geq 0,\\ y\geq 0,\\ x+y\leq 1, \end{array}$
for different $$a,b$$ (for example, $$a=3,b=5$$).
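A minimal numerical sketch of the central path for this problem, assuming the standard log-barrier formulation (minimize $$t(ax+by)-\ln x-\ln y-\ln(1-x-y)$$ by damped Newton steps; the starting point and iteration count are arbitrary choices):

```python
import numpy as np

def central_path_point(a, b, t, start=(0.25, 0.25), iters=60):
    """One point x*(t) of the central path: minimize the barrier objective
    f(x,y) = t*(a x + b y) - ln x - ln y - ln(1 - x - y) by damped Newton."""
    def f(x, y):
        return t*(a*x + b*y) - np.log(x) - np.log(y) - np.log(1 - x - y)

    x, y = start
    for _ in range(iters):
        s = 1.0 - x - y
        g = np.array([t*a - 1/x + 1/s, t*b - 1/y + 1/s])      # gradient
        H = np.array([[1/x**2 + 1/s**2, 1/s**2],
                      [1/s**2,          1/y**2 + 1/s**2]])    # Hessian
        dx, dy = np.linalg.solve(H, -g)                       # Newton step
        step = 1.0
        # Backtrack until the step stays strictly feasible and decreases f.
        while (x + step*dx <= 0 or y + step*dy <= 0
               or x + step*dx + y + step*dy >= 1
               or f(x + step*dx, y + step*dy) > f(x, y)):
            step *= 0.5
            if step < 1e-12:
                break
        x, y = x + step*dx, y + step*dy
    return x, y

for t in (1.0, 10.0, 100.0):
    print(t, central_path_point(3.0, 5.0, t))
```

As $$t\to\infty$$ the computed points move toward the optimum $$(0,0)$$ of the linear program (for $$a=3,b=5$$ both coefficients are positive, so the origin is optimal).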

## week 12

$$\def\Real{\mathbb{R}} \def\Comp{\mathbb{C}} \def\Rat{\mathbb{Q}} \def\Field{\mathbb{F}} \def\Fun{\mathbf{Fun}} \def\e{\mathbf{e}} \def\f{\mathbf{f}} \def\bv{\mathbf{v}}$$

#### Conjugate gradient method

Material: Guler, ch. 14.7–14.9.

Conjugate directions. Gram-Schmidt algorithm. Conjugate directions algorithm. Performance estimates via spectral data.
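Gram-Schmidt in a non-standard inner product can be run entirely in coefficient coordinates; a sketch for the polynomial space of the sample problem below, where the Gram matrix of the monomials encodes $$\int_{-1}^1 p_1(x)p_2(x)\,dx$$:

```python
import numpy as np

# Gram matrix of the monomial basis {1, x, x^2, x^3} for the inner
# product <p,q> = integral of p(x)q(x) over [-1,1]: G[i,j] = int x^{i+j} dx.
n = 4
G = np.array([[2.0/(i + j + 1) if (i + j) % 2 == 0 else 0.0
               for j in range(n)] for i in range(n)])

def gram_schmidt(vectors, G):
    """Classical Gram-Schmidt in the inner product <u,v> = u' G v."""
    basis = []
    for v in vectors:
        w = v.copy()
        for u in basis:
            w -= (u @ G @ v) / (u @ G @ u) * u
        basis.append(w)
    return basis

monomials = [np.eye(n)[k] for k in range(n)]   # coefficients of 1, x, x^2, x^3
ortho = gram_schmidt(monomials, G)
```

The output reproduces the Legendre polynomials up to scaling: the third and fourth vectors are the coefficient lists of $$x^2-\frac13$$ and $$x^3-\frac35 x$$.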

#### Sample problems

• Consider the 4-dimensional space spanned by the polynomials
$p(x)=ax^3+bx^2+cx+d$
of degree at most $$3$$.
Run the Gram-Schmidt orthogonalization procedure on the vectors
$$\{1,x,x^2,x^3\}$$, if the scalar product is given by
$p_1'Qp_2=\int_{-1}^1 p_1(x)p_2(x)\,dx.$
• Consider the quadratic form in $$\Real^2$$ given by
$Q=\left( \begin{array}{cc} 3&-1\\ -1&3\\ \end{array} \right)$
Run two iterations of the conjugate gradient method for the system
$Qx=b,$
where $$b=(1,0)'$$.
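A sketch of the second sample problem in numpy (standard conjugate gradient with exact line search; for a $$2\times 2$$ symmetric positive definite system it terminates in at most two steps):

```python
import numpy as np

def conjugate_gradient(Q, b, tol=1e-10):
    """Conjugate gradient for Qx = b, Q symmetric positive definite."""
    x = np.zeros_like(b)
    r = b - Q @ x          # residual
    p = r.copy()           # first search direction = residual
    for _ in range(len(b)):
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (p @ Q @ p)       # exact line search
        x = x + alpha * p
        r_new = r - alpha * (Q @ p)
        beta = (r_new @ r_new) / (r @ r)    # conjugacy coefficient
        p = r_new + beta * p
        r = r_new
    return x

Q = np.array([[3.0, -1.0], [-1.0, 3.0]])
b = np.array([1.0, 0.0])
x = conjugate_gradient(Q, b)   # exact answer: (3/8, 1/8)
```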

## week 11

$$\def\Real{\mathbb{R}} \def\Comp{\mathbb{C}} \def\Rat{\mathbb{Q}} \def\Field{\mathbb{F}} \def\Fun{\mathbf{Fun}} \def\e{\mathbf{e}} \def\f{\mathbf{f}} \def\bv{\mathbf{v}}$$

Gradient descent methods. Step size choices. Backtracking rule.

Newton algorithm. Solving systems of nonlinear equations. Quadratic convergence.
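The scalar Newton iteration $$x_{k+1}=x_k-g(x_k)/g'(x_k)$$ is a few lines of Python; here it is run on $$g(x)=x^2-5$$ with $$x_0=2$$ (the setup of the first sample problem below):

```python
import math

def newton(g, dg, x0, steps):
    """Newton iteration x_{k+1} = x_k - g(x_k)/g'(x_k); returns all iterates."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - g(x)/dg(x))
    return xs

xs = newton(lambda x: x*x - 5, lambda x: 2*x, 2.0, 5)
errors = [abs(x*x - 5) for x in xs]
```

Printing `math.log10` of the errors shows the exponent roughly doubling at each step until machine precision, which is the quadratic convergence in question.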

#### Sample problems

• Compute the first 5 iterations $$x_0=2,x_1,\ldots, x_5$$ of the Newton method to solve
$x^2-5=0.$

Plot $$\log|x_k^2-5|$$.

• For $$x_0$$ close to $$0$$ the iterations of the Newton method to solve
$x^3-5x=0$
quickly converge to $$0$$ (what else!?). Find the largest interval where this is true (i.e. find the smallest $$x_0>0$$ starting from which the iterations of the Newton method fail to converge).