Finishing Nonlinear Programming: Guler, Chapter 9.
Class notes (pretty incomplete, to be used just as a study guide), here and here.
Consider a Morse function \(f:\Real^n\to \Real\) with controlled behavior at infinity – say, \(f=|x|^2\) near infinity. Assume further that all critical values \(a_1<a_2<\ldots<a_k\) are distinct and that all indices of critical points are \(0\) or \(1\). (The latter condition obviously holds in one variable.)
Clearly, there are many functions that satisfy these conditions. In the (again, most transparent) univariate case, the enumeration of topological types of functions with given critical values is the subject of a nice thread of papers by Arnold on “snakes” (see, e.g., here), relating them to the enumeration of ramified coverings of the Riemann sphere, up-down sequences and other cute objects.
We are interested, however, in a somewhat finer enumeration. Notice that under our index condition, the sublevel sets \(U(c)=\{f\leq c\}\) for non-critical \(c\) are collections of topological \(n\)-disks. When increasing \(c\) crosses a value corresponding to a critical point of index \(0\), a new disk is born; when the index is \(1\), two disks merge (the alternative would be to generate an element in the 1-dimensional homology of the sublevel set, which we would have no ammunition – critical points of index 2 – to kill later on).
Combining these births of components and their merges results in a graph referred to as the “merge tree”, which coincides, in dimension \(n\gt 1\), with the so-called Reeb (a.k.a. Reeb-Kronrod) tree.
(Remark: This merge tree completely determines the 0-dimensional persistence diagram corresponding to the filtration by the sublevel sets. The algorithm for decomposing a merge tree into a collection of bars is simple: find the lowest critical point and connect it to the root. The span of the resulting path defines the longest bar. Remove the path and iterate on the resulting subtrees. We note that the bars in those subtrees will be fully contained in the bar corresponding to the removed stem.)
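The decomposition just described can be sketched in code. This is a minimal illustration under an assumed encoding of the merge tree (a `Node` class of my own, with leaves as minima and internal vertices as merges), not an implementation from any of the references:

```python
# Illustrative sketch: decompose a merge tree into persistence bars
# by repeatedly stripping the stem through the deepest minimum.

class Node:
    def __init__(self, height, children=()):
        self.height = height            # critical value at this vertex
        self.children = list(children)  # empty for a leaf (a minimum)

def min_leaf(node):
    """Height of the deepest minimum in the subtree."""
    if not node.children:
        return node.height
    return min(min_leaf(c) for c in node.children)

def bars(root):
    """Decompose a merge tree into persistence bars (birth, death)."""
    out = []

    def descend(node, death):
        if not node.children:           # reached the lowest leaf of the stem
            out.append((node.height, death))
            return
        # The child holding the deepest minimum continues the stem;
        # each sibling subtree becomes a new bar dying at this merge height.
        stem = min(node.children, key=min_leaf)
        for child in node.children:
            if child is not stem:
                descend(child, node.height)
        descend(stem, death)

    descend(root, root.height)          # the longest bar spans up to the root
    return out
```

For a tree with minima at heights 0, 2, 1, where 0 and 2 merge at height 3 and the result merges with 1 at the root height 4, this yields the bars (0, 4), (2, 3) and (1, 4); each bar from a stripped subtree is contained in the bar of its stem, as noted above.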
Can one reconstruct the function back from its merge tree? Yes, in the univariate case – provided the tree is a plane tree, that is, for any pair of merging branches we know their left-to-right order. In this case, the original function can be obtained using the contour or height walk, sometimes called the Dyck path; see Le Gall’s survey. The height walk is equivalent (up to reparametrization of the domain) to the original function.
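A small sketch of the height walk, under an encoding of my own (a plane binary tree as nested tuples: a bare number for a leaf, i.e. a minimum, and a triple (height, left, right) for a merge, i.e. a maximum):

```python
def height_walk(tree):
    """In-order traversal of a plane binary merge tree.

    Leaves are local minima, internal vertices local maxima; the in-order
    reading gives the critical values of a univariate function realizing
    the tree, listed left to right across the domain.
    """
    if isinstance(tree, tuple):        # internal vertex: (height, left, right)
        height, left, right = tree
        return height_walk(left) + [height] + height_walk(right)
    return [tree]                      # leaf: just its height

# Example: minima 0 and 2 merge at height 3, then the result merges
# with the minimum 1 at height 4:
# height_walk((4, (3, 0, 2), 1)) -> [0, 3, 2, 4, 1]
```

The resulting alternating min/max sequence determines the function up to reparametrization of the domain.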
So, how many functions produce the same merge tree in the univariate case? The merge tree is binary, and so at each non-leaf vertex we have the option to flip the left and right branches, leading to different plane embeddings (at least if all the critical values are distinct) of the same merge tree. Each of them produces its own height walk, corresponding to the same (up to embedding) merge tree.
Thus, the number of different functions, up to reparametrization of the domain, resulting in the same merge tree in the univariate case is \(2^M\), where \(M\) is the number of local maxima, i.e. the number of branching vertices of the merge tree. This was observed by Justin Curry.
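This count can be checked by brute-force enumeration; a small sketch, again with an illustrative encoding of a plane binary tree as nested tuples (a bare number for a leaf, a triple (height, left, right) for a merge):

```python
def all_walks(tree):
    """Height walks of every plane embedding obtained by flipping branches."""
    if not isinstance(tree, tuple):
        return [[tree]]                        # a leaf embeds in one way
    height, a, b = tree
    walks = []
    for wa in all_walks(a):
        for wb in all_walks(b):
            walks.append(wa + [height] + wb)   # keep the branches as given
            walks.append(wb + [height] + wa)   # ... or flip them
    return walks

# With distinct critical values every flip changes the walk, so a tree with
# M merge vertices yields 2**M distinct walks, hence 2**M distinct functions.
```

For the two-merge tree (4, (3, 0, 2), 1) this produces \(2^2=4\) distinct walks.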
So, what would be the situation in \(\Real^n\)? It is, actually, very close to the univariate case. Fix a merge tree \(T\) (a binary tree with distinct real numbers, the heights, assigned to its vertices), and denote by \(F(T)\) the space of Morse functions with merge tree \(T\). The critical values of these functions correspond to the heights of the merge tree, with the exception of the largest one, the root. The critical points of index \(0\) correspond to the leaves (with the exception of the root), and the branching vertices correspond to the critical points of index \(1\).
Theorem: The space \(F(T)\) is homotopy equivalent to the product of \(I\) spheres \(S^{n-1}\), where \(I\) is the number of branching vertices of \(T\), i.e. of critical points of index \(1\).
The mapping from \(F(T)\) to \((S^{n-1})^I\) is given by the collection of unit vectors parallel to the unstable directions at the critical points of index \(1\), each oriented towards the component containing the deeper minimum.
Still, it is important to understand where your department is on the map, if one wants to steer it in some particular direction, or to keep it where it is.
I wrote a short Python script, using BeautifulSoup (thank you!) and the campus-wide subscription to MathSciNet (thank you too!) – and downloaded the raw numbers: how many items (whatever MathSciNet is indexing: articles, books, theses,…) are authored by folks from ‘1-IL’ (our code) vs. the total output, split by the rough areas defined by the Mathematical Subject Classification scheme.
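A hedged sketch of what such a script can look like. The endpoint URL, query parameters and HTML layout below are placeholders of my own (the real MathSciNet interface sits behind the subscription and its markup differs); only the overall fetch-and-parse pattern with BeautifulSoup is the point.

```python
# Illustrative scraping sketch; the endpoint and markup are hypothetical.
from urllib.request import urlopen
from urllib.parse import urlencode

from bs4 import BeautifulSoup

SEARCH = "https://mathscinet.example/search"   # placeholder endpoint, not real

def parse_hit_count(html):
    """Pull the match count out of a results page (layout assumed here)."""
    soup = BeautifulSoup(html, "html.parser")
    return int(soup.select_one("span.hits").get_text().replace(",", ""))

def count_items(msc, institution=None, since=2013):
    """Number of indexed items in MSC top-level class `msc` since `since`,
    optionally restricted to one institution code such as '1-IL'."""
    query = {"msc": msc, "year_from": since}    # parameter names assumed
    if institution:
        query["inst"] = institution
    with urlopen(SEARCH + "?" + urlencode(query)) as page:
        return parse_hit_count(page.read().decode())

# Department share of, say, combinatorics (MSC class 05):
# count_items("05", institution="1-IL") / count_items("05")
```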
Here are the results. The left display shows the fraction of the world’s math output that folks at the University of Illinois produced since 2013 (a rather arbitrarily chosen cutoff)… The right display shows the total number of published works in each area across all institutions.
The usual caveats about comparing publication numbers across different areas apply: some areas publish at a much higher rate than others; these disparities are somewhat reflected in the right table. Still, the tables below invite a deeper level of introspection.
Say, the number of works in K-theory – worldwide – is small. It is small not just because it is harder to write a competent paper there than, say, in game theory, but also because the total number of people working in the area is low. So a department hosting a high fraction of the world’s output is akin to being home to an endangered species. Shall we protect our K-theorists from extinction? Or let them be taken over by operator algebras and algebraic topology? To exacerbate these existential worries, are our stats correct in the first place? When the typical number of papers in a subject area is globally below a hundred a year, fluctuations become very pronounced.
At the other end, for the highest-output areas in math proper (PDEs, combinatorics, probability, number theory…), it is important to remember that within each of them there are broad subareas with very different typical rates of output. A lot of publishable papers in, say, combinatorics can be written pretty fast, but many are deep, use highly technical tools, and clearly require years of gestation. Perhaps the top-level rubrics of the MSC provide too coarse a partition to use these numbers algorithmically.
Still, these numbers do tell us something. Judge for yourself.
Simplex method (see Matousek/Gaertner, Understanding and using linear programming).
Starting Nonlinear Programming: Guler, Chapter 9.
Class notes (pretty incomplete, to be used just as a study guide), here and here.
Homework (due by midnight of Sunday, Mar. 10).
Solutions to homework.
Linear Programming: Guler, Chapter 6 and Matousek/Gaertner, Understanding and using linear programming.
Class notes (pretty incomplete, to be used just as a study guide), here and here.
Exercises
Finishing Guler, Chapter 4 and covering Chapter 6.
Class notes (pretty incomplete, to be used just as a study guide), here and here.
Exercises:
Class notes (pretty incomplete, to be used just as a study guide), here and here.
Using compatibility of linear constraints in packing problems, see e.g. here.
Exercises:
Homework (due by midnight of Tuesday, Feb. 19).
Class notes (pretty incomplete, to be used just as a study guide), here and here.
Class notes (pretty incomplete, to be used just as a study guide), here and here.
Exercises:
Homework (due by midnight of Tuesday, Feb 5th. Submit at Gradescope, course MXX6X4).