…for some reason the “default-mode” of the brain, what the mind automatically does at rest, seems to be to imagine and plan — to linger in the future or the past. This requires processing information from high-level semantic representations backwards to their constituent sensory components…


Whether schools will reopen in the Fall is now a matter for prediction-market bets, but the planning at UofI is already underway.

The focus of the planning is, understandably, on the safety of students, staff, and faculty. Yet there is an aspect of the process that goes beyond the campus.

Indeed, UIUC is the primary campus of a large state school, with a significant fraction of the students from Illinois, and, more to the point, students returning home on a regular basis. This makes them a potentially strong vector in the Illinois epidemic, counteracting the key contributor to flattening the curve: the *localization of the outbreaks*. If the contagion flares up and dies out locally, the load on the health system remains sustainably low; if it springs up simultaneously throughout the state, becoming compressed on the time axis, the situation could become dire.

How dire? The only way to get a feel for these effects is to explore various scenarios using some model, and to see what happens.

- Opening campus in its traditional mode – with in-state students traveling home on weekends, – might lead to a dramatic wave of statewide infections.
- Understanding under which circumstances this can happen, and how to avoid it, would require a serious effort in simulation modeling.

Let’s consider the following – extremely simplified – model of a state (Ideallinois) with a state school in the middle, in a town named Campusurb.

I will model the state as a collection of communities where the general population and the students enrolled in the state school live.

I will work with \(K=100\) communities having populations \(n_k\equiv 10{,}100\), \(k=1,\ldots,K\), so that the total population of the state is about one million. Each of the communities is home to \(s_k\equiv 100\) students, so that the total number of students is \(s=10{,}000\) (i.e. about the size of a community in our idealized state).

I will assume parameter values that correspond to a slow-burning epidemic, with the effective reproduction number \(R_0\) slightly above one (i.e. the epidemic does not die out, but runs its course at a relatively slow pace). Some details of the model are given below.

One of the most important assumptions is that the hundred communities of Ideallinois are relatively isolated: what happens in one percolates into the other communities slowly (there is not much traveling between different places). I also assume that at the beginning of the simulation there are just a few communities (in the results shown below, just one) where some of the population is infected.

In our model, Campusurb, *when the students are on campus*, is a community in itself (roughly speaking, the students mingle only among themselves, not with the community of people housing, feeding and teaching them). We will ignore out-of-state and international students.

So, what does the model show?

I played with three basic scenarios:

- **Remote classes**: the students essentially stay home and are indistinguishable from the rest of the population, – the **(0+7)** scenario: 0 days a week on campus, 7 days a week at home;
- **In-person classes** on weekdays, **home visits** on weekends, – the **(5+2)** scenario;
- **In-person classes**, **campus lockdown** (no visits home), – the **(7+0)** scenario.

Here are the results. (It should be noted that the simulations are stochastic, so each run is a bit different; the salient points outlined above are stable, though. I will present the confidence intervals and other simulational paraphernalia elsewhere.)

The first plot shows the infection runs in the general population. One can see that the (5+2) scenario is the most dangerous one: the infections come in a powerful wave, maxing out at 17,000 new cases per day in a million-strong state. Assuming 3-5% hospitalization rates, this is a dramatic load.

Scenarios (7+0) and (0+7) have a comparable effect of significantly “flattening the curve”, with peaks in the general population at 33-40% of the (5+2) peak.

The situation is, however, different for the student population: the (7+0) scenario (students locked on campus) is the worst from the campus-town perspective, (5+2) leads to a somewhat lower wave, and the (0+7) scenario (remote classes) is the gentlest: essentially, the student population, spread uniformly across the state, follows the general population trajectory.

An explanation of these plots can be glimpsed from the heat maps below. Here, the infection levels are shown for all 100 communities (columns of the matrix), as they evolve in time (the vertical coordinate, running from top to bottom). One can see that in the (0+7) and (7+0) scenarios, the infections flare at random times in the communities, spread over the simulation interval. The flattening of the curve is achieved by spreading those localized events.

In contrast, in the (5+2) scenario, the students visiting their communities over the weekend serve as a powerful mixer, picking up an outbreak in one community and propagating it almost simultaneously through the state.

Obviously, these simulations are at best a caricature of the processes at play. The state of Ideallinois is nowhere close to the complexities of Illinois or any other state with a large centralized state university campus. Yet, like any model, this one points at a potentially very dangerous development that opening the campus for in-person education can trigger.

(It should be pointed out that even in the best-case scenario, what we are describing here is the *Patchwork Pandemic* we seem to be converging to as a nation. I wish the baseline scenario were something better than this.)

What policies can be deployed to mitigate this, and whether this caricature is actually realistic, requires an effort beyond what we can address in this post. In particular, such an effort could be used to test potential feedback-driven mitigation policies that are hard, or politically infeasible, to test in real life.

Simulational modeling can help. We need more of it.

I used a stochastic version of SIR, relying on a mean-field approximation. Namely, I assumed that the agents in each particular group (students, or a community population) are exposed to random chances of infection, which depend on the fraction of infected people in the community (through local spread) and the fraction of infected people in the whole state (through global mixing). The number of infected in each generation is then modeled as a binomially distributed random value. In other words, we are not solving a system of differential equations, but running stochastic simulations.
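The mechanics of such a binomial mean-field step can be sketched as follows. This is a minimal sketch, not the code behind the plots; the parameter values (recovery rate, \(R_0\), local/global split) are illustrative guesses.

```python
import numpy as np

rng = np.random.default_rng(0)

K, n = 100, 10_100      # communities and community size, as in the text
T = 300                 # days to simulate
gamma = 0.1             # assumed recovery rate (1 / infectious period)
R0 = 1.5                # assumed effective reproduction number
beta_local = 0.9 * R0 * gamma    # assumption: 90% of transmission is local...
beta_global = 0.1 * R0 * gamma   # ...and 10% comes from statewide mixing

S = np.full(K, float(n)); I = np.zeros(K); R = np.zeros(K)
S[0] -= 10; I[0] = 10   # seed a single community

peak = 0
for t in range(T):
    # per-susceptible infection pressure: local fraction + statewide fraction
    lam = beta_local * I / n + beta_global * I.sum() / (K * n)
    new_inf = rng.binomial(S.astype(int), 1 - np.exp(-lam))  # binomial draws, not ODEs
    new_rec = rng.binomial(I.astype(int), gamma)
    S -= new_inf; I += new_inf - new_rec; R += new_rec
    peak = max(peak, int(new_inf.sum()))

print(f"final attack size: {int(I.sum() + R.sum())}, peak daily new cases: {peak}")
```

Scheduling scenarios like (5+2) amount to moving the student subpopulations between Campusurb and their home communities before each day's draw.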

The details of the model will be presented elsewhere (a link to the jupyter notebook to be posted).



As Covid-19 takes over the country, many organizations move their teams to work from home. Often, it is necessary to keep an office presence. This can be done in various ways: split your team into smaller units and let them alternate days, or weeks, or do completely random assignments (essentially, toss a coin for who will be in the office three days from now), etc. Or, one can abandon the fixed teams, and shuffle employees, again on a random or periodic basis…

These scenarios are *a priori* quite different in terms of their impact on infection propagation. How to minimize the exposure of the personnel is a question without an immediate answer. Below are a couple of back-of-the-envelope answers.

Keep in mind that the models I look at are highly stylized, and you should always consult your ~~physician~~ resident mathematician (if you don’t have one, hire one) before applying them. Perhaps the most important caveat: we consider a model where there is *just one infected person* interacting with the team at any given interval (so this model might be completely irrelevant in a week or so).

To address more detailed scenarios, a simulation platform is being developed.

A friend of a friend asked *how one can optimize the rotation of team members between office duties and work from home* to minimize exposure to the coronavirus. To quote:

“if they break their company into teams if it’s better to do 1) team A come in one day and team B the next or 2) randomly draw 50% for each day. Would one method over the other give them a statistical advantage over spreading corona to the team.”

We consider possible strategies to minimize the impact of the presence of a (latent) infected person on a team. On one hand, we look into whether it is better *to randomize the number of days a person attends the office*: it turns out it is always beneficial to randomize. On the other hand, we consider whether it is desirable *to keep fixed teams or shuffle people between them*: it turns out it is always better to keep fixed teams.

However, whatever the general formulae say, *simulations still rule*.

*Assume* that one wants to reduce the chances that an infected individual in a team would infect someone else, before being diagnosed and quarantined. *Assume further*, for simplicity, that the latent period (while the infected person is undetected) has an even number of working days, \(2D\), say (in practice, \(2D\) is something like 8).

Denote by \(q\) the chance that, on a given day, the team where the infected person is present stays coronavirus-free (obviously, \(q\) is between \(0\) and \(1\), but its actual value depends on the overall situation, the size of the team, workplace practices, etc., – and, as we will see, does not matter much in this model).

So, if the teams alternate, the chances that the team that has an infected person will see no transmission is \(q^D\): they just have to be lucky \(D\) times.

Consider now the teams that meet on any given day with probability \(1/2\), over \(2D\) days. The probability they are lucky on any given day is

\[

1/2+q/2

\]

Namely, if they don’t meet, they are fine (with probability \(1/2\)), and if they do, they are lucky with probability \(q\).

So, the randomly meeting team will be lucky with probability

\[

(1/2 + q/2)^{2D}.

\]

Now the question is which is larger, \(q^D\) or \((1/2+q/2)^{2D}\). This is equivalent to asking which is larger, \(q\) or \((1/2+q/2)^{2}\).

Opening the brackets, we see that

\[

(1/2+q/2)^2-q=(1-q)^2/4>0

\]

for *all* \(q<1\).

In other words, **the randomly meeting team always fares better than the on-off team**…
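The comparison is easy to check numerically; a quick sanity check over a few (hypothetical) values of \(q\), with \(2D=8\) as in the text:

```python
# Probability of no transmission over a 2D-day latent period:
# alternating teams give q**D, a daily coin flip gives ((1+q)/2)**(2D).
D = 4  # half the latent period in working days (2D = 8)
for q in (0.5, 0.8, 0.9, 0.99):
    alternating = q ** D
    randomized = ((1 + q) / 2) ** (2 * D)
    assert randomized >= alternating  # randomizing is never worse
    print(f"q={q}: alternating {alternating:.4f}, randomized {randomized:.4f}")
```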

What is the intuition behind this result? It is, in fact, quite transparent. Think about the number of days, \(D\), the infected person spends with the rest of the team. Given this number, the chance for the team to *not* get infected is \(q^D\). The dependence of this probability on \(D\) is *convex*, so Jensen’s inequality kicks in, implying

\[
\mathbb{E}\, q^D \geq q^{\mathbb{E} D}.
\]

In other words, the chance of having no transmission at all from the infected person to the rest of the team **is always larger for a random \(D\) than for a deterministic \(D\) with the same mean**.

Another general takeaway is that **the probability of having no transmission at all does not depend on how the rest of the team is formed, – whether the team is the same on each of the \(D\) days they interact with the infected person, or is changing**.

However, if one is not just concerned with the *probability* of having a transmission, but also with *how many* people are infected, if the transmission does happen, we need more detailed analysis.

Let’s look into the *average number of the team members infected*, not just the probability that the number of infected people is \(0\).

We fix (for now) the number of days \(D\) the infected person was interacting with the rest of the team.

*Assume* that during each of the days, a fraction \(s\) of the total personnel count \(N\) is manning the office (so that \(k=sN\) people are present each day). In the baseline model above, \(s=1/2\).

We compare two schemes:

- **F**ixed: all \(k\) members interacting with the infected person are the same each day;
- **R**andom: each day, every person has a chance \(s\) of manning the office, chosen independently.

Assume further that for each person sharing the office with the infected one, the chance *to avoid* contracting the virus is \(\tau\) (so the chance that all \(k\) people present avoid contracting it on any given day is \(\tau^k=q\)).

In the fixed case, for any of the \(k\) people on the team, the probability to avoid transmission for \(D\) days is \(\tau^D\), so that the *average* number of people avoiding transmission is \(k\tau^D\), and the *average* number of infected people is

\[

I_F=k(1-\tau^D).

\]

In the random case, each of the \(N\) people, on any given day, will not be summoned to the office with probability \(1-s\), or will be called in but escape infection with probability \(s\tau\).

The probability to avoid infection during the \(D\) days during which the infected person attends the office is therefore

\[

((1 −s) + s\tau)^D,

\]

and the *average* number of infected persons will be

\[

I_R=N(1 − ((1 − s) + s\tau)^D).

\]

So, who wins, \(I_F\) or \(I_R\)? In other words, what is the comparison

\[

k (1 − \tau^D) \mbox{ vs }N(1 − ((1 − s) + s\tau)^D)?

\]

A little bit of algebra (which we will skip here; it boils down to the convexity of \(x\mapsto x^D\)) shows that

\[
I_F < I_R
\]

**for any \(\tau\lt 1\)** (and \(D>1\)).

In other words, **the number of infected persons is smaller for fixed teams, than for the randomly shuffled ones** (as, perhaps, intuition was telling you anyway).
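The inequality can be sanity-checked over a grid of (hypothetical) parameter values:

```python
# Expected infections: fixed teams I_F = k(1 - tau**D)
# vs. reshuffled teams I_R = N(1 - ((1-s) + s*tau)**D), with k = s*N.
import itertools

for N, s, D, tau in itertools.product((10, 24), (0.25, 0.5), (2, 8), (0.8, 0.95, 0.99)):
    k = s * N
    I_F = k * (1 - tau ** D)
    I_R = N * (1 - ((1 - s) + s * tau) ** D)
    assert I_F <= I_R  # fixed teams never do worse in expectation
print("fixed teams win (I_F <= I_R) in all tested cases")
```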

All these computations, general as they are, are still inferior to the hammer of OR and epidemiological theory: agent-based simulations. **Here’s** a python notebook running such simulations, taking as input some approximations we know from the literature (like the incubation period), some guesstimates (like infection rates), and the compositions of the teams, and spitting out various estimates, like the distribution of the number of infected.

The planning concerns a team of 24, over 60 days. The competing plans are to split them into teams of 6, each working on a schedule of 5 days in the office, 5 days at home, or into three teams, of sizes 4+4+4 or 6+4+2. The personnel is rotated between the teams. Other parameters are:

- at work infection rate/contagious person: 0.09
- at home infection rate: 0.005
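A toy version of such an agent-based rotation simulation is sketched below. This is not the linked notebook: it simplifies the disease model (a newly infected person is immediately contagious, and off-duty members are exposed only at the small at-home rate), and the skewed 12+8+2+2 split is a hypothetical alternative used for contrast.

```python
import numpy as np

rng = np.random.default_rng(1)

N, DAYS = 24, 60                  # staff size and horizon from the post
P_WORK, P_HOME = 0.09, 0.005      # per-contagious-person infection rates (from the post)

def simulate(team_sizes, runs=300):
    """Mean number infected after DAYS; teams take 5-day office shifts in turn."""
    assert sum(team_sizes) == N
    teams = np.repeat(np.arange(len(team_sizes)), team_sizes)
    totals = []
    for _ in range(runs):
        infected = np.zeros(N, dtype=bool)
        infected[rng.integers(N)] = True            # one initial case
        for day in range(DAYS):
            on_duty = (day // 5) % len(team_sizes)  # team currently in the office
            for t in range(len(team_sizes)):
                members = np.where(teams == t)[0]
                rate = P_WORK if t == on_duty else P_HOME
                # chance a member catches it from the contagious teammates
                p = 1 - (1 - rate) ** infected[members].sum()
                infected[members] |= rng.random(len(members)) < p
        totals.append(infected.sum())
    return float(np.mean(totals))

print("uniform 6+6+6+6 :", simulate([6, 6, 6, 6]))
print("skewed 12+8+2+2 :", simulate([12, 8, 2, 2]))
```

Even this crude sketch reproduces the qualitative point below: more uniform team sizes tend to leave fewer people infected.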

Here’s the typical output. These are the cumulative distribution functions (say, a value of .5 at 5 means that the chance of having 5 or fewer infected is .5), so the higher the plot runs, the better the schedule.

Here’s another

We estimated the fraction of the team that will be infected over the 60 days, on the *5 days at work, 5 days at home schedule*. The infection at work and infection at home rates are as above.

The key feature of this graph is that the fraction *increases* with the size of the team. This implies that, given the *number of workers* and the *number of teams* into which you need to split your crew, it is always beneficial to *keep the sizes of the teams as uniform as possible.*

It makes little sense to try to formulate universal recommendations that would guide staffing decisions in each particular situation. Nor is it practical to attempt to work out precise formulae for a broad spectrum of scenarios. What managers should do is run simulations, especially agent-based ones, – they are still the best tool in town, and can save lives.

**Use them!**


\(
\def\phd{\mathtt{PH}}
\def\CAT{\mathtt{CAT}}
\)

The goal of this note is to define biparametric persistence diagrams for smooth generic mappings \(h=(f,g):M\to\Real^2\) of a smooth compact manifold \(M\). Existing approaches to multivariate persistence are mostly centered on working around the absence of reasonable algebraic theories of quiver representations for lattices of rank 2 or higher, or similar artificial obstacles.

We will rely on the standard facts about generic smooth mappings into two-dimensional manifolds: for such mappings, the set of critical points is a smooth curve \(\Sigma\) in \(M\), which is immersed outside of a finite number of pleats. Near a generic point of \(\Sigma\), there are local coordinates on \(M\) in which the mapping is locally given by

\[

y_1=x_1, y_2=q(x_2,\ldots,x_m)

\]

(folds), and near isolated points of the curve of critical points, in some coordinates the mapping is given by

\[

y_1=x_1, y_2=x_2^3+x_1x_2+q(x_3,\ldots,x_m),

\]

(pleats). Here \(q\) is a quadratic form of the remaining variables (absent in the pleat case if \(m=2\)).

The image of the critical curve \(\Sigma\) is a curve in the \((f,g)\)-plane called the *contour*; generically, \(h\) immerses \(\Sigma\) outside of the pleats; at the pleat points, the contour has cusps.

(The contour on the left is a – rotated and somewhat distorted – rendition of Chicago Millennium Park’s Bean sculpture, with the cusps made visible.)

As the coordinates on the \(f,g\) plane are given, more special points emerge: namely those where either of the functions has a critical point. We will be referring to these special points as V- (vertical) or H- (horizontal) points.

These H- and V-points sit on the critical curve \(\Sigma\). Generically, the functions \(f,g\) are Morse, and the contour has non-vanishing curvature at them. (This follows from the fact that at a critical point of, say \(f\) the restriction of Hessian matrix of \(f\) to the hyperplane \(\{dg=0\}\), is nondegenerate.)

The V- and H-points partition the critical curve into segments which map to the segments of the contour where the slope is \(\neq 0,\infty\). We will refer to the segments where the slope is negative as *Pareto segments* (for obvious reasons).

We augment the Pareto segments by attaching at their boundaries the rays (referred to as *extension rays*) pointing up (at V-points) or right (at H-points). We will call the V- or H-points where this attachment results in \(C^1\) curve, the *pseudosmooth* points; those points where the curve loses smoothness, the *pseudocusps*.

We will refer to the resulting union of Pareto segments and extension rays as the *life contour* for the filtration defined by the pair \(f,g\).

At the points of the Pareto segments one can coorient the contour, declaring positive the side where both \(f\) and \(g\) can increase. This allows one to define the *index* of a point \(x\) on the smooth part of a Pareto segment: choose a function of \(f,g\) vanishing on the contour near the point and increasing in the positive direction; this function has a (degenerate) critical point at \(x\), and its index is, by definition, the index of the point \(x\).

The index at extension rays is defined just as the index of the critical point of \(f\) or \(g\), where the ray was attached to a Pareto segment.

It is easy to see that the index is constant along the smooth parts of the Pareto segments, and jumps at cusps and pseudocusps in a predictable way: the higher with respect to \(f,g\) point has higher index.

We will be referring to the (Pareto, i.e. such that the branches of the contour involved have negative slopes) pseudocusps, cusps, and the double points *of the same index* as *obstacles*.

Consider an *increasing curve* \(\gamma:\Real\to\Real^2\) in the \(f,g\)-plane (this means that both functions strictly increase along the curve). To fix the gauge, we will assume that the curve is parameterized by \(f+g\).

An increasing curve \(\gamma\) defines a usual \(I\)-indexed filtration of \(M\), and correspondingly, persistent homologies and persistence diagrams \(\phd_k(\gamma), k=0,\ldots,m\), which we interpret as collections of distinguishable points in the planes \(\{b\lt d\}\).

(Remark that the approach to multiparametric persistence through restriction to increasing *straight* lines has been considered by several authors. Here we allow arbitrary increasing curves.)

The main (somewhat tautological) result of this note is

Theorem: For a path-connected collection of increasing curves in the plane avoiding the obstacles, there is a section of the space of persistence diagrams: that is, for any two such curves \(\gamma,\gamma'\), there is an identification

\[

I(\gamma',\gamma):\phd_*(\gamma)\to\phd_*(\gamma'),

\]

of the bars in the persistent diagrams corresponding to each of the curves, and these identifications are consistent:

\[

I(\gamma'',\gamma')I(\gamma',\gamma)=I(\gamma'',\gamma).

\]

The figure below shows six homotopy non-equivalent increasing paths avoiding the obstacles, – five green and the sixth, blue.

The bars shown on the lower left (recall that the curves are parameterized by \(f+g\)) correspond to the filtration that the blue curve defines: the births and deaths happen where the curve intersects the life contour.

One can readily see what happens when one moves from one homotopy class to another, across a (pseudo)cusp with the indices of branches \((k,k+1)\): a bar in \(\phd_k\) appears or disappears.

As the curve crosses a self-intersection point of the life contour, where two branches of equal index cross, one has a less prominent event, where the charges in the persistence diagram \(\phd_k\) bounce off a common vertical or horizontal line (we call the corresponding bars *interacting*).

This result prompts a question about the structure of the space of increasing paths in the plane avoiding a finite collection of obstacles.

Theorem: the connected components of the space of increasing curves in the plane avoiding a finite set of obstacles \(o_1,\ldots,o_k\in\Real^2\) are contractible, and their number is equal to the number of chains of obstacles, i.e. subsets \(o_{i_1}\prec\ldots\prec o_{i_l}\), where \(\prec\) is the vector ordering of the points.
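For a small (hypothetical) obstacle configuration, the count of chains can be enumerated by brute force. The convention assumed here is that the empty chain is counted, corresponding to the component of unobstructed paths.

```python
from itertools import combinations

def dominates(a, b):
    """Strict vector (product) order on points of the plane."""
    return a[0] < b[0] and a[1] < b[1]

def count_chains(obstacles):
    """Count subsets totally ordered by the vector order, empty chain included."""
    count = 1  # the empty chain
    for r in range(1, len(obstacles) + 1):
        for sub in combinations(obstacles, r):
            pts = sorted(sub)  # sort by x; a chain must then also increase in y
            if all(dominates(pts[i], pts[i + 1]) for i in range(len(pts) - 1)):
                count += 1
    return count

# three obstacles: o1 < o2 in the vector order, o3 incomparable to both
print(count_chains([(0, 0), (1, 1), (2, -1)]))  # chains: {}, {o1}, {o2}, {o3}, {o1,o2} -> 5
```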

Adjacency of these cells is in itself an interesting invariant of the overall structure. Namely, for any collection of obstacles, consider the set of increasing paths passing through those obstacles. Each of these closed strata is manifestly contractible, as is each connected component of the set of increasing paths passing only through a given set of obstacles (which is easy to prove the same way the Theorem above is proven).

Then the resulting stratification of the space of increasing paths is dual to a \(\CAT(0)\) cube complex: the vertices of the complex correspond to the connected components of the set of increasing paths avoiding the obstacles, and the sets \(h_k\) of increasing paths passing through obstacle \(o_k\) form the hyperplanes (for the nomenclature, see Sageev).

We will be referring to this cube complex as the *characteristic complex* of biparametric persistence structure defined by \(f,g\).

**I would stipulate that the characteristic complex is the key descriptor of biparametric persistence in smooth category.**

This note is but an introduction to the overall problem of understanding the nature of the characteristic complex for biparametric persistence. Here are a few research directions we hope to pursue:

- How does the characteristic complex change under small perturbations of the pair \(f,g\)? It is clear that cubes with all edges large survive (or cannot emerge out of nothing).
- Interacting bars: provide a combinatorial characterization in terms of the life contour.
- It is quite immediate that the characteristic complex survives small smooth perturbations of \((f,g)\). As one deforms the pair, the structure of the characteristic complex changes in a predictable way (some vertices can appear or disappear; pairs of hyperplanes can appear or disappear, etc.). A study of these transformations would provide a good handle on the possible patterns of the characteristic complex.
- Relation of the characteristic complex and the invariants of biparametric persistence (as defined here or here) would be, obviously, of interest.

This note is an outline of my talk at the conference on Geometric Data Analysis at U Chicago in May 2019; written up while at IMI at Kyushu University.

My understanding is that Peter Bubenik and Mike Catanzaro were also considering biparametric persistence for smooth maps.



We consider the problem of flock control: what are the natural constraints on the steering agents in reconfiguring an ensemble of agents?

Our basic setup is the following: the agents are modeled as points $latex \xx=(x_1,\ldots,x_n), x_k\in X, k=1,\ldots, n$, with $latex X$ some terrain (for simplicity, we constrain ourselves here to the Euclidean plane, but in general it can be some reasonably tame subset of a manifold). *Quasi-static behavior* is assumed, under which the system, for each value of the controlling parameters, quickly settles into the (locally minimizing) equilibrium whose basin of attraction contains the current position.

We will assume that the mutual interactions between the agents in the flock are governed by the interaction term of the energy

$$

R(\xx)=\sum_{k\neq l} R(x_k, x_l).

$$

Here $latex R$ is a repulsive term forcing the agents to avoid each other. For the case of a planar terrain, Coulomb interactions are especially attractive options, thanks to connections to potential theory and related holomorphic parameterizations:

$$

R(x_k,x_l)=-\log |x_k-x_l|.

$$

To keep the agents confined, we impose a global term,

$$

U_0(\xx)=\sum_k U_0(x_k).

$$

Here $latex U_0$ is a coercive function ($latex U_0(x_k):=\alpha |x_k|^2$ is an obvious choice).

The last element of the setup is the controller term, – or, rather, an explicit dependence of the global energy on some control parameters.

We consider here two versions:

- The steering agent is herself embedded into the terrain, and her contribution to the potential also has the repulsive force structure:

$$

U_p(x_k)=U_0(x_k)-K\log|p-x_k|, \quad p\in X.

$$

We will be referring to this setting as the “shepherd” setting.

- The confining potential function $latex U_p$ is made explicitly dependent on the parameter $latex p\in P$, some space of parameters. As a non-example, we can consider $latex U_p(x)=U_0(x)+\langle p, x\rangle$. More interesting would be to make the quadratic form $latex U_0$ dependent on a parameter running through positive definite forms.

As we indicated above, the overall dynamics is assumed to be fast on the agents, settling to their equilibrium state corresponding to the fixed values of the steering parameter $latex p$, and slow for the parameter $latex p$. This leads one to the following formulation:

$$

\dot{x}_k=-\frac{1}{\epsilon}\frac{\partial U_p(\xx)}{\partial x_k},\\

U_p(\xx)=\sum_k \left( U_p(x_k)+\sum_{l\neq k} R(x_k,x_l)\right),\\

\dot{p}=u, u\in U.

$$

Intuitively, the model corresponds to the Ginibre ensemble (eigenvalues of Gaussian complex matrices), with a perturbative term (corresponding to the “shepherd”).

Agents’ positions: $latex x(t)=(x_k(t))_{k=1}^n$; “shepherd” (steering agent) trajectory: $latex p(t)$.

Potential function:

$$U(x,p) = \frac{\alpha}{2}\|x\|^2 - \frac{1}{2}\sum_{k\neq j} \log\|x_k-x_j\|^2 - \frac{1}{2}\sum_k\log\|x_k-p\|^2.$$

Dog trajectory initial and final condition: $latex p(0)=p(T)=p_0$

Agents initial condition at equilibrium: $latex \nabla U(x(0),p(0))=0$

Temporal evolution:

$$\dot x(t) = -\nabla U(x(t),p(t))$$

The gradient (for each agent) is:

$$
\begin{align}
\nabla_{x_k} U(x,p) &= \nabla_{x_k} \frac{\alpha}{2}\|x_k\|^2 - \frac{1}{2}\sum_{j\neq k} \nabla_{x_k}\log\|x_k-x_j\|^2 - \frac{1}{2} \nabla_{x_k}\log\|x_k-p\|^2\\
&=
\alpha x_k - \sum_{j\neq k} \frac{x_k-x_j}{\|x_k-x_j\|^2}
- \frac{x_k-p}{\|x_k-p\|^2}
\end{align}
$$

Example of simulation with herd of 12:

One can see the starting (dot) and ending (cross) positions of the agents. The trajectory of the steering agent is solid blue. The agents start and end in an equilibrium position; no braiding of the agents is observed.
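A minimal sketch of such a quasi-static run, under the repulsive-log model above, is below. The loop shape, step size, and the softening constant `EPS` (which regularizes the $latex 1/r$ forces for the explicit Euler steps) are ad hoc choices of this sketch, not those of the simulation shown.

```python
import numpy as np

rng = np.random.default_rng(2)

N, ALPHA, K_SHEP = 12, 1.0, 1.0   # herd size and strengths (illustrative)
DT, EPS = 5e-3, 1e-3              # Euler step; EPS softens the 1/r forces

def grad(x, p):
    """Gradient of U = (ALPHA/2)|x|^2 - sum log|x_k - x_j| - K log|x_k - p|,
    with the plane encoded as complex numbers."""
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)                 # dummy value to avoid 0/0
    rep = diff / (np.abs(diff) ** 2 + EPS)      # pairwise Coulomb repulsion
    np.fill_diagonal(rep, 0.0)
    return ALPHA * x - rep.sum(axis=1) - K_SHEP * (x - p) / (np.abs(x - p) ** 2 + EPS)

x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
p0 = 6.0 + 0.0j                                 # shepherd parked far away
for _ in range(4000):                           # relax the herd to equilibrium
    x -= DT * grad(x, p0)

for step in range(20000):                       # slow loop of the shepherd
    theta = 2 * np.pi * step / 20000
    p = p0 * np.exp(1j * theta) * (1 - 0.5 * np.sin(theta / 2) ** 2)  # dips inward
    x -= DT * grad(x, p)

print("max |x_k| after the loop:", round(float(np.abs(x).max()), 3))
```

Tracking the trajectories $latex x_k(t)$ produced by such a run is exactly what is needed to detect (or rule out) braiding, which motivates the questions below.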

- Assume the first, “shepherd”, model. Fix a (smooth) trajectory $latex p:[0,T]\to X\equiv \Real^2$, starting and ending at the same point $latex p_0$ far away from the origin. This loop would generate a movement among the charges $latex x_k, k=1,\ldots,n$. Do the points return to the same positions, reshuffled?
- When they do, one generates a so-called braid, an element in the fundamental group of the configuration space of $latex n$ (indistinguishable) points in the plane. What elements can be generated?
- What elements can be generated if the movements of the shepherd are constrained, for example to stay outside of the convex hull of the agents?


Consider $latex n$ points in Euclidean space, $latex \xx=\{x_1,\ldots, x_n\}, x_k\in \Real^d, n\leq d+1$.

Generically, there is a unique sphere in the affine space spanned by those points containing all of them. The center of this sphere, which we will refer to as the centroid and denote by $latex o(x_1,\ldots,x_n)$, lies at the intersection of the bisectors $latex H_{kl}, 1\leq k\lt l\leq n$, the hyperplanes of points equidistant from $latex x_k, x_l$.

Assume now that the points $latex x_k,k=1,\ldots,n$ are independent standard $latex d$-dimensional Brownian motions. What is the law of the centroid?

It is quite easy to see that it is a continuous martingale. What is the quadratic variation of $latex o$?

For the directions orthogonal to the affine span $latex L(\xx)$ of the tuple, the answer is immediate: it is proportional to

$$\sum_{k=1}^n p^2_k,$$

where $latex p_k$ are the barycentric coordinates of $latex o$ with respect to $latex x_1,\ldots,x_n$. Indeed, it is immediate that if $latex dx_k^\perp$ are the components of the Brownian increments orthogonal to $latex L(\xx)$, then

$$do^\perp(\xx)=\sum_k p_k dx_k^\perp.$$

It follows (as the $latex dx_k^\perp$ are independent and uncorrelated) that the component of $latex o$ orthogonal to $latex L(\xx)$ is a martingale with quadratic variation equal to (the Euclidean norm on $latex L(\xx)^\perp$ times) $latex \sum_k p_k^2$.
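A finite-difference sanity check of this formula is easy to sketch in NumPy (an illustration, not part of the original argument): take $latex n=3$ points in $latex \Real^3$, so that $latex L(\xx)$ is a plane with a one-dimensional orthogonal complement, compute the centroid and its barycentric coordinates, and verify that an orthogonal perturbation of $latex x_k$ moves the centroid by $latex p_k$ times the perturbation.

```python
import numpy as np

def circumcenter(x):
    """Center o of the sphere through the rows of x (n points in R^d, n <= d+1),
    taken inside their affine span, and its barycentric coordinates p."""
    v = x[1:] - x[0]                        # frame of the affine span at x[0]
    # equidistance from x[0] and x[k]:  2 <v_k, sum_j c_j v_j> = |v_k|^2
    c = np.linalg.solve(2 * v @ v.T, (v ** 2).sum(axis=1))
    p = np.concatenate([[1 - c.sum()], c])  # o = sum_k p_k x_k
    return x[0] + c @ v, p

rng = np.random.default_rng(2)
x = rng.normal(size=(3, 3))                 # 3 points in R^3: their span is a plane
o, p = circumcenter(x)
assert np.allclose(np.linalg.norm(x - o, axis=1), np.linalg.norm(x[0] - o))

e = np.cross(x[1] - x[0], x[2] - x[0])      # unit normal to the span L(x)
e /= np.linalg.norm(e)

eps = 1e-6
for k in range(3):                          # orthogonal kick of the k-th point
    y = x.copy()
    y[k] += eps * e
    o2, _ = circumcenter(y)
    # the centroid responds linearly: <do, e> = p_k * eps + O(eps^2)
    assert abs((o2 - o) @ e - p[k] * eps) < 1e-9
```

The linearity verified in the loop is exactly the relation $latex do^\perp(\xx)=\sum_k p_k dx_k^\perp$ behind the $latex \sum_k p_k^2$ answer.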

Let’s turn to the motion of $latex o(\xx)$ inside $latex L(\xx)$. Let $latex S(\xx)\subset L(\xx)$ be the sphere centered at the centroid and containing all the points of the sample. Let $latex dx_k^\parallel=\xi_k+\eta_k$ be the decomposition of the $latex L(\xx)$ component of $latex dx_k$ into the vector in the tangent space $latex \xi_k\in T_{x_k}S(\xx)$ and the vector orthogonal to it. If we denote by $latex n_k=(x_k-o(\xx))/|x_k-o(\xx)|$ the unit vector from the centroid to the $latex k$-th point of the sample, then $latex \eta_k=s_k\cdot n_k$, where $latex s_k$ is the increment of a Brownian motion independent of all other components (as follows from the orthogonalities hardwired into the construction).

Now, the condition that as the points of the sample move, the distances to the centroid remain constant is equivalent to

$$\langle \eta_k-\nu,n_k\rangle=\langle \eta_l-\nu,n_l\rangle,$$

for all $latex 1\leq l,k\leq n$. (Here we denote by

$$\nu:=do(\xx)^\parallel$$

the $latex L(\xx)$-component of $latex do(\xx)$.)

Equivalently, this means that $latex \langle \nu, n_k-n_l\rangle=s_k-s_l$.

As $latex s_k, s_l$ are independent, with unit quadratic variation, we conclude that

$$\langle (n_k-n_l),C (n_k-n_l)\rangle=2$$

for all $latex k,l$; here

$$C:=\ex \nu\otimes\nu'$$

is the quadratic variation of the centroid in the affine plane spanned by the sample.

Geometrically, an equivalent description is that $latex C=AA'$, where $latex A$ is the linear part of the affine transformation taking the points of the sample to the vertices of the regular $latex (n-1)$-simplex with sides $latex \sqrt{2}$.
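The system of constraints determining $latex C$ can be checked numerically. The following sketch (illustrative, with $latex n=3$ points on a circle in the plane, so that $latex L(\xx)=\Real^2$) constructs $latex C$ from the relations $latex \langle \nu, n_k-n_l\rangle=s_k-s_l$ and verifies the defining identities for every pair.

```python
import numpy as np

rng = np.random.default_rng(3)

# three distinct points on the unit circle centered at the origin (so o = 0)
angles = np.sort(rng.uniform(0, 2 * np.pi, size=3))
n_vec = np.column_stack([np.cos(angles), np.sin(angles)])   # the unit vectors n_k

# <nu, n_k - n_3> = s_k - s_3 for k = 1, 2 reads  M nu = s~,  M rows n_k - n_3
M = n_vec[:2] - n_vec[2]
Minv = np.linalg.inv(M)

# s~ = (s_1 - s_3, s_2 - s_3) has covariance [[2, 1], [1, 2]] per unit time,
# hence C = E[nu nu'] = M^{-1} Cov(s~) M^{-T}
C = Minv @ np.array([[2.0, 1.0], [1.0, 2.0]]) @ Minv.T

# the defining constraints <(n_k - n_l), C (n_k - n_l)> = 2 hold for every pair
for k in range(3):
    for l in range(k + 1, 3):
        u = n_vec[k] - n_vec[l]
        assert abs(u @ C @ u - 2.0) < 1e-9
```

Note that the constraint for the pair $latex (1,2)$ holds automatically once the other two do, since $latex n_1-n_2=(n_1-n_3)-(n_2-n_3)$ and $latex s_1-s_2=(s_1-s_3)-(s_2-s_3)$.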


Consider a collection of vectors \( e_1,\ldots,e_n\) in the upper half-plane, such that \( e_k=(x_k,1)\) and \( x_1\gt x_2\gt \ldots \gt x_n\). The Minkowski sum of the segments \( s_k:=[0,e_k]\) is a zonotope \( \Z\). *Rhombi* in this context are the Minkowski sums \( \Z(k,l)=s_k\oplus s_l, 1\leq k\lt l\leq n\) of pairs of these segments. Tilings of \( \Z\) by the \( n\choose 2\) rhombi \( \Z(k,l)\) also define an oriented graph (formed by the edges of the rhombi, oriented upwards in the plane). An increasing path from the bottom to the top then defines a permutation in \( \sg_n\): each of the vectors \( e_k\) appears in an increasing path exactly once, and the permutation is the order in which they appear. The collection of all such permutations corresponding to a tiling \( \tau\) is denoted \( \sg(\tau)\).

Recall that the *Condorcet paradox* describes a situation where, in a collection of an odd number of permutations (perhaps with repetitions), the order defined by majority rule on pairs has cycles; an example is

$$\{(1,2,3),(2,3,1),(3,1,2)\}:\quad 1\succ 2\succ 3\succ 1;$$

one can see that alternative \( 1\) is preferred to alternative \( 2\) by majority, etc…
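The cycle in this profile is easy to verify mechanically; a minimal sketch:

```python
profile = [(1, 2, 3), (2, 3, 1), (3, 1, 2)]   # the three voters' rankings

def majority_prefers(a, b):
    """True if a strict majority of voters rank a above b."""
    wins = sum(1 for ranking in profile if ranking.index(a) < ranking.index(b))
    return 2 * wins > len(profile)

# pairwise majority comparisons form the cycle 1 > 2 > 3 > 1
assert majority_prefers(1, 2)
assert majority_prefers(2, 3)
assert majority_prefers(3, 1)
```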

A *Condorcet domain* is a subset of \( \sg_n\) for which the Condorcet paradox cannot happen. A well-known example is the set of single-peaked permutations (those whose graph is a unimodal function). Danilov, Karzanov and Koshevoi noticed that for any tiling \( \tau\) of \( \Z\), the collection \( \sg(\tau)\) is a Condorcet domain…

On the negative side, Arrow’s impossibility theorem states that no aggregation procedure defined on all of \( \sg_n, n\geq 3\), and subject to some natural “axioms” (which majority voting satisfies) exists.

In another direction, it was noticed (by Benno Eckmann, Graciela Chichilnisky and others; see surveys here and there) that in the topological version of preference aggregation (social choice) theory, the key trouble in preference aggregation is caused by the nontrivial topology of the space of preferences, and that these problems tend to resolve when the space is contractible.

Of course, all of this is not directly applicable to the setup of Arrow’s impossibility theorem: there the space of preferences is discrete. Yet one can impose some topology there as well (as done here), in the following way:

Let \( \B=\{(k, l)\}, 1\leq k\neq l\leq n\) be the set of ordered pairs of distinct alternatives. We form a simplicial complex by adding a simplex \(\sigma\) whenever the collection of pairwise preferences forming the vertices of that simplex does not contain a cycle. The maximal number of vertices in such a simplex is, clearly, \( n\choose 2\), and each maximal simplex corresponds to a linear order on the indices \( 1,2,\ldots, n\), i.e. to an element of \( \sg_n\).

One can prove, almost immediately, that the resulting simplicial complex is homotopy equivalent to the \( (n-2)\)-dimensional sphere. This, in turn, can be used to deduce Arrow’s theorem.
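For \( n=3\) the complex is small enough to check by brute force. The following sketch enumerates the acyclic collections of pairwise preferences and verifies that the face counts (6 vertices, 12 edges, 6 triangles) give Euler characteristic 0, consistent with a complex homotopy equivalent to the circle \( S^{n-2}=S^1\):

```python
from itertools import combinations

n = 3
V = [(a, b) for a in range(n) for b in range(n) if a != b]   # vertex (a, b): "a over b"

def acyclic(edges):
    """True if the digraph of pairwise preferences has no directed cycle."""
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b)
    state = {v: 0 for v in range(n)}          # 0 unseen, 1 on stack, 2 done
    def dfs(u):
        state[u] = 1
        for w in adj[u]:
            if state[w] == 1 or (state[w] == 0 and dfs(w)):
                return True                   # back edge: a cycle
        state[u] = 2
        return False
    return not any(dfs(v) for v in range(n) if state[v] == 0)

# simplices = acyclic collections of pairwise preferences, up to C(3,2) = 3 vertices
faces = {k: [s for s in combinations(V, k + 1) if acyclic(s)] for k in range(3)}
counts = tuple(len(faces[k]) for k in range(3))
euler = counts[0] - counts[1] + counts[2]
assert counts == (6, 12, 6) and euler == 0
```

The six maximal simplices are exactly the six linear orders of \( \sg_3\), and the two excluded triples of mutually compatible pairs are the two 3-cycles.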

Now, it is natural to ask what the topology of the simplicial complex formed by the simplices in \( \sg(\tau)\) is: is it trivial? This wouldn’t prove anything per se, but it would be consistent with the previous observations and would point at the general meta-statement that social choice paradoxes require nontrivial topology.

And indeed, we have an easy

**Theorem**: The union of the simplices on \( \B\) corresponding to \( \sg(\tau)\) is contractible for any rhombic tiling \( \tau\) of \( \Z\).

**Proof**: The path corresponding to the leftmost border of the zonotope \( \Z\) corresponds to the identity permutation, and some adjacent pair of edges in that path spans a rhombus of the tiling. Removing this rhombus gives a smaller tiling, whose leftmost border still has a pair of adjacent edges spanning a rhombus of \( \tau\). Each removal of a rhombus corresponds, in the simplicial complex, to the removal of a vertex and the collapse of the unique maximal simplex containing it onto its base, preserving the homotopy type. After finitely many iterations, one arrives at a single simplex (corresponding to the rightmost path in \( \Z\)).