
…for some reason the “default-mode” of the brain, what the mind automatically does at rest, seems to be to imagine and plan — to linger in the future or the past. This requires processing information from high-level semantic representations backwards to their constituent sensory components…


I thought it would be interesting to check what the 2017 Tax Cuts and Jobs Act did to Mankiw’s productivity. The effective tax rates for savvy operators (I am sure he can and does hire good accountants) went way down, and thus the incentives to work, as he surmised, increased a lot.

The picture on the right illustrates what (I admit, just a part, albeit arguably the most visible part of) Mankiw’s output has looked like since 2010. The height of the list for each year is a graphic representation of Mankiw’s productivity. The red line indicates when the tax cuts became law; the timeline runs from bottom to top. If you mistrust your eye: the average number of his columns went down from 7.7 per year in pre-tax-cut times to a mere 5 per year thereafter.

This is surely just one datapoint, and who knows what idiosyncratic challenges Mankiw faced over the last 3 years. But it is a fair metric to look at, as he himself offered it as a measure of the social efficacy of a certain policy.

Judging by this metric, the tax cut was not a success. Perhaps those who diligently follow Mankiw’s output should insist that his tax rates go up, so they can enjoy more of his opinions, insights, revelations, etc.


The Laplacian \(L\) of \(\Gamma\) is the symmetric matrix

$$

L_{u,v}=\left\{

\begin{array}{ll}

-w(uv)&\mbox{ if }u\neq v,\\

\sum_{u'\neq u} w(uu')& \mbox{ if }u=v\\

\end{array}\right..

$$

(Here we view the weights \(w\) as formal variables.)

As we all know, any principal minor of \(L\) (delete one row and the matching column, and take the determinant) equals the sum of the weights of the spanning trees of \(\Gamma\), the weight of a tree being the product of the weights of its edges.
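A quick sympy sanity check on the triangle graph (the vertex labeling and variable names are mine):

```python
import sympy as sp

# Weighted Laplacian of the triangle graph on vertices 1, 2, 3
w12, w13, w23 = sp.symbols('w12 w13 w23')
L = sp.Matrix([
    [w12 + w13, -w12,       -w13      ],
    [-w12,       w12 + w23, -w23      ],
    [-w13,      -w23,        w13 + w23],
])

# Principal minor: delete the first row and column, take the determinant
minor = L[1:, 1:].det()

# The three spanning trees of the triangle (one per omitted edge),
# each weighted by the product of its edge weights
tree_sum = w12*w13 + w12*w23 + w13*w23

assert sp.expand(minor - tree_sum) == 0
```

The same holds for the other two principal minors, as the matrix-tree theorem promises.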

Another way to define the principal minor is as the determinant of the restriction of the quadratic form given by \(L\) to one of the coordinate hyperplanes. Turns out the hyperplanes need not be the coordinate ones: restricting to any (well, almost any) codimension-one hyperplane gives you the sum of the weights of the spanning trees.

More precisely, by the “Hesse trick,” the determinant of the extended Laplacian

$$

\tilde{L}=\left(

\begin{array}{cc}

0 & x\\

x^T&L\\

\end{array}

\right)

$$

is equal (up to a factor depending on \(x\)) to the determinant of the restriction of the quadratic form defined by \(L\) to the hyperplane \(\langle x,\cdot\rangle=0\).

Turns out,

$$

\det \tilde{L}=-(x_1+x_2+\ldots+x_v)^2\sum_{\mbox{spanning trees }T} w(T).

$$

Very useful!
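One can verify this on the triangle graph with sympy (a sketch; note that with this bordering the determinant comes out with an overall minus sign in front of \((x_1+x_2+x_3)^2\)):

```python
import sympy as sp

w12, w13, w23 = sp.symbols('w12 w13 w23')
x1, x2, x3 = sp.symbols('x1 x2 x3')

# Extended Laplacian of the triangle graph, bordered by the vector x
Lt = sp.Matrix([
    [0,    x1,         x2,         x3        ],
    [x1,   w12 + w13, -w12,       -w13       ],
    [x2,  -w12,        w12 + w23, -w23       ],
    [x3,  -w13,       -w23,        w13 + w23 ],
])

tree_sum = w12*w13 + w12*w23 + w13*w23  # spanning trees of the triangle

# det(L~) = -(x1 + x2 + x3)^2 * (sum over spanning trees)
assert sp.expand(Lt.det() + (x1 + x2 + x3)**2 * tree_sum) == 0
```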


It is intuitively clear from the picture that something separates NYC, Chicago, and LA from Houston: one sees oscillations in Houston, but not in the other three metropolises, – so reminiscent of the transition from overdamped to underdamped systems.

What is it? Here’s a toy model which accounts for this phenomenon. I will deal in hugely exaggerated generalities, interpreting the mobility data as a proxy for the state of economic activity, impacted by the prevalence of the infection in society. Correspondingly, I consider a coupled system consisting of Economy and Virus.

We assume that the virus growth is facilitated by the economy: the larger the economy, the more interactions between people, the faster the growth.

Economy, on the other hand, is stationary (one could insert a growth term, but it is far slower than that of the virus, so we safely drop it), but is slowed down by the virus. (It should be noted that our coordinate chart is centered at a presumed equilibrium, – so negative “Economy” just means below that equilibrium point, etc.)

This slowdown describes the natural, intrinsic reaction of the economy to the pandemic: resources reallocated to hospitals, workers skipping work because of sickness or the need to help someone sick, supply chain disruptions, etc.

Independently of that, there might be some control measures that the government or the population could undertake, – represented by the \(u\) term.

As I am teaching a basic linear control course, all interactions here are assumed linear:

\[

\begin{array}{rcl}\dot{E}&=&-bV+u\\ \dot{V}&=&cE+dV.\\ \end{array}

\]

OK, what do we want? Obviously, to stabilize the system, keeping \(V\) bounded. And \(E\), too, – if the epidemic expands, the economy drops exponentially.

What should our feedback control be?

The first instinct is, of course, to base the feedback entirely on the virus, \(u=-lV\). However, this does not affect the trace of the resulting linear system (leaving it at \(d\)), and therefore is not enough for stabilization.

One is forced to have a negative feedback based on \(E\): *shutdowns are necessary for stabilization!*

So, accepting that, take \(u=-kE-lV\), resulting in

\[

\begin{array}{rcl}\dot{E}&=&-kE-(l+b)V\\ \dot{V}&=&cE+dV.\\ \end{array}

\]

To simplify the analysis, note that by rescaling \(E,V\) (who cares how we count the critters, – by trillions or by micrograms?) we can make \(b\) and \(c\) equal, and by changing the time units we can make them both equal to \(1\). So, the system reduces to

\[

\begin{array}{rcl}\dot{E}&=&-kE-(l+1)V\\ \dot{V}&=&E+dV.\\ \end{array}

\]

When is it stable? Well, when the trace and the determinant of the RHS matrix are negative and positive, respectively, i.e. when

\[

k>d, \mbox{ and } l+1>kd.\]

Those are the conditions of stability!
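These conditions are easy to check numerically (the parameter values below are mine, for illustration only):

```python
import numpy as np

def stable(k, l, d):
    # RHS matrix of  dE/dt = -kE - (l+1)V,  dV/dt = E + dV
    A = np.array([[-k, -(l + 1)],
                  [1.0, d]])
    # Stability: negative trace and positive determinant
    return np.trace(A) < 0 and np.linalg.det(A) > 0

l, d = 3, 0.5
assert not stable(0.3, l, d)  # k < d: the trace condition fails
assert stable(3.5, l, d)      # d < k < (l+1)/d = 8: stable
assert not stable(9.0, l, d)  # k*d > l+1: the determinant condition fails
```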

Ramping up the unseriousness of this model, we now assume that the gain \(l\) is not really in government hands, – it represents the reaction to the virus spread that the governments (at least in this polity) are too inept to control. In our model, \(l\) is the public’s reaction to what it observes about the virus spread. So we will leave it fixed, and have a look at what the government can tune, – the gain \(k\). The larger \(k\), the more severe the government lockdowns are.

What gain \(k\) yields a stable system? Well, it should be found in the interval

\[

d< k< \frac{l+1}{d}.

\]

What is the best value there? One natural marker is the value where the eigenvalues of the RHS operator (“poles” in the control theory parlance) merge, which happens at

\[

k_*=2\sqrt{l+1}-d\leq \frac{l+1}{d}.

\]

Below this level, oscillations start; above it, the system is overdamped (good), but the decay rate can be improved.
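The merging of the poles at \(k_*\) can be confirmed numerically, with the values \(d=1/2, l=3\):

```python
import numpy as np

l, d = 3, 0.5
k_star = 2 * np.sqrt(l + 1) - d   # poles merge here: k* = 3.5

def poles(k):
    # Eigenvalues ("poles") of the closed-loop RHS matrix
    A = np.array([[-k, -(l + 1)], [1.0, d]])
    return np.linalg.eigvals(A)

assert np.isclose(k_star, 3.5)
assert np.abs(poles(1.6).imag).max() > 0     # below k*: complex poles, oscillations
assert np.abs(poles(6.0).imag).max() < 1e-9  # above k*: real poles, overdamped
```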

Here are the plots of a family of solutions with \(d=1/2, l=3\) and \(k=1.6, 3.5, 6\)… Orange is at the critical level \(k=3.5\); green is well above, blue well below.

One can see a clear resemblance to the plots above…

This model is just a toy, created to illustrate implicit feedback structures built into our societal decisions, not to produce some specific recommendations (Houston, your \(k\) is too low!)… And yet. We rarely think about governance in control-theoretic terms, but we should. In particular, control theory should govern our anticyclical macroeconomic responses, such as the levels of unemployment benefits.

Control theory is old, but its usefulness is still underestimated.


Whether the schools will reopen in the fall is now a matter of prediction market bets, but the planning at UofI is already underway.

The focus of the planning is, understandably, on student, staff, and faculty safety. Yet there is an aspect of the process going beyond the campus.

Indeed, UIUC is the primary campus of a large state school with a significant fraction of the students from Illinois, and, more to the point, students returning home on a regular basis. This makes them a potentially strong vector in the Illinois epidemic, counteracting the key contributor to flattening the curve, – the *localization of the outbreaks*. If the contagion flares up and dies out locally, the load on the health system remains sustainably low; if it springs up simultaneously throughout the state, becoming compressed on the time axis, the situation could become dire.

How dire? The only way to get a feel for those effects is to explore various scenarios using some model and see what happens.

- Opening campus in its traditional mode – with in-state students traveling home on weekends, – might lead to a dramatic wave of statewide infections.
- To understand under which circumstances this can happen, and how to avoid it, would require a strong effort in simulation modeling.

Let’s consider the following – extremely simplified – model of a state (Ideallinois) with a state school in the middle, in a town named Campusurb.

I will model the state as a collection of communities, where the general population, and the students enrolled in the state school, live.

I will work with \(K=100\) communities having populations \(n_k\equiv 10{,}100\), \(k=1,\ldots,K\), so that the total population of the state is about one million. Each of the communities is home to \(s_k\equiv 100\) students, so that the total number of students is \(s=10{,}000\) (i.e. about the size of a community in our idealized state).

I will assume parameter values that correspond to a slow-burning epidemic, with effective branching number \(R_0\) slightly above one (i.e. the epidemic does not die out, but runs its course at a relatively slow pace). Some details of the model are given below.

One of the most important assumptions is that the hundred communities of our Ideallinois are relatively isolated: what happens in one percolates into the other communities slowly (there is not much traveling between different places). I also assume that at the beginning of the simulation there are just a few communities (in the results shown below, just one) with some infected population.

In our model, Campusurb, *when the students are on campus*, is a community in itself (roughly speaking, the students mingle only among themselves, not with the community of people housing, feeding, and teaching them). We will ignore out-of-state and international students.

So, what does the model show?

I played with three basic scenarios:

- **Remote classes**: the students essentially stay home and are indistinguishable from the rest of the population, – the **(0+7)** scenario: 0 days a week on campus, 7 days a week at home;
- **In-person classes** on weekdays, **home visits** on weekends, – the **(5+2)** scenario;
- **In-person classes**, **campus lockdown** (no visits home), – the **(7+0)** scenario.

Here are the results. (It should be noted that the simulations are stochastic, so each run is a bit different; the salient points outlined above are stable, though. I will present the confidence intervals and other simulational paraphernalia elsewhere.)

The first plot shows the infection runs in the general population. One can see that the (5+2) scenario is the most dangerous one: the infections come in a powerful wave, peaking at 17,000 new cases per day in a million-strong state. Assuming 3-5% hospitalization rates, this is a dramatic load.

Scenarios (7+0) and (0+7) have a comparable effect of significantly “flattening the curve”, with peaks in the general population at 33-40% of those in the (5+2) scenario.

The situation is, however, different for the student population: the (7+0) scenario (students locked on campus) is the worst from the campustown perspective, (5+2) leads to a somewhat lower wave, and the (0+7) scenario (remote classes) is the gentlest – essentially, the student population, spread uniformly across the state, follows the general population trajectory.

An explanation of these plots can be glimpsed from the heat maps below. Here, the infection levels are shown for all 100 communities (columns of the matrix) as they evolve in time (the vertical coordinate, running from top to bottom). One can see that in the (0+7) and (7+0) scenarios, the infections flare up at random times in the communities, spread over the simulation interval. The flattening of the curve is achieved by spreading out those localized events.

In contrast, in the (5+2) scenario, the students visiting their communities over the weekend serve as a powerful mixer, picking up an outbreak in one community and propagating it almost simultaneously through the state.

Obviously, these simulations are at best a caricature of the processes at play. The state of Ideallinois is nowhere close to the complexities of Illinois or any other state with a large centralized state university campus. Yet, like any model, this one points at a potentially very dangerous development that opening the campus for in-person education can trigger.

(It should be pointed out that even in the best-case scenario, what we are describing here is the *Patchwork Pandemic* we seem to be converging to, as a nation. Wish the baseline scenario were something better than this.)

What policies can be deployed to mitigate this, and whether this caricature is actually realistic, requires an effort beyond what we can address in this post. In particular, such an effort could be used to test potential feedback-driven mitigation policies that are hard, or politically infeasible, to test in real life.

Simulational modeling can help. We need more of it.

I used a stochastic version of SIR, relying on a mean-field approximation. Namely, I assumed that the agents in each particular group (students, or a community population) are exposed to random chances of infection, which depend on the fraction of infected people in the community (through local spread) and the fraction of infected people in the whole state (through global mixing). The number of infected in each generation is then modeled as a binomially distributed random value. In other words, we are not solving a system of differential equations, but running stochastic simulations.
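For concreteness, here is a minimal sketch of what one generation of such a binomial update could look like (the function name and all rate values here are my illustrative placeholders, not those of the actual notebook):

```python
import numpy as np

rng = np.random.default_rng(42)

def step(S, I, N, I_state, N_state, beta_local=0.3, beta_global=0.05, gamma=0.2):
    """One generation of a binomial mean-field SIR update for one community:
    each susceptible is infected with a probability that depends on the local
    and the statewide infected fractions; the counts are drawn binomially."""
    p = 1 - np.exp(-beta_local * I / N - beta_global * I_state / N_state)
    new_inf = rng.binomial(S, p)       # binomial draw, not an ODE step
    recovered = rng.binomial(I, gamma)
    return S - new_inf, I + new_inf - recovered

# One community of 10,100 with 100 infected, in a state of about a million
S, I = step(10000, 100, N=10100, I_state=500, N_state=1010000)
assert 0 <= S <= 10000 and I >= 0
```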

The details of the model will be presented elsewhere (a link to the jupyter notebook to be posted).



The process asks for giving the Members of the Academic Senate 3 days, till 5 p.m. on Thursday, March 26, to express their views, so that the SEC can adopt a resolution and recommend a course of action to the administration on Friday, March 27.

This is a fast-track action, if one ever saw one. And the reason for this rush: the resolution asks to *stop teaching dead in its tracks*. Here’s the language:

> Be it resolved, Senate members call for the University to end immediately the Spring 2020 semester and direct instructors to calculate final grades based on the first eight weeks’ assessments or convert to pass/fail <…>

What is the rationale for this abrupt stop? The bulk of it is a litany of our privileged lifestyle disruptions: “panic and anxiety”, “extraordinary moment for the University, the United States, and the planet”, “dire conditions of a world-historic, once-in-a-generation global health crisis”, and (against the background of the world-historic crisis) the fact that we had to spend the spring break moving our classes online.

All these grievances could safely be ignored, except one true and serious reason: the digital divide. Indeed, moving classes online assumes that the students have the devices and Internet service at home, – and many don’t.

What shall we do in this situation? This is the standard choice of how to achieve basic fairness: either we disrupt, making everyone equal at the lowest level, – denying half a semester’s worth of education to our students, – or we equalize at the highest, by acting quickly and *providing access to all*.

As a quick example: setting up a program for our students to purchase decent tablets or laptops would cost $500 or so per person. Another $500 would cover broadband internet access for half a year. The one-time cost to our University would be high, for sure, but minuscule compared to the inevitable hits down the road, as the enrollment, especially the international one, dwindles. Showing that we are committed to giving all of our students a fighting chance to learn (world-historic crisis be damned) will make a huge difference in keeping faith in UofI as an institution.

Should we follow the course charted by the resolution, the reputational losses from reneging on what is written in our mission, – “to transform lives and serve society by *educating*, creating knowledge, and putting knowledge to work”, – would be immeasurable.

We can do better. Progressive fairness is not to deny to all what is not given to some. Fairness is to give to all the best we can. Resolution 2003 would be a spectacular failure at it. Do not adopt it.


As Covid-19 takes over the country, many organizations move their teams to work from home. Often, it is necessary to keep an office presence. This can be done in various ways: split your team into smaller units and let them alternate days, or weeks, or do completely random assignments (essentially, toss a coin for who will be in the office three days from now), etc. Or, one can abandon the fixed teams, and shuffle employees, again on a random or periodic basis…

These scenarios are *a priori* quite different in terms of their impact on infection propagation. How to minimize the exposure of the personnel is a question without an immediate answer. Below are a couple of back-of-the-envelope answers.

Keep in mind that the models I look at are highly stylized, and you should always consult your resident mathematician (if you don’t have one, hire them) before applying them. Perhaps the most important caveat: we consider a model where there is *just one infected person* interacting with the team over any given interval (so this model might be completely irrelevant in a week or so).

To address more detailed scenarios, a simulation platform is being developed.

A friend of a friend asked *how one can optimize the rotation of the team members between office duties and work from home*, to make exposure to the coronavirus minimal. To quote:

“if they break their company into teams if it’s better to do 1) team A come in one day and team B the next or 2) randomly draw 50% for each day. Would one method over the other give them a statistical advantage over spreading corona to the team.”

We consider possible strategies to minimize the impact of the presence of a (latent) infected person on a team. On one hand, we look into whether it is better *to randomize the number of days a person attends the office*: turns out, it is always beneficial to randomize. On the other hand, we consider whether it is desirable *to keep fixed teams or to shuffle people between them*: turns out, it is always better to keep fixed teams.

However, whatever the general formulae say, *simulations still rule*.

*Assume* that one wants to reduce the chances that an infected individual in a team would infect someone else, before being diagnosed and quarantined. *Assume further*, for simplicity, that the latent period (while the infected person is undetected) has an even number of working days, \(2D\), say (in practice, \(2D\) is something like 8).

Denote by \(q\) the chance that the team where the infected person is present stays coronavirus-free (obviously, \(q\) is between \(0\) and \(1\), but its actual value depends on the overall situation, the size of the team, workplace practices, etc., – and, as we will see, does not matter much in this model).

So, if the teams alternate, the chance that the team that has the infected person will see no transmission is \(q^D\): they just have to be lucky \(D\) times.

Consider now the teams that meet on any given day with probability \(1/2\), over \(2D\) days. The probability they are lucky on any given day is

\[

1/2+q/2

\]

Namely, if they don’t meet, they are fine (with probability \(1/2\)), and if they do, they are lucky with probability \(q\).

So, the randomly meeting team will be lucky with probability

\[

(1/2 + q/2)^{2D}.

\]

Now the question is which is larger, \(q^D\) or \((1/2+q/2)^{2D}\). This is equivalent to asking which is larger, \(q\) or \((1/2+q/2)^2\).

Opening the brackets, we see that

\[

(1/2+q/2)^2-q=(1-q)^2/4>0

\]

for *all* \(q<1\).

In other words, **the randomly meeting team always fares better than the on-off team**…
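A numerical confirmation over the whole range of \(q\):

```python
import numpy as np

D = 4  # half of a 2D = 8 working-day latent period
for q in np.linspace(0.01, 0.99, 99):
    alternating = q ** D                   # on-off team: lucky on each of D meetings
    randomized = ((1 + q) / 2) ** (2 * D)  # coin-flip team: lucky 2D times
    assert randomized > alternating        # randomizing always helps for q < 1
```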

What is the intuition behind this result? It is, in fact, quite transparent. Think about the number of days, \(D\), the infected person spends with the rest of the team. Given this number, the chance for his team to *not* get infected is \(q^D\). The dependence of this probability on \(D\) is *convex*, so that Jensen’s inequality kicks in, implying

\[

\mathbb{E}\, q^D\geq q^{\mathbb{E} D}.

\]

In other words, the chances of having no transmission at all from the infected person to the rest of the team **are always larger for random \(D\) than for deterministic \(D\) with the same mean**.

Another general takeaway is that **the probability of having no transmission at all does not depend on how the rest of the team is formed, – whether the team is the same on each of the \(D\) days they interact with the infected person, or is changing**.

However, if one is not just concerned with the *probability* of having a transmission, but also with *how many* people are infected, if the transmission does happen, we need more detailed analysis.

Let’s look into the *average number of the team members infected*, not just the probability that the number of infected people is \(0\).

We fix (for now) the number of days \(D\) the infected person was interacting with the rest of the team.

*Assume* that during each of the days, a fraction \(s\) of the total personnel count \(N\) is manning the office (so that \(k=sN\) people are present each day). In the baseline model above, \(s=1/2\).

- **F**ixed: all \(k\) members interacting with the infected person are the same, and
- **R**andom: each day, the people manning the office are chosen at random, each person with chance \(s\).

Assume further that, for each person sharing the office with the infected one, the chance *to avoid* contracting CV is \(\tau\) (so that the chance that all \(k\) people present avoid contracting it on any given day is \(\tau^k=q\)).

In the fixed case, for any of the \(k\) people on the team, the probability to avoid transmission for \(D\) days is \(\tau^D\), so that the *average* number of people avoiding transmission is \(k\tau^D\), and the *average* number of infected people is

\[

I_F=k(1-\tau^D).

\]

In the random case, each of the \(N\) people on any given day will not be summoned to the office with probability \((1-s)\), or will be called in but escape infection with probability \(s\tau\).

The probability to avoid infection during the \(D\) days during which the infected person attends the office is therefore

\[

((1-s) + s\tau)^D,

\]

and the *average* number of infected persons will be

\[

I_R=N(1 - ((1 - s) + s\tau)^D).

\]

So, who wins, \(I_F\) or \(I_R\)? In other words, what is the comparison

\[

k (1 - \tau^D) \mbox{ vs } N(1 - ((1 - s) + s\tau)^D)?

\]

A little bit of algebra (which we will skip here) shows that

\[

I_F<I_R

\]

**for any \(\tau<1\)**.

In other words, **the number of infected persons is smaller for fixed teams than for the randomly shuffled ones** (as, perhaps, intuition was telling you anyway).
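A numerical check of this comparison (the values of \(N\), \(s\), \(D\) and the range of \(\tau\) are mine, for illustration):

```python
import numpy as np

N, s, D = 24, 0.5, 4       # staff size, office fraction, days of exposure
k = int(s * N)             # people in the office each day

for tau in np.linspace(0.50, 0.99, 50):
    I_fixed = k * (1 - tau ** D)                   # fixed team
    I_random = N * (1 - ((1 - s) + s * tau) ** D)  # reshuffled team
    assert I_fixed < I_random  # fixed teams infect fewer people on average
```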

All these computations, general as they are, are still inferior to the hammer of OR and epidemiological theory: agent-based simulations. **Here’s** a python notebook running such simulations, taking as input some approximations we know from the literature (like the incubation period), some guesstimates (like infection rates), and the compositions of the teams, and spitting out various estimates, like the distribution of the number of infected.

The planning concerns a team of 24, over 60 days. The competing plans are to split them into teams of 6, each working on a schedule of 5 days in the office, 5 days at home, or into three teams, of sizes 4+4+4 or 6+4+2. The personnel is rotated between the teams. Other parameters are:

- at work infection rate/contagious person: 0.09
- at home infection rate: 0.005

Here’s a typical output. These are the cumulative distribution functions (say, a value of .5 at 5 means that the chance to have 5 or fewer infected is .5), so the higher the plot runs, the better the schedule is.

Here’s another:

We estimated the fraction of the team that will be infected over the 60 days, on the *5 days at work, 5 days at home* schedule. The infection-at-work and infection-at-home rates are as above.

The key feature of this graph is that the fraction *increases* with the size of the team. This implies that, given the *number of workers* and the *number of teams* into which you need to split your crew, it is always beneficial to *keep the sizes of the teams as uniform as possible*.

It makes little sense to try to formulate universal recommendations that would guide staffing decisions in each particular situation. Neither is it practical to attempt to work out precise formulae for a broad spectrum of scenarios. What the managers should do is simulations, especially agent-based ones, – they are still the best tool in town, and can save lives.

**Use them!**


As I found the FAQ provided by the campus a bit lacking, I collected below a few points worth noting. (The prospectus, a.k.a. “white paper” on Falcon Enterprise can be found here.)

- Falcon Enterprise is a big, distributed solution. The computers of faculty and staff, and the campus servers, with Falcon installed will run “sensors” – essentially, programs doing deep inspection of what is stored in memory, what processes are running, and what traffic goes through the ports. The *sensors* would, of course, have full access to the filesystem, – and justly so: to catch a virus, one needs to look *inside* a file, not only at the file header. This is not very dissimilar to traditional anti-virus software, which looks for fingerprints (from a regularly updated database) of malice in the files on the computer or server.
- What is *different* in the Crowdstrike solution is that these data are constantly sent back to the Crowdstrike servers, to be run through their proprietary system. The advantage of this is, of course, that an outbreak of mass infection on a network can be detected faster, and the culprits can (sometimes) be explicitly identified, which (again, sometimes) facilitates remedial actions.
- The data Crowdstrike collects on end users’ – ours, – lap- and desktops are stored on Crowdstrike servers for a few weeks. These data are nowhere close to comprehensive, of course. But we don’t know exactly what is sent up to the mothership. (Given Crowdstrike’s claim that their software is AI-driven, that is, learning on the fly about emerging threats, they don’t explicitly control what is sent and stored either.)
- Nonetheless, humans aren’t fully out of the loop: the white paper promises a dedicated team that will be monitoring our data 24/7 as they flow into their system. Campus IT personnel will also have – tightly regulated and bound by various legal constraints – visibility into our computers.
- Overall campus costs are about ~$300K for setup and ~$500K/year to operate. This, however, assumes that the install base is around 48K computers, which brings into focus the big question:
- Who will be forced to install it on their computers? Right now the IT FAQ says that only campus-owned machines will need to get Falcon installed. We have about 5.5K faculty (tenure/tenure-track, specialized, and visiting) and about 8K staff members, which of course includes facilities management etc. Do we really have about 4 networked computing units per person?

In summary: Crowdstrike’s Falcon Enterprise is a cutting-edge IT-security solution, an appropriate answer to the unprecedented cyber-threats of the modern era. Correspondingly, it is unprecedentedly intrusive, to a degree similar to tools run by financial or military organizations.
