Thursday, June 28, 2007

Intervening in Complex Political Systems

Complexity theory offers an innovative way to think about global affairs. But it’s…well, complex, so a little theory may be useful before tackling the innumerable actors (people, parties, countries, international corporations, alliances), wide variety of capabilities, and endless interconnections characterizing international politics.

Consider a trivial complex system consisting of three components, as follows:

Component A – pink, medium-sized
Component B – blue, medium-sized
Component C – gray, medium-sized

Each component, in our toy system, has only one possible action – move one step in a 3-D grid in any direction at random. (An "action" is behavior that has an impact on other components.)
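
For concreteness, here is one way this toy system might be written down in Python. The class and function names (`Component`, `random_step`) and the choice to allow diagonal steps are my own illustrative assumptions, not part of the theory.

```python
import random
from dataclasses import dataclass

@dataclass
class Component:
    """One actor in the toy system: a color, a shade, a size, and a grid position."""
    name: str
    color: str                    # "pink", "blue", or "gray"
    shade: float = 0.5            # 0.0 = lightest, 1.0 = darkest
    size: float = 1.0             # 1.0 = "medium-sized"
    position: tuple = (0, 0, 0)   # location on the 3-D grid

def random_step(component: Component) -> None:
    """The single available action: move one step in a random direction on the grid
    (each axis changes by -1, 0, or +1, so diagonal moves are allowed)."""
    dx, dy, dz = (random.choice((-1, 0, 1)) for _ in range(3))
    x, y, z = component.position
    component.position = (x + dx, y + dy, z + dz)

# The three components of the toy system.
A = Component("A", "pink")
B = Component("B", "blue")
C = Component("C", "gray")
```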

This is clearly a rather simple system, compared, say, to a human cardiovascular system or a nation-state political system. Nevertheless, predicting its future states will be difficult because we are asserting that it is a complex system. This statement tells us first that its components are interdependent and adaptive.

Let’s assume, then, that each component’s color and size can evolve as a function of distance from the other two components. It is likely that the rate of evolution will vary across components, so let Component A have the slowest rate of evolution, Component B a rate 30% faster, and Component C a rate 200% faster unless it finds itself exactly equidistant from Components A and B, at which point its evolution rate slows to only 5% faster.
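
Continuing the sketch, the differing evolution rates might be encoded roughly as follows. The particular way shade and size drift with distance is an arbitrary illustration, since the text only requires that they change as some function of distance to the other components.

```python
import math

def evolution_rate(component: Component, others: list) -> float:
    """A evolves slowest (1.0), B 30% faster (1.3), C 200% faster (3.0) -- unless C is
    exactly equidistant from A and B, in which case it slows to only 5% faster (1.05)."""
    if component.name == "A":
        return 1.0
    if component.name == "B":
        return 1.3
    d1, d2 = (math.dist(component.position, o.position) for o in others)
    return 1.05 if math.isclose(d1, d2) else 3.0

def evolve(component: Component, others: list) -> None:
    """Illustrative update: shade and size drift in proportion to the component's
    evolution rate and its mean distance from the other two components."""
    rate = evolution_rate(component, others)
    mean_dist = sum(math.dist(component.position, o.position) for o in others) / len(others)
    component.shade = min(1.0, max(0.0, component.shade + 0.01 * rate * (mean_dist - 2.0)))
    component.size = max(0.1, component.size + 0.005 * rate * (2.0 - mean_dist))
```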

Finally, it is natural to have delays in the reactions of components, so let us posit the following simple rules as a sample of what a complete rule set might look like (a code sketch follows the list):
  • At Time 1, Component A moves.
  • If Component A moves closer to B, Component B waits two time steps and then moves.
  • Else Component B waits one time step and then moves.
  • If Components A and B together end up, on average, closer to C than they were before moving, then C turns a lighter shade, shrinks, waits three time steps, and then moves.
  • After five time steps, if C is within 10 distance units of A and is smaller than medium, A moves toward C.
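
One way to encode such rules is as a small scheduler that keeps a queue of pending moves keyed by time step. The sketch below continues the earlier Python sketch and is entirely my own framing; it captures only the first three rules, and the clauses for C would be added in the same pattern.

```python
import math

def run(A: Component, B: Component, C: Component, steps: int = 10) -> None:
    """Toy scheduler for the delay rules: `pending` maps a due time step to the
    components that move on that step."""
    pending = {1: [A]}                               # At Time 1, Component A moves.
    d_AB_before = math.dist(A.position, B.position)  # distance from A to B before A moves

    for t in range(1, steps + 1):
        for comp in pending.pop(t, []):
            random_step(comp)
            if comp is A:
                # If A moved closer to B, B waits two time steps; else it waits one.
                delay = 2 if math.dist(A.position, B.position) < d_AB_before else 1
                pending.setdefault(t + delay, []).append(B)
        if not pending:                              # nothing further scheduled: no one moves
            break
```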

One could continue to define a rather large number of rules, even for this very simple complex system. But even if we stop here, adding only a rule that says, "else, no one moves," predicting future states of the system quickly becomes arduous. Now consider one simple change that is essential to approximate social reality: instead of rules with simple, binary alternatives (move/don’t move), the rules are always tendencies, e.g., "Else Component B probably waits a short number of time steps and then moves a short distance." With this minimal step toward real-world social systems, where one never knows, for example, exactly how rigorously the other side will adhere to a cease-fire, we suddenly have a system whose states can hardly be predicted even a few steps into the future.
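
Here is what a rule-as-tendency might look like in the same sketch, with every hard number replaced by a draw from a distribution; the particular probabilities and ranges are arbitrary, chosen only for illustration.

```python
import random

def b_reacts():
    """'Else Component B probably waits a short number of time steps and then moves
    a short distance.' Every quantity is now a random draw rather than a constant."""
    if random.random() < 0.8:                   # "probably": B reacts about 80% of the time
        delay = random.randint(1, 3)            # "a short number of time steps"
        step_length = random.uniform(0.5, 2.0)  # "a short distance"
        return delay, step_length
    return None                                 # sometimes B simply does not react
```

Even this single change means that two runs starting from identical conditions can diverge almost immediately.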

A couple of initial questions arise, even from this "simple" complex system:

  1. If its future states cannot be predicted, what can we say about the future?
  2. What external input would make the system’s future states easier or more difficult to predict?

What can we say about the future?

  • Colors get lighter or darker but never change hue, so, for example, we will never see "green."
  • Certain relationships clearly exist (specified in the rules), so if we could control the actions of one component, we could predict something about the actions of others.
  • A fundamental distinction exists between behavior we care about (behavior that affects the system) and behavior that is internal to a component. We need to keep this distinction in mind and set aside internal behavior when our concern lies with the system. Our model simplifies by having no internal behavior, but in the real world the two will be hopelessly entangled.
  • If we made a system dynamics model of system behavior, might we discover some interesting tipping points? That may be a boring question for this toy system, with its scarcity of interactions, but it is a critical one for evaluating real systems, where a seemingly straightforward financial agreement can lead to, say, cultural complications.
  • We did not discuss the concept of behavior at the system level. Indeed, this system seems rather too simple to have any. But perhaps this assumption is naïve; we would have to build an agent-based model of the system (and a rather sophisticated one, at that) to find out (a bare-bones skeleton of the idea follows this list). In any case, for a real system, we would be wise to consider the possibility that behavior not inferable from the behavior of the individual components might emerge at some collective level.
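
A bare-bones version of that check, continuing the earlier sketch: re-run the system many times and watch an aggregate quantity that no individual rule mentions (here, arbitrarily, the mean pairwise distance). Any persistent pattern in that quantity would be a candidate for system-level behavior. This is a skeleton of the idea only, nowhere near the "rather sophisticated" model that would actually be needed.

```python
from itertools import combinations
from statistics import mean
import math

def average_spread(components) -> float:
    """An aggregate, system-level quantity: the mean pairwise distance."""
    return mean(math.dist(a.position, b.position) for a, b in combinations(components, 2))

def explore(n_runs: int = 100, steps: int = 50) -> float:
    """Re-run the toy system many times and average the aggregate across runs."""
    results = []
    for _ in range(n_runs):
        A, B, C = Component("A", "pink"), Component("B", "blue"), Component("C", "gray")
        run(A, B, C, steps=steps)
        results.append(average_spread([A, B, C]))
    return mean(results)
```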


In sum, it appears that a good deal can be said to guide one’s expectations about future behavior, whether the actual state of the system at any particular point can be predicted or not.


What can we do to alter the predictability of the future?

  • Alter energy levels. In some sense, behavior is a function of "energy." What that means depends on the system, but if our components can move, they have "energy." If we increase the energy, they may move farther and/or faster (a small sketch of this knob follows the list).
  • Add or subtract variables. These components have size and color. What if they could also reproduce, form alliances, or steal each other’s energy?
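
As a rough illustration of the first knob, "energy" could simply scale how far a component can move per action; this interpretation is mine, since the text leaves "energy" deliberately loose.

```python
import random

def random_step_with_energy(component: Component, energy: float = 1.0) -> None:
    """Higher 'energy' lets a component travel farther in a single action."""
    max_step = max(1, round(energy))
    dx, dy, dz = (random.randint(-max_step, max_step) for _ in range(3))
    x, y, z = component.position
    component.position = (x + dx, y + dy, z + dz)
```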


Without having yet said anything about the real world of international affairs (or any other real system), we have already begun to derive some principles. Let us call them PFMAWCS: principles for mucking about with complex systems.

Principle #1. Intervening provokes adaptation.
Intervening means inserting "energy," whether energy in the literal physics sense or, in the case of a political system, one of the political equivalents (argumentation, new laws, force, money). One way or the other, getting something done requires an investment of energy. Putting energy into a system will make the system adapt more, well, "energetically." This will make it harder to control and will make the future harder to predict because actions will take place faster.

Principle #2. Intervening degrades predictability.
Adding energy makes things happen faster, so it becomes harder to analyze everything. But intervening will probably also add variables, i.e., capabilities. Foreign technical and financial assistance may enable a regime to create new institutions or enforce new control mechanisms. The construction of a new road system may facilitate participation in the international drug trade. New actors may join the system, and old actors may gain new capabilities. Actors will connect to each other in new ways. Adding variables makes the number of connections grow quadratically and the number of possible interaction patterns grow exponentially, vastly increasing the downstream effects. Subtracting isn’t simple either, because it is not just addition in reverse. Reducing energy will affect different components differently, because complex systems are heterogeneous (recall that even our model system’s components vary by color, and color affects behavior). Subtracting a variable (e.g., denying a colonial population the right to self-government) will cause multiple immediate impacts, more second-order impacts, potentially still more third-order impacts…once again, exponential change, and not necessarily the mirror image of adding the variable.
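
A quick count makes that growth concrete: with n actors, pairwise links grow quadratically while possible groupings of two or more actors grow exponentially.

```python
from math import comb

for n in range(3, 9):
    pairwise = comb(n, 2)        # possible two-way connections among n actors
    groupings = 2**n - n - 1     # possible groupings (coalitions) of two or more actors
    print(f"{n} actors: {pairwise:3d} pairwise links, {groupings:4d} possible groupings")
```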


These principles caution the activist to proceed with care. Whatever you do will have downstream impacts in every direction, impacts you won’t be able to foresee because everything is connected in ways you can’t predict.

Now, about that "real world" we have been so carefully avoiding...
