Some random complex thoughts..

..on “Contextual vs Composable”, “Antifragile” and other “memes” 

Oh, let me pause here for a moment so you can appreciate how incredibly funny the "random complex" oxymoron in the post title is. Ha. Ha. Ha-ha-ha.

*sigh*…

Now, geek humor aside, a bit more serious: In several tweet exchanges and direct conversations with various people over the last few months I was asked — and even promised (!) — to write something a bit longer than sarcastic 140-character quips on complexity theory, Complex Adaptive Systems (CAS), and why I strongly dislike (or even "call BS" on) several of the "memes" being circulated around. For example: "Contextual vs Composable" as programmability abstractions (a concept also picked up in this article by James Urquhart in the context of PaaS); "Antifragile", a term coined in the book of the same name by Nassim Nicholas Taleb (see e.g. this review of the book, the best I've seen so far — and no, I haven't bothered to read the book itself); etc.

Before I even start, a few words of warning and caveats: As I usually make abundantly clear — sometimes quite bluntly and/or at excruciating length — in direct conversations, my presentations, or even blog posts such as this one, I'm very keen on framing my position as accurately as I can in terms of — and in relation to — academic research on the topic at hand. I consider it a matter of intellectual honesty to list, as clearly as possible, my references, the subjectivity of my position, my biases, and my influences; to avoid, to the extent possible, "inventing" terms and concepts for which there is an already well-established technical jargon; and, correspondingly, when using terms and concepts for which there are several interpretations, to state in which sense I use one or the other at any particular time.

The reason is not only the deeply-ingrained habits of an old academic. I consider it the bare minimum of respect for my audience and the time they spend reading/listening to yours truly: Providing appropriate pointers and references, and using the terms and vocabulary as per those references — which may or may not be familiar to the audience — simply establishes the common ground for mutual understanding and potential debate. Conversely, I find it nothing short of insulting when people (also referred to as "thought leaders") "invent" new terms, concepts, or "theories" and — as is most usual in those cases — fail to provide appropriate references and pointers to their "research". Especially when there is a rich body of prior work and a well-established technical jargon on the subject. Feel free to look at my previous blog post for more ranting on this.

Back to the topic at hand: I'm deeply influenced by — and will extensively refer to — David Snowden's work on the Cynefin framework as a "multi-ontological framework for sense-making". See [1] for a good starting reference on the subject. Also consistent with Professor Snowden's work are my constraint-based definition of Complex Adaptive Systems (originating from [12], if you really want to get technical) and my use of the terms "simple", "complicated", "complex", etc.

Now, time to frame, even if loosely, a few terms and concepts that I find important to the topic at hand — together with some pointers to further reading. This is by no means an exhaustive list — simply enough to ground the discussion.

A few complexity theory concepts

Complex Adaptive System 

Consists of a collection of agents that act independently based on agent-local data and heuristics [2]. The agents act in an adaptive manner in response to the behavior of other agents and, respectively, to the overall, aggregate behavior of the system. As a result, the logic and heuristics the agents follow in their actions change both over time and in comparison with other agents [3].
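
To make "agent-local data and heuristics" concrete, here is a minimal toy sketch (my own illustration, not taken from any of the referenced works) in which each agent sees only its neighbors' last actions and, on top of that, keeps mutating the very heuristic it decides with:

    import random

    class Agent:
        def __init__(self):
            self.action = random.choice([0, 1])
            self.imitate_p = random.random()  # agent-local heuristic

        def step(self, neighbor_actions):
            majority = round(sum(neighbor_actions) / len(neighbor_actions))
            if random.random() < self.imitate_p:
                self.action = majority          # adapt to the local context
            else:
                self.action = 1 - majority      # contrarian move
            # the heuristic itself drifts over time: agents change how they decide
            self.imitate_p = min(1.0, max(0.0, self.imitate_p + random.uniform(-0.05, 0.05)))

    agents = [Agent() for _ in range(50)]
    for t in range(100):
        actions = [a.action for a in agents]
        for i, agent in enumerate(agents):
            local_view = [actions[(i - 1) % 50], actions[(i + 1) % 50]]  # local data only
            agent.step(local_view)
        if t % 20 == 0:
            print(t, sum(actions))  # an aggregate, system-level number

No agent in this toy has any notion of the aggregate count printed every 20 steps; that number is a property of the interactions.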

CAS manifest the following characteristics:

  • They are non-linear, non-deterministic, and evolve irreversibly [3] [4]: Once they evolve to one state it is impossible to undo the changes. In technical terms we say that "CAS are non-causal, they are dispositional". Or, using a popular metaphor: "in a CAS the same thing will happen twice only by accident".

An important — but far too often ignored — implication is that they are non-predictable, i.e. we cannot write computer models that entirely capture — and thus can predict — system behavior. They can only be perceived — and the overall systemic behavior influenced — via direct interaction with the agents.

  • The nature of the system differs from the nature of the agents themselves and is fundamentally influenced by the context they are embedded in. While the nature of the agents can be entirely deterministic and predictable, the specific context of their interaction may result in the overall system becoming "complex". For example, an airplane is a highly "complicated" (in Cynefin terms) but entirely deterministic machine: Its built-in controls are designed to respond in exactly the same way in identical conditions — e.g. pulling that lever that amount raises the flaps to that degree, each and every time — but only with everything else being equal. However, in non-linear contexts (e.g. flying an airplane in turbulence) the behavior of the system in response to exercising those controls is unpredictable, due to the non-linear and time-sensitive summation of all the interactions within and with the system. As such, human judgement — based on direct experience of similar conditions — has to be brought to bear to heuristically guide the system behavior, based on the controls available and the effect we infer those will have on the system (an inference grounded in the said prior experience).

An important observation here is that the nature of a system is also a matter of perception — i.e. it is an ontological problem. For example, while in actuality a particular system in a particular context may be causal and deterministic (i.e. not a CAS), an incomplete understanding of the underlying dynamics and the rules governing its behavior may make the system appear as a CAS — i.e. non-deterministic, non-causal, and dispositional. As such, the way we perceive a system is not only dependent on the surrounding context, but also determined by our own prior subjective experiences, biases, and personal inclinations. And it is in this sense that we refer to Cynefin as a "multi-ontological framework for sense-making": Different types of systems are perceived — and dealt with — in different ways, depending both on the actual context (objectively) and on how we subjectively perceive them at that particular time.

  • They have emergent properties [4] [5] [6], i.e. patterns of regularity that are not present at the level of the agents themselves, nor can they be inferred from the agents' characteristics [7] [8a]. This is usually referred to in the popular vernacular as "the whole is greater than the sum of its parts".

While many CAS do consist of a clearly identifiable hierarchy of lower-order subsystems [8b], and a reductionist approach to analyzing them does provide some value (i.e. analyzing CAS in terms of abstract models of their subsystems), those analyses are fundamentally limited. Namely, they are limited by — and influenced by — the inherent assumptions built into those models [9]. Consequently, such "mechanistic" decompositions of CAS fail to fully describe the emergent nature of the CAS itself [10]. Instead, those emergent CAS characteristics need to be studied not in terms of the composing subsystems but, more importantly, on their own terms — terms which are irreducible to, and cannot be integrated from, the laws governing the underlying subsystems. That is why we say that "the system transcends the agents". Unfortunately, this is far too often confused with "self-organization".

Constraints / Boundary conditions

One fundamental aspect that makes a CAS dispositional and "path-dependent" — i.e. "exquisitely sensitive to initial conditions and particular history" [11] — is context-sensitive constraints. These constraints "synchronize and correlate previously independent parts into a systemic whole" [12].
It is by alternately imposing and relaxing context-specific constraints that the agent behavior within a CAS — and thus the emergent behavior of the CAS itself — is influenced. It is through this influencing process that CAS are controlled. However, since CAS are by their very nature non-linear / non-causal, any such intervention is irreversible. Consequently, the CAS is known — and its behavior influenced — via a series of safe-to-fail "experiments" [13]: Low-cost, incremental actions which we can either recover from quickly (safe-to-fail, fail-fast) or, alternatively, further amplify to encourage the desired emergent behavior.
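
Purely as an illustration of that probing loop (everything below, from the probe function to the thresholds, is invented for the sketch):

    import random

    def run_probe(effort):
        """Hypothetical low-cost intervention; returns the observed effect."""
        return effort * random.uniform(-1.0, 2.0)

    probes = [{"effort": 1.0} for _ in range(5)]   # several cheap, parallel probes
    for _ in range(10):
        for probe in probes:
            effect = run_probe(probe["effort"])
            if effect < 0:
                probe["effort"] *= 0.5   # dampen: recover quickly, keep the cost low
            else:
                probe["effort"] *= 1.5   # amplify: encourage the desired behavior

    print(sorted(round(p["effort"], 2) for p in probes))

The point of the caricature is the asymmetry: failures stay cheap and recoverable, while anything promising gets amplified.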

One important aspect of constraints acting as boundary conditions for a CAS with human agents is that these boundaries are permeable, fuzzy, negotiable, and in a constant state of flux [12]. They are to be perceived as zones of phase change rather than rigid limits which separate but cannot be transgressed [14]. The reason lies in the nature of human identity [15]: Fluid, constantly negotiated using the power of the narrative (but without being integrated), with multiple, resilient identities surfacing or receding as the context may require.

Attractors, Affordances, Fitness landscapes

The safe-to-fail interventions described above are performed using attractors — catalytic probes acting as trigger mechanisms that precipitate a desirable change in the CAS's evolutionary trajectory, i.e. its emergent behavior [11] [12].

Correspondingly, affordances describe the degrees of freedom available to the system and the propensity of a CAS to evolve in some directions versus others [16], as facilitated by the attractors. As a result, we can describe a CAS in terms of a three-dimensional fitness landscape where the "valleys" represent basins of attraction and the "peaks" represent states and behaviors from which the system shies away [11] [12].
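
A toy rendering of that picture, one-dimensional and of my own making rather than from [11] or [12]: a landscape with two basins, where the point at which the system settles depends on where it starts and on the noise along the way, i.e. the path-dependence discussed above. The two starting points near the ridge may end up in either basin from run to run:

    import random

    def landscape(x):
        # two "valleys" (basins of attraction) near x = -1 and x = +2
        return ((x + 1) * (x - 2)) ** 2

    def settle(x, steps=1000, eps=0.01):
        for _ in range(steps):
            slope = (landscape(x + eps) - landscape(x - eps)) / (2 * eps)
            x += -0.01 * slope + random.gauss(0, 0.02)   # downhill, plus noise
        return x

    for x0 in (-2.0, 0.4, 0.6, 3.0):
        print(f"start {x0:+.1f} -> settles near {settle(x0):+.2f}")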

Ordered systems — Simple, Complicated

Following the Cynefin definitions, I'm using the terms "simple" and "complicated" for ordered systems — i.e. causal, deterministic, predictable, and "computer model-able". One thing worth noting is that in a constraint-based definition of such systems the agents are tightly coupled and their behavior is entirely controlled by the system itself. In other words, the agent affordances are strictly limited.

As noted above for CAS, we need to be conscious of the potential differences between the objective nature of the system itself and, respectively, how we subjectively perceive it.

Controls

The term — at least as most commonly used in the popular vernacular — tends to pertain to "ordered" systems (Cynefin "simple", "complicated") and control thereof. Given the "ordered" nature of those systems (deterministic, predictable), these controls are direct, measurable, and mechanistic in nature ("pull that lever" / "push that button" / "trigger that feedback loop"). For CAS, however, this paradigm does not apply directly. As described above, the behavior of a CAS is influenced using attractors, by encouraging desired emergent behavior within flexible boundaries. As such, the resulting controlling effect within a CAS is far more difficult to predict than in the case of the "controlling mechanisms" (i.e. "controls") appropriate for "ordered" systems.

Coherence

Is an important concept for complex systems in that it allows us to quantify CAS emergent behavior. Technically defined as "maximizing satisfaction of a set of positive and negative constraints" [17] — in our terms above, attractors and, respectively, repellers — the concept of coherence allows quantifying, at least to some degree, how consistent the behavior of different agents within a CAS is. Or, alternatively, the "degree of truth" of a "theory". For example, it allows us to state things such as "Darwin's theory of evolution is more coherent than creationism", in that the hypotheses comprised in the former are more consistent with the available data than those of the latter. In doing so we avoid making absolute statements such as "one is true whereas the other is false" where counterexamples falsifying both theories exist.
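
The technical definition is concrete enough to compute. Below is a brute-force rendering of coherence as constraint satisfaction in the spirit of [17], with the elements, pairs, and weights invented purely for illustration: partition the elements into accepted and rejected so as to maximize the total weight of satisfied constraints.

    from itertools import product

    elements = ["h1", "h2", "e1", "e2"]
    positive = {("h1", "e1"): 2.0, ("h2", "e2"): 1.0}   # accept (or reject) together
    negative = {("h1", "h2"): 3.0}                      # accept exactly one of the two

    def coherence(accepted):
        score = 0.0
        for (a, b), w in positive.items():
            if (a in accepted) == (b in accepted):   # satisfied: treated alike
                score += w
        for (a, b), w in negative.items():
            if (a in accepted) != (b in accepted):   # satisfied: treated differently
                score += w
        return score

    best = max((set(e for e, keep in zip(elements, bits) if keep)
                for bits in product([0, 1], repeat=len(elements))),
               key=coherence)
    print("accept:", sorted(best), "coherence:", coherence(best))

The exhaustive search is only feasible for toy sizes, of course; the point is that "more coherent" is a quantitative comparison, not a binary verdict.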

An important thing to note is that the scope of the embedding context is essential — as it is for CAS in general. The larger the scope, and the more generic the assumptions that necessarily have to be made about that context — or, putting it in Popperian terms, the easier it is to falsify the hypothesis [18] — the more coherent and stronger that hypothesis. For example, astrology may be coherent in and of itself (i.e. considered in isolation), but once its explanations conflict with psychology and astronomy it becomes obvious that it is less coherent [17].

Resilience and Stability

Literally from [19] (emphasis mine):

“Stability …represents the ability of a system to return to an equilibrium state after a temporary disturbance; The more rapidly it returns and the less it fluctuates, the more stable it would be”

“But there is another property, termed resilience, that is a measure of the persistence of systems and of their ability to absorb change and disturbance and still maintain the same relationships between populations or state variables”.

Moreover, from [11] (emphasis mine):

“…the difference between stability and resilience: A stable system fluctuates minimally outside its stable attractor, to which it quickly returns when perturbed. Stable systems are typically brittle; they disintegrate if highly stressed. Resilient systems, on the other hand, might fluctuate wildly but have the capacity to modify their structure so as to adapt and evolve. Resilient, robust systems are also called meta-stable. Co-evolution selects for resilience, not stability”

As shown elsewhere [11] [13], throughout history policy makers have counseled fail-safe strategies, in that they promoted an ideal of stability rather than resilience. Such utopias — which can be traced back to Plato — are self-contained and isolated, and exclude the potential for change: They are designed for an ideal equilibrium, and any further change would mean a departure from that ideal.

However, as mentioned above, it has been shown [19] that evolution selects for resilience rather than stability. In particular, a safe-to-fail strategy allows systems to adapt to disturbances and absorb change, and thus makes them robust. Characteristic of such systems are variety among subsystems [20] and a moderate degree of coupling between them: "Too loose coupling means that fluctuations (innovations) emanating from one system would not reach others; too close coupling [means] that the environment can quickly damp out any fluctuation" [13].

In biological systems these characteristics are captured by the concept of degeneracy — "a system property that requires the existence of multi-functional components (but also modules and pathways) that perform similar functions (i.e. are effectively interchangeable) under certain conditions, yet can perform distinct functions under other conditions" [21]. In other words, the system is capable of performing exaptations — repurposing a function (or a trait) that was developed for one purpose to other uses under conditions of extreme stress [22]. For example, feathers initially evolved for the regulation of temperature in dinosaurs but were then repurposed for bird flight.

Fine-grained objects

Implied in the terms "safe-to-fail" and "coupled to a moderate degree" is the concept of fine-grained objects. In particular, the safe-to-fail interventions performed within a CAS to influence its emergent behavior have to have the appropriate level of granularity. Too coarse a granularity precludes the intervention from being "low-cost", whereas too fine a granularity limits its learning impact, i.e. limits its impact on the system's emergent behavior [23].

Also, and more importantly, as this emergent behavior is stabilized and the system evolves from exploration to exploitation [24], the granularity of those interventions should be increased to maximize their impact and effectiveness. Moreover, this movement is periodic: When an established design reaches the end of its useful lifetime (the constraining assumptions are falsified), we need to revert to an exploratory learning phase and the fine granularity of the interventions appropriate to that phase.
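
The cycle is mechanical enough to caricature in a few lines; the numbers and the points at which the design "breaks" below are all invented:

    granularity = 1      # start with fine-grained, low-cost interventions
    confidence = 0.0

    def assumptions_hold(step):
        return step not in (7, 15)   # pretend the dominant design breaks here

    for step in range(20):
        if assumptions_hold(step):
            confidence = min(1.0, confidence + 0.1)
            granularity = 1 + int(confidence * 9)   # exploit: scale interventions up
        else:
            confidence, granularity = 0.0, 1        # falsified: back to exploration
        print(step, granularity)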


Contextual vs composable, Antifragile and other “memes”

Contextual vs Composable programability abstractions

In the light of the definitions above, the terms as such — at least as used in this blog post — make very little sense. If instead we use the term "plug-in based framework" for the former — as suggested in the post itself — we can infer that what is meant is a pre-designed "architecture". Or, as the post says:

“Contextual tools like Ant and Maven allow extension via a plug-in API, making extensions the original authors envisioned easy”

In other words, it is a pre-designed architecture that functions as a framework in which new plugins can be developed. As the terms clearly suggest, this is a perfectly valid environment as long as the assumptions built into the system — a pre-designed, ordered, "complicated" system in Cynefin terms — are respected.
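
A minimal sketch of that style (mine, not from the post; the class and phase names are invented) makes the built-in assumptions visible:

    class BuildTool:
        """The pre-designed framework: it owns the lifecycle and the extension API."""
        def __init__(self):
            self.plugins = []

        def register(self, plugin):
            self.plugins.append(plugin)

        def build(self):
            for phase in ("compile", "test", "package"):   # phases fixed at design time
                for plugin in self.plugins:
                    hook = getattr(plugin, "on_" + phase, None)
                    if hook:
                        hook()   # plug-ins can only hook the envisioned phases

    class CoveragePlugin:
        def on_test(self):
            print("measuring coverage")   # easy: the original authors envisioned this

    tool = BuildTool()
    tool.register(CoveragePlugin())
    tool.build()

Anything outside the compile/test/package lifecycle — i.e. outside the assumptions built into the framework — simply has no place to hook in.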

As such, it should come as no surprise that when these assumptions are falsified and the hard limits imposed by the design are challenged, the system breaks down (it is no longer capable of delivering the expected results).

On the other hand, I find the term "composable" slightly more appropriate: What is described there are fine-grained objects that are combined in novel ways that could not have been "pre-designed" when the objects themselves were created. In particular, the article illustrates how the well-known, well-established tools available in a UNIX system can be ingeniously combined to solve a rather advanced problem very efficiently. However, one very important aspect the article fails to mention is that these tools have evolved, and have been tuned and refined, over many years of usage in countless environments — or, to use our terms above, "embedding contexts".
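
For symmetry with the sketch above, and rendered in Python rather than as a shell pipeline purely to keep one language across these illustrations, the composable style amounts to small, single-purpose pieces whose combination their authors never had to anticipate:

    import re
    from collections import Counter

    def words(text):            # one small tool: tokenize
        return re.findall(r"[a-z']+", text.lower())

    def frequencies(tokens):    # another: count
        return Counter(tokens)

    def top(counts, n):         # another: rank
        return counts.most_common(n)

    text = "the quick brown fox jumps over the lazy dog the fox"
    print(top(frequencies(words(text)), 3))

None of the three pieces knows about the others; the "architecture" exists only at the point of composition.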

However, concluding that one type of system is "better" than the other fails to fully appreciate the fundamental differences between the two. As James Urquhart seems to suggest at the end of his blog post, they are simply different, providing different compromises between control (fine-grained objects) and convenience (effectiveness). In this respect, I find that this post by Nati Shalom captures that aspect far more accurately.

Still, and more importantly: As we inevitably move from exploration (fine-grained objects providing control) to exploitation (convenient, effective, mature platforms), and as in that process the dominant design matures and solidifies, we need to be aware of — and pay close attention to — those border cases that break the assumptions made as part of that evolution. These are clear signs that the dominant design is approaching the end of its useful lifetime and that we need to consider alternatives. When that happens we need to be able to restart the cycle, revert to fine-grained objects, and enter an exploratory learning phase in search of a new, better, more effective dominant design that can subsequently be exploited effectively — until the cycle has to be repeated all over again.

Antifragile

The above discussion on resilience vs. stability should provide ample justification for why I personally find the term completely useless. The fact that it seems to have been "coined" in a void (with no references to the ample body of respectable academic work that exists on the topic) and is used in various ways in various contexts does not make matters any better. Moreover, since such usage eschews the definitions and technical jargon well established by the said prior works, I find it nothing short of disrespectful.

References

[1] D. Snowden, C.F. Kurtz — “The new dynamics of strategy: Sense-making in a complex and complicated world”. IBM Systems Journal, VOL 42, NO 3, 2003.
[2] J.H. Holland — “Studying Complex Adaptive Systems.” Journal of Systems Science and Complexity, 2005.
[3] E.M. Rogers, U.E. Medina, M.A. Rivera, C.J. Wiley — "Complex Adaptive Systems and the Diffusion of Innovations". The Innovation Journal: The Public Sector Innovation Journal, Volume 10(3), Article 30.
[4] Fulvio Mazzocchi — “Complexity in Biology”. http://www.nature.com/embor/journal/v9/n1/full/7401147.html
[5] J. H. Holland — “Emergence: From Chaos to Order”.  Perseus Publishing, 1998.
[6] S. Johnson — “Emergence: The Connected Lives of Ants, Brains, Cities, and Software”. Scribner Book, 2001.
[7] M. Polanyi — “Transcendence and Self-Transcendence”. Soundings 53:1, 1970.
[8a] D. J. Watts — "Everything is Obvious (Once You Know the Answer)". Crown Publishing Group, 2011.
[8b] H.A. Simon — "The Architecture of Complexity: Hierarchic Systems". Proceedings of the American Philosophical Society, December 1962.
[9] H.A. Simon — “The Sciences of the Artificial”. MIT Press, Third Edition, 1996.
[10] M. Polanyi — “Life’s Irreducible Structure”. Science, Vol. 160, June 1968.
[11] A. Juarrero — “Complex Dynamic Systems Theory”. Cognitive Edge, http://cognitive-edge.com/library/more/articles/complex-dynamical-systems-theory/
[12] A. Juarrero — “Dynamics in action: Intentional behavior as a complex system”.  MIT press, 1999.
[13] A. Juarrero — “Fail-safe versus safe-fail: Suggestions towards an Evolutionary Model of Justice”. Texas Law Review Journal, June 1991.
[14] A. Juarrero — “Complex Dynamic Systems and the Problem of Identity”. Emergence, 4(1/2), 2002.
[15] C.F. Kurtz, D. Snowden — “Bramble Bushes in a Thicket: Narrative and the intangibles of learning networks”. Cognitive Edge article, http://cognitive-edge.com/blog/entry/4445/bramble-bushes-in-a-thicket/
[16]  H. Letiche, M. Lissack, R. Schultz — “Coherence in the midst of complexity”. Macmillan, 2011.
[17] P. Thagard — “Coherence in Thought and Action”. MIT press, 2002.
[18] K. Popper  — “Science: Conjectures and Refutations”. Lecture at Peterhouse, Cambridge, 1953.
[19] C.S. Holling — “Resilience and Stability of Ecological Systems”, 1973. From E. Jantsch, C. Waddington — “Evolution and consciousness: Human Systems in Transition”, Addison Wesley 1976.
[20] W. R. Ashby — "An Introduction to Cybernetics". W. Clowes & Sons, 1956.
[21] J. M. Whitacre — "Degeneracy: A link between evolvability, robustness and complexity in biological systems". Theoretical Biology and Medical Modelling (BMC), 2010.
[22] S. J. Gould, E. S. Vrba — “Exaptation – a missing term in the science of form”. Paleobiology 8(1), 1982.
[23] D. Snowden — “It’s all about the granularity”.  Cognitive Edge blog post, http://cognitive-edge.com/blog/entry/5758/its-all-about-the-granularity/
[24] J. March — "Exploration and Exploitation in Organizational Learning". Organization Science, 2(1), 1991.