Evidence in the Complex domain – Some observations from a practitioner

First, a big caveat and some framing for this post: I make no secret of the fact that I’m a big admirer of Dave Snowden and an adherent of the Cynefin sense-making framework. To anyone with the opportunity or interest, I’d warmly recommend attending his SenseMaker courses — or at the very least one of his lectures, or reading his articles. For a quick intro, see my previous posts here. For much more, see Cognitive-Edge.

With that out of the way, I will henceforth assume familiarity with Cynefin, the difference between “Complex” and “Complicated”, etc.

This post was triggered by this excellent article — and the subsequent Twitter exchange — by Dave on the subject of what passes as evidence, particularly in Complex systems — which, by and large and with some simplifications, describes most social systems of interest. I would recommend reading that article before proceeding (**).

Timing and full agreement with Dave’s article aside, this topic is near and dear to my heart, and I have given it quite a bit of thought in the past. The reason? In my current role, one of my tasks is providing early evidence to act as the basis for future technological investments (aka “strategy”, rather crudely). This post reflects my observations and experiences in the field while doing just that.

As Dave mentions in his post, humans — and the social groups they build and are part of — are systems with a large degree of complexity. As CAS (complex adaptive systems) we are dispositional, not causal. More importantly, the nature and degree of predictability of those dispositions vary greatly depending on context: As an individual, my personal dispositions are somewhat predictable — will I have my usual “black tea and two rye bread toasts” this morning, or will I go for a strong coffee? Still, even those personal dispositions are influenced by — and largely dependent on — the environment I am in: How long and how well did I sleep? Am I at home, or at a hotel where I can be tempted by a fry-up?

On the other hand, my individual ability to trigger coherence (narrowly defined as: a group agreeing to act in a certain way) decreases rapidly — and the time to achieve it increases — with the size of the group I’m engaged with. For example, it’s usually faster for me to get my immediate family (Dunbar’s “support group”, to be precise) to agree on what we are going to cook for Sunday dinner than it is to influence my local community regarding county plans for building a new road close to our house. (Well, at least it tends to be so for some Sunday dinner courses involving fish and my daughters, but I digress…)

While coherence around a certain narrative and evidence are distinct concepts, in a complex environment I consider the two closely intertwined. An understanding of the current state of the system (experience-based), coupled with its disposition to evolve in some directions rather than others (see this video on the Probable — Possible — Plausible scale), constitutes a starting hypothesis. Based on this starting hypothesis, evidence is gathered — or, rather, manifests itself — as coherence (positive response) to our probing actions.

It is important to note here that those actions take place in a Complex environment. In other words, those actions are sense-making probes — multiple safe-to-fail experiments running in parallel, of which at least half should fail. The latter point is very important, since failures provide critical information on where the boundary is, i.e. what constraints are at play and whether — and how — we can push against (relax) them or, respectively, enforce them. See also below.

In these contexts (Complex domains) the process of gathering evidence is an expression of the distributed act of sense-making: Emergent coherence around a certain narrative is manifested as a positive response to our probing actions. The evidence (coherence) co-evolves with — and thus informs and guides — further sense-making via distributed probing actions (safe-to-fail, low-cost experiments).

Concretely, in my area of activity (IT engineering) the process is rather rapid due to the large degree of codification at play (à la Boisot): The large majority of customers (see the (*) note below) often have very similar sets of technologies, often use them in similar ways, and ergo have similar sets of needs. In response to early signals (i.e. interest manifested by early adopters and outliers (*)) we initiate sense-making probes (experiments). We nurture those that seem promising, all while looking for coherence (adoption) in other similar — but independent! — contexts, and quickly incorporate their sense-making feedback to guide further probing. As the reactions to those early experiments become similar (ergo emergent coherence), we foster group ties and feed this virtuous cycle in a positive-feedback fashion: As momentum gathers, our understanding is further refined into evidence that feeds the cycle via informed probing.

Now, as the old adage goes, “there be dragons”: As clearly stated in this other post by Dave, there is danger in blindly following the mantra: “OK, we got lucky. This seems to have worked, let’s do some more of it”. In addition to the caveats Dave lists in his post, here are some from my own experience:

  •  Pay close attention to the “near-field outliers” and watch “the side-view” carefully: Informing and influencing the initial traction (ergo evidence) are what I call the “near-field outliers”. These are the cases where the reaction to our sense-making probe is “Yes, this is/would be great — but only if...”. Or: “Oh, that’s really nice, but how about ...”. Actively exploring — via safe-to-fail, co-evolutionary experiments — those lateral dispositions and testing them in terms of group coherence (how much they resonate in other, independent contexts) is crucial. It provides two things: a) evidence guiding subsequent probing actions (in the sense above); b) a sense of the encompassing constraints, which either have to be pushed against (if deemed undesirable) or enforced (if we consider them desirable or, at the very least, benign).
  • Beware of premature convergence: This is a real danger, and a trap we fall into far too often. Particularly in the IT engineering domain, with its low-cost experiments and fast iterations all too readily available, it is very tempting — in the name of “driving traction” (aka “market adoption”) — to latch onto the so-called early adopters and drive adoption in whatever direction we deem appropriate at a given time. It is at this very point that a careful balance has to be struck between actively probing in multiple, contradictory directions looking for emergent coherence vs. becoming prescriptive — a perverted “discovery process”, often manifested as a set of leading questions, all with pre-defined answers and all pointing to the same conclusion. Unfortunately, this form of intellectual arrogance is far too common in IT engineering — not seldom overlaid on an existing set of prejudices, on both sides. While efficient in enforcing convergence to a pre-defined set of conclusions and next steps — all in the name of predictability, of course — I do not consider the result genuine evidence, as the coherence is enforced rather than emergent (i.e. order is externally imposed). That this is most often used simply to reassure and reinforce the existing status quo and power structures only adds insult to injury.

At this point an important note is in order: In all of the above, narrative coherence (ergo adoption) was a proxy for evidence. As Dave notes in his post, there is a fundamental difference between evidence and proof. The difference ultimately depends on the cognitive domain at hand and, respectively, on our cognitive biases. As this post pertains to the Complex domain (i.e. situations and contexts with a large degree of complexity), evidence cannot be used to demonstrate the existence of a causal chain, nor used in any predictive fashion. In a Complex domain, cause and effect can only be analyzed a posteriori, and experiment repeatability and prediction are nonsensical. That is why we need to carefully analyze the near-field outliers: In good Popperian tradition, all evidence — acting either as a starting assumption or as a working theory — should be falsifiable. That is why, as per above, we need to carefully make sure at least half of our experiments “fail” and, respectively, analyse and probe near-outliers to refine / invalidate (falsify) the evidence at hand.

Moreover, in the Complex domain the concepts of goal and metrics (definition of success) have to walk a fine line between being specific enough to provide an actionable measure of progress (**) and, respectively, being ambiguous enough to allow lateral evolution and serendipitous discovery of adjacent evolutionary possibilities. As those are explored and sense-making becomes more refined, this can be captured in more focused, quantitative metrics — hence a more precise definition of success. It is at this point that we transition from Complex to Complicated, from exploration to exploitation, from prototype to product. It is here that causal chains stabilize and become predictable — or, most dangerously, are assumed to be so — and the notion of evidence changes into proof. The existence of this dominant narrative, the onset of groupthink under the assumption of causality, has serious risks of its own — some of which were already mentioned above. However, they are outside the scope of this post.

(*) Note the deliberate omission of any discussion of “far-outliers”. A topic in itself, those fringes are the realm of instability, of opportunity — or, often, both. Depending on their number and group coherence (i.e. coherence among themselves) they can either be regarded as individual, external reference points to be used to measure “progress” or, if manifesting group coherence, be tested for larger collective coherence — hence alternative, contradictory evidence, as per this article.

(**) Just after finishing this article I realised Dave had posted a follow-up.

Some random complex thoughts..

..on “Contextual vs Composable”, “Antifragile” and other “memes” 

Oh, let me pause here for a moment so you can appreciate how incredibly funny the “random complex” oxymoron in the post title is. Ha. Ha. Ha-ha-ha.


Now, geek humor aside, a bit more seriously: In several tweet exchanges and direct conversations with various people over the last few months I was asked — and even promised (!) — to write something a bit longer than sarcastic 140-character quips on complexity theory, Complex Adaptive Systems (CAS), and why I strongly dislike (or even “call BS” on) several of the “memes” being circulated around — e.g. “Contextual vs Composable” as programmability abstractions (a concept also picked up in this article by James Urquhart in the context of PaaS); “Antifragile” — a term coined in a book of the same name by Nassim Nicholas Taleb (see e.g. this review of the book, which is the best I’ve seen so far — and no, I haven’t bothered to read the book itself); etc.

Before I even start, a few words of warning and some caveats: As I usually make abundantly clear — sometimes quite bluntly and/or at excruciating length — in direct conversations, my presentations, or even blog posts such as this one, I’m very keen on framing my position as accurately as I can in terms of — and in relation to — academic research on the topic at hand. I find it a matter of intellectual honesty to list, as clearly as possible, my references, the subjectivity of my position, my biases and my influences; to avoid, to the extent possible, “inventing” terms and concepts for which there is already a well-established technical jargon; and, correspondingly, when using terms and concepts for which there are several interpretations, to state in which sense I use one or the other at any particular time.

The reason is not only the deeply-ingrained habits of an old academic. I consider it a bare minimum of respect for my audience and for the time they spend reading/listening to yours truly: Providing appropriate pointers and references, and using the terms and vocabulary as per those references — which may or may not be familiar to the audience — simply establishes the common ground for mutual understanding and potential debate. Conversely, I find it nothing short of insulting when people (also referred to as “thought leaders”) “invent” new terms, concepts or “theories” and — as is most usual in those cases… — fail to provide appropriate references and pointers to their “research”. Especially when there is a rich set of prior works and a well-established technical jargon on the subject. Feel free to look at my previous blog post for more ranting on this…

Back to the topic at hand: I’m deeply influenced by — and will extensively refer to — David Snowden’s work on the Cynefin framework as a “multi-ontological framework for sense-making”. See [1] for a good starting reference on the subject. Also consistent with Professor Snowden’s usage are my constraint-based definition of Complex Adaptive Systems (originating from [12], if you really want to get technical) and my use of the terms “simple”, “complicated”, “complex”, etc.

Now, it is time to frame, even if loosely, a few terms and concepts that I find important to the topic at hand — together with some pointers for further reading. This is by no means an exhaustive list — simply something to ground the discussion.

A few complexity theory concepts

Complex Adaptive System 

A complex adaptive system consists of a collection of agents that act independently based on agent-local data and heuristics [2]. The agents act in an adaptive manner in response to the behavior of other agents and, respectively, to the overall, aggregate behavior of the system. As a result, the logic and heuristics the agents follow in their actions change both over time and in comparison with other agents [3].

CAS manifest the following characteristics:

  • They are non-linear, non-deterministic and evolve irreversibly [3] [4]: Once they evolve to one state it is impossible to undo the changes. In technical terms we say that “CAS are not causal, they are dispositional”. Or, using a popular metaphor: “in a CAS the same thing will happen twice only by accident”.

An important — but far too often ignored — implication is that they are unpredictable, i.e. we cannot write computer models that entirely capture — and can thus predict — system behavior. They can only be perceived — and the overall systemic behavior influenced — via direct interaction with the agents.
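The flavor of this non-repeatability can be illustrated with a deliberately tiny toy — the logistic map, a classic one-line non-linear system. This sketch is my own addition, not from the references, and real CAS are of course far richer; the point is only that in the chaotic regime two trajectories starting a millionth apart quickly bear no resemblance to each other — “the same thing will happen twice only by accident”:

```python
def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate the logistic map x -> r * x * (1 - x) from initial state x0.

    With r = 3.9 the map is in its chaotic regime: non-linear, fully
    deterministic, yet exquisitely sensitive to the initial condition.
    """
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # perturbed by one part in a million

# The first steps are nearly identical; within a few dozen iterations
# the two trajectories diverge by orders of magnitude more than the
# initial perturbation, so no measurement of the starting state is ever
# precise enough to predict the long-run behavior.
divergence = [abs(x - y) for x, y in zip(a, b)]
```

Note that the map is entirely causal step by step; the unpredictability comes from the non-linearity amplifying any imprecision in our knowledge of the state.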

  • The nature of the system differs from the nature of the agents themselves and is fundamentally influenced by the context they are embedded in. While the nature of the agents can be entirely deterministic and predictable, the specific context of their interaction may result in the overall system becoming “complex”. For example, an airplane is a highly “complicated” (in Cynefin terms) but entirely deterministic machine: Its built-in controls are designed to respond in exactly the same way in identical conditions — e.g. pulling that lever that amount raises the flaps to that degree, each and every time — but only with everything else being equal. However, in non-linear contexts (e.g. flying an airplane in turbulence) the behavior of the system in response to exercising those controls is unpredictable, due to the non-linear and time-sensitive summation of all the interactions within and with the system. As such, human judgement — based on direct experience of similar conditions — has to be brought to bear to heuristically guide the system behavior, based on the controls available and the effect we infer those will have on the system (an inference based on the said prior experience).

An important observation here is that the nature of a system is also a matter of perception — i.e. it is an ontological problem. For example, while in actuality a particular system in a particular context may be causal and deterministic (i.e. not a CAS), an incomplete understanding of the underlying dynamics and of the rules governing its behavior may make the system appear as a CAS — i.e. non-deterministic, non-causal, dispositional. As such, the way we perceive it is not only dependent on the surrounding context, but also determined by our own prior subjective experiences, biases and personal inclinations. It is in this sense that we refer to Cynefin as a “multi-ontological framework for sense-making”: Different types of systems are perceived — and dealt with — in different ways, depending both on the actual context (objectively) and on how we subjectively perceive them at that particular time.

  • They have emergent properties [4] [5] [6], i.e. patterns of regularity that are not present at the level of the agents themselves, nor can they be inferred from their characteristics [7] [8]. This is usually referred to in the popular vernacular as “the whole is greater than the sum of its parts”.

While many CAS do consist of a clearly identifiable hierarchy of lower-order subsystems [8], and a reductionist approach to analyzing them does provide some value (i.e. analyzing CAS in terms of abstract models of their subsystems), those analyses are fundamentally limited — namely, limited by, and influenced by, the inherent assumptions built into those models [9]. Consequently, such “mechanistic” decompositions of CAS fail to fully describe the emergent nature of the CAS itself [10]. Instead, those emergent CAS characteristics need to be studied not in terms of the composing subsystems but, more importantly, on their own terms — terms which are irreducible to, and cannot be integrated from, the laws governing the underlying subsystems. That is why we say that “the system transcends the agents”. Unfortunately, this is far too often confused with “self-organization”.

Constraints / Boundary conditions

One fundamental aspect that makes a CAS dispositional and “path-dependent” — i.e. “exquisitely sensitive to initial conditions and particular history” [11] — is context-sensitive constraints. These constraints “synchronize and correlate previously independent parts into a systemic whole” [12].
It is by alternately imposing and relaxing context-specific constraints that the agent behavior within a CAS — and thus the emergent behaviour of the CAS itself — is influenced. It is through this influencing process that CAS are controlled. However, since CAS are by their very nature non-linear / non-causal, any such intervention is irreversible. Consequently, a CAS is known — and its behavior influenced — via a series of safe-to-fail “experiments” [13]: Low-cost, incremental actions from which we can either recover quickly (safe-to-fail, fail-fast) or which, alternatively, can be further amplified to encourage the desired emergent behavior.
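The economics of such a portfolio of probes can be put in back-of-the-envelope form. The sketch below is my own illustration (the probe cost, success probability, and payoff figures are all invented for the purpose): each probe costs a small fixed amount, failures are stopped early so the loss is capped at the probe cost, and successes are amplified — in contrast to a single fail-safe bet, where the downside is unbounded by design:

```python
import random

def run_portfolio(n_probes, probe_cost, p_success, amplified_payoff, rng):
    """Total outcome of n parallel safe-to-fail probes.

    Every probe costs `probe_cost`. A probe that finds coherence
    (probability `p_success`) is amplified for `amplified_payoff`;
    a probe that fails is simply stopped, so its loss stays capped
    at the probe cost -- and the failure itself tells us where a
    constraint lies.
    """
    total = 0.0
    for _ in range(n_probes):
        total -= probe_cost
        if rng.random() < p_success:
            total += amplified_payoff
    return total

# With these (invented) numbers the worst case is a bounded -10,
# while the expected outcome is positive: -10 + 10 * 0.4 * 5 = +10.
rng = random.Random(42)
outcome = run_portfolio(n_probes=10, probe_cost=1.0,
                        p_success=0.4, amplified_payoff=5.0, rng=rng)
```

The point is not the arithmetic but the asymmetry: because failures are cheap and informative while successes are amplified, we can afford — indeed, want — a large fraction of the probes to fail.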

One important aspect of constraints acting as boundary conditions for CAS with human agents is that these boundaries are permeable, fuzzy, negotiable, and in a constant state of flux [12]. They are to be perceived as zones of phase change rather than rigid limits which separate but cannot be transgressed [14]. The reason lies in the nature of human identity [15]: Fluid, constantly negotiated using the power of narrative (but without being integrated), with multiple, resilient identities surfacing or receding as the context may require.

Attractors, Affordances, Fitness landscapes

The safe-to-fail interventions described above are performed using attractors — catalytic probes acting as trigger mechanisms that precipitate a desirable change in the CAS evolutionary trajectory, i.e. in its emergent behavior [11] [12].

Correspondingly, affordances describe the degrees of freedom available to the system and the propensity of a CAS to evolve in some directions rather than others [16], as facilitated by the attractors. As a result, we can describe a CAS in terms of a three-dimensional fitness landscape where the “valleys” represent basins of attraction and the “peaks”, respectively, states and behaviors from which the system shies away [11] [12].
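A minimal sketch of the basin-of-attraction idea (my own illustration, not from the references; the landscape shape and step size are arbitrary choices): a one-dimensional “landscape” with two valleys, where a system following the local gradient downhill ends up in a different basin depending entirely on which side of the ridge it starts from — the path-dependence discussed above:

```python
def landscape(x):
    """A double-well 'fitness landscape': valleys near x = -1 and x = +1,
    separated by a ridge at x = 0."""
    return (x ** 2 - 1) ** 2

def settle(x, rate=0.01, steps=2000):
    """Follow the local gradient downhill (numerical derivative) until
    the state settles into whichever basin of attraction it started in."""
    h = 1e-6
    for _ in range(steps):
        grad = (landscape(x + h) - landscape(x - h)) / (2 * h)
        x -= rate * grad
    return x

# Two nearby starting points on opposite sides of the ridge
# end up in different valleys:
left = settle(-0.1)    # settles near x = -1
right = settle(+0.1)   # settles near x = +1
```

In this toy picture an attractor intervention corresponds to reshaping the landscape (or nudging the state across a ridge) rather than commanding the system to a target position directly.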

Ordered systems — Simple, Complicated

Following the Cynefin definitions, I use the terms “simple” and “complicated” for ordered systems, i.e. causal, deterministic, predictable, and “computer-modelable”. One thing worth noting is that, in a constraint-based definition of such systems, the agents are tightly coupled and their behavior is entirely controlled by the system itself. In other words, the agent affordances are strictly limited.

As noted above for CAS, we need to be conscious of the potential differences between the objective nature of the system itself and, respectively, how we subjectively perceive it.


Control

The term — at least as it is most commonly used in the popular vernacular — tends to pertain to “ordered” systems (Cynefin “simple”, “complicated”) and to control thereof. Given the “ordered” nature of those systems (deterministic, predictable), these controls are direct, measurable, and mechanistic in nature (“pull that lever” / “push that button” / “trigger that feedback loop”). However, for CAS this paradigm does not apply directly. As described above, the behavior of a CAS is influenced using attractors, by encouraging desired emergent behavior within flexible boundaries. As such, the resulting controlling effect within a CAS is far more difficult to predict than in the case of the “controlling mechanisms” (i.e. “controls”) appropriate for “ordered” systems.


Coherence

Coherence is an important concept for complex systems in that it allows us to quantify CAS emergent behavior. Technically defined as “maximizing satisfaction of a set of positive and negative constraints” [17] — in our terms above, attractors and, respectively, repellers — the concept of coherence allows quantifying, at least to some degree, how consistent the behavior of different agents within a CAS is. Or, alternatively, the “degree of truth” of a “theory”. For example, it allows us to state things such as “Darwin’s theory of evolution is more coherent than creationism”, in that the hypotheses comprised in the former are more consistent with the available data than those of the latter. In doing so we avoid making absolute statements such as “one is true whereas the other is false” where counterexamples falsifying both theories exist.
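That definition is concrete enough to sketch in code. Below is a crude, brute-force rendering of coherence as constraint satisfaction in the spirit of [17]: elements are either accepted or rejected; a positive constraint between two elements is satisfied when both share the same status, a negative constraint when their statuses differ; the most coherent position is the accept/reject assignment maximizing the total satisfied weight. The element names and weights are invented purely for illustration (Thagard’s own algorithms are more sophisticated than this exhaustive search, which only works for a handful of elements):

```python
from itertools import product

def best_partition(elements, positive, negative):
    """Return the accept/reject assignment maximizing satisfied weight.

    `positive` and `negative` are lists of (element_a, element_b, weight)
    constraints. A positive constraint is satisfied when both elements
    have the same status; a negative one when their statuses differ.
    Brute force: enumerate all 2^n assignments.
    """
    best, best_score = None, float("-inf")
    for bits in product([True, False], repeat=len(elements)):
        accepted = dict(zip(elements, bits))
        score = sum(w for a, b, w in positive if accepted[a] == accepted[b])
        score += sum(w for a, b, w in negative if accepted[a] != accepted[b])
        if score > best_score:
            best, best_score = accepted, score
    return best, best_score

# Invented example: a hypothesis cohering with two pieces of evidence
# and incohering with a rival hypothesis.
elements = ["hypothesis", "evidence-1", "evidence-2", "rival"]
positive = [("hypothesis", "evidence-1", 2.0),
            ("hypothesis", "evidence-2", 1.5)]
negative = [("hypothesis", "rival", 2.0)]
assignment, score = best_partition(elements, positive, negative)
# The maximally coherent position accepts the hypothesis together with
# its evidence and rejects the rival.
```

Note that coherence here is relative, not absolute — it ranks competing assignments against each other, exactly the “more coherent than” comparisons described above.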

An important thing to note is that the scope of the embedding context is essential — as it is for CAS in general. The larger the scope, and the more generic the assumptions that necessarily have to be made about that context — or, putting it in Popperian terms, the easier it is to falsify the hypothesis [18] — the more coherent and the stronger that hypothesis. For example, astrology may be coherent in and by itself (i.e. considered in isolation), but once its explanations conflict with psychology and astronomy it becomes obvious that it is less coherent [17].

Resilience and Stability

Literally from [19] (emphasis mine):

“Stability …represents the ability of a system to return to an equilibrium state after a temporary disturbance; The more rapidly it returns and the less it fluctuates, the more stable it would be”

“But there is another property, termed resilience, that is a measure of the persistence of systems and of their ability to absorb change and disturbance and still maintain the same relationships between populations or state variables”.

Moreover, from [11] (emphasis mine):

“…the difference between stability and resilience: A stable system fluctuates minimally outside its stable attractor, to which it quickly returns when perturbed. Stable systems are typically brittle; they disintegrate if highly stressed. Resilient systems, on the other hand, might fluctuate wildly but have the capacity to modify their structure so as to adapt and evolve. Resilient, robust systems are also called meta-stable. Co-evolution selects for resilience, not stability”

As shown elsewhere [11] [13], throughout history policy makers have counseled fail-safe strategies, in that they promoted an ideal of stability rather than resilience. Such utopias — which can be traced back to Plato — are self-contained and isolated, and exclude the potential for change: they were designed for an ideal equilibrium, and any further change would mean a departure from that ideal.

However, as mentioned above, it has been shown [19] that evolution selects for resilience rather than stability. In particular, a safe-to-fail strategy allows systems to adapt to disturbances and absorb change, and thus makes them robust. Characteristic of such systems is variety among subsystems [20] and a moderate degree of coupling between them: “Too loose coupling means that fluctuations (innovations) emanating from one system would not reach others; too close coupling [means] that the environment can quickly damp out any fluctuation” [13].

In biological systems these characteristics are captured by the concept of degeneracy — “a system property that requires the existence of multi-functional components (but also modules and pathways) that perform similar functions (i.e. are effectively interchangeable) under certain conditions, yet can perform distinct functions under other conditions” [21]. In other words, the system is capable of performing exaptations — repurposing a function (or a trait) that was developed for one purpose to other uses under conditions of extreme stress [22]. For example, feathers initially evolved for the regulation of temperature in dinosaurs, but were then repurposed for bird flight.

Fine-grained objects

Implied in the terms “safe-to-fail” and “coupled to a moderate degree” is the concept of fine-grained objects. In particular, the safe-to-fail interventions performed within a CAS to influence its emergent behavior have to have an appropriate level of granularity. Too coarse a granularity precludes the intervention from being “low-cost”, whereas too fine a granularity limits its learning impact, i.e. limits its impact on the system’s emergent behavior [23].

Also, and more importantly, as this emergent behavior stabilizes and the system evolves from exploration to exploitation [24], the granularity of those interventions should be increased to maximize their impact and effectiveness. Moreover, this movement is periodic: When an established design reaches the end of its useful lifetime (its constraining assumptions are falsified), we need to revert back to an exploratory learning phase and to the fine granularity of interventions appropriate to that phase.

Contextual vs composable, Antifragile and other “memes”

Contextual vs Composable programmability abstractions

In light of the definitions above, the terms as such — at least as used in this blog post — make very little sense. If instead we use the term “plug-in based framework” for the former — as suggested in the post itself — we can infer that what is meant is a pre-designed “architecture”. Or, as the post says:

“Contextual tools like Ant and Maven allow extension via a plug-in API, making extensions the original authors envisioned easy”

In other words, it is a pre-designed architecture that functions as a framework in which new plugins can be developed. As the terms clearly suggest, this is a perfectly valid environment as long as the assumptions built into the system — a pre-designed, ordered, “complicated” system in Cynefin terms — are respected.

As such, it should come as no surprise that when those assumptions are falsified and the hard limits imposed by the design are challenged, the system breaks down (it is not capable of delivering the expected results).

On the other hand, I find the term “composable” slightly more appropriate: What is described there are fine-grained objects that are combined in novel ways that could not have been “pre-designed” when the objects themselves were created. In particular, the article illustrates how the well-known, well-established tools available in a UNIX system can be ingeniously combined to very efficiently solve a rather advanced problem. However, one very important aspect the article fails to mention is that these tools have evolved, and have been tuned and refined, over many years of usage in countless environments — or, to use our terms above, “embedding contexts”.

However, concluding that one type of system is “better” than the other fails to fully appreciate the fundamental differences between the two. As James Urquhart seems to suggest at the end of his blog post, they are simply different, providing different compromises between control (fine-grained objects) and convenience (effectiveness). In this respect, I find that this post by Nati Shalom captures that aspect far more accurately.

Still, and more importantly, as we inevitably move from exploration (fine-grained objects providing control) to exploitation (convenient, effective, mature platforms), and as in that process the dominant design matures and solidifies, we need to be aware of — and pay close attention to — those border cases that break the assumptions that have to be made as part of that evolution. These are clear signs that the dominant design is approaching the end of its useful lifetime and that we need to consider alternatives. When that happens we need to be able to restart the cycle, revert back to fine-grained objects, and enter an exploratory learning phase in search of a new, better, more effective dominant design that can subsequently be exploited — until the cycle has to be repeated all over again.


Antifragile

The above discussion on resilience vs. stability should provide ample justification for why I personally find the term completely useless. The fact that it seems to have been “coined” in a void (with no references to the ample body of respectable academic work that exists on the topic) and is used in various ways in various contexts does not make matters any better. Moreover, since such usage eschews the definitions and technical jargon well established by the said prior works, I find it nothing short of disrespectful.


[1] D. Snowden, C.F. Kurtz — “The new dynamics of strategy: Sense-making in a complex and complicated world”. IBM Systems Journal, VOL 42, NO 3, 2003.
[2] J.H. Holland — “Studying Complex Adaptive Systems.” Journal of Systems Science and Complexity, 2005.
[3] E.M. Rogers, U.E. Medina, M.A. Rivera, C.J. Wiley — “Complex Adaptive Systems and the Diffusion of Innovations”. The Innovation Journal, The Public Sector Innovation Journal, Volume 10(3), Article 30.
[4] Fulvio Mazzocchi — “Complexity in Biology”. http://www.nature.com/embor/journal/v9/n1/full/7401147.html
[5] J. H. Holland — “Emergence: From Chaos to Order”.  Perseus Publishing, 1998.
[6] S. Johnson — “Emergence: The Connected Lives of Ants, Brains, Cities, and Software”. Scribner Book, 2001.
[7] M. Polanyi — “Transcendence and Self-Transcendence”. Soundings 53:1, 1970.
[8] D. J. Watts — “Everything is Obvious (once you know the answer)”. Crown Publishing Group, 2011.
[8] H.A. Simon — “The Architecture of Complexity: Hierarchic Systems”. Proceedings of the American Philosophical Society, December 1962.
[9] H.A. Simon — “The Sciences of the Artificial”. MIT Press, Third Edition, 1996.
[10] M. Polanyi — “Life’s Irreducible Structure”. Science, Vol. 160, June 1968.
[11] A. Juarrero — “Complex Dynamic Systems Theory”. Cognitive Edge, http://cognitive-edge.com/library/more/articles/complex-dynamical-systems-theory/
[12] A. Juarrero — “Dynamics in action: Intentional behavior as a complex system”.  MIT press, 1999.
[13] A. Juarrero — “Fail-safe versus safe-fail: Suggestions towards an Evolutionary Model of Justice”. Texas Law Review Journal, June 1991.
[14] A. Juarrero — “Complex Dynamic Systems and the Problem of Identity”. Emergence, 4(1/2), 2002.
[15] C.F. Kurtz, D. Snowden — “Bramble Bushes in a Thicket: Narrative and the intangibles of learning networks”. Cognitive Edge article, http://cognitive-edge.com/blog/entry/4445/bramble-bushes-in-a-thicket/
[16] H. Letiche, M. Lissack, R. Schultz — “Coherence in the midst of complexity”. Macmillan, 2011.
[17] P. Thagard — “Coherence in Thought and Action”. MIT press, 2002.
[18] K. Popper — “Science: Conjectures and Refutations”. Lecture at Peterhouse, Cambridge, 1953.
[19] C.S. Holling — “Resilience and Stability of Ecological Systems”, 1973. From E. Jantsch, C. Waddington — “Evolution and consciousness: Human Systems in Transition”, Addison Wesley 1976.
[20] W. R. Ashby — “An Introduction to Cybernetics”. Chapman & Hall, 1956.
[21] J. M. Whitacre — “Degeneracy: A link between evolvability, robustness and complexity in biological systems”. Theoretical Biology and Medical Modelling (BMC), 2010.
[22] S. J. Gould, E. S. Vrba — “Exaptation – a missing term in the science of form”. Paleobiology 8(1), 1982.
[23] D. Snowden — “It’s all about the granularity”.  Cognitive Edge blog post, http://cognitive-edge.com/blog/entry/5758/its-all-about-the-granularity/
[24] J. March — “Exploration and Exploitation in Organizational Learning”. Organization Science, 2(1), February 1991.

How to write mind-blowing presentations

Or: “How to be a thought leader in 5 easy steps”

Full-of-ideas vs full-of-shit

First and foremost, an apology for breaking a promise:

In a Twitter exchange some time ago I promised that my first “real” blog post (basically a rabid rant, just like this one, as will become abundantly obvious rather quickly…) would be entitled “From Correlation to Causation and Back Again”…

However, a recent trend has managed to top that issue on my personal “Bullshit-o-Meter”. You know, just when you thought the industry could not sink any lower, somebody comes along and proves you wrong. (Well, just before you manage to do so yourself — we’ll get back to that in a moment…)

So, what is this all about ? Well, it’s a long-winded answer to a question Chris Hoff asked on Twitter a couple of days ago (with the mandatory tongue-in-cheek):

“How does one become a thought leader ?”

Well, my dear attention-starved reader, here’s my “How to become a thought leader in 5 easy steps” guide. Oh, and in case you were wondering, this never-before-shared, unique recipe (now trademarked and patent-pending) will allow you to give <american> AWWWWwwSOME </american>, mind-blowing presentations to stunned audiences:

  1. Quickly (and only partly !) browse through a book and/or academic research material on some subject that was previously completely foreign to you. Popular suggestions: Thermodynamics, Cybernetics, Neuroscience, Sociology, Philosophy (including Philosophy of Science), Linguistics, Organizational and Management theory (ahem… we’ll get back to this one day, hopefully. See the broken promise above). Anything that captures your imagination, really… The more controversial or obscure the book, the better.
  2. Collect some quotes from this material that support some random point you’re trying to make — but make sure they sound smart. For example, something like this:
    “Information is meaning extracted from data in specific contexts that have to be pre-shared”.  Or:
    “Knowledge can only be observed indirectly through acting on the meaning extracted from data in specific frames of interpretation”.  Or, more sophisticated / eclectic:
    “Coherence is an emergent quality of a complex adaptive system”
  3. Make sure you are not providing any references whatsoever to the original material, thus making an implicit claim of originality (but never clearly stated — we gotta be smart here and leave room for an exit in case some poor soul actually did read the original material and has the audacity to call us on it…). Also, feel free to mangle the concepts, terms, and vocabulary of the original source so as to obfuscate their origin. That is why I recommend only partly reading the said material — it will help a great deal to that end.
    (Oh, and before you credit me with unrivaled originality or depth of new insight: the source of the first two quotes above is this wonderful book by Max Boisot, one of my personal favorites. The third can be found in any half-decent book about complexity theory / complex adaptive systems. This one will do just fine.)
  4. Browse Google Images, Getty Images, or any other image repository for a picture that you think somehow relates to the point you’re making.
  5. Put the two together in a slide.

As a result you will have a wonderful presentation that will simply WOW the indigenous population at IT conferences, to the point that they’ll be so amazed at the depth of insight you’re sharing that they will stop fidgeting with their mobile phones and laptops (or any other toy) for just about 15 seconds.

And, most importantly, you’ll have the instant status of a (deep) thought leader.

Now, please excuse me — I need to finish my talk for the OpenStack Summit in Portland (yes, lest we forget the self-plugs…) by re-using concepts from Geoffrey A. Moore’s “Crossing the Chasm” and Clayton Christensen’s “The Innovator’s Dilemma”. How original is that, huh ?

Over and out,


P.S. 1: The image is from the inimitable @gapingvoid cartoon — one of my top favorites.
P.S. 2: Still not entirely happy with the WP theme / formatting. I spent more than a couple of hair-pulling hours trying to get even this simple post formatted somewhere close to “decent”. So please excuse the suboptimal formatting while I’m still figuring this out.

Yet another not-so-humble blog

‘Cause that’s exactly what the world needs

In the past I was challenged, repeatedly, to start a blog as a placeholder for my pseudo-random rants. And, in the great spirit of Internet collaboration — or, in the words of the living legend that is Geoffrey A. Moore, Systems of Engagement — I steadfastly refused to do so.

Until now.

So, scary as it may be to get out of the cosy comfort zone of making snarky comments in 140 characters or less on Twatter — which resulted in my being knighted “Sir Florian the Acerbic” by the inimitable Steve Chambers, a title which I’m dubiously proud of — here it is. A blog. My blog. With sentences and phrases. And, hopefully, ideas that go beyond sarcastic quips.

So, what can you expect from this blog ? Well, quite simply, some stuff I’m interested in. Which is, as of today:

  • Scale-out IT infrastructures. Right now I’m particularly interested in OpenStack.

That’s pretty much it really.

Oh, and last but not least: the title of the blog should be self-descriptive. The content will be either in reaction to — or striving to trigger — a “three letter moment”. On that, you can choose your favourite TLA, FFS…


P.S. Oh, and I beg your patience as I’m trying to get to grips with the look and formatting. Not entirely happy with this first iteration, but it will have to do for now. Suggestions welcome.