Evidence in the Complex domain – Some observations from a practitioner

First, a big caveat and some framing for this post: I make no secret of the fact that I’m a big admirer of Dave Snowden and an adept of the Cynefin sense-making framework. To anyone who has the opportunity or interest I’d warmly recommend attending his SenseMaker courses — or at the very least catching one of his lectures or articles. For a quick intro, see my previous posts here. For much more, see Cognitive-Edge.

With that out of the way, I will henceforth assume familiarity with Cynefin, the difference between “Complex” and “Complicated”, and so on.

This post was triggered by this excellent article — and subsequent Twitter exchange — by Dave on the subject of what passes as evidence, particularly in Complex systems; which, by and large and with some simplification, covers most of the social systems of interest. I would recommend reading that article before proceeding (**).

Timing and full agreement with Dave’s article aside, this topic is near and dear to my heart and I have given it quite a bit of thought in the past. The reason? In my current role one of my tasks is providing early evidence to act as the basis for future technology investments (aka “strategy”, rather crudely put). This post reflects my observations and experiences from doing just that in the field.

As Dave mentions in his post, humans — and the social groups they build and are part of — are systems with a large degree of complexity. As CAS (complex adaptive systems) we are dispositional, not causal. More importantly, the nature and degree of predictability of those dispositions vary greatly depending on context: As an individual my personal dispositions are somewhat predictable — will I have my usual “black tea and two slices of rye toast” this morning, or will I go for a strong coffee? Still, even those personal dispositions are influenced by — and largely dependent on — the environment I am in: How long and how well did I sleep? Am I at home, or at a hotel where I can be tempted by a fry-up?

On the other hand, my individual ability to trigger coherence (narrowly defined as: a group agreeing to act in a certain way) decreases rapidly — and the time to achieve it grows rapidly — with the size of the group I’m engaged with. For example, it’s usually faster for me to get my immediate family (Dunbar’s “support group”, to be precise) to agree on what we are going to cook for Sunday dinner than it is to influence my local community regarding county plans for building a new road close to our house. (Well, at least it tends to be so for some Sunday dinner courses involving fish and my daughters, but I digress…)

While coherence around a certain narrative and evidence are distinct concepts, in a complex environment I consider the two closely intertwined. An understanding of the current state of the system (experience based), coupled with its disposition to evolve in some directions rather than others (see this video on the Probable — Possible — Plausible scale), constitutes a starting hypothesis. Based on this starting hypothesis, evidence is gathered — or, rather, manifests itself — as coherence (positive response) to our probing actions.

Important to note here is that those actions take place in a Complex environment. In other words, those actions are sense-making probes — multiple safe-to-fail experiments running in parallel, of which at least half should fail. The latter point is very important since it provides critical information on where the boundary is, i.e. what constraints are at play and how — and whether — we can push against them (relax) or, respectively, enforce them. See also below.

In these contexts (Complex domains) the process of gathering evidence is an expression of the distributed act of sense-making: Emergent coherence around a certain narrative manifests as a positive response to our probing actions. The evidence (coherence) co-evolves with — and thus informs and guides — further sense-making via distributed probing actions (safe-to-fail, low-cost experiments).

Concretely, in my area of activity (IT engineering) the process is rather rapid due to the large degree of codification at play (à la Boisot): The large majority of customers (see note (*) below) have very similar sets of technologies, often used in similar ways, ergo similar sets of needs. In response to early signals (i.e. interest manifested by early adopters and outliers (*)) we initiate sense-making probes (experiments). We nurture those that seem promising, all while looking for coherence (adoption) in other similar — but independent! — contexts, and quickly incorporate their sense-making feedback to guide further probing. As the reaction to those early experiments becomes similar (ergo emergent coherence), we foster group ties and feed this virtuous cycle in a positive-feedback fashion: As momentum gathers around it, our understanding is further refined into evidence that feeds the cycle via better-informed probing.
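To make that loop a bit more tangible, here is a minimal toy sketch in Python. It is my own illustration, not anything from Dave’s article, and every probe name, context and threshold in it is made up: a portfolio of parallel safe-to-fail probes, each collecting adoption signals from independent contexts; probes showing coherence across contexts get amplified, the rest get dampened, and we check that enough of the portfolio is actually failing.

```python
# Toy sketch only: a portfolio of safe-to-fail probes, each gathering adoption
# signals from independent contexts. Probes that cohere across contexts get
# amplified, the rest dampened, with a sanity check that enough probes fail.
from dataclasses import dataclass, field


@dataclass
class Probe:
    name: str
    # adoption signal per independent context: 0.0 (no interest) .. 1.0 (strong pull)
    signals: dict[str, float] = field(default_factory=dict)

    def coherence(self, threshold: float = 0.6) -> float:
        """Fraction of independent contexts responding positively."""
        if not self.signals:
            return 0.0
        positive = sum(1 for s in self.signals.values() if s >= threshold)
        return positive / len(self.signals)


def review_portfolio(probes: list[Probe], min_failure_ratio: float = 0.5):
    """Split the portfolio into probes to amplify and probes to dampen."""
    amplify, dampen = [], []
    for p in probes:
        (amplify if p.coherence() >= 0.5 else dampen).append(p.name)
    failure_ratio = len(dampen) / len(probes)
    if failure_ratio < min_failure_ratio:
        # Too few failures: we are probably probing too timidly, or converging prematurely.
        print(f"warning: only {failure_ratio:.0%} of probes failing - widen the portfolio")
    return amplify, dampen


if __name__ == "__main__":
    # Entirely hypothetical probes and contexts, for illustration only.
    portfolio = [
        Probe("self-service-api", {"team-a": 0.8, "team-b": 0.7, "team-c": 0.9}),
        Probe("new-dashboard",    {"team-a": 0.2, "team-b": 0.3, "team-c": 0.1}),
        Probe("config-as-code",   {"team-a": 0.9, "team-b": 0.1, "team-c": 0.2}),
    ]
    amplify, dampen = review_portfolio(portfolio)
    print("amplify:", amplify)
    print("dampen: ", dampen)
```

Real coherence is obviously a far richer judgement than a numeric threshold; the point is merely the shape of the loop: many probes in parallel, independent contexts, amplify or dampen, and a deliberate tolerance for failure.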

Now, as the old adage goes, “there be dragons”: As clearly stated in this other post by Dave, there is danger in blindly following the mantra “OK, we got lucky. This seemed to have worked, let’s do some more of it”. In addition to the caveats Dave lists in his post, here are some from my own experience:

  •  Pay close attention to the “near-field outliers” and watch “the side-view” carefully: Informing and influencing the initial traction (ergo evidence) are what I call the “near-field outliers”. These are the cases where the reaction to our sense-making probe is “Yes, this is/would be great – but only if...”. Or: “Oh, that’s really nice, but how about ...”. Actively exploring — via safe-to-fail, co-evolutionary experiments — those lateral dispositions and testing them in terms of group coherence (how much they resonate in other, independent contexts) is crucial. It provides two things: a) Evidence guiding subsequent probing actions (in the sense above); b) A sense of the encompassing constraints, which either have to be pushed against (if deemed undesirable) or enforced (if we consider them desirable or, at the very least, benign).
  • Beware of premature convergence: This is a real danger, and a trap we fall into far too often. Particularly in the IT engineering domain, with its low-cost experiments and fast iterations all too readily available, it is very tempting, in the name of “driving traction” (aka “market adoption”), to latch onto the so-called early adopters and drive adoption in whatever direction we deem appropriate at a given time. It is at this very point where a careful balance has to be struck between actively probing in multiple, contradictory directions looking for emergent coherence and becoming prescriptive — a perverted “discovery process”, often manifested as a set of leading questions, all with pre-defined answers and all pointing to the same conclusion. Unfortunately this form of intellectual arrogance is far too common in IT engineering — not seldom overlaid on an existing set of prejudices, on both sides. While efficient in enforcing convergence to a pre-defined set of conclusions and next steps — all in the name of predictability, of course — I do not consider this genuine evidence, as the coherence is enforced rather than emergent (i.e. order is externally imposed). That this is most often used simply to reassure and reinforce the existing status quo and power structures only adds insult to injury.

At this point an important note is in order: In all of the above narrative, coherence (ergo adoption) was used as a proxy for evidence. As Dave notes in his post, there is a fundamental difference between evidence and proof. The difference ultimately depends on the cognitive domain at hand and, respectively, on our cognitive biases. As this post pertains to the Complex domain (i.e. situations and contexts with a large degree of complexity), evidence cannot be used to demonstrate the existence of a causal chain or be used in any predictive fashion. In a Complex domain cause and effect can only be analysed a posteriori, and experiment repeatability and prediction are nonsensical. That is why we need to carefully analyse the near-field outliers: In good Popperian tradition, all evidence — acting either as a starting assumption or a working theory — should be falsifiable. That is why, as per above, we need to make sure at least half of our experiments “fail” and, respectively, analyse and probe the near-field outliers to refine or invalidate (falsify) the evidence at hand.

Moreover, in the Complex domain the concept of goals and metrics (the definition of success) has to walk a fine line between being specific enough to provide an actionable measure of progress (**) and, respectively, being ambiguous enough to allow lateral evolution and the serendipitous discovery of adjacent evolutionary possibilities. As those are explored and sense-making becomes more refined, this can be captured in more focused, quantitative metrics — hence a more precise definition of success. It is at this point that we transition from Complex to Complicated, from exploration to exploitation, from prototype to product. It is here that causal chains stabilize and become predictable — or, most dangerously, are assumed to be so — and the notion of evidence changes into proof. The existence of this dominant narrative, the onset of groupthink under the assumption of causality, carries serious risks of its own — some of which were already mentioned above. However, they are outside the scope of this post.


(*) Note the deliberate omission of any discussion of “far-outliers”. A topic in itself, those fringes are the realm of instability, opportunity — or, often, both. Depending on their number and group coherence (i.e. coherence among themselves) they can either be regarded as individual, external reference points to be used to measure “progress” or, if manifesting group coherence, be tested for larger collective coherence — hence alternative, contradictory evidence, as per this article.

(**) Just after finishing this article I realised Dave had posted a follow-up.


Yet another not-so-humble blog

‘Cause that’s exactly what the world needs

In the past I was challenged, repeatedly, to start a blog as a placeholder for my pseudo-random rants. And, in the great spirit of Internet collaboration — or, in the words of the living legend that is Geoffrey A. Moore, Systems of Engagement — I steadfastly refused to do so.

Until now.

So, scary as it may be to get out of the cosy comfort zone of making snarky comments in 140 characters or less on Twatter — which resulted in me being knighted with the title of “Sir Florian the Acerbic” by the inimitable Steve Chambers, a title which I’m dubiously proud of — here it is. A blog. My blog. With sentences and phrases. And, hopefully, ideas that go beyond sarcastic quips.

So, what can you expect from this blog?  Well, quite simply, some stuff I’m interested in. Which is, as of today:

  • Scale-out IT infrastructures. Right now I’m particularly interested in OpenStack.

That’s pretty much it really.

Oh, and last but not least: The title of the blog should be self-descriptive. The content will be either in reaction to — or striving to trigger — a “three letter moment”. On that, you can choose your favourite TLA, FFS…

/Florian

P.S. Oh, and I beg your patience as I’m trying to get to grips with the looks and formatting. Not entirely happy with this first iteration, but it will have to do for now. Suggestions welcome.