Corollaries to axioms

Corollary: C1 regarding reality

What: Reality is actual but only accessible through human cognition.

Why: We need to make explicit the notion that models are a representation of reality, not the reality itself.

Corollary: C2 regarding worldviews

What: Systems models are created through the ‘spectacles’ of a set of worldviews.

Why: If models are a representation of our reality and not the reality itself, then we can think of models as filters or spectacles through which we must look to see our reality.

Corollary: C3 regarding understanding

What: We can only control effectively what we understand.

Why: Perhaps this is self-evident, but each of us understands in a different way. That is why we need to collaborate in teams to share our different understandings.

Corollary: C4 regarding embedment

What: Physical systems, whether natural or artificial, are ‘hard’ systems. Both are embedded in ‘soft’ people and social systems. Hard systems are objective, whereas soft systems are subjective and intersubjective.

Why: This aspect of rigour sounds unduly ‘academic’, esoteric and remote from practice. However, it underpins our understanding of uncertainty and the way we learn our way through it, and so is important.

Corollary: C5 regarding functions

What: The purpose of a hard system is a function. The function of an artificial system is decided by us – it is man-made. A model of a natural system helps us understand the behaviour of a part of reality. As a consequence, we may ascribe a function to it.

Why: The function of man-made artefacts is a familiar concept to most of us. However, the function of a natural system can be a source of confusion if we do not explicitly recognise that the function is ascribed by us through our models.

Corollary: C6 regarding fitness for purpose

What: Hard systems are not universally true, i.e. true in all contexts and circumstances. Rather, they are dependably fit for purpose to a degree in a context. Dependability corresponds to our common-sense notion of truth or fact. Statements deduced from dependable models correspond to reality in a particular context or situation.

Why: Models, by their very nature, are partial representations of our reality. Consequently, they are incomplete and only dependable in the context to which they are relevant.

Corollary: C7 regarding duty of care

What: The dependability of a systems model requires the people involved to exercise a proper duty of care: to test the model to an appropriately dependable level based on evidence, to demonstrate sufficient competence and integrity, and to be transparent about their values.

Why: Dependability has to be judged on the basis of testing a model. The tests have to be as searching and rigorous as is appropriate for the problem. Practical rigour requires diligence and a duty of care that leaves no stone unturned, with no sloppy or slipshod thinking.

Corollary: C8 regarding subsidiarity

What: The Principle of Subsidiarity (as set out in the Treaty of Lisbon 2007 [Eur-Lex 2016]) is that systems models should be created at the lowest practical level consistent with delivering their purpose.

Why: The idea here is that decisions should be as local as possible because that is where the problems are best understood.

Corollary: C9 regarding emergence

What: Holons have emergent properties. These are attributes that appear at one or more layers as a result of interactions between holons at lower layers that do not themselves exhibit these attributes.

Why: Emergent properties arise, or come forth, from interdependencies at more detailed layers. They are more common than many people realise. For example, the pressure of a gas is the result of the buzzing around of gas molecules at a lower level of description. The human ability to walk and talk emerges from the co-operation of our many subsystems.
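The gas example above can be sketched in a few lines of code. This is an illustrative sketch only (the numbers and sampling distribution are assumptions, not from the text): no individual molecule has a pressure, yet a pressure-like quantity emerges from the aggregate, via the kinetic-theory relation P = N·m·⟨v²⟩ / (3V).

```python
import random

def emergent_pressure(n_molecules, mass, volume, speed_sampler):
    """Aggregate a pressure-like quantity from individual molecular speeds.

    Each molecule contributes only a speed; 'pressure' exists only at the
    level of the whole population (kinetic theory: P = N*m*<v^2> / (3*V)).
    """
    mean_sq_speed = sum(speed_sampler() ** 2 for _ in range(n_molecules)) / n_molecules
    return n_molecules * mass * mean_sq_speed / (3 * volume)

random.seed(1)
# Hypothetical figures purely for illustration; with so few molecules the
# magnitude is not physically realistic -- the point is the emergence.
p = emergent_pressure(
    n_molecules=10_000,
    mass=4.65e-26,                              # kg, roughly one N2 molecule
    volume=1e-3,                                # m^3
    speed_sampler=lambda: random.gauss(500, 100),  # m/s, assumed distribution
)
print(f"emergent pressure-like quantity: {p:.3e}")
```

The emergent attribute (pressure) is a property of the interactions and statistics of the lower layer, exactly as the corollary describes.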

Corollary: C10 regarding connectivity

What: Connections create relationships and patterns of relationships.

Why: In Chinese thought all things are interconnected. The internet is a web of interconnected computers. The brain is a network of highly interconnected neurones. Our infrastructure is an interconnected network of facilities.

Corollary: C11 regarding stakeholder interests

What: There is an increased chance of success if stakeholder interests are aligned.

Why: Common sense tells us that we are more likely to be successful if we ‘pull together’, as oarsmen do in a boat race. We are more likely to pull together if we have a common purpose.

Corollary: C12 regarding processes

What: Systems models are processes.

Why: If we accept that change is ubiquitous, then everything is a process. Why is this helpful? Because it shifts our focus and leads to a new understanding of change. It provides us with a means of integrating many ideas and enables us to create simplicity in complexity. Unsurprisingly, perhaps, many people find it hard to think of a table as a process, since they cannot reject the idea that it is a thing composed of ‘stuff’ – such as wood. It may help to think about the life cycle of the table, from raw material, through design and making, to usage, maintenance and disposal, to see that the table is constantly being and becoming. Everything exists in the process of time.

Corollary: C13 regarding feedback

What: Processes may be loopy involving feedback and feedforward.

Why: Most engineers are familiar with the ideas of feedback and feedforward in hard systems. They apply equally in soft systems, where they are often called loops of influence.

Corollary: C14 regarding leadership

What: Managing a process to a desirable outcome requires appropriate leadership and collaborative learning.

Why: Traditional learning is something we do to acquire knowledge that may be useful to us in some way. We tend to think via a prescribed framework, which promotes a strong distinction between the academic and the vocational and so devalues practical wisdom. To change, people need vision. Leadership is about engaging with that vision, then building and coaching teams to achieve it – and it applies at all layers.

Corollary: C15 regarding outcomes

What: Unexpected and unintended changes may result in future consequences that may be opportunities to create benefit, or hazards that threaten damage.

Why: We must protect ourselves from the harmful effects of unintended consequences; that is why we need to be alert to the possibility of ‘incubating failure’. Just as importantly, we must take advantage of possible benefits from unintended consequences – they lead to new opportunities and genuine innovation.

Corollary: C16 regarding the six ‘honest serving men’

What: Attributes of processes can be classified into the categories of why, how, who, what, where and when. ‘Why’ expresses the purpose which drives the ‘how’ of the methods, transformations and procedures of change in the descriptors and measures of people (who), performance indicators and systems variables including impedance (what), contextual influences (where) and measures of time (when). One way of expressing this is ‘why = how (who, what, where, when)’.

Why: Rudyard Kipling’s six honest serving men are generic. They provide the means to capture, model, control and improve processes in systems.
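The ‘why = how (who, what, where, when)’ pattern can be captured as a simple record. This is a sketch of one possible encoding, not the author’s own notation in code; all the field values below are hypothetical examples.

```python
from dataclasses import dataclass

@dataclass
class Process:
    """One way of recording the six attributes of a process."""
    why: str    # purpose driving the change
    how: str    # methods, transformations and procedures
    who: str    # people involved
    what: str   # performance indicators and system variables
    where: str  # contextual influences
    when: str   # measures of time

    def describe(self):
        # Renders the pattern 'why = how(who, what, where, when)'.
        return (f"{self.why} = {self.how}"
                f"({self.who}, {self.what}, {self.where}, {self.when})")

p = Process(
    why="keep the bridge safe",
    how="periodic inspection",
    who="inspection team",
    what="crack widths and deflections",
    where="main span",
    when="every two years",
)
print(p.describe())
```

Writing a process down in this form forces each of Kipling’s six questions to be answered explicitly, which is how the corollary says processes are captured and improved.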

Corollary: C17 regarding ‘trade-offs’

What: Trade-off decisions may be required when two or more output variables are negatively related – for example, the trade-off between lower NOx and lower CO2 emissions in the exhaust gas recirculation of a diesel engine. A balance of disadvantages may have to be struck. For complex systems, the balance between the multiplicity of variables becomes even more difficult.

Why: Axiom 5 states that complex systems often cannot be ‘solved’; rather, they have to be managed to desirable outcomes. One of the means of managing trade-offs is evolutionary learning, recognising that many trade-offs are non-linear and that step changes may be created by innovation.
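One common first step in a trade-off of this kind can be sketched in code. This is an illustrative technique (Pareto filtering), not a method named in the text, and the (NOx, CO2) figures below are hypothetical: when two outputs are negatively related, no single design is best on both, so we first discard every design that is dominated – worse or equal on both objectives – and then strike the balance of disadvantages among the rest.

```python
def pareto_front(candidates):
    """Keep the candidates not dominated on both objectives (lower is better).

    A candidate is dominated if some other candidate is at least as good on
    both objectives; only the non-dominated set embodies genuine trade-offs.
    """
    front = []
    for a in candidates:
        dominated = any(
            b[0] <= a[0] and b[1] <= a[1] and b != a
            for b in candidates
        )
        if not dominated:
            front.append(a)
    return front

# Hypothetical (NOx, CO2) pairs for six candidate designs.
designs = [(5, 9), (3, 7), (6, 4), (2, 8), (4, 5), (7, 3)]
front = pareto_front(designs)
print(front)  # (5, 9) is dominated by (3, 7) and drops out
```

The final choice among the remaining candidates is then a value judgement to be managed, as the corollary says, rather than a problem to be ‘solved’.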
