Systems Thinking and Feedback

Description

So, what is a system? A system is a set of things – people, cells, molecules, or whatever – interconnected in such a way that they produce their own pattern of behavior over time. The system may be buffeted, constricted, triggered, or driven by outside forces. But the system’s response to these forces is characteristic of itself, and that response is seldom simple in the real world. [Meadows 2009]
— Donella Meadows
Thinking in Systems

Systems thinking, and systems theory, are broad topics extending far beyond IT and the digital profession. Meadows defines a system as: “an interconnected set of elements that is coherently organized in a way that achieves something”. Systems are more than the sum of their parts; each part contributes something to the greater whole, and often the behavior of the greater whole is not obvious from examining the parts of the system.

Systems thinking is an important influence on digital management. Digital systems are complex, and when the computers and software are considered as a combination of the people using them, we have a sociotechnical system. Digital systems management seeks to create, improve, and sustain these systems.

A digital management capability is itself a complex system. While the term “Information Systems (IS)” was widely replaced by “Information Technology (IT)” in the 1990s, do not be fooled. Enterprise IT is a complex sociotechnical system that delivers the digital services supporting a myriad of other complex sociotechnical systems.

The Merriam-Webster dictionary defines a system as: “a regularly interacting or interdependent group of items forming a unified whole”. These interactions and relationships quickly take center stage as the focus moves from individual work to team efforts. Consider that while a two-member team has only one relationship to worry about, a ten-member team has 45, and a 100-member team has 4,950!
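
The arithmetic behind these counts is the standard pairwise combination formula, n(n − 1)/2. A minimal sketch, for illustration only:

    # Pairwise relationships among n team members: n * (n - 1) / 2
    def pairwise_relationships(n: int) -> int:
        return n * (n - 1) // 2

    # 2 members -> 1, 10 members -> 45, 100 members -> 4950
    for size in (2, 10, 100):
        print(size, pairwise_relationships(size))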

A Brief Introduction to Feedback

The harder you push, the harder the system pushes back. [Senge 2006]
— Peter Senge
The Fifth Discipline

As the Senge quote implies, brute force does not scale well within the context of a system. One of the reasons for system stability is feedback. Within the bounds of the system, actions lead to outcomes, which in turn affect future actions. This is a good thing: such feedback is required to keep a complex operation on course.

Feedback is a problematic term. We hear terms like “positive feedback” and “negative feedback” and associate such usage with performance coaching and management discipline. That is not the sense of feedback in this document. The definition of feedback as used in this document is based on engineering and control theory.

Reinforcing Feedback Loop shows the classic depiction of a reinforcing feedback loop.

Figure 1. Reinforcing Feedback Loop

For example, as shown in Reinforcing (Positive?) Feedback, with Rabbits, “rabbit reproduction” can be considered a process with a reinforcing feedback loop.

Figure 2. Reinforcing (Positive?) Feedback, with Rabbits

The more rabbits there are, the faster they reproduce, and the more rabbits result. This is sometimes called a “positive” feedback loop, although the local gardener may not agree. For this reason, feedback experts (e.g., Sterman 2000) prefer to call this “reinforcing” feedback: there is not necessarily anything “positive” about it.
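
A reinforcing loop can also be sketched numerically. The following is a minimal, hypothetical simulation; the starting population and the 10% growth rate are arbitrary assumptions chosen only to show the shape of the behavior, not to model real rabbits:

    # Minimal sketch of reinforcing feedback: growth feeds further growth.
    rabbits = 10.0
    growth_rate = 0.10  # assumed 10% growth per period (illustrative only)

    for period in range(1, 11):
        rabbits += growth_rate * rabbits  # more rabbits -> more births -> more rabbits
        print(f"period {period}: {rabbits:.0f} rabbits")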

We can also consider feedback as the relationship between two processes; see Feedback Between Two Processes.

Figure 3. Feedback Between Two Processes

In the example, what if Process B is fox reproduction, that is, the birth rate of foxes (which eat rabbits)? See Balancing (Negative?) Feedback, with Rabbits and Foxes.

Figure 4. Balancing (Negative?) Feedback, with Rabbits and Foxes

More rabbits mean more foxes (the “+” symbol on the line), because there are more rabbits to eat! But what does this do to the rabbits? It means fewer rabbits (the “-” on the line), which ultimately means fewer foxes, and at some point the populations balance. This is classic negative feedback. However, local gardeners and foxes do not see it as negative. That is why feedback experts prefer to call this “balancing” feedback. Balancing feedback can be an important part of a system’s overall stability.
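
The interaction of the two loops can be sketched in the same way. The toy simulation below is loosely in the spirit of a predator-prey model; every coefficient is an invented assumption, intended only to show reinforcing growth being checked by balancing feedback:

    # Toy sketch: rabbit growth (reinforcing) checked by foxes (balancing).
    rabbits, foxes = 100.0, 10.0
    for period in range(20):
        births     = 0.10 * rabbits           # reinforcing: more rabbits -> more births
        eaten      = 0.002 * rabbits * foxes  # balancing: more foxes -> fewer rabbits
        fox_births = 0.001 * rabbits * foxes  # more rabbits -> more foxes
        fox_deaths = 0.05 * foxes             # foxes decline without enough food
        rabbits += births - eaten
        foxes   += fox_births - fox_deaths
        print(f"period {period}: rabbits={rabbits:.0f}, foxes={foxes:.1f}")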

What Does Systems Thinking Have to Do with IT?

In an engineering sense, positive feedback is often dangerous and a topic of concern. One example of destructive positive feedback in engineering is the London Millennium Bridge. On opening, the Millennium Bridge started to sway alarmingly: resonance caused pedestrians to walk in cadence, and their synchronized steps fed the resonance further. The bridge had to be shut down immediately and retrofitted with $9 million worth of tuned dampers [Cornell 2005].

As with bridges, at a technical level, reinforcing feedback can be a very bad thing in IT systems. In general, any process that is self-amplified without any balancing feedback will eventually consume all available resources, just like rabbits will eat all the food available to them. So, if you create a process (e.g., write and run a computer program) that recursively spawns itself, it will sooner or later crash the computer as it devours memory and CPU; see runaway processes.
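
As a concrete, if contrived, illustration, the sketch below defines a function that keeps “spawning” itself with no balancing condition. Python stops it with a RecursionError rather than letting it take down the machine, but the principle is the same: self-amplification with no limit eventually exhausts a resource:

    # Reinforcing feedback with no balancing loop: unbounded recursion.
    # Python's recursion limit plays the role of the crash (illustrative only).
    def spawn(depth: int = 0) -> None:
        spawn(depth + 1)  # each call spawns another; nothing ever stops it

    try:
        spawn()
    except RecursionError as err:
        print("Resources exhausted:", err)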

Balancing feedback, on the other hand, is critical to making sure you are “staying on track”. Engineers apply concepts from control theory, such as damping, to keep bridges from falling down.

Digital Value covered the user’s value experience, and also how services evolve over time in a lifecycle. In terms of the dual-axis value chain, there are two primary digital value experiences:

  • The value the user derives from the service (e.g., account lookups, or a flawless navigational experience)

  • The value the investor derives from monetizing the product, or comparable incentives (e.g., non-profit missions)

Additionally, the product team derives career value. This becomes more of a factor later in the game. We will discuss this further in Coordination and Process – on organization – and Context IV, on architecture lifecycles and technical debt.

The product team receives feedback from both value experiences. Feedback from day-to-day interactions with the service (e.g., through the help desk and operations) is relatively well understood; the portfolio investor also feeds information back to the product team, typically on a more intermittent basis (the boss’s boss comes for a visit).

Balancing feedback in a business and IT context takes a wide variety of forms:

  • The results of a product test in the marketplace; for example, users’ preference for a drop-down box versus checkboxes on a form

  • The product owner clarifying for developers their user experience vision for the product, based on a demonstration of developer work-in-process

  • The end users calling to tell you the “system is slow” (or down)

  • The product owner or portfolio sponsor calling to tell you they are not satisfied with the system’s value

In short, we see these two basic kinds of feedback:

  • Positive/reinforcing, “do more of that”

  • Negative/balancing, “stop doing that”, “fix that”

The following should be considered:

  • How are you accepting and executing on feedback signals?

  • How is the feedback relationship with investors evolving, in terms of your product direction?

  • How is the feedback relationship with users evolving, in terms of both operational criteria and product direction?

One of the most important concepts related to feedback, and one we will keep returning to, is that product value is discovered through feedback. We have discussed Lean Startup, which represents a feedback loop intended to discover product value. Don Reinertsen has written extensively on the importance of fast feedback to the product discovery process.

Reinforcing Feedback: The Special Case Investors Want

At a business level, there is a special kind of reinforcing feedback that defines the successful business; see The Reinforcing Feedback Businesses Want.

Figure 5. The Reinforcing Feedback Businesses Want

This is reinforcing feedback and positive for most people involved: investors, customers, employees. At some point, if the cycle continues, it will run into balancing feedback:

  • Competition

  • Market saturation

  • Negative externalities (regulation, pollution, etc.)

But those are problems that indicate a level of scale the business wants to have.

Open versus Closed-Loop Systems

Finally, we should talk briefly about open-loop versus closed-loop systems.

  • Open-loop systems have no regulation, no balancing feedback

  • Closed-loop systems have some form of balancing feedback

In navigation terminology, the open-loop attempt to stick to a course without external information (e.g., navigating in the fog, without radar or communications) is known as “dead reckoning”.

A good example of an open-loop system is the children’s game “pin the tail on the donkey”; see Pin the Tail on the Donkey.[1] In “pin the tail on the donkey”, a person has to execute a process (pinning a paper or cloth “tail” onto a poster of a donkey – no live donkeys are involved!) while their eyes are covered, based on their memory of the donkey’s location (and perhaps after being deliberately disoriented by spinning in circles). Since they cannot see, they have to move across the room and pin the tail without the ongoing corrective feedback of their eyes. (Perhaps they are getting feedback from their friends, but perhaps their friends are not reliable.)

Figure 6. Pin the Tail on the Donkey

Without the eye-covering, it would be a closed-loop system. The person would rise from their chair and, through the ongoing feedback of their eyes to their central nervous system, would move towards the donkey and pin the tail in the correct location. In the context of a children’s game, the challenges of open-loop operation may seem obvious, but an important aspect of IT management over the past decades has been the struggle to overcome open-loop practices. Reliance on open-loop practices is arguably an indication of a dysfunctional culture. An IT team that is designing and delivering without sufficient corrective feedback from its stakeholders is an ineffective, open-loop system. Mark Kennaley [Kennaley 2010] applies these principles to software development in much greater depth, and his treatment is recommended.
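
The difference can also be sketched in control terms. In the hypothetical example below, both controllers try to reach a target position while a disturbance pushes them off course; the open-loop version follows a fixed plan (dead reckoning) and cannot correct, while the closed-loop version measures its error each step and adjusts. It is a minimal proportional controller, and every number in it is an assumption for illustration only:

    import random

    # Open-loop vs. closed-loop control: reach position 10.0 despite a
    # disturbance (a "current") that pushes mostly in one direction.
    target, steps = 10.0, 20
    random.seed(1)

    open_pos, closed_pos = 0.0, 0.0
    for _ in range(steps):
        drift = random.uniform(-0.1, 0.5)    # disturbance neither controller predicts
        open_pos += target / steps + drift   # dead reckoning: fixed plan, no correction
        error = target - closed_pos          # closed loop measures its error...
        closed_pos += 0.5 * error + drift    # ...and corrects proportionally each step

    print(f"open-loop ended at   {open_pos:.2f} (target {target})")
    print(f"closed-loop ended at {closed_pos:.2f} (target {target})")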

Engineers of complex systems use feedback techniques extensively. Complex systems do not work without them.

Observe, Orient, Decide, Act (OODA)

After the Korean War, the US Air Force wished to understand why its pilots had outperformed opposing pilots who were flying aircraft viewed as more capable. A colonel named John Boyd was tasked with researching the problem. His conclusions centered on the concept of feedback cycles and how fast humans can execute them. Boyd determined that humans go through a defined process in building their mental model of complex and dynamic situations. This has been formalized in the concept of the OODA loop; see OODA Loop.[2]

Figure 7. OODA Loop

OODA stands for:

  • Observe

  • Orient

  • Decide

  • Act

Because the US fighters were lighter, more maneuverable, and had better visibility, their pilots were able to execute the OODA loop more quickly than their opponents, leading to victory. Boyd and others have extended this concept into various other domains including business strategy. The concept of the OODA feedback loop is frequently mentioned in presentations on Agile methods. Tightening the OODA loop accelerates the discovery of product value and is highly desirable.
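
In software terms, OODA is simply a tight feedback loop, and its cycle time, not any single step, determines how quickly an actor adapts. The sketch below is purely conceptual; the helper functions and the toy “situation” are invented placeholders, not part of Boyd’s work:

    # Conceptual OODA sketch: hypothetical helpers stand in for real sensing,
    # judgment, and action; only the shape of the cycle matters here.
    def observe(env):        return dict(env)                  # gather raw information
    def orient(model, obs):  return {**model, **obs}           # fold it into the mental model
    def decide(model):       return max(model, key=model.get)  # pick the most pressing factor
    def act(env, choice):    return {**env, choice: 0}         # act on it, changing the situation

    env, model = {"altitude": 3, "bandit": 9, "fuel": 5}, {}
    for _ in range(3):                                         # each pass is one OODA cycle
        obs = observe(env)
        model = orient(model, obs)
        choice = decide(model)
        env = act(env, choice)
    print(model)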

The DevOps Consensus as Systems Thinking

We covered continuous delivery and introduced DevOps in Competency Area 3. Systems theory provides us with powerful tools to understand these topics more deeply.

Figure 8. Change versus Stability

One of the assumptions we encounter throughout digital management is the idea that change and stability are opposing forces. In systems terms, we might use a diagram like Change versus Stability. As a Causal Loop Diagram (CLD), it says that change and stability are opposed – the more we have of one, the less we have of the other. This is true as far as it goes: most system issues occur as a consequence of change, and systems that are not changed generally do not crash as much.

Figure 9. Change Vicious Cycle

The trouble with viewing change and stability as diametrically opposed is that change is inevitable. If simple delaying tactics are put in place, they can themselves harm stability, as shown in Change Vicious Cycle. What is this diagram telling us? If the owner of the system tries to prevent change, a larger and larger backlog will accumulate. This usually results in larger and larger-scale attempts to clear the backlog (e.g., large releases or major version updates). These are riskier activities, which increase the likelihood of change failure. When changes fail, the backlog is not cleared and continues to increase, leading to further temptation for even larger changes.
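
The dynamics of the vicious cycle can be sketched as a toy simulation. All the rates below are invented assumptions, chosen only to show the pattern: demand for change arrives steadily, release size grows with the backlog, and larger releases are more likely to fail and leave the backlog uncleared:

    import random

    # Toy model of the change vicious cycle: delaying change grows the backlog,
    # which forces bigger releases, which fail more often (all rates are assumptions).
    random.seed(7)
    backlog = 0
    for quarter in range(1, 9):
        backlog += 30                                  # steady demand for change
        release_size = backlog                         # "big bang": try to clear it all at once
        failure_chance = min(0.9, release_size / 200)  # bigger release, likelier failure
        if random.random() < failure_chance:
            outcome = "FAILED; backlog uncleared"
        else:
            outcome = "succeeded"
            backlog = 0
        print(f"Q{quarter}: release of {release_size} changes {outcome}")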

How do we solve this? Decades of thought and experimentation have resulted in continuous delivery and DevOps, which can be shown in terms of systems thinking in The DevOps Consensus.

Figure 10. The DevOps Consensus

To summarize a complex set of relationships:

  • As change occurs more frequently, it enables smaller change sizes

  • Smaller change sizes are more likely to succeed (as change size goes up, change success likelihood goes down; hence, it is a balancing relationship)

  • As change occurs more frequently, organizational learning happens (change capability); this enables more frequent change to occur, as the organization learns

    • This has been summarized as: “if it hurts, do it more” (Martin Fowler in Duvall 2007).

  • The improved change capability, coupled with the smaller perturbations of smaller changes, together result in improved change success rates

  • Improved change success, in turn, results in improved system stability and availability, even with frequent changes; evidence supporting this de facto theory is emerging across the industry and can be seen in cases presented at the DevOps Enterprise Summit and discussed in The DevOps Handbook [Kim et al. 2016]

Notice the reinforcing feedback loop (the “R” in the looped arrow) between change frequency and change capability. Like all diagrams, this one is incomplete. Just making changes more frequently will not necessarily improve the change capability; a commitment to improving practices such as monitoring, automation, and so on is required, as the organization seeking to release more quickly will discover.
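
A companion sketch of the virtuous loop follows the same toy assumptions as the vicious-cycle sketch above, but releases the work in small batches and lets each completed change slightly improve the organization’s change capability, which in turn raises the success rate. Again, every coefficient is invented for illustration:

    import random

    # Toy model of the DevOps consensus: frequent small changes plus a learning
    # effect (change capability) that improves success rates over time.
    random.seed(7)
    backlog, capability = 0, 0.0
    for quarter in range(1, 9):
        backlog += 30                                   # same steady demand as before
        succeeded = 0
        for _ in range(backlog):                        # release one small change at a time
            failure_chance = max(0.02, 0.10 - capability)
            if random.random() >= failure_chance:
                succeeded += 1
                capability = min(0.08, capability + 0.001)  # learning: "if it hurts, do it more"
        backlog -= succeeded                            # only failed changes remain queued
        print(f"Q{quarter}: {succeeded} small changes succeeded, backlog now {backlog}")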

Evidence of Notability

Discussions of systems thinking, feedback, and OODA occur repeatedly throughout IT and digital management literature; e.g., ITIL’s Service Strategy [ITIL 2011] and The DevOps Handbook [Kim et al. 2016].

Limitations

Systems thinking is an advanced and somewhat theoretical topic, and discussions of it should carefully consider the audience.

Related Topics


1. Image credit https://www.flickr.com/photos/portland_mike/5445434245/, Mike Krzeszak, Flickr, Creative Commons, downloaded November 13, 2016.
2. Image credit https://commons.wikimedia.org/wiki/File:OODA.Boyd.svg, full diagram originally drawn by John Boyd for his briefings on military strategy, fighter pilot strategy, etc. Patrick Edwin Moran author, downloaded April 7, Creative Commons license.