Industry Frameworks
Description
This culminating section critically examines the IT management frameworks, which can be seen as structured approaches to many of the concerns discussed in Context III: coordination, processes, investment management, projects, and organizational structures.
Industry frameworks and bodies of knowledge play a powerful role in shaping organizational structures and their communication interfaces, and in creating a base of people with consistent skills to fill the resulting roles. While there is much of value in the frameworks, they may lead you into the planning fallacy or defined process traps. Too often, they assume that variation is the enemy, and they do not provide enough support for the alternative approach of empirical process control. At the time of publication, the frameworks are challenged on many fronts by Agile, Lean, and DevOps approaches.
Defining Frameworks
There are other usages of the term “framework”, notably software frameworks; the process and management frameworks discussed here are non-technical.
So, what is a “framework”? The term “framework”, in the context of business process, is used for comprehensive and systematic representations of the concerns of a professional practice. In general, an industry framework is a structured artifact that seeks to articulate a professional consensus regarding a domain of practice. The intent is usually that the guidance be mutually exclusive and collectively exhaustive within the domain, so that people knowledgeable in the framework have a broad understanding of domain concerns.
The first goal of any framework, for a given conceptual space, is to provide a “map” of its components and their relationships. Doing this serves a variety of goals:
- Develop and support professional consensus in the business area
- Support training and orientation of professionals new to the area (or its finer points)
- Support governance and control activities related to the area (more on this in Governance, Risk, Security, and Compliance)
Many frameworks have emerged in the IT space, with broader and narrower domains of concern. Some are owned by non-profit standards bodies; others are commercial. We will focus on five in this document. In roughly chronological order, they are:
Both ITIL and COBIT have recently released new versions (COBIT 2019, ITIL 4) which respond in some measure to the challenges noted above. However, since much current industry practice still reflects earlier versions, the discussion here will remain relevant for the foreseeable future.
Observations on the Frameworks
In terms of the new digital delivery approaches, there are a number of issues and concerns with the frameworks:
- The misuse of statistical process control
- Local optimization temptation
- Lack of execution model
- Proliferation of secondary artifacts, compounded by batch-orientation
- Confusion of process definition
The Misuse of Statistical Process Control
Some frameworks, notably the original Capability Maturity Model (CMM), emphasize statistical process control. However, as we discussed in the previous section, process control theorists see creative, knowledge-intensive processes as requiring empirical control. Statistical process control applied to software has therefore been criticized as inappropriate [Raczynski & Curtis 2008].
In CMM terms, empirical process control starts with measurement and proceeds immediately to optimization (adjustment). As Martin Fowler notes, “a process can still be controlled even if it cannot be defined” [Schwaber & Beedle 2002]. Creative, knowledge-intensive processes need not – and cannot – be fully defined. It is therefore highly questionable to assume that process optimization is something done only at the highest levels of maturity.
This runs against much current thinking and practice, especially that deriving from Lean philosophy, in which processes are seen as always under improvement. (See discussion of Toyota Kata.) All definition, measurement, and control must serve that end.
PMBOK suggests that “control charts may also be used to monitor cost and schedule variances, volume, and frequency of scope changes, or other management results to help determine if the project management processes are in control” [PMBOK 2013]. This also contradicts the insights of empirical process control, unless the project were itself a fully defined process – unlikely, from a process control perspective.
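To make the distinction concrete, the sketch below shows the Shewhart control chart logic that SPC presumes: a stable, repetitive process whose baseline variation defines control limits, against which later observations are judged. The data are hypothetical; the point is that this logic assumes a defined, stable process, which is exactly what creative software work is not.

```python
import statistics

# Hypothetical: defect counts from a stable baseline period define the
# control limits (mean +/- 3 sigma); later observations are monitored
# against them. This is the classic Shewhart control chart logic.
baseline = [4, 5, 3, 6, 4, 5, 4, 5, 4, 6]
mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl = mean + 3 * sigma            # upper control limit
lcl = max(0.0, mean - 3 * sigma)  # lower control limit; counts cannot go negative

for week, value in enumerate([5, 4, 12, 5], start=1):
    status = "in control" if lcl <= value <= ucl else "OUT OF CONTROL"
    print(f"week {week}: {value} defects ({status})")
```

The technique is sound for repetitive, manufacturing-style work; the critique above is that applying it to inherently variable knowledge work risks treating legitimate variation as a defect to be eliminated.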
Local Optimization Temptation
IT capability frameworks can be harmful if they lead to fragmentation of improvement effort and lack of focus on the flow of IT value.
The digital delivery system at scale is a complex sociotechnical system, including people, process, and technology. Frameworks help in understanding it, by breaking it down into component parts in various ways. This is all well and good, but the danger of reductionism emerges.
There are various definitions of “reductionism”. This discussion reflects one of the more basic versions.
A reductionist view implies that a system is nothing but the sum of its parts. Therefore, if each of the parts is attended to, the system will also function well.
This can lead to a compulsive desire to do “all” of a framework: if ITIL 2011 calls for 25 processes, then a large, mature organization by definition should be good at all of them. But the 25 processes (and dozens more sub-processes and activities) called for by ITIL 2011,[1] or the 32 called for in COBIT 5, are somewhat arbitrary divisions, and they overlap with each other. Furthermore, many digital organizations do not use a full framework-based process portfolio and yet deliver value as effectively as organizations that apply the frameworks more comprehensively.
The temptation toward local, process-level optimization runs counter to core principles of Lean and systems thinking. Many management thinkers, including W. Edwards Deming and Eli Goldratt, have emphasized the dangers of local optimization and the need to take a systems view.
As this document’s structure suggests, delivering IT value requires different approaches at different scales. There is recognition of this among framework practitioners; however, the frameworks themselves provide insufficient guidance on how they scale up and down.
Lack of Execution Model
It is also questionable whether even the largest actual IT organizations on the planet could implement the full scope of the process-based frameworks. Specifying too many interacting processes has its own complications. Consider that both ITIL 2011 and COBIT devote considerable attention to documenting possible process inputs and outputs. As part of every process definition, ITIL 2011 had a section entitled “triggers, inputs, outputs, and interfaces”. The “Service Level Management” process [ITIL 2011b], for example, lists:
- Seven triggers (e.g., “service breaches”)
- Ten inputs (e.g., “customer feedback”)
- Ten outputs (e.g., “reports on OLAs”)
- Seven interfaces (e.g., “supplier management”)
COBIT similarly details process inputs and outputs. In the Enabling Processes guidance, each management practice suggests inputs and outputs. For example, the APO08 process “Manage Relationships” has an activity of “Provide input to the continual improvement of services”, with:
- Six inputs
- Two outputs
But processes do not run themselves. These process inputs and outputs require staff attention. They imply queues and therefore work-in-process, often invisible. They impose a demand on the system, and each hand-off represents transactional friction. Some hand-offs may be implemented within the context of an IT management suite; others may require procedural standards, which themselves need to be created and maintained. The industry currently lacks understanding of how feasible such fully elaborated frameworks are in terms of the time, effort, and organizational structure they imply.
We have discussed the issue of overburden previously. Too many organizations have contending execution models, in which projects, processes, and miscellaneous work all compete for people’s attention. In such environments, overburden and wasteful multi-tasking can reach crisis levels. With ITIL in particular, which does not cover project management or architecture, we have a very large set of potential process interactions that is nevertheless incomplete. (It should be noted that ITIL 4 now terms its primary concerns “practices”, not “processes” – a notable shift.)
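A back-of-the-envelope calculation, assuming only simple pairwise interactions, illustrates the scale of the problem: with n processes, the number of potential interfaces grows quadratically.

```python
def potential_interfaces(n: int) -> int:
    """Potential pairwise interfaces among n interacting processes: n(n-1)/2."""
    return n * (n - 1) // 2

for label, n in [("ITIL 2011", 25), ("COBIT 5", 32)]:
    print(f"{label}: {n} processes -> up to {potential_interfaces(n)} pairwise interfaces")
```

Not every pair actually interacts, and real interactions are richer than simple pairs; the point is only that each added process expands the coordination surface that staff, queues, and tooling must cover.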
Secondary Artifacts, Compounded by Batch-Orientation
The process hand-offs also imply that artifacts (documents of various sorts, models, software, etc.) are being created and transferred between teams, or at least between roles on the same team, with some degree of formality. Primary artifacts are executable software and any additional content intended directly for value delivery; secondary artifacts are anything else.
An examination of the ITIL and COBIT process interactions shows that many of the artifacts are secondary concepts such as “plans”, “designs”, or “reports”:
- Design specifications (high-level and detailed)
- Operation and use plan
- Performance reports
- Action plans
- Consideration and approval
and so on. (Note that executable artifacts, such as source code, are not included here.)
Again, artifacts do not create themselves. Dozens of artifacts are called for in the process frameworks. Every artifact implies:
- Some template or known technique for creating it
- People trained in its creation and interpretation
- Some capability to store, version, and transmit it
Unstructured artifacts such as plans, designs, and reports impose high cognitive load and are difficult to automate. As digital organizations automate their pipelines, it becomes essential to identify the key events and elements such artifacts represent, so that these can be embedded into the automation layer.
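As a sketch of what embedding such elements into the automation layer might look like, consider representing a “performance report” not as a document but as a structured event that a pipeline can emit, store, and act on. All type and field names below are illustrative assumptions, not drawn from any framework.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PipelineEvent:
    """Hypothetical structured stand-in for an unstructured 'report' artifact."""
    event_type: str   # e.g., "performance_threshold_breached"
    service: str      # the service the event concerns
    metric: str       # e.g., "p95_latency_ms"
    value: float
    threshold: float
    occurred_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# A machine-readable event can trigger automation directly, where a
# "performance report" document would require a human reader.
event = PipelineEvent(
    event_type="performance_threshold_breached",
    service="checkout",
    metric="p95_latency_ms",
    value=1250.0,
    threshold=800.0,
)
print(event)
```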
Finally, even if a given process framework does not specifically call for waterfall, we can sometimes still see its legacy. For example:
- Calls for thorough, “rigorous” project planning and estimation
- Cautions against “cutting corners”
- “Design specifications” moving through approval pipelines (and following a progression from general to detailed)
All of these tend to signal a large-batch orientation, even in frameworks making some claim of supporting Agile.
Good system design is a complex process. We introduced technical debt in Application Delivery and will revisit it in Architecture and Portfolio. But the slow feedback resulting from the batch processes implied by some frameworks is unacceptable in today’s industry. This is in part why new approaches are being adopted.
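The cost of large batches can be made concrete with a simple model: if work items are produced sequentially but validated only when the whole batch completes, the average delay between finishing an item and receiving feedback on it grows linearly with batch size. A minimal sketch, assuming one day per item:

```python
def average_feedback_delay(batch_size: int, days_per_item: float = 1.0) -> float:
    """Mean wait between completing an item and batch-end validation,
    assuming items are produced sequentially and validated together."""
    waits = [(batch_size - 1 - i) * days_per_item for i in range(batch_size)]
    return sum(waits) / batch_size

for size in (1, 5, 20, 100):
    print(f"batch of {size:>3}: average feedback delay = {average_feedback_delay(size):.1f} days")
```

The average works out to (batch_size - 1) / 2 days: single-piece flow gets feedback almost immediately, while a 100-item batch waits roughly seven weeks on average.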
Confusion of Process Definition
One final issue with the “process” frameworks is that, while they use the word “process” prominently, they are not aligned with BPM best practices [Betz 2011b].
All of these frameworks provide useful descriptions of major ongoing capabilities and practices that the large IT organization must perform. But in terms of our preceding discussion of process methods, they are in general developed from the perspective of steady-state functions, as opposed to a value stream or defined-process perspective.
The BPM community is clear that processes are countable and event-driven [Sharp & Patrick 2008]. Naming them with a strong, active verb is seen as essential. “True” IT processes, therefore, might include:
- Accept Demand
- Deliver Release
- Complete Change
- Resolve Incident
- Improve Service
However, on reviewing ITIL, a BPM consultant would see the “process” called “Capacity Management” and observe that it is neither countable nor event-driven. “How many capacities did you do today?” is not, for the most part, a sensible question.
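The distinction can be made concrete: an event-driven process such as “Resolve Incident” has countable instances, each with a start and an end, whereas “Capacity Management” describes an ongoing function with no natural instance boundary. A minimal sketch, with all names and sample data assumed for illustration:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class IncidentResolution:
    """One countable instance of the event-driven process 'Resolve Incident'."""
    incident_id: str
    opened_at: datetime
    resolved_at: Optional[datetime] = None  # set when this instance completes

today = [
    IncidentResolution("INC-101", datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 1, 11, 30)),
    IncidentResolution("INC-102", datetime(2023, 5, 1, 10, 15), datetime(2023, 5, 1, 10, 45)),
    IncidentResolution("INC-103", datetime(2023, 5, 1, 14, 0)),  # still open
]

# "How many incidents did you resolve today?" has a direct answer:
print(sum(1 for r in today if r.resolved_at is not None), "resolved")
# "Capacity Management" has no equivalent: it is a continuous function
# with no natural instance boundary, so there is nothing to count.
```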
Evidence of Notability
The major frameworks have had an enormous influence on digital and IT management. They drive many of the basic assumptions encountered in digital management and IT practices. Consultancies and training organizations monetize them; auditors assess organizations against their “best practices”.
Limitations
Frameworks struggle to strike a balance between two extremes: being too specific and prescriptive versus being too abstract and theoretical. In the digitally transforming economy, informed by Agile practices, they seem to offer simple cookbook recipes for increasingly dynamic and complex problems. Is this inherent to any framework? Can a new framework overcome these issues? That, in part, is the motivation for this document.
Related Topics