Guest post by John Hunter, author of Management Matters: Building Enterprise Capability.
The embedded video is a webcast with Rocco Perla and Lloyd Provost: Learning with the Science of Improvement during the COVID-19 Pandemic. The presentation is from April 2020, which is useful to keep in mind given how rapidly the COVID-19 situation was changing. Their explanations of how to use an understanding of variation to inform decision making and to evaluate changes to the system are timeless.
Rocco Perla:
Because we are not using these charts [control charts] we are not using the theory of variation to ground us in the actions we need to make and that is the real important point.
Exactly. Control charts themselves are useful, but the really important issue is using an understanding of variation (and of how variation is affecting the process we are working with) to underpin our thinking and to drive our actions and the evaluation of how successful those actions have been. Control charts are not the point (the “end”); they are the tool that lets us use an understanding of variation in the measured process to drive our thinking, decisions, and evaluation of changes.
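To make the “tool, not the end” point concrete, here is a minimal sketch (my own illustration, not from the webcast) of the arithmetic behind an XmR (individuals) chart, using made-up daily counts; 2.66 is the standard scaling constant applied to the average moving range for individuals charts:

```python
# Minimal XmR (individuals) chart arithmetic on made-up daily counts.
# The limits separate routine (common cause) variation from signals
# of special cause variation that deserve investigation.

data = [52, 48, 55, 51, 49, 53, 47, 50, 80, 54]  # hypothetical daily counts

mean = sum(data) / len(data)

# Average moving range between consecutive observations
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Standard individuals-chart limits: mean +/- 2.66 * average moving range
upper = mean + 2.66 * avg_mr
lower = mean - 2.66 * avg_mr

for day, x in enumerate(data, start=1):
    flag = "  <- special cause signal" if x > upper or x < lower else ""
    print(f"day {day:2d}: {x}{flag}")
print(f"mean = {mean:.1f}, limits = ({lower:.1f}, {upper:.1f})")
```

The chart itself is just this arithmetic; the value comes from acting differently on a special cause signal than on routine variation.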
In the webcast, Rocco mentions this article he wrote, Governors: Read This Before Reopening Your State, which looks at how to use control charts to aid policy-setting decisions.
Lloyd discusses the weaknesses in using COVID-19 death data, as the data is not entirely accurate; it has distortions in it. But other measures, such as COVID-19 active case counts, are even less accurate, with more distortions. This is a good reminder of the challenges of using real-world data. In practice, most data inside your organization is going to be much cleaner, but these real-world data challenges exist and must be considered whenever you are using data to learn and improve.
I find the use of control charts on data from states and nations useful, but I think it is also qualitatively different from the use of a control chart on a process inside an organization. The ability to visualize and appreciate the nature of variation is useful. However, these huge data pools are the result of many systems and processes, with added sources of variation (weather, societal norms, data quality from many different sources, varying economic conditions…); treating them as nearly equivalent to a control chart monitoring a fairly stable process could create confusion.
I think those with a strong internalized understanding of variation won’t be confused. I do worry that those unfamiliar with how to understand variation within a process, and how to interpret that data, could be. Basically, I think using control charts on things like public health and macroeconomic data can be useful, but we should understand that these huge collections of data merge together many systems and processes, and may well hide meaningful signals.
Think about how even separating the data from a night shift and a day shift can reveal clear signals when, with the data merged, no obvious signals can be seen. Then think about viewing public health data for a whole country the size of the USA. Sure, the data for the whole country can be useful. But when you are looking for specific insight into how to improve, and for ways to measure ongoing efforts, such large-scale views are not likely to be the most useful. Even state-wide measures will often combine data from very different situations and can easily hide useful insight. Poorly stratified data leads to mistakes in analysis.
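As a hypothetical sketch of that stratification point (made-up numbers, not from the webcast): the same XmR limits computed on merged day-plus-night data show nothing unusual, while the night shift on its own shows a clear signal.

```python
# Hypothetical stratification example: a signal visible in the night
# shift data disappears when day and night shifts are merged.

def xmr_limits(series):
    """Individuals-chart mean and limits: mean +/- 2.66 * avg moving range."""
    mean = sum(series) / len(series)
    moving_ranges = [abs(b - a) for a, b in zip(series, series[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

day   = [20, 22, 21, 19, 23, 20, 21, 22, 20, 21, 19, 22]  # made-up defect counts
night = [30, 29, 31, 30, 30, 29, 30, 31, 30, 29, 41, 30]  # note the spike on day 11

merged = [d + n for d, n in zip(day, night)]

for label, series in [("merged", merged), ("day", day), ("night", night)]:
    mean, lower, upper = xmr_limits(series)
    signals = [x for x in series if x < lower or x > upper]
    print(f"{label:6s} mean={mean:5.1f} limits=({lower:5.1f}, {upper:5.1f}) signals={signals}")
```

The night shift spike is swamped by the day shift’s variation in the merged series, which is exactly how poorly stratified data leads to mistakes in analysis.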
Related: Process Behavior Charts are the Secret to Understanding the Organization as a System – How to Create a Control Chart for Seasonal or Trending Data – Knowledge of Variation – We Need to Understand Variation to Manage Effectively – Knowledge About Variation – Control Charts in Health Care – Stratify Data to Hone in on Special Causes of Problems
When I saw the New York data, I wondered whether it was missing the nursing home deaths that were being underreported.