By John Hunter, author of the Curious Cat Management Improvement Blog (since 2004).
From Dr. Deming’s last interview; published in Industry Week magazine.
The results of a system must be managed by paying attention to the entire system. When we optimize sub-components of the system we don’t necessarily optimize the overall system. This is true when looking at the people, as Dr. Deming mentions. It is also true when optimizing, say, one department or one process.
Optimizing the results for one process is not the same as operating that process in the way that leads to the most benefit for the overall system.
It is a lot easier, within an organization that doesn’t view itself as a system, to assign responsibility for specific results to specific individuals and components of the organization. That is likely why most organizations manage themselves this way. Even so, they see the risks of such behavior, and so most often there are requirements to consult with those who are impacted.
But most often these efforts to have people cooperate outside of what they are held accountable for are weak, and the primary focus remains on optimizing what they are accountable for. And the organization suffers, even while improving the results of components, because the most significant gains are to be made in managing the organization as a system, not in optimizing components within the system.
The management system will nearly always determine how the individuals working within it manage. The lack of teamwork is not something that the individuals bring to the workplace; that failure to work together is the result of how the organization has been set up. To change behavior, the management system must be changed.
Related: Break Down Barriers Between Departments – Why do you hire dead wood? Or why do you hire live wood and kill it? – Dr. Russell Ackoff Webcast on Systems Thinking – Where There is Fear You Do Not Get Honest Figures – Distorting the System, Distorting the Data or Improving the System
Try looking up the penny game; it illustrates this idea. A team of 5 people has to move 10 pennies (coins) through the system. There are 3 trials.
Trial one: the first person flips each coin over (with one hand). When all ten pennies are flipped, they pass the pennies to the next person. Each person repeats the process until the end. Record the time each person takes, and when the first penny and the last penny arrive at the last person. The last person is the customer and does not do any flipping; they record when they receive the first and last penny.
Trial two: work in batches of 5 pennies. After flipping 5 pennies, pass them along, then flip the second 5 pennies. Measure each person’s time, and the time at which the last person/customer receives the first and last penny.
Trial three: do it again with batches of 2 pennies. Flip 2, pass them on, flip the next 2, pass them on, and so on until all 10 pennies have been flipped.
Not surprisingly, the entire process is faster in trial 3, since the team is working more in parallel. Same amount of work, but now done in parallel. The surprising result is that each person takes longer to do their individual job. Trial 3 is longer per person, since they have to interrupt the penny flipping to pass the completed work to the next person. In trial one there is only one pass of 10 pennies; in trial two there are two passes of 5 each; and in trial 3 there are 5 passes of 2 pennies each. Looking at the numbers, it doesn’t make intuitive sense. We ran the penny game internally. In our numbers, the customer received the first penny 470% faster and the last penny 222% faster, yet each person took about 21%–44% longer to do their step. The entire process is significantly faster, but the subcomponents are slower.
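The penny game can be sketched as a small pipeline simulation. The cost model here is my own assumption, not the commenter's measurements: each flip takes 1 time unit and each batch handoff costs 2 units of the passer's time.

```python
def penny_game(n_pennies=10, batch=10, workers=4, flip=1.0, handoff=2.0):
    """Simulate the penny game as a pipeline of `workers` flippers.

    Assumed (illustrative) cost model: each flip takes `flip` time
    units, and passing a batch to the next person costs `handoff`
    time units of that person's attention.  Returns (time the first
    penny reaches the customer, time the last penny reaches the
    customer, active time per person).
    """
    n_batches = n_pennies // batch
    # arrival[j] = time at which batch j reaches the current worker;
    # all batches start in front of worker 1 at t = 0.
    arrival = [0.0] * n_batches
    for _ in range(workers):
        free = 0.0                 # when this worker finishes their previous batch
        done = []
        for t in arrival:
            start = max(t, free)   # must have the batch AND be free
            free = start + batch * flip + handoff
            done.append(free)
        arrival = done             # finished batches move to the next worker
    per_person = n_batches * (batch * flip + handoff)
    return arrival[0], arrival[-1], per_person

for b in (10, 5, 2):
    first, last, busy = penny_game(batch=b)
    print(f"batch {b:2d}: first penny {first}, last penny {last}, "
          f"time per person {busy}")
```

With these made-up costs, shrinking the batch from 10 to 2 gets the first penny to the customer three times sooner (48 → 16) and the last penny sooner too (48 → 32), while each person's own active time grows (12 → 20): the whole speeds up even as every part slows down, matching the result described above.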
To invert this example, if we try to optimize the subcomponents to be faster, we drastically slow down the entire process.
I see this with centralized scheduling groups for hotels and medical exams. The companies are optimizing themselves to be more efficient, but it takes me significantly longer to book an actual reservation or medical appointment.
In the runner example from the previous post, the runners would have to be able to run in parallel. The individual runners can be slower, but the overall time would be faster. 🙂
I can’t figure out what Deming means by “top man”, but clearly it means someone who does not know how to cooperate in order to be effective at his job — so what is he the “top” of?
Every possible example I can think of is absurdly reductive. The 4 fastest runners would produce the fastest relay team, unless they were incapable of passing a baton between themselves, but then they’re not the “top men” at running a relay, are they?
If the job involves cooperation, then you need to hire people who are good at cooperation. That’s true of any component of the work: you should not hire people who are good at nearby skills but not at the actual job you’re hiring for. But I guess that isn’t as catchy and people wouldn’t put photos of me over a bitmap of my words on their webpage.
Remember that Deming was a man of his time – so “top man” would be one whom others thought was the most capable, and in the analogy all of the up, down, and lateral functions would be occupied similarly. The relay is an example of a “like” function with 3 features – grab, run, hand off. So the optimisation strategy would be, as you suggested, to get 4 alike to achieve perfect optimisation. That is difficult in real-world practice. But to use the model as a thought experiment, fix one factor so that they all run at the same speed, but the first two runners have a quick hand-off and a slow grasp, and the later two the opposite. Then changing the order would optimise the system: fast hand to fast grasp and slow hand to slow grasp. The overall system is optimised by reordering. Or training the weaknesses, making individuals slightly slower or faster, would sub-optimise the individual but optimise the whole. Again, it’s an interesting thought experiment.
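The reordering idea in that comment can be checked with a toy model. All numbers and the exchange rule are my assumptions, not the commenter's: every leg takes the same time, and each baton exchange lasts as long as the slower of the giver's hand-off and the taker's grasp.

```python
from itertools import permutations

LEG = 10.0  # every runner runs a leg in the same time (the fixed factor)

# Hypothetical runners: two with a quick hand-off but slow grasp,
# two with the opposite (times in seconds, made up for illustration).
runners = {
    "A": {"hand": 0.1, "grasp": 0.4},
    "B": {"hand": 0.1, "grasp": 0.4},
    "C": {"hand": 0.4, "grasp": 0.1},
    "D": {"hand": 0.4, "grasp": 0.1},
}

def relay_time(order):
    # Assumed rule: an exchange lasts as long as the slower of the
    # giver's hand-off and the taker's grasp.
    total = len(order) * LEG
    for giver, taker in zip(order, order[1:]):
        total += max(runners[giver]["hand"], runners[taker]["grasp"])
    return total

naive = ("A", "B", "C", "D")               # like runners grouped together
best = min(permutations(runners), key=relay_time)
print(naive, round(relay_time(naive), 2))  # 40.9
print(best, round(relay_time(best), 2))    # 40.6
```

In this toy model, alternating quick-hand and quick-grasp runners beats grouping like with like, even though no individual runner got any faster: the system improves purely by reordering, which is the comment's point.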