Why Bother With a CI Culture Assessment System?

by Chris Butterworth, Morgan Jones, and Peter Hines

“Why Bother?” is deliberately intended to be a thought-provoking title. It also reflects the authors’ experiences with some of the struggles that organizations go through to get real value from an assessment system. One of the most common reactions we see can be summarized as, “Please, not another audit from HQ!”

CI leaders often find themselves speaking a different language from their colleagues, and they often resort to visible boards or tools to demonstrate progress. For example, “We have X percent of our teams using a visual management board.” In reality, measures like this do not tell us how effective our CI culture is, nor do they tell us whether it has any chance of sustainability. What is needed is a common language that helps CI leaders and everyone else across an organization understand the current level of CI culture and what they can do next to realize even more improvements.

What’s in a Name?

One of the challenges in suggesting an assessment is that many people immediately perceive it as just another audit that is going to suck up valuable time, create jobs for the head office, and deliver nothing in return. Unfortunately, that perception is often accurate. The starting point when designing an assessment system is therefore to be very clear on its purpose.

So, let’s state right off what an assessment is not. It is not an audit.

A common dictionary definition of an audit is to “make an official examination of the accounts of a business and produce a report.”[1] An audit is often seen as a negative thing. We see organizations preparing for an audit because they do not want to “fail” or “get caught out.” Some organizations spend weeks preparing for the annual audit, and auditors are often viewed as someone to hide things from. “Don’t tell the auditors,” is whispered in many coffee shop conversations and often becomes a light-hearted (but revealing) refrain if anything goes wrong.

While many would argue that this perception is incorrect, it remains a prevailing attitude in many organizations. The CI community has a lot to answer for here, and we would be the first to say that we have hopefully learned a lot from our past mistakes. There is a need to justify the jobs, the training, and the investment. Leaders want to know how many people have been trained in which tools: for example, how many managers are Six Sigma Black Belts, or how much money consultants have saved. The CI community has embraced audits as a tool, such as 5S audits, visual management board audits, and so on. The list has become endless and often ever more complex in a mistaken attempt to make the measurement less subjective or more rigorous.

In one example, a large multinational organization changed its simple progress check to a 300-question audit that required hours to complete and lost all value as it deteriorated into a tick-box exercise. To use another example, one improvement program in a large, multi-site retail business was measured entirely on the budget savings made to the operating cost of each store. The program reported massive success with double-figure savings for two consecutive years. The CI leader was promoted and everyone was pleased with the financial results. Unfortunately, in year three, things started to fall apart—quite literally. The biggest savings had come from slashing almost all the preventative maintenance and running the equipment until it burned out. Year three required massive investment in infrastructure, new equipment, and disruption to customers and internal teams, which was a direct result of the cost “improvements” reported.

Quite rightly, finance vice presidents and budget holders want to know if they are getting value for money and a good return on investment. This is good business practice. The CI community’s response to try to justify their existence is understandable. The concern, though, is that a quantitative approach to measuring CI drives undesired behavior and limits the chances of sustainability. Instead, we need to rethink what we are measuring and why we are measuring it.

One of the main reasons many assessment systems measure the wrong thing is a lack of clarity around the purpose of an assessment.

So, what is the purpose of an assessment? This is a valuable discussion to have, and each organization will want to include their own perspective and context. In general, we believe the purpose should be to learn where we are now and what the key steps are that we need to take to get even better.

To try to overcome some of these issues, we have found it necessary to explain to people that the check on progress is an assessment or review and not an audit. This new terminology is still met with skepticism, and always will be, until people see how the results are used. It is not what we do that is important; it is how we do it and how we use the results.

One of the key approaches and important mindsets to have in improvement at any level is Plan, Do, Check, Act (PDCA). This is often used by a team to structure action plans or improvement projects.

A very high-level summary of PDCA is given below. It is not only something that applies to projects but is also a mindset and a way of working.

Planning (P)—involves understanding what the customer values, being clear on the desired outcomes and setting out the key milestones and actions required.

Doing (D)—involves undertaking the elements in the plan.

Checking (C)—involves reviewing if the actions taken in D have been completed and if they have delivered the expected outcome.

Acting (A)—involves incorporating lessons learned from C and conducting either a wider rollout or a redesign that necessitates going back to P.

In reality, any significant program of work will have multiple PDCA cycles and will often have sub-level PDCA cycles within the higher PDCA steps.

Unfortunately, all too often we see a lot of Plan and Do in a repeating cycle with insufficient Check. This has numerous risks associated with it.

How can the organization know if what it is doing is effective without a structured review of the progress being made and the results being delivered? Too often the program delivery becomes the goal and the measures are used to track the implementation of the tools, such as the percentage of teams that have a visual management board or the number of people trained in Lean Six Sigma. These measures tell the organization little about how effective its improvement initiative is or what it needs to do better.

Without the effectiveness check, the risk is that the organization will continue to do the same thing and waste time and effort replicating things that are not delivering the required result. How will the organization understand which activities it should continue doing more of and which need a different approach?

What is most important is not the results themselves, but rather how we achieve them. As such, the assessment system needs to assess and review the behaviors in the organization. This will be discussed in more detail at the Shingo webinar on March 22, 2022.

Note: This article is an extract from the book, Why Bother? Why and How to Assess Your Continuous Improvement Culture by Chris Butterworth, Morgan Jones, and Peter Hines (Taylor and Francis, 2022).

[1] Cambridge Dictionary, Cambridge University Press (2022), s.v. “audit.”


© Copyright 2024 Shingo Institute. All rights reserved.