Just as a recipe designed for four may require some changes if it is called upon to serve 4,000, dramatically scaling up the number of quality metrics being monitored may demand new approaches when deploying SPC.
 
To be sure, the techniques of statistical process control (SPC) have been in use for nearly a century. In the early 1920s, when Walter Shewhart pioneered these tools, they were deployed and maintained with pencil and paper. The benefits of early detection of variation were apparent despite the manually intensive work required. The recipe, so to speak, was straightforward and easy to follow.
 
Today, computers and software are widely used for SPC, but the basic workflow has remained the same: 1) measure something, 2) visualize it on a chart, 3) learn from, and possibly react to, any signals detected. One problem with this workflow is the difficulty of scaling to the large number of metrics required in today’s competitive and sometimes highly regulated environments.
 
In the January 26-27, 2013, weekend edition of The Wall Street Journal, Bill Gates writes about the importance of data collection, harkening back to the innovators of the 19th century, when “measuring tools…allowed inventors to see if their incremental design changes led to improvements.” Gates remarks that he has been struck by “how important measurement is to improving the human condition.” Citing data such as the reduction in polio cases from 350,000 in 1988 to 212 worldwide in 2012, Gates points out how critical measurement is to understanding progress.
 
Those in manufacturing, as well as in healthcare and service industries, know that this is true. Measurement data is fundamental to every process. While there can be no doubt of this, data yields information only when someone is able to respond to it. One can, as the saying goes, be “drowning in data but starved for information.”
 
Imagine any moderately complex manufacturing production line. Think about the number of metrics that might be gathered along the line. Even focusing only on metrics that have a direct impact on quality, the numbers can be staggering. What if there are 300, 500, or even 1,000 metrics that need to be monitored for variation? Is it realistic or practical to deploy 1,000 control charts along this hypothetical line? In most cases, even when computers and software are available, the answer will be no. It is simply not practical. Who would look at these charts? How many workers could be spared to do this SPC work?
 
There are at least two barriers to scaling SPC to these levels: 1) the effort/cost required to set up the charts initially; and 2) the attention burden on the workers who must look at, think about and react to these charts. To truly scale SPC charting efforts, both of these must be addressed.
 
The first issue clearly represents a challenge to any large organization. Setting up hundreds of charts will demand time and effort. The employees doing this work require clear training related to SPC charting, understanding variation, and analyzing charts. Knowledge of product specifications and process parameters is also important. Errors made when setting up these charts will ultimately affect the payback time for the investment in SPC charting.
 
The second issue, the problem of paying attention to the charts, is also difficult to address. In the production settings with which we are familiar, workers are busy. They rarely have free time in which to focus their attention on a chart, engage in analytical thinking, react to it correctly, and then get back to what they were doing.
 
So, we agree that scaling SPC is difficult. Does that mean we should not try? The answer, unfortunately, is “it depends.” There are many production situations where a large-scale SPC charting effort may not yield a payback. However, at today’s ever-faster production rates, there are other cases where large-scale SPC charting can create a return on investment.
 
Think of any production line capable of producing 5,000 pieces in one hour. This is not uncommon; on customer visits, we’ve seen such lines, and others with even higher production rates. Depending on the price of the unit being made, if an hour goes by in which unwanted variation in a key metric goes undetected, the cost of the resulting scrap can be substantial. Hourly sampling plans are common; how much faulty product can be produced in one hour? High-speed production is great, but even when you shorten the sampling interval to, say, 20 minutes, the risk of producing large amounts of scrap cannot be overlooked.
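To make that exposure concrete, here is a back-of-the-envelope sketch in Python. The production rate comes from the example above; the per-unit scrap cost is an assumption for illustration only:

```python
# Worst-case scrap exposure between samples.
rate_per_hour = 5_000   # pieces per hour, from the example above
unit_cost = 1.50        # assumed dollars of scrap per faulty piece

for interval_min in (60, 20, 5):
    units_at_risk = rate_per_hour * interval_min / 60
    print(f"Sampling every {interval_min:>2} min: up to "
          f"{units_at_risk:,.0f} pieces (${units_at_risk * unit_cost:,.2f}) at risk")
```

Even at 20-minute intervals, more than 1,600 pieces can be produced between checks, which is why undetected variation is so costly on high-speed lines.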
 
Here are some thoughts and approaches to overcoming the barriers to scaling your SPC charting efforts.
 
First of all, whenever possible, your data collection efforts should be automated. When your production line is running, product should be measured and tested, with the results flowing into databases. With today’s modern production machines, this is often possible without operator intervention. When manual measurement or testing is required, the sampling rates should be set based on the potential cost of failures. Additionally, manual measurement and testing should be well integrated into a worker’s natural workflow, and the results should flow directly into a database rather than onto paper for later entry.
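As a minimal sketch of what “results flowing into a database” can look like, the following uses Python’s built-in sqlite3 module. The table layout and the metric name are illustrative, not prescriptive; a real deployment would use whatever database and schema your plant systems provide:

```python
import sqlite3
from datetime import datetime, timezone

# Illustrative schema: one row per measurement, straight from the gauge.
conn = sqlite3.connect("measurements.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS measurements (
        metric   TEXT NOT NULL,   -- e.g., 'fill_weight_g' (hypothetical name)
        value    REAL NOT NULL,
        measured TEXT NOT NULL    -- ISO-8601 timestamp
    )
""")

def record(metric: str, value: float) -> None:
    """Write one reading directly to the database -- no paper step."""
    conn.execute(
        "INSERT INTO measurements (metric, value, measured) VALUES (?, ?, ?)",
        (metric, value, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

# A test station (or an operator's entry screen) calls this as each result arrives:
record("fill_weight_g", 251.7)
```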
 
Once data from your process is flowing into a database, software-based control charts of the key metrics should be defined. The employees doing this work need to be familiar with SPC concepts, including control limits, out-of-control testing, and chart type selection. Ideally, each chart should be defined so that every time a viewer opens it, it shows a reasonable window of the most recent, most relevant data, with no further chart configuration needed. The goal for these charts should be “define them once, view them often.”
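A “define once, view often” chart definition might look like the following sketch: limits are fixed from an in-control baseline period, and only the simplest Shewhart test is applied. For brevity it estimates sigma with the sample standard deviation; an individuals chart would normally estimate it from the average moving range, and real SPC software offers many more chart types and run rules:

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class ControlChart:
    """One chart, defined once: limits are frozen from baseline data."""
    name: str
    center: float
    lcl: float
    ucl: float

    def out_of_control(self, value: float) -> bool:
        """Simplest Shewhart test: a single point beyond a control limit."""
        return value < self.lcl or value > self.ucl

def define_chart(name: str, baseline: list[float]) -> ControlChart:
    # Baseline must come from a period when the process was predictable.
    m, s = mean(baseline), stdev(baseline)
    return ControlChart(name, center=m, lcl=m - 3 * s, ucl=m + 3 * s)
```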
 
Defining these charts can be time consuming. Think about this workflow and ways to make it more efficient. For example, have “template charts” pre-defined. When a new chart is needed, rather than starting from scratch, use a template, give it a new name and change only the settings that are different. Creating and using meaningful naming conventions is important. Since the number of charts can be large, good naming conventions can reduce the cognitive burden on employees who commonly look for and view the charts.
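Continuing the sketch above, one way to implement template charts is to keep the shared settings in a frozen template and override only what differs for each new chart. The line-station-metric naming scheme shown is hypothetical; the point is that a consistent convention makes hundreds of charts findable:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ChartTemplate:
    """Settings defined once and reused for every new chart."""
    chart_type: str = "individuals"
    points_shown: int = 50              # recent, relevant window
    rules: tuple = ("beyond_3_sigma",)  # out-of-control tests to apply
    name: str = ""

fill_template = ChartTemplate()

# New charts start from the template; change only what differs.
chart_a = replace(fill_template, name="LINE1-FILLER-fill_weight_g")
chart_b = replace(fill_template, name="LINE1-CAPPER-torque_Nm", points_shown=100)
```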
 
Next, what about looking at all these charts? Here is where new technology and new thinking about the SPC charting workflow might really improve the scalability of SPC. 
 
What is the reason for looking at a control chart? To see a signal. Once a control chart is properly set up, you should not expect to see many signals: the limits were established when the process was running in a predictable, in-control state.
 
What we propose is that operators see control charts only when a signal is detected, preferably by software. There might be several hundred control charts defined. However, the appropriate operator is shown a control chart only when a signal is detected by some other agent. The agent in this case is the SPC software. While operators go about their work, the software is patiently watching data flow into the database, applying SPC thinking to this data, then raising a signal for operator intervention only when needed. 
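Continuing the earlier chart sketch, the patient agent can be as simple as a loop that checks each new reading against each chart’s limits and interrupts an operator only when a rule fires. Here fetch_new_values and notify are stand-ins for whatever your SPC software actually provides for reading the database and alerting people:

```python
import time

def watch(charts, fetch_new_values, notify):
    """Background agent: scan every chart, surface only the exceptions.

    fetch_new_values(name) -> readings recorded since the last check (stand-in)
    notify(name, value)    -> however the operator is alerted (stand-in)
    """
    while True:
        for chart in charts:
            for value in fetch_new_values(chart.name):
                # The operator sees this chart only when a signal is detected.
                if chart.out_of_control(value):
                    notify(chart.name, value)
        time.sleep(60)  # polling interval; an event-driven system would push instead
```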
 
There are very real challenges when scaling SPC charting efforts. However, if we are willing to change our paradigm and think carefully about the workflows, these challenges can be overcome, and large-scale SPC charting efforts can pay for themselves by reducing scrap and variation.
 
While measurement tools used to track polio vaccinations might be different from those used in a manufacturing facility, there can be no doubt that tools for understanding data are essential to making sense of, and responding to, critical metrics. Bill Gates asserts that the measurement systems used in Africa to track vaccination efforts “will help us to finish the job of polio eradication within the next six years.” Surely, the measurement tools available to manufacturing facilities to understand data flowing into a database can also have dramatic impacts on outcomes and process improvement.