The cost of quality isn’t discussed much. More often, the cost of poor quality hogs the limelight and the headlines. The costs of poor quality are believed to be 100% avoidable, so resources and extra attention are devoted to eliminating them. Conversely, the costs of good quality (prevention and appraisal) are simply accepted as necessary evils, the cost of doing business.

But are they really? At a previous employer, I had an experience that made me question this notion.

As the manufacturing engineering manager, I had a lot of responsibilities, one of which was spearheading our continuous improvement efforts. During one of my gemba walks, I spent some time in one of our higher-volume assembly cells. I was watching, listening and asking lots of questions of the folks in the cell, since I hadn’t spent any real quality time there in a while.

A Closer Look

Overall, the cell ran pretty well. It met its takt time consistently, had minimal downtime, and the associates were happy. Given that most of the assembly was hand work, I paid particular attention to each station’s ergonomics, and those looked good as well. As I was asking questions, though, I could sense that something was awry. I pressed the issue, and finally the workers did mention that there was one station in particular that everyone hated. “Great!” I said. “Let’s take a look!”

They walked me through the operation, and I honestly didn’t see much that looked wrong, or even iffy. Noticing my confusion, the team lead explained that ergonomically, the station was fine. “It’s these stupid things that we all hate!” she exclaimed, holding up a small solenoid.

She handed it to me, and I turned it over in my hand. I handed it back to her, and she assembled it into a unit. She then tested the unit, which promptly failed. “Happens about every third or fourth unit,” she said, and then went on to show me how she had to disassemble the unit, get a new solenoid, and repeat the assembly process all over again. She would do this until the unit passed test, which didn’t always happen on the second try.

“So what happens to the bad solenoids?” I asked. With a bit of a knowing smirk, she pointed behind me. “See for yourself.”

I turned around and almost lost my breath. Under the stairs to a mezzanine was a 4’x4’x4’ box FULL of solenoids! The sheer number of them was enough to cause this lean practitioner to panic—forget the fact that each one cost us about seven dollars to buy, and that didn’t include all the labor we’d spent to assemble and disassemble them into units.

After I composed myself (which admittedly took a few seconds), I asked her how long this had been going on. “Years,” she replied. Clearly, the process (or, really, several processes) was broken.

I convened the team and we set out to solve this puzzle. After some work (I’ll spare you the details—if you’re reading this, then you’ve likely been there!), we figured out what had happened.

The Problem

Years ago, before many of us had joined the company, our customer had encountered a problem and reported it back to us. Back then, the folks involved in problem solving didn’t have the training or the support to really find the root cause, so instead they just threw the kitchen sink at the issue. Fast was better than good, especially if a customer was unhappy. One of the things that got changed was the assembly line’s solenoid test acceptance criteria: instead of allowing the entire spec to be used (say, 1-10 milliamps), engineering narrowed the acceptance criteria to 3-7 milliamps. This change (along with a few others) seemed to make the problem go away, so it stayed.
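A rough sketch shows how expensive a narrowed acceptance window can be. The article gives only the spec limits, so the distribution below is purely an illustrative assumption: suppose the solenoids’ test readings are normally distributed with a mean of 5 mA and a standard deviation of 2 mA (invented numbers, not from the story).

```python
import math

def normal_cdf(x, mu, sigma):
    """Cumulative normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def window_yield(lo, hi, mu, sigma):
    """Fraction of parts whose test reading falls inside [lo, hi]."""
    return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)

# Hypothetical part distribution: mean 5 mA, sigma 2 mA (assumed).
MU, SIGMA = 5.0, 2.0

full_spec = window_yield(1, 10, MU, SIGMA)  # original 1-10 mA spec
narrowed = window_yield(3, 7, MU, SIGMA)    # tightened 3-7 mA criteria

print(f"Pass rate at 1-10 mA: {full_spec:.1%}")  # ~97%
print(f"Pass rate at 3-7 mA:  {narrowed:.1%}")   # ~68%
```

Under those assumed numbers, nearly every purchased part conforms to the 1-10 mA spec, yet roughly a third of them fail the tightened 3-7 mA test, which is in the same ballpark as the “every third or fourth unit” failure rate the team lead described.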

What didn’t make its way back to the floor was that the problem DID come back a couple of months later, and engineering DID eventually figure out the root cause. It turned out to have nothing to do with the solenoid at all (go figure!). However, no one ever circled back to revert the test criteria to our original, allowable 1-10 milliamps. What’s worse, our vendor blueprint had never been updated, even from the beginning, so we’d been purchasing parts to a 1-10 mA specification, sorting them down to 3-7 mA, and throwing the rest away. And we’d done that for years. Let that sink in for a minute. We’ll come back to it shortly.
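The scrap arithmetic is sobering even with rough numbers. The failure rate (“every third or fourth unit”) and the seven-dollar part cost come from the story; the annual volume below is an assumed placeholder, since the article doesn’t give one.

```python
# Rough scrap-cost sketch. Failure rate and part cost come from the story;
# the annual volume is an assumption for illustration only.
FAIL_RATE = 1 / 3.5    # "every third or fourth unit" fails at test
PART_COST = 7.00       # dollars per solenoid
ANNUAL_UNITS = 10_000  # assumed volume, not from the article

# If each build attempt fails with probability p, a good unit takes
# 1 / (1 - p) attempts on average, scrapping p / (1 - p) solenoids.
scrap_per_good_unit = FAIL_RATE / (1 - FAIL_RATE)    # 0.4
scrapped_parts = ANNUAL_UNITS * scrap_per_good_unit  # 4,000 per year
material_cost = scrapped_parts * PART_COST           # $28,000 per year

print(f"Scrapped solenoids per year: {scrapped_parts:,.0f}")
print(f"Material cost alone: ${material_cost:,.0f}, before rework labor")
```

Even at that modest assumed volume, the material cost alone runs to tens of thousands of dollars a year, and it excludes the labor spent assembling, testing, and tearing down every failed unit.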

Fixing the Situation

Needless to say, after we figured all of this out, we sought to rectify the situation ASAP. Although engineering was confident that the root cause had been previously identified and addressed, they were still hesitant to give us our entire 1-10 mA spec back. We eventually settled on 2-9 mA, which we could live with because our vendor agreed to control their process to that tolerance at no extra cost. So we updated the operation sheets (and the blueprints!) and implemented our changes.

Now, if we’d been scrapping those solenoids because they failed to meet the purchase spec, I feel confident we would have noticed long before several years had elapsed. In fact, not only would we have had a method of tracking it, we’d have visited the vendor, charged them back for bad product, and required root cause/corrective action/preventive action statements. However, since this situation had originally evolved as an “engineering thing” under the umbrella of solving an important customer issue, no one kept tabs on it or thought to question it. We had no method or process to track the effect of the changes that had been made. We simply chalked it up as an appraisal cost of quality and assumed it was unavoidable.

When it comes to the cost of quality, it all counts. Some of it is necessary, sure, but that doesn’t mean we shouldn’t continuously measure, monitor and question it. Businesses exist to make money; if they don’t, they aren’t in business for long. We owe it to our customers and employees to keep track of these costs and continuously seek to improve them, even if they don’t make the veritable naughty list that is the cost of poor quality. Q