The scope of product testing is growing broader and more complex. Traditionally, the objective was to confirm compliance with relevant safety regulations, but today more product management teams are also pushing for functionality and reliability testing. They want to ensure the product is not just safe for consumers to use, but that consumers will enjoy using it. After all, the impact on a company’s brand reputation and bottom line can be severe if a defect or other problem is not discovered before the manufacturing process is complete. The challenge, therefore, is how to conduct more thorough testing without delaying production or disrupting the delivery plan. A statistics-based risk assessment program enables quality managers to strike the necessary balance between an effective test and inspection process and speed to market.

As its name suggests, a statistics-based risk assessment program assesses quality risk based on an analysis of historical data. For example, working with a manufacturing facility that has a track record of producing defective products, or with a facility that will produce your product for the first time, increases the risk. But that is just the tip of the proverbial iceberg. Thorough pre-production testing can examine any number of a product’s features and functions and leverage statistical analysis to identify problems before the assembly lines start moving.

Consider the production of a new model of smartphone. Smartphones have become powerful minicomputers that are users’ primary tools for browsing the internet, getting work done, taking and sharing photos, and listening to music, among dozens of other tasks.

Poor initial reviews by journalists, bloggers and early adopters will doom sales. Therefore, companies want the inspection and testing process to address more than whether the device powers on and can make/receive calls clearly. They want to address the typical consumer’s expectations of quality: usability, aesthetics (i.e., does it “look good”), performance, reliability and compatibility with their ever-growing collections of Internet of Things (IoT) devices.

The category of reliability alone should include a long list of tests, including:

  • Bending and strain relief
  • Cable plugging and unplugging
  • Static pull
  • Button/knob press
  • Electrical static discharge
  • Salt water spray
  • Speaker life
  • Bare unit drop test
  • Coating, painting and adhesion

Conducting statistics-based tests for all of these areas will not only identify defects that are immediately apparent, but anticipate potential problems months from now.

Consider the device’s power button, which not only powers it on and off, but also lets the user turn just the screen display on and off. It may initially respond quickly and accurately to a finger press, but how will it hold up as the user presses it repeatedly day after day? Determining the answer requires testing before and after manufacturing.

There are five phases that virtually all new devices go through:

  1. Ideation and concept: Conceive a new product and conduct market and feasibility analyses.
  2. Definition and design: Develop the product’s design and features.
  3. Development: Convert the design concept into a working prototype.
  4. Production: Turn one prototype into large volumes of the final product.
  5. Distribution: Deliver the finished products to distributors and retail outlets.

Let’s stay with the example of testing the smartphone’s power button. You must begin the testing process during phase 2 by first defining quality requirements and specifications for the button’s long-term usage, such as a minimum life of 1,500 press cycles.

Begin conducting the actual testing when you move to phase 3 – development. Test the working sample’s power button life and ensure it performs for no fewer than the established threshold of 1,500 cycles.
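As a rough sketch of how that threshold check might be recorded, consider the minimal example below. Only the 1,500-cycle requirement comes from the scenario above; the function name, the sample sizes and the cycles-to-failure figures are all invented for illustration.

```python
# Hypothetical sketch: check sampled power buttons against a life-cycle threshold.
# The 1,500-cycle requirement comes from the article; all sample data is invented.

CYCLE_THRESHOLD = 1_500  # minimum press cycles each button must survive


def buttons_pass(cycles_to_failure, threshold=CYCLE_THRESHOLD):
    """Return True only if every sampled button survived at least `threshold` cycles."""
    return all(c >= threshold for c in cycles_to_failure)


# Cycles-to-failure recorded for five sampled prototype units (illustrative numbers)
sample = [2_340, 1_980, 2_105, 1_720, 2_410]
print(buttons_pass(sample))          # every unit clears 1,500 cycles -> True
print(buttons_pass([1_400, 2_000]))  # one unit fails early -> False
```

A real program would of course track many more samples per production lot and apply statistical confidence bounds rather than a simple pass/fail on each unit, but the gating logic is the same.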

Once you’re ready to begin the production phase, take steps to ensure the manufacturing process is consistent through regular monitoring, and validate button life by conducting regular spot testing.

Even when the manufacturing process is complete, the testing process is not. During the distribution phase, validate each order’s overall quality prior to shipping. Continue to monitor quality throughout the entire manufacturing process to ensure the power button’s materials and components have not changed since its engineering specs were approved. And that’s all just to ensure the power button meets users’ expectations. This process applies to all usage reliability and environmental reliability tests.

These two charts show just how costly the traditional approach of limiting testing only to confirm compliance with safety regulations has become.

The chart for Vendor A shows that testing revealed high defect rates in the initial production batches. The statistics alert the quality manager to a problem related to design or engineering. The manager halts the process so the issue can be addressed before mass manufacturing begins. Consequently, the defect rate falls and remains flat.
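One common way to surface the kind of trend Vendor A’s quality manager spots is an attribute control chart (p-chart) on per-batch defect rates: batches whose rate exceeds the 3-sigma upper control limit are flagged for investigation. The sketch below uses the standard p-chart formula; the batch figures themselves are invented for illustration.

```python
import math


def p_chart_limits(defects, inspected):
    """Compute the p-chart center line and 3-sigma upper control limit
    from pooled batch data (standard attribute-control-chart formula)."""
    p_bar = sum(defects) / sum(inspected)    # pooled defect rate across batches
    n_bar = sum(inspected) / len(inspected)  # average batch sample size
    ucl = p_bar + 3 * math.sqrt(p_bar * (1 - p_bar) / n_bar)
    return p_bar, ucl


# Invented data: defective units found and units inspected, per production batch
defects = [12, 15, 9, 40, 11]
inspected = [500, 500, 500, 500, 500]

p_bar, ucl = p_chart_limits(defects, inspected)
flagged = [i for i, (d, n) in enumerate(zip(defects, inspected)) if d / n > ucl]
print(flagged)  # indices of batches above the upper control limit -> [3]
```

With these numbers, only the fourth batch (8% defective against a limit of roughly 5.9%) is flagged, which is exactly the kind of early signal that justifies halting the line before mass production.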

The chart for Vendor B reflects a production process that does not include inspection and testing before mass production. The quality manager cannot identify any trends pointing to a problem, so the defect rate remains unacceptably high.

Taking a statistical analysis approach to testing will enable you to detect issues before manufacturing begins, and to re-allocate resources to address these priority areas as necessary. It may sound time-consuming and costly, but consider the far greater amount of money and time you will have to devote to fixing a problem after manufacturing ends.

Picture a seesaw on the playground with prevention cost on one end and failure cost on the other. Prevention cost represents only the outlay for outsourcing testing before, during and after manufacturing to an outside expert service. But failure cost includes the burden of high return and exchange rates, poor customer ratings that damage your brand, and the high warranty cost per unit for repairs and replacements. On the seesaw, the failure cost is much heavier.
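The seesaw metaphor can be put into rough numbers with a simple expected-cost comparison. Every figure below (run size, defect rate, per-unit failure cost, testing outlay) is invented purely to illustrate why the failure side tends to outweigh the prevention side.

```python
def expected_failure_cost(units, defect_rate, cost_per_failure):
    """Expected downstream cost of shipping defects: returns, exchanges,
    warranty repairs and replacements (illustrative model only)."""
    return units * defect_rate * cost_per_failure


# Invented figures for a single production run
units = 100_000
prevention_cost = 50_000  # outsourced testing before, during and after manufacturing

# Assume 3% of shipped units fail in the field at $45 per failed unit
failure_cost = expected_failure_cost(units, defect_rate=0.03, cost_per_failure=45)

print(failure_cost)                    # -> 135000.0
print(failure_cost > prevention_cost)  # the failure side of the seesaw is heavier
```

Even this toy model understates the gap, since it omits the hard-to-quantify brand damage from poor customer ratings that the article describes.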