Computerized intelligent data control ensures data integrity and minimizes manufacturing variation.

A guarantee of zero defects for dimensions specified to tenths and millionths of an inch puts enormous pressure on shop-floor operators and engineers, who face the challenge of manufacturing complex products in small volumes and doing it on time.

Experience with more than 300 manufacturers of precision components has revealed problems with unaccounted-for measurement errors, recording mistakes and suboptimal process setup. Extrapolated across the industry, the resulting losses can reach billions of dollars annually.

Lapses in maintaining data integrity could be avoided if complete statistical assessment of process setup and measurement errors were factored into adaptive control procedures during all production phases—from first-piece approval to process control and final inspection. Data integrity should be transformed from a rhetorical quality topic to the crux of quality and productivity solutions.

At present, gage repeatability and reproducibility (GR&R), described in the Measurement Systems Analysis (MSA) manual, is the most popular source of statistical metrology. The technique is useful for controlling open-tolerance characteristics and high-volume production. However, the inherent limitations of GR&R sampling procedures, and acceptability criteria “carved in stone” around a large expansion of the standard deviation, restrict the number of solutions available to the manufacturer for the process-specific problems typical of close-tolerance manufacturing.
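
For reference, the arithmetic behind the MSA-style criterion looks roughly like the following sketch, which assumes the common 6-sigma expansion of the gage standard deviation and the usual 10%/30% thresholds; the function name and the data are illustrative, not part of any MSA software.

    import statistics

    def grr_percent_of_tolerance(readings, lsl, usl, expansion=6.0):
        # Expanded gage spread as a percentage of the tolerance width.
        # In a full study sigma would be pooled across parts and operators;
        # here a single series of repeat readings stands in for it.
        sigma_gage = statistics.stdev(readings)
        return 100.0 * (expansion * sigma_gage) / (usl - lsl)

    # Hypothetical repeat readings of one bore with one gage (inches)
    readings = [1.0002, 1.0003, 1.0001, 1.0002, 1.0004,
                1.0002, 1.0003, 1.0002, 1.0001, 1.0003]
    pct = grr_percent_of_tolerance(readings, 0.9995, 1.0005)
    # Common rule of thumb: <10% acceptable, 10-30% marginal, >30% unacceptable
    verdict = "acceptable" if pct < 10 else "marginal" if pct <= 30 else "unacceptable"
    print(f"%GR&R = {pct:.0f}% of tolerance -> {verdict}")

On this hypothetical data the expanded gage spread consumes roughly 57% of the 0.001-inch tolerance, so even a fairly steady gage fails the fixed criterion outright, which is precisely the restriction described above.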

One approach to dealing with these problems is Measurement Data Control (MDC). This technique focuses on data integrity through real-time control of data quality. MDC uses a static GR&R as its benchmark but is designed as a dynamic tool that is applied continually to take action as needed.

Building the data quality platform

Measurement data control captures significant causes of measurement errors and sets up information for real-time control of data quality. Its purpose is to simplify control of a measurement system and use it efficiently on a job-to-job basis.

MDC’s main objectives include providing the means for exact profiling of basic sample variation, precise process setup and the prevention of nonconformities that result from measurement errors. MDC can alleviate data recording mistakes and measurement uncertainties, prevent incoherent and flawed certification of product quality, and aid in training engineering staff, operators and inspectors in shop-floor metrology.

The system architecture contains a set of interconnected software tools that provide total, bottom-to-top control of data quality. Rule-based statistical data management is at the core of a quantitative evaluation of single and grouped causes of variation. Decisions from an “artificial data analyst” can be invoked for a range of tasks. During data collection it alerts the user to a suspicious reading. For non-normally distributed readings in a single sample, special rules are used to estimate probable spread and positioning. Important group-related decisions, such as reproducibility, gage compatibility, between-fixture variability and machine precision, are made automatically by the system. MDC is embedded in the integrated control system and is executed by an artificial data control manager that changes statistical and mathematical models to meet the manufacturer’s individual needs. A series of models links MDC to job control and product verification layouts.
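
The actual rule set is proprietary, but a minimal sketch of one such alert, assuming a simple distance-from-the-sample test, might look like this:

    import statistics

    def flag_suspicious(prior_readings, new_value, k=3.0):
        # Alert when a new reading sits more than k sample standard
        # deviations from the readings collected so far. A stand-in for
        # MDC's rule-based tests, not the real implementation.
        if len(prior_readings) < 3:
            return False  # too few readings to judge
        mean = statistics.mean(prior_readings)
        s = statistics.stdev(prior_readings)
        return s > 0 and abs(new_value - mean) > k * s

    print(flag_suspicious([1.0002, 1.0003, 1.0001, 1.0002], 1.0012))  # True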

The sampling design is determined by the purpose of the trials, the experience gained from similar process conditions and the type of process control model. The sampling method is up to the individual user but can include any number of measured parts or observers, and the parts can be consecutive or random. Measurement trials are performed on a one-cause-at-a-time basis without any preexisting conditions.

MDC repeatability is essentially a reference for all changed conditions, and is determined directly from the five to 12 measurements taken at the exact, thoroughly cleaned location with the same gage. A template of optional estimates for repeatability is made up of the actual range, a rules-based expert estimate of range expansion, GR&R derived from MSA statistics and the confidence intervals for statistical estimates. Upon acceptance of repeatability, the manufacturer can test any combination of significant sources of reproducibility. Within-part variation, such as taper and roundness, can be sampled separately or in combination. Eventually, total measurement error can be calculated as a percentage of process variation.
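
As an illustration of what such a template might contain, the following sketch computes the actual range, an MSA-style expanded spread and a chi-square confidence interval for the repeatability standard deviation; the names and the six bore readings are hypothetical.

    import math
    import statistics
    from scipy import stats

    def repeatability_reference(readings, confidence=0.95):
        # Template of optional repeatability estimates from 5 to 12
        # repeat readings at one cleaned location with the same gage.
        n = len(readings)
        s = statistics.stdev(readings)
        alpha = 1.0 - confidence
        # Chi-square confidence interval for the true standard deviation
        lo = s * math.sqrt((n - 1) / stats.chi2.ppf(1 - alpha / 2, n - 1))
        hi = s * math.sqrt((n - 1) / stats.chi2.ppf(alpha / 2, n - 1))
        return {"range": max(readings) - min(readings),
                "sigma": s,
                "spread_6s": 6 * s,  # MSA-style expansion
                "sigma_ci": (lo, hi)}

    bore = [1.25002, 1.25003, 1.25001, 1.25002, 1.25004, 1.25003]
    print(repeatability_reference(bore))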

Making selections

Selection of the appropriate acceptance criterion for reproducibility and other causes depends on the tightness of the tolerance range, gage resolution, the customer’s quality requirements, machine precision, estimated process variation, lot size and other factors. MDC criteria are divided into four groups, with a choice of different estimates in each group (a sketch after the list illustrates the underlying estimates):
  • width of spread, with the choice of the rules-based statistical estimate, actual sample range, GR&R derived from MSA statistics, measurement uncertainty derived from NIST guidance and the Cp index;

  • bias, with the choice of mean, median and the confidence interval for the mean;

  • location and spread, with the choice of the rules-based statistical estimate and the Cpk index; or

  • proximity to specification limits, with the choice of the mean, pre-control and the rules-based statistical estimate.
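
A minimal sketch of the standard estimates behind these four groups, assuming textbook Cp and Cpk formulas, a t-based confidence interval for the mean and the usual pre-control green zone (the middle half of the tolerance); none of this is MDC’s internal code.

    import math
    import statistics
    from scipy import stats

    def acceptance_estimates(sample, lsl, usl):
        # One estimate from each of the four criteria groups (illustrative).
        n = len(sample)
        mean = statistics.mean(sample)
        s = statistics.stdev(sample)
        half = stats.t.ppf(0.975, n - 1) * s / math.sqrt(n)
        quarter = (usl - lsl) / 4  # pre-control: middle half is the green zone
        return {
            "spread": {"range": max(sample) - min(sample),
                       "Cp": (usl - lsl) / (6 * s)},
            "bias": {"mean": mean, "median": statistics.median(sample),
                     "mean_ci": (mean - half, mean + half)},
            "location_and_spread": {"Cpk": min(usl - mean, mean - lsl) / (3 * s)},
            "proximity": {"in_green_zone": lsl + quarter <= mean <= usl - quarter},
        }

    print(acceptance_estimates([1.0002, 1.0003, 1.0001, 1.0002, 1.0004, 1.0003],
                               0.9995, 1.0005))
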
The appropriate acceptance criterion is difficult to define, let alone quantify, before the amount of process variation and the tool-wear pattern are determined. Perhaps the most crucial limitation is that conventional statistical criteria inflate measurement uncertainty and are unclear to shop-floor personnel. In the common case for close-tolerance manufacturing, the actual sample range and its location between specification limits can be used as the footprint of variation; in this case, the variation is underestimated. The rules-based statistical estimate of measurement variation predicts variation most accurately. The standard GR&R criterion can be reserved for evaluating the measurement system as a whole.

Multiple causes of basic variation, and its total value, are estimated by converging statistically measured individual elements into groups. The ability to explore the causes of errors with different sampling designs, using actual and estimated location and spread relative to specification limits, removes potential statistical errors that occur when a GR&R decomposition algorithm is derived using MSA guidance. The MDC approach allows precise ranking of all sources of errors and uncertainties. It provides data on the contributions of measurement error, shape variation and piece-to-piece variation to the basic sample variation.
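
On the standard assumption that independent causes add in variance, the convergence of single causes into a total, and the ranking of their contributions, might be sketched as follows (the cause names and values are hypothetical):

    import math

    def converge_causes(sigmas):
        # Independent causes combine in quadrature:
        # sigma_total^2 = sum of the individual sigma_i^2.
        total_var = sum(s * s for s in sigmas.values())
        shares = {name: 100.0 * s * s / total_var for name, s in sigmas.items()}
        return math.sqrt(total_var), shares

    causes = {"measurement": 0.00008, "shape": 0.00012, "piece_to_piece": 0.00025}
    total, shares = converge_causes(causes)
    print(f"total sigma = {total:.5f}")
    for name, pct in sorted(shares.items(), key=lambda kv: -kv[1]):
        print(f"  {name}: {pct:.0f}% of basic variance")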

The extent of basic variation determines what type of process model should be used. Relatively low basic variation allows the running of tool-wear models; increased basic variation compels the use of predictive control of the risk of defects. When the combined measurement error and shape variation is larger than the piece-to-piece variation, the GR&R sample is inserted in a process control model. The values of preprocess measurement error and the baseline sample variation are traced to in-process and inspection data. Discrepancies with real-time data help detect data recording mistakes and prompt corrective actions.
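
Read literally, the model-selection logic in this paragraph reduces to comparing variation components against each other and against the tolerance; the sketch below is one such reading, with the low-variation threshold an assumption rather than a documented MDC value.

    import math

    def select_process_model(sigma_meas, sigma_shape, sigma_piece,
                             tolerance, low_ratio=0.5):
        # Combined measurement and shape variation dominating piece-to-piece
        # variation forces the GR&R sample into the process control model.
        if math.hypot(sigma_meas, sigma_shape) > sigma_piece:
            return "insert GR&R sample into the process control model"
        basic = math.sqrt(sigma_meas**2 + sigma_shape**2 + sigma_piece**2)
        if 6 * basic / tolerance < low_ratio:  # threshold is assumed
            return "tool-wear model"
        return "predictive control of the risk of defects"

    print(select_process_model(0.00008, 0.00012, 0.00025, tolerance=0.001))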

There are four general modes in the measurement control process:

  • The investigative mode. A quality professional selects gages, evaluates fixtures, tries new measuring methods, ranks causes of repeatability and reproducibility, and conducts statistical setup profiling.

  • The measurement probing mode. An operator is instructed on how to sample and measure parts to confirm that causes of basic variation are in control.

  • The variation control mode. GR&R spread or shape variations are inserted in the predictive process model controlling risk of defects. An artificial process control manager that works under specific rules embedded in the system executes the process modeling.

  • The data audit mode. An estimate of a series of samples establishes the baseline sample variation. Within-sample variation recorded by an operator during in-process sampling cannot be lower than this baseline variation, as the sketch below illustrates.
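
A hedged sketch of the audit rule in the last mode: a within-sample range reported below the established baseline is physically implausible and points to a recording mistake. The function name and the data are assumptions for illustration.

    def audit_within_sample(sample_ranges, baseline_range):
        # In-process within-sample variation cannot plausibly fall below
        # the baseline established from a series of setup samples.
        return [i for i, r in enumerate(sample_ranges) if r < baseline_range]

    # Sample 1 is suspiciously tight: likely copied or invented readings
    print(audit_within_sample([0.0004, 0.0001, 0.0005], baseline_range=0.0003))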


Colors tell the story

An example of statistical assessment of process setup in the investigative mode is an interface that contains a data grid with color-coded readings. It also has interchangeable graphic representations of samples, including location and spread relative to specification limits with a choice of different statistical estimates and multiplying factors, three-way tolerance analysis with verification of sufficient sample size, a color-coded histogram and a matrix analysis. In this mode, groups of measurement trials are depicted. Also shown are statistical estimates of bias, variation and process capability for individual samples and groups, and the acceptance decision displayed for every trial or combination of trials. This decision is based on statistical data, expert rules and manufacturer-selected acceptance criteria.

In a typical case, a reference of repeatability for bore size is established by six measurements. The samples represent possible sources of variation and help determine whether to accept a process centering.

Every measurement is color-coded using rule-based statistical tests. The appearance of a red reading calls for measuring the same point again. Single trials, and the interactions between them, are flagged by the system, and corrective actions are undertaken to run a process with zero defects. This demonstrates that MDC allows a quality professional to experiment with unlimited causes of errors and uncertainties, in the context of overall variation and risk of defects, for a particular characteristic.
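
A minimal sketch of such a traffic-light test, assuming simple sigma cutoffs against the repeatability reference; MDC’s actual rules are richer than this.

    def color_code(reading, ref_mean, ref_sigma):
        # Traffic-light coding against the repeatability reference.
        # The 2- and 3-sigma cutoffs are illustrative assumptions.
        z = abs(reading - ref_mean) / ref_sigma
        if z <= 2.0:
            return "green"   # consistent with the reference
        if z <= 3.0:
            return "yellow"  # borderline; watch the next readings
        return "red"         # remeasure the same point

    print(color_code(1.25009, ref_mean=1.25002, ref_sigma=0.00001))  # red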

In the measurement probing mode, an operator would use simplified sampling designs to make sure that no surprises occur with a new batch of material or refreshed coolant. Only one sample of three bore sizes, measured at the top and bottom locations, is used for a new lot.

Benefits can be gained by operating the control tools for measurements, processes and product quality as a single system rather than as islands of computer-aided support. A simplified functional scheme, representing the interaction between statistical setup profiling, process control and final inspection, can be used in practical situations. A basic mode starts with a gaging and machine precision study and ends with in-process control. Conversely, if unstable or excessive variation is observed during a production run, the mode starts after the gaging trials, and acceptance of variation results from control of process drift.

Close-tolerance characteristics that have a history of excessive variation are tested by MDC for gaging and machine precision. An “Accept” decision by MDC means that the process control model leading to maximum machine efficiency can be selected. A “Warning” restricts selection of the efficiency models for process control; if final inspection is included in the product acceptance plan, the use of c=0 acceptance sampling is not recommended. A “Reject” prompts corrective actions or selection of a model allowing controlled deviation from specification limits.
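
Paraphrased as code, the three-way gate described here might look like the sketch below; the option strings simply restate the text.

    def gate_process_options(decision):
        # Map the MDC gaging/machine-precision verdict to allowed actions.
        if decision == "Accept":
            return ["select the maximum-efficiency process control model"]
        if decision == "Warning":
            return ["restrict the efficiency models for process control",
                    "avoid c=0 acceptance sampling at final inspection"]
        return ["corrective actions",
                "model allowing controlled deviation from specification limits"]

    print(gate_process_options("Warning"))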

If unstable or excessive variation turns up during a production run, a search for the root causes of sample variation should be undertaken using MDC procedures in real time. To eliminate the effect of tool wear, the study should be conducted after a tool change. Acceptance decisions will have a substantial impact on the further use of the current control model or will force troubleshooting. Problems with measurement errors are automatically flagged. The ease with which efficient data control modes can be set up and executed becomes an important factor in MDC implementation. Q

Tech Tips

  • Flexible sampling for statistical assessment of process setup and measurement errors.
  • Validation of correct measurements using process-adaptive acceptance criteria.
  • Estimation of the compound variation by convergence of single causes.
  • Color-coded flagging through all phases of data collection and analysis.
  • In-process control of irremovable causes of variation through use of the adaptive model.
  • Product verification at all stages of production.