Not long ago, a manufacturer of needles for medical syringes was in the middle of another shift of quality assurance testing. The test involved three individuals working side by side: each grabbed needles off the line, secured them in a drill-style "chuck," and pierced a rubber slab to simulate the piercing of human skin. Transducers captured the force required for needle insertion, and each needle received a "pass" or "fail" grade based on how its sharpness compared to a benchmark measurement.
And then a strange thing happened. Seemingly out of the blue, one of the three needle testers began to get measurements that were far off the readings of the other two, causing all of the products passing through her station to receive a failing grade. Throughput took a huge hit. Supervisors were baffled. When you watched the three go about their work, everything looked the same. Their technique appeared identical, as did the amount of force being exerted to pierce the rubber slab. What was going on?
After some careful investigation, the cause was traced to the skin oils transferred to each needle during handling. One operator's skin had naturally lower lubricity than the other two operators', which increased the force the test system required to pierce the rubber slab. After latex gloves were introduced to the process, the measurements instantly lined up again. Full production capacity resumed.
There is a lesson here for everyone involved with quality testing in high-volume manufacturing: it pays to set some time aside to carefully consider every possible variable that may one day threaten the accuracy and consistency of your data. That includes reviewing all inputs, outputs and interfaces—everything from how a specimen is gripped and loaded to how measurements get taken and how test data is acquired and exported. Such steps should be approached as preventive measures that are performed at the start of a new test program instead of during a crisis. This upfront planning will help minimize those less-than-ideal situations where everyone drops everything to find out why a test station is suddenly failing every product.
Reviewing and managing potential variables is especially important with compression testing, which by its nature makes specimens behave differently when even the subtlest variable is introduced. Put another way, it is much harder to squish something in a repeatable manner than to pull it apart. Try pushing a rope together five times the same way versus pulling it apart five times, for example. Therein lies the difference between tension and compression testing.
What follows are six tips for helping you manage compression testing variables so that your data remains consistent and repeatable, product after product.
1. Document the details.
It is well worth the time spent to record all the details about a test setup. How the specimen is mounted, what interfaces are used, and the actuation technology are just some of the variables that can affect test outcomes. So can the number of hours a specimen waits before being tested, as well as running the same test in summer versus winter due to the varying lab temperatures and humidity levels. And what about those big, beautiful west-facing windows? They can easily result in different test values at 4 p.m. compared to 9 a.m.
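One lightweight way to put this into practice is to save a structured setup record alongside every test run, so each result can be traced back to the conditions that produced it. The sketch below is illustrative only; the field names and values are assumptions, not a standard schema.

```python
from dataclasses import dataclass, asdict
import json

# Illustrative record of test-setup variables worth documenting.
# Field names are hypothetical, not an industry-standard schema.
@dataclass
class TestSetupRecord:
    specimen_id: str
    fixture_id: str            # which grip/fixture was installed
    actuator: str              # e.g. "electromechanical", "servo-hydraulic"
    conditioning_hours: float  # how long the specimen waited before testing
    lab_temp_c: float          # summer vs. winter lab conditions matter
    lab_humidity_pct: float
    operator: str

record = TestSetupRecord(
    specimen_id="N-00421",
    fixture_id="compression-platen-A",
    actuator="electromechanical",
    conditioning_hours=24.0,
    lab_temp_c=22.5,
    lab_humidity_pct=41.0,
    operator="station-2",
)

# Store the record as JSON next to the raw test data so every
# measurement carries its context with it.
print(json.dumps(asdict(record), indent=2))
```

Even a simple record like this makes it possible to ask, months later, whether the odd 4 p.m. readings correlate with temperature, operator, or fixture.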
2. Get alignment as close to perfect as possible.
Very few compression tests involve specimens that are perfectly round or square. Poor control of specimen and grip angularity and concentricity means that compression tests are not pushing in the center of the specimen, nor are they pushing straight down on specimen ends. Unequal specimen lengths and widths are the norm, which makes test system alignment critically important to support repeatable testing and produce consistent test data.
For example, cylindrical test specimens want to buckle inconsistently when compressed—and if a test system is even slightly out of alignment, the compressed specimen may look more like a leaning tower and less like a hockey puck. Paying close attention to alignment before testing will help keep throughput high.
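To get a feel for how much a small misalignment matters, standard beam-column mechanics (not something from this article) says that a compressive load applied off-center by an eccentricity e adds a bending stress on top of the uniform axial stress; for a solid cylinder the peak edge stress works out to 1 + 8e/d times the nominal stress. The function below is a hedged illustration of that textbook relationship.

```python
# Illustration of why alignment matters, using the standard result
# for an eccentrically loaded solid cylinder: peak edge stress is
# (1 + 8*e/d) times the nominal axial stress P/A. The 0.5 mm offset
# below is an assumed example, not a measured value.

def edge_stress_ratio(eccentricity_mm, diameter_mm):
    """Peak edge stress divided by nominal axial stress for a solid
    cylinder loaded off-center by the given eccentricity."""
    return 1.0 + 8.0 * eccentricity_mm / diameter_mm

# A mere 0.5 mm offset on a 10 mm diameter specimen raises the peak
# stress at one edge by 40% -- more than enough to trigger the
# "leaning tower" buckling described above.
print(edge_stress_ratio(0.5, 10.0))  # 1.4
```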
3. Fixate on the fixtures.
Grips and fixtures come in direct contact with the specimen throughout a test. If they are not suited to a specific application or exhibit premature wear due to poor quality, they will introduce variability that will ultimately impact throughput. If the fixture changes, so will your test data. Therefore, it is essential to pay close attention to the fixtures for each test, and to periodically inspect all grips and fixtures in your test lab. Using only the highest-quality accessories is a wise investment that will quickly pay for itself.
4. Think about your transducer technology.
Measuring force and displacement is a challenge in compression testing, because the measurement device has a tendency to literally get in the way and become part of the test. On top of undermining test data fidelity, this phenomenon can destroy expensive measurement devices. What is the best way of getting good data without sacrificing transducers? Noncontact extensometers are a popular workaround. It is also important to have a simple and effective process for periodic transducer validation.
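A periodic validation process can be as simple as applying a known reference load and checking that the transducer's readings stay within a tolerance band. The sketch below assumes a 100 N reference and a 0.5% tolerance purely for illustration; real acceptance criteria come from your lab's calibration requirements.

```python
# Minimal sketch of a periodic transducer validation check: apply a
# known reference load, take several readings, and flag drift beyond
# a tolerance. The reference value and 0.5% tolerance are assumed
# examples, not a calibration standard.

def validate_transducer(readings_n, reference_n, tolerance_pct=0.5):
    """Return (passed, worst_error_pct) for readings taken while a
    known reference load is applied."""
    worst = max(abs(r - reference_n) / reference_n * 100.0
                for r in readings_n)
    return worst <= tolerance_pct, worst

# Three readings against a 100 N reference: worst error is 0.2%,
# inside the assumed 0.5% band.
passed, err = validate_transducer([99.8, 100.1, 99.9], reference_n=100.0)
print(passed, round(err, 2))  # True 0.2
```

Running a check like this on a schedule catches drift before it quietly corrupts weeks of production data.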
5. Employ smart data acquisition.
Many compression tests begin in a linear fashion but become nonlinear as a specimen begins to crush, crinkle or yield. If your data acquisition rate is insufficient, you might miss this important transition when trying to determine bend or compressive yield, for example. Combining "level crossing" and "timed" data acquisition may provide superior results.
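The hybrid scheme can be sketched as follows: keep a sample on every clock tick, but also keep any sample where the force signal crosses into a new level band, so rapid nonlinear events are captured even between ticks. The function name, the tick spacing, and the 0.5 N level step are all illustrative assumptions.

```python
# Sketch of combined "timed" + "level crossing" acquisition: retain
# every Nth sample on the clock, plus any sample where the force
# crosses a level boundary, so the crush/yield transition is not
# missed. Parameters below are illustrative, not recommendations.

def hybrid_acquire(samples, timed_every=10, level_step=0.5):
    """samples: list of (time, force_n) pairs. Returns the retained
    subset using timed ticks plus level-crossing triggers."""
    kept = []
    last_level = None
    for i, (t, f) in enumerate(samples):
        level = int(f // level_step)         # force band this sample falls in
        crossed = last_level is not None and level != last_level
        if i % timed_every == 0 or crossed:  # timed tick OR level crossing
            kept.append((t, f))
        last_level = level
    return kept

# Synthetic linear force ramp from 0 to just under 2 N.
ramp = [(i, i / 50) for i in range(100)]
kept = hybrid_acquire(ramp)
# 12 samples kept: 10 timed ticks, plus crossings at 0.5 N and
# 1.5 N (the 1.0 N crossing happens to land on a timed tick).
print(len(kept))  # 12
```

The benefit is that the timed channel keeps storage bounded during the slow linear portion, while the level-crossing channel guarantees extra resolution exactly where the curve bends.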
6. Automate wherever possible.
Quality testing is all about speed, consistency and repeatability. Therefore, the more you can remove human intervention and human hands from the process, the better. That extends all the way to how captured data is uploaded and saved into a network database.
Although it is difficult to be mindful of every variable at play, consistency must always be the ultimate objective for your test lab. The points above represent some of the key items to consider in the interest of keeping variability out of your compression testing. Spending a bit of time on these tasks upfront will save you much more time—and increase uptime—in the long run.