Quality Magazine

Crash Course in Data Management Speeds Up Huge Simulation Task

August 11, 2010
Simulation Lifecycle Management cuts qualification process for Abaqus FEA crash dummy models from weeks to days.

A typical crash dummy family used to assess the effects of automobile collisions on humans of different sexes and ages. Source: Parker Group


The idea for the automotive crash test dummy first came to life in the 1950s when U.S. Air Force flight surgeon Col. John Stapp realized that more of his fighter pilots were dying in car crashes than from accidents in their high-tech jet aircraft. The Stapp Car Crash Conferences started that decade and continue today as a venue to share information on the latest research and advancements for improving vehicle crashworthiness and occupant safety.

A major challenge in the ongoing development of physical crash dummies is the need to reasonably represent how the human body responds in an automotive accident. The ultimate goal of crash dummy research is to aid in creating design improvements for both vehicles and occupant restraint systems to reduce injuries and save lives.

Energy absorbing crumple zones and other structural innovations do help protect occupants during car crashes. The addition of air bags, combined with a properly worn lap/shoulder belt, reduces driver deaths by 61% in a frontal crash, according to the National Highway Traffic Safety Administration (NHTSA). However, car manufacturers are now also legally obligated to certify the effects of crash events on the humans involved. As a result, crash dummies for front-impact (Hybrid), side-impact (SID) and rear-impact (RID) have been developed, with engineers from around the world contributing over the years to their evolution. Today, physical crash dummies are a valuable part of every automotive OEM’s product design, development and testing arsenal.



A crash dummy has a complex internal structure and multiple sensors that record up to 35,000 data points in a typical 150 millisecond crash. Source: Parker Group

Smart Investment, Big Price Tag

A very valuable part: A single physical crash dummy can cost more than $200,000. Made from a variety of materials, including custom-molded urethane and vinyl, they are based on true-to-life human dimensions (a typical dummy family includes several different dummies, ranging in size from a toddler to a large adult male). They have ribs, spines, necks, heads and limbs that respond to impact in realistic ways. Also, they are loaded with sensors (44 data channels on the current front-impact standard, the Hybrid III) that record up to 35,000 data points in a typical 100 to 150 millisecond crash.

Automotive companies and government organizations continue to collaborate toward the acceptance of international safety standards (a WorldSID project is now underway) and toward harmonized methods of testing as the market for each country’s vehicles becomes increasingly global. However, physical test dummies are only a part of the crash and safety certification process. As computer-aided engineering software and computing resources rapidly advance, increasing emphasis is being placed on developing ever-more-accurate virtual crash dummies.

Simulating the Crash Simulator

Since a physical crash dummy is a manufactured product like any other, it is no surprise that engineers use realistic simulation with finite element analysis (FEA) software to guide its design, production and performance. Given the power of FEA to cost-effectively reduce real-world testing of expensive crash dummies, and of even more expensive vehicle prototypes, it definitely pays to simulate the simulator: You can crash a virtual car and dummy many times, much faster and at far less cost than a single physical test.

Since the goal of simulating a simulator of the complex human body is to closely represent reality, the resulting data must correlate well with physical crash test results. So standardization of FEA models is critical: Each virtual dummy must exhibit responses to crash impact loads and accelerations in a precise, repeatable manner that mirrors what happens to its corresponding physical crash dummy.

What is more, the simulation must continue to run smoothly as each new and improved version of a physical crash dummy comes on the market and as each new version of crash simulation software is released. Simulation software companies go to great lengths to validate the consistency and accuracy of their software in a process called qualification. In the case of creating a new virtual crash dummy or updating an existing one, the software qualification process involves evaluating large quantities of FEA data, gathered from multiple simulations of various crash scenarios, run on different versions of simulation software and, in turn, correlated with new physical test data.



FEA virtual crash dummy model. Source: Parker Group

Large Amounts of Data

In the Providence, RI, headquarters of Simulia (the Dassault Systèmes brand for realistic simulation), a team of engineers qualifies and supports a range of virtual crash dummy models developed for their Abaqus FEA software by First Technology Safety Systems (FTSS), a leader in crash dummy innovation for over 40 years. The Simulia group also separately develops and qualifies its own virtual crash dummy models, which are versions of the BioRID (Biofidelic Rear Impact Dummy) and WorldSID (Worldwide Side Impact Dummy). “We need to make sure that every new version of each dummy model that is released will work accurately and give the same response no matter which version of Abaqus we, or our customers, are using,” says Sridhar Sankar, leader of the Simulia crash engineering specialists group.

A typical FEA dummy model will have about 100,000 elements, 150,000 nodes and 500,000 degrees of freedom. “To ensure, within engineering tolerances, that you get the same results from the virtual dummies as from the physical tests of the real ones, we have to run component, sub-assembly and full-model tests on each one,” says Sankar.

A component test is used to evaluate an individual FEA model of a dummy neck being bent, a lumbar spine being shoved sideways or a head being dropped on a hard surface. A sub-assembly test assesses the stresses on a full rib cage model hit from the side by a pendulum, with the ribs being individually deformed and possibly intruding into the body cavity. In addition, a full-body test involves an entire dummy model being hit from the side by a virtual “solid” barrier or subjected to a simulated sled test. Different testing standards (NHTSA, IIHS, etc.) require a variety of tests. “With 30 to 60 of these validation tests per dummy model, we end up with a very large number of outputs to generate and then compare,” says Sankar.

Manual Qualification Slows Down Team

Until recently, dummy qualification took the Simulia engineers about four weeks for each updated Abaqus virtual dummy model. (A completely new model, such as a WorldSid, would take far longer than that to create.) “These kinds of challenges meant a lot of man-hours for our team,” says Simulia crash engineering specialist George Scarlat.

Before they could even begin the analysis, Scarlat’s group had to create their databases by manually modifying each of the previous validation test responses to add proper filtering (which must meet industry standards such as SAE J211 or ISO 6487) to the variables so that the results between different versions of Abaqus could be compared.
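The filtering that SAE J211 specifies can be sketched in a few lines of Python. The code below is a minimal, illustrative implementation of the standard’s phaseless four-pole Butterworth low-pass filter (a two-pole filter run forward and then backward over the channel data); the start-up conditions are simplified and the example channel is synthetic, so a production qualification pipeline would rely on validated filtering tooling rather than this sketch.

```python
import math

def cfc_filter(samples, dt, cfc):
    """Phaseless 4-pole Butterworth low-pass filter in the style of
    SAE J211: a 2-pole filter run forward, then backward, so the
    phase lag of the two passes cancels out."""
    wd = 2.0 * math.pi * cfc * 2.0775          # design frequency (rad/s)
    wa = math.tan(wd * dt / 2.0)               # prewarped digital frequency
    denom = 1.0 + math.sqrt(2.0) * wa + wa * wa
    a0 = wa * wa / denom
    a1, a2 = 2.0 * a0, a0
    b1 = -2.0 * (wa * wa - 1.0) / denom
    b2 = (-1.0 + math.sqrt(2.0) * wa - wa * wa) / denom

    def one_pass(x):
        y = [x[0], x[1]]                       # simplified start-up assumption
        for i in range(2, len(x)):
            y.append(a0 * x[i] + a1 * x[i - 1] + a2 * x[i - 2]
                     + b1 * y[i - 1] + b2 * y[i - 2])
        return y

    forward = one_pass(samples)
    return one_pass(forward[::-1])[::-1]       # backward pass, then restore order

# Hypothetical example: a 100 Hz acceleration channel sampled at 20 kHz,
# filtered at channel frequency class CFC 1000.
dt = 1.0 / 20000.0
raw = [math.sin(2 * math.pi * 100 * i * dt) for i in range(400)]
smooth = cfc_filter(raw, dt, 1000.0)
```

A 100 Hz signal sits well below the CFC 1000 passband edge, so it comes through the filter nearly unchanged, while high-frequency ringing is strongly attenuated.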

Next, the engineers had to manually launch and run the simulations for the 30 to 60 tests in the current and previous versions of Abaqus (usually four or five total). Once they completed the various manual analyses, the team then had to run a post-processing step to generate the curve plots describing the analysis results. The amount of data continues to multiply at this point because the results of a single FEA analysis of dummy rib cage intrusions, for example, could produce up to 200 output variables per test.

Finally, a second post-processing step would take the analysis curves, two at a time, and generate statistical comparisons to quantify the agreement between the same variables in different versions of Abaqus. “So in terms of data you could have 60 tests multiplied by 200 variables multiplied by five different versions of Abaqus,” says Scarlat. “This was a lot of manual work. To meet our deadlines, we really needed to improve the efficiency of the entire process.”
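The pairwise curve comparison can be illustrated with simple statistics. The article does not name the exact measures Simulia’s second post-processing step computed, so the peak ratio and correlation coefficient below are hypothetical stand-ins for whatever agreement metrics were actually used:

```python
import math

def compare_curves(a, b):
    """Compare two response histories of equal length.
    Returns (peak_ratio, correlation): illustrative agreement
    metrics, not the actual statistics from the Simulia workflow."""
    peak_a = max(abs(v) for v in a)
    peak_b = max(abs(v) for v in b)
    peak_ratio = peak_b / peak_a if peak_a else float("nan")

    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    corr = cov / math.sqrt(var_a * var_b)
    return peak_ratio, corr

# Synthetic example: the "same" head-acceleration pulse produced by two
# versions of the solver, the newer one reading 2% higher at the peak.
old = [math.exp(-((t - 50) / 15) ** 2) for t in range(150)]
new = [1.02 * v for v in old]
peak_ratio, corr = compare_curves(old, new)
```

With 60 tests, up to 200 variables per test and five software versions, a comparison like this has to run thousands of times per dummy model, which is exactly why doing it by hand did not scale.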



Screenshot shows how SLM and Isight are used to qualify a crash dummy FEA model for two versions of Abaqus software. Source: Parker Group

Combining SLM and PLM

The group decided to apply a combination of Simulia’s own Simulation Lifecycle Management (SLM) tool and the company’s Isight software for simulation automation and design optimization to automate and manage the tasks. The results were dramatic: “By using our own tools, which we also provide to our customers for automating and managing their simulation processes, we went from four weeks to four days for the qualification process,” says Scarlat.

How did they do that? Simulia SLM leverages Dassault Systèmes’ Enovia product lifecycle management (PLM) solution together with Simulia’s simulation expertise. Using SLM as both a database and a process controller, the engineers could save and manage their simulation data, reuse simulations, retain performance metrics, protect intellectual property and shorten design cycles. They used Isight software within SLM as an add-on tool to drive the automation (Isight also has powerful optimization capabilities, which were not needed in this case).

The crash dummy qualification team used SLM as the underlying driver for running each of the three main dummy qualification tasks (preprocessing, analysis and post processing) sequentially. SLM automatically exported all the necessary files from its database for each task (activity). It then automatically imported back into its database any specified result files after the activity was run.
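The export-run-import sequence described above can be sketched as a minimal pipeline in plain Python. Everything here (the repository keys, the three activities and what they produce) is hypothetical; SLM’s actual mechanics are internal to the product:

```python
def run_pipeline(repository, activities):
    """Run activities in order. Each activity declares which data it
    needs (exported from the managed repository), runs, and returns
    results that are imported back into the repository."""
    for activity in activities:
        inputs = {k: repository[k] for k in activity["needs"]}  # export
        results = activity["run"](inputs)                       # run
        repository.update(results)                              # import
    return repository

# Hypothetical preprocessing -> analysis -> post-processing sequence:
activities = [
    {"needs": ["raw_responses"],
     "run": lambda d: {"filtered": [v * 0.98 for v in d["raw_responses"]]}},
    {"needs": ["filtered"],
     "run": lambda d: {"curves": {"head_accel": d["filtered"]}}},
    {"needs": ["curves"],
     "run": lambda d: {"report": f"{len(d['curves'])} curve(s) compared"}},
]
repo = run_pipeline({"raw_responses": [0.0, 1.0, 0.5]}, activities)
```

The point of the pattern is that every intermediate result lands back in one managed repository, so nothing depends on files an engineer remembered to copy by hand.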

Furthering the Qualification Process

SLM also leverages the capabilities of Isight, in this case for process automation. The crash group engineers first used Isight to create a workflow that enabled them to simultaneously launch all of the Abaqus analysis tasks on a compute cluster. A second Isight workflow was employed in the final post processing task to help determine the correlation between results from different versions of Abaqus software on identical dummy tests. A Python script was used to modify input files, compare results and generate comparison reports. The team ran each project on a Linux 64-bit compute cluster using an average of 1,200 CPUs for a full run-through.
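The simultaneous-launch idea can be approximated with a thread pool in plain Python. Isight’s actual job-submission mechanism is proprietary, so `run_analysis` below is only a placeholder for dispatching one Abaqus job to the cluster, and the test and version names are made up:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical test names and illustrative Abaqus releases.
tests = ["neck_flexion", "lumbar_shear", "head_drop", "rib_pendulum"]
versions = ["6.8", "6.9", "6.10"]

def run_analysis(test, version):
    """Placeholder for submitting one solver job to the compute
    cluster; a real workflow would invoke Abaqus here."""
    return f"{test}@{version}: done"

# Launch every (test, version) combination concurrently, as the Isight
# workflow did, instead of one job at a time.
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(run_analysis, t, v)
               for v in versions for t in tests]
    results = [f.result() for f in futures]
```

With 30 to 60 tests against four or five Abaqus versions, launching the whole matrix at once on a 1,200-CPU cluster is what turns days of babysitting serial runs into a single automated pass.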

“Automating our tasks was a big help,” says Scarlat. “No user intervention was needed during the complicated workflow execution, which resulted in a significant reduction of our process time for the whole project.” His team qualified five FTSS dummies in the first year of using the new workflow, taking about the same number of man-hours previously needed to finish a single dummy qualification project.

The automobile safety engineering world is getting ever closer to the perfect crash dummy. Hybrid IV, also known as THOR, is a dummy currently under development with biomechanical and measurement enhancements that will generate more data than ever. “With such complicated, data-rich FEA in the pipeline, the use of SLM and Isight to automate and manage it all will be even more crucial to the efficiency of our engineering team,” says Sankar.