
Model vs. Test - What is your correlation strategy?

When comparing model and test results, how are you deciding you have "correlation" between the two?
 
For discrete events, I look at matching the general pattern of the results first, but then how about peak magnitudes? How close is close enough for you?
 
For a random response, is the frequency domain the way to go? What's close enough for frequencies or a PSD pattern?
 
Interested in your thoughts...
  • Here are a few of my thoughts:
     
    "Close enough" often is influenced by budget and timing, and availability of the right model data (and test data) to correlate.
     
    As a general rule, I try to correlate subsystems first, before a complete model. For example, correlate a single axle first before attempting a full Class 8 truck model. Sorting out correlation issues at the full-vehicle level often turns into the proverbial "needle in a haystack".
     
    In the same vein, I try to use discrete events first, and even static (or quasi-static) events initially. In a vehicle model, I'm looking for things like basic ride heights matching, the jounce bumper contacting when expected, etc. Next, if possible, I'll use controlled dynamic events, such as putting a vehicle on a 4-post machine and applying fixed-amplitude, fixed-frequency inputs into the tire patches. Eventually, I'll work up to frequency sweeps. Then the random inputs become more of a validation phase.
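
    As a concrete illustration of that progression, here is a minimal sketch of generating a fixed-amplitude swept-sine tire-patch input. The sample rate, amplitude, and 0.5-25 Hz sweep range are placeholder assumptions, not values from this thread:

    ```python
    import numpy as np
    from scipy.signal import chirp

    # Hypothetical 4-post rig input: a fixed-amplitude sine sweep at the tire patch.
    fs = 1000.0                         # sample rate in Hz (placeholder)
    t = np.arange(0.0, 60.0, 1.0 / fs)  # 60 s sweep duration (placeholder)
    amplitude_m = 0.005                 # 5 mm displacement amplitude (placeholder)

    # Linear sweep from 0.5 Hz to 25 Hz; substitute whatever band your rig covers.
    patch_disp = amplitude_m * chirp(t, f0=0.5, t1=t[-1], f1=25.0, method='linear')
    ```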
     
    In general, displacements are going to be the easiest to correlate. Anything greater than 5% error on peak magnitudes, I'm going to want to look into further. As you move down the chain to velocities, accelerations, forces, stress, and strain, each one becomes a bit more challenging to correlate.
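
    A minimal sketch of that peak-magnitude check, assuming `test` and `sim` are time-synchronized arrays of the same channel (the names and the threshold in the comment are only illustrations):

    ```python
    import numpy as np

    def peak_error_pct(test, sim):
        """Percent error between the peak absolute magnitudes of two aligned signals."""
        peak_test = np.max(np.abs(test))
        peak_sim = np.max(np.abs(sim))
        return 100.0 * (peak_sim - peak_test) / peak_test

    # For displacements, flag anything beyond +/-5% for a closer look:
    # if abs(peak_error_pct(test_disp, sim_disp)) > 5.0: ...
    ```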
     
    For more transient results, I've used normalized RMS on FFTs and PSDs with success. You definitely want to match frequencies within a few percent, and that's actually the easy part. The more difficult part is matching amplitudes, which are driven by damping. With models that contain flexible bodies, the modal damping starts to influence the results. Even for rubber bushings, you may need to consider implementing frequency- and amplitude-dependent bushings.
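
    One possible implementation of that comparison (a sketch under assumed details, not necessarily the exact method described above): estimate both PSDs with Welch's method, report the peak-frequency error, and take the RMS difference of the unit-normalized PSD shapes:

    ```python
    import numpy as np
    from scipy.signal import welch

    def psd_comparison(test, sim, fs, nperseg=4096):
        """Peak-frequency error (%) and normalized-RMS difference of PSD shapes."""
        f, p_test = welch(test, fs=fs, nperseg=nperseg)
        _, p_sim = welch(sim, fs=fs, nperseg=nperseg)

        # Peak frequencies should match within a few percent.
        f_pk_test = f[np.argmax(p_test)]
        f_pk_sim = f[np.argmax(p_sim)]
        freq_err_pct = 100.0 * (f_pk_sim - f_pk_test) / f_pk_test

        # Normalize each PSD to unit RMS so the metric compares shape,
        # then take the RMS of the difference as a single scalar.
        n_test = p_test / np.sqrt(np.mean(p_test ** 2))
        n_sim = p_sim / np.sqrt(np.mean(p_sim ** 2))
        nrms_diff = np.sqrt(np.mean((n_sim - n_test) ** 2))
        return freq_err_pct, nrms_diff
    ```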
     
    One of the more common issues I've seen with correlation is not matching the instrumentation locations (and orientations) in the model the same way they are in the test. You could potentially have very good correlation, but because you aren't quite comparing "apples to apples", the results look off.
     
    Another challenge with correlation is when the model is built using "design intent" data. Unless you understand with great confidence how the physical hardware differs from the design data, you will not be able to determine whether the model differences are modeling issues or differences due to build-tolerance variation. So, when tasked with correlating vehicle models, I've always had the best results when we CMM the vehicle that's going to be used for the objective measurements, and use that hardpoint data to define the suspension geometry. I also measure the bushings, springs, dampers, etc., from that exact same vehicle. K&C testing would also happen on that same vehicle.
     
    Hope this is helpful,
    Chris Coker
  • Hi Chris,
     
    Thanks for the reply.
     
    Some comments and questions...
     
    Sounds like we start in the same spot, namely some examination of the static setup of the vehicle. Are you applying your 5% error here for Ride Height, etc.?
     
    You don't quote a number for a % error for accelerations, forces, etc. Any thoughts?
     
    "I've used normalized RMS on FFT and PSD with success" - Is this something like "Spectral RMS" in nCode speak? I will also take a closer look at the impact of damping, as you suggest.
     
    Your "Design Intent" paragraph opens a whole other can of worms. Definitely a challenge with weldments and rubber components. I have been working to get an idea of the normal variation for these things.
     
  • First of all, @Chris Coker, thank you for sharing your thoughts. It has given me a good idea of how to attack a long-standing issue in the correlation between my CAE model and testing on a particular track (frequency domain).
     
    @James J. Patterson, the problem we have encountered with FFT and PSD plots is that they are subjective comparisons. If you have 10 or more locations/plots to compare, bias in judgement creeps in. A quick way to quantify the data is to use the relative damage spectrum (in nCode speak) to compare the signals by frequency band.
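
    The relative damage spectrum itself is proprietary nCode functionality, but a crude band-wise stand-in can be sketched directly; the frequency bands below are placeholder assumptions:

    ```python
    import numpy as np
    from scipy.signal import welch
    from scipy.integrate import trapezoid

    def band_rms_ratio(test, sim, fs, bands=((0.5, 5.0), (5.0, 20.0), (20.0, 50.0))):
        """Sim/test RMS ratio per frequency band, computed from Welch PSDs.
        A crude band-wise stand-in for a relative-damage-spectrum comparison."""
        f, p_test = welch(test, fs=fs, nperseg=4096)
        _, p_sim = welch(sim, fs=fs, nperseg=4096)
        ratios = {}
        for lo, hi in bands:
            mask = (f >= lo) & (f < hi)
            rms_test = np.sqrt(trapezoid(p_test[mask], f[mask]))
            rms_sim = np.sqrt(trapezoid(p_sim[mask], f[mask]))
            ratios[(lo, hi)] = rms_sim / rms_test
        return ratios  # e.g. flag bands outside 0.85-1.15 for a +/-15% criterion
    ```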
     
    Currently, we receive a lot of support from our physical testing team to conclude whether the achieved correlation is sufficient for that type of model in a particular frequency band (the problem is with amplitudes). A lot of effort goes into looking at the failure history for models of that particular class, the probable problem areas anticipated in this model, etc.
     
    The acceptable correlation percentage is not fixed and varies, especially when we are dealing with random data. As a general ballpark, I use ±15% variation between the test and CAE signals as my acceptability criterion.
     
  • For static measurements like ride heights, I would definitely expect to be within 5% or better.
     
    For accelerations and forces, anything better than 10% would generally be acceptable. But if all the accelerations were a little high (for example), I might investigate whether there's a DC offset in either the test data or the analysis results that's causing the discrepancy.
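
    A quick sketch of that DC-offset check, assuming aligned arrays (the names are illustrative):

    ```python
    import numpy as np

    def dc_offset(test, sim):
        """Mean offset between aligned channels. A consistent bias across many
        channels suggests a DC offset in either the test or the analysis data."""
        return float(np.mean(sim) - np.mean(test))

    # If an offset is confirmed, remove the means before comparing peaks:
    # sim_ac, test_ac = sim - np.mean(sim), test - np.mean(test)
    ```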
     
    In my opinion, FFT and PSD plots do not need to be subjective. I've had projects where customers wanted a pass/fail criterion in order to move to the next phase of the project. For anything in the frequency domain, I would expect the simulation to track the individual peak frequencies very accurately, within 2-3%. At a basic level, a peak frequency is mainly a function of mass and stiffness. If your frequencies are off, there is very likely a fundamental problem with the masses or stiffnesses in the model. These are usually easy to resolve. The magnitudes of the peaks are primarily a function of damping, and damping values, whether for bushings or structural damping for flexible bodies, can often be difficult to obtain. So sometimes I have to be a bit more flexible on the peak magnitudes, but you can also devise methods using test data to help you tune the damping values in the model (within reason).
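
    No specific tuning method is named above; one common option (a suggestion, not from the thread) is to estimate the damping ratio of a dominant test peak with the half-power (-3 dB) bandwidth method and nudge the model's modal or bushing damping toward it:

    ```python
    import numpy as np

    def half_power_damping(f, psd):
        """Damping ratio of the dominant peak via the half-power (-3 dB)
        bandwidth method: zeta ~ (f2 - f1) / (2 * f_peak)."""
        i_pk = int(np.argmax(psd))
        half = psd[i_pk] / 2.0  # half-power point on a power spectrum
        lo = i_pk
        while lo > 0 and psd[lo - 1] >= half:
            lo -= 1
        hi = i_pk
        while hi < len(psd) - 1 and psd[hi + 1] >= half:
            hi += 1
        return (f[hi] - f[lo]) / (2.0 * f[i_pk])
    ```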
     
     
    One other quick bit of advice: if the test data has been filtered, make sure the analysis results are filtered in the same way. I've run into correlation issues in the past where applying the same filter to both test and simulation resolved them.
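
    A minimal sketch of matching the filtering, assuming the test lab used a low-pass filter; the Butterworth type, 50 Hz cutoff, and 4th order are placeholders you would match to the test spec:

    ```python
    from scipy.signal import butter, filtfilt

    def match_filter(x, fs, cutoff_hz=50.0, order=4):
        """Zero-phase Butterworth low-pass. Apply the SAME call to both the
        test and the simulation channel so the comparison stays apples to apples."""
        b, a = butter(order, cutoff_hz, btype='low', fs=fs)
        return filtfilt(b, a, x)
    ```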
     
    Best regards,
    Chris
  • Lots of views for this topic, but only a couple of responses. Anybody else want to share?
  • Simulation engineers tend to think a test result is the absolute truth and that simulation results always have to come as close to it as possible, without questioning sensor quality, operating-condition uncertainties, raw-data processing oddities, etc. As an example, just have a look at the raw data of a "quasi-stationary" aligning torque vs. slip angle measurement, and you'll know what I mean. In a long career as a developer of simulation tools and models, I've had so many cases where discrepancies between test and simulation uncovered issues with the measurement rather than the simulation.
  • I agree completely. The longer I work with the test side of the business, the more I realize they have as many problems as we do, if not more.
     
    That said, depending on the boss (or customer), you may still be forced to prove your model mimics what the "actual parts did in the test". Hence my question and dilemma...
  • Sure, I know what you mean, and I didn't want to question the urgent necessity of thorough validation of simulation results at all. I just wanted to bring in this important but frequently overlooked aspect.
  • So Michael, you never did actually say anything about how you convince your customer the model is right...
Good one!
    It is a matter of not just one comparison, but of using as many different measurement sources, operating conditions, maneuvers, etc. as possible, and of explaining to the customer that the underlying modeling uses sound mathematics and physics and observes all relevant physical properties. Validation is a never-ending story and continues with every further model development or refinement.