
"Default" Math Vs "Legacy"?

What are we calling the new default Geo-Tol math? "New Math"? "Default Math"?
I remember this gunfight from a loooong time ago; it ended badly for Hexagon, and for us too.
What I mean is:
The "New Math" best-fits a little too aggressively for me compared to "Legacy". Last time I had reports coming out with perfect true position, our customer Lockheed Martin mopped the floor with PC-DMIS and started the whole ISO best-fit shootout, disallowing us from using the PC-DMIS best-fit algorithms. The "New Math" best-fits as well. Here is a comparison:

What do you think?
And can we come up with a disparaging term, like "New Math", other than "Geo-Tol"? "Geo out-of-Tol", maybe?

thx
  • Dear Illustrious Programmers and other lecherous Forum Members,


    I sincerely appreciate ALL the input to this conversation, whether emotional and overdramatic, or empirical and mildly dramatic and threatening: (Oooo000000000000000ooohh Nooooooooo000000000000000000, I LUUUUUUUUUUUUUUUUUUV drama!) I really feel we've popped the cherry on this topic, and before you all kick me out of bed, I'd like to ask one more question of us.

    I am mildly enamored with the "newmath" unrelated actual mating envelope (UAME) and the local and global least-squares algorithms, but I empirically repair to least squares because, in my understanding, the middle of something is the only thing we can mathematically deduce with any certainty. Still, I work for a living, and I'm only as valuable as I am accurate in my measurements, period. Nobody pays me *** for guessing.
    I've tried my hardest to understand the gobbledy-gook here:

    https://docs.hexagonmi.com/pcdmis/2020.2/en/helpcenter/mergedProjects/core/geometric_tolerances/Evaluating_Size_with_the_Geometric_Tolerance_Command.htm

    But I want to know from the guys who somehow survive this mess day in and day out:

    How do you use these options when probing diameters over 1.0000" with tolerances below .0005" total on Global machines?
    What combinations do you use to homogenize and harmonize UAME and LS Local Size, without datum shift, to give clear, concise, and stable direction for machine corrections and set-ups?

    Off-topic:
    (I woke up this morning wearing all my clothes from yesterday except for my shirt. My girlfriend woke me up and said: "Baby, you need to get to work now, you're late." I asked her: "What happened, what did I do last night?" and with a faraway, glassy-eyed stare and a smile she said: "I'll tell you later." Wink)

    Sincerely,

    SPace-Cowboy
    Gabriel
  • With a total tolerance of 0.0005", there are so many variables you need to isolate:

    -Environmental: temperature of the room, temperature of the part, temperature of the part after you've handled it (rough thermal math is sketched after this list).
    -Machine's physical limits: linear/volumetric reproducibility, probe sensor reproducibility.
    -Fixture/setup bias: datuming strategy, your routine's ability to isolate part/fixture interaction, number of hits/scanning spacing (if available) and where hits land relative to the manufacturing process, how the part is restrained on the machine.
    -FOD (oils on your hand transferring to the parts, debris on the probe, debris on the part sample)
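
    To put that thermal line-item in perspective, a back-of-the-envelope sketch in Python; the CTE and temperature offset are illustrative assumptions, not measured values:

    ```python
    # Thermal growth of a 1.0000" steel bore vs. a 0.0005" total tolerance.
    CTE_STEEL = 6.4e-6     # in/in/degF, typical low-carbon steel (assumed)
    NOMINAL_DIA = 1.0000   # inches
    TEMP_OFFSET = 5.0      # degF above the 68 degF reference (assumed)
    TOTAL_TOL = 0.0005     # inches, total size tolerance

    growth = CTE_STEEL * NOMINAL_DIA * TEMP_OFFSET
    print(f"Thermal growth: {growth:.6f} in")               # ~0.000032 in
    print(f"Share of tolerance: {growth / TOTAL_TOL:.0%}")  # ~6% eaten by 5 degF
    ```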

    Looking at your profile, an old Global Performance Silver with a massive 20-40-20 size simply -might not be accurate enough- to discern pass/fail at this tight a tolerance. You need to assess your gage's capability via MSA to validate its fitness for the task.
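
    A quick gut-check is a test accuracy ratio (TAR); the MPE value below is a placeholder assumption, so substitute the number from your machine's calibration certificate:

    ```python
    # Test accuracy ratio: total tolerance vs. machine measurement error.
    MPE_E = 0.00015      # inches at this length -- PLACEHOLDER, not a real spec
    TOTAL_TOL = 0.0005   # inches, total size tolerance from the print

    tar = TOTAL_TOL / MPE_E
    print(f"TAR = {tar:.1f}:1")  # common rules of thumb want 4:1 to 10:1
    if tar < 4:
        print("Likely unfit for this tolerance without a full MSA/uncertainty budget.")
    ```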
  • Yes, I agree, very much so. Therefore we are abandoning the CMM for the size of the feature, but we are still concerned with its location. We are assuming the default math constructs a maximum inscribed feature, then reports the centroid of that. What I'd like to see is more of a Zeiss-style construction which, like recompensate, tosses out the probe diameter, locates the centroid, performs a roundness function mathematically filtering at around 3 sigma, re-locates the centroid, then passes the probe diameter back through for probe comp.
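
    Roughly what I have in mind, as a minimal sketch; the Kasa least-squares fit and the 3-sigma radial filter here are my assumptions about the workflow, not Zeiss's actual solver:

    ```python
    import numpy as np

    def recompensate_circle(hits, probe_dia, internal=True, sigma=3.0):
        """Recompensate-style circle evaluation (sketch):
        1) fit the ball-center hits, 2) drop hits beyond `sigma` std devs
        of radial residual, 3) re-fit, 4) re-apply probe compensation."""
        hits = np.asarray(hits, dtype=float)

        def fit(pts):
            # Kasa linear least-squares circle fit
            A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
            b = (pts ** 2).sum(axis=1)
            cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
            return np.array([cx, cy]), np.sqrt(c + cx**2 + cy**2)

        center, radius = fit(hits)
        resid = np.linalg.norm(hits - center, axis=1) - radius
        keep = np.abs(resid - resid.mean()) <= sigma * resid.std()
        center, radius = fit(hits[keep])            # re-fit on filtered hits
        # Internal feature: ball centers ride inside the surface, so add the
        # probe radius back in; external feature: subtract it.
        r_comp = probe_dia / 2.0 * (1.0 if internal else -1.0)
        return center, 2.0 * (radius + r_comp)
    ```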
  • The machine's uncertainty limitations pertain to both location and size.
    If your location tolerance is similar to your size tolerance, you are still using the wrong tool for the job. Unless you shore up every potential contributor of variation as best you can, and prove via MSA that the variation is reproducible to a level your organization/customer expects... you are sh!t outta luck measuring this part on the CMM.

    Default PC-DMIS math is the Best Fit (BF) least-squares methodology (judged with all hits carrying the same "weight").
    You can toggle between ASME and ISO in the settings for each routine, or with the "UseISOCalculations" toggle in the Settings Editor.

    You surely know you can alter the fit strategy of each feature at will:
    BF (LSQ), MAX_INSC, MIN_CIRCSC, MIN_SEP, FIXED_RAD, and BF RECOMP (which tosses out the probe diameter and puts it back in).
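
    Here's a small sketch contrasting a few of those on the same fake hit set. The MAX_INSC / MIN_CIRCSC numbers hold the center at the LSQ solution for simplicity, so they're rough stand-ins, not PC-DMIS's actual solvers:

    ```python
    import numpy as np

    def lsq_circle(pts):
        # Kasa linear least-squares circle fit: x^2+y^2 = 2*cx*x + 2*cy*y + c
        A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
        b = (pts ** 2).sum(axis=1)
        cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
        return np.array([cx, cy]), np.sqrt(c + cx**2 + cy**2)

    rng = np.random.default_rng(0)
    ang = rng.uniform(0.0, 2 * np.pi, 24)
    r_true = 0.5000                                  # nominal 1.0000" bore
    noise = rng.normal(0.0, 0.0001, 24)              # fake form error (assumed)
    pts = np.column_stack([np.cos(ang), np.sin(ang)]) * (r_true + noise)[:, None]

    center, r_lsq = lsq_circle(pts)
    radial = np.linalg.norm(pts - center, axis=1)
    print(f"BF (LSQ)    dia: {2 * r_lsq:.5f}")
    print(f"MAX_INSC ~  dia: {2 * radial.min():.5f}")   # largest circle inside the hits*
    print(f"MIN_CIRCSC~ dia: {2 * radial.max():.5f}")   # smallest circle containing them*
    print(f"MIN_SEP  ~ form: {radial.max() - radial.min():.5f}")
    # * Simplified: center held at the LSQ solution; the real solvers optimize it too.
    ```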
  • For the record, Calypso's strategies are still an averaging estimation of the data hits, just like PC-DMIS's options; it's simply a different software package.
    In fact, Calypso's default strategy is also least squares. It's industry standard to use the median output of hit-point variation.
    https://carl-zeiss-industrial-metrology-llc.helpjuice.com/en_US/calypso/algorithms

    I will absolutely agree that PC-DMIS's Gaussian and hit-filtering strategies are garbage, and historically useless... but they have been working on improving this functionality.

    In any case, you can export the hit data and run it through Excel, MATLAB, Minitab, etc. to help you determine whether the PC-DMIS strategies are correct.
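
    As a sketch of that kind of cross-check in Python (the CSV name and two-column X,Y layout are assumptions about your own export format):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Hypothetical export: a CSV of X,Y hit coordinates dumped from the routine.
    pts = np.loadtxt("circle_hits.csv", delimiter=",")

    def radial_residuals(params, pts):
        cx, cy, r = params
        return np.linalg.norm(pts - [cx, cy], axis=1) - r

    # Seed the fit with the centroid and mean radial distance
    x0 = [pts[:, 0].mean(), pts[:, 1].mean(),
          np.linalg.norm(pts - pts.mean(axis=0), axis=1).mean()]
    fit = least_squares(radial_residuals, x0, args=(pts,))
    cx, cy, r = fit.x
    print(f"Independent LSQ fit: center=({cx:.5f}, {cy:.5f}), dia={2 * r:.5f}")
    # Compare against the diameter PC-DMIS reports for the same hits.
    ```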