
Standard deviation

Hello all,

Not being a statistician, what exactly is the standard deviation of each tip telling me?
Beyond the formal definition of "a measure of how dispersed the data is in relation to the mean," if one of the tips has a standard deviation of .0002 (inches) and another .0005, how does that translate
to the accuracy of the real-time measurements taken with those tips?
Is that the deviation of the dynamic tip radius, and does it differ from a qualification check, which
seems to give an XYZ and polar radius deviation for each tip relative to the sphere measurement location?

Thanks for any technical insight
  • Assuming the calibration follows a normal distribution, roughly 95% of the hits will fall within the measured radius ±0.0004" in your first case, and ±0.001" in the second.
    You can measure the calibration sphere with the same number of hits and the same number of levels as the calibration, then look at the form, or compute the spread of radii with something like (a Python sketch of the same idea follows at the end of this reply):
    ASSIGN/SCOPE=MAX(SQRT(DOT(SPH1.HIT[1..SPH1.NUMHITS].XYZ-SPH1.XYZ,SPH1.HIT[1..SPH1.NUMHITS].XYZ-SPH1.XYZ)))-MIN(SQRT(DOT(SPH1.HIT[1..SPH1.NUMHITS].XYZ-SPH1.XYZ,SPH1.HIT[1..SPH1.NUMHITS].XYZ-SPH1.XYZ)))
    Other software could give you the spread of radii directly.
    I don't know what type of probe you use or how many hits you take for a calibration, but I would say that 0.0005" is a very bad calibration... (with an LSPX1 L50 D5, I usually get 0.0002 mm of stddev).
    The same value from 25 hits or from 5 hits doesn't give the same evaluation of the calibration...
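
    A minimal Python sketch of that idea, not PC-DMIS code; the sphere centre and hit coordinates below are made-up example values:

    import math

    # Hypothetical sphere centre and measured hit coordinates (illustrative values only).
    sphere_center = (0.0, 0.0, 0.0)
    hits = [
        (0.5003, 0.0001, -0.0002),
        (-0.0002, 0.5001, 0.0003),
        (0.0004, -0.0003, 0.4998),
        (0.3539, 0.3536, 0.0002),
    ]

    # Radius of each hit from the centre, i.e. SQRT(DOT(hit - center, hit - center)).
    radii = [math.dist(h, sphere_center) for h in hits]

    # Spread of radii, the equivalent of the MAX(...) - MIN(...) expression above.
    scope = max(radii) - min(radii)
    print(f"spread of radii: {scope:.5f}")

    # Assuming a normal distribution, ~95% of hits fall within +/- 2 standard deviations.
    for sigma in (0.0002, 0.0005):
        print(f'stddev {sigma}"  ->  ~95% of hits within +/- {2 * sigma:.4f}"')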
  • The standard deviation is essentially telling you how much variation your points have. A large standard deviation means there is a lot of variation, which could be caused by a number of undesirable things - dirt on the tip / cal sphere, a damaged tip / cal sphere (chips, flats, scratches, etc.), or even an indication that something is loose - styli not tightened correctly, cal sphere not bolted down fully; the list goes on.

    As Jefman said, the smaller the standard deviation the better, but it is somewhat dependent on which machine you are using and what type of sensor you have.
  • Thanks for the input,
    I use a Renishaw PH10M head with a TP20.
    The probe build is normally a 100mm extension, the TP20, a 20mm extension, and then either a 2mm by 20mm tip
    or a 3mm by 20mm tip.
    I normally get a .0002 to .0005 standard deviation.

    Thanks
  • Sorry,
    The .0002 to .0005 standard deviation values are in inches.
  • 9 hits on 2 levels?
    The TP20 has a trilobe error, and axial probing is quite a bit "stronger" than radial probing...
    Measure the calibration sphere with 36 hits on 4 levels and look at the form to see where the maximum defect is (a sketch of such a hit pattern follows this reply).
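
    A minimal Python sketch of one way to lay out 36 nominal touch points on 4 levels of a 1" calibration sphere; the latitude angles are assumptions, not a prescribed PC-DMIS strategy:

    import math

    radius = 0.5                              # 1" calibration sphere
    latitudes_deg = [15, 40, 65, 90]          # assumed levels; 90 deg is the equator
    hits = []
    for lat in latitudes_deg:
        for i in range(9):                    # 9 equally spaced hits per level
            lon = i * 360.0 / 9
            phi, theta = math.radians(lat), math.radians(lon)
            hits.append((radius * math.sin(phi) * math.cos(theta),
                         radius * math.sin(phi) * math.sin(theta),
                         radius * math.cos(phi)))

    print(len(hits), "nominal touch points")  # 36
    # With real measured points, the form is the max minus the min radial deviation from the
    # fitted centre, which is where a TP20 trilobe error will show up.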
  • Standard deviation is defined as the square root of the average of the squared deviations of the values from their mean.
    So, let's dwell on how that applies to a probe calibration.
    Cal sphere is 1" diameter.
    The radial results of the sphere at 9 hits are as follows:
    0.5005
    0.5009
    0.5010
    0.4997
    0.5002
    0.4990
    0.5004
    0.5000
    0.4996
    This equates to an average of 0.500144",
    with a measurement uncertainty (max - min) of 0.002".
    The standard deviation of those hits is just 0.000608".

    So your measurement uncertainty in this example is 3.3x worse than the standard deviation value.
    This can and will get significantly worse if the outlier points are distributed about a larger range.
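
    A quick Python check of those numbers (using the population standard deviation, i.e. dividing by n, which is what gives the 0.000608" above):

    import statistics

    radii = [0.5005, 0.5009, 0.5010, 0.4997, 0.5002, 0.4990, 0.5004, 0.5000, 0.4996]

    mean = statistics.fmean(radii)            # average radius
    spread = max(radii) - min(radii)          # max - min over the 9 hits
    stdev = statistics.pstdev(radii)          # population standard deviation

    print(f'average:             {mean:.6f}"')      # 0.500144"
    print(f'max - min:           {spread:.4f}"')    # 0.0020"
    print(f'standard deviation:  {stdev:.6f}"')     # 0.000608"
    print(f'ratio (range/stdev): {spread / stdev:.1f}x')   # ~3.3x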
  • Thanks louisd,

    I did dwell on that and got a headache. 🙂
    If I take a point on the side of a part at 1,1,-.5,0,1,0 (XYZIJK) with tool A0B0,
    which qualified with a standard deviation of .0001 (everything in inches),
    and then take the same point with tool A105B-90, which qualified with a standard deviation
    of .0006, what would be the difference in the total deviation (T value) between the two points?

    Would you elaborate on the effect of measurement uncertainty?

    Thanks again