Not being a statistician, what exactly is the standard deviation of each tip telling me?
Beyond the formal definition, "a measure of how dispersed the data is in relation to the mean",
if one of the tips has a standard deviation of .0002 (inches) and another .0005, how does that translate
to the accuracy of the real-time measurements made with those tips?
Is that the deviation of the dynamic tip radius, and does that differ from a qualification check, which
seems to give an XYZ and polar radius deviation of each tip relative to the sphere measurement location?
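As a rough illustration of what that number means in practice, here is a minimal sketch: the standard deviation reported for a tip is the scatter of the individual qualification hits about the best-fit sphere, so a tip with twice the standard deviation scatters its touches roughly twice as far. The residual values below are hypothetical, purely for illustration.

```python
import math

# Hypothetical radius residuals (inches) from two tip qualifications:
# each value is (measured distance from the fitted sphere center) minus
# the fitted radius, one value per hit.
tip_a = [0.0002, -0.0001, 0.0003, -0.0002, 0.0001, -0.0003]
tip_b = [0.0006, -0.0004, 0.0007, -0.0005, 0.0003, -0.0007]

def std_dev(residuals):
    """Sample standard deviation of the residuals about their mean."""
    n = len(residuals)
    mean = sum(residuals) / n
    return math.sqrt(sum((r - mean) ** 2 for r in residuals) / (n - 1))

# Roughly 95% of single hits fall within about +/- 2 sigma of the
# fitted surface, so the .0005 tip is trustworthy to a wider band
# than the .0002 tip on any individual touch.
print(std_dev(tip_a))
print(std_dev(tip_b))
```

So the smaller the standard deviation, the tighter the band within which any single real-time touch can be expected to land.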
9 hits on 2 levels?
The TP20 gives a trilobe defect, and axial probing is quite a bit "stronger" than radial probing...
Measure the calibration sphere with 36 hits on 4 levels and look at the form to show where the maximum defect is.
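The denser pattern suggested above can be sketched as follows. This is a minimal illustration, not any CMM software's actual routine: the hit layout, the 0.5 in nominal radius, and the helper names `hit_pattern` and `form_error` are all assumptions for the example. Form (sphericity) is taken as the spread between the largest and smallest radial residual, which is where a trilobe or axial-vs-radial defect shows up.

```python
import math

def hit_pattern(levels=4, hits_per_level=9, radius=0.5):
    """Generate nominal XYZ touch points on the upper hemisphere of a
    calibration sphere: hits_per_level points per latitude level."""
    points = []
    for i in range(levels):
        # Latitude angles spread between near the equator and near the pole.
        phi = math.radians(15 + i * (60 / (levels - 1)))
        for j in range(hits_per_level):
            theta = 2 * math.pi * j / hits_per_level
            points.append((radius * math.cos(phi) * math.cos(theta),
                           radius * math.cos(phi) * math.sin(theta),
                           radius * math.sin(phi)))
    return points

def form_error(points, radius):
    """Form (sphericity) as the spread of radial residuals about the
    nominal center at the origin."""
    residuals = [math.dist(p, (0.0, 0.0, 0.0)) - radius for p in points]
    return max(residuals) - min(residuals)

pts = hit_pattern()   # 36 hits on 4 levels
print(len(pts))
# Ideal points give near-zero form; real probe hits reveal the defect
# and, hit by hit, where on the sphere it is largest.
print(form_error(pts, 0.5))
```

With real measured hits in place of the nominal points, plotting each residual against its position on the sphere shows whether the error is trilobed and whether the axial hits sit tighter than the radial ones.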