Not being a statistician, what exactly is the standard deviation of each tip telling me?
Beyond the formal definition ("a measure of how dispersed the data is in relation to the mean"),
if one of the tips has a standard deviation of .0002 (inches) and another .0005, how does that translate
to the accuracy of real-time measurements with those tips?
Is that the deviation of the dynamic tip radius, and does that differ from a qualification check, which
seems to give an XYZ and polar radius deviation of each tip relative to the sphere measurement location?
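To make the "dispersion relative to the mean" definition concrete: one common way qualification software summarizes tip scatter (an assumption here, not a statement about any particular package) is the standard deviation of each sphere hit's distance-to-center versus the best-fit radius. A minimal sketch:

```python
import math

def tip_stdev(hits, center, radius):
    """Population standard deviation of each hit's radial residual
    (distance from sphere center minus best-fit radius).
    `hits` is a list of (x, y, z) probe contacts; `center` and
    `radius` come from the sphere fit. This mirrors one common way
    qualification scatter is summarized -- an assumption, not a
    reference to any specific software's internals."""
    residuals = [math.dist(h, center) - radius for h in hits]
    n = len(residuals)
    mean = sum(residuals) / n
    return math.sqrt(sum((r - mean) ** 2 for r in residuals) / n)
```

On this reading, a tip that qualified at .0002 scatters its hits about the fitted sphere roughly 2.5x more tightly than one at .0005, so repeated measurements with the first tip should cluster correspondingly tighter.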
I did dwell on that and got a headache.
If I were taking a point on the side of a part at 1, 1, -.5, 0, 1, 0 (XYZIJK) with tip A0B0,
which qualified with a standard deviation of .0001 (everything in inches),
and then took the same point with tip A105B-90, which qualified with a standard deviation
of .0006,
what would be the difference in the total deviation (T value) between the two points?
Would you elaborate on the effect of measurement uncertainty?
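For anyone who wants to reason through this numerically: the T value is the signed deviation of the measured point from nominal, projected onto the nominal surface normal (IJK). The nominal point and normal below come from the question; the two measured hits and the treatment of each qualification standard deviation as a 1-sigma repeatability are illustrative assumptions only.

```python
import math

def t_value(nominal_xyz, ijk, measured_xyz):
    """Signed deviation of measured point from nominal,
    projected onto the (normalized) nominal vector IJK."""
    delta = [m - n for m, n in zip(measured_xyz, nominal_xyz)]
    norm = math.sqrt(sum(c * c for c in ijk))
    return sum(d * c / norm for d, c in zip(delta, ijk))

# Nominal point and vector from the question: 1, 1, -.5, 0, 1, 0
nominal, normal = (1.0, 1.0, -0.5), (0.0, 1.0, 0.0)

# Hypothetical measured hits from each tip (made-up values)
t_a = t_value(nominal, normal, (1.0, 1.0003, -0.5))  # tip A0B0
t_b = t_value(nominal, normal, (1.0, 0.9996, -0.5))  # tip A105B-90

# If each tip's qualification stdev is treated as 1-sigma repeatability,
# the expected spread of the difference (t_a - t_b) adds in quadrature:
sigma_diff = math.sqrt(0.0001 ** 2 + 0.0006 ** 2)  # ~.0006, dominated
# by the .0006 tip -- the tighter tip barely changes the combined spread
```

The point of the quadrature step is that the disagreement between the two tips is governed almost entirely by the worse qualification, which is one way to frame the measurement-uncertainty question.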