
10:1 Rule - What spec do you use?

There's been some debate here about what the CMM is actually capable of measuring to when it comes to the 10:1 rule. Obviously we can push the reported resolution on the dimensions out as far as we like. What number do you guys reference for this?
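For reference, the 10:1 rule just compares the tolerance band being checked against the uncertainty of the measuring system. A minimal sketch of that arithmetic, with hypothetical tolerance and uncertainty values chosen only for illustration:

```python
# 10:1 rule (test accuracy ratio) sketch: the measuring system's uncertainty
# should be no more than 1/10 of the tolerance band being checked.
# The numbers below are hypothetical examples, not anyone's actual spec.

def accuracy_ratio(tolerance_band_mm: float, uncertainty_mm: float) -> float:
    """Return the tolerance band divided by the measurement uncertainty."""
    return tolerance_band_mm / uncertainty_mm

# Example: a +/-0.05 mm tolerance (0.1 mm band) measured with a CMM whose
# real-world uncertainty is 0.010 mm gives exactly a 10:1 ratio.
if __name__ == "__main__":
    ratio = accuracy_ratio(tolerance_band_mm=0.1, uncertainty_mm=0.010)
    print(f"Accuracy ratio: {ratio:.1f}:1 -> {'OK' if ratio >= 10 else 'too low'}")
```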
  • Koba tests and the like are useful and are certainly the best way to reliably compare one machine to another, or one machine to itself over time, in a completely neutral, objective way. The problem is that they rarely reflect the true performance that you will get in a real production situation. When you add up the impact of some temp fluctuation, dirt on the parts, surfaces that aren't perfectly smooth, wrist indexing, stylus changing, mixing of TTP and analog scanning, the age of the captain, etc... the real uncertainty is far higher than what is shown by the Koba test. Whenever people ask me what sort of uncertainty we can expect I try to give them a real world value so expectations stay realistic.

    Part of our process involves running the exact same program on the same part twice, with a time separation of ~5 hours to 1 day. These parts happen to be almost exactly 1 meter long, so I have seen hundreds and hundreds of samples of the uncertainty we can expect over 1 meter in a real-world environment, with all potential sources of uncertainty included. Based on this I can say with confidence that the repeatability over 1 meter is in the neighborhood of 0.010 mm on our machines. April has a very similar machine (a bit smaller), so I'm pretty confident this is close to the value they will experience as well, if they keep the machine in a well-controlled environment. The Koba test will certainly be much better than this.
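    If anyone wants to turn that kind of paired-run data into a repeatability number, here is a rough sketch of one way to do it. The sample values and the 2-sigma convention are my own assumptions, not the poster's actual method:

    ```python
    # Estimate run-to-run repeatability from paired measurements (same part,
    # same program, measured twice). Sample data below is made up.
    import statistics

    def repeatability(run1_mm: list[float], run2_mm: list[float]) -> float:
        """Twice the standard deviation of the run-to-run differences
        (roughly a 95% band, assuming normally distributed differences)."""
        diffs = [a - b for a, b in zip(run1_mm, run2_mm)]
        return 2 * statistics.stdev(diffs)

    # Hypothetical length results over ~1 m, in mm:
    run1 = [1000.003, 999.998, 1000.006, 1000.001, 999.995]
    run2 = [1000.007, 999.994, 1000.001, 1000.004, 999.999]
    print(f"Estimated repeatability: {repeatability(run1, run2):.3f} mm")
    ```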