
Change in accuracy from reducing pre-hit distance?

I’ve been trying to reduce CMM runtime in our QC department, and I’ve found that using fly mode and halving the pre-hit/retract distance cuts an average of 13% off the runtime for our product. I’m trying to make this a standard, so I want to show my time studies to management. Before I do, I wanted to make sure there were no changes in accuracy.

From what I know, the only downside to shortening the pre-hit distance from .1 to .05 is that on an inconsistent part, like a casting, the probe might move too close before taking a hit and throw an error when it touches the part. I also know it shortens the distance it will search past the theoretical point, but that can be adjusted by setting the check distance, right? So, is there any other reason you wouldn’t want to shorten the pre-hit distance?
  • If you use FIND HOLE then the search distance is based on the pre-hit/retract distance.
  • I would say that retract distance has no effect on accuracy, because by then the hit coordinates have already been taken.
    For the prehit, I think of it as "time" for the dynamic move of the bridge to stabilize (deceleration), so using too small a distance could affect the accuracy. Here, you propose to use 1.27 mm; I think that's OK.
    As usual, it's important to use the same speed during measurement and calibration. Depending on the probe, you might gain a little more by changing the speed without losing too much accuracy.
  • As far as prehit and retract distance goes, it really depends on the machine and parts in question. When I find a part that isn't located that accurately, I might bump the prehit and retract distances up to .25". I might lose a few seconds a part, but the program always works and the machine doesn't stop. As JEFMAN alluded to, there is a minimum distance needed to stabilize the machine before the hit. If you go under that distance you will lose repeatability and accuracy.

    Have they already determined that distance? Have you performed a Gage R&R showing that there is no loss? Are the parts made accurately enough that the program will work for every part every time and the machine won't sit? Is the risk of the machine stopping worth the ~1-2 hours of run time saved per day? Will the reduced run time end with the CMM sitting unused? Is there the potential for a burr on the part? With the longer prehit, a burr might not cost you any time, but at the shorter prehit it could stop the machine and leave the part unfinished. Depending on the parts and industry in question, are a few seconds or minutes of savings per part worth the potential risk of a quality escape in the future?

    On the older machines I program, I have found that dropping the prehit distance from .1 to .05 is enough that parts that should not fail will fail. Running the longer distance helps more than it costs. I didn't guess on the prehit and retract distances I use; I tested them and found what worked in my situation.

    As far as fly mode goes, it doesn't always work as well as intended. In my previous attempts at using fly mode, I might have saved a few seconds per part, but on some random parts it would sit there and take extra hits on ID holes and do other unexpected things, leading to me being called to the machine. The benefit of fly mode on the parts I run was not worth the risk that the program might not work correctly every time. Even in the training Hexagon gave me, the AE said it wasn't worth the time to try it out and that it was more of a feature for their sales dept.

    I think you also have to look at the bigger picture. Are the parts being run lights out? Is there someone sitting at the machine all the time? If the machine stops will there be more fallout than if it continuously runs a bit more slowly? I'm all for saving time on the machine and reporting those savings to management to better my situation, but I also know that if a machine I program stops when I'm not there, I am the one that will be questioned for it, not the person running it.
  • Just another thing to consider: the type of probe you use can be a factor for the prehit distance. A touch trigger probe like a TP20 registers the hit while approaching the part, so a little distance is needed to reach the proper speed and stabilize. However, analog probes (like the SP25) touch the part and then slowly back off until the trigger force is reached before registering the hit, then retract at full speed. For those probes the prehit distance doesn't matter for accuracy - unless you use Fastprobemode, which makes them take hits on the approach like a touch trigger probe.
  • What machine are you using? Just wondering, because I had issues with this before on a Sheffield. The drives needed a certain amount of distance to stabilize before taking a point, especially if the points taken weren't straight in X, Y, or Z. I never liked those machines. So that goes along the lines of what JEFMAN said in his second sentence.

    Try calibrating your sphere with the same prehit and retract you are using for the program and see if you get any calibration errors.
  • you will likely see more significant time savings by increasing movespeed and touchspeed.
    do a study using the cal sphere.
    write a routine that has assigns for movespeed, touchspeed, prehit, and retract distances.
    then loop the routine, altering one of the assigns within the loop.
    i.e., a touchspeed loop study:

    assign/touchspeedvalue = 2
    measure a circle on the cal sphere at the standard touchspeed.
    loop starts
    assign/touchspeedvalue = touchspeedvalue + 0.1
    measure the circle
    loop until the circle stddev is beyond your reproducibility limits.
    this determines your fastest permissible touchspeed for that probe.

    you can do the same for prehit/retract distance as well as other variables (a rough sketch of the loop logic is at the bottom of this post).

    in my humble opinion movespeed should always be at max, unless your programs, fixture methods, or operators are shoddy.
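
    If it helps picture the loop, here is a rough Python sketch of the same logic. It is not routine code you could paste into the CMM software; measure_circle_stddev() is a hypothetical placeholder for "measure a circle on the cal sphere at this touchspeed and read back its standard deviation", faked here with random noise purely so the loop runs. The limit and step values are assumptions, not recommendations.

    import random

    REPRO_LIMIT = 0.002   # assumed reproducibility limit in mm; use your own gage data
    STEP = 0.1            # touchspeed increment per loop pass
    touchspeed = 2.0      # starting touchspeed from the example above

    def measure_circle_stddev(speed):
        # Hypothetical stand-in for measuring a circle on the cal sphere at the
        # given touchspeed and returning its standard deviation; the noise here
        # simply grows with speed so the loop terminates.
        return abs(random.gauss(0.0005 * speed, 0.0002))

    while True:
        stddev = measure_circle_stddev(touchspeed)
        print(f"touchspeed {touchspeed:.1f}  circle stddev {stddev:.4f} mm")
        if stddev > REPRO_LIMIT:
            # the last speed that stayed within the limit is one step back
            print(f"fastest permissible touchspeed is roughly {touchspeed - STEP:.1f}")
            break
        touchspeed += STEP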
  • One thing I forgot to mention earlier was that the probes need to be calibrated with the prehit and retract distances you are using. If you shorten up the prehit and retract on some programs, but not all, then the calibrations for each probe will need to be managed differently. Can you trust your operators to use the correct calibration for their programs if they differ from one program to another? If you only have one part in each machine and run them 24/7, that is easy to manage. If not, things can get out of hand quickly.
  • I’m primarily running on a Hexagon Global S. I’ve been calibrating to the same pre-hit distance I’m using in the program. I haven’t noticed any accuracy issues myself, but wanted some more experienced opinions.
  • I think the main problem is that they just don’t want to change anything, so once a new default is set they wouldn’t change it.
  • Why would prehit/retract affect accuracy? It doesn't make sense to me, sorry. As long as the machine isn't bouncing around and tripping up on surfaces, shortening or lengthening the approach before the stylus hits should not affect the accuracy at all.