
Q-DAS Data Change/Update

I have a couple of questions and just need verification. We made changes to a dimension in a couple of PC-DMIS programs, and when the results transfer to qs-STAT, the data from the update does not stay with the data it was originally output with. I have attached a screenshot: as you can see, the new data for Item 30 is dated 1-26 but sits in the row of the 9-18 data. Is there a setting that would put the new data in the same row it was measured in, instead of at the beginning of the database? Or do we have to create a new database? We are hoping we don't have to create a new database for small changes, but if that is what it takes, we will.


Reply
  • Hello Heather, in a basic database view a line does not necessarily mean "a measured piece". I suspect one of the changes you made to the existing programs was that you added some new characteristics or changed the "key fields" of existing ones. According to the upload rules (key fields), these new or changed characteristics were created as new characteristics in the test plan in the database, so their values appear at the beginning of the table (see the illustrative sketch after this reply).

    From a data-flow point of view, I would suggest changing your workflow to avoid this. If changes are made to the test plan, perhaps a new test plan should be created with a revision number? There are different approaches to this, and I really recommend contacting your local support for a consultation.

    From a data-analysis perspective, the value mask is not the best graphic to use when analysing the data. Perhaps a different table view could help you overcome this? It depends on what your goal is.

    The last point: how to improve the view in the value mask? There are a couple of options for sorting the data and possibly filling in the blank values for characteristics (if you have some additional data you can use as a reference). However, these settings cause the test plan to load in read-only mode, because you are changing the natural sequence. Performance may also be affected when we are talking about larger data sets.

    In conclusion, I would recommend solving the data flow in such a way that this does not happen. If you have any doubts or questions, feel free to reach out to me.
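
If it helps to picture why Item 30 ended up as a separate, mostly empty column, here is a minimal sketch (Python, purely illustrative, not the actual Q-DAS matching logic): a characteristic is identified by its key fields, and if any of them change, the upload cannot match it to the existing characteristic and creates a new one. The specific key fields used here (K1001 part number, K2001 characteristic number, K2002 characteristic description) are an assumption for the example; the actual set depends on the key fields configured for your upload.

```python
# Simplified illustration of key-field matching during upload.
# Field names (K1001, K2001, K2002) are assumptions for this example.

def match_key(characteristic):
    """Build the identity a simplified uploader would match on."""
    return (
        characteristic.get("K1001"),  # part number
        characteristic.get("K2001"),  # characteristic number
        characteristic.get("K2002"),  # characteristic description
    )

def classify_upload(existing_plan, incoming):
    """Split incoming characteristics into matched vs. created-as-new."""
    known = {match_key(c) for c in existing_plan}
    matched = [c for c in incoming if match_key(c) in known]
    created_as_new = [c for c in incoming if match_key(c) not in known]
    return matched, created_as_new

# Example: the description of Item 30 was changed in the measurement program,
# so its key fields no longer match and it is created as a new characteristic.
existing = [{"K1001": "PART-01", "K2001": "30", "K2002": "HOLE DIA"}]
incoming = [{"K1001": "PART-01", "K2001": "30", "K2002": "HOLE DIA REV B"}]

matched, created_as_new = classify_upload(existing, incoming)
print("matched:", matched)                # []
print("created as new:", created_as_new)  # the changed Item 30
```

In this simplified model the changed Item 30 starts its own value column, which is why its 1-26 measurement shows up alongside the old 9-18 rows rather than continuing the existing column.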
