I am upgrading and automating a measuring process, and this has led me to examine the code behind a computer-based data capture system that has been in place for a while.
i.e., various parameters of parts are measured at an AQL, the data is manually entered into the PC, and the batch is passed/failed on the basis of manipulation of the data entered.
I am looking to have the process automated, removing the manual entry of the data, and have discovered a bit of code in the data handling that stipulates that the batch is failed if the average value of parameter X for the sample falls outside the tolerances widened by +/- 1 standard deviation.
i.e., the specification for parameter X is 10-20, but this check widens the acceptance criteria to a lower limit of 10 - 1 SD and an upper limit of 20 + 1 SD.
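To make the rule concrete, here is a minimal sketch of what I understand the legacy check to be doing (the function name and default limits are my own, not from the actual code):

```python
import statistics

def batch_passes(values, lower=10.0, upper=20.0):
    """Sketch of the legacy acceptance rule as I read it:
    the batch fails only when the sample mean falls outside the
    specification limits widened by one sample standard deviation."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation
    return (lower - sd) <= mean <= (upper + sd)

# A sample centred in spec passes; one whose mean sits well above
# the widened upper limit fails.
print(batch_passes([14.0, 15.0, 16.0]))   # within widened limits
print(batch_passes([25.0, 25.1, 24.9]))   # mean far above 20 + 1 SD
```

Note the odd consequence: a highly variable sample (large SD) gets a wider acceptance window, so the rule is actually more forgiving of inconsistent batches.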
I don't understand why the creator of the code widened the tolerances by 1 SD, but I'm uncomfortable removing the check without justification.
Any opinions?