I am looking into this standard to verify some things we are doing, and I'm having a hard time understanding the classification table: how to apply the fixed error for resolution versus the % of reading / % of displacement, and whether there is some crossover from fixed to relative error as there is in E83.
Can someone explain it? The standard doesn't give any examples and I'm having a hard time getting my head wrapped around this (maybe because it's Monday...)
For instance, I have a machine with 0.1 µm resolution.
To meet Class B, what is the allowable +/- tolerance for displacement error at 100 mm?
I am putting together an Excel sheet so I can enter simple inputs and get the correct allowable error for any given set point based on the class table.
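For what it's worth, standards with this kind of classification table usually define the allowable error at a given set point as the *larger* of the fixed-error term and the percent-of-reading term, so the crossover happens where the two are equal (reading = fixed error / fraction of reading). Here is a minimal sketch of that logic; the Class B numbers below are placeholders I made up for illustration, not values from the standard, so check them against the table in your copy before using anything like this:

```python
def allowable_error(reading_mm, pct_of_reading, fixed_error_mm):
    """Allowable error at a set point: the greater of the relative term
    (pct_of_reading * reading) and the fixed error term.

    Below the crossover reading (fixed_error_mm / pct_of_reading) the
    fixed error governs; above it, the percent-of-reading term governs.
    """
    return max(pct_of_reading * reading_mm, fixed_error_mm)

# Placeholder "Class B" limits -- NOT the standard's actual values:
PCT = 0.005        # 0.5 % of reading, as a fraction
FIXED_MM = 0.005   # 0.005 mm fixed error

# At 100 mm the relative term governs: 0.005 * 100 = 0.5 mm
print(allowable_error(100.0, PCT, FIXED_MM))

# At 0.5 mm the fixed term governs: 0.005 mm
print(allowable_error(0.5, PCT, FIXED_MM))
```

In a spreadsheet the same thing is just `=MAX(pct*reading, fixed)`, which makes it easy to tabulate the allowable tolerance for every set point and class in one sheet.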