smallbear
I'm conducting tests per IEC 60601-2-27. I have a particular question regarding Clause 201.12.1.101.1, Accuracy of Signal Reproduction. The requirement is: "Input signals in the range of ±5 mV, varying at a rate up to 125 mV/s, shall be reproduced on the output with an error of ≤ ±20 % of the nominal value of the output or ±100 μV, whichever is greater."
The test is simple:
1. Set gain to 10 mm/mV.
2. Input a triangular wave.
3. Adjust input signal amplitude to 100% full scale peak-to-valley output.
4. Decrease input signal amplitude by factors of 2, 5, and 10.
Success criteria:
For each signal amplitude in (4), the displayed output shall be linear within ±20 % or ± 100 μV of the full scale output.
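For reference, here's a minimal sketch of how I'm computing the allowed error at each amplitude step, assuming the "whichever is greater" reading of the clause (values in volts, referred to input):

```python
# Sketch of the tolerance per clause 201.12.1.101.1 as I read it: the allowed
# error at each test amplitude is the greater of 20 % of the nominal value
# and 100 uV. Amplitudes follow step 4 (full scale divided by 1, 2, 5, 10).

def allowed_error(nominal_v):
    """Greater of 20 % of nominal or 100 uV, in volts."""
    return max(0.20 * abs(nominal_v), 100e-6)

full_scale = 5e-3  # 5 mV peak at 100 % full scale, per the +/-5 mV input range
for divisor in (1, 2, 5, 10):
    nominal = full_scale / divisor
    print(f"{nominal * 1e3:.2f} mV -> tolerance +/-{allowed_error(nominal) * 1e6:.0f} uV")
```

Under this reading, the 100 µV floor only takes over at the smallest amplitude (0.5 mV), where 20 % would otherwise be exactly 100 µV.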
My question is:
How do you interpret the success criteria? I'm assuming the ±20 % allows a margin of error for output signals that are not perfectly triangular, which would mean it's acceptable to have a noisy/distorted signal?
I'm currently interpreting this as a 20 % linearity requirement on a single test case. I don't see how this 20 % applies to the full-scale output.
Any help would be greatly appreciated.
Thanks.