How to Measure Effectiveness of the Design Validation Process?

Chennaiite

Never-say-die
Trusted Information Resource
My query is with regard to Component and Vehicle testing carried out during Design & Development.

Thanks in advance.
 

somashekar

Leader
Admin
By effective analysis of pre- and post-warranty repair / replacement data on the vehicle population.
In a way, your components / vehicles are being continuously validated on and off the streets.
OR
By extreme-condition testing, as MRF does:
"The tyres we race are the tyres you buy" ... so goes the MRF slogan.
 

Chennaiite

Never-say-die
Trusted Information Resource
I don't deny that product performance in the field is, 'in a way', a measure of the effectiveness of design validation. But any product failure in the market is preceded by a lot of process steps, and validation during Design & Development is only one of those, so I believe analysis of field data cannot categorically establish the effectiveness of that one particular step.

I have personally retro-analyzed many warranty failures attributed to the Design & Development phase, and in some cases I can say that the test relevant to the failure was conducted and cleared without any failure. But with this alone I am unable to judge my testing process scientifically. I then go further and compare the test conditions against field conditions, the test standard used, the equipment used, etc., but that gets me nowhere; moreover, it turns into a case-by-case analysis of systemic root causes. Instead, I am looking for a metric that measures the ability of a particular test to detect a failure mode, and hence gives me an indication of the effectiveness of the testing process in the short term, i.e. without having to wait for warranty failures. I am interested in knowing whether any such practice exists.

I am not sure if I really managed to translate my thoughts into words here.
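
To make it concrete, here is a rough sketch of the kind of number I am after (everything in it is invented): the rate at which a test catches samples deliberately seeded with a known failure mode, with a crude confidence band on that rate.

```python
# Hypothetical sketch: scoring a test's ability to catch a known failure
# mode. Samples are deliberately built ("seeded") with the failure mode,
# the normal test is run on them, and the detection rate is computed.
# All numbers are invented for illustration.
from math import sqrt

seeded_samples = 20   # samples built with the failure mode present
detected = 17         # samples the test actually flagged as failing

p = detected / seeded_samples   # observed detection rate
miss_rate = 1 - p               # observed miss rate for this mode

# Rough 95% normal-approximation band on the detection rate
# (an exact binomial interval would be better at this sample size).
half_width = 1.96 * sqrt(p * (1 - p) / seeded_samples)

print(f"Detection rate: {p:.0%} +/- {half_width:.0%} (approx. 95%)")
print(f"Miss rate: {miss_rate:.0%}")
```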
 

Ronen E

Problem Solver
Moderator
Hi,

Generally speaking, "effective" means "achieving its defined objective(s)".

What are design validation's objectives? IMO, to establish through sound objective evidence that users' needs, as defined for the given device, are met.

Have you defined and documented what the users' needs are for the given device? If your design validation provided good (or at least reasonable) confidence that these needs were met, then it was effective by the definition I began with. Risk management is an interacting, complementary activity for enhancing your confidence that there will be as few consequent warranty failures as practically possible.

If you're looking for 100% confidence, then I don't think any single QA tool can provide that. I'm not even sure any available combination of tools will guarantee a 100% failure-free device (at least not in the real world). That's why post-marketing monitoring and continuous improvement tools are so important.

Cheers,
Ronen.
 

Chennaiite

Never-say-die
Trusted Information Resource
<snip>If your design validation provided good (or at least reasonable) confidence that these needs were met <snip>

This is something I want to zoom in on. My results as such can give me confidence that the product meets user needs, but what is the probability that the results are not a false alarm or a miss? This is something along the lines of calibration 'uncertainty'. I am not well versed in the subject of calibration, but I understand 'uncertainty' is the indicator of how uncertain the results can be. Similarly, how uncertain can any test result be, given the variables involved in the process, and how do we measure that?

Let me take an example: a suspension system is being tested for durability under certain load conditions. The test results conclude that the system, under the given conditions, will crack at 50,000 cycles. Now, the results could be influenced by variation in the 4Ms (man, machine, material, method), which brings the question of uncertainty to the fore. In the case of a miss, the product is prone to failures in the field; in the case of a false alarm, the product ends up over-designed.
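
As a crude illustration of the uncertainty I mean (all numbers invented), a sketch that turns a set of replicate durability results into a confidence band rather than a single 50,000-cycle figure:

```python
# Hypothetical sketch: putting an uncertainty band on a cycles-to-failure
# result instead of quoting a single number. The five replicate results
# are invented; fatigue life is often treated as lognormal, so the
# statistics are done on log-cycles.
import statistics as st
from math import exp, log

cycles = [48_000, 52_500, 50_000, 47_000, 55_000]   # replicate test results
logs = [log(c) for c in cycles]

mean_log = st.mean(logs)
sd_log = st.stdev(logs)
n = len(logs)

# Rough 95% interval on the mean life (two-sided t-value for 4 dof ~ 2.776)
half = 2.776 * sd_log / n ** 0.5
lower, upper = exp(mean_log - half), exp(mean_log + half)

print(f"Point estimate: {exp(mean_log):,.0f} cycles")
print(f"Approx. 95% interval on mean life: {lower:,.0f} to {upper:,.0f}")

# Judging a field requirement against the lower bound, rather than the
# single point result, is what guards against a miss; a band inflated by
# 4M variation is what pushes the product toward over-design.
requirement = 45_000
print("Lower bound clears the requirement:", lower > requirement)
```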
 

somashekar

Leader
Admin
You've got me thinking about whether everything we do can be put into statistics, so that probability, confidence level, uncertainty and the like can be determined and improved...
 

Ronen E

Problem Solver
Moderator
Similarly, how uncertain can any test result be, given the variables involved in the process, and how do we measure that?

The way to address it is through sound statistical techniques and proper validation planning. When I talked about "good confidence" I wasn't referring to a subjective feeling.
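
As one small example of what I mean (the figures are only illustrative), the well-known success-run relation ties the planned sample size to the reliability and confidence you intend to demonstrate:

```python
# Hypothetical sketch of the kind of statistical planning meant above:
# the classic zero-failure "success-run" relation n = ln(1-C) / ln(R),
# which ties sample size to the reliability R demonstrated at
# confidence C. The R/C pairs below are only examples.
from math import ceil, log

def success_run_sample_size(reliability: float, confidence: float) -> int:
    """Units that must pass a zero-failure test to claim the given
    reliability at the given confidence level."""
    return ceil(log(1 - confidence) / log(reliability))

for r, c in [(0.90, 0.90), (0.95, 0.90), (0.99, 0.95)]:
    n = success_run_sample_size(r, c)
    print(f"R={r:.0%} at C={c:.0%}: {n} units, zero failures allowed")
```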

Cheers,
Ronen.
 

markspend01

Hey guys, I think metrics are developed to evaluate the effectiveness of a procedure. Metrics may be situated at any given step of a procedure and should make sure that client specifications are being met. Thanks!!
 