Without knowing the details of *what* the validation is trying to demonstrate, it is practically impossible to make any directly actionable recommendations. I can offer a few comments:
The use of variable data would have to be well-motivated. If the process outcomes are more binary ("pass/no-pass"), analysis of attribute data is probably more appropriate... and in cases where the pass/no-pass decision is based on a test, a test method validation may be the most appropriate route. The sample sizes could be as small as 11 for a 95% confidence / 95% tolerance hypothesis test... if the TMV is constructed as one.
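(To make the sample-size arithmetic concrete: below is a minimal sketch of the zero-failure "success-run" calculation, which is one common way to size an attribute demonstration. The confidence/reliability pairs shown are illustrative assumptions on my part, not taken from the discussion above, and the required n is very sensitive to how the hypothesis test is actually constructed.)

```python
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Zero-failure (success-run) sample size: the smallest n such that
    passing n units with no failures demonstrates `reliability` at the
    stated `confidence`, i.e. reliability**n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Illustrative confidence/reliability pairs (assumptions for this sketch)
for conf, rel in [(0.95, 0.95), (0.95, 0.90), (0.90, 0.80)]:
    n = success_run_sample_size(conf, rel)
    print(f"{conf:.0%} confidence / {rel:.0%} reliability -> n = {n}")
```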
If you are convinced variable data is the way to go, the lower limit on sample size will almost certainly be driven by the need to convince someone (possibly yourself) that the distribution of the collected data is well described by the normal distribution. Other distributions are possible, of course, depending on the thing being measured, but most common tests assume/require a normal distribution. A sample size of 15 is the typical floor for assessing normality, although a literature review of the Anderson-Darling or Kolmogorov-Smirnov tests will reveal that with fewer than about 8 samples such tests can't really be trusted (this is from personal memory).
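(As a hedged illustration of that normality check: the sketch below runs an Anderson-Darling test on a simulated n = 15 sample using SciPy, with Shapiro-Wilk as a cross-check. The simulated data and the choice of tests are my own example choices, not something prescribed above.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=0.5, size=15)  # illustrative n = 15 sample

# Anderson-Darling test against the normal distribution
result = stats.anderson(sample, dist="norm")
print("A-D statistic:", round(result.statistic, 3))
for cv, sig in zip(result.critical_values, result.significance_level):
    print(f"  reject normality at the {sig}% level if statistic > {cv}")

# Shapiro-Wilk as a cross-check (often preferred for small samples)
w_stat, p_value = stats.shapiro(sample)
print("Shapiro-Wilk p-value:", round(p_value, 3))
```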
Variable data study designs are probably best leveraged when (a) there exists some historical data about the variable and its relationship to the process (or design) boundary AND (b) there is a reasonable expectation that the data being collected will sit "far enough" away from the boundaries to satisfy the necessary "k-value". IIRC, a 95/95 requirement with a sample size of 15 ends up with a one-sided k-value of over 2.5, and the two-sided k-value will be almost 3. [Assuming a normal distribution!]
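(If you want to check those k-values yourself: the sketch below computes the one-sided normal tolerance factor exactly via the noncentral t distribution and the two-sided factor via Howe's approximation, for n = 15 at 95% confidence / 95% coverage. It gives roughly 2.57 one-sided and 2.95 two-sided, consistent with the figures quoted above. Normality is assumed throughout, as noted.)

```python
import numpy as np
from scipy import stats

def k_one_sided(n: int, coverage: float = 0.95, confidence: float = 0.95) -> float:
    """Exact one-sided normal tolerance factor via the noncentral t distribution."""
    z_p = stats.norm.ppf(coverage)
    nc = z_p * np.sqrt(n)  # noncentrality parameter
    return stats.nct.ppf(confidence, df=n - 1, nc=nc) / np.sqrt(n)

def k_two_sided(n: int, coverage: float = 0.95, confidence: float = 0.95) -> float:
    """Two-sided normal tolerance factor using Howe's approximation."""
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, df=n - 1)  # lower-tail chi-square quantile
    return z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)

n = 15
print(f"1-sided k (n={n}, 95/95): {k_one_sided(n):.3f}")   # ~2.57
print(f"2-sided k (n={n}, 95/95): {k_two_sided(n):.3f}")   # ~2.95
```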
I agree with everything you wrote. One important point to mention: every console is checked at the end of production (acceptance / final test). I guess this can be the basis for claiming that PV is not necessary? However, and not to shoot myself in the foot, how robust does the acceptance test have to be to claim that the process output is fully verified?
If you aren't relying on a process to control any variability that can lead to non-conforming outputs, you don't need to worry so much about process validation. This approach could be triggering for some third parties, so there are two recommended options:
- 100% verification of process outputs
- 'process validation' that demonstrates that the process doesn't introduce variability in the outputs.
The latter is (I think) somewhat subjective... but I've seen many cases where a third party thought they were being clever (or playing 'gotcha') by pointing to some process step (say, "fastening") and asking to see "the validation". In those cases we had an analysis showing that we had actually validated the largest sources of variation (the vital few) and had done minimal testing to show that variation in the trivial-many process steps couldn't contribute to non-conforming escapes.