The answer to your question is this:
First: determine the risk level associated with the process. This can come from a process FMEA or a similar activity. Many organizations keep things simple and categorize risk into groups like low, medium, and high.
Second: map the risk level to an appropriate confidence level and/or reliability level for use in the statistics. Many organizations have internal procedures that give guidance here. For example: if HIGH risk, the sample size must satisfy 99/99 confidence/reliability; if MEDIUM risk, 95/95; if LOW risk, 95/90. The numbers should be agreed on across your organization so that there is consistency among all the validations.
Third: choose from a number of statistical methods to calculate the sample size, given the variables that were determined from the associated risk level.
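As a concrete illustration of the second and third steps, here is a minimal sketch for an attribute (pass/fail) test with zero allowed failures, using the success-run theorem n = ln(1 - C) / ln(R). The risk-to-confidence/reliability mapping is just the example policy above; your organization's procedure may assign different numbers or use a different statistical method entirely (e.g., a variables-based tolerance interval).

```python
import math

# Hypothetical mapping from risk level to (confidence, reliability) targets,
# mirroring the example policy above -- adjust to your own procedures.
RISK_TO_CR = {
    "high": (0.99, 0.99),
    "medium": (0.95, 0.95),
    "low": (0.95, 0.90),
}

def sample_size(confidence: float, reliability: float) -> int:
    """Attribute sample size with zero allowed failures,
    from the success-run theorem: n >= ln(1 - C) / ln(R)."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

for risk, (c, r) in RISK_TO_CR.items():
    print(f"{risk} risk: n = {sample_size(c, r)}")
# high risk: n = 459, medium risk: n = 59, low risk: n = 29
```

Note how quickly the required sample size grows with the confidence/reliability targets, which is exactly why the risk classification has to come first.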
This answer is vague, but it is the correct approach when it comes to validation. So a sample size of 30 may or may not be appropriate for a validation. Thirty is really only notable in that most sample data starts to look 'normal' shaped around that size, which may eliminate the need to test for normality in some applications. If in doubt about the sample size to use, start with a RISK ANALYSIS.
-Gary