I am looking for information on sampling files during data migrations when the file count exceeds 1 million. We currently use a c=0 (zero-acceptance) sampling plan, but I need to justify that choice. These are one-time data verifications conducted during execution of pre-approved protocols (validation, qualification, verification). The servers store files/applications with GxP/GMP importance/functionality and are therefore subject to compliance rules. I have been unable to locate any precedents regarding sampling.

Our current approach: we draw a c=0 sample of the migrated files and verify that their metadata is correct. We then take an additional sub-sample (also c=0) and perform manual checks (opening files, launching applications, etc.) comparing the source files against the target files.

Does anyone have experience in this arena?
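For what it's worth, one common statistical justification for a c=0 plan is the zero-failure binomial confidence bound: if n randomly selected files all pass, you can state with confidence C that the true defect rate is below p. This is only a sketch of that rationale (the function name and the 95%/1% figures are illustrative assumptions, not a regulatory requirement); in practice many firms instead cite sample sizes from published c=0 tables such as Squeglia's.

```python
import math

def zero_acceptance_sample_size(confidence: float, max_defect_rate: float) -> int:
    """Sample size n for a c=0 (accept-on-zero) plan.

    If all n sampled items pass, we are `confidence` sure the true
    defect rate is below `max_defect_rate`. Rationale:
        P(all n pass | defect rate p) = (1 - p)**n <= 1 - confidence
    Solving for n gives n >= ln(1 - confidence) / ln(1 - p).
    """
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_defect_rate))

# Illustrative: 95% confidence that fewer than 1% of migrated files
# have incorrect metadata. For lots this large (>1M files), the
# required n is effectively independent of lot size.
n = zero_acceptance_sample_size(confidence=0.95, max_defect_rate=0.01)
print(n)  # 299
```

A useful property for justification documents: because the bound depends only on the confidence and defect-rate limits you choose, the same n applies whether the migration contains 1 million or 10 million files.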