Huss Mardini
Regulatory AI Validation
Hi everyone,
I’m looking for some interpretation on Clause 7.4 (Purchasing) as we define the quality agreement for my new service model.
We run a regulatory service that uses a "hybrid" approach: specialized AI agents do the heavy lifting of drafting a 510(k) (mapping evidence to eSTAR sections, generating device descriptions, finding predicates, matching classifications, etc.), and then a human consultant reviews and finalizes the package before it goes to the manufacturer.
My question is: How should a manufacturer audit a service like ours?
Is it "Software Validation"? Since we use AI tools internally to generate the draft, does the manufacturer need to see a Computer System Validation (CSV) package for our algorithms?
Or is it "Purchasing"? Since the final deliverable is reviewed and signed off by a human expert (just like a traditional consultancy), is it sufficient to audit us as a standard service provider based on the competence of the personnel and the verification of the final output?
I want to make sure I’m classifying the risk correctly. We are treating the AI as a "tool used by the consultant" rather than "software as a medical device," but the line is getting blurry.
Has anyone here audited a vendor that uses generative AI as part of its service delivery yet? What controls did you ask for?
Thanks,
Huss