
Chess, Neural Nets and QC

Ed Panek

VP QA RA Small Med Dev Company
#1
I am heavily involved in the Leela Chess Zero project (LCZero). We develop neural networks of various sizes that learn to play chess from scratch. The larger networks take longer to train but have deeper tactical awareness, while the smaller networks train faster but miss some tactics. The networks are given only the rules of the game and a score: 1 point for a win, 0.5 for a draw, and 0 for a loss. (A similar project from Google's DeepMind beat the world champion in Go last year.) From there, the networks play each other to explore the game space, with some exploration ("temperature") parameters, and once those games are finished the next generation of networks is trained on the results: patterns that look like this led to wins, these led to draws, and so on. Then the process continues for millions of games.
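To make the "temperature" idea concrete, here is a minimal sketch (not LCZero's actual code; the function and policy values are illustrative) of how a temperature parameter trades off exploration against exploitation when picking a move from a network's policy:

```python
import random

def choose_move(policy, temperature):
    """Sample a move from a policy dict {move: probability}.

    High temperature flattens the distribution (more exploration);
    low temperature sharpens it toward the best move (more exploitation).
    """
    moves, probs = zip(*policy.items())
    # Raising each probability to 1/T reweights the distribution.
    weights = [p ** (1.0 / temperature) for p in probs]
    total = sum(weights)
    return random.choices(moves, weights=[w / total for w in weights])[0]

# With a low temperature, the 90% move is chosen almost every time;
# with a high temperature, the 10% move gets sampled far more often.
move = choose_move({"e4": 0.9, "d4": 0.1}, temperature=0.1)
```

During training games the temperature is kept above zero so the networks keep discovering new lines instead of replaying the same game millions of times.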

I test the networks against traditional engines and other networks here.
I run the traditional alpha-beta engines (Alpha–beta pruning - Wikipedia) on a 32-core AMD CPU and the networks on 2 Nvidia GPUs. The results are astounding. In only 2 years of development, Leela Chess Zero is beating the best traditional chess engine on the planet - an engine that itself took 40 years of manual development by humans with limited chess knowledge. In fact, Leela does things we are not sure of the reasons for; we can't explain the strategy per se, other than to say "placing a pawn here seems advantageous 50 moves later." This could lead to some odd situations where we accept an assertion as true without being able to directly explain why it's true. A neural network may see a deeper harmonic in the data than we can, and find a pattern we would never detect.
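For anyone curious what the "traditional" side of these matches is doing under the hood, here is a minimal alpha-beta pruning sketch (the classical technique linked above), run on a tiny hand-built game tree rather than a real chess position:

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    """Minimax search with alpha-beta cutoffs.

    A node is either a number (a leaf's static evaluation) or a list
    of child nodes. Real engines replace the leaf values with a
    hand-tuned evaluation function - the part that took decades to refine.
    """
    if depth == 0 or isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizer will never allow this branch
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break  # alpha cutoff: the maximizer already has a better line
        return value

# Classic textbook tree: the maximizer's best guaranteed outcome is 3,
# and whole subtrees are skipped once a cutoff fires.
best = alphabeta([[3, 12, 8], [2, 4, 6], [14, 5, 2]], 2,
                 float("-inf"), float("inf"), True)
```

The cutoffs are why these engines can search so deep on a 32-core CPU: most of the tree is provably irrelevant and never visited.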

It's amazing how, just via self-play, the networks can learn in months the fundamental chess theory and openings that took humans centuries.

Since QA is heavily reliant on data and trending, I was thinking about whether AI will play a larger part in QA work in the future. With the volume of metrics available to an AI, it seems inevitable that it is coming for us very soon.

What do you think? Is it feasible in your space?

The Future of Artificial Intelligence and Quality Management

How Artificial Intelligence revolutionizes Quality Assurance

How AI or machine learning can improve quality assurance: six tips
#2
Totally. But probably not as soon as we'd like. Just off the top of my head...

I'm no expert in AI, but my (limited, and possibly incorrect) understanding of how machine learning works is that, more or less, you program the system only to "want" to maximize whatever you define as success for it, without otherwise guiding its behavior in any way, and then let it loose "in the space" to figure out what that means through lots and lots (and lots) of trial and error.
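That description matches the textbook reinforcement-learning picture pretty well. A toy illustration of "define success and let it explore" - an epsilon-greedy agent that learns which of two actions pays off better with no other guidance (the payoff numbers here are made up):

```python
import random

def learn(trials=5000, epsilon=0.1, seed=0):
    """Learn action values purely from trial and error.

    The agent is told nothing about the actions except the reward it
    receives after trying one; 'payoffs' plays the role of the hidden
    environment it must discover.
    """
    rng = random.Random(seed)
    payoffs = {"A": 0.3, "B": 0.7}   # hidden success probabilities
    counts = {"A": 0, "B": 0}
    values = {"A": 0.0, "B": 0.0}    # the agent's running value estimates
    for _ in range(trials):
        if rng.random() < epsilon:
            action = rng.choice(["A", "B"])        # explore at random
        else:
            action = max(values, key=values.get)   # exploit best estimate
        reward = 1.0 if rng.random() < payoffs[action] else 0.0
        counts[action] += 1
        # Incremental average: nudge the estimate toward the observed reward.
        values[action] += (reward - values[action]) / counts[action]
    return values

estimates = learn()  # the agent settles on "B" as the higher-value action
```

The chess version is the same loop at vastly larger scale: the "actions" are moves, the reward is 1/0.5/0 at the end of the game, and the value estimates live in a neural network instead of a dict.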

The biggest issue in my mind is that maximizing quality in medical device manufacturing (while balancing it with business concerns) is a virtually infinitely more complex endeavor than winning at chess. The biggest hurdle is identifying what all the relevant data points are and, just as important, actually having the methods, resources, and will to consistently capture that data. In chess, the "space" can be completely and totally virtualized (i.e., the digitized space IS the space). In medical device manufacturing, you first have to figure out how to create a virtual representation of the space as it actually is (or was)... very difficult, especially for processes that have human involvement.

Ironically, the fact that some activity is prescribed by regulators should make the machine's job easier if the goal is compliance as well as quality.

As far as whether or not "belief" in the machine's findings without necessarily understanding the secret sauce of how it got there would be acceptable... I think it would be acceptable after a recognized scientific authority vetted and presented rationale for the machine's behavior. We already do this to some extent. Most QA professionals (myself included) couldn't explain to you the scientific rationale behind how ANSI/ASQ Z1.4 sampling plans work, but we're trained that they're valid and how/when to apply them.
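The Z1.4 comparison is apt: the statistical backbone of attributes sampling is just the binomial distribution, even though most of us apply the tables without deriving them. A quick sketch - with sample size n and acceptance number c, the chance of accepting a lot with true defect rate p is P(X ≤ c). (The n=125, c=7 values below are illustrative, not taken from an actual Z1.4 table.)

```python
from math import comb

def prob_accept(n, c, p):
    """Probability of accepting a lot under a single attributes sampling plan.

    Sums the binomial probability of finding k defectives in a sample of
    n, for k = 0..c (the acceptance number).
    """
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Plotting prob_accept against p traces the plan's operating
# characteristic (OC) curve: better lots are accepted more often.
good_lot = prob_accept(125, 7, 0.01)
bad_lot = prob_accept(125, 7, 0.05)
```

The tables "just work" for practitioners precisely because someone vetted this math once and baked it into a recognized standard - which is the model I'd expect for accepting machine findings, too.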

On that note...

Honestly, I think the biggest impact that could be made in the QA/RA space today would be the automation of certain QA/RA processes and the "relationalization" of the QMS documentation and data that most companies already gather.

Since quality systems are formalizations of processes, most processes should operate using basic if/then logic, and many of those if/thens could be evaluated and appropriately handled by a machine instead of a human.
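As a hypothetical sketch of what I mean (the rules, field names, and dispositions below are invented for illustration, not from any real SOP): the if/then logic a written procedure already prescribes can be encoded directly, so a machine can route routine records the same way a human reading the SOP would.

```python
def route_nonconformance(record):
    """Apply the kind of if/then routing a nonconformance SOP prescribes.

    'record' is a dict with made-up fields: severity ('critical'/'major'/
    'minor'), repeat_occurrence (bool), and containable (bool).
    """
    if record["severity"] == "critical":
        return "open CAPA"
    if record["repeat_occurrence"]:
        return "escalate to quality review board"
    if record["severity"] == "minor" and record["containable"]:
        return "rework and close"
    return "route to engineering disposition"

disposition = route_nonconformance(
    {"severity": "minor", "repeat_occurrence": False, "containable": True}
)
```

The value isn't that the logic is clever - it's that the human only gets pulled in for the cases that genuinely need judgment.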

Similarly, linking data across different QA/RA subsystems and presenting it in such a linked manner would make most QA/RA professionals' jobs so much easier. Especially in big, complex systems that span multiple product offerings, multiple locations, and years (or decades) of accumulated QMS crud. Just being able to follow a failure mode from Design Control -> Complaint Handling -> Root Cause -> CAPA -> Feedback Monitoring for verification would be amazing.
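Here's a toy sketch of that "relationalization" idea - linking records from different subsystems by a shared failure-mode key. (The subsystem names, record contents, and IDs are all invented; in practice this would be a relational database, not Python lists.)

```python
# Invented sample records from three QMS subsystems, keyed by failure mode.
design_controls = [{"fm": "FM-12", "mitigation": "software interlock"}]
complaints = [{"fm": "FM-12", "id": "C-881"}, {"fm": "FM-30", "id": "C-882"}]
capas = [{"fm": "FM-12", "id": "CAPA-17", "status": "effectiveness check"}]

def trace(failure_mode):
    """Follow one failure mode across subsystems, as described above."""
    return {
        "design_control": [r for r in design_controls if r["fm"] == failure_mode],
        "complaints": [r for r in complaints if r["fm"] == failure_mode],
        "capas": [r for r in capas if r["fm"] == failure_mode],
    }

# One query answers "where has FM-12 shown up, and is its CAPA verified?"
history = trace("FM-12")
```

Nothing here is AI at all - it's plain relational linking - but it's the data plumbing any future machine-learning layer would have to sit on top of anyway.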

I'm just doing a brain dump at this point, but it's a project I've been working on, off and on, over the past year.
 