Incorporating Large Language Model (LLM) AI into a QMS

How long before "Retrain AI model" replaces "Retrain the Operator" as the #1 corrective action (in terms of frequency of use).
 
I wonder how effective an LLM would be at differentiating between 'special cause' and 'common cause'?
 
"Previous LLM did not make this error. Current LLM does make this error." Root cause? LLM appears as a black box so who knows. One set of training data creates one fit. Further data creates a different fit.
 
Can any of you tell me how the process of hallucinating differs from producing any other output? As long as there are no levers for process control, "grand theft autocomplete" is barely even a tool for text processing, with some post-processing by something else, e.g. turning a wall of text or bullet points into more readable prose, provided a human checks the output. Also, as a courtesy, don't forget to shut off the lights for approximately a month to offset the unnecessarily high energy consumption of applying an LLM versus applying real intelligence.
 
I wonder how effective an LLM would be at differentiating between 'special cause' and 'common cause'?
No better than good old control limits, calculated correctly. Seriously.
Remember software isn’t intelligent. Someone still has to program it.

And it isn’t special cause, it’s assignable cause.
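For reference, the "good old control limits, calculated correctly" really are only a few lines of code. A minimal sketch of an individuals (I-MR) chart, using made-up sample data (the values and the deliberate outlier are illustrative, not from any real process):

```python
# Minimal individuals-chart sketch: control limits from the average moving range.
# The sample data below is invented for illustration.

def control_limits(values):
    """Return (lcl, center, ucl) for an individuals chart."""
    n = len(values)
    center = sum(values) / n
    # Average moving range between consecutive points
    mr_bar = sum(abs(values[i] - values[i - 1]) for i in range(1, n)) / (n - 1)
    # 2.66 = 3 / d2 for subgroups of size 2 (standard I-MR chart constant)
    sigma3 = 2.66 * mr_bar
    return center - sigma3, center, center + sigma3

data = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 9.7, 12.5]  # last point is a deliberate outlier
lcl, center, ucl = control_limits(data)
out_of_control = [x for x in data if x < lcl or x > ucl]
print(f"LCL={lcl:.2f}  center={center:.2f}  UCL={ucl:.2f}  flagged: {out_of_control}")
```

No training data, no retraining, and the "root cause" of every flag is an arithmetic you can audit.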
 
An LLM is generally considered a subset of Artificial Intelligence, so yes, the software is (or at least is supposed to be) intelligent. Confronted with a set of data to analyze statistically, a skilled human should be better at deciding which causes are assignable. The LLM could be programmed to look for patterns in data that resemble assignable causes and bring those situations to the attention of a human, who would direct further research and action as appropriate.
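Worth noting that "look for patterns that resemble assignable causes and flag them for a human" does not need an LLM; deterministic run rules have done exactly that for decades. A minimal sketch of two Western Electric-style rules (one point beyond 3-sigma; eight consecutive points on one side of the center line), with an assumed known center and sigma and invented data:

```python
# Sketch of two run rules for flagging candidate assignable causes for human review.
# center/sigma are assumed known here; in practice they come from the chart itself.

def flag_for_review(values, center, sigma):
    """Return (index, reason) pairs worth a human's attention."""
    flags = []
    # Rule 1: a single point beyond 3 sigma from the center line
    for i, x in enumerate(values):
        if abs(x - center) > 3 * sigma:
            flags.append((i, "beyond 3-sigma"))
    # Rule 2: eight consecutive points on the same side of the center line
    run = 8
    for i in range(run - 1, len(values)):
        window = values[i - run + 1 : i + 1]
        if all(x > center for x in window) or all(x < center for x in window):
            flags.append((i, "8 points on one side"))
    return flags

data = [10.1, 10.2, 10.1, 10.3, 10.2, 10.1, 10.2, 10.1, 9.9, 14.0]
print(flag_for_review(data, center=10.0, sigma=1.0))
```

The output is reproducible and explainable, which is exactly what a black-box model struggles to offer a CAPA reviewer.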
 
From my experience with Leela Chess Zero, a chess AI: training a network of weights is a peculiar process. At first it learns very quickly and picks up the basics very well. At some point there are two competing goals: A) learning new information patterns, while B) not forgetting patterns already learned. To close the gap on the last few percent of performance you need to increase the rate at which new training data is added, but that has the side effect of losing previous patterns. Those last few percent of performance are orders of magnitude harder to close. You need more data, more memory, and more processing ability.


Going from 0-98% of performance costs X.

From 98-99% it costs 5X in resources.
99.5-99.6% might be 50X.
99.6-99.65% 1000X and so on.
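Taking those multipliers at face value (they are rough illustrative numbers from the post, not benchmarks), the marginal cost per percentage point of performance makes the explosion obvious:

```python
# Illustrative only: marginal cost per percentage point of performance,
# using the rough multipliers from the post. X is an arbitrary resource unit.

steps = [
    (0.0, 98.0, 1),       # 0 -> 98%      costs X
    (98.0, 99.0, 5),      # 98 -> 99%     costs 5X
    (99.5, 99.6, 50),     # 99.5 -> 99.6%  costs 50X
    (99.6, 99.65, 1000),  # 99.6 -> 99.65% costs 1000X
]

for lo, hi, cost in steps:
    per_point = cost / (hi - lo)
    print(f"{lo}-{hi}%: about {per_point:,.1f}X per percentage point")
```

Roughly 0.01X per point at the start, about 20,000X per point at the end: a factor of around two million.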

But if you think about it, this is exactly like humans. Studying quality, you can learn the basics in a year or two, but the further you go, the smaller the knowledge gains become and the more resources they require. Most of the questions at Elsmar are in that last 2%: very specific stuff.
 
An LLM is generally considered a subset of Artificial Intelligence, so yes, the software is (or at least is supposed to be) intelligent. Confronted with a set of data to analyze statistically, a skilled human should be better at deciding which causes are assignable. The LLM could be programmed to look for patterns in data that resemble assignable causes and bring those situations to the attention of a human, who would direct further research and action as appropriate.
You mean like any other SPC software? Why even think about using an expensive thing when a cheap and easy thing will do the same thing? And probably better.

AI is artificial intelligence, not real intelligence. Software is not a sentient being. Just because it is all the rage doesn’t mean it is actually god-like. By the way, scientists and programmers have been claiming AI’s omnipotence since the fifties, and haven’t yet been successful.
 
An LLM is generally considered a subset of Artificial Intelligence, so yes, the software is (or at least is supposed to be) intelligent. Confronted with a set of data to analyze statistically, a skilled human should be better at deciding which causes are assignable. The LLM could be programmed to look for patterns in data that resemble assignable causes and bring those situations to the attention of a human, who would direct further research and action as appropriate.
LLM training is machine learning of a sort, done by stealing content (made with some effort by others) from the internet. Once the datasets are done, it is an elaborate autocomplete (loved by people who don't want to put any effort into anything) that keeps on stealing and guzzling lots of power.
 