Is the use of Artificial Intelligence allowed in ISO 9001 and AS9100 QMSs?

Sam.F

Is it OK to use AI to write procedures, work instructions, NCRs, internal audits, and management reviews under ISO 9001 and AS9100? Thank you.
 
You're devaluing the profession......
It's been hard enough for "The Quality Department" to stop writing procedures itself and instead get the people who follow them to write them, so they have a better feeling of ownership. What do you hope to achieve by now having AI involved in their creation?

If this is the way the profession is going, thank God I am retiring in 2 months time.......
 
What will the AI system know about your processes and how the work that runs those processes is actually done? Are you sending a robot out to do the internal audits? Will you have management reviews with Siri listening in so that her AI can write up your MR output?

Sure, you could talk-to-text all of this through some AI interface. But actually having an AI interface write all of this up? From what? How will you feed an AI interface ALL of your system, including MR, auditing of your system, and NC handling, so it can write it up? It seems it's all a bit of Fear Of Missing Out on the AI bandwagon, or someone just doesn't want to be bothered with it all.
 
Here's my take...can you use AI to write the documents? Sure. Have at it. No one wants to stare at a blinking cursor on the monitor for more than 10 seconds.

With that said, AI still makes some significant errors and a human "eyeballs on" safety net is needed to review, revise, and eventually approve.

Time is a precious commodity. I recently hosted a training session at work on how to write a work instruction. In the past, such training by me has taken 1/2 to 3/4 of a day; for this session, I was given no more than 90 minutes. And it fell in a week when I was also hosting 4 sessions on the launch of a new data tool for our Human Resources team. With an estimated 30 hours of development work for each module, on top of numerous active projects and my daily routine of running my team, I did not have time to sit and think about what to write for course content.

Hello AI. I plugged in my query, my wants, my teaching approach, and a few high level items, and within a few minutes I had an initial structure. It wasn't perfect. I disagreed with some of it. And it was dry...so very, very dry.

So I tweaked, adjusted, and was then inspired to add some additional content that was needed. A few more passes through AI for input, a few more revisions by me, and my training module for documentation was completed in under 20 hours. That gave me 10 hours back to work on other things and to practice the module - where to bring in humour, voice mastery, etc.

Feedback has included "This was one of my favourite meetings/group practices I have been to! Thank you for making this fun."

Moral of the story? AI can be a great starting point - you need a solid query to begin with, and it can save development time. However, it's not smart to rely on it alone - a human touch is needed to make it meaningful.
 
And as @Joe Cruse said: where will that AI software get the facts it needs to write those documents? THAT is the big bugger.

Of course you could write the first draft with the facts and then have AI clean it up grammatically, etc., but since AI can decide to eliminate or misstate facts, you'll still need to review and revise. How does that help you?
 
I am the Quality Manager here and have direct interaction with the ISO auditor. We ironed out clarifications on some concepts, and I made sure they were reflected in the QMS. One of my colleagues didn't quite agree, started feeding queries to ChatGPT, and inundated me with the responses. I would counter, he would feed, and back and forth until he ended up getting ChatGPT to say basically exactly what I had. Neither he nor ChatGPT has to defend the QMS to the auditor, and it was an utter waste of time. So, like others have said, it might provide a good starting point, but don't expect it to be properly representative of what you do (and certainly don't assume it's correct and allow it to drive what you do).
 
AI, at least for now, needs to be handled very carefully. The legal profession has run into many "hallucinations" in AI-generated legal briefs. In some cases, AI-written briefs cited cases that didn't really exist, complete with interpretations worded to lend credence to the citations. Lawyers who have submitted phony briefs have been disciplined; some don't learn and have been disciplined more than once. It got really scary when some hallucinations found their way into the language of judgments.

Yes, using AI can be very helpful if used with care, as shown by Roxanne in the post above, but the user, if they have any integrity, is obligated to carefully review and, as needed, edit the results.

An interesting example of the weakness of AI was shown recently in a YouTube video from Royalty Auto Services. Having already diagnosed a no-crank, no-start concern - so they knew the outcome - they used AI to see how accurately it would help a DIY car owner find the problem. The AI was able to access factory wiring diagrams and lead the owner through the steps to pin down the problem. It did reach the right answer, but it left out a critical step that could have sent them down the wrong path. The cure was to replace a defective ignition switch, but the AI skipped checking whether there was power to the ignition switch; it only tested the switch's output. Although the outcome matched what the shop found, the AI would have had them replace the ignition switch when the actual problem could have been somewhere upstream of it.

A few days later they tried AI again on a problem with a BMW. I don't remember the problem at the moment, but the AI was very misleading; the BMW documentation was either misleading or less available, resulting in absolute garbage results.

At this point, use AI very carefully.
 