Yet Another 7.2 Thread (sorry)

You might find it helpful to consider a framework and terminology such as Bloom's Taxonomy to help your people specify what knowledge and/or competence the job position requires. Do you expect people to recall (memorize), recognize, demonstrate, or merely have awareness and know when to seek assistance?

For example, the ASQ Body of Knowledge for a Certified Quality Engineer uses verbs to define competencies, such as Define, Identify, Calculate, Interpret, Classify, Apply. These verbs give an indication of how each particular competency could be developed and measured in an objective manner.


By Tidema - Own work, CC BY 4.0, File:Bloom's revised taxonomy.svg - Wikimedia Commons
 
I'm torn on the topic of 'objective measures' for competency. Everybody 'makes mistakes', everyone could 'do better', and even certification bodies like ASQ use (multiple!) 'sliding scales' for their certifications. The best I think an organization can hope for is to establish requirements that pass a 'sniff test' and to recognize that whatever 'objective measures' are established, that doesn't mean lack-of-competency has been forever ruled out as a potential root cause for any issues that arise... or that those measures are forever free of future review and evaluation.

I've seen my fair share of people 'given' (as in 'gift giving') jobs that they were ultimately not competent at. Sometimes the requirements were ignored, sometimes the requirements were meaningless... occasionally such people gained competency, but usually not.

My favorite sour anecdote involves a "certified Quality Engineer" who agreed to a validation plan with a (legitimate, mathematically established) study design and who, upon completion of the successful validation, wanted us to "do the whole thing again" because "they just didn't believe the results". When I asked "how much do you not believe the results?" the answer was: "What a dumb question! I don't believe them at all!"... but of course they'd approved the study design, which literally established the degree of confidence and tolerance by which we all agreed we'd need to believe the results.

I like this anecdote not because "har har, someone can't math" but because the literal base of the taxonomy of knowledge for quality engineers is to remember that there exists a means of establishing degrees of belief in an outcome. I'd of course expect a quality engineer to also understand why we have such a thing as study designs... but that was a quality org established with extremely flexible requirements from the VP down. I swear that this quality engineer (also certified as a quality auditor!) literally thought study designs were just some paperwork ritual.
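To make concrete what "degree of confidence and tolerance" means in a study design, here is a minimal sketch of the common zero-failure ("success-run") sample-size formula often used in attribute validation plans. This is an illustration of the general technique, not the actual plan from the anecdote; the confidence and reliability values shown are assumed for the example.

```python
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Smallest n such that n consecutive passing samples demonstrate
    the stated reliability at the stated confidence level.

    Derived from requiring reliability**n <= 1 - confidence, i.e. if true
    reliability were below the target, the chance of seeing n straight
    passes would be at most (1 - confidence).
    """
    if not (0.0 < confidence < 1.0 and 0.0 < reliability < 1.0):
        raise ValueError("confidence and reliability must be in (0, 1)")
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Example: demonstrating 90% reliability at 95% confidence
# requires 29 consecutive passing samples.
print(success_run_sample_size(0.95, 0.90))  # -> 29
```

The point the anecdote turns on: once you approve a plan like this, the confidence level *is* your agreed degree of belief in a passing result; "I don't believe them at all" contradicts the signature on the plan.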
 
My job is to audit and report.
Probably need to re-visit this. As an auditor, are you seeing where things went south and can tie that directly back to ineffective training? If not, then they probably have an effective system. (Whether it's efficient or not is a different story.) You're probably going to start seeing doors close on you if you get too far out of your lane as an auditor.
 
As an auditor, are you seeing where things went south and can tie that directly back to ineffective training? If not, then they probably have an effective system.
Does this mean that if it looks like we're not doing all the things that 7.2 says to do, but I can't point to a particular problem that lack of conformity to 7.2 would cause, then I shouldn't report it as nonconforming?
 
First, and again: 7.2 is about competence, not training. While there is a connection, it isn't an exclusive one. At all. So this isn't about training.

Next, from a strictly compliance standpoint, there doesn't have to be a resulting defect or "problem" for something to be noncompliant with the requirements.

However, internal auditors are not as strictly bound to requirements as external auditors are. If you see a real weakness, it can lead to preventive action to improve the system. BUT you really need in-depth knowledge, competence, and credibility to pull this off.
 
Does this mean that if it looks like we're not doing all the things that 7.2 says to do, but I can't point to a particular problem that lack of conformity to 7.2 would cause, then I shouldn't report it as nonconforming?
Strictly speaking, a properly worded nonconformance finding cites verbatim the stated requirement and the evidence which led to the finding.

Some of the statements in the management system standard are requirements that can be audited against. Requirements usually include the word "shall". Other statements in the standard are nominal or informative, but not requirements. These statements contain words such as "might", "may", or "consider".

A management system audit often includes requirements stated within the organization's own documented system, such as policies and procedures. So if the job description says department heads must have a baccalaureate degree and the auditor finds one or more cases where they do not, that nonconformance could be written. It depends on whether the scope of the audit includes internal procedures.

Another set of requirements that may fall within scope of an audit are customer requirements, contractually obligated through a purchase order or terms & conditions. Sometimes, a customer will state a requirement, for example, that all welds must be welded by a licensed or certified operator, or inspected using particular inspection methods. Such requirements for competence would be fair game in an audit, depending on scope.

The third dimension of many management systems audits is the effectiveness of internal procedures. If cases are discovered in the audit where serious errors occurred and there is evidence (for example, in the company's own problem-solving documents) that the root cause was lack of competence, a nonconformance could be written. It wouldn't matter what the training records or the person's resume show; the management permitted a non-competent person to make decisions or perform work, and therefore the management system could be judged ineffective against the requirements.
 
Does this mean that if it looks like we're not doing all the things that 7.2 says to do, but I can't point to a particular problem that lack of conformity to 7.2 would cause, then I shouldn't report it as nonconforming?
No, your question is "how do you determine and/or confirm the competency of (insert position here)?" The person should be able to show you how the competency was evaluated -- parts a) and b). It could be simple or complex, depending on the position and risk. For a low-level operator, it could be as simple as observation. For a journeyman Tool and Die maker, they'll have a certificate from their apprenticeship. Part d) would be the documentation -- most likely some type of review/record. The question for c) comes into play if you see something that indicates competence was lacking and/or needed to be obtained -- maybe moving into a new position, or part of a nonconformance RCA, etc.

Now, the issue that I would have is that unless you had evidence of lack of competency (i.e., a problem), it's hard to say that what the organization is doing is not effective. You may not "like" how we are doing our evaluation, but if it is working, it's working.
 
…My favorite sour anecdote involves a "certified Quality Engineer" who agreed to a validation plan with a (legitimate, mathematically established) study design and who, upon completion of the successful validation, wanted us to "do the whole thing again" because "they just didn't believe the results". When I asked "how much do you not believe the results?" the answer was: "What a dumb question! I don't believe them at all!"... but of course they'd approved the study design, which literally established the degree of confidence and tolerance by which we all agreed we'd need to believe the results.

I like this anecdote not because "har har, someone can't math" but because the literal base of the taxonomy of knowledge for quality engineers is to remember that there exists a means of establishing degrees of belief in an outcome. I'd of course expect a quality engineer to also understand why we have such a thing as study designs... but that was a quality org established with extremely flexible requirements from the VP down. I swear that this quality engineer (also certified as a quality auditor!) literally thought study designs were just some paperwork ritual.
Seen this a thousand times. It is the perfect example of 'training effectiveness' because they remembered the formulas and facts long enough to pass a test, but had no clue how or why the methods worked, and so they were absolutely not competent in their use. This is exactly why we need to be clear on the difference between training 'effectiveness', which is too often relegated to a test or, worse, attendance (an objective measure), and actual competence (almost always a subjective assessment) in real life.
 