
ISO 9001 and Public Schools - Under continuous fire for low student achievement

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
#41
For the measures I am discussing, statistical significance is tough to generate even on a massive scale. A single district?

We collected 200,000 sheets of student work from 200 schools and barely made statistical significance on the measures we are performing. A doctoral student doesn't stand a chance. (The work stacks 80 feet high.)

Statistical significance is tough to pull off in education research, which is why they resort to meta-analysis to generate statistically significant results. (And don't even get me started on meta-analysis.)
Beautiful, that further supports my claim: the problem at this point is that you can't measure what you want to measure, so you do not have a measurable (for starters), and the conclusions are not likely to be significant until you have a measurement system that is effective.
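
To put rough numbers on why significance is so hard to reach here, a back-of-the-envelope two-group sample-size sketch (the α = 0.05 / 80%-power z values and the effect sizes are illustrative assumptions, not figures from this thread):

```python
import math

def n_per_group(d, z_alpha=1.96, z_beta=0.84):
    """Approximate per-group sample size for detecting a standardized
    mean difference d (Cohen's d) with two-sided alpha = 0.05 and
    power = 0.80, using the normal approximation:
    n = 2 * ((z_alpha + z_beta) / d) ** 2."""
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A "small" educational effect (d = 0.1) needs ~1,568 students per group;
# a "medium" effect (d = 0.5) needs only ~63.
print(n_per_group(0.1))  # 1568
print(n_per_group(0.5))  # 63
```

With realistic (small) effect sizes, the required samples run into the thousands, which is consistent with the point that a single district or a lone doctoral student cannot get there.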


I fully disagree. Even if the state test were PERFECT in analyzing students' true proficiency, it would not satisfy the needs of the "check" demanded by the Deming cycle, because it is not formative. State tests do not check the process used to generate the results; they only test the outcomes.

So my math scores are low. Even if the state test is perfect, that still doesn't point to a solution. Is it the instruction? Is it the curriculum? Is it the alignment of the curriculum? Is it the rigor of in-class questioning? State tests won't tell you.

In the Deming cycle, the check phase does not mean, "Sell the cars and see if you generate any customer complaints," which is directly analogous to the method of using state test scores to measure teaching effectiveness.
Well, it is common to measure the results to make conclusions about the process, but again, that supports what I said: state tests are the best the government could come up with as a verification of effectiveness (and we know how ineffective the government is at that, anyway), and they are far better than any measurement attempt one could develop for the issues you listed. Remember, by definition a measurable has to be able to be measured. And that means measured with significance, not with a yardstick.


These students don't know how to write because no one has ever taught them. Why? Because it was always someone else's responsibility.
Responsibility? To teach, or to teach the curriculum? Did they not teach at all, or did they not teach effectively, or was it the curriculum that was not effective? Where did the ball drop? Can the testing at least determine that the ball was dropped? Most colleges without open enrollment believe so.

Besides, what is it you are teaching and are subskill proficiencies getting in the way of assessing their proficiency on the content? That is one of the things I look for when observing classrooms. Can I measure that to national specifications? No. But it still needs measuring.
No, in my coursework it does not get in the way of assessing their proficiency on the content. Part of that is because the assumption is that someone else is going to clean up the mess - as my students may have been assessed (with a test, oh, my) thanks to open enrollment and hopefully it dumped them in developmental English class.
 

JW9000

#42
Beautiful, that further supports my claim: the problem at this point is that you can't measure what you want to measure, so you do not have a measurable (for starters), and the conclusions are not likely to be significant until you have a measurement system that is effective.
Using this reasoning, teacher evaluations are completely worthless. Why go into the classroom at all, if the data collected cannot be verified as statistically significant?

You misunderstand the purpose of statistical significance. We are not generating results to publish in a refereed journal to be generalized to the population as a whole.

Well, it is common to measure the results to make conclusions about the process, but again, that supports what I said: state tests are the best the government could come up with as a verification of effectiveness (and we know how ineffective the government is at that, anyway), and they are far better than any measurement attempt one could develop for the issues you listed. Remember, by definition a measurable has to be able to be measured. And that means measured with significance, not with a yardstick.
I fully disagree. Statistical significance is not always necessary for the purposes of professional development because the data is not being generalized to the national (or even regional) school population as a whole. The data is being used instead for opening avenues of discussion. "I observed you for two hours and noticed you called on volunteers quite often... let's discuss the effectiveness of that method."

Otherwise, no one would ever perform teacher evaluations. To generate statistically significant results on teacher evaluations would require scores of hours per teacher. No district has that capacity.
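
A rough illustration of why classroom observations would take so many hours to reach significance. Suppose (hypothetically; none of these numbers come from the thread) the observer is estimating the share of questions a teacher directs to volunteers, at 95% confidence:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from n observed events
    (worst case p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

def n_for_moe(moe, p=0.5, z=1.96):
    """Observations needed to pin the proportion down to +/- moe."""
    return math.ceil(z ** 2 * p * (1 - p) / moe ** 2)

# Twenty questions observed during a two-hour visit leaves a ~22-point
# margin of error; getting to +/- 5 points takes ~385 observed questions.
print(round(margin_of_error(20), 3))  # 0.219
print(n_for_moe(0.05))                # 385
```

A two-hour observation simply cannot produce a tight estimate, which is why the data serves to open discussion rather than to prove anything.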

Responsibility? To teach, or to teach the curriculum? Did they not teach at all, or did they not teach effectively, or was it the curriculum that was not effective? Where did the ball drop? Can the testing at least determine that the ball was dropped? Most colleges without open enrollment believe so.
So the ball was dropped. Now what?

If all there is to education research is to identify low-performing schools, then our job is done. I can identify those schools right now.

No, in my coursework it does not get in the way of assessing their proficiency on the content. Part of that is because the assumption is that someone else is going to clean up the mess - as my students may have been assessed (with a test, oh, my) thanks to open enrollment and hopefully it dumped them in developmental English class.
I had trouble parsing your last statement. Could you clarify?
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
#43
Using this reasoning, teacher evaluations are completely worthless. Why go into the classroom at all, if the data collected cannot be verified as statistically significant?
True, some are completely worthless, and therefore, true, you may very well be wasting your time.


You misunderstand the purpose of statistical significance. We are not generating results to publish in a refereed journal to be generalized to the population as a whole.

I fully disagree. Statistical significance is not always necessary for the purposes of professional development because the data is not being generalized to the national (or even regional) school population as a whole. The data is being used instead for opening avenues of discussion. "I observed you for two hours and noticed you called on volunteers quite often... let's discuss the effectiveness of that method."

Otherwise, no one would ever perform teacher evaluations. To generate statistically significant results on teacher evaluations would require scores of hours per teacher. No district has that capacity.
If you think that statistical significance is only the realm of published journals, then I really have little to offer. You have provided all the evidence one needs to support the argument that so far all of the data collection you have observed is a waste of time... or marginal at best. The conclusions will be equally faulty, and a waste of resources. The issue of sample size (local to national) is irrelevant, in that any data needs to be valid for the conclusions to be valid. If nothing else, I think you have identified a key root cause of ineffective attempts to improve education. You imply that the level of data sufficient to evaluate education is simple quality level TCE, and if that is true, then the impact of the decisions made with it will be as solid. If you do not have the resources to develop significant sampling and measurement, then please do not waste your time. You will just fool yourself into thinking you learned something.

So the ball was dropped. Now what? If all there is to education research is to identify low-performing schools, then our job is done. I can identify those schools right now.
Well, the one thing you might not want to bother to do is use shoot-from-the-hip yardstick data to make improvements, although you seem to believe it is good enough. If so, as Dr. Phil would say: "How's that workin' for ya?"

I had trouble parsing your last statement. Could you clarify?
It means I am not going to teach English comp in my class. It is not part of my "standard curriculum," for starters, and our system is designed to deal with the secondary school failings elsewhere in the curriculum. Of course, that is our college. Not all colleges do; some just fail them. Those that do not further their education are not likely to have any system to repair that damage. So, once the ball is dropped, the probability that it will get picked up elsewhere is slim. The damage is done.
 

JW9000

#44
True, some are completely worthless, and therefore, true, you may very well be wasting your time.
No, according to your argument a teacher evaluation is worthless unless the results can be verified as statistically significant. What school system could ever operate under such hopeless ideals?

I don't think I have ever seen teacher evaluations performed to statistical significance. Have you?

You're the principal; how would you evaluate your staff? Remember, only statistically significant data is useful data in your scheme, so how are you going to do it?

For that matter, how does anyone evaluate their staff?

Suppose a parent complains about a teacher. Do you then survey every parent and perform a statistical study to find out if the parent's complaint is valid before talking to the teacher? If so, you are going to be a very busy man.

I have a pair of Vernier calipers in my drawer. I don't yank them out every time I need to measure an object. We need to place the tools at our disposal in proper context.

If you think that statistical significance is only the realm of published journals, then I really have little to offer. You have provided all the evidence one needs to support the argument that so far all of the data collection you have observed is a waste of time...or, marginal at best.
Back up. The data we have collected from schools is statistically significant, but only at the state level because of the huge sample sizes. At the individual school level it is not, but our contracts for these studies were state-level and were not designed to generate sample sizes at the individual school level for statistical significance.
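
The state-versus-school contrast can be sketched numerically. Assume (illustratively; the 200,000 figure is from the thread, the per-school count and effect size are made up) a tiny standardized effect d, tested against the two-sided 5% cutoff of 1.96 with the one-sample approximation z ≈ d·√n:

```python
import math

def z_stat(d, n):
    """Approximate z statistic for a standardized effect d observed in
    a sample of size n (one-sample normal approximation)."""
    return d * math.sqrt(n)

d = 0.05                       # tiny standardized effect (assumed)
z_state = z_stat(d, 200_000)   # pooled statewide sample
z_school = z_stat(d, 1_000)    # one school's sample (assumed size)

print(round(z_state, 1), round(z_school, 2))  # 22.4 1.58
# Significant statewide, not significant for a single school.
assert z_state > 1.96 and z_school < 1.96
```

The same effect that is overwhelmingly significant at the state level disappears into the noise for any one school, which is exactly the situation described above.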

The conclusions will be equally as faulty, and a waste of resources. The issue of sample size (local to national) is irrelevant, in that any data needs to be valid for the conclusions to be valid.
Okay, you assign letter grades to your students, right? Do you perform statistical studies on the grades you hand out to ensure that they are receiving valid grades?

Do any of your grades rely on subjectivity? Have you tested the reliability of your subjective grading against known standards before assigning them?

Well, the one thing you might not want to bother to do is use shoot-from-the-hip yardstick data to make improvements, although you seem to believe it is good enough. If so, as Dr. Phil would say: "How's that workin' for ya?"
Sorry, but I don't rely on Dr. Phil to establish my protocols for success.
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
#45
No, according to your argument a teacher evaluation is worthless unless the results can be verified as statistically significant. What school system could ever operate under such hopeless ideals?
Oh, you can operate. You just can't adequately justify any "improvement". You just throw them at the dartboard and hope for the best. In manufacturing, it's called "backyard garage" controls. If that is "good enough", then have at it.

Suppose a parent complains about a teacher. Do you then survey every parent and perform a statistical study to find out if the parent's complaint is valid before talking to the teacher? If so, you are going to be a very busy man.

I have a pair of Vernier calipers in my drawer. I don't yank them out every time I need to measure an object. We need to place the tools at our disposal in proper context.
Conversely, is one parent complaining significant enough to make a change? Is one 'F' significant enough to prove the curriculum and/or teacher is ineffective? Are you "talking" to someone without sufficient valid supporting evidence? OK...that is one approach to problem solving.

My point is not to pull out the verniers to measure everything, it is to prove whether they are the right gage for the job. You seem to be willing to accept a ruler to measure everything with no test of validity at all - because all that you have is a ruler. You may find some folks will accept that justification.

Okay, you assign letter grades to your students, right? Do you perform statistical studies on the grades you hand out to ensure that they are receiving valid grades?

Do any of your grades rely on subjectivity? Have you tested the reliability of your subjective grading against known standards before assigning them?
No, but at least I recognize that limitation. And I do a 100% sampling survey of the class for several key facets of the course delivery. The school also does a class survey, but not to the detail I do. The school also uses a measure of retention, called DFW. I have illustrated how, although it may seem like a valid indicator of the effectiveness of retention, it is fraught with error. A student that is enrolled in more classes than they can handle will drop a 1-credit-hour non-major class before any other class - whether they were having problems in it or not. So, you will see a higher DFW score when it may have nothing specifically to do with the delivery at all. That is one example of where you think you are measuring something, but if you don't validate the data you will be shooting at a moving target for no reason at all.

Back up. The data we have collected from schools is statistically significant, but only at the state level because of the huge sample sizes. At the individual school level it is not, but our contracts for these studies were state-level and were not designed to generate samples sizes at the individual school level for statistical significance.
Who cares what the whole state data looks like (other than the legislature)? That would be like making a change on one car based on data from every car made by every brand. It might catch the huge, obvious errors (of course), but it does little for the individual grade.

Sorry, but I don't rely on Dr. Phil to establish my protocols for success.
Clearly. But it is not clear that the protocols you have chosen are any more effective.
 

JW9000

#46
Oh, you can operate. You just can't adequately justify any "improvement". You just throw them at the dartboard and hope for the best. In manufacturing, it's called "backyard garage" controls. If that is "good enough", then have at it.

Conversely, is one parent complaining significant enough to make a change?
Criminy. I stated earlier that the results of our measurements alone are not designed for schools to take action, but are used to generate discussion as part of the Deming cycle.

I was very clear on this. In fact, I have stated it many times in this discussion.

I came here with the hope of learning more about ISO 9001 and many of the others have been very helpful. But you have turned this discussion into a total waste of time. I feel like I am arguing for the sake of arguing.

So I'm out of here.
 

JW9000

#48
Yeah, I got the same feeling too.
Let me rephrase my statement: The conversation has boiled down to argument for argument's sake. We're just trying to poke holes in each other's arguments. No good comes of that.

I didn't fully get that... my question 'have you read it?' seems to be still unanswered.
Yes.
 

somerqc

#49
Back on topic,

I don't believe that ISO 9001 can be FULLY integrated into school operations. Why? The classroom is a place where things are always changing; it is virtually impossible to standardize how to teach a curriculum. Why? Because the customers (students) all learn differently. My daughter catches on to things very quickly, so almost any approach works with her, whereas other students may learn just as quickly but only if shown visually, and others are more written-word based.

Now you have to mix this in a classroom with 1 or 2 teachers. Too many gaps get created with the teacher constantly trying to figure out what works for each kid. Even the kids that learn quickly fall through the gaps as they get bored waiting for the class to catch up.

OK - let's assume that my 2nd paragraph doesn't occur. All the kids are being challenged. GREAT! How do the teachers figure out the levels of each kid? Assessments? OK - now bring in the personalities of the children and whether they ate and slept well prior to the assessment. You could very easily assess too low, and the child aims low because they can. Again - another area that is almost impossible to detect without active parents (i.e., parents noticing a disconnect between schoolwork and what the child does at home, behavior issues, etc.).

I don't know how you could standardize the processes in the classroom at the level necessary to ensure high customer satisfaction. The "operations" of the school could easily be addressed and registered; I just have very high doubts about the classroom itself.

John
 

Jim Wynne

Staff member
Admin
#50
Let me rephrase my statement: The conversation has boiled down to argument for argument's sake. We're just trying to poke holes in each other's arguments. No good comes of that.
I gave a similar response recently to someone here--there's a difference between poking holes in arguments and pointing out the existing holes in an argument. In any case, I think this can be a fruitful discussion and I hope you don't abandon it.

It seems that the primary stumbling block is how and what to measure. We here might be suffering from cognitive dissonance because we deal (mostly) with concrete ideas--we identify things to measure, determine how to measure them, and how to deal with the results. We occasionally have a visitor here who, given certain objectives, doesn't know how to measure them. This usually means that the objectives are too nebulous or otherwise faulty, but in general we deal with fairly clear-cut good/bad decisions and variables that can be identified and controlled. Given an atmosphere with a plethora of seemingly uncontrollable variables and being asked to control the general process, it's not surprising that we would be somewhat baffled.

If one wants to use an undiluted ISO 9001 as a framework, here's what you have to be able to do:

  1. Identify customers
  2. Determine customer requirements
  3. Determine how to decide if customer requirements have been met (determine what to measure and how to measure it)
  4. Review the existing system in view of the first three and decide whether it's sufficient (a form of gap analysis and/or contract review)
  5. Make a feasibility decision, and if the current system is deficient either change the system, change the requirements, or abandon the effort (reject the customer)
If you can't do these things but still want to establish some standardized form of control, you either have to (imo) abandon ISO 9001, customize it, or find/develop a different standard.
 