Question about Centerline in Control Limits

hazwan2283

Involved In Discussions
Hi All, my name is Hazwan.
I have a question, and I would like to kindly summon the expertise of all our honorable members here in Elsmar.
Here it goes.

Let’s say we are starting with an empty control chart. Typically, we establish the initial control limits once we have enough data. However, I am confused between two available options.

Option 1: We lock the UCL, the LCL, and the centerline (the grand average of all the data we collected) in the software. Once these values are locked, the software or SPC system evaluates the SPC run rules, for example Rule #2 (nine points in a row on the same side of the centerline).

Option 2: We lock only the UCL and LCL in the software, but the grand average is not locked, so it keeps changing with every subgroup. The standard-deviation zones may therefore be asymmetrical most of the time, since the grand average will not sit exactly midway between the UCL and LCL. My software can still evaluate the SPC rules in this scenario, so that part is not really a problem.
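To make the two options concrete, here is a minimal Python sketch (all names are illustrative assumptions, not any particular SPC package's API) showing how the 1-sigma zones line up under each option:

```python
# Minimal sketch of the two locking options for a control chart.
# All names are illustrative; real SPC software differs.

def zones_option1(ucl, lcl, cl):
    """Option 1: UCL, LCL and centerline all locked.
    Zones are symmetric about the locked CL by construction."""
    sigma = (ucl - lcl) / 6.0  # 3-sigma limits assumed
    return {"cl": cl, "plus_1s": cl + sigma, "minus_1s": cl - sigma}

def zones_option2(ucl, lcl, running_values):
    """Option 2: only UCL/LCL locked; the CL floats with the running
    grand average, so the 1-sigma zones are generally off-center."""
    cl = sum(running_values) / len(running_values)
    sigma = (ucl - lcl) / 6.0
    return {"cl": cl, "plus_1s": cl + sigma, "minus_1s": cl - sigma}

locked = zones_option1(ucl=106.0, lcl=94.0, cl=100.0)
floating = zones_option2(ucl=106.0, lcl=94.0,
                         running_values=[101.2, 100.8, 101.5, 100.9])

# Under Option 1 the CL sits exactly midway between the limits;
# under Option 2 it drifts with the data (here to ~101.1).
print(locked["cl"], floating["cl"])
```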


My actual question is which of these two options is the correct or recommended approach when locking control limits in an SPC control chart? Additionally, if possible, could you kindly point me to a statistical book — and the exact page if available — that discusses this? I've been browsing several SPC references but haven't found a clear explanation of whether one or both of these approaches are valid.


This question came up recently in my head, and I’m genuinely curious to learn the correct practice. I appreciate any insights or references the experts here can share.


Sincerely,
Hazwan
 
Option 1. Any good SPC book will tell you this. Read Donald Wheeler.
The ‘center line’ is a control line and must be treated like any other control limit. Several run rules rely on the center line, so if you let it wander around you ruin the rules. Think about it: you let the subgroup values ‘wander around’ between the UCL and LCL and about the center line. You do NOT let the 3 limits themselves wander around.
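As a concrete illustration of a rule that depends on a fixed center line, here is a minimal Python sketch (a hypothetical example, not taken from any SPC package) of the common "nine in a row on one side of the CL" run rule:

```python
def nine_on_one_side(points, cl):
    """Flag the 'nine consecutive points on the same side of the
    centerline' run rule. If cl is allowed to wander with the data,
    'same side' loses its meaning and the rule stops working."""
    run = 0
    last_side = 0
    for x in points:
        side = 1 if x > cl else (-1 if x < cl else 0)
        run = run + 1 if side == last_side and side != 0 else (1 if side else 0)
        last_side = side
        if run >= 9:
            return True
    return False

# A slow upward shift: every point sits above the fixed CL of 10.0
shifted = [10.3, 10.1, 10.4, 10.2, 10.5, 10.1, 10.3, 10.2, 10.4]
print(nine_on_one_side(shifted, cl=10.0))  # True with a fixed CL
```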
 
There was a member here, @bobdoering, last seen Jan 2025, who wrote about SPC for cases like precision machining where the subgroup average is expected to "sawtooth" over time due to tool wear and tool replacement. Understanding the underlying physics and sources of variation matters, so that the mathematics chosen actually monitors the process behavior appropriately. These posts might be informative if @hazwan2283 is asking about a situation such as this:

Statistical Process Control for Precision Machining - Part 1

Statistical Process Control for Precision Machining - Part 2

 
The situation Bob describes was also covered by Wheeler in “Can I Have Sloping Limits” in the May 1999 issue of Quality Magazine.

The ‘center line’ doesn’t change as more data is added with each subgroup. It stays fixed, as do the upper and lower control limits…
 
I know three publications of Donald Wheeler, in which he discusses fixing the center line:
1) z-charts, which are for "small" sample sizes per product (i.e. lean production), and our interest lies in detecting the deviation from the target value,
2) zed-bar charts, which are also primarily for small sample sizes, and
3) target-centred XmR-Charts, where we use the target value instead of the average value.
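For item 3, the mechanics are simple enough to sketch (a hypothetical illustration, not Wheeler's own code): the target value replaces the grand average as the centerline, while the chart's width still comes from the average moving range via the standard 2.66 factor:

```python
def target_centered_xmr_limits(values, target):
    """XmR (individuals) limits centered on a target value rather than
    the data average; the width still comes from the average moving
    range, using the standard 2.66 factor."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return target - 2.66 * mr_bar, target, target + 2.66 * mr_bar

lcl, cl, ucl = target_centered_xmr_limits(
    [9.8, 10.3, 10.1, 9.9, 10.2], target=10.0)
print(lcl, cl, ucl)  # limits symmetric about the target, not the mean
```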
However, I am surprised that fixing
a) the control limits {LCL, UCL}, and
b) the center line CL
is such common knowledge, because I don't understand why we should fix them -- unless the software is unable to perform the calculation without fixed values. All three values {LCL, CL, UCL} are fairly stable anyway. Does fixing the control limits and the CL really increase the sensitivity of the charts significantly? It would be great if you could provide references/links -- I will check those mentioned above.
 
It is logical that the 3 ‘limits’ are fixed if you just think about the math and the intent of the chart*. The limits describe the stable variation - if the limits keep changing when you continually recalculate, then your process variation is no longer stable. If you don’t fix them and allow the software to recalculate with every subgroup, you actually incorporate the increased variation into the limits and you miss changes. You may still catch huge changes, but you miss the slower/smaller drifts and shifts. Think about what the ‘rules’ are doing: they are checking whether the process variation has changed since the stable period when the limits were calculated. Continually recalculating defeats the purpose of the rules and the chart. This has been discussed since Shewhart’s original work, but it is simple enough that it usually gets only a few sentences or paragraphs. Wheeler and other authors have discussed this in the context of making sure control chart software isn’t allowed to do this.

Of course if we go back to the original hand made charts it is obvious that the limits were fixed as having operators change the limits with every subgroup is silly. Just because software can do complex things very quickly doesn’t mean it should…

*this is more valuable than quoting dead or old authors…

Since I am retired I no longer have these references memorized - perhaps @Miner has them at his fingertips…
 
In the publication "When Should We Compute New Limits?" Wheeler states "[...] the practice of automatically recomputing limits every time a point is added to the chart is unnecessary at best and may be misleading on occasion." However, he does not provide data or a reference. Following the gospel, "in God we trust, all others need to bring data", I would be very interested in a reference.

What I have done is to simulate data points and add a 0.5*sigma drift over 250 data points. I then generated two XmR charts, because I believe the XmR chart is the least sensitive chart type for detecting such a drift:
1) In the first XmR chart I recalculated the control limits, as well as the average value, for each data point.
2) In the second XmR chart I used the first 50 data points to calculate (a) the control limits and (b) the average value, and then kept those values fixed.
Both charts performed equally well -- I only ran a couple of simulations, so this is an impression and not a qualified statement.
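A simulation along these lines can be sketched in a few lines of Python (a hypothetical reconstruction, not the poster's actual code; the seed, drift size, and 50-point baseline are arbitrary choices, and 2.66 is the standard multiplier for individuals-chart limits from the average moving range):

```python
import random

random.seed(1)

# 250 simulated points with a slow 0.5-sigma drift spread over the series
n, sigma = 250, 1.0
data = [random.gauss(0.0, sigma) + 0.5 * sigma * i / (n - 1) for i in range(n)]

def xmr_limits(values):
    """Individuals-chart limits: mean +/- 2.66 * average moving range."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

# Fixed limits, computed once from the first 50 (baseline) points
lcl, cl, ucl = xmr_limits(data[:50])
fixed_signals = sum(1 for x in data[50:] if x > ucl or x < lcl)

# Continually recalculated limits: each new point is judged against
# limits that already include the drift it is part of
recalc_signals = 0
for i in range(50, n):
    lo, _, hi = xmr_limits(data[: i + 1])
    if data[i] > hi or data[i] < lo:
        recalc_signals += 1

print(fixed_signals, recalc_signals)
```

Counting only points beyond the 3-sigma-equivalent limits understates the difference; adding run rules against the fixed centerline would be the natural next step.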
 
Donald Wheeler is a credible reference. He is a PhD with decades of experience and research in control charts, including Shewhart’s and Deming’s original works. He actually worked with Deming, who worked with Shewhart… what more credibility do you need?

In the article you reference he mentions the names of two people regarding the times when recalculating the limits is appropriate.

He uses simple logic and not quotes from dead guys. The purpose of a control chart is to predict the future behavior of a stable process - this is called the baseline chart. Shewhart, Deming, Ott, Montgomery, Grant & Leavenworth, et al. have all said it, because it makes logical and empirical sense. Even if you ask ChatGPT or use Google’s AI summary you will get the same answer: determine the baseline of the stable process and set the limits, then plot future data against those limits to determine whether the process is still stable or has changed. I don’t remember the exact pages and quotes because I read them years ago…

I too have experimented with this concept and know from real life data (not simulations) that continually recalculating the limits is a slippery slope: it quietly absorbs slow increases in variation and thus misses slow changes. This is only logical.
 
The article from May 1999 which Bev D cited is no longer available from the QualityMag website. I attached a copy I found on an earlier posting:
Donald Wheeler wrote an article entitled "Can I Have Sloping Limits" for Quality Magazine back in 1999. I have attached a copy of it.

There was also another good article: Sarkar, Ashok, and Pal, Surajit, “Process Control and Evaluation in the Presence of Systematic Assignable Cause”, Quality Engineering, Volume 10, Number 2, 1997, pp. 383-388
but I don't have an electronic copy of it.

I haven't personally investigated the problems cited by Steve P, but I have used these approaches for years in machining operations (stamping, milling, lathes, broaches, etc.) and they work... one of the most value-added applications of on-line control charts in my experience...

Here is another earlier thread "Control charts where the center line is a trend or slope".
First - control charts aren't really hypothesis tests, although I get the gist of what you are saying. They are useful only when they help us with the economic control of quality.

And a control chart with sloping limits has many useful applications. Precision machining is only one area where it might be useful. The hi/lo chart is simple and therefore useful in controlling over-adjustment in the presence of tool wear. But there are a myriad of situations, and nothing is ever one-size-fits-all.

I also found this Wheeler article When the XmR Chart Doesn’t Seem to Work, which makes this important distinction "When the limits seem to be too wide to be practical, it is important to determine whether this is because the data are full of noise, or because they are full of signals." XmR chart methodology is based on rational subgrouping. IMO, a practitioner has to know enough about the physical process and sources of variation to perform SPC analysis within proper context.
 

Here is the seminal reference regarding a fixed set of control limits.

Walter Shewhart, Statistical Method from the Viewpoint of Quality Control:

P. 28: “These limits are to be set so that when the observed quality of a piece of product falls outside them, even though the observation be still within the (specification) limits, L1 and L2, it is desirable to look at the manufacturing process in order to discover and remove, if possible, one or more causes of variation that need not be left to chance.” (Original emphasis)

P. 35, point iii: Shewhart discusses when to change the limits. Why would there be a discussion of changing the limits if he intended them to be continually changed? The intent is to set or fix the limits and only change them when an improvement has been made: “From time to time the control limits must be revised as assignable causes are found and eliminated”.

Also in this section Shewhart discusses the centerline as the average of the stable period from which the upper and lower control limits are calculated. This center line serves as a ‘target’ about which the future variation is expected to move. Any change in the actual average in the future indicates an assignable cause in the process. Therefore the center line is also set or fixed (or else you have no target for comparison).

Every subsequent reference to setting the limits and not continually recalculating them stems from this seminal document.
 