How often should we look at the data in Control Charts? Mainly Xbar and R charts?

Edwards215

Hello All,

This is my first time posting here. I am pretty new to using SPC and wanted to know more about control charts. I understand how control charts are made, mostly Xbar and R charts, but I wanted to know how often you refer to them. Is it something that gets updated continuously to see if there is any deviation in the process, or something to be looked at only at the end of the month?

I understand that if a process is updated, a new setup is used, or a new instrument is brought in, then the control charts need to be updated to reflect the changes.
 
Understood. So it should only be done once the update is performed, and then we observe the new stabilized state?
Not exactly. Each control chart should be continuously monitored for out-of-control conditions in order to initiate actions to bring the process back into statistical control. If there is no out-of-control condition, then the process should not be ‘tweaked’. The period of checking is the time frame for each subgroup…for example, if the subgroup frequency is every 2 hours, then the chart is ‘monitored’ every 2 hours. If the subgroup frequency is every batch/lot, then the chart is ‘monitored’ every batch/lot…
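To make the "check every subgroup" idea concrete, here is a minimal Python sketch. The function name and the limit values are illustrative assumptions, not from any particular software; in practice the limits would come from your established baseline chart.

```python
# Minimal sketch: check one new subgroup against pre-established control
# limits as soon as it is added (i.e., at the subgroup frequency).
# Only the basic "point beyond limits" rule is shown; real SPC software
# applies several run rules as well.

def check_subgroup(values, xbar_limits, r_limits):
    """Return a list of out-of-control signals for a single subgroup."""
    xbar = sum(values) / len(values)          # subgroup average
    r = max(values) - min(values)             # subgroup range
    signals = []
    if not (xbar_limits[0] <= xbar <= xbar_limits[1]):
        signals.append(f"Xbar {xbar:.3f} outside {xbar_limits}")
    if not (r_limits[0] <= r <= r_limits[1]):
        signals.append(f"R {r:.3f} outside {r_limits}")
    return signals

# Example: a subgroup of 5 measurements checked against illustrative limits
print(check_subgroup([10.2, 10.1, 10.4, 10.0, 10.3],
                     xbar_limits=(9.8, 10.5), r_limits=(0.0, 0.9)))
```

An empty list means no signal, so no action is taken and the process is left alone, exactly as described above.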

If major changes are made to the process, then engineers typically monitor the control chart for a period of time to ensure that the process has not deteriorated and that the intended improvements were in fact achieved. Control limits are only revised when an INTENDED improvement is made and has demonstrated stable improvement.
 
Thank you for the explanation.

This might be a stupid question, but when you say continuously monitor, does it mean on a day-to-day basis or more batch to batch?
 
Subgroup to subgroup. Every time a new subgroup is added, someone has to look at the chart. This can range from the operator/inspector who adds the data point to a software monitor that checks the rules as every new point is added. So it depends on your subgrouping scheme, as explained above.

I guess I’m not really sure what your concern is…perhaps you are unclear about how to subgroup and the frequency of sampling? Can you give us an example?
 
perhaps you are unclear about how to subgroup and the frequency of sampling?
Many users new to SPC focus on the points and lines on the charts, with little awareness of what happens under the hood. This is because the calculations are either fully automated by software or, in the case of manually plotted charts, simple arithmetic with constants drawn from a table.

A key aspect of control chart theory, what makes it work, is the concept of rational subgroups. By "work", I mean that the chart gives meaningful, useful information. Subgroups are rational because the sampling scheme is rationally designed to collect parts of interest. Statistical software, or the table of constants, does not give the user any guidance on setting up rational subgroups, so some understanding of rational subgroups is important for SPC practitioners to make wise decisions.

In a nutshell, the SPC chart compares variation within subgroups (depicted by the points on the R or S chart), and between subgroups (depicted by the points on the Xbar chart), to boundaries of what is expected from a stable, repeatable process. The boundaries are computed from an initial baseline set of data, and are depicted as control limit lines.
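As a rough illustration of that baseline calculation, here is a short Python sketch. It assumes subgroups of size 5; the constants A2, D3, and D4 are the standard table values for that subgroup size, and the function name is my own.

```python
# Sketch of the Xbar/R control limit calculation from a baseline set of
# subgroups, assuming subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114  # standard table constants for n = 5

def xbar_r_limits(subgroups):
    """Return (LCL, center line, UCL) for the Xbar chart and the R chart."""
    xbars = [sum(g) / len(g) for g in subgroups]     # within-subgroup averages
    ranges = [max(g) - min(g) for g in subgroups]    # within-subgroup ranges
    xbarbar = sum(xbars) / len(xbars)                # grand average
    rbar = sum(ranges) / len(ranges)                 # average range
    return {
        "xbar": (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar),
        "r": (D3 * rbar, rbar, D4 * rbar),
    }
```

The within-subgroup ranges drive Rbar, which in turn sets the width of both sets of limits, which is exactly why the rational subgrouping discussed below matters so much.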

Subgroups are important in order to assess the background variation between parts when the parts are mostly alike.
Sample pieces forming a subgroup are often collected consecutively or from the same batch because, generally, consecutive pieces are more likely to be alike (i.e., less variation).

But there may be other significant sources of variation besides time that dictate different sampling schemes. For example, with an injection mold that has four cavities, where the four pieces from a single shot drop together out of the machine, a better sampling scheme would be to separate the parts from each of the four cavities into four separate SPC analyses. By better, I mean more useful, meaningful information.
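As a hypothetical sketch of that cavity-separation idea (the data and names here are invented for illustration), each shot's measurements can be routed into a separate stream per cavity, and each stream then gets its own chart:

```python
# Hypothetical sketch: route each shot's four measurements into separate
# per-cavity data streams so that each cavity is charted on its own.
from collections import defaultdict

shots = [  # each shot yields one part per cavity (illustrative values)
    {"cav1": 10.1, "cav2": 10.4, "cav3": 9.9, "cav4": 10.2},
    {"cav1": 10.0, "cav2": 10.5, "cav3": 9.8, "cav4": 10.3},
]

streams = defaultdict(list)
for shot in shots:
    for cavity, value in shot.items():
        streams[cavity].append(value)

# Subgroups are then formed within a cavity (e.g., consecutive shots),
# never across cavities, so cavity-to-cavity differences do not inflate
# the within-subgroup variation.
```

Mixing the four cavities into one subgroup would bake the cavity-to-cavity offsets into the "background" variation, widening the limits and hiding real signals.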

Similarly, if subgroups, or the initial baseline data set, combine data from parts produced by different machines, different batches of raw material, or different operating or environmental conditions, the background variation will likely be inflated, which could diminish your ability to recognize signals of change in the data stream and react appropriately.

The frequency of sampling is a tradeoff between the cost of sampling on one hand, and how frequently you expect changes to occur in the process and how quickly you want to detect a signal and respond (i.e., the cost of delay) on the other.
 
All of that is true, and of course there is a lot more to rational subgrouping - it’s not easy or simple.

BUT the OP’s question is how often to review a chart. Our answer has been to monitor every subgroup (without regard to whether or not the subgrouping and sampling frequency are correct). Do you have a different perspective on that simple question?

Then the OP can ask other, more important - and complicated - questions.
 
Thank you for the responses Bev and John.

So, the situation I have at hand is that we have two operations being performed on a tube: first it undergoes bending, and then laser cutting. The issue is the variability in the bending process; even though it is within the limits of the design specification, the way the tube sits on the fixtures changes, which throws off the program in the machine (the positions of the holes).

Whenever that happens, we make changes to the program to bring the alignment within the limits.

Now, when I was looking through the Xbar control chart, it gave me the idea that I could use it to understand the trend happening in that run (or in the batch being manufactured) and be on top of it before it happens.
 