yii,
Without getting into the statistical proofs, the reason involves the central limit theorem. If you recall Shewhart's experiments, we find that if many samples of any sample size n are taken from a universe, the averages (X-bar values) of the samples will form a frequency distribution, and the average of the averages (X-double-bar) of that distribution will tend to be near the average of the universe (mu). The spread of the X-bar values will depend on the spread of the universe and on the sample size, with the spread of the X-bar values becoming smaller as n gets larger. In the long run the standard deviation of the X-bar values will be the standard deviation of the universe divided by the square root of the sample size. This holds regardless of the shape of the universe.
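If you want to see this for yourself, here is a quick numerical sketch: it draws many samples of size n from a deliberately non-normal universe (an exponential distribution, chosen only for illustration) and checks that the standard deviation of the X-bar values comes out near sigma divided by the square root of n. The distribution, seed, and sample counts are all made-up choices, not anything specific to Shewhart's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 25               # subgroup (sample) size
num_samples = 20000  # number of subgroups drawn

# Exponential universe with scale 1: mu = 1.0, sigma = 1.0 (and clearly not normal)
samples = rng.exponential(scale=1.0, size=(num_samples, n))

xbars = samples.mean(axis=1)   # the X-bar value for each sample

print(xbars.mean())   # close to mu = 1.0 (the average of the averages)
print(xbars.std())    # close to sigma / sqrt(n) = 1.0 / 5 = 0.2
```

Try changing the universe to a uniform or heavily skewed distribution; the 1/sqrt(n) relationship for the spread of the X-bar values holds either way.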
Now with that in mind, the subgroup size for a control chart is basically an economic decision. We choose a "rational subgroup" so that the variation within the units is small. If the variation within a subgroup represents the piece-to-piece variability over a short period of time, then unusual variation between subgroups will reflect changes in the process that should be investigated. The practical problem we face is whether to take larger samples less frequently or smaller samples more frequently. The longer we wait between samples, the longer the process may run in an out-of-control condition if an assignable cause enters the picture. Keep in mind also that as the sample size increases, the control limits get tighter, moving toward the central line of the chart.
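That last point is easy to see from the standard 3-sigma limits for the X-bar chart, mu +/- 3*sigma/sqrt(n). Here is a small sketch computing them for a few subgroup sizes; the mu and sigma values are hypothetical numbers picked just for illustration.

```python
import math

mu, sigma = 10.0, 2.0   # hypothetical process mean and standard deviation

# 3-sigma control limits for the X-bar chart at several subgroup sizes:
# the half-width 3*sigma/sqrt(n) shrinks as n grows, pulling UCL and LCL
# in toward the central line (mu).
for n in (4, 9, 16, 25):
    half_width = 3 * sigma / math.sqrt(n)
    print(f"n={n:2d}: LCL={mu - half_width:.2f}, UCL={mu + half_width:.2f}")
```

Running this shows the limits tightening from 7.00/13.00 at n=4 to 8.80/11.20 at n=25, which is why larger subgroups detect smaller process shifts, at the cost of sampling more pieces each time.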
If you want more information on the statistics involved try these texts:
Quality Control and Industrial Statistics by Acheson Duncan
Statistical Process Control by the Automotive Industry Action Group (AIAG)
Statistical Quality Control by Grant and Leavenworth
Hope this helps.
Regards,
Rick