Perhaps you are unclear about how to form subgroups and how often to sample?
Many users new to SPC focus on the points and lines on the charts, with little awareness of what happens under the hood. That is understandable, because the calculations are either fully automated by software or, in the case of manually plotted charts, simple arithmetic with constants drawn from a table.
A key aspect of control chart theory, and what makes it work, is the concept of rational subgroups. By "work", I mean that the chart gives meaningful, useful information. Subgroups are rational because the sampling scheme is rationally designed to collect the parts of interest. Neither statistical software nor the table of constants gives the user any guidance on setting up rational subgroups, so some understanding of rational subgroups is important for SPC practitioners to make wise decisions.
In a nutshell, an SPC chart compares the variation within subgroups (depicted by the points on the R or S chart) and between subgroups (depicted by the points on the Xbar chart) against the boundaries expected from a stable, repeatable process. Those boundaries are computed from an initial baseline set of data and are drawn as control limit lines.
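To make the under-the-hood arithmetic concrete, here is a minimal sketch of the standard manual Xbar-R calculations for subgroups of size 4. The measurement values are invented for illustration; the constants A2, D3, and D4 are the usual tabled values for subgroup size n = 4.

```python
# Sketch of X-bar / R control-limit arithmetic for subgroups of size 4.
# Data values are made up; A2, D3, D4 are the standard tabled constants
# for subgroup size n = 4.

A2, D3, D4 = 0.729, 0.0, 2.282

# Baseline data: each inner list is one rational subgroup of 4 parts.
baseline = [
    [10.2, 10.4, 10.1, 10.3],
    [10.0, 10.5, 10.2, 10.2],
    [10.3, 10.1, 10.4, 10.0],
    [10.2, 10.2, 10.3, 10.1],
]

xbars = [sum(g) / len(g) for g in baseline]    # within-subgroup means
ranges = [max(g) - min(g) for g in baseline]   # within-subgroup ranges

xbarbar = sum(xbars) / len(xbars)   # grand mean -> X-bar chart center line
rbar = sum(ranges) / len(ranges)    # average range -> R chart center line

# Control limits, as in the manual-chart formulas with tabled constants
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar

print(f"X-bar chart: CL={xbarbar:.3f}, LCL={lcl_x:.3f}, UCL={ucl_x:.3f}")
print(f"R chart:     CL={rbar:.3f},  LCL={lcl_r:.3f}, UCL={ucl_r:.3f}")
```

Notice that the Xbar limits are driven by R-bar, the average within-subgroup range; this is why the within-subgroup variation must reflect only background, common-cause variation for the chart to work.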
Subgroups are important because they let you assess the background variation between parts when the parts are expected to be mostly the same.
Sample pieces forming a subgroup are often collected consecutively or from the same batch because, generally, consecutive pieces are more likely to be alike (i.e., to show less variation).
But there may be other significant sources of variation besides time that dictate different sampling schemes. For example, consider an injection mold with four cavities, where the four pieces from a single shot drop out of the machine together. A better sampling scheme would separate the parts by cavity into four separate SPC analyses. When I say better, I mean the charts give more useful, meaningful information.
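The cavity scheme above can be sketched as a simple regrouping step. The cavity IDs and measurements here are invented for illustration; the point is only that each cavity ends up with its own time-ordered data stream, to be charted separately.

```python
# Hypothetical illustration: separating a mixed stream of molded-part
# measurements into one data stream per cavity, so each cavity can be
# charted on its own. Cavity IDs and values are invented.
from collections import defaultdict

# Each shot drops four parts, one per cavity: (cavity_id, measurement).
shots = [
    [(1, 10.21), (2, 10.35), (3, 10.18), (4, 10.29)],
    [(1, 10.19), (2, 10.37), (3, 10.20), (4, 10.31)],
    [(1, 10.23), (2, 10.33), (3, 10.17), (4, 10.28)],
]

streams = defaultdict(list)  # cavity_id -> that cavity's ordered stream
for shot in shots:
    for cavity, value in shot:
        streams[cavity].append(value)

# Each stream now feeds its own chart; subgroups would be formed from
# consecutive shots within a single cavity, not across cavities.
for cavity in sorted(streams):
    print(f"cavity {cavity}: {streams[cavity]}")
```

Treating one shot of four parts as a subgroup would instead mix cavity-to-cavity differences into the within-subgroup variation, which is exactly what rational subgrouping tries to avoid.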
Similarly, if the subgroups, or the initial baseline data set, combine data from parts produced by different machines, different batches of raw material, or different operating or environmental conditions, the background variation will likely be inflated, which diminishes your ability to recognize signals of change in the data stream and react appropriately.
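A small numerical example shows the inflation effect. The numbers are hypothetical: two machines, each very consistent around its own mean, but with the two means offset from each other.

```python
# Hypothetical numbers showing how mixing two process streams inflates
# the apparent background variation. Each machine varies little around
# its own mean, but the two means differ by about 0.5.
from statistics import stdev

machine_a = [10.00, 10.02, 9.98, 10.01, 9.99]    # centered near 10.0
machine_b = [10.50, 10.52, 10.48, 10.51, 10.49]  # centered near 10.5

mixed = machine_a + machine_b

print(f"within machine A: sd = {stdev(machine_a):.3f}")
print(f"within machine B: sd = {stdev(machine_b):.3f}")
print(f"mixed together:   sd = {stdev(mixed):.3f}")
```

The mixed standard deviation comes out more than ten times larger than either machine's own. Control limits computed from the mixed baseline would be far too wide, so a real shift in either machine could sit comfortably inside the limits and go undetected.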
The frequency of sampling is a tradeoff between the cost of sampling on one side and, on the other, how often you expect changes to occur in the process and how quickly you want to detect a signal and respond (i.e., the cost of delay).