Multi Cavity Sampling Plan - MSPs (Multi-Stream Processes)


Scott McDonald

Good or Bad Sampling Plan

I have a question about whether a customer of ours is using an appropriate sampling method when it comes to accepting or rejecting one of our lots.

Many of the parts we make come from multi-cavity tooling. These are the ones that generate the most non-conformity reports requiring a corrective action response. When a multi-cavity tool is approved into production we always furnish capability studies of each individual cavity and aim for a Cpk of 1.67 or greater on the initial sampling. Everything looks good up until we send in our first production lot, where the customer pulls 20 random samples drawn from several to all of the cavities.

They run their results through a program that implements ANSI/ASQC Z1.9-1993, Double Specification Limits, Variability Unknown (Standard Deviation Method), Normal inspection, AQL = 0.15%, inspection level S-3, sample size = 20. Given the randomness of the samples and the dispersion of the different cavity averages, the program continually estimates a small percentage either over or under a spec limit, generating a non-conforming material report with a request for corrective action. We are working with a very tight tolerance of +/-.0008", so of course we're splitting hairs here.
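For anyone curious about the mechanics, here is a rough sketch of the quantity a variables plan like this estimates from a sample. All numbers below are made up for illustration, and the real Z1.9 tables use exact small-sample factors, so their answers will differ slightly from this large-sample normal approximation:

```python
from statistics import NormalDist

def estimated_fraction_nonconforming(xbar, s, lsl, usl):
    """Normal-theory estimate of the fraction outside spec, the
    quantity a variables plan compares against its acceptance limit."""
    nd = NormalDist()
    q_upper = (usl - xbar) / s   # standardized distance to the upper limit
    q_lower = (xbar - lsl) / s   # standardized distance to the lower limit
    return nd.cdf(-q_upper) + nd.cdf(-q_lower)

# Hypothetical lot, measured as deviation from nominal in inches:
# a mixed-cavity sample both shifts the mean and inflates the spread.
p = estimated_fraction_nonconforming(xbar=0.0002, s=0.00025,
                                     lsl=-0.0008, usl=0.0008)
print(f"estimated nonconforming: {p:.3%}")  # prints: estimated nonconforming: 0.823%
```

Note that even with every measured part inside the limits, a mean offset plus a wide sample standard deviation can push this estimate well past a 0.15% AQL.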

Is their method correct for this situation?

This customer generates 70% of our sales, and I have a hard time telling them that this may not be an appropriate sampling method for the situation. It is starting to cause some animosity between us.

Can anyone help me with this situation or at least guide me to a publication or articles that discuss this in more detail?
 

Al Dyer

Are they using a certain subgroup within the 20 piece sample?

A 20 piece sample sure seems way too low to generate long term capability.

Have you generated your own long term data using random samples from all cavities with say a subgroup of 5 pieces and a sample size of 500?

Is tool wear a problem that needs to be addressed through MTBF (Mean Time Between Failures) studies?

Maybe a little more info so our stat gurus can jump into action? Atul?
 

Atul Khandekar

Scott,

Welcome!

Now that Al has pushed the ball into my court, I'll have a first go at it. (sitting at home without any of my references around, a good 2 hours past midnight on a Sunday!)

The customer's specifications are for the product, regardless of the cavity it comes from. From that point of view, it is difficult to say the sampling plan is wrong. It may be argued that a sample size of 20 is not adequate. Again, IMHO, subgrouping is not done while doing acceptance sampling.

That brings us to the SPC being done on individual cavities. Initial capability study shows a good Cpk. What type of charts are being used to monitor the process(es) after that? What do they indicate? Have you studied the variation between cavities? At first sight, it looks like you need to identify the variation between cavities and try to reduce it. Tool wear could certainly be one of the causes.
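To make the within/between distinction concrete, here is a minimal sketch (with made-up measurements) of separating within-cavity spread from cavity-to-cavity spread:

```python
import statistics

# Hypothetical measurements per cavity (deviation from nominal, inches)
cavities = {
    1: [0.0001, 0.0002, 0.0001, 0.0003, 0.0002],
    2: [-0.0002, -0.0001, -0.0003, -0.0002, -0.0001],
    3: [0.0004, 0.0003, 0.0005, 0.0004, 0.0003],
    4: [0.0000, 0.0001, -0.0001, 0.0000, 0.0001],
}

# Within-cavity variation: average of each cavity's sample variance
within_var = statistics.mean(
    statistics.variance(vals) for vals in cavities.values())

# Between-cavity variation: variance of the cavity means
means = [statistics.mean(v) for v in cavities.values()]
between_var = statistics.variance(means)

print(f"within-cavity sd : {within_var**0.5:.6f}")   # 0.000084
print(f"between-cavity sd: {between_var**0.5:.6f}")  # 0.000238
```

When the between-cavity number dominates, as in this made-up data, a mixed-cavity sample will look far more spread out than any single cavity's control chart suggests.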

For SPC on MSPs (Multi-Stream Processes), I have these web references (from my Favorites folder):

http://www.qualityamerica.com/knowledgecente/articles/RUNGERmsp1.htm
http://www.qualityamerica.com/knowledgecente/articles/RUNGERmsp2.html

I faintly remember having read somewhere about M/I (Median/Individual) charts being used for multi-cavity types of processes. I will try to look that up again.

Maybe I will take another go at this later.
-Atul.
 

Scott McDonald

RE: Good or Bad Sampling Plan

Al and Atul,

Thanks for your quick replies to my questions. This is my first shot at using Caymen Cove, so if I'm doing something wrong as far as answering the questions you posed, let me know.

Al's questions -

Q. Are they using a certain subgroup within the 20 piece sample?
A. We save the parts that we inspect during a given run and then ship those to the customer in a separate box with the lot. They then pull their 20 samples from these QC samples.

Q. A 20 piece sample sure seems way too low to generate long term capability. Have you generated your own long term data using random samples from all cavities with say a subgroup of 5 pieces and a sample size of 500?
A. No I have not done this. I have plenty of data I could use because we generate individual control charts for each cavity.

Q. Is tool wear a problem that needs to be addressed through MTBF (Mean Time Between Failures) studies?
A. Tool wear is not an issue in this case. The variation in the cavity averages comes from each of the tooling cavities being slightly different from the others.

Atul's questions -

What is meant by IMHO in your statement about acceptance sampling? What does this acronym stand for?

Q. That brings us to the SPC being done on individual cavities. Initial capability study shows a good Cpk. What type of charts are being used to monitor the process(es) after that? What do they indicate?
A. During production we use standard X-bar/R charts in our SPC software, with a chart set up for each one of the cavities. I typically use just one of the cavities as the control cavity for process control and monitor the other three. Any adjustments made to the process stem from the control cavity. The cavity I usually choose as the "control cavity" is the one that showed the most variation during the initial studies.

Q. Have you studied the variation between cavities? At first sight, it looks like you need to identify the variation between cavities and try to reduce it. Tool wear could certainly be one of the causes.
A. As I mentioned in answering Al's question, it is not tool wear causing the cavity-to-cavity variation, but the fact that the cavities in the tooling are not the same size as each other. These differences are small enough that having the tool reworked to bring them closer together has sometimes made things worse and pushed them further apart. Changing steel by only .0002" can be very tricky.

Thanks for the web site info, I'll check them out.
 

Atul Khandekar

Are you actually getting lots rejected by the customer, or is it just a calculated (estimated) PPM level?

Tool wear would cause the process to drift to one side. But if you get rejections sometimes over the USL and sometimes under the LSL, then this is a randomly shifting process and you may have to look at the charts closely.

Since you are working with about 40 microns tolerance, please also make sure that you don't have any MSA problems at your end as well as at the customer's place. (This is difficult to argue with a hostile 70% customer :(). Also ensure that (even with software used) there is no loss of precision in any of the calculation steps due to rounding off the numbers.

Acceptance/rejection in your case uses AQL-based sampling plans. The choice of a particular plan is another issue that is difficult to argue about with the customer. An excellent reference available on the web is the 'AQL Primer' at: http://www.samplingplans.com/aqlprimer.htm

I would also suggest you go to the SQC Online site:
http://sqconline.com

where you can actually enter your values and estimate % non-conforming.

-Atul.

PS: IMHO = 'In My Humble Opinion'
 

Al Dyer

It sounds like you have done the correct initial studies, but I guess the customer, who is always right, is finding fallout. Match your results with theirs and run with it.

I once worked in an industry (gaskets) where 50 cavity tools were the norm. In these situations we took a 5 piece sample every hour and charted on a run chart.

I guess you really have to look at the end use of the product and make the needed adjustments.

Al....
 

Dave Strouse

Refer to rule 1

Scott -
Rule one is "your customer is always right"
However, they may need help to understand what right is.
A couple of thoughts-

1) How did you assess a Cpk of 1.67?
You are probably aware that this number has a large confidence interval associated with it. To find the CI for Cpk it is usually necessary to use resampling and bootstrap techniques. However, I have tables that show the CI for Cp (much easier to estimate than for Cpk) varying from 0.55 to 1.45 on a sample of 10, and from 0.86 to 1.14 on a sample of 100. You might have a cavity that truly is out of capability.
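For what it's worth, the standard chi-square interval for Cp reproduces figures like those. A sketch (assuming scipy is available; the interval formula itself assumes normal data):

```python
from scipy.stats import chi2

def cp_confidence_interval(cp_hat, n, conf=0.95):
    """Two-sided CI for Cp, based on the chi-square distribution
    of the sample variance (valid only for normal data)."""
    df = n - 1
    alpha = 1 - conf
    lower = cp_hat * (chi2.ppf(alpha / 2, df) / df) ** 0.5
    upper = cp_hat * (chi2.ppf(1 - alpha / 2, df) / df) ** 0.5
    return lower, upper

for n in (10, 100):
    lo, hi = cp_confidence_interval(1.0, n)
    print(f"n={n}: Cp in [{lo:.2f}, {hi:.2f}]")
# prints:
# n=10: Cp in [0.55, 1.45]
# n=100: Cp in [0.86, 1.14]
```

With a nominal Cp of 1.0, the 95% intervals come out to roughly 0.55 to 1.45 at n=10 and 0.86 to 1.14 at n=100, so a 1.67 Cpk from a small initial study carries a lot of uncertainty.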
2) How do you know the non-conformances are related to varying means between cavities? Have you partitioned the sample by cavity and modeled it individually? Or are you just assuming (remember what "to assume" means!) that this is the root cause?

A suggestion to proceed.

First, clean your side of the street. Review the capability studies and conduct within-cavity studies with sufficient power to ensure your within-cavity variation is not likely to be the problem. Get enough samples to nail it down.

Model the customer's procedure, i.e. take random twenty-piece samples from within a single cavity and see if they will always pass. You can probably use data selected randomly from the capability study.
Also model random cavity mixtures from the same data and see if that reproduces the customer's rejection. If so, you know that mixing cavities is contributing to the problem.
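That kind of simulation is easy to sketch. The cavity means, within-cavity sigma, and acceptance rule below are all assumptions for illustration, with a normal-theory estimate standing in for the actual Z1.9 table lookup:

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(1)
LSL, USL = -0.0008, 0.0008   # +/-0.0008" tolerance around nominal
AQL = 0.0015                 # 0.15%, as in the customer's plan

# Hypothetical per-cavity means and a common within-cavity sd
cavity_means = [0.0003, -0.0003, 0.0002, 0.0000]
within_sd = 0.00012

def est_nonconforming(sample):
    """Normal-theory estimate of the fraction outside spec
    (a rough stand-in for the Z1.9 table lookup)."""
    xbar, s = mean(sample), stdev(sample)
    nd = NormalDist()
    return nd.cdf(-(USL - xbar) / s) + nd.cdf(-(xbar - LSL) / s)

def mixed_sample(n=20):
    """n pieces drawn at random across all cavities, as the customer pulls."""
    return [random.gauss(random.choice(cavity_means), within_sd)
            for _ in range(n)]

def single_cavity_sample(mu, n=20):
    """n pieces from one cavity only."""
    return [random.gauss(mu, within_sd) for _ in range(n)]

trials = 2000
mixed_fail = sum(est_nonconforming(mixed_sample()) > AQL
                 for _ in range(trials)) / trials
worst_fail = sum(est_nonconforming(single_cavity_sample(0.0003)) > AQL
                 for _ in range(trials)) / trials
print(f"mixed-cavity samples flagged: {mixed_fail:.0%}")
print(f"worst single cavity flagged:  {worst_fail:.0%}")
```

With these made-up numbers the mixed-cavity samples get flagged far more often than samples from even the worst single cavity, which is exactly the mixture effect being described.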

Now you are in a better position to work out a reasonable plan with the customer.

The customer's plan has a few problems also, I think. First, the lot size should determine the sample size, and while I don't have my ANSI Z1.9 here, I do have some other books on acceptance sampling, and I believe the lot size for a sample of 20 would be between 500 and 800 pieces. I suspect you are using larger lots. Also, S-3 is usually reserved for destructive tests and/or other circumstances where larger sampling is prohibited and the receiving authority deems the risk of loss of discrimination to be acceptable.

I am now going to state something that will be very hard to deal with and that I have not brought up before. Both the estimation of capability by the Cpk method and acceptance sampling by variables are HEAVILY dependent on the assumption that the underlying distribution is normal.

This is almost certainly NOT the case for you. Any injection molded part will be truncated by the metal and (neglecting swell, which is usually negligible) cannot be as large as a prediction based on the normality assumption would allow. Usually, if overall capability is high enough we can get away with this simplification, but in your case maybe not. If you can get nowhere with the plan above, please consult competent statistical help.
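As a quick screen for that, a goodness-of-fit test on within-cavity data can flag a truncated tail. A sketch with simulated, made-up data, using scipy's Shapiro-Wilk test:

```python
import random
from scipy.stats import shapiro

random.seed(2)

# Hypothetical within-cavity data: underlying normal variation, but
# the steel physically caps how large the part can get, truncating
# the upper tail at +0.0005" over nominal (all values are assumptions).
raw = [random.gauss(0.0, 0.0003) for _ in range(200)]
capped = [min(x, 0.0005) for x in raw]

stat, sw_p = shapiro(capped)
print(f"Shapiro-Wilk W = {stat:.4f}, p = {sw_p:.4f}")
# A small p-value says the data look non-normal, in which case the
# tail percentages from a normal-theory plan like Z1.9 are suspect.
```

This is only a screen; what to do about a confirmed non-normal distribution is the point where, as said above, competent statistical help earns its keep.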

Hope this helps.
 

Sam

Working with product that is processed with molds and extrusions can be very difficult. There are a lot of variations to consider: raw material, machine capability, die capability, speeds & feeds, temperature (operating & ambient) and the operator.
We had a similar situation with a customer that purchased our fuel filter clips. It was always the same thing: each time the customer got a shipment they would discover a handful of "short shots" in a box. And their statement was that "when we get a box of clips we expect them all to be good".
Quantity per box was 10,000; by our customer's definition, a handful was approximately 15 to 20 bad clips per box.

So it was off to the races.
Our on-going monitoring was two five-piece samples in the morning and two five-piece samples in the afternoon; the same plan was used for our second shift. Machine cycle time was 30 sec. for a 36 cavity die, approx. 4,300 pcs/hour.
We ran two charts, one for a critical dimension and one for average weight. We then decided to go to an hourly sample (WAG).
We selected five five-piece samples each hour. We ran for three days (another WAG) and evaluated our findings. As expected, we found some shift-to-shift variation. This was attributed to the second-shift operator making adjustments to the material feed.
But the big difference was found in the information collected in the morning. There was a noticeable variation in weight from the first sample to the third and fourth samples.
We were taking our first sample at 7:00A; our machine operator starts work at 6:00A, so we decided to come in early and take a sample. It turns out we didn't need to: the operator immediately started the machine and ran parts. Without allowing the machine to warm up, we were producing our defective product during the first hour of each morning.
We later found that maintenance was supposed to start the machines at 5:30A to allow them to reach operating temperature, but due to a shortage of people that step had been eliminated.

We added that to our "lessons learned" folder.
 

Atul Khandekar

Dave,
Great response!

As an aside, I have been looking for some good references on the bootstrap method. Can you suggest a source?
-Atul.
 

Dave Strouse

Bootstrapping

Atul -
Juran's handbook, 5th edition, has a good section in the part on basic statistical methods.
It's written by Prof. Dudewicz from the Syracuse math department. He seems to have done a lot of research in this area, as several of the references he cites at the end of section 44 are his. His main interest seems to be in fitting distributions from small samples using something called the "Extended Generalized Lambda Distribution".
Hot stuff, huh!
Guess I need a life!
 