Six Sigma Project - Cost Reduction of ATE Process

wing_a


Hi all,

We're a company engaged in manufacturing power supplies. After a unit is assembled, ATE equipment is used to test the unit's ordinary functions, such as output voltage/frequency, output segment tests, overload protection, response to remote commands, etc. The test runs automatically under the control of a Visual Basic program after the operator connects the cables and pushes the "run" button. Now we want to perform a Six Sigma project to reduce the cost of this test process by reducing the cycle time (in other words, by increasing test efficiency). The key point is to optimize the VB program. Of course, the precondition is not to reduce the detection ability of the test system.
We have encountered some problems: How do we measure the process? How do we measure the project's success? What metrics should we choose? If we define the response of this study to be "Tested Units per Unit Time" or "Test Time per Unit", how do we then calculate the process capability (Cp or Cpk)? How do we identify the Critical-to-Cost factors for this project?

Any assistance will be greatly appreciated. I will be very grateful if anyone can send support or suggestions to [email protected].

Thanks and regards,
Wing
 

Marc

Just read this. I know it's old and never really got an answer, but any takers? Any comments, thoughts?
 
Ravi Khare

You will not be able to calculate Cp and Cpk unless you have spec limits. Also, here we have a one-sided tolerance.

I would suggest you use Taguchi's signal-to-noise ratio (smaller-the-better type) on the "test time per unit". You can evaluate this over batches of 10 or more units. These ratios do not require spec limits and work on the principle of rewarding improvement (or penalizing loss) in a quadratic manner.
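For reference, the smaller-the-better S/N ratio is S/N = -10 * log10((1/n) * sum(y_i^2)). Here is a minimal sketch in Python (the thread's test program is in Visual Basic, so this is only to show the calculation; the batch of test times is made-up illustration data):

    import math

    def sn_smaller_is_better(times):
        """Taguchi smaller-the-better S/N ratio: -10*log10(mean of y^2).
        A larger (less negative) S/N means shorter, more consistent times."""
        mean_square = sum(y * y for y in times) / len(times)
        return -10.0 * math.log10(mean_square)

    # Hypothetical batch of 10 "test time per unit" readings, in seconds
    batch = [62.1, 59.8, 61.5, 60.2, 63.0, 58.9, 60.7, 61.2, 59.5, 60.4]
    print(f"S/N (smaller-the-better): {sn_smaller_is_better(batch):.2f} dB")

Because the loss is quadratic, the ratio improves when either the average test time or its spread is reduced, which matches the project goal.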

Details on S/N ratios can be found in any standard published work on Taguchi's methods.
 
Darius

As a capability index in the case of a one-sided spec, consider Cpmk. Cp only quantifies the variation, not the position, and Cpk has unwanted effects in one-sided spec cases: Cpk can look better with lower variation even when the process is farther from the target.
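To make the difference concrete, here is a sketch for a one-sided upper spec (a maximum allowed test time). The spec limit, target, and data below are made-up values for illustration; Cpmk uses the Cpk numerator over a denominator that also penalizes distance from the target T:

    import statistics

    def capability_upper_spec(data, usl, target):
        """Cpk vs. Cpmk for an upper-spec-only characteristic.
        Cpk ignores the target; Cpmk shrinks as the mean drifts from T."""
        mu = statistics.mean(data)
        sigma = statistics.stdev(data)
        cpk = (usl - mu) / (3 * sigma)
        cpmk = (usl - mu) / (3 * (sigma**2 + (mu - target)**2) ** 0.5)
        return cpk, cpmk

    # Hypothetical per-unit test times (s), USL = 90 s, target = 60 s
    times = [62.0, 58.5, 61.2, 59.9, 63.4, 60.8, 57.6, 61.9, 60.1, 59.3]
    cpk, cpmk = capability_upper_spec(times, usl=90.0, target=60.0)
    print(f"Cpk = {cpk:.2f}, Cpmk = {cpmk:.2f}")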

The subject of the post looked at the performance of the VB application ("Of course the precondition is not to reduce the detection ability of the test system."), but it didn't say anything about the calculations the system uses to decide whether a unit is OK or not.

If a control chart is used, it could be an individuals control chart. The problem with measurements taken more frequently is autocorrelation, but there are ways to deal with it.

You could use a control chart, and when there are no points outside the control limits for a number of measurements, it's OK to push the "run" button. (The problem with autocorrelated data is that it frequently shows runs of points, and that's just the process behavior.) The tricky part is calculating the right control limits - the limits should reflect the normal or "run" condition.
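Here is a sketch of the limit calculation for an individuals chart, using the average moving range (the constant 1.128 is d2 for subgroups of size 2; the baseline data are made-up). Note that with autocorrelated data these limits come out too tight, which is the caveat above:

    import statistics

    def i_chart_limits(baseline):
        """Individuals (I) chart limits from the average moving range.
        Sigma is estimated as MR-bar / d2, with d2 = 1.128 for n = 2."""
        center = statistics.mean(baseline)
        moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
        sigma_hat = statistics.mean(moving_ranges) / 1.128
        return center - 3 * sigma_hat, center, center + 3 * sigma_hat

    # Hypothetical baseline run of per-unit test times (seconds)
    baseline = [60.2, 61.1, 59.8, 60.5, 62.0, 59.4, 60.9, 61.3, 60.0, 59.7]
    lcl, cl, ucl = i_chart_limits(baseline)
    print(f"LCL = {lcl:.2f}  CL = {cl:.2f}  UCL = {ucl:.2f}")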

 
Graeme

wing said: Now we want to perform a Six Sigma project to reduce the cost of this test process by reducing the cycle time (in other words, by increasing test efficiency). The key point is to optimize the VB program. Of course the precondition is not to reduce the detection ability of the test system.


Let me state some key assumptions here:
  • I assume that the present process is capable - once a power supply is connected to the ATE station and the program is started, it makes all of the required tests and produces the required outputs, and they all meet your needs.
  • I assume that the present process has been validated - it produces results comparable to those produced by other methods. (For example, with a qualified technician performing the test manually, and the test loads and voltage/current meters operating as stand-alone instruments.)
  • I assume that you are using a Microsoft Windows-based computer system.
  • I assume that the computer system controlling the ATE is recording all of the parameters and values for each test, and that the recorded data includes the elapsed time for each test.
  • I assume that the desired goal (rephrasing) is to increase the output of the ATE system in terms of units tested over a given time period, with no reduction in the quality of the measurements; and that the initial belief is that optimizing the Visual Basic code may do that.

I have a limited amount of prior experience (some of it from more than 20 years ago) with setting up ATE systems and programming computers. Here are some things I have learned that may affect how this is approached.

The system developer and programmer must always remember that it takes a finite amount of time for things to change and settle. When a voltmeter receives a command to change ranges, it takes a measurable amount of time to accomplish that, and to settle to a state where it is ready to make accurate measurements again. It also takes time for the unit being tested (the power supply, in this case) to settle to a stable output whenever the load or voltage is changed. The time involved is not normally a problem when performing tests manually, because human reaction times are so slow (in a relative sense). It must be considered in automated tests, though. It is important to know that these times cannot be reduced.

If you are making high-accuracy measurements (I know nothing about the range or accuracy involved here) time is again a factor. For example, if you are making measurements on the order of microvolts, and using the voltmeter's built-in averaging and/or filtering, it can take many seconds - sometimes 10 or more - from the time the meter is triggered to the time a value is available to be read. Again, this is a time that often cannot be reduced. You can elect not to use those meter features, but then (a) accuracy may be reduced, or (b) you will have to use program overhead to do the same work anyway.

The considerations in both of the above paragraphs mean that there will be a point where it is not possible to make the test go faster, at least not without sacrificing quality. Many years ago I was the "control standard" that a new automation system was tested against. (The device being tested was an autopilot computer.) About 2/3 of the development effort went into resolving measurement timing issues - in most cases, making the computer wait longer for the test equipment or the unit under test to reach a specific state. After all of that was complete and the automated system would run without failures, they evaluated the time savings. My average time for manually testing the units was a little less than an hour. The new automated system saved less than a minute per test! However, it would take that amount of time every time, with less variation than a person. Also, the test results were already stored electronically, where previously I was writing everything down and then it was re-typed by someone else. Finally, I could be doing other work while the automated system was busy doing the final test on a good unit, which sped up the overall process.

When delaying a program for a period of time (such as to accommodate switching and settling), the programmer must use the computer system's time-of-day clock to measure the interval. Many years ago we used to just put in a short counting loop. That method relied on the microprocessor cycle time, but with the speed and complexity of modern processors you can't reliably do that any more.
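The thread's program is in Visual Basic, but the point is language-independent. Here is a minimal Python illustration of a clock-based wait (Python's time.monotonic is used instead of the wall clock since it cannot jump backwards; the 0.5 s settle time is an arbitrary example value):

    import time

    def wait_for_settle(seconds):
        """Delay based on the system clock rather than a counting loop.
        A counting loop's duration depends on CPU speed and load;
        a clock-based wait gives the same interval on any machine."""
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            time.sleep(0.01)  # sleep briefly so other processes can run

    # e.g., give the voltmeter 0.5 s to change range and settle
    wait_for_settle(0.5)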

I have been told -- I do not know from personal experience -- that great speed gains can be made by rewriting the application in another language, such as C++.

When a test program is running in the Windows operating environment, time intervals have some added uncertainty. Windows is an asynchronous, interrupt-driven system, and I understand it can be difficult to manage things that are really dependent on time intervals. We have an automated system running on Windows, and sometimes things take longer than they "should". (For instance, a 300-second pause to let thermal EMF reach equilibrium may actually take 309 seconds, or 303, or 312, and so on.) This is a client-server system on our internal network, and other processes are running on the computers. But it really does not matter that much if it takes a few extra seconds every now and then to complete a test step when calibrating a digital multimeter. Other systems in the company are much more critical about time intervals. On systems that use Windows computers, I have observed that the test program is the only thing running on the computer. In other cases, the ATE computers run other operating systems, such as Unix.

I briefly thought about reconfiguring the program to run multiple test systems simultaneously. However, that probably is not a reasonable idea. Compared to the other equipment needed for automated tests of a power supply, the computer and program are probably a trivial expense. So they might as well be separate systems, and one person can operate several of them at the same time. (People are better at multi-tasking, anyway.)

If you are using process control charts to monitor the parameters that are being tested, is the elapsed time of the test one factor that is charted? If it is, there is a ready-made metric for you.
 
Tom Slack

Wing,
Are you in a Lean Manufacturing mode? One-piece flow would require that every step has a takt time (the allowed time per unit) with little variability. For example, if the takt time was 1 minute and the test took 1 minute 10 seconds, the line would back up and stop production.

Now let's assume that the test time has a standard deviation of 5 seconds with a normal distribution, and the takt time is 60 seconds. What should the average test time be to meet Six Sigma? I would say 60 - (6 × 5), or 30 seconds. Sometimes variability gets overlooked in takt time calculations.
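Spelled out, the requirement is mean + 6*sigma <= takt, so mean <= 60 - 6*5 = 30 seconds. As a trivial check:

    TAKT_S = 60.0    # takt time, seconds
    SIGMA_S = 5.0    # standard deviation of test time, seconds

    # the average must leave six sigma of headroom below the takt time
    required_mean = TAKT_S - 6 * SIGMA_S
    print(f"Required average test time: {required_mean:.0f} s")  # 30 s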

I was starting to use this technique when I worked at Maytag. There was some resistance because some people instinctively used minimum times, which puts a lot of pressure on associates.

I hope this helps,

Tom
 
dragonair

Ha!

Wing,

It's nice to see you here too!!

Do you often come here?

You seem to have gotten here quite early, but you don't come often!!
 