wing said: Now we want to perform a Six Sigma project to reduce the cost of this test process by reducing the cycle time (in other words, by increasing the test efficiency). The key point is to optimize the VB program. Of course, the precondition is that we must not reduce the detection capability of the test system.
Let me state some key assumptions here:
- I assume that the present process is capable - once a power supply is connected to the ATE station and the program is started, it performs all of the required tests and produces the required outputs, and they all meet your needs.
- I assume that the present process has been validated - it produces results comparable to those produced by other methods. (For example, with a qualified technician performing the test manually, and the test loads and voltage/current meters operating as stand-alone instruments.)
- I assume that you are using a Microsoft Windows-based computer system.
- I assume that the computer system controlling the ATE is recording all of the parameters and values for each test, and that the recorded data includes the elapsed time of each test.
- I assume that the desired goal (to rephrase it) is to increase the output of the ATE system in terms of units tested over a given time period, with no reduction in the quality of the measurements, and that the initial belief is that optimizing the Visual Basic code may accomplish that.
I have a very limited amount of prior experience (some of it from more than 20 years ago) with setting up ATE systems and programming computers. Here are some things I have learned that may affect how this is approached.
The system developer and programmer must always remember that it takes a finite amount of time for things to change and settle. When a voltmeter receives a command to change ranges, it takes a measurable amount of time to accomplish that, and to settle to a state where it is ready to make accurate measurements again. It also takes time for the unit being tested (the power supply, in this case) to settle to a stable output whenever the load or voltage is changed. The time involved is not normally a problem when performing tests manually, because human reaction times are so slow (in a relative sense). It must be considered in automated tests, though, and it is important to recognize that these settling times generally cannot be reduced.
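If it helps, here is a rough sketch (in classic VB6/VBA syntax, since the program is in VB) of one common way to handle settling in software: keep reading until several consecutive readings agree. ReadVolts() is a hypothetical placeholder for whatever instrument-read call your system actually uses, and the tolerance, count, and timeout are made-up example values, not recommendations:

```vb
' Hypothetical sketch: after a range or load change, poll until several
' consecutive readings agree within a band before trusting the value.
Function WaitForSettle() As Double
    Const Tolerance As Double = 0.0005   ' volts (assumed value)
    Const StableCount As Integer = 3     ' readings that must agree
    Const TimeoutSec As Single = 10      ' give up rather than hang
    Dim tStart As Single, lastV As Double, v As Double, stable As Integer
    tStart = Timer
    lastV = ReadVolts()
    Do While stable < StableCount
        v = ReadVolts()
        If Abs(v - lastV) <= Tolerance Then
            stable = stable + 1
        Else
            stable = 0                   ' reading jumped; start over
        End If
        lastV = v
        If Timer - tStart > TimeoutSec Then Err.Raise 5, , "Settle timeout"
    Loop
    WaitForSettle = lastV
End Function
```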
If you are making high-accuracy measurements (I know nothing about the range or accuracy involved here), time is again a factor. For example, if you are making measurements on the order of microvolts, and using the voltmeter's built-in averaging and/or filtering, it can take many seconds - sometimes 10 or more - from the time the meter is triggered to the time a value is available to be read. Again, this is a time that often cannot be reduced. You can elect not to use those meter features, but then (a) accuracy may be reduced, or (b) you will have to use program overhead to do the same work anyway.
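Here is what I mean by doing the same work with program overhead - a sketch that averages N readings in the VB code instead of in the meter. Again, ReadVolts() is a hypothetical placeholder; note that the N read cycles still take time, so the work moves rather than disappears:

```vb
' Sketch: program-side averaging in place of the meter's built-in filter.
Function AverageVolts(ByVal n As Integer) As Double
    Dim i As Integer, total As Double
    For i = 1 To n
        total = total + ReadVolts()   ' each read still takes real time
    Next i
    AverageVolts = total / n
End Function
```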
The considerations of both of the above paragraphs mean that there will be a point where it is not possible to make the test go faster, at least not without sacrificing quality. Many years ago I was the "control standard" that a new automation system was tested against. (The device being tested was an autopilot computer.) About 2/3 of the development effort was involved in resolving measurement timing issues - in most cases, making the computer wait longer for the test equipment or the unit under test to reach a specific state. After all of that was complete and the automated system would run without failures, they evaluated the time saving. My average time for manually testing the units was a little less than an hour. The new automated system saved less than a minute per test!

However, it would take that amount of time every time, with less variation than a person. Also, the test results were already stored electronically, where previously I was writing everything down and it was then re-typed by someone else. Finally, I could be doing other work while the automated system was busy doing the final test on a good unit, which sped up the overall process.
When delaying a program for a period of time (such as to accommodate switching and settling), the programmer must time the interval against the computer system's time-of-day clock. Many years ago we used to just put in a short counting loop. That method relied on the microprocessor cycle time, but with the speed and complexity of modern processors you cannot reliably do that any more.
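For example, here is a minimal sketch of a clock-based wait using the VB Timer function (seconds since midnight). Treat it as a starting point, not a finished routine - real code would also have to handle Timer rolling over at midnight:

```vb
' Sketch: delay against the system clock instead of a counting loop.
Sub WaitSeconds(ByVal secs As Single)
    Dim tStart As Single
    tStart = Timer              ' seconds since midnight, as a Single
    Do While Timer - tStart < secs
        DoEvents                ' yield so Windows and the UI stay responsive
    Loop
End Sub
```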
I have been told -- I do not know from personal experience -- that great speed gains can be made by rewriting the application in another language, such as C++.
When a test program is running in the Windows operating environment, time intervals have some added uncertainty. Windows is an asynchronous interrupt-driven system, and I understand it can be difficult to manage things that are really dependent on time interval. We have an automated system running on Windows, and sometimes things take longer than they "should". (For instance, a 300-second pause to let thermal EMF reach equilibrium may actually take 309 seconds, or 303, or 312, and so on.) This is a client-server system on our internal network, and other processes are running on the computers. But it really does not matter that much if it takes a few extra seconds every now and then to complete a test step when calibrating a digital multimeter. Other systems in the company are much more critical about time interval. On those systems that use Windows computers, I have observed that the test program is the only thing running on the computer. In other cases, the ATE computers are running other operating systems, such as Unix.
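One thing you can do about that uncertainty is measure it. A sketch that logs how long a nominal pause actually took (using the WaitSeconds routine from the earlier sketch), so the overrun itself becomes data you can look at:

```vb
' Sketch: record actual vs. nominal pause length under Windows.
Sub TimedPause(ByVal nominalSec As Single)
    Dim tStart As Date
    tStart = Now
    WaitSeconds nominalSec        ' the clock-based wait sketched above
    Debug.Print "Nominal"; nominalSec; "s pause took"; _
        DateDiff("s", tStart, Now); "s"
End Sub
```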
I briefly thought about reconfiguring the program to run multiple test systems simultaneously. However, that probably is not a reasonable idea. Compared to the other equipment needed for automated tests of a power supply, the computer and program are probably a trivial expense. So, they might as well be separate systems, and one person can operate several of them at the same time. (People are better at multi-tasking, anyway.)
If you are using process control charts to monitor the parameters that are being tested, is the elapsed time of the test one factor that is charted? If it is, there is a ready-made metric for you.
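If the elapsed times are being recorded but not yet charted, the limits for an individuals chart are easy to compute. A sketch, assuming the times (in seconds) are already in an array; the 2.66 factor is the standard constant for individuals-chart limits based on the average moving range:

```vb
' Sketch: individuals-chart limits for per-unit elapsed test time.
Sub PrintTimeLimits(t() As Double)
    Dim n As Long, i As Long, mean As Double, mrBar As Double
    n = UBound(t)
    For i = 1 To n
        mean = mean + t(i)
    Next i
    mean = mean / n
    For i = 2 To n
        mrBar = mrBar + Abs(t(i) - t(i - 1))   ' moving ranges
    Next i
    mrBar = mrBar / (n - 1)
    ' I-chart limits: mean +/- 2.66 * average moving range
    Debug.Print "UCL ="; mean + 2.66 * mrBar; " LCL ="; mean - 2.66 * mrBar
End Sub
```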