Six Sigma Project - Cost Reduction of ATE Process


  Post Number #1  
2nd April 2002, 11:47 PM
wing_a

 
 
Total Posts: 2
SS Project - Cost Reduction of ATE Process

Hi all,

We are a company that manufactures power supplies. After a unit is assembled, ATE equipment is used to test the unit's ordinary functions, such as output voltage/frequency, output segment tests, overload protection, response to remote commands, etc. The test runs automatically under the control of a Visual Basic program once the operator connects the cables and pushes the "run" button. We now want to perform a Six Sigma project to reduce the cost of this test process by reducing the cycle time (in other words, by increasing test efficiency). The key point is to optimize the VB program. Of course, the precondition is not to reduce the detection capability of the test system.
We have encountered some questions: How do we measure the process? How do we measure the project's success? What metrics should we choose? If we define the response of this study to be "Tested Units per Unit Time" or "Test Time per Unit", how do we then calculate the process capability (Cp or Cpk)? How do we find what is Critical-to-Cost for this project?

Any assistance will be greatly appreciated. I will be very grateful if anyone can send support or suggestions to wingjohn.lau@phoenixtecsz.com.cn.

Thanks and regards,
Wing

  Post Number #2  
7th November 2002, 04:49 PM
Marc

 
 
Total Posts: 24,561

Just read this. I know it's old and never really got an answer, but any takers? Any comments, thoughts?

  Post Number #3  
13th November 2002, 12:54 PM
Ravi Khare

 
 
Total Posts: 70
You will not be able to calculate Cp and Cpk unless you have spec limits. Also, here we have a one-sided tolerance.

I would suggest you use Taguchi's Signal-to-Noise ratio (smaller-the-better type) on the 'test time per unit'. You can evaluate it over batches of 10 or more units. These ratios do not require spec limits and work on the principle of rewarding improvement (or penalizing loss) in a quadratic manner.

Details on S/N ratios can be found in any standard published work on Taguchi's methods.
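For reference, the smaller-the-better ratio is S/N = -10 log10((1/n) Σ y²), in dB. A minimal sketch of evaluating it over one batch - the test times below are invented for illustration, not taken from the thread:

```python
import math

def sn_smaller_the_better(values):
    """Taguchi smaller-the-better S/N ratio: -10*log10(mean of squares).
    A larger (less negative) value indicates shorter, more consistent times."""
    mean_square = sum(y * y for y in values) / len(values)
    return -10.0 * math.log10(mean_square)

# One batch of 10 test times in seconds (illustrative numbers only).
batch = [62.1, 58.4, 60.9, 63.5, 59.2, 61.7, 60.3, 64.0, 58.8, 61.1]
print(f"S/N = {sn_smaller_the_better(batch):.2f} dB")
```

Tracking this ratio batch by batch rewards both a lower average test time and lower variation, since both shrink the mean square.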
  Post Number #4  
16th November 2002, 10:26 AM
Ravi Khare

 
 
Total Posts: 70
More ideas on this?....
  Post Number #5  
18th November 2002, 11:33 AM
Darius

 
 
Total Posts: 543
As a capability index for a one-sided spec, I would use Cpmk. Cp only quantifies the variation, not the location, and Cpk has an unwanted effect in one-sided spec cases: it reports a better Cpk when the variation is lower, even if the process has moved farther from the target.

The post asks about the performance of the VB application ("Of course, the precondition is not to reduce the detection capability of the test system."), but it does not say anything about the calculations the system uses to decide whether a unit is OK or not.

If it uses a control chart, it could be an individuals control chart. The problem with measurements taken more frequently is autocorrelation, but there are ways to deal with it.

You could use a control chart, and when there are no points outside the control limits for a number of measurements, it is OK to push the "run" button. (The problem with autocorrelated data is that it frequently shows runs of points, which is just the process behaviour.) The tricky part is calculating the right control limits: the limits should reflect the normal, or "run", condition.
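As a sketch of the Cpmk suggested above, applied to "test time per unit" with an upper spec only - the spec limit, target, and sample times are assumed for illustration, not taken from the thread. Cpmk penalizes both variation and distance from the target, so tightening the spread while drifting away from the target is not rewarded the way Cpk would reward it:

```python
import statistics

def cpmk_upper(values, usl, target):
    """One-sided (upper-spec) Cpmk: like the upper Cpk, but the
    denominator also penalizes distance of the mean from the target."""
    mean = statistics.fmean(values)
    s = statistics.stdev(values)
    tau = (s ** 2 + (mean - target) ** 2) ** 0.5
    return (usl - mean) / (3.0 * tau)

# Illustrative test times in seconds; the USL and target are assumptions.
times = [62.1, 58.4, 60.9, 63.5, 59.2, 61.7, 60.3, 64.0, 58.8, 61.1]
print(f"Cpmk = {cpmk_upper(times, usl=75.0, target=55.0):.2f}")
```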


  Post Number #6  
18th November 2002, 11:21 PM
Graeme

 
 
Total Posts: 428
I Say...

Quote:
wing said: Now we want to perform a Six Sigma project to reduce the cost of this test process by reducing the cycle time (in other words, by increasing test efficiency). The key point is to optimize the VB program. Of course, the precondition is not to reduce the detection capability of the test system.

Let me state some key assumptions here:
  • I assume that the present process is capable - once a power supply is connected to the ATE station and the program is started, it makes all of the required tests and produces the required outputs, and they all meet your needs.
  • I assume that the present process has been validated - it produces results comparable to those produced by other methods. (For example, with a qualified technician performing the test manually, and the test loads and voltage/current meters operating as stand-alone instruments.)
  • I assume that you are using a Microsoft Windows-based computer system.
  • I assume that the computer system controlling the ATE is recording all of the parameters and values for each test, and that the recorded data includes the elapsed time of each test.
  • I assume that the desired goal (rephrasing) is to increase the output of the ATE system in terms of units tested over a given time period, with no reduction in the quality of the measurements; and that the initial belief is that optimizing the Visual Basic code may do that.

I have a limited amount of prior experience with setting up ATE systems and programming computers (some of it from more than 20 years ago). Here are some things I have learned that may affect how this is approached.

The system developer and programmer must always remember that it takes a finite amount of time for things to change and settle. When a voltmeter receives a command to change ranges, it takes a measurable amount of time to accomplish that, and to settle to a state where it is ready to make accurate measurements again. It also takes time for the unit being tested (the power supply, in this case) to settle to a stable output whenever the load or voltage is changed. The time involved is not normally a problem when performing tests manually, because human reaction times are so slow (in a relative sense). It must be considered in automated tests, though. It is important to know that these times cannot be reduced.

If you are making high-accuracy measurements (I know nothing about the range or accuracy involved here) time is again a factor. For example, if you are making measurements on the order of microvolts, and using the voltmeter's built-in averaging and/or filtering, it can take many seconds - sometimes 10 or more - from the time the meter is triggered to the time a value is available to be read. Again, this is a time that often cannot be reduced. You can elect not to use those meter features, but then (a) accuracy may be reduced, or (b) you will have to use program overhead to do the same work anyway.

The considerations in both of the above paragraphs mean that there will be a point where it is not possible to make the test go faster, at least not without sacrificing quality. Many years ago I was the "control standard" that a new automation system was tested against. (The device being tested was an autopilot computer.) About 2/3 of the development effort went into resolving measurement timing issues - in most cases, making the computer wait longer for the test equipment or the unit under test to reach a specific state. After all of that was complete and the automated system would run without failures, they evaluated the time savings. My average time for manually testing the units was a little less than an hour. The new automated system saved less than a minute per test! However, it would take that amount of time every time, with less variation than a person. Also, the test results were already stored electronically, where previously I was writing everything down and it was then re-typed by someone else. Finally, I could be doing other work while the automated system was busy doing the final test on a good unit, which sped up the overall process.

When delaying a program for a period of time (such as to accommodate switching and settling), the programmer must use the computer system's time-of-day clock to time the interval. Many years ago we used to just put in a short counting loop. That method relied on the microprocessor cycle time, but with the speed and complexity of modern processors you cannot reliably do that any more.
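A minimal sketch of the clock-based approach - shown here in Python rather than the VB of the original program - anchors the wait to a monotonic system clock, so the delay does not depend on processor speed:

```python
import time

def wait_for_settle(seconds):
    """Delay based on the system clock, not a counting loop, so the
    wait is independent of how fast the processor runs."""
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        time.sleep(0.01)  # yield to the OS instead of spinning

wait_for_settle(2.5)  # e.g., let a meter range change settle
```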

I have been told -- I do not know from personal experience -- that great speed gains can be made by rewriting the application in another language, such as C++.

When a test program is running in the Windows operating environment, time intervals have some added uncertainty. Windows is an asynchronous, interrupt-driven system, and I understand it can be difficult to manage things that really depend on time intervals. We have an automated system running on Windows, and sometimes things take longer than they "should". (For instance, a 300-second pause to let thermal EMF reach equilibrium may actually take 309 seconds, or 303, or 312, and so on.) This is a client-server system on our internal network, and other processes are running on the computers. But it really does not matter that much if it takes a few extra seconds every now and then to complete a test step when calibrating a digital multimeter. Other systems in the company are much more critical about time intervals. On systems that use Windows computers, I have observed that the test program is the only thing running on the computer. In other cases, the ATE computers run other operating systems, such as Unix.

I briefly thought about reconfiguring the program to run multiple test systems simultaneously. However, that is probably not a reasonable idea. Compared to the other equipment needed for automated tests of a power supply, the computer and program are probably a trivial expense. So they might as well be separate systems, and one person can operate several of them at the same time. (People are better at multi-tasking, anyway.)

If you are using process control charts to monitor the parameters that are being tested, is the elapsed time of the test one factor that is charted? If it is, there is a ready-made metric for you.
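If the elapsed time is charted as individuals data, the limits come from the average moving range. A minimal sketch, with invented times (2.66 is the standard 3/d2 factor for moving ranges of span 2):

```python
import statistics

def individuals_limits(values):
    """Individuals (X) chart limits: center +/- 2.66 * average moving range."""
    center = statistics.fmean(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = statistics.fmean(moving_ranges)
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

# Illustrative elapsed test times, in seconds.
times = [61.2, 59.8, 60.5, 62.0, 58.9, 61.4, 60.1, 63.2, 59.5, 60.8]
lcl, cl, ucl = individuals_limits(times)
print(f"LCL = {lcl:.1f} s, CL = {cl:.1f} s, UCL = {ucl:.1f} s")
```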
  Post Number #7  
22nd November 2002, 04:16 PM
Tom Slack

 
 
Total Posts: 74
Wing,
Are you in a Lean Manufacturing mode? One-piece flow would require that every step complete within the takt time (the required pace per unit), with little variability. For example, if the takt time was 1 minute and the test took 1 minute 10 seconds, the line would back up and stop production.

Now let's assume that the test time has a standard deviation of 5 seconds with a normal distribution, and the takt time is 60 seconds. What should the average test time be to meet Six Sigma? I would say 60 - (6 × 5) = 30 seconds. Variability sometimes gets overlooked in takt time calculations.
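Restated as a quick sketch (these are Tom's illustrative numbers, not measured data):

```python
# Mean test time needed so the takt time sits six standard deviations
# above the mean (illustrative numbers from the post, not measured data).
takt_time_s = 60.0   # required pace per unit, seconds
sigma_s = 5.0        # standard deviation of test time, seconds
sigma_level = 6      # Six Sigma margin

required_mean_s = takt_time_s - sigma_level * sigma_s
print(f"Target average test time: {required_mean_s:.0f} s")  # -> 30 s
```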

I was starting to use this technique when I worked at Maytag. There was some resistance, because some people instinctively used minimum times, which puts a lot of pressure on associates.

I hope this helps,

Tom
  Post Number #8  
24th January 2003, 09:35 AM
dragonair

 
 
Total Posts: 7
ha

wing

It's really nice to see you here too!!

Do you come here often?



You seem to have gotten here quite early, but you don't come here much!!