Page 310 - Semiconductor Manufacturing Handbook

                                                                        INSPECTION, MEASUREMENT, AND TEST  19.37

Third-party support
Test development services
   Load board design and fabrication services
   SW tools (DFT, statistical analysis)
Documentation: online or context-sensitive
Training: courses, locations, and frequency
Consumables
There is some level of subjectivity in this process because vendors will use different tester architectures and different terminology, as well as different ways to specify accuracy. There may be a meeting where the vendors present their answers, along with any additional factors they consider relevant, and address open questions. Sufficient time should be allowed to complete each step and to let a decision be made and implemented before a critical need develops.

                      19.3.5 Benchmark Evaluation Process
Benchmarking is a process in which the test vendor(s) develop a test program in order to compete for business. This gives the IC manufacturer an excellent opportunity to compare results from different vendors; from this comparison, the winner can be selected and backup strategies can be defined. The first step is to develop a matrix of selection criteria, prioritize these criteria, and establish a fair, objective way to judge results across all vendors. This is similar to the evaluation criteria in processing, but the factors are different. For example, typical factors would include: completeness of the project; speed of completion; the vendor's ability to develop the program with minimal help from the IC manufacturer; test execution time; analysis of data, including correlation to passing and failing units and repeatability; cost of test (COT), which implies potential parallelism in the tests; software tools; and reports that display the data easily and clearly point out problems. There are also more abstract factors, such as the software usability of the program generation, debug, and production environments, and the availability of tools to translate data from the EDA environment.
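The criteria matrix described above can be reduced to a simple weighted-scoring exercise. The sketch below is one possible way to do this; the criterion names, weights, and vendor scores are all illustrative assumptions, not values from the handbook.

```python
# Hypothetical weighted scoring matrix for comparing benchmark results
# across test vendors. Weights reflect the priority of each criterion
# and must sum to 1.0; per-criterion scores are on a 0-10 scale.
CRITERIA_WEIGHTS = {
    "completeness": 0.25,
    "speed_of_completion": 0.15,
    "independence_from_ic_mfr": 0.10,
    "test_execution_time": 0.20,
    "data_analysis_quality": 0.15,
    "cost_of_test": 0.15,
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores (0-10) into one weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

# Illustrative scores for two hypothetical vendors.
vendor_scores = {
    "Vendor A": {"completeness": 9, "speed_of_completion": 6,
                 "independence_from_ic_mfr": 7, "test_execution_time": 8,
                 "data_analysis_quality": 7, "cost_of_test": 6},
    "Vendor B": {"completeness": 7, "speed_of_completion": 8,
                 "independence_from_ic_mfr": 8, "test_execution_time": 6,
                 "data_analysis_quality": 8, "cost_of_test": 9},
}

# Rank vendors from highest to lowest weighted total.
ranking = sorted(vendor_scores,
                 key=lambda v: weighted_score(vendor_scores[v]),
                 reverse=True)
for v in ranking:
    print(f"{v}: {weighted_score(vendor_scores[v]):.2f}")
```

Keeping the weights fixed before any vendor results arrive is what makes the comparison fair and objective; the more abstract factors (usability, tool availability) can still be folded in as additional scored criteria.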
Another decision to be made is which device to select for the benchmark. This could be an advanced-technology part, or possibly a high-volume part in order to reduce the COT and improve the profit margin. Using stable, known devices adds consistency to the process, whereas choosing new, unproven parts can add uncertainty. The more parts selected, the more support investment is required from the vendors and from the IC manufacturer as well. If there are multiple parts from different product families with different requirements, consider a subset of tests for each part to show the unique capabilities needed for each. The decision must not put the IC manufacturer's revenue stream at risk. A common test list that all vendors must complete should be defined, and a deadline for completion then needs to be established.
Once all criteria are established, the IC manufacturer supplies datalogged ICs, along with repeatability data, to the test vendors. The vendors then complete the project as described in Sec. 19.2, “Common Strategies of Implementation.” When the deadline arrives, the IC manufacturer evaluates the results against the criteria and judges the relative success of each vendor.
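One concrete piece of that evaluation is correlating the vendor's measurements back to the manufacturer's datalog for the same serialized units. The sketch below shows one possible check; the unit IDs, measurement values, and spec limit are hypothetical, and this is an assumed approach rather than the handbook's prescribed method.

```python
# Minimal correlation check between the IC manufacturer's datalogged
# reference measurements and a vendor's re-measurement of the same units:
# mean offset, worst-case delta, and pass/fail (binning) agreement.
import statistics

# Hypothetical measurements (e.g., supply current in mA) per serialized unit.
reference = {"U01": 41.2, "U02": 39.8, "U03": 44.5, "U04": 40.1}  # mfr datalog
vendor    = {"U01": 41.5, "U02": 40.0, "U03": 44.9, "U04": 40.3}  # vendor run
LIMIT = 43.0  # illustrative upper spec limit

def correlation_report(ref, ven, limit):
    """Summarize how well vendor data tracks the reference datalog."""
    deltas = [ven[u] - ref[u] for u in ref]
    # A unit "agrees" when both datasets bin it the same way against the limit.
    agree = sum((ref[u] <= limit) == (ven[u] <= limit) for u in ref)
    return {
        "mean_offset": statistics.mean(deltas),
        "max_delta": max(abs(d) for d in deltas),
        "binning_agreement": agree / len(ref),
    }

report = correlation_report(reference, vendor, LIMIT)
```

A systematic mean offset suggests a calibration or load-board difference, while poor binning agreement on known-good and known-bad units is a direct strike against the vendor under the correlation criterion above.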

                      19.3.6 Test Vendor Support
                                  Different IC manufacturers need different kinds of support. The type of support required depends on
                                  many factors including the company’s core competencies, available labor pool, and company
                                  philosophies. Test knowledge is one type of support that requires basic and advanced test theory and
                                  proven implementation. Device knowledge is an understanding of the DUT needs and application
                                  environment. Tester-specific knowledge is the knowledge of a specific vendor’s hardware and soft-
                                  ware. Lastly, a company may simply need a larger labor pool than they currently have or simply want
                                  to outsource for a variety of reasons. Typically an IC manufacturer will want a combination of all of



                                        Copyright © 2004 The McGraw-Hill Companies. All rights reserved.