-
Performance Modeling Using Additive Regression Splines
Chao and Milor
Circuit designers need to be able to predict variations in circuit performance
as a function of variations in process parameters. Often the relation
between process parameters and circuit performances is highly nonlinear,
and the process is described by a large number of independent variables.
Traditional approaches to modeling, like polynomial regression, are not
very accurate for such problems. In order to build accurate nonlinear
models for high-dimensional problems, an algorithm has been implemented
based on additive regression splines. The model building process
is fully automated. The algorithm is used to build a model to predict
the offset voltage of a parallel filter bank. This example demonstrates
that very accurate nonlinear models can be constructed very efficiently.
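As a rough illustration of the additive-spline idea only (not Chao and Milor's algorithm; the data, knot placement, and variable counts below are invented), an additive model can be built from hinge basis functions for each process parameter and fit jointly by least squares:

```python
# Minimal sketch of an additive regression-spline fit (not the authors' algorithm).
# Each process parameter x_j contributes an additive piecewise-linear term built
# from hinge basis functions max(0, x - knot); all terms are fit jointly by
# ordinary least squares.  Data and knot placement here are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def hinge_basis(x, knots):
    """Return hinge-function columns max(0, x - k) for each knot k."""
    return np.column_stack([np.maximum(0.0, x - k) for k in knots])

# Hypothetical data: 200 samples of 4 process parameters and one performance.
n, d = 200, 4
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 2] + rng.normal(0, 0.05, n)

# Build the design matrix: intercept plus hinge terms for every input at fixed knots.
knots = np.linspace(-0.8, 0.8, 5)
columns = [np.ones(n)]
for j in range(d):
    columns.append(hinge_basis(X[:, j], knots))
A = np.column_stack(columns)

# Least-squares fit of the additive spline model.
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(X_new):
    cols = [np.ones(len(X_new))]
    for j in range(d):
        cols.append(hinge_basis(X_new[:, j], knots))
    return np.column_stack(cols) @ coef

print("training RMSE:", np.sqrt(np.mean((predict(X) - y) ** 2)))
```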
-
Projection of Circuit Performance Distributions by Multivariate Statistics
Chow
Production test data of process monitor test structures were utilized in
a circuit simulator that accounts for the correlations of circuit elements.
This correlated simulation is based on principal component analysis (PCA)
techniques, which require the means, the standard deviations, and the correlation
coefficients of all circuit elements. A voltage reference subcircuit
consisting of five npn transistors and five base diffused resistors was
chosen for this simulation study. The statistical parameters of these
circuit elements were approximated by those of the process monitor test
structures and extracted from the test patterns of 1000 production wafers.
The distributions of the reference voltage Vb1 from this simulation
were compared to a Monte Carlo simulation, and to the production data.
Results of this simulation are accurate enough to be of use in predicting
circuit performance distributions of precision analog integrated circuits.
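The correlated-sampling step behind this kind of study can be sketched as follows (the numbers are purely illustrative, not the paper's data): PCA of the correlation matrix yields independent components to sample, which are then rotated and scaled back to element units.

```python
# Illustrative sketch of PCA-based correlated parameter generation (values invented).
# Given means, standard deviations, and a correlation matrix for circuit-element
# parameters, draw samples along the principal components so that the simulated
# elements reproduce the measured correlation structure.
import numpy as np

rng = np.random.default_rng(1)

means = np.array([100.0, 101.0, 5.0e-15])        # e.g., two resistors (ohms), one capacitance (F)
stds  = np.array([2.0, 2.1, 0.2e-15])
corr  = np.array([[1.0, 0.9, 0.3],
                  [0.9, 1.0, 0.3],
                  [0.3, 0.3, 1.0]])

# PCA of the correlation matrix: eigenvectors are the principal components.
eigvals, eigvecs = np.linalg.eigh(corr)

# Draw independent standard-normal scores for each component, then rotate back.
n_samples = 10000
scores = rng.standard_normal((n_samples, len(means))) * np.sqrt(np.clip(eigvals, 0, None))
z = scores @ eigvecs.T                      # correlated standard-normal deviates
samples = means + z * stds                  # scale and shift to element units

print("sample correlation:\n", np.corrcoef(samples, rowvar=False).round(2))
```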
-
Comparing Models for the Growth of Silicon-Rich Oxides (SRO)
Dundar and Rose
The relative advantages of several methods for modeling the growth of Silicon-Rich
Oxide (SRO) films are compared. The methods are a response surface model,
a physical model based on chemical kinetics, and neural network models.
The physical model provides more insight and greater predictive ability.
Neural network models provide better fits to complex response surfaces
with minimal data and can be used successfully in the absence of a theoretical
model. The risks of prediction by neural networks outside their training
domain are demonstrated.
-
Prediction of Wafer State After Plasma Processing Using Real-Time Tool
Data
Lee and Spanos
Empirical models based on real-time equipment signals are used to predict
the outcome (e.g., etch rates and uniformity) of each wafer during and
after plasma processing. Three regression and one neural network
modeling methods were investigated. The models are verified on data
collected several weeks after the initial experiment, demonstrating that
the models built with real-time data survive small changes in the machine
due to normal operation and maintenance. The predictive capability
can be used to assess the quality of the wafers after processing, thereby
ensuring that only wafers worth processing continue down the fabrication
line. Future applications include real-time evaluation of wafer features
and economical run-to-run control.
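A minimal sketch of the simplest of the modeling options mentioned, a linear regression from per-wafer summaries of real-time signals to etch rate, assuming invented signal names and data (the paper's actual models and features differ):

```python
# Illustrative sketch: predict a wafer-level outcome (e.g., mean etch rate) from
# summary statistics of real-time equipment signals with a simple linear model.
# The signal names and data are invented; Lee and Spanos compare several
# regression and neural-network formulations rather than this exact one.
import numpy as np

rng = np.random.default_rng(2)

# Per-wafer features: mean RF power, mean chamber pressure, mean DC bias (hypothetical).
n_wafers = 60
features = np.column_stack([
    rng.normal(300.0, 5.0, n_wafers),   # RF power (W)
    rng.normal(10.0, 0.3, n_wafers),    # pressure (mTorr)
    rng.normal(-250.0, 8.0, n_wafers),  # DC bias (V)
])
etch_rate = (2.0 * features[:, 0] - 15.0 * features[:, 1]
             - 0.5 * features[:, 2] + rng.normal(0, 5.0, n_wafers))

# Fit on the first 40 wafers, then check prediction on wafers collected "later".
A_train = np.column_stack([np.ones(40), features[:40]])
coef, *_ = np.linalg.lstsq(A_train, etch_rate[:40], rcond=None)

A_test = np.column_stack([np.ones(n_wafers - 40), features[40:]])
pred = A_test @ coef
rmse = np.sqrt(np.mean((pred - etch_rate[40:]) ** 2))
print("hold-out RMSE:", round(rmse, 1))
```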
-
A New Device Design Methodology for Manufacturability
Lu, Holton, Fenner, Williams, Kim, Hartford, Chen, Roze, and Littlejohn
As future technology generations for integrated circuits continue to
"shrink," TCAD tools must be made more central to manufacturing issues;
thus, yield optimization and design for manufacturing (DFM) should be addressed
integrally with performance and reliability when using TCAD during the
initial product design. This paper defines the goals for DFM in TCAD
simulations and outlines a formal procedure for achieving an optimized
result (ODFM). New design of experiments (DOE), weighted least squares
modeling, and multiple-objective mean-variance optimization methods are
developed as significant parts of the new ODFM procedure. Examples
of designing a 0.18-um MOSFET device are given to show the impact
of device design procedures on device performance distributions and sensitivity
variance profiles.
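The weighted least squares ingredient can be sketched in a few lines; the data and weights below are invented, and this is not the authors' ODFM implementation:

```python
# Minimal weighted-least-squares sketch (data and weights invented): observations
# with smaller noise get larger weight, and the fit solves
# (X^T W X) b = X^T W y rather than the ordinary normal equations.
import numpy as np

rng = np.random.default_rng(3)

n = 50
x = rng.uniform(-1, 1, n)
noise_sd = np.where(x > 0, 0.05, 0.3)          # heteroscedastic noise
y = 1.0 + 2.0 * x + rng.normal(0, noise_sd)

X = np.column_stack([np.ones(n), x])
W = np.diag(1.0 / noise_sd ** 2)               # weights = inverse variances

beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print("WLS:", beta_wls.round(3), " OLS:", beta_ols.round(3))
```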
-
Electronic Manufacturing Process Systems Cost Modeling and Simulation
Tools
Keys, Balmer, and Creswell
-Northern Telecom folks
-cost modeling of printed wiring board assemblies (PWBA)
-cost model explored in detail
-simulated sensitivity analysis performed by varying factors in a fractional factorial DOE that allowed estimation of main effects and two-factor interactions (says higher-order interactions are often negligible)
-explored the relationship between the number of modules, or distinct testable function units, and other variables (modularizing the process is good; he wanted to know how much to modularize)
-two-level factorial design with the Yates algorithm is recommended (see the sketch after this list)
-suggests computing mean, variance, skewness, and kurtosis to check for nonnormality
-for extremely non-normal data, suggests eliminating the top and bottom 2.5% of the data and taking the resulting center of these limits as a typical value
-emphasis: DOE and tolerancing of variables based on confidence-type intervals
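A sketch of the classical Yates computation referenced above, applied to an invented 2^3 cost experiment listed in standard order:

```python
# Sketch of the classical Yates algorithm for a two-level full factorial.
# Responses must be listed in standard (Yates) order: (1), a, b, ab, c, ac, bc, abc.
# Example data are invented.

def yates(responses):
    """Return the mean and effect estimates for a 2^k factorial in standard order."""
    n = len(responses)
    k = n.bit_length() - 1
    assert 2 ** k == n, "length must be a power of two"
    col = list(responses)
    for _ in range(k):
        sums  = [col[i] + col[i + 1] for i in range(0, n, 2)]
        diffs = [col[i + 1] - col[i] for i in range(0, n, 2)]
        col = sums + diffs
    # First entry / n is the grand mean; remaining entries / (n/2) are the effects.
    mean = col[0] / n
    effects = [c / (n / 2) for c in col[1:]]
    return mean, effects

# Hypothetical 2^3 assembly-cost responses in standard order.
mean, effects = yates([60, 72, 54, 68, 52, 83, 45, 80])
print("mean:", mean)
print("effects (A, B, AB, C, AC, BC, ABC):", [round(e, 2) for e in effects])
```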
-
Sequential Experiments to Characterize Processing Equipment - Maximizing
Maximizing Information Content While Restraining Costs
Jack E. Reece
-see st536 review
-
An Efficient Methodology for Building Macromodels of IC Fabrication
Processes
Low and Director
This paper describes an efficient macromodeling approach for statistical
IC process design based on experimental design and regression analysis.
Automatic selection of the input variables is done as part of the model
building procedure to reduce the problem dimension to a manageable size.
The resulting macromodels are simple analytical expressions describing
the device characteristics in terms of the fundamental process variables.
The validity and efficiency of the macromodels obtained by the approach
are illustrated through their use in an IC process design centering example.
Although the approach has only been applied to the IC fabrication process
level, the underlying methodology can also be used to obtain circuit level
macromodels.
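The automatic input-variable selection step can be illustrated with a simple forward-selection sketch (this greedy residual-reduction criterion is an assumption for illustration, not the authors' procedure):

```python
# Illustrative forward-selection sketch for reducing model dimension (not the
# authors' actual selection criterion): greedily add the process variable that
# most reduces the residual sum of squares of a linear macromodel.
import numpy as np

rng = np.random.default_rng(4)

def rss_with(X, y, cols):
    A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return float(resid @ resid)

# Hypothetical data: 10 candidate process variables, only 3 actually matter.
n, d = 100, 10
X = rng.normal(size=(n, d))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + 0.8 * X[:, 7] + rng.normal(0, 0.1, n)

selected, remaining = [], list(range(d))
for _ in range(3):                       # keep at most 3 variables for the macromodel
    best = min(remaining, key=lambda c: rss_with(X, y, selected + [c]))
    selected.append(best)
    remaining.remove(best)

print("selected process variables:", selected)
```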
-
Statistical Equipment Modeling for VLSI Manufacturing: an Application
for LPCVD
Lin and Spanos
-see st536 review
-
Improved Within-Wafer Uniformity Modeling Through the Use of Maximum-Likelihood
Estimation of the Mean and Covariance Surfaces
Joseph C. Davis, Jacqueline M. Hughes-Oliver, J. C. Lu, and Ronald
S. Gyurcsik
Modern statistical modeling techniques for characterizing the spatial
response of a single-wafer process are presented. These techniques
overcome the limitations of the commonly used ordinary least squares estimation
procedure and provide models for the expected value and variance of the
response. In addition, a procedure for generating a model of the
covariance structure which relates the various points on the wafer is presented.
These methods are applied to the characterization of the rapid thermal
chemical vapor deposition of polysilicon on top of a layer of silicon dioxide.
The results of this study indicate that the simultaneous estimation of
mean and variance models results in a significantly better representation
of the data than the standard constant-variance estimation of a mean model.
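A much-simplified sketch of joint mean/variance estimation by maximum likelihood (it ignores the paper's spatial covariance model; the radial mean and log-variance forms and the data are invented):

```python
# Sketch of joint mean/variance surface estimation by maximum likelihood (a
# simplified stand-in for the authors' procedure; it omits their covariance
# model).  The mean is quadratic in wafer radius and the log-variance is linear
# in radius; measurement sites and thickness data are invented.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

r = rng.uniform(0, 1, 80)                                    # normalized radial positions
thickness = 500 - 30 * r ** 2 + rng.normal(0, 2 + 4 * r)     # noisier at the wafer edge

def negloglik(theta):
    b0, b1, b2, g0, g1 = theta
    mean = b0 + b1 * r + b2 * r ** 2
    logvar = g0 + g1 * r
    return 0.5 * np.sum(logvar + (thickness - mean) ** 2 / np.exp(logvar))

fit = minimize(negloglik, x0=[500.0, 0.0, 0.0, 1.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
b0, b1, b2, g0, g1 = fit.x
print("mean surface:  %.1f + %.1f r + %.1f r^2" % (b0, b1, b2))
print("std-dev surface: exp(0.5*(%.2f + %.2f r))" % (g0, g1))
```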
-
Application of Statistical Design and Response Surface Methods
to Computer-Aided VLSI Device Design
Alvarez, Abdi, Young, Weed, Teplik, and Herald
Optimization of VLSI process, device, and circuit design through computer
simulation is a necessary step in the product cycle. Strategies for
synthesizing nominal operating points based on engineering judgment are
readily available. Methodologies for simultaneously determining an
optimal operating point and analyzing its sensitivity to process and device
perturbations are less well established. Because of the complexity
of VLSI product development, an efficient strategy for studying the effect
of a previous (nth) step in the design cycle on subsequent ((n+1)th) steps
is required. A methodology based on statistical design and analysis
techniques is ideally suited for this purpose. A key step is choosing
the appropriate experimental points to be simulated. In this work,
Box-Behnken response surface designs were used for this purpose.
The Box-Behnken designs use a small number of data points to estimate the
coefficients for all linear, quadratic, and first-order interaction terms
in a polynomial model relating response variables to input factors.
In this manner, a complete second-order model approximating the desired
response in N-dimensional space is obtained. This model can be used
for predictions within the simulated space and sensitivity analysis.
Design analysis becomes a multiple constraints optimization problem.
Using the derived empirical equations for the response variables, mathematical
optimization and sensitivity analysis can be performed within the constrained
"desirability" space. This leads to the concept of global input factors,
which affect a large number of the response variables, and specific input
factors which can be used to adjust the operating level of a small number
of response variables. This proposed methodology can play a key role
in "designing for manufacturability." To demonstrate the applicability
of this methodology to VLSI device design, all of the one-dimensional cross
sections in an advanced BIMOS process were simulated. Based on the
simulation results, response surfaces were approximated, conflicting device
requirements were quantified, and a region of input factors for which the
various response conditions were simultaneously met was identified.
The applicability of the proposed methodology to circuit design is also
demonstrated.
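A small sketch of the workflow described: build the three-factor Box-Behnken runs, evaluate an invented stand-in for the simulator output, and fit the full second-order model (this is illustrative only, not the authors' BIMOS study):

```python
# Sketch of a three-factor Box-Behnken design and a full second-order fit.
# The design points are the standard 3-factor Box-Behnken runs (all +/-1 pairs
# with the third factor at 0) plus center replicates; the "response" below is an
# invented function standing in for a device-simulator output.
import numpy as np
from itertools import combinations

def box_behnken_3():
    runs = []
    for i, j in combinations(range(3), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                pt = [0, 0, 0]
                pt[i], pt[j] = a, b
                runs.append(pt)
    runs += [[0, 0, 0]] * 3                      # center points
    return np.array(runs, dtype=float)

def quadratic_terms(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

X = box_behnken_3()                              # 15 runs for 3 coded factors
y = 1.0 + 0.4 * X[:, 0] - 0.2 * X[:, 1] + 0.1 * X[:, 0] * X[:, 2] - 0.3 * X[:, 2] ** 2

coef, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)
labels = ["1", "x1", "x2", "x3", "x1*x2", "x1*x3", "x2*x3", "x1^2", "x2^2", "x3^2"]
for name, c in zip(labels, coef.round(3)):
    print(f"{name:6s} {c:+.3f}")
```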
-
A New Design-Centering Methodology for VLSI Device Development
Aoki, Masuda, Shimada, and Sato
VLSI yield optimization and design centering are two
key interests in the development of submicron VLSI's. Accordingly,
we have developed a new design automation technique based on simulation
CAD tools. The features of this methodology are great reduction of
simulation time in device optimization and accurate prediction of process
sensitivity in device performance. The approach we used was basically
a modification of the "design of experiment" method. This approach
makes it possible to obtain an optimum design with a large number of design
parameters.
The methodology was successfully applied to the optimization
of a 0.5 um MOSFET structure based on only a one-day computation
by a supercomputer (S-810) using a two-dimensional device simulator.
In the design centering, we assumed five objective device performances,
that is, threshold voltage VTH, output conductance GD,
drain current ID, VTH dependence on gate length ΔVTH/ΔLG,
and maximum substrate current Isubmax. The use of the
device design centering system predicted an optimized nMOSFET with 0.52-um
gate length, 9.4-nm gate oxide thickness, and 1.6 x 10^16 cm^-3
substrate concentration for a given set of objective performances.
Statistical variations of device characteristics were also calculated.
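The design-centering idea itself can be sketched with a Monte Carlo yield estimate over a cheap surrogate model; the surrogate, spec window, and variation sigmas below are all invented, and this is not the authors' optimization procedure:

```python
# Sketch of the design-centering idea only (not the authors' algorithm): for a
# few candidate nominal points, perturb the process parameters with assumed
# variations, push them through a cheap surrogate model of VTH (invented here),
# and keep the nominal that maximizes the fraction of samples inside spec.
import numpy as np

rng = np.random.default_rng(6)

def vth_surrogate(tox_nm, nsub_1e16):
    """Invented response-surface stand-in for a device simulator."""
    return 0.25 + 0.02 * tox_nm + 0.05 * np.sqrt(nsub_1e16)

def yield_estimate(tox_nom, nsub_nom, n=20000, spec=(0.45, 0.55)):
    tox = rng.normal(tox_nom, 0.3, n)            # assumed oxide-thickness sigma (nm)
    nsub = rng.normal(nsub_nom, 0.15, n)         # assumed doping sigma (1e16 cm^-3)
    vth = vth_surrogate(tox, nsub)
    return np.mean((vth > spec[0]) & (vth < spec[1]))

candidates = [(8.5, 1.4), (9.4, 1.6), (10.5, 1.8)]   # (tox nm, Nsub 1e16 cm^-3)
for c in candidates:
    print(c, "estimated yield ~", round(yield_estimate(*c), 3))
best = max(candidates, key=lambda c: yield_estimate(*c))
print("best nominal:", best)
```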