The modern information era largely depends on the
development of microelectronics. Since the first transistor was invented
in 1947, microelectronics has developed for nearly half a century
at the pace described by Moore's Law, which states that device density
and device speed double roughly every eighteen months. People are now talking
about 1000 MHz microprocessors and 1 GB DRAMs. This rapid development is a
result of device miniaturization. This project deals with one of the many
problems in this field.
Project Goal:
Study the effects of the surface pre-baking parameters,
including temperature, time, and H2 flow, on TiSi2 deposition on arsenic-doped
substrates, with the main response of interest being consumption.
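As a rough illustration of how the effects of the three pre-bake factors on consumption might be screened, the sketch below fits a main-effects regression over a small two-level layout. The temperature, time, and H2-flow levels and the consumption values are illustrative placeholders, not measurements from the study.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Placeholder runs: each row would be one pre-bake condition with its
    # measured consumption (all numbers below are illustrative only).
    data = pd.DataFrame({
        "temp":    [800, 800, 900, 900, 800, 800, 900, 900],   # deg C (assumed levels)
        "time":    [30, 60, 30, 60, 30, 60, 30, 60],           # seconds (assumed levels)
        "h2_flow": [1, 1, 1, 1, 5, 5, 5, 5],                   # slm (assumed levels)
        "consumption": [12.1, 13.4, 15.2, 16.8, 11.7, 12.9, 14.8, 16.1],
    })

    # Main-effects model: how do temperature, time, and H2 flow move consumption?
    fit = smf.ols("consumption ~ temp + time + h2_flow", data=data).fit()
    print(fit.summary())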
This paper discusses the analysis of production process data from a company. The aim is to minimize the cost due to improper sizing. We fit a multiple regression model to the data, test it, and then use the model to simulate the process and find the settings that best meet the target value of 4 while keeping the standard deviation at a minimum. The paper closes with a final recommendation for the levels of the variables and for further analysis of the data.
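A minimal sketch of the simulation step described above, assuming a fitted two-variable regression model: the coefficients, noise level, and candidate settings below are illustrative placeholders rather than the study's estimates.

    import numpy as np

    # Hypothetical fitted model: size = b0 + b1*x1 + b2*x2 + error
    b0, b1, b2, sigma = 1.0, 0.5, 0.2, 0.1      # placeholder coefficients
    target = 4.0                                 # target value from the problem
    rng = np.random.default_rng(0)

    best = None
    for x1 in np.linspace(0, 5, 21):             # candidate settings for variable 1
        for x2 in np.linspace(0, 10, 21):        # candidate settings for variable 2
            sims = b0 + b1 * x1 + b2 * x2 + rng.normal(0, sigma, 1000)
            score = abs(sims.mean() - target) + sims.std()   # penalize bias and spread
            if best is None or score < best[0]:
                best = (score, x1, x2)
    print("recommended settings:", best[1], best[2])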
This study was conducted to fulfill the objectives
of ST 516, Experimental Statistics for Engineers. The idea for the study
was adapted from a paper toy from Thailand.
Problem Statement:
A snack company would like to advertise their new
product by attaching a toy to the snack box. Therefore, a paper helicopter
was designed to be a part of the box. This helicopter consists of a body,
two propellers, and one weight. When dropped freely, the helicopter will rotate
and fall slowly to the floor if its design is appropriate.
The conditions suspected to affect the rotational characteristics and falling
time were: 1) body length and width; 2) propeller length and width; 3)
propeller angle; and 4) weight.
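One way to lay out screening runs for these suspected factors is a full two-level factorial, sketched below; the low and high levels shown are illustrative assumptions, not the dimensions used in the study.

    from itertools import product

    # Low/high levels for each suspected factor (placeholder values).
    factors = {
        "body_length": (8.0, 12.0),   # cm
        "body_width":  (2.0, 3.0),    # cm
        "prop_length": (6.0, 10.0),   # cm
        "prop_width":  (2.0, 4.0),    # cm
        "prop_angle":  (0.0, 45.0),   # degrees
        "weight":      (1, 2),        # number of paper clips
    }

    # Full 2^6 factorial: every combination of low/high levels (64 drops).
    runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
    print(len(runs), "runs; first run:", runs[0])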
Transparent oxide (SiOx) barrier films
deposited on polymer substrates for reducing gas and water vapor permeation
through plastic packaging materials are of commercial interest for various
food and beverage applications. The most promising deposition process
is Plasma Enhanced Chemical Vapor Deposition (PECVD), which enables deposition
at low temperatures and avoids the possibility of damaging the plastic substrate.
In order for these films to have superior barrier
properties, the PECVD deposition conditions need to be optimized, since
they are found to have a strong impact on the permeability of the final
film. When optimizing these variables, one of the most important outcomes
is the rate at which the SiOx film is deposited.
The framework of this study is the analysis of a
three-factor central composite design whose process variables
are power density, average gas velocity, and pressure, as can be seen in
Table 1.
The main purpose of this analysis is to determine
how these process conditions affect the deposition rate obtained and how
they can be optimized in order to maximize the deposition rate.
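The sketch below shows one way such a three-factor central composite design could be fit with a full second-order model and the predicted deposition rate maximized over the coded region. The run layout follows a standard rotatable CCD; the responses are synthetic placeholders rather than the rates measured in the study, and the actual settings of Table 1 would replace the coded levels.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Coded central-composite runs: 8 factorial, 6 axial, 6 center points.
    rng = np.random.default_rng(1)
    df = pd.DataFrame(
        [(p, v, pr) for p in (-1, 1) for v in (-1, 1) for pr in (-1, 1)]
        + [(a, 0, 0) for a in (-1.682, 1.682)]
        + [(0, a, 0) for a in (-1.682, 1.682)]
        + [(0, 0, a) for a in (-1.682, 1.682)]
        + [(0, 0, 0)] * 6,
        columns=["power", "velocity", "pressure"],
    )
    # Synthetic deposition rates standing in for the measured responses.
    df["rate"] = (50 + 5 * df.power + 3 * df.velocity - 2 * df.pressure
                  - 4 * df.power ** 2 + rng.normal(0, 1, len(df)))

    # Full second-order (quadratic) response-surface model in the coded factors.
    quad = smf.ols("rate ~ (power + velocity + pressure)**2"
                   " + I(power**2) + I(velocity**2) + I(pressure**2)", data=df).fit()

    # Crude grid search for the coded settings that maximize the predicted rate.
    grid = pd.DataFrame([(a, b, c)
                         for a in np.linspace(-1.682, 1.682, 15)
                         for b in np.linspace(-1.682, 1.682, 15)
                         for c in np.linspace(-1.682, 1.682, 15)],
                        columns=["power", "velocity", "pressure"])
    print(grid.loc[quad.predict(grid).idxmax()])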
The data collected are college students' ratings of concern over crime, ranging from 0 (no concern) to 25 (very concerned). The interviewers also collected information on each student's age, year in college (1 = freshman, ..., 4 = senior), family income (in thousands of dollars), and gender (0 = male, 1 = female). The interviewer wanted to know which of the variables are useful in predicting the students' concern over crime.
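A minimal sketch of this screening question: fit the full four-predictor regression and inspect the p-values. The records shown below are illustrative, not the actual survey responses.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Illustrative records only; the real survey data would replace these.
    survey = pd.DataFrame({
        "concern": [5, 12, 20, 8, 15, 3, 22, 10, 18, 7],
        "age":     [18, 19, 21, 22, 20, 18, 23, 19, 21, 20],
        "year":    [1, 2, 4, 4, 3, 1, 4, 2, 3, 2],            # 1 = freshman ... 4 = senior
        "income":  [40, 55, 80, 35, 60, 45, 90, 50, 70, 65],  # thousands of dollars
        "gender":  [0, 1, 1, 0, 1, 0, 1, 0, 1, 0],            # 0 = male, 1 = female
    })

    # Fit the full model, then see which predictors have small p-values.
    fit = smf.ols("concern ~ age + year + income + gender", data=survey).fit()
    print(fit.pvalues)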
Ongoing research in the field of fire protection
has led to the development both of test methods that predict the fire resistance
of a fabric and of fabrics that exhibit superior fire-resistant properties.
In spite of this progress, many questions are left unanswered. In
the area of fire protective clothing, two tests are mainly used: a benchmark
test and a full mannequin test. The benchmark test,
called the Thermal Protective Performance test (TPP test), was developed
at DuPont in the late 1970s. Since then, this test method has been the
research topic of several studies. One frequent problem is that fabrics
seem to behave differently under TPP test conditions than they do during
tests using the thermal mannequin. The mannequin test, which is used to
test the design and the protective properties of a fire protective suit,
is fairly expensive because both the materials and the preparation of the
mannequin are costly.
Objectives:
The objectives of this study are to investigate
the variability and reliability of measurements made on the TPP tester.
Three different types of fabrics were used, each showing a different
reaction to a high incident heat flux in the TPP test. The
fabrics are: 1) cotton with a fire retardant finish, where the finish inhibits
flame propagation and the cotton does not shrink during exposure to fire; 2) Kevlar
®/PBI, an inherently flame retardant fiber, stable when exposed to flame;
and 3) Nomex, an inherently flame retardant fiber in which shrinkage occurs
during exposure to flame.
Two test modes, with and without a spacer, will allow
an assessment of whether measurements are taken accurately and reliably. They will
also show whether an air gap between the fabric and the sensor leads to
higher variability in the measurement results.
Measurements with both types of sensors, the standard
TPP sensor and the newly developed sensor, will lead to a better understanding
of the variability of heat flux measurements in the TPP test.
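One simple way to examine this variability is to compare the spread of readings across the fabric, spacer, and sensor combinations, as sketched below with synthetic readings standing in for the actual TPP measurements.

    import numpy as np
    import pandas as pd

    # Synthetic heat-flux readings: 3 fabrics x 2 spacer modes x 2 sensors
    # x 4 replicates (illustrative values only).
    rng = np.random.default_rng(2)
    rows = []
    for fabric in ["FR cotton", "Kevlar/PBI", "Nomex"]:
        for spacer in ["with spacer", "without spacer"]:
            for sensor in ["standard TPP", "new sensor"]:
                for rep in range(4):
                    spread = 1.5 if spacer == "with spacer" else 0.8  # assumed
                    rows.append((fabric, spacer, sensor, 10 + rng.normal(0, spread)))
    df = pd.DataFrame(rows, columns=["fabric", "spacer", "sensor", "reading"])

    # Compare the standard deviation of readings across the test conditions.
    print(df.groupby(["fabric", "spacer", "sensor"])["reading"].std())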
Objective:
This paper features a study of specific test variables
that were considered in analyzing the taste of light domestic beers. Experimental
procedures to characterize the taste of the beers were conducted and are described.
Results obtained from this method were analyzed to assess which beer, in which
type of container, tastes best.
Hypothesis:
Taste is dependent upon the price of the domestic
light beer.
Assumptions:
As with any experiment, key assumptions must be
accounted for at the beginning. For this experiment, it was assumed
that the people in the experiment provide representative responses to what
they taste. It is also assumed that the experimental values obtained follow
a linear regression model.
Introduction:
The interest in this study lies in the need to
properly price domestic cans and bottles based on their taste. For years,
college students have declared their favorite beer based on the notion that
cheap beer tastes good. In reality, this may not be the case. For example,
if someone purchases only cheap beer, they may simply become accustomed
to the taste of that particular beer. For this
reason, a study was undertaken: does cheap beer truly taste better, or
will statistics show that the bias is incorrect?
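Under the linear-regression assumption stated earlier, a minimal sketch of the analysis is to regress the taste score on price and container type, as below; the scores and prices are placeholders rather than the panel's actual ratings.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Illustrative taste scores; the tasters' actual ratings would replace these.
    tastings = pd.DataFrame({
        "taste":     [6.5, 7.0, 5.5, 8.0, 6.0, 7.5, 5.0, 8.5],
        "price":     [3.99, 5.49, 3.99, 6.99, 4.49, 5.99, 3.49, 7.49],  # per six-pack
        "container": ["can", "bottle", "can", "bottle", "can", "bottle", "can", "bottle"],
    })

    # Linear model of taste on price and container type (can vs. bottle).
    fit = smf.ols("taste ~ price + C(container)", data=tastings).fit()
    print(fit.params)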
Problem Definition:
The IE Department at NC State has recently purchased
a rapid prototyping machine from Sanders RP Systems. The company makes
claims as to the accuracy of the prototype parts produced on the system;
however, these claims are without substantiation. Individuals in the department
are interested in knowing how accurate the prototype parts are compared
to the designed specification. Several factors determine the overall capabilities
of the machine.
The objectives of this study were to determine whether
there are any differences in accuracy depending on where the part is built
in the build chamber, and whether there are differences in accuracy among
the x, y, and z orientations.
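These two questions map naturally onto a two-way analysis of variance with build location and axis as factors, sketched below; the dimensional errors are synthetic placeholders for the measured deviations from the design specification.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Synthetic dimensional errors (mm); real measurements would replace these.
    rng = np.random.default_rng(3)
    rows = [(loc, axis, rng.normal(0.05, 0.02))
            for loc in ["front", "center", "back"]     # assumed chamber positions
            for axis in ["x", "y", "z"]
            for rep in range(3)]
    df = pd.DataFrame(rows, columns=["location", "axis", "error"])

    # Two-way ANOVA: does accuracy depend on build location, orientation, or both?
    fit = smf.ols("error ~ C(location) * C(axis)", data=df).fit()
    print(anova_lm(fit, typ=2))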
Introduction:
The objective of this project is to predict future
monthly demand for international flights. We have collected data
on the number of international tickets sold at an undisclosed airline over
a period of twelve years.
Assumptions:
1) Assume the data are independent and identically distributed;
2) Assume the data were uninfluenced by a fear of terrorism due to a
recent terrorist attack. In other words, we are assuming that no individual
month was affected differently as a result of terrorism.
In order to predict the future, we must consider
the past. Using multiple regression, we can use the historical data on
international airline passengers to forecast the future demand for international
flight travel. When attempting to forecast this future demand of international
flights, several questions must first be considered:
Are there any trends?
Is there seasonality?
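Both questions can be handled in a single regression by including a linear trend term and monthly indicator variables, as in the sketch below; the series used is a synthetic placeholder for the airline's twelve years of ticket sales.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Placeholder series: 144 months with a trend and a yearly cycle.
    rng = np.random.default_rng(4)
    t = np.arange(144)
    passengers = 100 + 2.0 * t + 15 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, 144)
    df = pd.DataFrame({"passengers": passengers, "t": t, "month": (t % 12) + 1})

    # Regression with a linear trend and monthly dummies for seasonality.
    fit = smf.ols("passengers ~ t + C(month)", data=df).fit()

    # Forecast the next twelve months by extrapolating trend and seasonal pattern.
    future = pd.DataFrame({"t": np.arange(144, 156), "month": np.arange(1, 13)})
    print(fit.predict(future))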
The objective of this experiment is to determine the effects of four operating parameters on the final brightness of a softwood pulp from a CDED1 bleaching sequence. The four operating parameters considered in the experiment are Kappa number, Kappa factor, ClO2 substitution in the CD stage, and the percent ClO2 addition in the D1 stage of the bleaching sequence. A central composite design is used to determine a model relating the four operating parameters to the final brightness of the pulp.
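The coded run layout of a four-factor central composite design can be built as sketched below; the axial distance and the number of center points shown are common choices, not necessarily those used in this experiment.

    from itertools import product
    import numpy as np

    # Coded CCD for the four factors: Kappa number, Kappa factor,
    # ClO2 substitution (CD stage), and percent ClO2 addition (D1 stage).
    k = 4
    alpha = np.sqrt(k)                                      # one common axial distance
    factorial = list(product([-1, 1], repeat=k))            # 16 corner runs
    axial = [tuple(a if j == i else 0 for j in range(k))    # 8 axial runs
             for i in range(k) for a in (-alpha, alpha)]
    center = [(0, 0, 0, 0)] * 6                             # replicated center runs
    design = factorial + axial + center
    print(len(design), "coded runs to translate into actual operating values")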
This paper discusses the development of a linear regression model for determining a hotel chain's gross income based on four variables: month, week, occupancy, and location. Initially, a polynomial equation was reduced to a linear model using a statistical approach. The linear regression model was then analyzed using SAS software to determine the statistically significant variables within the linear model and to determine the coefficients of the significant variables. Finally, the adequacy of the model was checked using residual plots.
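The original analysis was done in SAS; the sketch below shows the same kind of fit and residual-versus-fitted adequacy check in Python, on placeholder records rather than the hotel chain's data.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    import matplotlib.pyplot as plt

    # Placeholder records; the hotel chain's actual figures would replace these.
    rng = np.random.default_rng(5)
    n = 120
    df = pd.DataFrame({
        "month": rng.integers(1, 13, n),
        "week": rng.integers(1, 5, n),
        "occupancy": rng.uniform(0.4, 1.0, n),
        "location": rng.choice(["A", "B", "C"], n),
    })
    df["income"] = 50 + 100 * df.occupancy + rng.normal(0, 5, n)

    # Linear model of gross income on the four explanatory variables.
    fit = smf.ols("income ~ C(month) + week + occupancy + C(location)", data=df).fit()

    # Adequacy check: residuals vs. fitted values should show no pattern.
    plt.scatter(fit.fittedvalues, fit.resid)
    plt.axhline(0, linestyle="--")
    plt.xlabel("fitted gross income")
    plt.ylabel("residual")
    plt.show()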
This project originated from a contact we had made
through consulting. The project was with a large pharmaceutical company
in North Carolina. Their main products are intravenous solutions for hospitals
and individuals.
They were having quality problems with a part called
an injection site. This little part is fixed to the I.V. bag so nurses
and doctors can administer intravenous drugs to a patient without having
to stick the person. This is a high-volume part, about 2,500,000 per day,
made on specialized machines developed in-house. The settings on
these machines are not standardized; they are set based on operator instinct.
Therefore, when a new operator is introduced, the machine's performance suffers
because of a lack of knowledge of how to run the machine. Thus the need
for fixed machine settings arises.
During the production of these parts, the operator
collects some of them to perform a test called the 'Pull Force
Test.' This test measures the amount of force, in pounds, required to separate
(pull apart) the parts. The test is explained in more detail in later sections. The
average of these pull force values has to be at least five to pass the
plant quality requirements. This ensures that the site will not be defective
and become a health risk to the patient.
We analyzed the Injection Site Machine process to
determine if we could use Design of Experiments to set the machine parameters.
We found three main variables that, if changed, could affect
the machine's performance:
1. Cycle time of the machine
2. Temperature of the heat tunnel
3. The type of shrink band used.
After our analysis, we determined that a DOE could be used. The description
of the system follows this introduction.
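With three factors, a natural starting point is a 2^3 factorial over cycle time, heat-tunnel temperature, and shrink band type, as sketched below; the levels shown are illustrative assumptions, not the plant's actual operating limits.

    from itertools import product

    # Coded 2^3 factorial for the three machine variables (placeholder levels).
    levels = {
        "cycle_time":  (3.0, 5.0),        # seconds per cycle (assumed)
        "tunnel_temp": (300, 400),        # degrees F (assumed)
        "shrink_band": ("type A", "type B"),
    }
    runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
    for i, run in enumerate(runs, 1):
        print(i, run)   # each run would be executed and the pull force recorded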
High competition in the textile market pushes companies
to seek high-quality products at lower costs. Lower costs are achieved
with faster machines and high efficiency. Weaving machines are getting
faster every day, but to obtain high efficiency, good quality
yarns are needed. They are needed not just for efficiency but also for
the quality of the fabric produced. Fiber content, fiber denier, blend
percentage, yarn number, yarn tenacity, elongation, impurities, and evenness
are some of the yarn specifications that are important both for efficiency
and for the end-product specification.
Ring spinning is still considered the quality
standard in the textile industry. Rotor spinning and air-jet spinning are
much faster than ring spinning and are seen as the future of spinning.
Today ring spinning can go up to 20,000 rpm, whereas rotor spinning goes up
to 130,000 rpm and air-jet spinning up to 300 m/min (more than 15 times faster
than ring spinning). Of course, each system has its own advantages and
disadvantages.
The general objective of this study was to compare
ring, rotor, and air-jet spun yarn specifications by investigating the
effects of fiber denier, yarn number, and twist factor. In order to observe
the differences created by these parameters, a nested (hierarchical)
design is used together with a 3x2 factorial arrangement.
For the semester, I decided to design an experiment
with four variables and test whether they have an effect on a model of
a printed circuit board bus line, and whether there is any interaction between
the four variables. The experiment was based upon transmitting data across
a noisy channel and observing the resulting data after the noise is added. The
main objective is to understand what happens on a computer board when two chips talk
to each other. This is a concern because, as the bus lines on the circuit
boards are sped up, the question is what happens to the
signal in the noisy environment of the board. If the signal is degraded
to the point where it cannot be recovered from the bus line, then the result
is an erroneous input to the second chip from the first. Also, if there
is enough crosstalk between bus lines on the board, they can deliver
wrong data to the end of the bus line.
To model a data bus line on a printed circuit
board, you need to control four variables: the gain of the driver,
the amount of Gaussian noise from the channel, the crosstalk noise from
channel crosstalk, and the amplification of the crosstalk noise. The driver gain
is used to model the amount of gain needed from the driver to deliver the
signal down the bus line with immunity to the noise. The Gaussian noise
is used to model the noise injected into the signal by a lossy bus line.
The crosstalk noise is used to model the amount of crosstalk created
by the bus line running in parallel with another bus line on the circuit
board. The gain of the crosstalk noise is modeled because, at different points
in time, the crosstalk is either additive or subtractive. Using these four
variables, the circuit board bus line can be modeled, and one can
check whether these variables contribute significant variance and whether
they interact.
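A toy Monte Carlo version of this four-variable model is sketched below: a driven logic level is corrupted by Gaussian line noise and by crosstalk whose sign flips between additive and subtractive, and the bit error rate is estimated for one combination of factor levels. All numeric values are illustrative assumptions, not board measurements.

    import numpy as np

    rng = np.random.default_rng(7)

    def received_bit(bit, driver_gain, noise_sigma, xtalk_level, xtalk_gain):
        signal = driver_gain * (1.0 if bit else -1.0)            # driven logic level
        noise = rng.normal(0.0, noise_sigma)                     # lossy-line Gaussian noise
        xtalk = xtalk_gain * xtalk_level * rng.choice([-1, 1])   # additive or subtractive
        return (signal + noise + xtalk) > 0.0                    # threshold at the receiver

    # Estimate the error rate for one combination of the four factor levels.
    bits = rng.integers(0, 2, 10_000).astype(bool)
    errors = sum(received_bit(b, driver_gain=1.0, noise_sigma=0.4,
                              xtalk_level=0.3, xtalk_gain=0.5) != b for b in bits)
    print("bit error rate:", errors / len(bits))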
N-channel MOSFETs and P-channel MOSFETs are used
in the conventional CMOS technology with which semiconductor chips are
fabricated. The proper design of these MOSFETs is critical
to maximizing their performance. The availability of device simulators
allows device designers to tailor their devices as they want them to
be, even before the devices are actually fabricated.
MOSFETs are fabricated on silicon crystals. For
an N-channel MOSFET, the "substrate" is doped p-type with acceptor impurities,
and the "source" and the "drain" regions are doped n-type with donor impurities.
As MOSFETs scale to smaller and smaller dimensions, all physical and doping
variables are scaled suitably to give increasing performance as devices
scale.
In conventional technologies, the substrate is doped
using ion implantation of acceptor atoms, but the thermal cycles
that follow cause the dopants to spread, so the doping profile (i.e.,
the concentration of the acceptor ions as measured from the Si-SiO2
interface) becomes almost uniform.
The smallest gate length of MOSFETs in a given technology
is ideally constant (in this report, I am considering MOSFETs with channel
length 0.1um). However, because of fluctuations in gate lithography (the
technology that is used to define the gate), the gate length has statistical
fluctuations (typically with 3-sigma variations of at most +/- 20%). Let
us assume that for the 0.1um technology we are considering, the gate length
can vary from Lmin=0.08um to Lmax=0.12um.
When a MOSFET is turned off (by applying a low bias
to the gate), a small current flows from the drain to the source; this is
called the off-state leakage current, Ioff. This current
component is undesirable because it leads to loss of power under static
conditions; this heats up the chip and drains the battery (which
is especially undesirable for applications like portable personal computers).
The Ioff is worst in devices which have the shortest gate length.
So in the 0.1um technology with gate-length fluctuations from Lmin
to Lmax, the Ioff is worst for the Lmin
devices. The Semiconductor Industry Association (SIA) keeps revising the
National Technology Roadmap for Semiconductors (NTRS), which specifies the
scaling parameters for CMOS generations that are yet to come. For the 0.1um
technology, the worst-case Ioff (measured at 0.08um) set by
the NTRS is 3 nanoamperes per micron.
When a MOSFET is turned on (by applying a large
voltage bias to the gate), a large current flows from the drain to the source;
this is called the drive current, Isat. It is desirable
to keep the Isat as high as possible because the higher it is,
the faster the microprocessor. The Isat has to be measured
at the largest gate length, where it is at its minimum. So for a 0.1um technology,
the Isat would be measured at the 0.12um gate length (i.e.,
Lmax).
So for the 0.1um technology, the device design has
to maximize the Isat at the Lmax=0.12um gate length while ensuring the Ioff
at the Lmin=0.08um gate length is no more than 3nA/um. Device
physics shows that this is actually a tradeoff: in general, trying to reduce
the Ioff involves increasing the substrate doping, Na.
This in turn causes another device parameter, the threshold voltage,
Vt, to go up, which in turn causes the Isat to
go down.
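The design loop this tradeoff implies can be sketched as a simple screening over candidate substrate dopings: check the Ioff at Lmin against the 3 nA/um budget and keep the doping that gives the highest Isat at Lmax. In the sketch below, simulate_device is a stand-in for the actual device simulator, and the expressions inside it are placeholders, not real device physics.

    import numpy as np

    IOFF_SPEC = 3e-9            # A per um of width, NTRS worst-case budget
    L_MIN, L_MAX = 0.08, 0.12   # um, gate-length extremes of the 0.1um technology

    def simulate_device(Na, L):
        # Stand-in for the device simulator; both expressions are placeholders.
        ioff = 1e-6 * np.exp(-Na / 5e17) / L         # leakage falls as doping rises
        isat = 7e-4 * (L_MAX / L) * (1 - Na / 1e19)  # drive current falls as doping rises
        return ioff, isat

    best = None
    for Na in np.linspace(1e17, 5e18, 50):          # candidate substrate dopings (cm^-3)
        ioff_worst, _ = simulate_device(Na, L_MIN)  # leakage is worst at Lmin
        _, isat_worst = simulate_device(Na, L_MAX)  # drive current is lowest at Lmax
        if ioff_worst <= IOFF_SPEC and (best is None or isat_worst > best[0]):
            best = (isat_worst, Na)
    print("best Na meeting the Ioff budget:", best)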
Trying to improve this tradeoff between Isat
and Ioff is one of the prime objectives of channel engineering,
which aims to create substrate doping profiles that move away from the
traditionally uniform substrate doping (UD).
A doping profile that has generated some interest in the community in recent
years is the Super-Steep Retrograde (SSR) profile, which can be made
with epitaxial techniques. The aim is to leave a lightly doped region
close to the surface with thickness tepi.
It is of interest to see whether the tradeoff between
the Ioff and Isat can be improved using the SSR profile.
I will not go into the physics of why SSR profiles may be able to improve
the tradeoff, but suffice it to say that there are factors that suggest
SSR may improve this tradeoff, and there are factors that suggest otherwise.
We need to simulate the devices to see whether this is true.