Probability Seminar
Topics
October 17, 1996
Y. L. Tong
Georgia Tech
Dimension-Reduction Inequalities for Exchangeable Random Variables,
With Applications in Statistical Inference
Exchangeable random variables make frequent appearances in probability and
statistics, and play a central role in Bayesian theory, multiple comparisons,
reliability theory, and certain other applications. This talk is concerned
with a class
of dimension-reduction inequalities for exchangeable random variables with
selected applications to statistical inference problems. The proof of the main
theorem depends on a moment inequality and de Finetti's theorem, which states
that exchangeable random variables are conditionally i.i.d. random variables.
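As a numerical illustration of the de Finetti structure (a sketch of my own, not part of the talk; all parameter choices are hypothetical): an exchangeable sequence can be generated by drawing a latent parameter first and then sampling i.i.d. conditionally on it, and exchangeability can be checked empirically.

    import numpy as np

    rng = np.random.default_rng(0)

    def exchangeable_sample(n, rng):
        # De Finetti: draw a latent success probability, then i.i.d. Bernoullis.
        theta = rng.uniform()                  # mixing (prior) distribution
        return rng.binomial(1, theta, size=n)  # conditionally i.i.d. given theta

    # Exchangeability check: permutations of a sample are equally likely, so
    # P(X1=1, X2=0) should match P(X1=0, X2=1).
    samples = np.array([exchangeable_sample(2, rng) for _ in range(100_000)])
    p10 = np.mean((samples[:, 0] == 1) & (samples[:, 1] == 0))
    p01 = np.mean((samples[:, 0] == 0) & (samples[:, 1] == 1))
    print(p10, p01)  # approximately equal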
October 24, 1996
Doug Down
Georgia Tech
Stability and Monotone Properties of a Tandem
Queueing Network under Window Flow Control
In this talk a network under window flow control is studied.
The system is modelled as a (tandem) queueing network with two types
of sources: uncontrolled exogenous traffic and a controlled source. Window
flow control operates on the following principle: the controlled
source cannot send more than K packets without receiving an
acknowledgement from the destination. The situation of interest in
this work is that of the flow control being active, in which case
the system may be modelled as a network in which exogenous
traffic traverses the system as before but the controlled source
can be replaced by a closed loop of K packets. Service at each of the
servers is assumed to be FIFO.
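A minimal numerical sketch of this closed-loop view (my own illustration with hypothetical rates, ignoring the exogenous traffic): two exponential servers in a cycle carrying K packets form a birth-death chain whose throughput saturates as the window grows.

    import numpy as np

    def closed_tandem_throughput(K, mu1, mu2):
        # State n = number of packets at server 1 (the other K - n at server 2).
        # Transitions: n -> n-1 at rate mu1 (n > 0), n -> n+1 at rate mu2 (n < K).
        # Birth-death stationary distribution: pi(n) proportional to (mu2/mu1)^n.
        rho = mu2 / mu1
        pi = rho ** np.arange(K + 1)
        pi /= pi.sum()
        return mu1 * (1.0 - pi[0])   # completion rate at server 1

    for K in (1, 2, 4, 8, 16):
        print(K, closed_tandem_throughput(K, mu1=1.0, mu2=0.8))
    # Throughput increases in K and saturates at min(mu1, mu2) = 0.8.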
The stability of the system is examined with an emphasis on the
situation in which the network dynamics may be described by a
Markov process.
It is found
that the system is stable under the usual load condition (service
rate greater than arrival rate) on the exogenous traffic,
and in particular that stability is independent
of the window size K. Monotonicity properties
of certain quantities in the system are identified,
which may have implications for
further analysis. Finally, the case in which the arrival and service
processes are simply assumed to be stationary will be examined.
October 31, 1996
Tom Kurtz
University of Wisconsin
Martingale problems for partially observed
Markov processes
We consider a Markov process $X$ characterized
as a solution of a martingale problem with generator $A$.
Let $Y(t)=\gamma (X(t))$. If we observe $Y$ but not $X$,
then the fundamental problem of filtering is to
characterize the conditional distribution
$\pi_t(\Gamma )=P(X(t)\in\Gamma |{\cal F}^Y_t)$.
Under very general conditions, the
probability measure-valued process $\pi$ can be
characterized as a solution of a martingale problem.
Applications of the general result include a proof of
uniqueness for the Kushner-Stratonovich equation for the
conditional distribution of a signal observed in additive
white noise, proofs of Burke's output theorem and an
analogous theorem of Harrison and Williams for
reflecting Brownian motion, conditions under which $Y$
is Markov, and proofs of uniqueness for measure-valued
diffusions.
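A discrete-time, finite-state analogue of the filtering problem (a sketch of my own with hypothetical parameters, not from the talk): X is a Markov chain, Y(t) = gamma(X(t)) is observed, and the conditional distribution pi_t is propagated by a prediction step followed by conditioning on the observed value.

    import numpy as np

    rng = np.random.default_rng(1)

    P = np.array([[0.9, 0.1, 0.0],     # transition matrix of the hidden chain X
                  [0.1, 0.8, 0.1],
                  [0.0, 0.2, 0.8]])
    gamma = np.array([0, 0, 1])        # Y = gamma(X): states 0 and 1 look alike

    def simulate(T):
        x, xs = 0, []
        for _ in range(T):
            x = rng.choice(3, p=P[x])
            xs.append(x)
        return np.array(xs)

    def filter_pi(ys, pi0):
        # pi_t(i) = P(X(t) = i | Y(0..t)): predict with P, then condition on Y.
        pi, out = pi0.copy(), []
        for y in ys:
            pi = pi @ P                 # prediction step
            pi = pi * (gamma == y)      # keep states consistent with observed Y
            pi /= pi.sum()              # normalize
            out.append(pi.copy())
        return np.array(out)

    xs = simulate(50)
    pis = filter_pi(gamma[xs], pi0=np.array([1.0, 0.0, 0.0]))
    print(pis[-1])   # conditional distribution of X(50) given the Y-history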
November 1, 1996
Reuven Y. Rubinstein
Technion, Israel
Optimization of Computer Simulation Models
with Rare Events
Discrete
event simulation systems (DESS) are widely
used in many diverse areas such as
computer-communication
networks, flexible manufacturing systems, project evaluation and review
techniques (PERT), and flow networks.
Because of their complexity, such systems
are typically
analyzed via Monte Carlo simulation methods.
This talk deals with
optimization
of complex computer simulation models involving
rare events. A classic example is finding an optimal
$(s,S)$ policy in a multi-item, multicommodity
inventory system when quality standards require
the backlog probability to be extremely small.
Our approach is based on change-of-measure
techniques, also called likelihood ratio (LR)
and importance sampling (IS) methods.
Unfortunately, for arbitrary probability measures
the LR estimators and the resulting optimal solutions
often tend to be unstable and may have
large variances. Therefore, choosing the
importance sampling distribution -- and in
particular its parameters -- in an optimal way
is an important task. We consider the case where the
IS distribution belongs to
the same parametric family as the
original (true) one
and use the
stochastic counterpart method
to handle
simulation-based optimization
models. More specifically, we use a two-stage
procedure: at the first stage we identify
(estimate) the optimal parameter vector of the
IS distribution, and at the second stage
the optimal solution of the
underlying constrained optimization problem.
Particular emphasis will be placed on
estimation of rare events
and on integration of the associated
performance function
into
stochastic optimization programs.
Supporting numerical results are provided as well.
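To fix ideas, a minimal likelihood-ratio sketch (my own, not the speaker's code; parameter choices are hypothetical) in which the IS density stays in the same parametric family as the original one: estimating a small tail probability for an exponential random variable by sampling from a tilted exponential and reweighting.

    import numpy as np

    rng = np.random.default_rng(2)

    lam, a, n = 1.0, 20.0, 10_000      # true rate, rare threshold, sample size
    true_p = np.exp(-lam * a)          # P(X > a) = e^{-lam*a}, about 2e-9

    # Crude Monte Carlo would need roughly 1/true_p samples.  Instead sample
    # X ~ Exp(theta) with a smaller rate theta and reweight by the likelihood
    # ratio f_lam(x) / f_theta(x) = (lam/theta) * exp(-(lam - theta) * x).
    theta = 1.0 / a                    # heuristic: put the IS mean near a
    x = rng.exponential(1.0 / theta, size=n)
    lr = (lam / theta) * np.exp(-(lam - theta) * x)
    est = np.mean((x > a) * lr)
    print(est, true_p)                 # IS estimate vs. exact value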
November 7, 1996
Walter Philipp
University of Illinois Urbana-Champaign
Weak Dependence in Probability, Analysis, and
Number Theory
In this talk we survey some of the basic facts on weak
dependence and some results on sums of lacunary trigonometric
series, with their applications to harmonic analysis and probabilistic
number theory. We will also mention some new results on the
domain of partial attraction of $\phi$-mixing random variables.
The talk will be accessible to non-experts and graduate students.
November 14, 1996
Raid Amin
University of West Florida
Some Control Charts Based on the Extremes
Howell (1949) introduced a Shewhart-type control chart for the smallest and
largest observations. He showed that the proposed chart was useful for
monitoring the process mean and process variability, and it allowed
specification limits to be placed on the chart. We propose an exponentially
weighted moving average (EWMA) control chart which is based on smoothing the
smallest and largest observations in each sample. A two-dimensional Markov
chain approximation to the average run length (ARL) is developed. A design procedure
for the MaxMin EWMA control chart is given. The proposed MaxMin EWMA chart
shows which parameters have changed, and in which direction the change
occurred. The MaxMin EWMA can also be viewed as smoothed distribution-free
tolerance limits. It is a control procedure that offers excellent graphical
guidance for monitoring processes. A modified (two-sided) MaxMin chart is
also discussed. Numerical results show that the MaxMin EWMA has very good
ARL properties for changes in the mean and/or variability. The MaxMin chart
has already been implemented at a local company with success.
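A sketch of the smoothing idea as stated (my own reading, with hypothetical parameters; not the authors' implementation): EWMAs of the per-sample minimum and maximum are tracked jointly, so a mean shift moves both statistics while a variance change widens the gap between them.

    import numpy as np

    rng = np.random.default_rng(3)

    def maxmin_ewma(samples, lam=0.2):
        # samples: (num_samples, n) array; smooth the per-sample min and max.
        lo, hi = samples.min(axis=1), samples.max(axis=1)
        e_lo, e_hi = np.empty_like(lo), np.empty_like(hi)
        e_lo[0], e_hi[0] = lo[0], hi[0]
        for t in range(1, len(lo)):
            e_lo[t] = lam * lo[t] + (1 - lam) * e_lo[t - 1]
            e_hi[t] = lam * hi[t] + (1 - lam) * e_hi[t - 1]
        return e_lo, e_hi

    # In-control data followed by a mean shift: both smoothed extremes drift
    # upward, signaling the direction of the change in the mean.
    data = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(1.5, 1, (20, 5))])
    e_lo, e_hi = maxmin_ewma(data)
    print(e_lo[-1], e_hi[-1])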
November 21, 1996
Serguei Foss
Novosibirsk State University and Colorado State University
Coupling and Renovation
In the first part of the talk, we introduce notions of coupling
(forward coupling) and strong coupling (backward coupling), and
show the use of these notions in the stability study of
Markov chains and of stochastically recursive sequences, and,
in particular, in the perfect simulation of the stationary distribution of
a homogeneous discrete-time Markov chain.
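For a finite chain, the backward-coupling idea can be sketched in a few lines (my illustration; the chain is hypothetical): run the chain from further and further in the past, reusing the same random inputs for the same time slots, until all starting states have coalesced by time 0; the common value is an exact draw from the stationary distribution.

    import numpy as np

    rng = np.random.default_rng(4)

    P = np.array([[0.5, 0.5, 0.0],
                  [0.25, 0.5, 0.25],
                  [0.0, 0.5, 0.5]])
    cum = P.cumsum(axis=1)

    def step(states, u):
        # Deterministic update driven by a common uniform u (grand coupling).
        return np.array([np.searchsorted(cum[s], u) for s in states])

    def cftp():
        us, T = [], 1                        # random inputs, fixed once drawn
        while True:
            while len(us) < T:
                us.append(rng.uniform())
            states = np.arange(3)            # start from ALL states at time -T
            for t in range(T - 1, -1, -1):
                # the step from time -(t+1) to -t always uses the input us[t]
                states = step(states, us[t])
            if np.all(states == states[0]):  # coalesced: exact stationary draw
                return states[0]
            T *= 2                           # go further into the past

    draws = np.array([cftp() for _ in range(5000)])
    print(np.bincount(draws) / 5000)         # compare with stationary (1,2,1)/4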
In the second part of the talk, we consider the following problem.
Let $Y \equiv Y_0 \equiv \{ X_n, n \geq 0 \}$ be a sequence of
random variables. For $k=1,2, \ldots $, put
$Y_k = \{ X_{k+n}, n \geq 0\}$ and denote by
$P_k$ the distribution of $Y_k$. When does there exist a probability
measure $P$ such that $P_k \to P$ in the total variation norm?
December 5, 1996
Minping Qian
Peking University, Beijing, China
An accelerated algorithm of Gibbs sampling
To overcome the difficulty of oscillation when the target density
is sharply peaked, a reversible calculation scheme is introduced.
Theoretical discussion and numerical examples show that it does
accelerate the computations.
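For orientation, a minimal two-component Gibbs sampler (my sketch, with hypothetical parameters; not the speaker's scheme): when the target is a highly correlated bivariate normal, the alternating conditional draws zigzag in small steps, the kind of slow, oscillating behavior an accelerated scheme aims to overcome.

    import numpy as np

    rng = np.random.default_rng(5)

    rho = 0.99                    # high correlation: a narrow, "peaky" ridge
    sd = np.sqrt(1 - rho ** 2)

    def gibbs(T):
        # Standard bivariate normal with correlation rho:
        # X | Y=y ~ N(rho*y, 1-rho^2), and symmetrically for Y | X=x.
        x = y = 0.0
        out = np.empty((T, 2))
        for t in range(T):
            x = rng.normal(rho * y, sd)
            y = rng.normal(rho * x, sd)
            out[t] = (x, y)
        return out

    chain = gibbs(10_000)
    # Slow mixing shows up as high autocorrelation along the ridge direction.
    s = chain[:, 0] + chain[:, 1]
    print(np.corrcoef(s[:-1], s[1:])[0, 1])   # close to 1 when rho is near 1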
January 10, 1997
Andre Dabrowski
University of Ottawa
Statistical Analysis of Ion Channel Data
Ion channels are small pores present in the outer membranes of most
biological cells. Their use by those cells in generating and transmitting
electrical signals has made their study of considerable importance in biology
and medicine. A considerable mathematical literature has developed on the
analysis of the alternating on/off current signal generated by ion channels
in "patch-clamp" experiments.
After a brief description of patch-clamp experiments and their associated
data, we will provide an overview of the major approaches to the statistical
analysis of current traces. The renewal-theoretic approach of Dabrowski,
McDonald and Rosler (1990) will be described in greater detail, and applied
to the analysis of data arising from an experiment on stretch-sensitive ion
channels.
January 16, 1997
Christian Houdre
Georgia Tech
An Interpolation Formula and Its Consequences
We present an interpolation formula for the expectation of
functions of Gaussian random vectors. This is then applied to derive
new correlation inequalities, comparison theorems, and tail
inequalities for various classes of functions of Gaussian vectors.
This approach can be extended to the infinitely divisible and the
discrete cube cases.
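For background (added here in the form the classical Gaussian interpolation formula is usually stated; the talk's formula may differ): for centered Gaussian vectors $X$ and $Y$, independent of each other, and smooth $f$ of moderate growth,
$$E f(X) - E f(Y) = \frac{1}{2} \int_0^1 \sum_{i,j} \big( E X_i X_j - E Y_i Y_j \big)\, E\, \partial_i \partial_j f\big( \sqrt{t}\, X + \sqrt{1-t}\, Y \big)\, dt,$$
so that comparison, correlation, and tail inequalities follow by choosing $f$ appropriately and controlling the signs of the covariance differences.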
January 30, 1997
Ming Liao
Auburn University
Lévy processes on Lie groups and stability of stochastic flows
We consider stochastic flows generated by stochastic differential
equations on compact manifolds which are contained in finite dimensional
Lie transformation groups. Using a result on the limiting behavior of
Lévy processes on Lie groups, we can decompose such a stochastic
flow as a product of the following three transformations:
(1) a random "rotation" which tends to a limit as time goes to
infinity;
(2) an asymptotically deterministic flow;
(3) another random "rotation".
Using this decomposition, we may describe the random "sinks" and
"sources" of the stochastic flow explicitly. Examples of stochastic
flows on spheres will be discussed.
February 6, 1997
Dana Randall
Georgia Tech
Testable algorithms for generating self-avoiding walks
We present a polynomial time Monte Carlo algorithm for almost uniformly
generating and approximately counting self-avoiding walks in rectangular
lattices. These are classical problems that arise, for example, in the
study of long polymer chains. While there are a number of Monte Carlo
algorithms used to solve these problems in practice, these are heuristic
and their correctness relies on unproven conjectures. In contrast, our
algorithm relies on a single, widely-believed conjecture that is simpler
than preceding assumptions, and, more importantly, is one which the
algorithm itself can test. Thus our algorithm is reliable, in the sense
that it either outputs answers that are guaranteed, with high probability,
to be correct, or finds a counterexample to the conjecture. (Joint work
with Alistair Sinclair.)
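For contrast, the naive rejection method (my sketch, not the Randall-Sinclair algorithm) is exactly uniform but exponentially slow; the point of the talk's algorithm is to achieve polynomial time.

    import random

    random.seed(6)

    STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def uniform_saw(n):
        # Rejection sampling: propose an n-step simple random walk and accept
        # only if it is self-avoiding.  Every n-step SAW is produced with the
        # same probability 4**(-n) per attempt, so accepted walks are exactly
        # uniform -- but the acceptance rate decays exponentially in n.
        while True:
            pos = (0, 0)
            visited = {pos}
            walk = [pos]
            for _ in range(n):
                dx, dy = random.choice(STEPS)
                pos = (pos[0] + dx, pos[1] + dy)
                if pos in visited:
                    break
                visited.add(pos)
                walk.append(pos)
            else:
                return walk

    print(uniform_saw(15))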
February 13, 1997
Steen Andersson
Indiana University
Models combining group symmetry and conditional
independence in a multivariate normal distribution
Three of the most important concepts used in defining a
statistical model are independence, conditional distributions,
and symmetries. Statistical models given by a combination of
two of these concepts, conditional distributions and independence,
the so-called conditional independence models, have received
increasing attention in recent years. The models are defined in
terms of directed graphs, undirected graphs or a combination
of the two, the so-called chain graphs.
This paper combines conditional independence (CI) restrictions with group
symmetry (GS) restrictions to obtain the group symmetry
conditional independence (GS-CI) models. The group symmetry
models and the conditional independence models are thus
special cases of the GS-CI models. A complete solution to
the likelihood inference for the GS-CI models is presented.
Special examples of GS models are Complete Symmetry, Compound
Symmetry, Circular Symmetry, Complex Normal Distributions,
Multivariate Complete Symmetry, Multivariate Compound Symmetry,
and Multivariate Circular Symmetry. When some of these simple
GS models are combined with some of the simple CI models,
numerous well-behaved GS-CI models can be presented.
February 27, 1997
Dimitris Bertsimas
Sloan School, MIT
Optimization of multiclass queueing networks via infinite
linear programming and singular perturbation methods
We propose methods for optimization of multiclass queueing networks
that model manufacturing systems. We combine ideas from optimization and
partial differential equations.
The first approach aims to explore the dynamic character of the
problem by considering the fluid model of the queueing network. We
propose an algorithm that solves the fluid control problem based on
infinite linear programming. Our algorithm is based on nonlinear
optimization ideas, and solves large-scale problems (50-station
problems with several hundred classes) very efficiently.
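To make the fluid-LP idea concrete, here is a toy discretization (entirely my own sketch with hypothetical data; the algorithm in the talk works with the infinite-dimensional LP and far larger instances): one station, two fluid classes, control u_i(t) = fraction of server effort given to class i, minimizing cumulative holding cost.

    import numpy as np
    from scipy.optimize import linprog

    # Fluid dynamics: dq_i/dt = lam_i - mu_i * u_i(t), u_1 + u_2 <= 1, q >= 0;
    # minimize the integral of cost . q over the horizon.
    lam = np.array([0.3, 0.3])
    mu = np.array([1.0, 2.0])
    cost = np.array([1.0, 1.0])
    q0 = np.array([5.0, 5.0])
    N, dt = 50, 0.5                     # time grid: N steps of length dt

    nu, nq = 2 * N, 2 * N               # u_(i,k), k=0..N-1;  q_(i,k), k=1..N
    iu = lambda i, k: 2 * k + i         # variable index of u_(i,k)
    iq = lambda i, k: nu + 2 * (k - 1) + i

    c = np.zeros(nu + nq)
    for k in range(1, N + 1):
        for i in range(2):
            c[iq(i, k)] = cost[i] * dt  # objective: sum_k cost_i * q_i(k) * dt

    A_eq, b_eq = [], []                 # dynamics as equality constraints
    for k in range(1, N + 1):
        for i in range(2):
            row = np.zeros(nu + nq)
            row[iq(i, k)] = 1.0
            row[iu(i, k - 1)] = mu[i] * dt
            rhs = lam[i] * dt
            if k == 1:
                rhs += q0[i]            # known initial fluid level
            else:
                row[iq(i, k - 1)] = -1.0
            A_eq.append(row); b_eq.append(rhs)

    A_ub, b_ub = [], []                 # server effort constraint per step
    for k in range(N):
        row = np.zeros(nu + nq)
        row[iu(0, k)] = row[iu(1, k)] = 1.0
        A_ub.append(row); b_ub.append(1.0)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * nu + [(0, None)] * nq)
    print(res.fun)                      # optimal fluid holding cost on the grid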
The second approach aims to shed light on the question of how
stochasticity affects the character of optimal policies. We use
singular perturbation techniques from the theory of partial
differential equations to obtain a series of optimization problems,
the first of which is the fluid optimal control problem mentioned in
the previous paragraph. The second order problem provides a
correction to the optimal fluid solution. This second order problem
has strong ties with the optimal control of Brownian multiclass
stochastic networks. We solve the problem explicitly in many examples
and we see that the singular perturbation approach leads to insightful
new qualitative behavior. In particular, we obtain explicit results
on how variability in the system affects the character of the optimal
policy.
March 5, 1997
Paul Glasserman
Columbia Business School
Importance Sampling for Rare Event Simulation: Good News and
Bad News
Precise estimation of rare event probabilities by simulation
can be difficult: the computational burden frequently grows
exponentially in the rarity of the event.
Importance sampling --- based on applying a change of measure
to make a rare event less rare --- can improve efficiency
by orders of magnitude. But finding the right change of
measure can be difficult. Through a variety of examples in
queueing and other contexts, a general strategy has emerged:
find the most likely path to a rare event and apply a change
of measure to follow this path. The most likely path is found
through large deviations calculations.
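As a minimal instance of this strategy (my own sketch with hypothetical parameters): to estimate $P(S_n \geq na)$ for i.i.d. standard normal summands, the most likely path has slope $a$, and the corresponding change of measure shifts each increment's mean to $a$, with an explicit likelihood ratio.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)

    n, a, m = 100, 0.5, 10_000    # estimate P(S_n >= n*a), N(0,1) increments

    # Most likely path: the straight line of slope a; the exponential tilt
    # with parameter theta = a turns each N(0,1) increment into N(a,1).
    x = rng.normal(a, 1.0, size=(m, n))
    s = x.sum(axis=1)
    # dP/dP_theta = exp(-theta * S_n + n * theta**2 / 2) with theta = a.
    lr = np.exp(-a * s + n * a * a / 2)
    est = np.mean((s >= n * a) * lr)
    print(est, norm.sf(a * np.sqrt(n)))   # exact: P(N(0,1) >= a*sqrt(n))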
The first part of this talk reviews positive results that
support this strategy and examples of its potential for
dramatic variance reduction. The second part shows, however,
that the same approach can be disastrous even in very
simple examples. For each negative example, we propose a simple
modification that produces an asymptotically optimal estimator.
March 13, 1997
Hayriye Ayhan
Georgia Tech
On the Time-Dependent Occupancy and Backlog Distributions for the
$GI/G/\infty$ Queue
An examination of sample path dynamics allows a straightforward development
of integral equations having solutions that give time-dependent occupancy
and backlog distributions (conditioned on the time of the first arrival) for
the $GI/G/\infty$ queue. These integral equations are amenable to numerical
evaluation and can be generalized to characterize the $GI^X/G/\infty$ queue.
Two examples are given that illustrate the results.
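As a sanity check for such numerics, the Poisson-arrival special case is explicit (a standard background fact, not the paper's integral-equation method): for the $M/G/\infty$ queue starting empty, the occupancy at time $t$ is Poisson with mean $\lambda \int_0^t (1-G(u))\,du$. A sketch, with hypothetical parameters:

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import poisson

    lam = 2.0                            # Poisson arrival rate (hypothetical)
    G = lambda u: 1.0 - np.exp(-u)       # service-time c.d.f. (Exp, mean 1)

    def occupancy_pmf(t, kmax=10):
        # M/G/infty starting empty: the number in system at time t is
        # Poisson with mean lam * integral_0^t (1 - G(u)) du.
        mean = lam * quad(lambda u: 1.0 - G(u), 0.0, t)[0]
        return poisson.pmf(np.arange(kmax + 1), mean)

    print(occupancy_pmf(1.0)[:5])        # P(N(1) = 0), ..., P(N(1) = 4)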
April 17, 1997
Andrew Nobel
University of North Carolina, Chapel Hill
Adaptive Model Selection Using Empirical Complexities
We propose and analyze an adaptive model selection procedure for
multivariate classification, which is based on complexity penalized
empirical risk.
The procedure divides the available data into two parts.
The first is used to select an empirical cover of each model class.
The second is used to select from each cover a candidate rule with the
smallest number of misclassifications. The final estimate is chosen
from the list of candidates in order to minimize the sum of class
complexity and empirical probability of error.
A distinguishing feature of the approach is that the complexity of
each model class is assessed empirically, based on the size of its
empirical cover.
April 24, 1997
Robert Adler
University of North Carolina, Chapel Hill
and Technion -- Israel Institute of Technology
An Introduction to Superprocesses
A superprocess is a measure-valued stochastic process used
for modelling, among other things, infinite-density systems of particles
undergoing random motion and random branching. Superprocesses can be studied
via the general theory of Markov processes, via stochastic partial
differential equations, or via martingale problems.
In this talk I shall try to provide an introduction to superprocesses for
the uninitiated, describing their basic structure, some basic results,
and some interesting open questions.
The talk will be followed by a 15 minute movie for anyone who wishes to
stay.
May 1, 1997
Alex Koldobsky
University of Texas at San Antonio
More on Schoenberg's problem on positive definite functions
In 1938, Schoenberg posed the following problem:
for which $p>0$ is the function $\exp(-\|x\|_q^p)$
positive definite? The solution was completed in 1991,
and since then there have appeared a few more proofs all
of which were quite technical. We present a new proof which
significantly generalizes the solution and, in a certain sense,
clears things up. This proof is based on extending the Lévy
representation of norms to the case of negative exponents. We also
show the connections between Schoenberg's problem and isotropic
random vectors, and apply our results to inequalities of correlation
type for stable vectors.
May 8, 1997
Bok Sik Yoon
Hong-Ik University, Seoul, Korea & Georgia Tech
QN-GPH Method for Sojourn Time Distributions in General Queueing
Networks
We introduce the QN-GPH method to compute the sojourn time
distributions in non-product form open queueing networks. QN-GPH is
based on GPH semi-Markov chain modelling for the location process of a
typical customer. To derive the method, the GPH distribution, the GPH/GPH/1
queue, and GPH semi-Markov chains are briefly explained, and an apparently
efficient method for computing the transition function and first passage
time distributions in the GPH semi-Markov chain is developed. Numerical
examples in the area of telecommunications are given to demonstrate the
accuracy of the method. The QN-GPH method appears to be a computationally
affordable tool for delay analysis in various manufacturing systems and in
computer and communication systems.
May 15, 1997
Jan Rosinski
University of Tennessee, Knoxville
Problems of unitary representations arising in the study of stable
processes
The study of different classes of stable processes, such as
stationary, self-similar, stationary-increment, isotropic, etc., leads
to the problem of obtaining explicit forms for unitary representations
on $L^p$ spaces, which can be used for a classification of stable
processes. This approach is necessitated by the lack of a satisfactory
spectral theorem when $p < 2$. The talk will survey some results in this
area and present some open problems.
May 22, 1997
Robert Cooper
Florida Atlantic University
Polling Models and Vacation Models in Queueing Theory:
Some Interesting and Surprising Stuff
A polling model is used to represent a system of multiple queues that are
attended by a single server that switches from queue to queue in some
prescribed manner. These models have many important applications, such as
performance analysis of computer-communication networks and manufacturing
systems, and they tend to be quite complicated. A vacation model describes
a single-server queue in which the server can be unavailable for work
(away on "vacation") even though customers are waiting. Some vacation
models exhibit a "decomposition," in which the effects of the vacations can
be separated from the effects of the stochastic variability of the arrival
times and the service times. In a polling model, the time that the server
spends away from any particular queue, serving the other queues or switching
among them, can be viewed as a vacation from that queue. Adoption of this
viewpoint greatly simplifies the analysis of polling models.
Recently, it has been found that polling models themselves enjoy an
analogous decomposition with respect to the server switchover times
(or, in the manufacturing context, setup times), but for apparently
different reasons. Furthermore, it has recently been discovered that
some polling models exhibit counterintuitive behavior: when switchover
times increase, waiting times decrease; or, equivalently, in the parlance
of manufacturing, WIP (work in process) can be decreased by artificially
increasing the setup times.
In this talk we give an overview of polling and vacation models, including
some historical context. Also, using decomposition we "explain" the
counterintuitive behavior, and identify it as a hidden example of the
well-known renewal (length-biasing) paradox.
The talk will emphasize conceptual arguments rather than mathematical
detail, and should be of interest to a general audience.
Unless otherwise noted, the seminar meets Thursdays at 3 PM in
Skiles, Room 140.
For further information, contact Jim Dai (dai@isye.gatech.edu)
or Richard Serfozo (rserfozo@isye.gatech.edu).
May 29, 1997
Takis Konstantopoulos
University of Texas, Austin
Distributional Approximations of Processes, Queues and Networks
under Long-Range Dependence Assumptions
In this talk we discuss the issue of modeling and analysis of
stochastic systems under the assumption that the inputs possess
long-range dependence. The hypothesis is based on experimental
observations in high-speed communication networks that have motivated a
large body of research in recent years. After briefly reviewing
typical experiments, models, and theoretical results on performance,
we present a detailed limit theorem for a class of traffic processes
possessing a weak regenerative structure with infinite variance cycle
times and "burstiness constrained" cycles. The distribution of the
approximating process is characterized and found to be of Lévy type
with a stable marginal distribution whose index is the ratio of a
parameter characterizing the tail of cycle times and a parameter
representing the asymptotic growth rate of traffic processes. We also
discuss queueing analysis for Lévy networks. Finally, we comment on
the matching of distributions of both arrivals and queues with those
observed in practice.
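A quick sketch of the kind of input in question (my own, with hypothetical parameters): an on/off source whose cycle lengths have infinite variance, e.g. Pareto with index between 1 and 2, produces the long-range dependent traffic the limit theorem addresses.

    import numpy as np

    rng = np.random.default_rng(8)

    def onoff_traffic(T, alpha=1.5, rate=1.0):
        # Alternate "on" and "off" periods with Pareto(alpha) lengths; for
        # 1 < alpha < 2 the cycle lengths have finite mean but infinite
        # variance, which produces long-range dependent traffic.
        traffic = np.zeros(T)
        t, on = 0.0, True
        while t < T:
            d = rng.pareto(alpha) + 1.0          # period length (>= 1)
            if on:
                lo, hi = int(t), int(min(t + d, T))
                traffic[lo:hi] += rate           # crude unit-time binning
            t += d
            on = not on
        return traffic

    x = onoff_traffic(100_000)
    # Long-range dependence: autocorrelations of the binned traffic decay
    # like a power law in the lag rather than exponentially.
    for lag in (1, 10, 100, 1000):
        print(lag, np.corrcoef(x[:-lag], x[lag:])[0, 1])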
June 5, 1997
Alan F. Karr
National Institute of Statistical Sciences
Does Code Decay?
Developers of large software systems widely believe that these systems _decay_
over time, becoming increasingly hard to change: changes take longer, cost
more, and are more likely to induce faults.
This talk will describe a large, cross-disciplinary, multi-organization study,
now in its first year, meant to define, measure and visualize code decay, to
identify its causes (both structural and organizational), to quantify effects,
and to devise remedies. Emphasis will be on the code itself and its change
history as statistical data, and on tools to describe and visualize changes.
Last updated: May 31, 1997 by J. Hasenbein (johnny@isye.gatech.edu)