# Archived Talks

Nov. 26, 2014 -

**Marcus Raymundo**,
A Note on Complexity of Multistage Stochastic Programs.

*Abstract.*
In Shapiro [2006], estimates of the sample sizes required to solve a multistage stochastic programming problem with a given accuracy by the conditional sample average approximation method were derived. In this paper we construct an example in the multistage setting that shows that these estimates cannot be significantly improved.

*Abstract.*
We present JuMP, an optimization modeling language like AMPL, GAMS, YALMIP, and others, which is embedded in Julia, a recently developed language for technical computing. A key theme of JuMP, and Julia in general, is the ability to achieve performance competitive with C from a language with high-level syntax comparable to Python and MATLAB. We describe JuMP's functionality for linear, mixed-integer, and nonlinear optimization and provide an overview of the broader "JuliaOpt" ecosystem of open-source optimization packages. The tutorial will be interactive, so we urge attendees to bring laptops with Julia preinstalled by following the accompanying instructions here.

Nov. 5, 2014 - INFORMS Practice Talk

**Burak Kocuk**,
Some Polyhedral Results for the DCOPF Problem with Switching.

*Abstract.*
Transmission line switching in power networks has become increasingly important in practice. However, the underlying theory of this problem is not well studied. In this work, we first prove some hardness results. Then, we propose a new formulation based on cycles, which we use to derive valid inequalities and polyhedral results. Finally, numerical results are provided. This is joint work with Santanu Dey and Andy Sun.

**Tugce Isik**,
Processor Sharing in Non-Collaborative Queueing Networks.

*Abstract.*
We study processor sharing (PS) in non-collaborative queueing networks where multiple servers cannot work at the same station. For tandem lines with two stations, two servers, and infinite buffers, we show that either a dedicated or a PS policy maximizes the throughput, and the optimal PS policy can be achieved as a limit of round-robin policies that rotate the servers among the jobs. For systems with more than two stations, we evaluate how the performance of round-robin policies depends on the quantum of time spent at each station and the buffer size. Our numerical results show that the round-robin policies are near-optimal even if the buffers are finite as long as the servers are rotated frequently enough. This is joint work with Sigrún Andradóttir and Hayriye Ayhan.

Oct. 30, 2014 -

**Warren Powell** (Princeton),
Stochastic Optimization Made Easy: Practical Strategies for Planning Under Uncertainty.

*Abstract.*
Uncertainty pervades planning and operations in a variety of settings in transportation and logistics, energy, health and finance. Customer demands, transit delays, weather events, commodity prices, equipment problems, market behavior and natural disasters are all familiar partners. Mathematical programming has given us a powerful and widely used canonical framework for modeling deterministic optimization problems. However, if we introduce uncertainty, the field fragments into a number of competing communities with names such as stochastic programming, dynamic programming, stochastic search, robust optimization, and optimal control. Lost in the jungle of stochastic optimization is the realization that we solve these problems every day. The problem is that there has been a fundamental misunderstanding of how to think about and formulate sequential decision problems under uncertainty. While deterministic optimization problems are defined by solving for decisions, sequential stochastic optimization problems require that we find robust policies. I will identify four fundamental classes of policies which unify the competing communities into a common modeling framework, including any strategy used in practice. Further, I will show using a simple energy storage problem that, depending on the characteristics of the data, any of these four (or a hybrid) may be best on any given problem. His recent tutorial on stochastic optimization in energy for IEEE can be found here.

Oct. 29, 2014 - INFORMS Practice Talk

**Niao He**,
Mirror Prox Algorithm for Multi-Term Composite Minimization and Semi-Separable Problems.

*Abstract.*
This work is inspired by the recent trend in many fields of building composite optimization models with hybrid regularizations and mixed penalties. We present a composite version of the Mirror Prox algorithm for solving convex-concave saddle point problems and monotone variational inequalities of special structure, which allows us to cover saddle point/variational analogues of the usual composite minimization. We demonstrate that the composite Mirror Prox algorithm inherits the favourable efficiency estimate of its prototype, and that the proposed approach can be successfully applied to Lasso-type problems with several penalizing terms and to problems of semi-separable structure considered in alternating direction methods, yielding in both cases methods with O(1/t) complexity bounds. This is joint work with Anatoli Juditsky and Arkadi Nemirovski.

**Mathias Klapp**,
The One-dimensional Dynamic Dispatch Waves Problem.

*Abstract.*
We study same-day delivery (SDD) distribution systems by formulating the dynamic dispatch wave problem (DDWP) that models a depot where orders arrive randomly throughout the day. At each dispatch period (wave), the decision maker decides whether or not to dispatch a vehicle and serve a subset of open orders to minimize operational costs and charges for unattended requests at the end of the day. We begin studying the DDWP with a single vehicle and customer delivery locations on a line; we efficiently solve the deterministic case, provide an optimal a priori solution of a specific class for the stochastic case, and give heuristic policies and dual bounds for the stochastic-dynamic case. This is joint work with Alan Erera and Alejandro Toriello.

Oct. 15, 2014 -

**Alex Shapiro**,
Risk Neutral and Risk Averse Approaches to Multistage Stochastic Programming.

*Abstract.*
In many practical situations one has to make decisions sequentially based on data available at the time of the decision and facing uncertainty of the future. This leads to optimization problems which can be formulated in a framework of multistage stochastic optimization. In this talk we consider risk neutral and risk averse approaches to multistage stochastic programming. We discuss conceptual and computational issues involved in formulation and solving such problems. As an example we give numerical results based on the Stochastic Dual Dynamic Programming method applied to planning of the Brazilian interconnected power system.

Oct. 1, 2014 -

**Chris Ryan** (University of Chicago),
Duality theory via Fourier-Motzkin Elimination.

*Abstract.*
We explore how Fourier-Motzkin elimination, a standard tool in finite-dimensional linear programming, can be used to understand the duality theory of more general optimization problems, including semi-infinite linear programming, convex programming, and conic programming.
This is joint work with Amitabh Basu (Johns Hopkins) and Kipp Martin (University of Chicago).

Sep. 26, 2014 -

**Brian Dandurand**,
Distributed and Coordinated Optimization Motivated by Multidisciplinary Nonconvex Multiobjective Optimization.

*Abstract.*
The needs of distributed solution approaches to decomposable nonconvex multiobjective optimization problems motivate the study of augmented Lagrangian coordination (ALC) methods and Gauss-Seidel methods from single objective optimization. The application of certain scalarization techniques to a nonconvex, vector-valued objective function results in a reformulated single objective problem (RSOP) whose objective function may not preserve certain properties assumed in Gauss-Seidel approaches such as the alternating direction method of multipliers (ADMM). The block coordinate descent method (BCD) is considered as an alternative distributed optimization approach. While BCD requires that the constraints possess a certain decomposable structure, the formulation of the RSOP as a decomposable problem may require the introduction of extra variables and extra nondecomposable constraints. Thus, an ALC approach such as the method of multipliers must be integrated with BCD, and a corresponding analysis of such an integrated approach is warranted.
In this presentation, a brief introduction to these concepts will be provided. Then state-of-the-art results for BCD and ALC methods are provided and compared with those for ADMM. A BCD coordination (BCDCoor) method, consisting of an integration of BCD and ALC methods, is stated as an alternative to ADMM for solving nonconvex RSOPs with nonseparable objective functions. A convergence analysis of solution-multiplier pair sequences generated by BCDCoor requires certain extensions to the state-of-the-art results for BCD and ALC methods. These required extensions are described, and current contributions to this end are discussed briefly.

Sep. 17, 2014 -

**Linwei Xin**,
Optimality Gap of Constant-order Policies Decays Exponentially in the Lead Time for Lost Sales Models.

*Abstract.*
Inventory models with lost sales and large lead times have traditionally been considered intractable, due to the curse of dimensionality which arises from having to track the set of orders placed but not yet received (i.e. pipeline vector). Recently, Goldberg et al. (2012) laid the foundations for a new approach to solving these models, by proving that as the lead time grows large (with the other problem parameters fixed), a simple constant-order policy (proposed earlier by Reiman (2004)) is asymptotically optimal. This was quite surprising, as it is exactly this setting (i.e. large lead times) that was previously believed intractable. However, the bounds proven there are impractical, requiring the lead time to be very large before the constant-order policy becomes nearly optimal, e.g. requiring a lead time which is Omega(epsilon^{-2}) to ensure a (1+epsilon)-approximation guarantee, and involving a massive prefactor. The authors note that numerical experiments of Zipkin (2008) suggest that the constant-order policy performs quite well even for small lead times, and pose closing this gap (thus making the results practical) as an open problem.
In this work, we make significant progress towards resolving this open problem and closing this gap. In particular, for the infinite-horizon variant of the finite-horizon problem considered by Goldberg et al. (2012), we prove that the optimality gap of the same constant-order policy actually converges exponentially fast to zero, i.e. we prove that a lead time which is O(log(epsilon^{-1})) suffices to ensure a (1+epsilon)-approximation guarantee. We demonstrate that the corresponding rate of exponential decay is at least as fast as the exponential rate of convergence of the expected waiting time in a related single-server queue to its steady-state value. We also derive simple and explicit bounds for the optimality gap. For the special case of exponentially distributed demand, we further compute all expressions appearing in our bound in closed form, and numerically evaluate them, demonstrating good performance for a wide range of parameter values. Our main proof technique combines convexity arguments with ideas from queueing theory.
This is joint work with David A. Goldberg.
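
As a quick feel for the policy being analyzed, here is a minimal simulation sketch of a periodic-review lost-sales system under a constant-order policy. The demand distribution (exponential with mean 1) and the cost parameters are illustrative assumptions, not values from the talk.

```python
import random

def constant_order_cost(r, lead_time, horizon, hold=1.0, lost=4.0, seed=0):
    """Average per-period cost of the constant-order policy that orders
    r units every period in a lost-sales system with the given lead time.
    Demand is i.i.d. exponential with mean 1 (an illustrative choice)."""
    rng = random.Random(seed)
    pipeline = [r] * lead_time       # orders placed but not yet delivered
    on_hand = 0.0
    total = 0.0
    for _ in range(horizon):
        pipeline.append(r)           # place this period's order
        on_hand += pipeline.pop(0)   # receive the order placed lead_time ago
        d = rng.expovariate(1.0)
        sales = min(on_hand, d)
        on_hand -= sales
        total += hold * on_hand + lost * (d - sales)  # lost sales penalized
    return total / horizon
```

Sweeping `r` over a grid near the mean demand shows how the policy's cost varies with the order quantity; note that the pipeline vector never needs to be part of a state space here, which is exactly what makes the policy attractive for long lead times.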

Sep. 11, 2014 -

**Kimia Ghobadi** (University of Toronto),
Radiation Therapy Treatment Planning with Continuous Dose Delivery for Head-and-Neck Tumours.

*Abstract.*
In this talk we explore automated inverse treatment planning for Gamma Knife Perfexion to treat head-and-neck tumours. The main goal in designing a treatment plan is to deliver an appropriate amount of dose to tumour structures while avoiding the surrounding healthy tissue as much as possible. In continuous dose delivery treatment planning, the radiation is delivered with a continuous beam along a path inside the tumour instead of the conventional method of using multiple discrete beams.
To obtain a quality treatment plan, we first use a set of approximation algorithms to find the positions of the focal points of the shots inside the tumour volume. We then use these focal points to construct a path that covers the tumour volume. Once the path is selected, a mixed-integer optimization model and its approximated linear model are formulated to find the beam intensity and duration for each point on the path. In this optimization model we minimize the dose spillage to healthy organs while ensuring the tumour receives the prescribed dose. We impose additional clinical and machine constraints, such as maximum dose to healthy organs, maximizing the utility of the available beams, and speed constraints for the radiation unit. We test our approach on clinical cases and observe that the quality of our plans is better than or comparable to that of clinical treatment plans.

*Abstract.*
In the study of the simplex method one is interested in the performance of pivot rules and the number of pivots needed to reach the optimum. Departing from traditional methods, one can take a similar approach in integer programming: separable convex integer programs can be solved via similar "pivoting" or augmentation techniques. The steps for linear programs (LPs) are circuits or edges; for IPs we instead use Graver basis moves. In this talk I will survey several Graver basis pivoting algorithms with excellent complexity results. We will show that with the right pivot rule one can prove good performance: polynomially many augmentation steps are possible if best augmenting steps along Graver basis directions are performed. Using instead augmentation along directions with the best ratio of cost improvement to unit length, we show that for linear objectives the number of augmentation steps is bounded by the number of elements in the Graver basis of the problem matrix, giving strongly polynomial-time algorithms for the solution of N-fold LPs and ILPs.
This is joint work with Raymond Hemmecke (Technische Universität München) and Jon Lee (University of Michigan).
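
To make the augmentation idea concrete, here is a toy sketch of a best-improvement augmentation loop on a trivial instance whose Graver basis is known by hand. The instance and move set are illustrative only; the talk's complexity results concern far more general N-fold problems.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def augment(x, c, moves):
    """Repeatedly apply the best augmenting step x -> x + a*g, where g is a
    move in the kernel of the constraint matrix (so Ax = b is preserved) and
    a is the largest integer step length keeping x nonnegative."""
    x = list(x)
    while True:
        best_gain, best = 0, None
        for g in moves:
            cg = dot(c, g)
            if cg >= 0:
                continue  # not a cost-improving direction
            # largest step a with x + a*g >= 0
            a = min(x[i] // -g[i] for i in range(len(x)) if g[i] < 0)
            if a > 0 and -a * cg > best_gain:
                best_gain, best = -a * cg, (a, g)
        if best is None:
            return x  # no improving step: done
        a, g = best
        x = [xi + a * gi for xi, gi in zip(x, g)]

# Toy instance: min 3*x1 + 2*x2 + x3  s.t.  x1 + x2 + x3 = 4, x >= 0 integer.
# The Graver basis of the 1x3 matrix [1 1 1] is, up to sign, the moves below.
basis = [(1, -1, 0), (-1, 1, 0), (1, 0, -1),
         (-1, 0, 1), (0, 1, -1), (0, -1, 1)]
print(augment([4, 0, 0], [3, 2, 1], basis))
# -> [0, 0, 4]
```

Because every move lies in the kernel of the constraint matrix, feasibility never has to be re-checked inside the loop; only nonnegativity limits the step length.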

*Abstract.*
We prove lower bounds on the black-box oracle complexity of large-scale smooth convex minimization problems. These lower bounds hold for unit balls of normed spaces under a technical smoothing condition, and for an arbitrary smoothness parameter of the objective with respect to this norm. As a consequence, we obtain a unified framework for the complexity of convex optimization on \ell^p-balls, for 2 <= p <= \infty. In particular, we prove that the T-step Conditional Gradient algorithm as applied to minimizing smooth convex functions over the n-dimensional box with T <= n is nearly optimal.
On the other hand, we prove lower bounds for the complexity of convex optimization over \ell^p-balls, for 1<=p<2, by combining a random subspace method with the p=\infty lower bounds. In particular, we establish the complexity of problem classes that contain the sparse recovery problem.
This is joint work with Arkadi Nemirovski.

Apr. 11, 2014 -

**Karen Aardal**,
A Local-search Based Approximation Algorithm for the Transportation Problem with Market Choice.

*Abstract.*
In the transportation problem with market choice we are given a set of markets with demands and a set of facilities with limited capacities in a common metric space. Each market is either accepted or rejected. For each accepted market, its demand has to be fully satisfied by facilities, which incurs transportation cost. For each rejected market, a penalty cost has to be paid. The goal is to decide which markets should be accepted and which should be rejected, such that the sum of transportation and penalty cost is minimized.
The transportation problem with market choice (TPMC) was introduced by Damci-Kurt, Dey, and Küçükyavuz, who show that it is NP-hard and examine its polyhedral structure. Here we derive a constant-factor approximation algorithm based on local search.
This is joint work with Pieter van den Berg and Shanfei Li.
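
A minimal sketch of the local-search idea on a hypothetical instance with two facilities: flip one market's accept/reject status at a time, pricing each candidate set by an exact (brute-force) transportation subproblem. All data and the enumeration-based subproblem solver are illustrative assumptions, not the paper's algorithm or its guarantee.

```python
from itertools import product

def transport_cost(accepted, demand, cap, cost):
    """Exact min-cost transportation for two facilities, by enumerating how
    many units each accepted market draws from facility 0 (tiny data only)."""
    best = None
    ranges = [range(demand[m] + 1) for m in accepted]
    for flows in product(*ranges):
        used0 = sum(flows)
        used1 = sum(demand[m] for m in accepted) - used0
        if used0 <= cap[0] and used1 <= cap[1]:
            c = sum(f * cost[0][m] + (demand[m] - f) * cost[1][m]
                    for f, m in zip(flows, accepted))
            best = c if best is None else min(best, c)
    return best  # None if no feasible shipment exists

def local_search(markets, demand, cap, cost, penalty):
    """Flip-based local search over the set of accepted markets."""
    accepted = set()
    def total(S):
        t = transport_cost(sorted(S), demand, cap, cost)
        return float("inf") if t is None else t + sum(
            penalty[m] for m in markets if m not in S)
    cur = total(accepted)
    improved = True
    while improved:
        improved = False
        for m in markets:
            S = accepted ^ {m}  # accept m if rejected, reject if accepted
            if total(S) < cur:
                accepted, cur, improved = S, total(S), True
    return accepted, cur

print(local_search([0, 1, 2], [2, 1, 2], [3, 3],
                   [[1, 5, 2], [2, 5, 3]], [10, 1, 10]))
# -> ({0, 2}, 8)
```

In the toy run, market 1 is rejected because its cheapest transportation cost exceeds its penalty, while markets 0 and 2 are accepted and split across the two capacitated facilities.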

Apr. 4, 2014 -

**Camilo Ortiz**,
An Inexact Block-decomposition Method for Extra Large-scale Conic Semidefinite Programming.

*Abstract.*
In this paper, we present an inexact block-decomposition (BD) first-order method for solving standard form conic semidefinite programming (SDP) which avoids computations of exact projections onto the manifold defined by the affine constraints and, as a result, is able to handle extra large SDP instances. The method is based on a two-block reformulation of the optimality conditions where the first block corresponds to the complementarity conditions and the second one corresponds to the manifolds consisting of both the primal and dual affine feasibility constraints. While the proximal subproblem corresponding to the first block is solved exactly, the one corresponding to the second block is solved inexactly in order to avoid finding the exact solution of the underlying augmented primal-dual linear system. The error condition required by the BD method naturally suggests the use of a relative error condition in the context of the latter augmented primal-dual linear system. Our implementation uses the conjugate gradient (CG) method applied to a reduced positive definite dual linear system to obtain inexact solutions of the augmented linear system. In addition, we implemented the proposed BD method with the following refinements: an adaptive choice of stepsize for performing an extragradient step; and a new dynamic scaling scheme that uses two scaling factors to balance three inclusions comprising the optimality conditions of the conic SDP. Finally, we present computational results showing the efficiency of our method for solving various extra large SDP instances, several of which cannot be solved by other existing methods, including some with at least two million constraints and/or fifty million non-zero coefficients in the affine constraints.

*Abstract.*
The cut-dominant of a connected graph G is the polyhedron that corresponds to the problem of computing global min-cuts in G. Despite the fact that computing a global min-cut can be done in polynomial time, the geometry of the cut-dominant is far from being understood. We study graphs for which all facets of the corresponding cut-dominant have right-hand side at most a fixed integer k. These graphs form a minor-closed collection. We give a complete list of forbidden minors for k <= 2. This is then applied to the TSP to give a shorter proof of a classic result of Fonlupt and Naddef (Math. Prog., 1992) that characterizes TSP-perfect graphs. This work in progress is joint with Kanstantsin Pashkovich (Brussels) and Michele Conforti (Padova).

Mar. 12, 2014 -

**Deepak Rajan**,
Strong Formulations in Electric Power Generation: A Polyhedral Study of the Unit Commitment Problem.

*Abstract.*
In this talk, we will illustrate the importance of strong formulations in Mixed-Integer Programming by focusing on recent polyhedral research in Electric Power generation. We will study the Unit Commitment problem, in which the goal is to minimize the operational cost of power generators over a finite horizon (typically a day) while meeting demand and satisfying generator physical constraints. In particular, we consider a relaxation involving two generator constraints: up/down time constraints and ramping constraints. We present several classes of new facet-defining inequalities and describe the convex hull for special cases of the relaxation considered. Computational experiments using our inequalities in a branch-and-cut algorithm confirm that stronger formulations can decrease computation times significantly.

*Abstract.*
Cutting planes incorporated in a branch-and-bound framework constitute the dominant solution approach for (mixed-)integer optimization problems. One important family of cutting planes is the family of split cuts. A computational study by Balas and Saxena indicates that the first closure associated with the family of split inequalities is a very good approximation of the convex hull of feasible solutions. It is, however, NP-hard to optimize a linear function over the split closure, so achieving these results is computationally expensive. A special case of the split cuts, which can trivially be generated, is the family of GMI inequalities obtained from optimal basic feasible solutions. The computational effectiveness of these inequalities is, however, much more modest (Bixby, Gu, Rothberg, and Wunderling). The discrepancy between the potential effectiveness of GMI/split inequalities indicated by the study of Balas and Saxena and the results realized so far by generating such inequalities from optimal basic solutions led Cornuejols to suggest that one should look for deep split cuts that can be separated efficiently.
In our work we suggest a heuristic way of generating GMI/split inequalities that is based on information from the structure of the underlying lattice. We present some initial computational findings.
This talk is based on joint work with Frederik von Heymann, Andrea Lodi, Andrea Tramontani, and Laurence Wolsey.

Mar. 5, 2014 -

**Tugce Isik**,
Optimal Control of Non-Collaborative Queueing Systems with Setup Costs.

*Abstract.*
We consider tandem queueing networks with flexible, non-collaborative servers and finite buffers. Servers are able to work at multiple workstations as long as at most one server serves each station. Server reassignments may result in setup costs. For two-station tandem lines with no setup costs, we completely characterize the server assignment policy that maximizes the long-run average throughput. For longer lines, server assignment heuristics are developed assuming the rates of the servers do not depend on the task (i.e., tasks are homogeneous). Numerical results show that the heuristic policy with a primary assignment that keeps servers ordered from slower to faster, as in bucket brigade manufacturing, is near-optimal.
When server reassignments are costly, we show that the optimal policy depends on the magnitude of the setup cost. We characterize the double-threshold server assignment policy that maximizes the long-run average profit in two-station systems when the tasks are homogeneous and the setup costs are constant. For systems with non-homogeneous tasks and non-constant setup costs, we use a homogenized approximation of the system to develop heuristic server assignment policies. Numerical results suggest that for non-homogeneous tasks, the heuristics we propose are near-optimal.
This is joint work with Sigrún Andradóttir and Hayriye Ayhan.

Feb. 26, 2014 -

**Burak Kocuk**,
Approximating the Objective Gradient Using Perceptrons for Constrained Minimization with Application in Drag Reduction.

*Abstract.*
This research is concerned with constrained minimization problems whose objective function does not have a closed-form analytical expression but can be evaluated numerically, which enables the determination of an estimator using random samples. We assume that the set of constraints is well defined and explicitly known, and use perceptrons to estimate the objective function. The new method, which we call the Neural Reduced Gradient Algorithm, approximates the reduced gradient by combining the gradient of the estimated objective function with the exact gradients of the constraints. It is tested on some randomly generated problems as well as some available convex and nonconvex optimization problems. We also provide an interesting real application of the new method in shear stress minimization for drag reduction and compare its performance with that obtained using linear estimators, which are proposed as part of this work.
This is joint work with Kuban Altinel.

Feb. 19, 2014 -

**Laurence Wolsey**,
On the Facets of Continuous Knapsack Constraints.

*Abstract.*
Given a single-constraint mixed-integer program with n integer variables and m bounded continuous variables:
{(x, y) \in R_+^m \times Z_+^n : \sum_{i=1}^{m} x_i + \sum_{j=1}^{n} a_j y_j \ge b, x_i \le u_i, i = 1, ..., m}.
Our aim is to understand when all the nontrivial facets of the convex hull of solutions have 0-1 coefficients on the continuous variables. The special case with n = 1 is the single arc set studied by Magnanti et al., and the case with n = 2 and one unbounded continuous variable is treated by Agra and Constantino.
We show that the number of distinct non-zero coefficients for the continuous variables is bounded above by 2^n - n and that for n = 2 the bound is exactly one.
Our main result is to show that when n = 2 the property holds and we provide a complete description of the convex hull. We then show by example that there are more complicated facets when n \ge 3.
We also show that similar results hold for all m and n when the coefficients of the integer variables are divisible.
This is joint work with Oktay Gunluk and Sanjeeb Dash, and Hande Yaman.

*Abstract.*
We consider a generalization of the classical quadratic assignment problem, where material flows between facilities are uncertain, and only upper and lower bounds are known for each flow. The objective is to find a minmax-regret solution. We present an exact Benders decomposition algorithm based on two mathematical programming formulations and on linearizations of the master problems, as well as a heuristic that uses tabu search within the Benders decomposition framework. We then develop a hybrid Benders decomposition approach that combines the speed of heuristics with the rigor and precision of the exact Benders method. We discuss the results of extensive computational experiments.
This is joint work with Igor Averbakh (University of Toronto).

*Abstract.*
Sparse cutting-planes are often the ones used in mixed-integer programming (MIP) solvers, since they help solve the linear programs encountered during branch-and-bound more efficiently. But how well can we approximate the integer hull using only sparse cutting-planes? To understand this question better, given a polytope P (e.g. the integer hull of a MIP), let P^k be its best approximation using cuts with at most k non-zero coefficients. We consider d(P, P^k) = max_{x in P^k} (min_{y in P} |x - y|) as a measure of the quality of sparse cuts.
In our first result, we present general upper bounds on d(P, P^k) which depend on the number of vertices in the polytope and exhibit three phases as k increases. Our bounds imply that if P has polynomially many vertices, then using half sparsity already approximates it very well. Second, we present a lower bound on d(P, P^k) for random polytopes which shows that the upper bounds are quite tight. Third, we show that for a class of hard packing IPs, sparse cutting-planes do not approximate the integer hull well. Finally, we show that using sparse cutting-planes in extended formulations is at least as good as using them in the original polyhedron, and give an example where the former is actually much better.
Joint work with Santanu Dey and Qianyi Wang.

*Abstract.*
We show that the problem of minimizing a quadratic polynomial with integer coefficients over the integer points in a general two-dimensional rational polyhedron is solvable in time bounded by a polynomial in the input size.
This is joint work with Robert Weismantel.
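
For contrast with the polynomial-time result, a brute-force reference that enumerates the integer points of a small rational polygon is easy to write; the instance below is a made-up example, and the algorithm of the talk is of course far more sophisticated than this.

```python
def min_quadratic_over_integer_polygon(Q, c, halfplanes, box):
    """Brute-force baseline: minimize x'Qx + c'x over the integer points of
    {x in R^2 : a*x1 + b*x2 <= d for each (a, b, d) in halfplanes} inside a
    bounding box. Only usable as a reference on tiny instances."""
    (lo1, hi1), (lo2, hi2) = box
    best = None
    for x1 in range(lo1, hi1 + 1):
        for x2 in range(lo2, hi2 + 1):
            if all(a * x1 + b * x2 <= d for a, b, d in halfplanes):
                val = (Q[0][0] * x1 * x1 + (Q[0][1] + Q[1][0]) * x1 * x2
                       + Q[1][1] * x2 * x2 + c[0] * x1 + c[1] * x2)
                if best is None or val < best[0]:
                    best = (val, (x1, x2))
    return best  # (optimal value, minimizer), or None if infeasible

# min x1^2 + x2^2 - 3*x1 over the triangle x1 >= 0, x2 >= 0, x1 + x2 <= 4
print(min_quadratic_over_integer_polygon(
    [[1, 0], [0, 1]], [-3, 0],
    [(-1, 0, 0), (0, -1, 0), (1, 1, 4)], ((0, 4), (0, 4))))
# -> (-2, (1, 0))
```

Note that the continuous minimizer of this objective is at x1 = 1.5, so the optimal integer point is not obtained by rounding a single coordinate pattern; this is exactly the difficulty a genuine algorithm must address without enumeration.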

Jan. 15, 2014 -

**Sergio Bruno**,
Integer Quadratic Programming in the Plane.

*Abstract.*
Renewable energy has drawn increasing attention in recent years due to technological improvements and environmental demands. Yet its profitability largely depends on generation and price risks. Investment strategies for renewable generation rely on real-options approaches such as postponing, hedging with fixed (forward) contracts, and combining with other sources. We develop a dynamic programming model that includes such options and is solvable by a procedure based on the Stochastic Dual Dynamic Programming (SDDP) method. We extend the framework to the risk-averse setting. The numerical results show that the generated policies are efficient investment strategies.
This is joint work with Shabbir Ahmed and Alexander Shapiro.

*Abstract.*
Integer programming is the (not very appealing or descriptive) name for optimization models and algorithms in which some variables are required to have integer values. Planning and operational problems in energy, finance, health, manufacturing, military, transportation, and in almost any imaginable domain where decisions are made, are formulated and solved using integer programming. For example, most Fortune 500 companies use integer programming in some aspects of their business. Currently available software is capable of solving models with thousands, and sometimes millions, of variables and constraints. We will discuss some integer programming models whose solutions have had big impact in solving important problems, and present recent progress that has made it possible to solve very large instances and to obtain provably good solutions quickly. We will close by speculating on future advances in methodology and applications.

Nov. 13, 2013 -

**Linwei Xin**,
Distributionally Robust Inventory Control When Demand is a Martingale.

*Abstract.*
Recently, there has been considerable interest in taking a more robust approach to the stochastic models which arise in inventory control. In the distributionally robust optimization paradigm, one assumes that the joint distribution (over time) of the sequence of future demands belongs to some set of joint distributions, and solves the min-max problem of computing the control policy which is optimal against a worst-case distribution belonging to this set. Although such models have been analyzed previously, the cost and policy implications of positing different dependency structures remains poorly understood, and there have been recent efforts to develop a deeper understanding (e.g. (Agrawal et al. 2012)).
In this talk, we combine the framework of distributionally robust optimization with the theory of martingales, and consider the setting in which the sequence of future demands is assumed to belong to the family of martingales with given mean and support. We explicitly compute the optimal policy and value, and compare to the analogous setting in which demand is independent across periods (analyzed previously in (Shapiro 2012)). We identify several interesting qualitative differences between these settings, and prove that the cost incurred by an optimal policy in the independence model is always at least as large as the corresponding cost in the martingale model. We also perform an asymptotic analysis (as the number of time periods $T \rightarrow \infty$) to shed light on the interplay between the optimal policy and worst-case distribution, and derive closed-form expressions for the ratio between these costs. Interestingly, for the setting in which the penalties for over-stocking and under-stocking inventory are equivalent and the expected demand is exactly half the maximum support, we find this limiting ratio to be exactly $\frac{1}{2}$.
This is joint work with David A. Goldberg.

Nov. 4, 2013 -

**Álvaro Lorca**,
Adaptive Robust Economic Dispatch with Dynamic Uncertainty Sets for Power Systems with Significant Wind.

*Abstract.*
The benefits of wind power as an environmentally responsible renewable energy resource have led to an increasing penetration of wind energy in today's power systems. This trend has started to reshape the paradigms of power system operations, as dealing with the uncertainty caused by highly intermittent and uncertain wind power becomes a significant issue. Motivated by this, we present a framework using adaptive robust optimization for the economic dispatch of power systems with a high level of wind penetration. We will also discuss solution algorithms and present a practical evaluation method for testing our approach in terms of cost and reliability.
This is joint work with Andy Sun.

Oct. 1, 2013 - INFORMS Practice Talk

**Hyunwoo Park**,
Evolution of Product and Service Provider Networks: An Empirical Analysis and Visualization.

*Abstract.*
We examine how the network structure among service providers, product manufacturers, and platform providers has shaped inter-firm coordination across nearly 1,500 smartphone launches in the mobile ecosystem between 2000 and 2012. Network analysis on a tripartite graph, consisting of product-service-platform triads, reveals dynamic patterns of strategic service alliances and switching between platforms. We complement our analysis with network visualizations and discuss service-level strategic implications.
This is joint work with Rahul Basole.
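
As a rough illustration of the bookkeeping behind such an analysis, the sketch below builds per-provider launch histories from hypothetical (product, service provider, platform) triads and counts platform switches. The data and the switch metric are invented for illustration, not taken from the study.

```python
from collections import defaultdict

# Hypothetical miniature of the dataset described in the talk: each record
# is a (product, service_provider, platform) triad for one smartphone launch,
# listed in chronological order.
launches = [
    ("phone_a", "carrier_1", "platform_x"),
    ("phone_b", "carrier_1", "platform_y"),
    ("phone_c", "carrier_2", "platform_x"),
    ("phone_d", "carrier_1", "platform_x"),
]

def platform_switches(triads):
    """Count, per service provider, how often consecutive launches switch
    platforms (a crude proxy for the switching dynamics in the talk)."""
    history = defaultdict(list)
    for _, provider, platform in triads:
        history[provider].append(platform)
    return {p: sum(1 for a, b in zip(seq, seq[1:]) if a != b)
            for p, seq in history.items()}

print(platform_switches(launches))
# -> {'carrier_1': 2, 'carrier_2': 0}
```

The real analysis works on the tripartite graph itself (degrees, triad membership over time); this sketch only shows how launch records naturally decompose by provider.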

**Weihong Hu**,
Strategic Health Workforce Planning: Modeling, Optimality, and Uncertainty.
[

Abstract]

*Abstract.*
We propose an infinite-horizon linear programming model for workforce planning in a large health system for a single worker class. We prove the optimality of a natural look-ahead policy applicable to any system of this kind, and use real-world data to examine the performance of the policy in more complex systems with additional detail. The results motivate us to further analyze the impact of demand uncertainty.