We study a multi-product newsvendor model with customer-driven demand substitution, where each product, once out of stock, can be proportionally substituted by the others. This model has been widely studied in the literature; however, due to its nonconvexity and intractability, only very limited analytical properties have been discovered and few efficient solution approaches have been developed. This paper first completely characterizes the optimal order policy when the demand is known and recasts the model as a discrete submodular maximization problem. When the demand is stochastic, we reformulate this nonconvex model as a two-stage stochastic integer program, where the recourse decision variables are mixed-integer. We further study a tight upper bound via the nonanticipativity dual, which is proven to be close to the true optimum and can be computed as a linear program. Finally, numerical studies demonstrate the effectiveness of the proposed algorithms.
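To give a rough feel for the substitution dynamics described in the abstract, the sketch below estimates the expected profit of an order vector by averaging over demand scenarios. The proportional-substitution rule, function names, and cost structure here are simplifying assumptions for illustration, not the paper's exact model.

```python
def expected_profit(q, scenarios, sub, price, cost):
    """Estimate expected profit of order vector q over demand scenarios.

    q, price, cost: lists of length n (order quantity, unit price, unit cost)
    scenarios: list of demand vectors (each of length n), equally likely
    sub[j][i]: fraction of product j's unmet demand that spills to product i
    (an illustrative proportional-substitution rule, a simplification)
    """
    n, total = len(q), 0.0
    for d in scenarios:
        primary = [min(q[i], d[i]) for i in range(n)]    # first-choice sales
        leftover = [q[i] - primary[i] for i in range(n)]  # remaining stock
        unmet = [d[i] - primary[i] for i in range(n)]     # unsatisfied demand
        # demand redirected to substitutes, then served from leftover stock
        spill = [sum(unmet[j] * sub[j][i] for j in range(n)) for i in range(n)]
        secondary = [min(leftover[i], spill[i]) for i in range(n)]
        total += sum(price[i] * (primary[i] + secondary[i]) - cost[i] * q[i]
                     for i in range(n))
    return total / len(scenarios)
```

Nonconvexity in q arises exactly because of the substitution terms: the secondary sales couple the order quantities across products.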
Dr. Weijun Xie graduated from Georgia Tech in 2017 and is Assistant Professor in the Department of Industrial and Systems Engineering at Virginia Tech.
Mar. 30th, 2018 (Friday 11am)
- Gauri Joshi (CMU)
Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD
Distributed Stochastic Gradient Descent (SGD), when run synchronously, suffers from delays in waiting for the slowest learners (stragglers). Asynchronous methods can alleviate stragglers, but they cause gradient staleness that can adversely affect convergence. In this work, we present the first theoretical characterization of the speed-up offered by asynchronous methods by analyzing the trade-off between the error in the trained model and the actual training runtime (wall-clock time). The novelty of our work is that our runtime analysis considers random straggler delays, which helps us design and compare distributed SGD algorithms that strike a balance between stragglers and staleness. We also present a new convergence analysis of asynchronous SGD variants without bounded or exponential delay assumptions, and a novel learning rate schedule to compensate for gradient staleness.
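To make the straggler/staleness tension concrete, here is a minimal toy simulation (not the speaker's algorithm or runtime model): a parameter server applies each gradient as soon as some worker finishes, but that gradient was computed at a stale copy of the parameters. The objective, names, and random-delay model are all illustrative assumptions.

```python
import random

def async_sgd_quadratic(n_workers=4, steps=200, lr=0.05, seed=0):
    """Toy asynchronous SGD on f(w) = w^2: each worker holds a stale copy
    of w, and the server applies the worker's gradient on arrival.
    The finishing worker is chosen uniformly at random to mimic random
    straggler delays (an illustrative simplification).
    """
    rng = random.Random(seed)
    w = 5.0
    stale = [w] * n_workers           # parameter copy each worker last read
    for _ in range(steps):
        k = rng.randrange(n_workers)  # worker that finishes next
        g = 2.0 * stale[k]            # gradient of w^2 at the stale copy
        w -= lr * g                   # server update with a stale gradient
        stale[k] = w                  # worker reads the fresh parameters
    return w
```

With a small enough learning rate the iterates still approach the minimizer despite staleness; increasing `lr` or `n_workers` (and hence typical staleness) in this toy eventually destabilizes the updates, which is the trade-off the talk quantifies.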
Gauri Joshi has been an assistant professor in the ECE department at Carnegie Mellon University since September 2017. Prior to that, she worked as a Research Staff Member at the IBM T. J. Watson Research Center. Gauri completed her Ph.D. at MIT EECS in June 2016. She also received her B.Tech and M.Tech in Electrical Engineering from the Indian Institute of Technology (IIT) Bombay in 2010. Her awards and honors include the Best Thesis Prize in Computer Science at MIT (2012), the Institute Gold Medal of IIT Bombay (2010), the Claude Shannon Research Assistantship (2015-16), and the Schlumberger Faculty for the Future fellowship (2011-2015).
Feb. 13th, 2018 (Tuesday 11am)
- Roy Schwartz (Technion)
Fast and Simple Algorithms for Submodular Maximization
Submodular maximization captures both classical problems in combinatorial optimization and practical applications that arise in other disciplines, e.g., machine learning and data mining. The size of the inputs in these applications is usually very large. Hence, it is interesting to devise approximation algorithms that in addition to providing a provable guarantee are also very fast and simple to use. In this talk, I will present several such examples from recent years.
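A canonical example of a fast and simple algorithm in this area is the classical greedy method for monotone submodular maximization under a cardinality constraint, sketched below on a coverage function, where it achieves the well-known (1 - 1/e)-approximation. The function name and data are illustrative, not taken from the talk.

```python
def greedy_max_coverage(sets, k):
    """Greedy for monotone submodular maximization under a cardinality
    constraint, shown on max-coverage: repeatedly pick the set with the
    largest marginal gain in covered elements.

    sets: dict mapping set name -> set of covered elements
    k: number of sets to select
    """
    chosen, covered = [], set()
    for _ in range(k):
        best, gain = None, 0
        for name, elems in sets.items():
            if name in chosen:
                continue
            g = len(elems - covered)  # marginal coverage gain of this set
            if g > gain:
                best, gain = name, g
        if best is None:              # no remaining set adds anything
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered
```

Each iteration only needs marginal gains, which is what makes lazy and streaming variants of this scheme practical on very large inputs.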
Roy Schwartz is an assistant professor at the Computer Science Department at the Technion. His research focuses on the design and analysis of algorithms, approximation algorithms, combinatorial optimization, the geometry of metric spaces and its applications, submodular optimization, and randomized algorithms.
Jan. 23rd, 2018 (Tuesday 11am)
- David Hemmi (Monash)
High-level modeling and solution of combinatorial stochastic programs
Stochastic programming is concerned with decision making under uncertainty, seeking an optimal policy with respect to a set of possible future scenarios. While the value of stochastic programming is obvious to many practitioners, in reality uncertainty in decision making is often neglected. For deterministic optimisation problems, a coherent chain of modeling and solving exists. Employing standard modeling languages and solvers for stochastic programs is, however, difficult. First, they have (with exceptions) no native support for formulating stochastic programs. Second, solving stochastic programs with standard solvers (e.g. MIP solvers) is often computationally intractable.
I will talk about my research that aims to make stochastic programming more accessible. First, I will discuss modeling deterministic and stochastic programs in the Constraint Programming language MiniZinc - a modeling paradigm that retains the structure of a problem much more strongly than MIP formulations. Second, I will talk about the decomposition algorithms I have been working on to solve combinatorial stochastic programs.
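As a toy illustration of the deterministic-equivalent (extensive) form mentioned above, and of why it does not scale, the sketch below simply enumerates first-stage decisions and averages the recourse cost over scenarios. The names and the example instance are hypothetical; this is neither MiniZinc nor the speaker's decomposition method.

```python
def solve_extensive_form(first_stage, scenarios, recourse_cost):
    """Pick the first-stage decision minimizing expected total cost by
    full enumeration (the 'deterministic equivalent' approach).

    first_stage: iterable of candidate first-stage decisions
    scenarios: list of (probability, data) pairs
    recourse_cost(x, data): second-stage cost of decision x in a scenario
    """
    def expected_cost(x):
        return sum(p * recourse_cost(x, data) for p, data in scenarios)
    return min(first_stage, key=expected_cost)
```

For example, a toy newsvendor with unit cost 1, unit price 2, and two equally likely demands (4 and 8) can be solved by passing `recourse_cost = lambda x, d: x - 2 * min(x, d)`. The enumeration over first-stage decisions is exactly what blows up combinatorially, which is what decomposition methods avoid.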
David is a PhD student at Monash University in Melbourne. Before joining Monash, he completed a Bachelor's degree in Systems Engineering in Switzerland, studied Robotics at the University of Bristol, and worked in various engineering roles. As part of his PhD, David is working on a solver system for stochastic combinatorial optimisation problems modeled in the Constraint Programming language MiniZinc. In the future, he would like to combine his passion for automation with the research he is doing in operations research.
The DOS Optimization Seminars began in Fall 2007 as the Discrete Optimization Seminars (DOS). These seminars were meant to extend the scope of the Integer Programming Seminars organized by Marcos Goycoolea and Ricardo Fukasawa in previous semesters.
The scope of the Discrete Optimization Seminars was defined as "discrete optimization in its broadest sense" (i.e. integer and combinatorial optimization and related areas such as network flows and stochastic, continuous, and global optimization). The extension of the seminars to these related areas led to the elimination of the "Discrete" qualifier from the seminars' title in Spring 2008. The acronym "DOS" was kept simply to preserve the web pages' theme. The current explanation is that DOS is a recursive acronym for "DOS Optimization Seminars", in the spirit of the GNU acronym. This is the old logo of the seminars:
We greatly appreciate Prof. George Nemhauser for his support of our seminar series.
Also, many thanks to the previous and current organizers:
Juan Pablo Vielma,