Stochastic Programming E-print Series (SPEPS)
http://edoc.hu-berlin.de/18452/152
Thu, 17 Aug 2017 13:35:28 GMT
http://edoc.hu-berlin.de/18452/18756
Learning Enabled Optimization: Towards a Fusion of Statistical Learning and Stochastic Optimization
Sen, Suvrajeet; Deng, Yunxiao
Several emerging applications, such as “Analytics of Things” and “Integrative Analytics”, call for a fusion of statistical learning (SL) and stochastic optimization (SO). The Learning Enabled Optimization paradigm fuses concepts from these disciplines in a manner that not only enriches both SL and SO, but also provides a framework supporting rapid model updates and optimization, together with a methodology for rapid model validation, assessment, and selection. Moreover, in many big data/big decisions applications these steps are repetitive, and possible only through a continuous cycle involving data analysis, optimization, and validation. This paper sets forth the foundation for such a framework by introducing several novel concepts, such as statistical optimality, hypothesis tests for model fidelity, the generalization error of stochastic optimization, and, finally, a non-parametric methodology for model selection. These new concepts provide a formal framework for modeling, solving, validating, and reporting solutions for Learning Enabled Optimization (LEO). We illustrate the LEO framework by applying it to an inventory control model in which we use demand data available for ARIMA modeling in the statistical package “R”. We also study a production-marketing coordination model that combines a pedagogical production planning model with an advertising data set intended for sales prediction. Because the approach requires the solution of several stochastic programming instances, some with continuous random variables, we leverage stochastic decomposition (SD) for the fusion of regression and stochastic linear programming. In this sense, the novelty of this paper is its framework rather than a specific new algorithm. Finally, we present the architecture of a software framework to bring about the fusion we envision.
Mon, 31 Jul 2017 00:00:00 GMT
http://edoc.hu-berlin.de/18452/9107
Quantitative Stability Analysis for Minimax Distributionally Robust Risk Optimization
Pichler, Alois; Xu, Huifu
Higle, Julie L.; Römisch, Werner; Sen, Suvrajeet
This paper considers distributionally robust formulations of a two-stage stochastic programming problem with the objective of minimizing a distortion risk of the minimal cost incurred at the second stage. We carry out stability analysis by looking into variations of the ambiguity set under the Wasserstein metric, of the decision spaces at both stages, and of the support set of the random variables. In the risk-neutral case, the stability result is presented with the variation of the ambiguity set measured by generic metrics of ζ-structure, which provides a unified framework for quantitative stability analysis under various metrics, including the total variation metric and the Kantorovich metric. When the ambiguity set is structured as a ζ-ball, we find that the Hausdorff distance between two ζ-balls is bounded by the distance of their centres and the difference of their radii. These findings allow us to strengthen some recent convergence results on distributionally robust optimization in which the centre of the Wasserstein ball is the empirical probability distribution.
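As a small illustrative sketch (not from the paper): for empirical distributions on the real line with uniform weights and equal sample sizes, the type-1 Wasserstein distance used to measure ambiguity-set variation reduces to the mean absolute difference of the sorted samples.

```python
import numpy as np

def wasserstein_1d(x, y):
    """Type-1 Wasserstein distance between two empirical distributions
    with uniform weights and equal sample sizes. In 1-D the optimal
    transport plan matches sorted samples, so
    W1 = (1/n) * sum_i |x_(i) - y_(i)|."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert len(x) == len(y), "this shortcut needs equal sample sizes"
    return float(np.mean(np.abs(x - y)))

# Translating every sample by c moves the empirical distribution by
# exactly c in Wasserstein distance.
x = np.array([0.0, 1.0, 2.0, 3.0])
print(wasserstein_1d(x, x + 0.5))  # 0.5
```

A Wasserstein ambiguity ball of radius r around an empirical distribution is then the set of distributions within distance r of it under this metric.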
Wed, 19 Apr 2017 00:00:00 GMT
http://edoc.hu-berlin.de/18452/9106
Optimal scenario generation and reduction in stochastic programming
Henrion, René; Römisch, Werner
Higle, Julie L.; Römisch, Werner; Sen, Suvrajeet
Scenarios are indispensable ingredients for the numerical solution of stochastic optimization problems. Earlier approaches to optimal scenario generation and reduction are based on stability arguments involving distances of probability measures. In this paper we review those ideas and suggest making use of stability estimates based on distances containing minimal information, i.e., on data appearing in the optimization model only. For linear two-stage stochastic programs we show that the optimal scenario generation problem can be reformulated as a best approximation problem for the expected recourse function and as a generalized semi-infinite program, respectively. The latter model turns out to be convex if either the right-hand sides or the costs are random. We also review the problems of optimal scenario reduction for two-stage models and of optimal scenario generation for chance constrained programs. Finally, we consider scenario generation and reduction for the classical newsvendor problem.
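For readers unfamiliar with the newsvendor problem mentioned above, a minimal scenario-based sketch (not from the paper): with underage cost cu and overage cost co, the optimal order quantity is the cu/(cu+co) quantile of demand, here taken from the empirical distribution of the scenarios.

```python
import math
import numpy as np

def newsvendor_order(demand_scenarios, underage_cost, overage_cost):
    """Scenario-based newsvendor: the optimal order quantity is the
    critical-fractile quantile cu/(cu+co) of the demand distribution,
    approximated here by the empirical distribution of the scenarios
    (each scenario equally likely)."""
    scen = np.sort(np.asarray(demand_scenarios, dtype=float))
    ratio = underage_cost / (underage_cost + overage_cost)
    # Smallest scenario q with empirical CDF F(q) >= ratio.
    k = max(math.ceil(ratio * len(scen)) - 1, 0)
    return float(scen[k])

# Demand scenarios 1..100; underage is 4x as costly as overage,
# so the critical fractile is 0.8 and we order the 80th percentile.
demand = np.arange(1.0, 101.0)
print(newsvendor_order(demand, 4.0, 1.0))  # 80.0
```

How many scenarios are needed for this empirical quantile to be a good decision is exactly the kind of question scenario generation and reduction address.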
Wed, 19 Apr 2017 00:00:00 GMT
http://edoc.hu-berlin.de/18452/9105
Scenario Reduction Revisited: Fundamental Limits and Guarantees
Rujeerapaiboon, Napat; Schindler, Kilian; Kuhn, Daniel; Wiesemann, Wolfram
Higle, Julie L.; Römisch, Werner; Sen, Suvrajeet
The goal of scenario reduction is to approximate a given discrete distribution with another discrete distribution that has fewer atoms. We distinguish continuous scenario reduction, where the new atoms may be chosen freely, and discrete scenario reduction, where the new atoms must be chosen from among the existing ones. Using the Wasserstein distance as a measure of proximity between distributions, we identify those n-point distributions on the unit ball that are least susceptible to scenario reduction, i.e., that have maximum Wasserstein distance to their closest m-point distributions for some prescribed m < n. We also provide sharp bounds on the added benefit of continuous over discrete scenario reduction. Finally, to the best of our knowledge, we propose the first polynomial-time constant-factor approximations for both discrete and continuous scenario reduction as well as the first exact exponential-time algorithms for continuous scenario reduction.
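To make discrete scenario reduction concrete, here is a 1-D sketch of the classical greedy "fast forward" heuristic (a standard baseline, not the algorithms proposed in this paper): atoms are kept one at a time, each step adding the atom that most reduces the probability-weighted distance from every atom to its nearest kept atom, an upper bound on the Wasserstein distance to the reduced distribution.

```python
import numpy as np

def fast_forward_selection(atoms, probs, m):
    """Greedy discrete scenario reduction in 1-D: keep m of the n atoms,
    at each step adding the candidate u that minimizes
    sum_j probs[j] * min_{k in kept ∪ {u}} |atoms[j] - atoms[k]|."""
    atoms = np.asarray(atoms, dtype=float)
    probs = np.asarray(probs, dtype=float)
    dist = np.abs(atoms[:, None] - atoms[None, :])  # pairwise transport costs
    selected = []
    for _ in range(m):
        best_u, best_val = None, np.inf
        for u in range(len(atoms)):
            if u in selected:
                continue
            # Distance from each atom to its nearest kept atom if u is added.
            d_min = dist[:, selected + [u]].min(axis=1)
            val = float(probs @ d_min)
            if val < best_val:
                best_u, best_val = u, val
        selected.append(best_u)
    return sorted(atoms[selected].tolist())

# Four equally likely atoms; the isolated atom at 10 is expensive to
# transport, so reducing to two atoms keeps it.
print(fast_forward_selection([0.0, 1.0, 2.0, 10.0], [0.25] * 4, 2))
# [1.0, 10.0]
```

This greedy heuristic carries no optimality guarantee in general, which is precisely the gap the constant-factor approximations in the abstract address.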
Tue, 21 Feb 2017 00:00:00 GMT