edoc-Server
Open-access publication server of Humboldt-Universität
Stochastic Programming E-print Series (SPEPS), Volume 2006
2006-12-18 | Book | DOI: 10.18452/8372
On Rates of Convergence for Stochastic Optimization Problems Under Non-I.I.D. Sampling
Homem-de-Mello, Tito
In this paper we discuss the issue of solving stochastic optimization problems by means of sample average approximations. Our focus is on rates of convergence of estimators of optimal solutions and optimal values with respect to the sample size. This is a well-studied problem when the samples are independent and identically distributed (i.e., when standard Monte Carlo is used); here, we study the case where that assumption is dropped. Broadly speaking, our results show that, under appropriate assumptions, the rates of convergence for pointwise estimators under a sampling scheme carry over to the optimization case, in the sense that convergence of approximating optimal solutions and optimal values to their true counterparts has the same rates as in pointwise estimation.

Our motivation for the study arises from two types of sampling methods that have been widely used in the Statistics literature. One is Latin Hypercube Sampling (LHS), a stratified sampling method originally proposed in the seventies by McKay, Beckman, and Conover (1979). The other is the class of quasi-Monte Carlo (QMC) methods, which have become popular especially after the work of Niederreiter (1992). The advantage of such methods is that they typically yield pointwise estimators which not only have lower variance than standard Monte Carlo but also possess better rates of convergence. Thus, it is important to study the use of these techniques in sampling-based optimization. The novelty of our work arises from the fact that, while there has been some work on the use of variance reduction techniques and QMC methods in stochastic optimization, none of the existing work, to the best of our knowledge, has provided a theoretical study on the effect of these techniques on rates of convergence for the optimization problem. We present numerical results for some two-stage stochastic programs from the literature to illustrate the discussed ideas.
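The abstract's central comparison, sample average approximation (SAA) under standard Monte Carlo versus Latin Hypercube Sampling, can be illustrated with a small numerical sketch. The Python snippet below is not taken from the paper: the newsvendor-style objective, the exponential demand model, and all parameter values are illustrative assumptions, chosen only to show how an LHS-based SAA solution can concentrate more tightly around the true optimizer than its plain Monte Carlo counterpart at the same sample size.

# A minimal, self-contained sketch (not from the paper): sample average
# approximation (SAA) of a newsvendor-style problem, comparing plain Monte
# Carlo sampling with Latin Hypercube Sampling (LHS). All problem data
# (costs, exponential demand with mean 10) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
h, b = 1.0, 4.0                                  # overage / underage costs (assumed)
demand_ppf = lambda u: -10.0 * np.log1p(-u)      # inverse CDF of Exp(mean 10) demand

def true_solution():
    # The minimizer of E[h*(x-D)^+ + b*(D-x)^+] is the b/(h+b) quantile of D.
    return demand_ppf(b / (h + b))

def saa_solution(u):
    # Solve the SAA problem built from uniform samples u via the inverse CDF:
    # its minimizer is the empirical b/(h+b) quantile of the sampled demands.
    d = np.sort(demand_ppf(u))
    k = int(np.ceil(b / (h + b) * len(d))) - 1
    return d[k]

def mc_uniforms(n):
    return rng.random(n)                         # i.i.d. uniform draws

def lhs_uniforms(n):
    # One uniform draw per stratum [i/n, (i+1)/n), in random order.
    return rng.permutation((np.arange(n) + rng.random(n)) / n)

n, reps = 200, 500
x_star = true_solution()
err_mc  = [abs(saa_solution(mc_uniforms(n))  - x_star) for _ in range(reps)]
err_lhs = [abs(saa_solution(lhs_uniforms(n)) - x_star) for _ in range(reps)]
print("mean |x_n - x*|, Monte Carlo:", np.mean(err_mc))
print("mean |x_n - x*|, LHS        :", np.mean(err_lhs))

Under these assumptions the LHS-based estimator typically shows a markedly smaller mean error than plain Monte Carlo, which mirrors, in miniature, the pointwise-versus-optimization carry-over of convergence behavior discussed in the abstract.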
Files in this item
23.pdf — Adobe PDF — 319.1 KB
MD5: 37fa3375502665490abe5b1b3c1cd995
In Copyright
Details
DOI
10.18452/8372
Permanent URL
https://doi.org/10.18452/8372