Parallel stochastic optimization based on descent algorithms
This study addresses the stochastic optimization of a function that is unknown in closed form and can only be estimated from measurements or simulations. We consider parallel implementations of a class of stochastic optimization methods that consist of the iterative application of a descent algorithm to a sequence of approximation functions converging, in some sense, to the function of interest. After discussing classical parallel implementation modes (Jacobi, Gauss-Seidel, random, Gauss-Southwell), we devise effort-saving implementation modes in which the pace at which the descent algorithm is applied along individual coordinates is coordinated with the evolution of the estimated accuracy of the convergent function sequence. We show that this approach can be regarded as a Gauss-Southwell implementation of the initial method in an augmented space. As an example of application, we study the distributed optimization of stochastic networks using a scaled gradient projection algorithm with approximate line search, for which asymptotic properties are derived.
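As a rough illustration of the kind of scheme the abstract describes, the sketch below pairs a Gauss-Southwell coordinate rule with sample-average approximations whose accuracy improves over the iterations, so the descent algorithm is always applied to the current, progressively more accurate approximation of the unknown function. The toy objective, sample-size schedule, and step size are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimated_grad(x, n_samples):
    """Monte-Carlo gradient estimate of an unknown objective.

    Hypothetical stand-in for a converging sequence of approximation
    functions: averaging more samples yields a more accurate
    approximation of the true objective f(x) = E[F(x, xi)].
    """
    # Toy objective: F(x, xi) = ||x - xi||^2 with xi ~ N(mu, I),
    # so the true minimizer of f is mu.
    mu = np.array([1.0, -2.0, 0.5])
    xi = rng.normal(loc=mu, scale=1.0, size=(n_samples, x.size))
    grads = 2.0 * (x - xi)        # per-sample gradients
    return grads.mean(axis=0)     # averaged gradient estimate

def gauss_southwell_descent(x0, iters=200, step=0.05):
    """Coordinate descent on a sequence of tightening approximations.

    Gauss-Southwell rule: at each iteration, update only the coordinate
    with the largest estimated partial derivative. The sample size grows
    with the iteration count, mimicking approximation functions that
    converge to the function of interest.
    """
    x = x0.astype(float).copy()
    for k in range(1, iters + 1):
        g = estimated_grad(x, n_samples=10 * k)  # accuracy improves with k
        i = int(np.argmax(np.abs(g)))            # most promising coordinate
        x[i] -= step * g[i]                      # descent step on that coordinate
    return x

print(gauss_southwell_descent(np.zeros(3)))  # should approach [1.0, -2.0, 0.5]
```

The effort-saving modes of the paper refine this idea further by coordinating per-coordinate update rates with the estimated accuracy of the approximation sequence, rather than using a single global sample schedule as in this sketch.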