On adaptive sampling rules for stochastic recursions. Hashemi, F., Ghosh, S., & Pasupathy, R. In Tolk, A., Diallo, S. Y., Ryzhov, I. O., Yilmaz, L., Buckley, S., & Miller, J. A., editors, Proceedings of the 2014 Winter Simulation Conference, Piscataway, NJ, 2014. Institute of Electrical and Electronics Engineers, Inc.
We consider the problem of finding a zero of an unknown function, or optimizing an unknown function, with only a stochastic simulation that outputs noise-corrupted observations. A convenient paradigm to solve such problems takes a deterministic recursion, e.g., Newton-type or trust-region, and replaces function values and derivatives appearing in the recursion with their sampled counterparts. While such a paradigm is convenient, there is as yet no clear guidance on how much simulation effort should be expended as the resulting recursion evolves through the search space. In this paper, we take the first steps towards answering this question. We propose using a fully sequential Monte Carlo sampling method to adaptively decide how much to sample at each point visited by the stochastic recursion. The termination criterion for such sampling is based on a certain relative width confidence interval constructed to ensure that the resulting iterates are consistent, and efficient in a rigorous (Monte Carlo canonical) sense. The methods presented here are adaptive in the sense that they "learn" to sample according to the algorithm trajectory. In this sense, our methods should be seen as refining recent methods in a similar context that use a predetermined sequence of sample sizes.
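As a rough illustration of the kind of stopping rule the abstract describes (not the authors' algorithm; the function name, tolerance, and z-based interval are assumptions for this sketch), a sequential sampler can keep drawing noisy observations until a confidence interval's half-width falls below a fraction of the absolute sample mean:

```python
import math
import random

def sample_until_relative_width(simulate, epsilon=0.1, z=1.96, n0=10, n_max=100_000):
    """Draw observations from `simulate` (a zero-argument callable returning one
    noisy observation) until the half-width of a z-based confidence interval is
    at most epsilon times the absolute sample mean, or n_max is reached.
    Returns the sample mean and the number of observations used."""
    obs = [simulate() for _ in range(n0)]
    while len(obs) < n_max:
        n = len(obs)
        mean = sum(obs) / n
        var = sum((x - mean) ** 2 for x in obs) / (n - 1)
        half_width = z * math.sqrt(var / n)
        if half_width <= epsilon * abs(mean):  # relative-width stopping criterion
            break
        obs.append(simulate())
    return sum(obs) / len(obs), len(obs)

# Example: estimate the mean of a noisy simulation output centered at 2.0.
random.seed(0)
mean_hat, n_used = sample_until_relative_width(lambda: 2.0 + random.gauss(0, 0.5))
```

The point of an adaptive rule like this is that the sample size is determined by the observed noise at the current iterate rather than by a schedule fixed in advance.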
@inproceedings{2014hasghopasWSC,
	author      =   {F. Hashemi and S. Ghosh and R. Pasupathy},
	title       =   {On adaptive sampling rules for stochastic recursions},
	booktitle   =   {Proceedings of the 2014 Winter Simulation Conference},
	publisher   =   {Institute of Electrical and Electronics Engineers, Inc.},
	address     =   {Piscataway, NJ},
	editor      =   {A. Tolk and S. Y. Diallo and I. O. Ryzhov and L. Yilmaz and S. Buckley and J. A. Miller},
	pages       =   {},
	year        =   {2014},
	url = {},
	keywords = {},
	abstract = {We consider the problem of finding a zero of an unknown function, or optimizing an unknown function,
with only a stochastic simulation that outputs noise-corrupted observations. A convenient paradigm to solve
such problems takes a deterministic recursion, e.g., Newton-type or trust-region, and replaces function
values and derivatives appearing in the recursion with their sampled counterparts. While such a paradigm
is convenient, there is as yet no clear guidance on how much simulation effort should be expended as the
resulting recursion evolves through the search space. In this paper, we take the first steps towards answering
this question. We propose using a fully sequential Monte Carlo sampling method to adaptively decide
how much to sample at each point visited by the stochastic recursion. The termination criterion for such
sampling is based on a certain relative width confidence interval constructed to ensure that the resulting
iterates are consistent, and efficient in a rigorous (Monte Carlo canonical) sense. The methods presented
here are adaptive in the sense that they ``learn'' to sample according to the algorithm trajectory. In this
sense, our methods should be seen as refining recent methods in a similar context that use a predetermined
sequence of sample sizes.}}
	%bibbase_note = {<span style="color: green">Winner, WSC Student Paper Competition.</span>}}

%------ 2013 ------------------
