2023 (2)

An upper bound on the Hausdorff distance between a Pareto set and its discretization in bi-objective convex quadratic optimization.
Ondes, B. E.; and Hunter, S. R.
*Optimization Letters*, 17: 45–74. 2023.
See the abstract for a May 31, 2023 update.


@article{2023ondhun, Year = {2023}, Author = {B. E. Ondes and S. R. Hunter}, Title = {An upper bound on the {Hausdorff} distance between a {Pareto} set and its discretization in bi-objective convex quadratic optimization}, journal = {Optimization Letters}, volume = {17}, pages = {45--74}, doi = {10.1007/s11590-022-01920-7}, url_Link = {https://rdcu.be/cWCJh}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/pre2022ondhun.pdf}, bibbase_note = {<span style="color: green">Expand abstract below for update.</span>}, abstract = {We provide upper bounds on the Hausdorff distances between the efficient set and its discretization in the decision space, and between the Pareto set (also called the Pareto front) and its discretization in the objective space, in the context of bi-objective convex quadratic optimization on a compact feasible set. Our results imply that if $t$ is the dispersion of the sampled points in the discretized feasible set, then the Hausdorff distances in both the decision space and the objective space are $O(\sqrt{t})$ as $t$ decreases to zero. Update on May 31, 2023: There appears to be an error in the statement of Lemma 1. We think the respective upper bounds in the decision and objective spaces should be $t\sqrt{\kappa^*}$ and $h(t\sqrt{\kappa^*})$. The largest effect of this error would appear in Section 3 and, in particular, Lemma 3, where $t$ would be missing the multiplier of $\sqrt{\kappa^*}$ nearly everywhere. All results for $\kappa^*=1$ and the big-O results in the statement of the main theorem are unaffected.}, keywords = {optimization > multi-objective > continuous}}

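For reference, the Hausdorff distance used in the abstract above is the standard one between nonempty compact sets $A$ and $B$ (a generic definition, not notation taken from the paper):

```latex
d_H(A, B) = \max\left\{ \sup_{a \in A} \inf_{b \in B} \|a - b\|,\;
                        \sup_{b \in B} \inf_{a \in A} \|a - b\| \right\}
```

It measures how far either set can stray from the other, so a small $d_H$ between a Pareto set and its discretization means every Pareto point is close to some discretized point and vice versa.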

Properties of several performance indicators for global multi-objective simulation optimization.
Hunter, S. R.; and Ondes, B. E.
In Corlu, C. G.; Hunter, S. R.; Lam, H.; Onggo, B. S.; Shortle, J.; and Biller, B., editor(s), *Proceedings of the 2023 Winter Simulation Conference*, Piscataway, NJ, 2023. IEEE


@inproceedings{2023hunondWSC, Year = {2023}, Author = {S. R. Hunter and B. E. Ondes}, Title = {Properties of several performance indicators for global multi-objective simulation optimization}, Booktitle = {Proceedings of the 2023 Winter Simulation Conference}, Editor = {C. G. Corlu and S. R. Hunter and H. Lam and B. S. Onggo and J. Shortle and B. Biller}, Publisher = {IEEE}, Address = {Piscataway, NJ}, doi = {}, pages = {}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/pre2023hunondWSC.pdf}, abstract = {We discuss the challenges in constructing and analyzing performance indicators for multi-objective simulation optimization (MOSO), and we examine properties of several performance indicators for assessing algorithms designed to solve MOSO problems to global optimality. Our main contribution lies in the definition and analysis of a modified coverage error; the modification to the coverage error enables us to obtain an upper bound that is the sum of deterministic and stochastic error terms. Then, we analyze each error term separately to obtain an overall upper bound on the modified coverage error that is a function of the dispersion of the visited points in the compact feasible set and the sampling error of the objective function values at the visited points. The upper bound provides a foundation for future mathematical analyses that characterize the rate of decay of the modified coverage error.}, keywords = {simulation optimization > multi-objective > continuous}}


2022 (3)

Parallel Adaptive Survivor Selection.
Pei, L.; Nelson, B. L.; and Hunter, S. R.
*Operations Research*. 2022.


@article{2022peinelhun, Year = {2022}, Author = {L. Pei and B. L. Nelson and S. R. Hunter}, Title = {Parallel {A}daptive {S}urvivor {S}election}, journal = {Operations Research}, doi = {10.1287/opre.2022.2343}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/2022peinelhun.pdf}, abstract = {We reconsider the ranking \& selection (R\&S) problem in stochastic simulation optimization in light of high-performance, parallel computing, where we take ``R\&S'' to mean any procedure that simulates all systems (feasible solutions) to provide some statistical guarantee on the selected systems. We argue that when the number of systems is very large, and the parallel processing capability is also substantial, then neither the standard statistical guarantees such as probability of correct selection, nor the usual observation-saving methods such as elimination via paired comparisons or complex budget allocation, serve the experimenter well. As an alternative, we introduce a guarantee on the expected false elimination rate that avoids the curse of multiplicity, and a method to achieve it that is designed to scale computationally with problem size and parallel computing capacity. To facilitate this approach, we present a new mathematical representation, prove small-sample and asymptotic properties, evaluate variations of the method, and demonstrate a specific implementation on a problem with over 1,100,000 systems using only 20 to 80 parallel processors. Although we focus on inference about the best system here, our parallel adaptive survivor selection (PASS) framework supports many other useful definitions of ``good'' systems.}, keywords = {simulation optimization > single-objective > ranking and selection > parallel}}

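As a rough illustration of the survivor-selection idea described in the abstract, here is a toy sketch. It is not the authors' PASS procedure: the function name, the fixed boundary constant, the use of true means as a simulation stand-in, and the benchmark choice are all invented for illustration.

```python
import random

def toy_survivor_selection(means, n_rounds=200, boundary=-25.0, seed=1):
    """Toy survivor selection (illustrative only, not PASS): each system
    accumulates a running sum of noisy observations centered at
    (its mean - best mean); a system is eliminated once its running sum
    drops below a fixed lower boundary."""
    rng = random.Random(seed)
    best = max(means)  # stand-in benchmark; real procedures must estimate this
    survivors = set(range(len(means)))
    sums = [0.0] * len(means)
    for _ in range(n_rounds):
        for i in list(survivors):
            # simulated noisy observation of system i
            sums[i] += (means[i] - best) + rng.gauss(0.0, 1.0)
            if sums[i] < boundary:
                survivors.discard(i)  # a "false elimination" only if i is best
    return survivors

# Systems with means far below the best drift downward and are eliminated.
print(toy_survivor_selection([0.0, -0.2, -3.0]))
```

Because every system is simulated rather than compared pairwise, the work per system is independent of the total number of systems, which is the property that makes this style of procedure attractive for parallel computing.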

Adaptive sampling line search for local stochastic optimization with integer variables.
Ragavan, P. K.; Hunter, S. R.; Pasupathy, R.; and Taaffe, M. R.
*Mathematical Programming*, 196: 775–804. 2022.


@article{2022raghunetal, Year = {2022}, Author = {P. K. Ragavan and S. R. Hunter and R. Pasupathy and M. R. Taaffe}, Title = {Adaptive sampling line search for local stochastic optimization with integer variables}, journal = {Mathematical Programming}, volume = {196}, pages = {775--804}, doi = {10.1007/s10107-021-01667-6}, url_Link = {https://rdcu.be/ctZ5z}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/pre2021raghunetal.pdf}, abstract = {We consider optimization problems with an objective function that is estimable using a Monte Carlo oracle, constraint functions that are known deterministically through a constraint-satisfaction oracle, and integer decision variables. Seeking an appropriately defined local minimum, we propose an iterative adaptive sampling algorithm that, during each iteration, performs a local optimality test using an adaptive statistical procedure, followed by a line search executed along a stochastic descent direction. We prove a number of results. First, the true function values at the iterates generated by the algorithm form an almost-supermartingale process, and the iterates are absorbed with probability one into the set of local minima in finite time. Second, such absorption happens exponentially fast in iteration number and in oracle calls. This result is analogous to non-standard rate guarantees in stochastic continuous optimization contexts involving sharp minima. Third, the oracle complexity of the proposed algorithm increases linearly in the dimensionality of the local neighborhood. As a solver, primarily due to combining line searches that use common random numbers with statistical tests for local optimality, the proposed algorithm is effective on a variety of problems. We illustrate such performance using three problem suites, on problems ranging from 25 to 200 dimensions. }, keywords = {simulation optimization > single-objective > integer-ordered}}


Central Limit Theorems for constructing confidence regions in strictly convex multi-objective simulation optimization.
Hunter, S. R.; and Pasupathy, R.
In Feng, B.; Pedrielli, G.; Peng, Y.; Shashaani, S.; Song, E.; Corlu, C. G.; Lee, L. H.; Chew, E. P.; Roeder, T.; and Lendermann, P., editor(s), *Proceedings of the 2022 Winter Simulation Conference*, pages 3015–3026, Piscataway, NJ, 2022. IEEE


@inproceedings{2022hunpasWSC, Year = {2022}, Author = {S. R. Hunter and R. Pasupathy}, Title = {Central {L}imit {T}heorems for constructing confidence regions in strictly convex multi-objective simulation optimization}, Booktitle = {Proceedings of the 2022 Winter Simulation Conference}, Editor = {B. Feng and G. Pedrielli and Y. Peng and S. Shashaani and E. Song and C. G. Corlu and L. H. Lee and E. P. Chew and T. Roeder and P Lendermann}, Publisher = {IEEE}, Address = {Piscataway, NJ}, doi = {10.1109/WSC57314.2022.10015390}, pages = {3015--3026}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/pre2022hunpasWSC.pdf}, abstract = {We consider the context of multi-objective simulation optimization (MOSO) with strictly convex objectives. We show that under certain types of scalarizations, a $(1-\alpha)$-confidence region on the efficient set can be constructed if the scaled error field (over the scalarization parameter) associated with the estimated efficient set converges weakly to a mean-zero Gaussian process. The main result in this paper proves such a ``Central Limit Theorem.'' A corresponding result on the scaled error field of the image of the efficient set also holds, leading to an analogous confidence region on the Pareto set. The suggested confidence regions are still hypothetical in that they may be infinite-dimensional and therefore not computable, an issue under ongoing investigation.}, keywords = {simulation optimization > multi-objective > continuous}}


2020 (4)

Bi-objective simulation optimization on integer lattices using the epsilon-constraint method in a retrospective approximation framework.
Cooper, K.; Hunter, S. R.; and Nagaraj, K.
*INFORMS Journal on Computing*, 32(4): 1080–1100. 2020.


@article{2020coohunnag, Year = {2020}, Author = {K. Cooper and S. R. Hunter and K. Nagaraj}, Title = {Bi-objective simulation optimization on integer lattices using the epsilon-constraint method in a retrospective approximation framework}, journal = {INFORMS Journal on Computing}, volume = {32}, number = {4}, pages = {1080--1100}, doi = {10.1287/ijoc.2019.0918}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/pre2019coohunnag.pdf}, abstract = {We consider multi-objective simulation optimization (MOSO) problems on integer lattices, that is, nonlinear optimization problems in which multiple simultaneous objective functions can only be observed with stochastic error, e.g., as output from a Monte Carlo simulation model. The solution to a MOSO problem is the efficient set, which is the set of all feasible decision points that map to non-dominated points in the objective space. For problems with two objectives, we propose the R-PERLE algorithm, which stands for Retrospective Partitioned Epsilon-constraint with Relaxed Local Enumeration. R-PERLE is designed for simulation efficiency and provably converges to a local efficient set under appropriate regularity conditions. It uses a retrospective approximation (RA) framework and solves each resulting bi-objective sample-path problem only to an error tolerance commensurate with the sampling error. R-PERLE uses the sub-algorithm RLE to certify it has found a sample-path approximate local efficient set. We also propose R-MinRLE, which is a provably-convergent benchmark algorithm for problems with two or more objectives. R-PERLE performs favorably relative to R-MinRLE and the current state of the art, MO-COMPASS, in our numerical experiments. This work points to a family of RA algorithms for MOSO on integer lattices that employ RLE to certify sample-path approximate local efficient sets, and for which we provide the convergence guarantees.}, keywords = {simulation optimization > multi-objective > integer-ordered}}

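As background, the epsilon-constraint method named in the title scalarizes a bi-objective problem into a family of single-objective problems, one per value of $\epsilon$ (standard form; the symbols $f_1$, $f_2$, and $\mathcal{D}$ here are generic, not taken from the paper):

```latex
\begin{aligned}
\min_{x \in \mathcal{D}} \quad & f_1(x) \\
\text{s.t.} \quad & f_2(x) \le \epsilon
\end{aligned}
```

Sweeping $\epsilon$ over a range of values and solving each constrained problem traces out points on the Pareto front.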

PyMOSO: Software for multi-objective simulation optimization with R-PERLE and R-MinRLE.
Cooper, K.; and Hunter, S. R.
*INFORMS Journal on Computing*, 32(4): 1101–1108. 2020.


@article{2020coohun, Year = {2020}, Author = {K. Cooper and S. R. Hunter}, Title = {{PyMOSO}: {S}oftware for multi-objective simulation optimization with {R-PERLE} and {R-MinRLE}}, journal = {INFORMS Journal on Computing}, volume = {32}, number = {4}, pages = {1101--1108}, doi = {10.1287/ijoc.2019.0902}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/pre2019coohun.pdf}, abstract = {We present the PyMOSO software package for (1) solving multi-objective simulation optimization (MOSO) problems on integer lattices, and (2) implementing and testing new simulation optimization (SO) algorithms. First, for solving MOSO problems on integer lattices, PyMOSO implements R-PERLE, a state-of-the-art algorithm for two objectives, and R-MinRLE, a competitive benchmark algorithm for three or more objectives. Both algorithms employ pseudo-gradients, are designed for sampling efficiency, and return solutions that, under appropriate regularity conditions, provably converge to a local efficient set with probability one as the simulation budget increases. PyMOSO can interface with existing simulation software and can obtain simulation replications in parallel. Second, for implementing and testing new SO algorithms, PyMOSO includes pseudo-random number stream management, implements algorithm testing with independent pseudo-random number streams run in parallel, and computes the performance of algorithms with user-defined metrics. For convenience, we also include an implementation of R-SPLINE for problems with one objective. The PyMOSO source code is available under a permissive open source license.}, keywords = {all software}}


Multi-objective ranking and selection: Optimal sampling laws and tractable approximations via SCORE.
Applegate, E. A.; Feldman, G.; Hunter, S. R.; and Pasupathy, R.
*Journal of Simulation*, 14(1): 21–40. 2020.
Recipient of the KD Tocher Medal, 2019-2020, The Operational Research Society.


@article{2020appfeletal, Year = {2020}, Author = {E. A. Applegate and G. Feldman and S. R. Hunter and R. Pasupathy}, Title = {Multi-objective ranking and selection: {O}ptimal sampling laws and tractable approximations via {SCORE}}, journal = {Journal of Simulation}, volume = {14}, number = {1}, pages = {21--40}, doi = {10.1080/17477778.2019.1633891}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/pre2019appfeletal.pdf}, abstract = {Consider the multi-objective ranking and selection (MORS) problem in which we select the Pareto-optimal set from a finite set of systems evaluated on three or more stochastic objectives. Solving this problem is difficult because we must determine how to allocate a simulation budget among the systems to minimize the probability that any systems are misclassified. Toward determining such a simulation budget allocation, we characterize the exact asymptotically optimal sample allocation that maximizes the misclassification-probability decay rate, and we provide an implementable allocation called MO-SCORE. The MO-SCORE allocation has three salient features: (a) it simultaneously controls the probabilities of misclassification by exclusion and inclusion; (b) it uses a fast dimension-sweep algorithm to identify phantom Pareto systems crucial for computational efficiency; and (c) it models dependence between the objectives. The MO-SCORE allocation is fast and accurate for problems with three objectives or a small number of systems. For problems with four or more objectives and a large number of systems, where modeling dependence has diminishing returns relative to computational speed, we propose independent MO-SCORE (iMO-SCORE). Our numerical experience is extensive and promising: MO-SCORE and iMO-SCORE successfully solve MORS problems involving several thousand systems in three and four objectives.}, keywords = {simulation optimization > multi-objective > ranking and selection}, bibbase_note = {<span style="color: green">Recipient of the KD Tocher Medal, 2019-2020, The Operational Research Society.</span>}}

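For context, the Pareto-optimal set referenced in the abstract rests on the standard dominance relation. For minimization over $d$ objectives $g_1, \dots, g_d$ (generic notation, not the paper's), system $i$ dominates system $j$ if:

```latex
g_k(i) \le g_k(j) \ \text{for all } k \in \{1, \dots, d\},
\quad \text{and} \quad
g_\ell(i) < g_\ell(j) \ \text{for some } \ell.
```

The Pareto-optimal set is the set of systems dominated by no other system; "misclassification by exclusion" leaves a Pareto system out of the estimated set, while "misclassification by inclusion" admits a dominated one.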

Evaluation of bi-PASS for parallel simulation optimization.
Pei, L.; Nelson, B. L.; and Hunter, S. R.
In Bae, K.-H. G.; Feng, B.; Kim, S.; Lazarova-Molnar, S.; Zheng, Z.; Roeder, T.; and Thiesing, R., editor(s), *Proceedings of the 2020 Winter Simulation Conference*, pages 2960–2971, Piscataway, NJ, 2020. IEEE


@inproceedings{2020peinelhunWSC, Year = {2020}, Author = {L. Pei and B. L. Nelson and S. R. Hunter}, Title = {Evaluation of {bi-PASS} for parallel simulation optimization}, Booktitle = {Proceedings of the 2020 Winter Simulation Conference}, Editor = {{K.-H.} G. Bae and B. Feng and S. Kim and S. {Lazarova-Molnar} and Z. Zheng and T. Roeder and R. Thiesing}, Publisher = {IEEE}, Address = {Piscataway, NJ}, doi = {10.1109/WSC48552.2020.9384116}, pages = {2960--2971}, url_Paper = {https://informs-sim.org/wsc20papers/301.pdf}, abstract = {Cheap parallel computing has greatly extended the reach of ranking \& selection (R\&S) for simulation optimization. In this paper we present an evaluation of bi-PASS, a R\&S procedure created specifically for parallel implementation and very large numbers of system designs. We compare bi-PASS to the state-of-the-art Good Selection Procedure and an easy-to-implement subset selection procedure. This is one of the few papers to consider both computational and statistical comparison of parallel R\&S procedures.}, keywords = {simulation optimization > single-objective > ranking and selection > parallel}}


2019 (3)

An introduction to multi-objective simulation optimization.
Hunter, S. R.; Applegate, E. A.; Arora, V.; Chong, B.; Cooper, K.; Rincón-Guevara, O.; and Vivas-Valencia, C.
*ACM Transactions on Modeling and Computer Simulation*, 29(1): 7:1–7:36. January 2019.


@article{2019hunappetal, Year = {2019}, Author = {S. R. Hunter and E. A. Applegate and V. Arora and B. Chong and K. Cooper and O. {Rinc\'on-Guevara} and C. {Vivas-Valencia}}, Title = {An introduction to multi-objective simulation optimization}, journal = {ACM Transactions on Modeling and Computer Simulation}, volume = {29}, number = {1}, doi = {10.1145/3299872}, month = {January}, articleno = {7}, numpages = {36}, pages = {7:1--7:36}, url_Link={https://dl.acm.org/authorize?N676157}, abstract = {The multi-objective simulation optimization (MOSO) problem is a nonlinear multi-objective optimization problem in which multiple simultaneous and conflicting objective functions can only be observed with stochastic error. We provide an introduction to MOSO at the advanced tutorial level, aimed at researchers and practitioners who wish to begin working in this emerging area. Our focus is exclusively on MOSO methods that characterize the entire efficient or Pareto optimal set as the solution to the MOSO problem; this set then may be used as input to the broader multi-criteria decision-making process. Our introduction to MOSO includes an overview of existing theory, methods, and provably convergent algorithms for (a) MOSO on finite sets, called multi-objective ranking and selection, (b) MOSO with integer-ordered decision variables, and (c) MOSO with continuous decision variables. In the context of integer-ordered and continuous decision variables, we focus on methods that provably converge to a local efficient or Pareto optimal set under the natural ordering. We also discuss key open questions that remain in this emerging field.}, keywords = {book chapters / literature reviews / open questions / tutorials}}


Open Problem–Adaptive Constant-Step Stochastic Approximation.
Pasupathy, R.; Honnappa, H.; and Hunter, S. R.
*Stochastic Systems*, 9(3): 307–310. September 2019.

doi link bibtex abstract

doi link bibtex abstract

@article{2019pashonhun, Year = {2019}, Author = {R. Pasupathy and H. Honnappa and S. R. Hunter}, Title = {Open Problem--Adaptive Constant-Step Stochastic Approximation}, journal = {Stochastic Systems}, volume = {9}, number = {3}, month = {September}, pages = {307--310}, doi = {10.1287/stsy.2019.0046}, abstract = {This paper was accepted for the Stochastic Systems Special Section on Open Problems in Applied Probability, presented at the 2018 INFORMS Annual Meeting in Phoenix, Arizona, November 4--7, 2018.}, keywords = {book chapters / literature reviews / open questions / tutorials}}

This paper was accepted for the Stochastic Systems Special Section on Open Problems in Applied Probability, presented at the 2018 INFORMS Annual Meeting in Phoenix, Arizona, November 4–7, 2018.

R-MGSPLINE: Retrospective multi-gradient search for multi-objective simulation optimization on integer lattices.
Applegate, E. A.; and Hunter, S. R.
In Mustafee, N.; Bae, K.-H. G.; Lazarova-Molnar, S.; Rabe, M.; Szabo, C.; Haas, P.; and Son, Y.-J., editor(s), *Proceedings of the 2019 Winter Simulation Conference*, pages 3516–3527, Piscataway, NJ, 2019. IEEE

paper doi link bibtex abstract

@inproceedings{2019apphunWSC, Year = {2019}, Author = {E. A. Applegate and S. R. Hunter}, Title = {{R-MGSPLINE}: {R}etrospective multi-gradient search for multi-objective simulation optimization on integer lattices}, Booktitle = {Proceedings of the 2019 Winter Simulation Conference}, Editor = {N. Mustafee and {K.-H.} G. Bae and S. {Lazarova-Molnar} and M. Rabe and C. Szabo and P. Haas and {Y.-J.} Son}, Publisher = {IEEE}, Address = {Piscataway, NJ}, doi = {10.1109/WSC40007.2019.9004719}, pages = {3516--3527}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/2019apphunWSC.pdf}, abstract = {We introduce the Retrospective Multi-Gradient Search with Piecewise Linear Interpolation and Neighborhood Enumeration (R-MGSPLINE) algorithm for finding a local efficient point when solving a multi-objective simulation optimization problem on an integer lattice. In this nonlinear optimization problem, all objectives can only be observed with stochastic error, the decision variables are integer-valued, and a local solution is called the efficient set. R-MGSPLINE uses a retrospective approximation (RA) framework to repeatedly call the MGSPLINE sample-path solver at a sequence of increasing sample sizes, using the solution from the previous RA iteration as a warm start for the current RA iteration. The MGSPLINE algorithm performs a line search along a common descent direction constructed from pseudo-gradients of each objective, followed by a neighborhood enumeration for certification. Numerical experiments show that R-MGSPLINE converges to a local weakly efficient point.}, keywords = {simulation optimization > multi-objective > integer-ordered}}

We introduce the Retrospective Multi-Gradient Search with Piecewise Linear Interpolation and Neighborhood Enumeration (R-MGSPLINE) algorithm for finding a local efficient point when solving a multi-objective simulation optimization problem on an integer lattice. In this nonlinear optimization problem, all objectives can only be observed with stochastic error, the decision variables are integer-valued, and a local solution is called the efficient set. R-MGSPLINE uses a retrospective approximation (RA) framework to repeatedly call the MGSPLINE sample-path solver at a sequence of increasing sample sizes, using the solution from the previous RA iteration as a warm start for the current RA iteration. The MGSPLINE algorithm performs a line search along a common descent direction constructed from pseudo-gradients of each objective, followed by a neighborhood enumeration for certification. Numerical experiments show that R-MGSPLINE converges to a local weakly efficient point.
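The retrospective approximation structure described above, in which a sample-path solver is called repeatedly at increasing sample sizes with each solution warm-starting the next iteration, can be sketched generically; the `solver` callable, the geometric growth schedule, and the seeding scheme are illustrative assumptions, not the R-MGSPLINE implementation:

```python
def retrospective_approximation(solver, x0, m0=8, growth=1.5, iters=6, seed=0):
    """Sketch of an RA loop: solve a sequence of sample-path problems at
    increasing sample sizes, warm-starting each solve from the previous
    solution."""
    x, m = x0, m0
    for k in range(iters):
        # warm start: pass the previous iterate to the sample-path solver
        x = solver(x, sample_size=int(m), seed=seed + k)
        m *= growth  # increase the sample size between RA iterations
    return x
```

Any callable with the signature `solver(x, sample_size, seed)` can be plugged in to experiment with the schedule.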

2018
(2)

SCORE allocations for bi-objective ranking and selection.
Feldman, G.; and Hunter, S. R.
*ACM Transactions on Modeling and Computer Simulation*, 28(1): 7:1–7:28. January 2018.

link paper doi link bibtex abstract

@article{2018felhun, Year = {2018}, Author = {G. Feldman and S. R. Hunter}, Title = {{SCORE} allocations for bi-objective ranking and selection}, journal = {ACM Transactions on Modeling and Computer Simulation}, volume = {28}, number = {1}, month = {January}, articleno = {7}, numpages = {28}, pages = {7:1--7:28}, doi = {10.1145/3158666}, url_Link = {https://dl.acm.org/authorize?N42866}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/pre2017felhun.pdf}, abstract = {The bi-objective ranking and selection (R\&S) problem is a special case of the multi-objective simulation optimization problem in which two conflicting objectives are known only through dependent Monte Carlo estimators, the decision space or number of systems is finite, and each system can be sampled to some extent. The solution to the bi-objective R\&S problem is a set of systems with non-dominated objective vectors, called the set of Pareto systems. We exploit the special structure of the bi-objective problem to characterize the asymptotically optimal simulation budget allocation, which accounts for dependence between the objectives and balances the probabilities associated with two types of misclassification error. Like much of the R\&S literature, our focus is on the case in which the simulation observations are bivariate normal. Assuming normality, we then use a certain asymptotic limit to derive an easily-implementable SCORE (Sampling Criteria for Optimization using Rate Estimators) sampling framework that approximates the optimal allocation and accounts for correlation between the objectives. Perhaps surprisingly, the limiting SCORE allocation exclusively controls for misclassification-by-inclusion events, in which non-Pareto systems are falsely estimated as Pareto. We also provide an iterative algorithm for implementation. 
Our numerical experience with the resulting SCORE framework indicates that it is fast and accurate for problems having up to ten thousand systems.}, keywords = {simulation optimization > multi-objective > ranking and selection}}

The bi-objective ranking and selection (R&S) problem is a special case of the multi-objective simulation optimization problem in which two conflicting objectives are known only through dependent Monte Carlo estimators, the decision space or number of systems is finite, and each system can be sampled to some extent. The solution to the bi-objective R&S problem is a set of systems with non-dominated objective vectors, called the set of Pareto systems. We exploit the special structure of the bi-objective problem to characterize the asymptotically optimal simulation budget allocation, which accounts for dependence between the objectives and balances the probabilities associated with two types of misclassification error. Like much of the R&S literature, our focus is on the case in which the simulation observations are bivariate normal. Assuming normality, we then use a certain asymptotic limit to derive an easily-implementable SCORE (Sampling Criteria for Optimization using Rate Estimators) sampling framework that approximates the optimal allocation and accounts for correlation between the objectives. Perhaps surprisingly, the limiting SCORE allocation exclusively controls for misclassification-by-inclusion events, in which non-Pareto systems are falsely estimated as Pareto. We also provide an iterative algorithm for implementation. Our numerical experience with the resulting SCORE framework indicates that it is fast and accurate for problems having up to ten thousand systems.

A new framework for parallel ranking & selection using an adaptive standard.
Pei, L.; Nelson, B. L.; and Hunter, S. R.
In Rabe, M.; Juan, A. A.; Mustafee, N.; Skoogh, A.; Jain, S.; and Johansson, B., editor(s), *Proceedings of the 2018 Winter Simulation Conference*, pages 2201–2212, Piscataway, NJ, 2018. IEEE

Paper doi link bibtex abstract

@inproceedings{2018peinelhunWSC, Year = {2018}, Author = {L. Pei and B. L. Nelson and S. R. Hunter}, Title = {A new framework for parallel ranking \& selection using an adaptive standard}, Booktitle = {Proceedings of the 2018 Winter Simulation Conference}, Editor = {M. Rabe and A. A. Juan and N. Mustafee and A. Skoogh and S. Jain and B. Johansson}, Publisher = {IEEE}, Address = {Piscataway, NJ}, doi = {10.1109/WSC.2018.8632457}, pages = {2201--2212}, url = {http://web.ics.purdue.edu/~hunter63/PAPERS/2018peinelhunWSC.pdf}, abstract = {When we have sufficient computational resources to treat a simulation optimization problem as a ranking \& selection (R\&S) problem, then it can be ``solved.'' R\&S is exhaustive search---all feasible solutions are simulated---with meaningful statistical error control. High-performance parallel computing promises to extend the R\&S limit to even larger problems, but parallelizing R\&S procedures in a way that maintains statistical validity while achieving substantial speed-up is difficult. In this paper we introduce an entirely new framework for R\&S called Parallel Adaptive Survivor Selection (PASS) that is specifically engineered to exploit parallel computing environments for solving simulation optimization problems with a very large number of feasible solutions.}, keywords = {simulation optimization > single-objective > ranking and selection > parallel}}

When we have sufficient computational resources to treat a simulation optimization problem as a ranking & selection (R&S) problem, then it can be ``solved.'' R&S is exhaustive search—all feasible solutions are simulated—with meaningful statistical error control. High-performance parallel computing promises to extend the R&S limit to even larger problems, but parallelizing R&S procedures in a way that maintains statistical validity while achieving substantial speed-up is difficult. In this paper we introduce an entirely new framework for R&S called Parallel Adaptive Survivor Selection (PASS) that is specifically engineered to exploit parallel computing environments for solving simulation optimization problems with a very large number of feasible solutions.
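A loose sketch of survivor selection against an adaptive standard follows; the elimination rule, tolerance, and round structure here are illustrative stand-ins, not the statistically valid PASS procedure from the paper:

```python
import statistics

def survivor_selection(systems, rounds=50, reps=10, tol=0.5):
    """Illustrative survivor-selection loop (maximization): each round every
    surviving system receives more replications, the standard is recomputed
    from the current best running mean (hence "adaptive"), and systems whose
    running mean falls below standard - tol are eliminated."""
    data = {name: [] for name in systems}
    survivors = set(systems)
    for _ in range(rounds):
        for name in survivors:
            data[name].extend(systems[name]() for _ in range(reps))
        means = {name: statistics.fmean(data[name]) for name in survivors}
        standard = max(means.values())  # adaptive standard, updated each round
        survivors = {n for n in survivors if means[n] >= standard - tol}
        if len(survivors) == 1:
            break
    return survivors
```

Each entry of `systems` is a zero-argument simulator; in a parallel setting the inner replication loop is what would be distributed across cores.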

2017
(3)

Efficient ranking and selection in parallel computing environments.
Ni, E. C.; Ciocan, D. F.; Henderson, S. G.; and Hunter, S. R.
*Operations Research*, 65(3): 821–836. May-June 2017.
Algorithm implemented in Simio Portal Edition (2017) and in Simio (2019).

paper doi link bibtex abstract

@article{2017nicioetal, Year = {2017}, Author = {E. C. Ni and D. F. Ciocan and S. G. Henderson and S. R. Hunter}, Title = {Efficient ranking and selection in parallel computing environments}, Journal = {Operations Research}, volume = {65}, number = {3}, month = {May-June}, pages = {821--836}, doi = {10.1287/opre.2016.1577}, url_Paper = {https://arxiv.org/abs/1506.04986}, abstract = {The goal of ranking and selection (R\&S) procedures is to identify the best stochastic system from among a finite set of competing alternatives. Such procedures require constructing estimates of each system's performance, which can be obtained simultaneously by running multiple independent replications on a parallel computing platform. However, nontrivial statistical and implementation issues arise when designing R&S procedures for a parallel computing environment. Thus we propose several design principles for parallel R&S procedures that preserve statistical validity and maximize core utilization, especially when large numbers of alternatives or cores are involved. These principles are followed closely by our parallel Good Selection Procedure (GSP), which, under the assumption of normally distributed output, (i) guarantees to select a system in the indifference zone with high probability, (ii) runs efficiently on up to 1,024 parallel cores, and (iii) in an example uses smaller sample sizes compared to existing parallel procedures, particularly for large problems (over one million alternatives). 
In our computational study we discuss two methods for implementing GSP on parallel computers, namely the Message-Passing Interface (MPI) and Hadoop MapReduce and show that the latter provides good protection against core failures at the expense of a significant drop in utilization due to periodic unavoidable synchronization.}, keywords = {simulation optimization > single-objective > ranking and selection > parallel}, bibbase_note = {<span style="color: green">Algorithm implemented in Simio Portal Edition (2017) and in Simio (2019).</span>}}

The goal of ranking and selection (R&S) procedures is to identify the best stochastic system from among a finite set of competing alternatives. Such procedures require constructing estimates of each system's performance, which can be obtained simultaneously by running multiple independent replications on a parallel computing platform. However, nontrivial statistical and implementation issues arise when designing R&S procedures for a parallel computing environment. Thus we propose several design principles for parallel R&S procedures that preserve statistical validity and maximize core utilization, especially when large numbers of alternatives or cores are involved. These principles are followed closely by our parallel Good Selection Procedure (GSP), which, under the assumption of normally distributed output, (i) guarantees to select a system in the indifference zone with high probability, (ii) runs efficiently on up to 1,024 parallel cores, and (iii) in an example uses smaller sample sizes compared to existing parallel procedures, particularly for large problems (over one million alternatives). In our computational study we discuss two methods for implementing GSP on parallel computers, namely the Message-Passing Interface (MPI) and Hadoop MapReduce and show that the latter provides good protection against core failures at the expense of a significant drop in utilization due to periodic unavoidable synchronization.

Parallel ranking and selection.
Hunter, S. R.; and Nelson, B. L.
In Tolk, A.; Fowler, J.; Shao, G.; and Yücesan, E., editor(s), *Advances in Modeling and Simulation: Seminal Research from 50 Years of Winter Simulation Conferences*, Simulation Foundations, Methods and Applications, chapter 12, pages 249–275. Springer, 2017.

paper doi link bibtex abstract

@incollection{2017hunnel, Year = {2017}, author = {S. R. Hunter and B. L. Nelson}, editor = {A. Tolk and J. Fowler and G. Shao and E. Y\"{u}cesan}, title = {Parallel ranking and selection}, booktitle = {Advances in Modeling and Simulation: Seminal Research from 50 Years of Winter Simulation Conferences}, series = {Simulation Foundations, Methods and Applications}, publisher = {Springer}, chapter = {12}, pages = {249--275}, doi = {10.1007/978-3-319-64182-9}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/pre2017hunnel.pdf}, abstract = {The Winter Simulation Conference serves as the initial publication venue for many advances in ranking and selection (R\&S), including the recently-developed R\&S procedures that exploit high-performance parallel computing. We formulate a new stylized model for representing parallel R\&S procedures, and we provide an overview of existing R\&S procedures under the stylized model. We also discuss why designing R\&S procedures for a parallel computing platform is nontrivial and speculate on the future of parallel R\&S procedures. In this chapter, ``parallel computing'' means multiple processors that can execute distinct simulations independently, rather than vector or array processors designed to speed up vector-matrix calculations.}, keywords = {book chapters / literature reviews / open questions / tutorials}}

The Winter Simulation Conference serves as the initial publication venue for many advances in ranking and selection (R&S), including the recently-developed R&S procedures that exploit high-performance parallel computing. We formulate a new stylized model for representing parallel R&S procedures, and we provide an overview of existing R&S procedures under the stylized model. We also discuss why designing R&S procedures for a parallel computing platform is nontrivial and speculate on the future of parallel R&S procedures. In this chapter, ``parallel computing'' means multiple processors that can execute distinct simulations independently, rather than vector or array processors designed to speed up vector-matrix calculations.

An epsilon-constraint method for integer-ordered bi-objective simulation optimization.
Cooper, K.; Hunter, S. R.; and Nagaraj, K.
In Chan, W. K. V.; D'Ambrogio, A.; Zacharewicz, G.; Mustafee, N.; Wainer, G.; and Page, E., editor(s), *Proceedings of the 2017 Winter Simulation Conference*, pages 2303–2314, Piscataway, NJ, 2017. Institute of Electrical and Electronics Engineers, Inc.

Paper doi link bibtex abstract

@inproceedings{2017coohunnagWSC, Year = {2017}, Author = {K. Cooper and S. R. Hunter and K. Nagaraj}, Title = {An epsilon-constraint method for integer-ordered bi-objective simulation optimization}, Booktitle = {Proceedings of the 2017 Winter Simulation Conference}, Editor = {W. K. V. Chan and A. {D'Ambrogio} and G. Zacharewicz and N. Mustafee and G. Wainer and E. Page}, Publisher = {Institute of Electrical and Electronics Engineers, Inc.}, Address = {Piscataway, NJ}, doi = {10.1109/WSC.2017.8247961}, pages = {2303--2314}, url = {http://web.ics.purdue.edu/~hunter63/PAPERS/pre2017coohunnagWSC.pdf}, abstract = {Consider the context of integer-ordered bi-objective simulation optimization, in which the feasible region is a finite subset of the integer lattice. We propose a retrospective approximation (RA) framework to identify a local Pareto set that involves solving a sequence of sample-path bi-objective optimization problems at increasing sample sizes. We apply the epsilon-constraint method to each sample-path bi-objective optimization problem, thus solving a sequence of constrained single-objective problems in each RA iteration. We solve each constrained single-objective optimization problem using the SPLINE algorithm, thus exploiting gradient-based information. In early RA iterations, when sample sizes are small and standard errors are relatively large, we provide only a rough characterization of the Pareto set by making the number of epsilon-constraint problems a function of the standard error. As the RA algorithm progresses, the granularity of the characterization increases until we solve as many epsilon-constraint problems as there are points in the (finite) image of the local Pareto set. Our algorithm displays promising numerical performance.}, keywords = {simulation optimization > multi-objective > integer-ordered}}

Consider the context of integer-ordered bi-objective simulation optimization, in which the feasible region is a finite subset of the integer lattice. We propose a retrospective approximation (RA) framework to identify a local Pareto set that involves solving a sequence of sample-path bi-objective optimization problems at increasing sample sizes. We apply the epsilon-constraint method to each sample-path bi-objective optimization problem, thus solving a sequence of constrained single-objective problems in each RA iteration. We solve each constrained single-objective optimization problem using the SPLINE algorithm, thus exploiting gradient-based information. In early RA iterations, when sample sizes are small and standard errors are relatively large, we provide only a rough characterization of the Pareto set by making the number of epsilon-constraint problems a function of the standard error. As the RA algorithm progresses, the granularity of the characterization increases until we solve as many epsilon-constraint problems as there are points in the (finite) image of the local Pareto set. Our algorithm displays promising numerical performance.
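The epsilon-constraint idea in this abstract, a sweep of constrained single-objective problems whose minimizers trace out a discrete Pareto set, can be sketched for a finite feasible set; enumerating an explicit list of epsilon values and solving each subproblem by direct minimization are simplifying assumptions (the paper uses the SPLINE solver and ties the number of subproblems to standard errors):

```python
def epsilon_constraint(points, f1, f2, epsilons):
    """For each epsilon, minimize f1 over the points satisfying f2 <= epsilon;
    collect the distinct minimizers as a discrete Pareto-set estimate."""
    solutions = []
    for eps in epsilons:
        feasible = [x for x in points if f2(x) <= eps]
        if feasible:
            best = min(feasible, key=f1)  # solve the constrained subproblem
            if best not in solutions:
                solutions.append(best)
    return solutions
```

Tightening the constraint (smaller epsilon) trades objective `f1` for objective `f2`, which is exactly how the sweep covers the trade-off curve.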

2016
(2)

Maximizing quantitative traits in the mating design problem via simulation-based Pareto estimation.
Hunter, S. R.; and McClosky, B.
*IIE Transactions*, 48(6): 565–578. 2016.
Honorable Mention (Second Place), 2017 IIE Transactions Focused Issue on Operations Engineering & Analytics Best Applications Paper Award.

paper doi link bibtex abstract

@article{2016hunmcc, Year = {2016}, Author = {S. R. Hunter and B. {McClosky}}, Title = {Maximizing quantitative traits in the mating design problem via simulation-based {P}areto estimation}, Journal = {IIE Transactions}, volume = {48}, number = {6}, pages = {565--578}, doi = {10.1080/0740817X.2015.1096430}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/pre2015hunmcc1.pdf}, abstract = {Commercial plant breeders improve economically important traits by selectively mating individuals from a given breeding population. Potential pairings are evaluated before the growing season using Monte Carlo simulation, and a mating design is created to allocate a fixed breeding budget across the parent pairs to achieve desired population outcomes. We introduce a novel objective function for this mating design problem that accurately models the goals of a certain class of breeding experiments. The resulting mating design problem is a computationally burdensome simulation optimization problem on a combinatorially large set of feasible points. We propose a two-step solution to this problem: (i) simulate to estimate the performance of each parent pair, and (ii) solve an estimated version of the mating design problem, which is an integer program, using the simulation output. To reduce the computational burden when implementing steps (i) and (ii), we analytically identify a Pareto set of parent pairs that will receive the entire breeding budget at optimality. Since we wish to estimate the Pareto set in step (i) as input to step (ii), we derive an asymptotically optimal simulation budget allocation to estimate the Pareto set that, in our numerical experiments, out-performs the Multi-objective Optimal Computing Budget Allocation (MOCBA) in reducing misclassifications. Given the estimated Pareto set, we provide a branch and bound algorithm to solve the estimated mating design problem. 
Our approach dramatically reduces the computational effort required to solve the mating design problem when compared to naive methods.}, keywords = {various applications, simulation optimization > multi-objective > ranking and selection}, bibbase_note = {<span style="color: green">Honorable Mention (Second Place), 2017 IIE Transactions Focused Issue on Operations Engineering & Analytics Best Applications Paper Award.</span>}}

Commercial plant breeders improve economically important traits by selectively mating individuals from a given breeding population. Potential pairings are evaluated before the growing season using Monte Carlo simulation, and a mating design is created to allocate a fixed breeding budget across the parent pairs to achieve desired population outcomes. We introduce a novel objective function for this mating design problem that accurately models the goals of a certain class of breeding experiments. The resulting mating design problem is a computationally burdensome simulation optimization problem on a combinatorially large set of feasible points. We propose a two-step solution to this problem: (i) simulate to estimate the performance of each parent pair, and (ii) solve an estimated version of the mating design problem, which is an integer program, using the simulation output. To reduce the computational burden when implementing steps (i) and (ii), we analytically identify a Pareto set of parent pairs that will receive the entire breeding budget at optimality. Since we wish to estimate the Pareto set in step (i) as input to step (ii), we derive an asymptotically optimal simulation budget allocation to estimate the Pareto set that, in our numerical experiments, out-performs the Multi-objective Optimal Computing Budget Allocation (MOCBA) in reducing misclassifications. Given the estimated Pareto set, we provide a branch and bound algorithm to solve the estimated mating design problem. Our approach dramatically reduces the computational effort required to solve the mating design problem when compared to naive methods.

ASTRO-DF: Adaptive sampling trust-region optimization algorithms, heuristics, and numerical experience.
Shashaani, S.; Hunter, S. R.; and Pasupathy, R.
In Roeder, T. M. K.; Frazier, P. I.; Szechtman, R.; and Zhou, E., editor(s), *Proceedings of the 2016 Winter Simulation Conference*, pages 554–565, Piscataway, NJ, 2016. Institute of Electrical and Electronics Engineers, Inc.
2016 Winter Simulation Conference I-Sim Best Student Paper Award.

paper doi link bibtex abstract

@inproceedings{2016shahunpasWSC, Year = {2016}, Author = {S. Shashaani and S. R. Hunter and R. Pasupathy}, Title = {{ASTRO-DF}: Adaptive sampling trust-region optimization algorithms, heuristics, and numerical experience}, Booktitle = {Proceedings of the 2016 Winter Simulation Conference}, Editor = {T. M. K. Roeder and P. I. Frazier and R. Szechtman and E. Zhou}, Publisher = {Institute of Electrical and Electronics Engineers, Inc.}, Address = {Piscataway, NJ}, Pages = {554--565}, doi = {10.1109/WSC.2016.7822121}, url_Paper = {http://www.informs-sim.org/wsc16papers/050.pdf}, abstract = {ASTRO-DF is a class of adaptive sampling algorithms for solving simulation optimization problems in which only estimates of the objective function are available by executing a Monte Carlo simulation. ASTRO-DF algorithms are iterative trust-region algorithms, where a local model is repeatedly constructed and optimized as iterates evolve through the search space. The ASTRO-DF class of algorithms is derivative-free in the sense that it does not rely on direct observations of the function derivatives. A salient feature of ASTRO-DF is the incorporation of adaptive sampling and replication to keep the model error and the trust-region radius in lock-step, to ensure efficiency. ASTRO-DF has been demonstrated to generate iterates that globally converge to a first-order critical point with probability one. In this paper, we describe and list ASTRO-DF, and discuss key heuristics that ensure good finite-time performance. We report our numerical experience with ASTRO-DF on test problems in low to moderate dimensions.}, keywords = {simulation optimization > single-objective > continuous}, bibbase_note = {<span style="color: green">2016 Winter Simulation Conference I-Sim Best Student Paper Award.</span>}}

ASTRO-DF is a class of adaptive sampling algorithms for solving simulation optimization problems in which only estimates of the objective function are available by executing a Monte Carlo simulation. ASTRO-DF algorithms are iterative trust-region algorithms, where a local model is repeatedly constructed and optimized as iterates evolve through the search space. The ASTRO-DF class of algorithms is derivative-free in the sense that it does not rely on direct observations of the function derivatives. A salient feature of ASTRO-DF is the incorporation of adaptive sampling and replication to keep the model error and the trust-region radius in lock-step, to ensure efficiency. ASTRO-DF has been demonstrated to generate iterates that globally converge to a first-order critical point with probability one. In this paper, we describe and list ASTRO-DF, and discuss key heuristics that ensure good finite-time performance. We report our numerical experience with ASTRO-DF on test problems in low to moderate dimensions.
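The lock-step idea, replicating at a point until the estimator's error is commensurate with the trust-region radius, can be sketched as follows; the specific stopping rule (standard error below a multiple of the squared radius) is a simplified stand-in for ASTRO-DF's adaptive sampling rule, and `sim` is any zero-argument stochastic simulator:

```python
import statistics

def adaptive_sample(sim, delta, n_min=4, kappa=1.0, n_max=10**4):
    """Replicate until the standard error of the sample mean drops below
    kappa * delta**2, where delta is the current trust-region radius, so the
    stochastic error shrinks in lock-step with the radius."""
    obs = [sim() for _ in range(n_min)]
    while len(obs) < n_max:
        se = statistics.stdev(obs) / len(obs) ** 0.5
        if se <= kappa * delta ** 2:
            break  # estimate is accurate enough for this radius
        obs.append(sim())
    return statistics.fmean(obs), len(obs)
```

As `delta` shrinks across iterations, the rule automatically demands more replications, which is the efficiency mechanism the abstract describes.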

2015
(4)

Stochastically constrained ranking and selection via SCORE.
Pasupathy, R.; Hunter, S. R.; Pujowidianto, N. A.; Lee, L. H.; and Chen, C.-H.
*ACM Transactions on Modeling and Computer Simulation*, 25(1): 1–26. January 2015.
Journal version, Finalist, Winter Simulation Conference Best Theoretical Paper Award.

link paper doi link bibtex abstract

@article{2015pashunetal, Year = {2015}, Author = {R. Pasupathy and S. R. Hunter and N. A. Pujowidianto and L. H. Lee and {C.-H.} Chen}, Title = {Stochastically constrained ranking and selection via {SCORE}}, Journal = {ACM Transactions on Modeling and Computer Simulation}, shortjournal = {ACM Trans. Model. Comput. Simul.}, volume = {25}, number = {1}, month = {January}, articleno = {1}, pages = {1--26}, numpages = {26}, doi = {10.1145/2630066}, url_Link = {http://dl.acm.org/authorize?N41473}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/pre2014pashunetal.pdf}, abstract = {Consider the context of constrained simulation optimization (SO), that is, optimization problems where the objective and constraint functions are known through dependent Monte Carlo estimators. For solving such problems on large finite spaces, we provide an easily implemented sampling framework called SCORE (Sampling Criteria for Optimization using Rate Estimators) that approximates the optimal simulation budget allocation. We develop a general theory, but like much of the existing literature on ranking and selection, our focus is on SO problems where the distribution of the simulation observations is Gaussian. We first characterize the nature of the optimal simulation budget as a bilevel optimization problem. We then show that under a certain asymptotic limit, the solution to the bilevel optimization problem becomes surprisingly tractable and is expressed through a single intuitive measure, the score. We provide an iterative SO algorithm that repeatedly estimates the score and determines how the available simulation budget should be expended across contending systems. Numerical experience with the algorithm resulting from the proposed sampling approximation is very encouraging --- in numerous examples of constrained SO problems having 1,000 to 10,000 systems, the optimal allocation is identified to negligible error within a few seconds to one minute on a typical laptop computer. 
Corresponding times to solve the full bilevel optimization problem range from tens of minutes to several hours.}, keywords = {simulation optimization > single-objective > stochastically constrained > ranking and selection}, bibbase_note = {<span style="color: green">Journal version, Finalist, Winter Simulation Conference Best Theoretical Paper Award.</span>}}

Multi-objective simulation optimization on finite sets: optimal allocation via scalarization.
Feldman, G.; Hunter, S. R.; and Pasupathy, R.
In Yilmaz, L.; Chan, W. K. V.; Moon, I.; Roeder, T. M. K.; Macal, C.; and Rossetti, M. D., editor(s), *Proceedings of the 2015 Winter Simulation Conference*, pages 3610–3621, Piscataway, NJ, 2015. Institute of Electrical and Electronics Engineers, Inc.
2015 Winter Simulation Conference I-Sim Best Student Paper Award.

paper doi link bibtex abstract

@inproceedings{2015felhunpasWSC, Year = {2015}, Author = {G. Feldman and S. R. Hunter and R. Pasupathy}, Title = {Multi-objective simulation optimization on finite sets: optimal allocation via scalarization}, Booktitle = {Proceedings of the 2015 Winter Simulation Conference}, Editor = {L. Yilmaz and W. K. V. Chan and I. Moon and T. M. K. Roeder and C. Macal and M. D. Rossetti}, Publisher = {Institute of Electrical and Electronics Engineers, Inc.}, Address = {Piscataway, NJ}, Pages = {3610--3621}, doi = {10.1109/WSC.2015.7408520}, url_Paper = {http://www.informs-sim.org/wsc15papers/412.pdf}, abstract = {We consider the multi-objective simulation optimization problem on finite sets, where we seek the Pareto set corresponding to systems evaluated on multiple performance measures, using only Monte Carlo simulation observations from each system. We ask how a given simulation budget should be allocated across the systems, and a Pareto surface retrieved, so that the estimated Pareto set minimally deviates from the true Pareto set according to a rigorously defined metric. To answer this question, we suggest scalarization, where the performance measures associated with each system are projected using a carefully considered set of weights, and the Pareto set is estimated as the union of systems that dominate across the weight set. We show that the optimal simulation budget allocation under such scalarization is the solution to a bi-level optimization problem, for which the outer problem is concave, but some inner problems are non-convex. We comment on the development of tractable approximations for use when the number of systems is large.}, keywords = {simulation optimization > multi-objective > ranking and selection}, bibbase_note = {<span style="color: green">2015 Winter Simulation Conference I-Sim Best Student Paper Award.</span>}}

Optimal sampling laws for bi-objective simulation optimization on finite sets.
Hunter, S. R.; and Feldman, G.
In Yilmaz, L.; Chan, W. K. V.; Moon, I.; Roeder, T. M. K.; Macal, C.; and Rossetti, M. D., editor(s), *Proceedings of the 2015 Winter Simulation Conference*, pages 3749–3757, Piscataway, NJ, 2015. Institute of Electrical and Electronics Engineers, Inc.

paper doi link bibtex abstract

@inproceedings{2015hunfelWSC, Year = {2015}, Author = {S. R. Hunter and G. Feldman}, Title = {Optimal sampling laws for bi-objective simulation optimization on finite sets}, Booktitle = {Proceedings of the 2015 Winter Simulation Conference}, Editor = {L. Yilmaz and W. K. V. Chan and I. Moon and T. M. K. Roeder and C. Macal and M. D. Rossetti}, Publisher = {Institute of Electrical and Electronics Engineers, Inc.}, Address = {Piscataway, NJ}, Pages = {3749--3757}, doi = {10.1109/WSC.2015.7408532}, url_Paper = {http://www.informs-sim.org/wsc15papers/424.pdf}, abstract = {We consider the bi-objective simulation optimization (SO) problem on finite sets, that is, an optimization problem where for each ``system'' the two objective functions are estimated as output from a Monte Carlo simulation. The solution to this bi-objective SO problem is a set of non-dominated systems, also called the Pareto set. In this context, we derive the large deviations rate function for the rate of decay of the probability of a misclassification event as a function of the proportion of sample allocated to each competing system. Notably, we account for the presence of dependence between the estimates of each system's performance on the two objectives. The asymptotically optimal allocation maximizes the rate of decay of the probability of misclassification and is the solution to a concave maximization problem.}, keywords = {simulation optimization > multi-objective > ranking and selection}}

Comparing Message Passing Interface and MapReduce for large-scale parallel ranking and selection.
Ni, E. C.; Ciocan, D. F.; Henderson, S. G.; and Hunter, S. R.
In Yilmaz, L.; Chan, W. K. V.; Moon, I.; Roeder, T. M. K.; Macal, C.; and Rossetti, M. D., editor(s), *Proceedings of the 2015 Winter Simulation Conference*, pages 3858–3867, Piscataway, NJ, 2015. Institute of Electrical and Electronics Engineers, Inc.

paper doi link bibtex abstract

@inproceedings{2015nicioetalWSC, Year = {2015}, Author = {E. C. Ni and D. F. Ciocan and S. G. Henderson and S. R. Hunter}, Title = {Comparing {M}essage {P}assing {I}nterface and {M}ap{R}educe for large-scale parallel ranking and selection}, Booktitle = {Proceedings of the 2015 Winter Simulation Conference}, Editor = {L. Yilmaz and W. K. V. Chan and I. Moon and T. M. K. Roeder and C. Macal and M. D. Rossetti}, Publisher = {Institute of Electrical and Electronics Engineers, Inc.}, Address = {Piscataway, NJ}, Pages = {3858--3867}, doi = {10.1109/WSC.2015.7408542}, url_Paper = {http://www.informs-sim.org/wsc15papers/434.pdf}, abstract = {We compare two methods for implementing ranking and selection algorithms in large-scale parallel computing environments. The Message Passing Interface (MPI) provides the programmer with complete control over sending and receiving messages between cores, and is fragile with regard to core failures or messages going awry. In contrast, MapReduce handles all communication and is quite robust, but is more rigid in terms of how algorithms can be coded. As expected in a high-performance computing context, we find that MPI is the more efficient of the two environments, although MapReduce is a reasonable choice. Accordingly, MapReduce may be attractive in environments where cores can stall or fail, such as is possible in low-budget cloud computing.}, keywords = {simulation optimization > single-objective > ranking and selection > parallel}}

2014
(3)

A bound on the performance of an optimal ambulance redeployment policy.
Maxwell, M. S.; Ni, E. C.; Tong, C.; Henderson, S. G.; Hunter, S. R.; and Topaloglu, H.
*Operations Research*, 62(5): 1014–1027. September-October 2014.

paper doi link bibtex abstract

@article{2014maxnietal, Year = {2014}, Author = {M. S. Maxwell and E. C. Ni and C. Tong and S. G. Henderson and S. R. Hunter and H. Topaloglu}, Title = {A bound on the performance of an optimal ambulance redeployment policy}, Journal = {Operations Research}, volume = {62}, number = {5}, month = {September-October}, pages = {1014--1027}, doi = {10.1287/opre.2014.1302}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/2014maxnietal.pdf}, abstract = {Ambulance redeployment is the practice of repositioning ambulance fleets in real time in an attempt to reduce response times to future calls. When redeployment decisions are based on real-time information on the status and location of ambulances, the process is called system-status management. An important performance measure is the long-run fraction of calls with response times over some time threshold. We construct a lower bound on this performance measure that holds for nearly any ambulance redeployment policy through comparison methods for queues. The computation of the bound involves solving a number of integer programs and then simulating a multi-server queue. This work originated when one of the authors was asked to analyze a response to a request-for-proposals for ambulance services in a county in North America.}, keywords = {various applications}}

Sequential detection of convexity from noisy function evaluations.
Jian, N.; Henderson, S. G.; and Hunter, S. R.
In Tolk, A.; Diallo, S. D.; Ryzhov, I. O.; Yilmaz, L.; Buckley, S.; and Miller, J. A., editor(s), *Proceedings of the 2014 Winter Simulation Conference*, pages 3892–3903, Piscataway, NJ, 2014. Institute of Electrical and Electronics Engineers, Inc.

paper doi link bibtex abstract

@inproceedings{2014jiahenhunWSC, Year = {2014}, Author = {N. Jian and S. G. Henderson and S. R. Hunter}, Title = {Sequential detection of convexity from noisy function evaluations}, Booktitle = {Proceedings of the 2014 Winter Simulation Conference}, Editor = {A. Tolk and S. D. Diallo and I. O. Ryzhov and L. Yilmaz and S. Buckley and J. A. Miller}, Publisher = {Institute of Electrical and Electronics Engineers, Inc.}, Address = {Piscataway, NJ}, Pages = {3892--3903}, doi = {10.1109/WSC.2014.7020215}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/2014jiahenhunWSC.pdf}, abstract = {Consider a function that can only be evaluated with noise. Given estimates of the function values from simulation on a finite set of points, we seek a procedure to detect convexity or non-convexity of the true function on those points. We review an existing frequentist hypothesis test, and introduce a sequential Bayesian test. Our Bayesian test applies for both independent sampling and sampling with common random numbers, with known or unknown sampling variance. In each iteration, we collect a set of samples and update a posterior distribution on the true function values, and use that as the prior belief in our next iteration. We then approximate the probability that the function is convex based on the posterior using Monte Carlo simulation.}, keywords = {structure detection}}

A comparison of two parallel ranking and selection procedures.
Ni, E. C.; Henderson, S. G.; and Hunter, S. R.
In Tolk, A.; Diallo, S. D.; Ryzhov, I. O.; Yilmaz, L.; Buckley, S.; and Miller, J. A., editor(s), *Proceedings of the 2014 Winter Simulation Conference*, pages 3761–3772, Piscataway, NJ, 2014. Institute of Electrical and Electronics Engineers, Inc.

paper doi link bibtex abstract

@inproceedings{2014nihenhunWSC, Year = {2014}, Author = {E. C. Ni and S. G. Henderson and S. R. Hunter}, Title = {A comparison of two parallel ranking and selection procedures}, Booktitle = {Proceedings of the 2014 Winter Simulation Conference}, Editor = {A. Tolk and S. D. Diallo and I. O. Ryzhov and L. Yilmaz and S. Buckley and J. A. Miller}, Publisher = {Institute of Electrical and Electronics Engineers, Inc.}, Address = {Piscataway, NJ}, Pages = {3761--3772}, doi = {10.1109/WSC.2014.7020204}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/2014nihenhunWSC.pdf}, abstract = {Ranking and selection (R&S) procedures designed for serial computing environments include two-stage procedures (e.g., NSGS), which provide guarantees on good selection and are simple to implement, and fully-sequential screening-based procedures (e.g., KN), which save simulation replications by eliminating inferior systems early. In a parallel computing environment, a naively-parallelized NSGS procedure may require more simulation replications than a sequential screening procedure such as that of Ni, Hunter, and Henderson (2013) (NHH), but requires less communication since there is no periodic screening. The parallel procedure NHH may require fewer simulation replications overall, but requires more communication to implement periodic screening. We numerically explore the tradeoffs between these two procedures on a parallel computing platform. In particular, we discuss their statistical validity, efficiency, and implementation, including communication and load-balancing.}, keywords = {simulation optimization > single-objective > ranking and selection > parallel}}

2013
(2)

Optimal sampling laws for stochastically constrained simulation optimization on finite sets.
Hunter, S. R.; and Pasupathy, R.
*INFORMS Journal on Computing*, 25(3): 527–542. Summer 2013.
First Place, 2011 INFORMS Computing Society Student Paper Award.

paper doi link bibtex abstract

@article{2013hunpas, Year = {2013}, Author = {S. R. Hunter and R. Pasupathy}, Title = {Optimal sampling laws for stochastically constrained simulation optimization on finite sets}, Journal = {INFORMS Journal on Computing}, shortjournal = {INFORMS J. Comput.}, volume = {25}, number = {3}, month = {Summer}, pages = {527--542}, doi = {10.1287/ijoc.1120.0519}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/2013hunpas.pdf}, abstract = {Consider the context of selecting an optimal system from among a finite set of competing systems, based on a stochastic objective function and subject to multiple stochastic constraints. In this context, we characterize the asymptotically optimal sample allocation that maximizes the rate at which the probability of false selection tends to zero. Since the optimal allocation is the result of a concave maximization problem, its solution is particularly easy to obtain in contexts where the underlying distributions are known or can be assumed. We provide a consistent estimator for the optimal allocation and a corresponding sequential algorithm fit for implementation. Various numerical examples demonstrate how the proposed allocation differs from competing algorithms.}, keywords = {simulation optimization > single-objective > stochastically constrained > ranking and selection}, bibbase_note = {<span style="color: green">First Place, 2011 INFORMS Computing Society Student Paper Award.</span>}}

Ranking and selection in a high performance computing environment.
Ni, E. C.; Hunter, S. R.; and Henderson, S. G.
In Pasupathy, R.; Kim, S.-H.; Tolk, A.; Hill, R.; and Kuhl, M. E., editor(s), *Proceedings of the 2013 Winter Simulation Conference*, pages 833–845, Piscataway, NJ, 2013. Institute of Electrical and Electronics Engineers, Inc.

paper doi link bibtex abstract

@inproceedings{2013nihunhenWSC, Year = {2013}, Author = {E. C. Ni and S. R. Hunter and S. G. Henderson}, Title = {Ranking and selection in a high performance computing environment}, Booktitle = {Proceedings of the 2013 Winter Simulation Conference}, Editor = {R. Pasupathy and {S.-H.} Kim and A. Tolk and R. Hill and M. E. Kuhl}, Publisher = {Institute of Electrical and Electronics Engineers, Inc.}, Address = {Piscataway, NJ}, Pages = {833--845}, doi = {10.1109/WSC.2013.6721475}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/2013nihunhenWSC.pdf}, abstract = {We explore the adaptation of a ranking and selection procedure, originally designed for a sequential computer, to a high-performance (parallel) computing setting. We pay particular attention to screening and explaining why care is required in implementing screening in parallel settings. We develop an algorithm that allows screening at both the master and worker levels, and that apportions work to processors in such a way that excessive communication is avoided. In doing so, we rely on a random number generator with many streams and substreams.}, keywords = {simulation optimization > single-objective > ranking and selection > parallel}}

2012
(2)

Exploring bounds on ambulance deployment policy performance.
Ni, E. C.; Hunter, S. R.; Henderson, S. G.; and Topaloglu, H.
In Laroque, C.; Himmelspach, J.; Pasupathy, R.; Rose, O.; and Uhrmacher, A. M., editor(s), *Proceedings of the 2012 Winter Simulation Conference*, pages 497–508, Piscataway, NJ, 2012. Institute of Electrical and Electronics Engineers, Inc.

paper doi link bibtex abstract

@inproceedings{2012nihunetalWSC, Year = {2012}, Author = {E. C. Ni and S. R. Hunter and S. G. Henderson and H. Topaloglu}, Title = {Exploring bounds on ambulance deployment policy performance}, Booktitle = {Proceedings of the 2012 Winter Simulation Conference}, Editor = {C. Laroque and J. Himmelspach and R. Pasupathy and O. Rose and A. M. Uhrmacher}, Publisher = {Institute of Electrical and Electronics Engineers, Inc.}, Address = {Piscataway, NJ}, Pages = {497--508}, doi = {10.1109/WSC.2012.6465198}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/2012nihunetalWSC.pdf}, abstract = {Ambulance deployment involves controlling a fleet of ambulances, often in real time, in an attempt to keep response times small. Simulation has been used to devise redeployment policies, and bounds have been obtained from a combination of comparison methods for queues (coupling) and simulation. These techniques yield varying results on two realistic examples. In an attempt to understand the varying results, we explore the performance of the policies and bounds on artificial models.}, keywords = {various applications}}

Closed-form sampling laws for stochastically constrained simulation optimization on large finite sets.
Pujowidianto, N. A.; Hunter, S. R.; Pasupathy, R.; Lee, L. H.; and Chen, C.-H.
In Laroque, C.; Himmelspach, J.; Pasupathy, R.; Rose, O.; and Uhrmacher, A. M., editor(s), *Proceedings of the 2012 Winter Simulation Conference*, pages 168–177, Piscataway, NJ, 2012. Institute of Electrical and Electronics Engineers, Inc.
Finalist, Winter Simulation Conference Best Theoretical Paper Award.

paper doi link bibtex abstract

@inproceedings{2012pujhunetalWSC, Year = {2012}, Author = {N. A. Pujowidianto and S. R. Hunter and R. Pasupathy and L. H. Lee and {C.-H.} Chen}, Title = {Closed-form sampling laws for stochastically constrained simulation optimization on large finite sets}, Booktitle = {Proceedings of the 2012 Winter Simulation Conference}, Editor = {C. Laroque and J. Himmelspach and R. Pasupathy and O. Rose and A. M. Uhrmacher}, Publisher = {Institute of Electrical and Electronics Engineers, Inc.}, Address = {Piscataway, NJ}, Pages = {168--177}, doi = {10.1109/WSC.2012.6465141}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/2012pujhunetalWSC.pdf}, abstract = {Consider the context of constrained simulation optimization (SO), i.e., optimization problems where the objective function and constraints are known through a Monte Carlo simulation, with corresponding estimators possibly dependent. We identify the nature of sampling plans that characterize efficient algorithms, particularly in large countable spaces. We show that in a certain asymptotic sense, the optimal sampling characterization, that is, the sampling budget for each system that guarantees optimal convergence rates, depends on a single easily estimable quantity called the score. This result provides a useful and easily implementable sampling allocation that approximates the optimal allocation, which is otherwise intractable due to it being the solution to a difficult bilevel optimization problem. Our results point to a simple sequential algorithm for efficiently solving large-scale constrained simulation optimization problems on finite sets.}, keywords = {simulation optimization > single-objective > stochastically constrained > ranking and selection}, bibbase_note = {<span style="color: green">Finalist, Winter Simulation Conference Best Theoretical Paper Award.</span>}}

2011
(1)

Optimal sampling laws for constrained simulation optimization on finite sets: the bivariate normal case.
Hunter, S. R.; Pujowidianto, N. A.; Chen, C.-H.; Lee, L. H.; Pasupathy, R.; and Yap, C. M.
In Jain, S.; Creasey, R. R.; Himmelspach, J.; White, K. P.; and Fu, M., editor(s), *Proceedings of the 2011 Winter Simulation Conference*, pages 4294–4302, Piscataway, NJ, 2011. Institute of Electrical and Electronics Engineers, Inc.
Finalist, Winter Simulation Conference Best Theoretical Paper Award.

paper doi link bibtex abstract

@inproceedings{2011hunpujetalWSC, Year = {2011}, Author = {S. R. Hunter and N. A. Pujowidianto and {C.-H.} Chen and L. H. Lee and R. Pasupathy and C. M. Yap}, Title = {Optimal sampling laws for constrained simulation optimization on finite sets: the bivariate normal case}, Booktitle = {Proceedings of the 2011 Winter Simulation Conference}, Editor = {S. Jain and R. R. Creasey and J. Himmelspach and K. P. White and M. Fu}, Publisher = {Institute of Electrical and Electronics Engineers, Inc.}, Address = {Piscataway, NJ}, Pages = {4294--4302}, doi = {10.1109/WSC.2011.6148116}, url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/2011hunpujetalWSC.pdf}, abstract = {Consider the context of selecting an optimal system from among a finite set of competing systems, based on a ``stochastic'' objective function and subject to a single ``stochastic'' constraint. In this setting, and assuming the objective and constraint performance measures have a bivariate normal distribution, we present a characterization of the optimal sampling allocation across systems. Unlike previous work on this topic, the characterized optimal allocations are asymptotically exact and expressed explicitly as a function of the correlation between the performance measures.}, keywords = {simulation optimization > single-objective > stochastically constrained > ranking and selection}, bibbase_note = {<span style="color: green">Finalist, Winter Simulation Conference Best Theoretical Paper Award.</span>}}

2010
(1)

Large-deviation sampling laws for constrained simulation optimization on finite sets.
Hunter, S. R.; and Pasupathy, R.
In Johansson, B.; Jain, S.; Montoya-Torres, J.; Hugan, J.; and Yücesan, E., editor(s), *Proceedings of the 2010 Winter Simulation Conference*, pages 995–1002, Piscataway, NJ, 2010. Institute of Electrical and Electronics Engineers, Inc.

doi link bibtex abstract

@inproceedings{2010hunpasWSC, Year = {2010}, Author = {S. R. Hunter and R. Pasupathy}, Title = {Large-deviation sampling laws for constrained simulation optimization on finite sets}, Booktitle = {Proceedings of the 2010 Winter Simulation Conference}, Editor = {B. Johansson and S. Jain and J. {Montoya-Torres} and J. Hugan and E. Y\"{u}cesan}, Publisher = {Institute of Electrical and Electronics Engineers, Inc.}, Address = {Piscataway, NJ}, Pages = {995--1002}, doi = {10.1109/WSC.2010.5679092}, abstract = {We consider the problem of selecting an optimal system from among a finite set of competing systems, based on a ``stochastic'' objective function and subject to a single ``stochastic'' constraint. By strategically dividing the competing systems, we derive a large deviations sampling framework that asymptotically minimizes the probability of false selection. We provide an illustrative example where a closed-form sampling law is obtained after relaxation.}, keywords = {simulation optimization > single-objective > stochastically constrained > ranking and selection}}
