Moment-ratio diagrams for univariate distributions.
Vargo, E.; Pasupathy, R.; and Leemis, L.
Journal of Quality Technology, 42(3): 276–286. 2010.
@article{2010varpaslee,
author = {E. Vargo and R. Pasupathy and L. Leemis},
title = {Moment-ratio diagrams for univariate distributions},
journal = {Journal of Quality Technology},
year = {2010},
volume = {42},
number = {3},
pages = {276--286},
url = {http://web.ics.purdue.edu/~pasupath/PAPERS/2010varpasleeCOLOR.pdf},
keywords = {moment-ratio diagrams},
abstract = {We present two moment-ratio diagrams along with guidance for their interpretation. The first moment-ratio diagram is a graph of skewness versus kurtosis for common univariate probability distributions. The second moment-ratio diagram is a graph of coefficient of variation versus skewness for common univariate probability distributions. Both of these diagrams, to our knowledge, are the most comprehensive to date. The diagrams serve four purposes: (1) they quantify the proximity between various univariate distributions based on their second, third, and fourth moments; (2) they illustrate the versatility of a particular distribution based on the range of values that the moments can assume; (3) they can be used to create a short list of potential probability models based on a data set; and (4) they clarify the limiting relationships between various well-known distribution families. The use of the moment-ratio diagrams for choosing a distribution that models given data is illustrated.}}
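The first diagram described in the abstract can be reproduced from closed-form moment expressions. The sketch below is a minimal illustration, not the paper's code; the particular distributions shown are assumptions. It computes (skewness, excess kurtosis) points for a few fixed distributions and the curve traced by the gamma family:

```python
import math

def gamma_moments(alpha):
    """Gamma(shape = alpha): skewness 2/sqrt(alpha), excess kurtosis 6/alpha."""
    return 2.0 / math.sqrt(alpha), 6.0 / alpha

# Fixed points in the (skewness, excess kurtosis) plane.
points = {
    "normal": (0.0, 0.0),
    "uniform": (0.0, -6.0 / 5.0),
    "exponential": gamma_moments(1.0),  # gamma with alpha = 1
}

# One-parameter families trace curves in the diagram; e.g., the gamma family:
gamma_curve = [gamma_moments(a) for a in (0.5, 1.0, 2.0, 4.0, 16.0)]

for name, (skew, exkurt) in points.items():
    print(f"{name:12s} skewness {skew:+.2f}  excess kurtosis {exkurt:+.2f}")
```

Plotting these points and curves (e.g., with matplotlib) yields a skewness-versus-kurtosis diagram of the kind the paper presents; its second diagram uses coefficient of variation versus skewness instead.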
Simulation-based optimization of maximum green setting under retrospective approximation framework.
Li, P.; Abbas, M.; Pasupathy, R.; and Head, L.
Transportation Research Record: Journal of the Transportation Research Board, 2192: 1–10. 2010.
@article{2010liabbetal,
author = {P. Li and M. Abbas and R. Pasupathy and L. Head},
title = {Simulation-based optimization of maximum green setting under retrospective approximation framework},
journal = {Transportation Research Record: Journal of the Transportation Research Board},
year = {2010},
volume = {2192},
doi = {10.3141/2192-01},
pages = {1--10},
keywords = {transportation},
abstract = {Most traffic signal systems work under highly dynamic traffic conditions, and they can be studied adequately only through simulation. As a result, how to optimize traffic signal system parameters in a stochastic framework has become increasingly important. Retrospective approximation (RA) represents the latest theoretical development in stochastic simulation. Under the RA framework, the solution to a simulation-based optimization problem can be approached with a sequence of approximate optimization problems. Each of these problems has a specific sample size and is solved to a specific error tolerance. This research applied the RA concept to the optimal design of the maximum green setting of the multidetector green extension system. It also designed a variant of the Markov monotonic search algorithm that can accommodate the requirements of the RA framework, namely, the inheritable Markov monotonic search algorithm, and implemented the RA-based optimization engine within VISSIM. The results show that the optimized maximum green can considerably increase composite performance (reducing delay and increasing safety) compared with traditional designs. The optimization methodology presented in this paper can easily be expanded to other signal parameters.}}
A method for fast generation of Poisson random vectors.
Shin, K.; and Pasupathy, R.
INFORMS Journal on Computing, 22(1): 81–92. 2010.
@article{2010shipas,
author = {K. Shin and R. Pasupathy},
title = {A method for fast generation of {P}oisson random vectors},
journal = {INFORMS Journal on Computing},
year = {2010},
volume = {22},
number = {1},
pages = {81--92},
doi = {10.1287/ijoc.1090.0332},
keywords = {random variate generation},
abstract = {We present the ``trivariate reduction extension'' (TREx)---an exact algorithm for the fast generation of bivariate Poisson random vectors. Like the normal-to-anything (NORTA) procedure, TREx has two phases: a preprocessing phase when the required algorithm parameters are identified, and a generation phase when the parameters identified during the preprocessing phase are used to generate the desired Poisson vector. We prove that the proposed algorithm covers the entire range of theoretically feasible correlations, and we provide efficient-computation directives and rigorous bounds for truncation error control. We demonstrate through extensive numerical tests that TREx, being a specialized algorithm for Poisson vectors, has a preprocessing phase that is uniformly a hundred to a thousand times faster than a fast implementation of NORTA. The generation phases of TREx and NORTA are comparable in speed, with that of TREx being marginally faster. All code is publicly available.}}
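The classical trivariate-reduction construction that TREx builds on can be sketched in a few lines. This is only the basic construction, which covers a restricted range of nonnegative correlations; the paper's algorithm extends coverage to the entire feasible range and adds the preprocessing machinery. The Poisson sampler here is Knuth's textbook method, not the paper's.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's multiplication method; adequate for moderate lam."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def bivariate_poisson(rng, lam1, lam2, rho):
    """(X1, X2) = (Y1 + Y3, Y2 + Y3) with independent Poisson Y's gives
    Poisson(lam1), Poisson(lam2) marginals and correlation
    lam3 / sqrt(lam1 * lam2); so set lam3 = rho * sqrt(lam1 * lam2)."""
    lam3 = rho * math.sqrt(lam1 * lam2)
    if not 0.0 <= lam3 <= min(lam1, lam2):
        raise ValueError("rho outside the range this basic construction covers")
    y3 = poisson(rng, lam3)
    return poisson(rng, lam1 - lam3) + y3, poisson(rng, lam2 - lam3) + y3

# Demo: 20,000 draws with marginal means 3 and 2 and correlation 0.5.
rng = random.Random(42)
draws = [bivariate_poisson(rng, 3.0, 2.0, 0.5) for _ in range(20000)]
mean1 = sum(a for a, _ in draws) / len(draws)
mean2 = sum(b for _, b in draws) / len(draws)
cov = sum((a - mean1) * (b - mean2) for a, b in draws) / len(draws)
var1 = sum((a - mean1) ** 2 for a, _ in draws) / len(draws)
var2 = sum((b - mean2) ** 2 for _, b in draws) / len(draws)
corr = cov / math.sqrt(var1 * var2)
```

The construction's correlation ceiling is sqrt(min(lam1, lam2) / max(lam1, lam2)); removing that restriction is part of what the "extension" in TREx refers to.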
On choosing parameters in retrospective-approximation algorithms for stochastic root finding and simulation optimization.
Pasupathy, R.
Operations Research, 58(4): 889–901. 2010.
@article{2010pas,
author = {R. Pasupathy},
title = {On choosing parameters in retrospective-approximation algorithms for stochastic root finding and simulation optimization},
journal = {Operations Research},
year = {2010},
volume = {58},
number = {4},
pages = {889--901},
url = {http://web.ics.purdue.edu/~pasupath/PAPERS/2010pas.pdf},
doi = {10.1287/opre.1090.0773},
keywords = {retrospective approximation, stochastic root-finding},
abstract = {The stochastic root-finding problem is that of finding a zero of a vector-valued function known only through a stochastic simulation. The simulation-optimization problem is that of locating a real-valued function's minimum, again with only a stochastic simulation that generates function estimates. Retrospective approximation (RA) is a sample-path technique for solving such problems, where the solution to the underlying problem is approached via solutions to a sequence of approximate deterministic problems, each of which is generated using a specified sample size, and solved to a specified error tolerance. Our primary focus, in this paper, is providing guidance on choosing the sequence of sample sizes and error tolerances in RA algorithms. We first present an overview of the conditions that guarantee the correct convergence of RA's iterates. Then we characterize a class of error-tolerance and sample-size sequences that are superior to others in a certain precisely defined sense. We also identify and recommend members of this class and provide a numerical example illustrating the key results.}}
%bibbase_note = {<span style="color: green">Winner, 2010 INFORMS Computing Society Student Paper Award.</span>}
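The RA recipe, a sequence of sample-path problems with growing sample sizes and shrinking error tolerances, can be illustrated on a toy stochastic root-finding problem of my own choosing, not the paper's: find the x where P(xi <= x) = 0.9. Geometric sample-size growth with tolerances proportional to the reciprocal square root of the sample size is one of the pairings the paper analyzes; everything else here is a simplification.

```python
import random

def ra_root(sample, target=0.9, stages=6, m0=64, growth=2.0, seed=1):
    """Retrospective approximation for the root of g(x) = P(xi <= x) - target.

    Stage k draws m_k observations, forms the sample-path function
    g_k(x) = empirical CDF(x) - target, and solves g_k(x) = 0 by bisection
    to tolerance eps_k = 1 / sqrt(m_k); sample sizes grow geometrically.
    """
    rng = random.Random(seed)
    m, x = float(m0), 0.0
    for _ in range(stages):
        data = [sample(rng) for _ in range(int(m))]
        eps = 1.0 / m ** 0.5

        def g(t):
            return sum(d <= t for d in data) / len(data) - target

        lo, hi = min(data) - 1.0, max(data) + 1.0
        while hi - lo > eps:
            mid = 0.5 * (lo + hi)
            if g(mid) < 0.0:
                lo = mid
            else:
                hi = mid
        x = 0.5 * (lo + hi)  # stage-k solution
        m *= growth
    return x

# 0.9-quantile of Exp(1); the true root is -ln(0.1), about 2.303.
estimate = ra_root(lambda rng: rng.expovariate(1.0))
```

Each stage here is solved from scratch for brevity; in practice RA warm-starts each sample-path problem at the previous stage's solution, which is where the computational savings the paper quantifies come from.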
The Initial Transient in Steady-State Point Estimation: Contexts, A Bibliography, The MSE Criterion, and The MSER Statistic.
Pasupathy, R.; and Schmeiser, B. W.
In B. Johansson; S. Jain; J. Montoya-Torres; J. Hugan; and E. Yücesan, editor(s),
Proceedings of the 2010 Winter Simulation Conference, pages 184–197, Piscataway, NJ, 2010. Institute of Electrical and Electronics Engineers, Inc.
@inproceedings{2010passchBWSC,
author = {R. Pasupathy and B. W. Schmeiser},
title = {The Initial Transient in Steady-State Point Estimation: Contexts, A Bibliography, The MSE Criterion, and The MSER Statistic},
booktitle = {Proceedings of the 2010 Winter Simulation Conference},
publisher = {Institute of Electrical and Electronics Engineers, Inc.},
address = {Piscataway, NJ},
editor = {B.~Johansson and S.~Jain and J.~Montoya-Torres and J.~Hugan and E.~Y\"{u}cesan},
pages = {184--197},
year = {2010},
url = {http://www.informs-sim.org/wsc10papers/017.pdf},
keywords = {initial transient},
abstract = {The initial transient is an unavoidable issue when estimating parameters of steady-state distributions. We discuss contexts and factors that affect how the initial transient is handled, provide a bibliography (from the system simulation literature), discuss criteria for evaluating initial-transient algorithms, arguing for focusing on the mean squared error (mse). We discuss the MSER statistic, showing that it is asymptotically proportional to the mse and therefore a good foundation for initial-transient algorithms. We suggest two new algorithms (MSER-LLM and MSER-LLM2) for using the MSER statistic and compare them, based on empirical results for M/M/1 and AR(1) data processes, to the original MSER algorithm (MSER-GM).}
}
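The MSER statistic itself is simple to compute. Below is a minimal sketch (a naive O(n^2) search, with the conventional restriction of the truncation point to the first half of the run; the demo series is a made-up example, not from the paper):

```python
import random

def mser_truncation(x, max_frac=0.5):
    """Return the deletion point d minimizing the MSER statistic
    MSER(d) = sum_{j > d} (x[j] - mean(x[d:]))**2 / (n - d)**2."""
    n = len(x)
    best_d, best_val = 0, float("inf")
    for d in range(int(max_frac * n)):
        tail = x[d:]
        mean = sum(tail) / len(tail)
        val = sum((v - mean) ** 2 for v in tail) / len(tail) ** 2
        if val < best_val:
            best_d, best_val = d, val
    return best_d

# Demo: an obvious initial transient (ten observations near 100)
# followed by 200 steady-state observations around zero.
rng = random.Random(0)
series = [100.0] * 10 + [rng.gauss(0.0, 1.0) for _ in range(200)]
d = mser_truncation(series)
steady_state_estimate = sum(series[d:]) / (len(series) - d)
```

Truncating at the minimizing d and averaging the remainder corresponds to the original MSER-GM rule; the paper's MSER-LLM variants use the statistic differently.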
Root Finding via DARTS: Dynamic Adaptive Random Target Shooting.
Pasupathy, R.; and Schmeiser, B. W.
In B. Johansson; S. Jain; J. Montoya-Torres; J. Hugan; and E. Yücesan, editor(s),
Proceedings of the 2010 Winter Simulation Conference, pages 1255–1262, Piscataway, NJ, 2010. Institute of Electrical and Electronics Engineers, Inc.
@inproceedings{2010passchAWSC,
author = {R. Pasupathy and B. W. Schmeiser},
title = {Root Finding via DARTS: Dynamic Adaptive Random Target Shooting},
booktitle = {Proceedings of the 2010 Winter Simulation Conference},
publisher = {Institute of Electrical and Electronics Engineers, Inc.},
address = {Piscataway, NJ},
editor = {B.~Johansson and S.~Jain and J.~Montoya-Torres and J.~Hugan and E.~Y\"{u}cesan},
pages = {1255--1262},
year = {2010},
url = {http://www.informs-sim.org/wsc10papers/115.pdf},
keywords = {stochastic root-finding, stochastic approximation},
abstract = {Consider multi-dimensional root finding when the equations are available only implicitly via a Monte Carlo simulation oracle that for any solution returns a vector of point estimates. We develop DARTS, a stochastic-approximation algorithm that makes quasi-Newton moves to a new solution whenever the current sample size is large compared to the estimated quality of the current solution and estimated sampling error. We show that DARTS converges in a certain precise sense, and discuss reasons to expect substantial computational efficiencies over traditional stochastic approximation variations.}}
Selecting Small Quantiles.
Pasupathy, R.; Szechtman, R.; and Yücesan, E.
In B. Johansson; S. Jain; J. Montoya-Torres; J. Hugan; and E. Yücesan, editor(s),
Proceedings of the 2010 Winter Simulation Conference, pages 2762–2770, Piscataway, NJ, 2010. Institute of Electrical and Electronics Engineers, Inc.
@inproceedings{2010passzeyucWSC,
author = {R. Pasupathy and R. Szechtman and E. Y\"{u}cesan},
title = {Selecting Small Quantiles},
booktitle = {Proceedings of the 2010 Winter Simulation Conference},
publisher = {Institute of Electrical and Electronics Engineers, Inc.},
address = {Piscataway, NJ},
editor = {B.~Johansson and S.~Jain and J.~Montoya-Torres and J.~Hugan and E.~Y\"{u}cesan},
pages = {2762--2770},
year = {2010},
url = {http://www.informs-sim.org/wsc10papers/255.pdf},
keywords = {ranking and selection, quantiles, optimal allocation},
abstract = {Ranking and selection (R&S) techniques are statistical methods developed to select the best system, or a subset of systems from among a set of alternative system designs. R&S via simulation is particularly appealing as it combines modeling flexibility of simulation with the efficiency of statistical techniques for effective decision making. The overwhelming majority of the R&S research, however, focuses on the expected performance of competing designs. Alternatively, quantiles, which provide additional information about the distribution of the performance measure of interest, may serve as better risk measures than the usual expected value. In stochastic systems, quantiles indicate the level of system performance that can be delivered with a specified probability. In this paper, we address the problem of ranking and selection based on quantiles. In particular, we formulate the problem and characterize the optimal budget allocation scheme using the large deviations theory.}}
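As a baseline for the problem this paper studies, here is an equal-allocation sketch (the setup and names are hypothetical): each system is simulated n times, its p-quantile is estimated by an order statistic, and the system with the smallest estimate is selected. The paper's contribution, the large-deviations characterization of the unequal budget allocation that asymptotically minimizes the probability of false selection, is not implemented here.

```python
import math
import random

def sample_quantile(data, p):
    """Order-statistic estimator: the ceil(p * n)-th smallest observation."""
    s = sorted(data)
    return s[max(1, math.ceil(p * len(s))) - 1]

def select_by_quantile(systems, p=0.1, n=4000, seed=7):
    """Equal allocation: n replications per system, then pick the system
    with the smallest estimated p-quantile (smaller taken as better here)."""
    rng = random.Random(seed)
    estimates = [sample_quantile([sim(rng) for _ in range(n)], p)
                 for sim in systems]
    return min(range(len(estimates)), key=estimates.__getitem__), estimates

# Two hypothetical systems, N(0, 1) and N(1, 1); system 0 has the smaller
# 0.1-quantile (about -1.28 versus -0.28), so it should be selected.
best, est = select_by_quantile([lambda r: r.gauss(0.0, 1.0),
                                lambda r: r.gauss(1.0, 1.0)])
```

For small p, equal allocation wastes replications on clearly inferior systems; quantifying how to shift budget toward the systems that determine the false-selection rate is precisely what the paper's large-deviations analysis provides.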
Large-deviation sampling laws for constrained simulation optimization on finite sets.
Hunter, S. R.; and Pasupathy, R.
In B. Johansson; S. Jain; J. Montoya-Torres; J. Hugan; and E. Yücesan, editor(s),
Proceedings of the 2010 Winter Simulation Conference, pages 995–1002, Piscataway, NJ, 2010. Institute of Electrical and Electronics Engineers, Inc.
@inproceedings{2010hunpasWSC,
author = {S. R. Hunter and R. Pasupathy},
title = {Large-deviation sampling laws for constrained simulation optimization on finite sets},
booktitle = {Proceedings of the 2010 Winter Simulation Conference},
publisher = {Institute of Electrical and Electronics Engineers, Inc.},
address = {Piscataway, NJ},
editor = {B.~Johansson and S.~Jain and J.~Montoya-Torres and J.~Hugan and E.~Y\"{u}cesan},
pages = {995--1002},
year = {2010},
url = {http://www.informs-sim.org/wsc10papers/090.pdf},
urldate = {2013-07-20},
keywords = {constrained simulation optimization, optimal allocation, ranking and selection},
abstract = {We consider the problem of selecting an optimal system from among a finite set of competing systems, based on a ``stochastic'' objective function and subject to a single ``stochastic'' constraint. By strategically dividing the competing systems, we derive a large deviations sampling framework that asymptotically minimizes the probability of false selection. We provide an illustrative example where a closed-form sampling law is obtained after relaxation.}}