Comparing Message Passing Interface and MapReduce for large-scale parallel ranking and selection. Ni, E. C., Ciocan, D. F., Henderson, S. G., & Hunter, S. R. In Yilmaz, L., Chan, W. K. V., Moon, I., Roeder, T. M. K., Macal, C., & Rossetti, M. D., editors, Proceedings of the 2015 Winter Simulation Conference, pages 3858–3867, Piscataway, NJ, 2015. Institute of Electrical and Electronics Engineers, Inc.
We compare two methods for implementing ranking and selection algorithms in large-scale parallel computing environments. The Message Passing Interface (MPI) gives the programmer complete control over sending and receiving messages between cores, but is fragile with regard to core failures or messages going awry. In contrast, MapReduce handles all communication and is quite robust, but is more rigid in how algorithms can be coded. As expected in a high-performance computing context, we find that MPI is the more efficient of the two environments, although MapReduce is a reasonable choice. Accordingly, MapReduce may be attractive in environments where cores can stall or fail, as can occur in low-budget cloud computing.