Parallel Performance of Molecular Dynamics Trajectory Analysis. Khoshlessan, M., Paraskevakos, I., Fox, G. C., Jha, S., & Beckstein, O. Technical Report, 2019. Paper: https://bibbase.org/service/mendeley/42d295c0-0737-38d6-8b43-508cab6ea85d/file/1da40a71-4074-6b8b-3396-8bae2e4737de/Khoshlessan_et_al___Unknown___Parallel_Performance_of_Molecular_Dynamics_Trajectory_Analysis.pdf.pdf

Abstract: The performance of biomolecular molecular dynamics (MD) simulations has steadily increased on modern high performance computing (HPC) resources, but acceleration of the analysis of the output trajectories has lagged behind, so that analyzing simulations is increasingly becoming a bottleneck. To close this gap, we studied the performance of parallel trajectory analysis with MPI and the Python MDAnalysis library on three different XSEDE supercomputers where trajectories were read from a Lustre parallel file system. We found that strong scaling performance was impeded by stragglers, MPI processes that were slower than the typical process and that therefore dominated the overall run time. Stragglers were less prevalent for compute-bound workloads, thus pointing to file reading as a crucial bottleneck for scaling. However, a more complicated picture emerged in which both the computation and the ingestion of data exhibited close-to-ideal strong scaling behavior, whereas stragglers were primarily caused by either large MPI communication costs or long times to open the single shared trajectory file. We improved overall strong scaling performance by two different approaches to file access, namely subfiling (splitting the trajectory into as many trajectory segments as there are processes) and MPI-IO with Parallel HDF5 trajectory files. Applying these strategies, we obtained near-ideal strong scaling on up to 384 cores (16 nodes). We summarize our lessons learned in guidelines and strategies on how to take advantage of the available HPC resources to gain good scalability and potentially reduce trajectory analysis times by two orders of magnitude compared to the prevalent serial approach.
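The core pattern described in the abstract, splitting the frames of a shared trajectory across MPI ranks and combining the per-rank results, can be sketched with mpi4py and MDAnalysis. This is a minimal illustration under stated assumptions, not the benchmark code from the report: the input files "topol.pdb" and "traj.xtc", the CA selection, and the RMSD task are hypothetical placeholders for whatever analysis kernel is actually run.

# Minimal sketch of per-rank block decomposition over a single shared trajectory,
# assuming hypothetical inputs "topol.pdb" and "traj.xtc" and an RMSD task as the
# stand-in analysis kernel.
import numpy as np
from mpi4py import MPI
import MDAnalysis as mda
from MDAnalysis.analysis import rms

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

u = mda.Universe("topol.pdb", "traj.xtc")     # every rank opens the shared trajectory
ca = u.select_atoms("protein and name CA")
reference = ca.positions.copy()               # first frame serves as the reference

# Contiguous block of frames for this rank; the last rank absorbs the remainder.
n_frames = len(u.trajectory)
block = n_frames // size
start = rank * block
stop = n_frames if rank == size - 1 else start + block

# Iterating over the sliced trajectory updates ca.positions frame by frame.
local = np.array([rms.rmsd(ca.positions, reference, superposition=True)
                  for ts in u.trajectory[start:stop]])

# Gather the per-rank RMSD blocks on rank 0 and stitch them back into frame order.
gathered = comm.gather(local, root=0)
if rank == 0:
    rmsd_series = np.concatenate(gathered)
    print(rmsd_series.shape)

Run with, for example, mpiexec -n 384 python rmsd_blocks.py. Every rank opens the same trajectory file on the parallel file system, which is exactly the access pattern in which the report observed stragglers; subfiling replaces the single shared file with one trajectory segment per rank while keeping the same block decomposition.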
@techreport{Khoshlessan2019,
title = {Parallel Performance of Molecular Dynamics Trajectory Analysis},
type = {techreport},
year = {2019},
keywords = {Big Data,Global Arrays,HDF5,HPC,MDAnalysis,MPI,MPI I/O,Molecular Dynamics,Python,Straggler,Trajectory Analysis},
pages = {1-60},
abstract = {The performance of biomolecular molecular dynamics (MD) simulations has steadily increased on modern high performance computing (HPC) resources, but acceleration of the analysis of the output trajectories has lagged behind, so that analyzing simulations is increasingly becoming a bottleneck. To close this gap, we studied the performance of parallel trajectory analysis with MPI and the Python MDAnalysis library on three different XSEDE supercomputers where trajectories were read from a Lustre parallel file system. We found that strong scaling performance was impeded by stragglers, MPI processes that were slower than the typical process and that therefore dominated the overall run time. Stragglers were less prevalent for compute-bound workloads, thus pointing to file reading as a crucial bottleneck for scaling. However, a more complicated picture emerged in which both the computation and the ingestion of data exhibited close-to-ideal strong scaling behavior, whereas stragglers were primarily caused by either large MPI communication costs or long times to open the single shared trajectory file. We improved overall strong scaling performance by two different approaches to file access, namely subfiling (splitting the trajectory into as many trajectory segments as there are processes) and MPI-IO with Parallel HDF5 trajectory files. Applying these strategies, we obtained near-ideal strong scaling on up to 384 cores (16 nodes). We summarize our lessons learned in guidelines and strategies on how to take advantage of the available HPC resources to gain good scalability and potentially reduce trajectory analysis times by two orders of magnitude compared to the prevalent serial approach.},
author = {Khoshlessan, Mahzad and Paraskevakos, Ioannis and Fox, Geoffrey C. and Jha, Shantenu and Beckstein, Oliver}
}
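The second remedy named in the abstract, MPI-IO against a single Parallel HDF5 trajectory, can be sketched with h5py's "mpio" driver. This assumes an h5py build linked against a parallel HDF5 library; the file name "trajectory.h5" and the dataset name "coordinates" (shape n_frames × n_atoms × 3) are hypothetical and need not match the layout used in the report.

# Minimal sketch of MPI-IO reads from a single shared HDF5 trajectory via h5py,
# assuming h5py was built against a parallel HDF5 library. The file name and the
# "coordinates" dataset are hypothetical placeholders.
import numpy as np
import h5py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

with h5py.File("trajectory.h5", "r", driver="mpio", comm=comm) as f:
    coords = f["coordinates"]                 # hypothetical dataset, (n_frames, n_atoms, 3)
    n_frames = coords.shape[0]
    block = n_frames // size
    start = rank * block
    stop = n_frames if rank == size - 1 else start + block

    # Collective MPI-IO read: every rank fetches only its own slice of frames.
    with coords.collective:
        local_block = coords[start:stop]      # numpy array, shape (frames_in_block, n_atoms, 3)

# Cheap stand-in analysis on the local block: per-frame RMS distance from the
# centroid (a mass-free radius-of-gyration proxy).
centroids = local_block.mean(axis=1, keepdims=True)
local_rg = np.sqrt(((local_block - centroids) ** 2).sum(axis=2).mean(axis=1))

gathered = comm.gather(local_rg, root=0)
if rank == 0:
    rg_series = np.concatenate(gathered)
    print(rg_series.shape)

With the mpio driver, each rank's slice request is serviced by MPI-IO against the one shared file, approximating the Parallel HDF5 access pattern that the report found restored near-ideal strong scaling up to 384 cores.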
{"_id":"mEfT4hKGNbZu53kya","bibbaseid":"khoshlessan-paraskevakos-fox-jha-beckstein-parallelperformanceofmoleculardynamicstrajectoryanalysis-2019","authorIDs":["2bizGXSnDsXQbogsQ","3SsZg8CSgNDAkrabb","4aXSAWTk8MaSbXhMj","5456ed5a8b01c81930000073","5de93c3db8c3f8de010000a9","5defd44e090769df010000ba","5df18b7bfe9d8edf01000008","5df2377b480e6fde01000035","5df246fa480e6fde0100013d","5df7361e8efe8dde010000fe","5dfc213fff6df7de01000057","5e0689bfd4589cdf01000044","5e1cee7babed9bde010001d9","5e2226f271dcf8df010000b1","5e24cd70981ceddf01000083","5e26026a2368a7de01000105","5e3eaf4ff657b4f20100001b","5e446802084293df01000046","5e46d54e42fb31df010000a9","5e499c28552c77df010000a8","5e49d160b63120f201000092","5e4ef4db338acfde01000087","5e5852802c2732de010000ed","5e5dce9bc64f0ede01000096","5e5df567863279df0100010a","5e630897e358e6de01000005","8Cgyjf2XkpC3FxSTg","9T9Zu4RnaH77rRHKt","A3sHpcc6RuMAYWLD5","AiZxoWsAianHJNbTg","CmEAeJx46L8v6Tp93","DvmYadxRMcQfPLKNX","EtqX3DBa4DkSTy2to","F3F8prsjH5xqKZbZc","FkTM8oEgkT7bdmvn9","GM7zjFibKnPExxT8T","JPwGRoqK3kaczKvvv","K5Nkuiee8dtgvN6bX","KZvLoMsJvpJDCgNXY","L3GNaCS9sjRxugoZj","M3SzHzxNkumjiSNG5","QqDTDJkWLP3pp5EX6","S4cWQ9PcHwZHSBmAN","YDgq8BzdH3tY94Lib","Yk5aG9v2wpG4Q6wEh","Zh8LgjZYTS7vdAgkQ","e2p7XKzzx5wWhZMD7","fPmEja5AwEh7HGxK5","fhMZJEiLWyxqwu5Pk","fhMnwyjN9v4zbzhEn","gggZyPTYCDu76QLQP","nivqCLGroAf4HiG58","nj6mDv8PodFco5WFf","pQa2mHFCGQDh4vWwP","yz4KMHcfiWqMdmwgj","zabY7SRDW9ph8EZFT"],"author_short":["Khoshlessan, M.","Paraskevakos, I.","Fox, G., C.","Jha, S.","Beckstein, O."],"bibdata":{"title":"Parallel Performance of Molecular Dynamics Trajectory Analysis","type":"techreport","year":"2019","keywords":"Big Data,Global Arrays,HDF5,HPC,MDAnalysis,MPI,MPI I/O,Molecular Dynamics,Python,Straggler,Trajectory Analysis","pages":"1-60","id":"1cd1748a-66c8-398f-8788-00d0b3a69597","created":"2019-10-01T17:21:01.838Z","file_attached":"true","profile_id":"42d295c0-0737-38d6-8b43-508cab6ea85d","last_modified":"2020-05-11T14:43:31.902Z","read":false,"starred":false,"authored":"true","confirmed":"true","hidden":false,"citation_key":"Khoshlessan2019","private_publication":false,"abstract":"The performance of biomolecular molecular dynamics (MD) simulations has steadily increased on modern high performance computing (HPC) resources but acceleration of the analysis of the output trajectories has lagged behind so that analyzing simulations is increasingly becoming a bottleneck. To close this gap, we studied the performance of parallel trajectory analysis with MPI and the Python MDAnalysis library on three different XSEDE supercomputers where trajectories were read from a Lustre parallel file system. We found that strong scaling performance was impeded by stragglers, MPI processes that were slower than the typical process and that therefore dominated the overall run time. Stragglers were less prevalent for compute-bound workloads, thus pointing to file reading as a crucial bottleneck for scaling. However, a more complicated picture emerged in which both the computation and the ingestion of data exhibited close to ideal strong scaling behavior whereas stragglers were primarily caused by either large MPI communication costs or long times to open the single shared trajectory file. We improved overall strong scaling performance by two different approaches to file access, namely subfiling (splitting the trajectory into as many trajectory segments as number of processes) and MPI-IO with Parallel HDF5 trajectory files. 
Applying these strategies, we obtained near ideal strong scaling on up to 384 cores (16 nodes). We summarize our lessons-learned in guidelines and strategies on how to take advantage of the available HPC resources to gain good scalability and potentially reduce trajectory analysis times by two orders of magnitude compared to the prevalent serial approach.","bibtype":"techreport","author":"Khoshlessan, Mahzad and Paraskevakos, Ioannis and Fox, Geoffrey C and Jha, Shantenu and Beckstein, Oliver","bibtex":"@techreport{\n title = {Parallel Performance of Molecular Dynamics Trajectory Analysis},\n type = {techreport},\n year = {2019},\n keywords = {Big Data,Global Arrays,HDF5,HPC,MDAnalysis,MPI,MPI I/O,Molecular Dynamics,Python,Straggler,Trajectory Analysis},\n pages = {1-60},\n id = {1cd1748a-66c8-398f-8788-00d0b3a69597},\n created = {2019-10-01T17:21:01.838Z},\n file_attached = {true},\n profile_id = {42d295c0-0737-38d6-8b43-508cab6ea85d},\n last_modified = {2020-05-11T14:43:31.902Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n citation_key = {Khoshlessan2019},\n private_publication = {false},\n abstract = {The performance of biomolecular molecular dynamics (MD) simulations has steadily increased on modern high performance computing (HPC) resources but acceleration of the analysis of the output trajectories has lagged behind so that analyzing simulations is increasingly becoming a bottleneck. To close this gap, we studied the performance of parallel trajectory analysis with MPI and the Python MDAnalysis library on three different XSEDE supercomputers where trajectories were read from a Lustre parallel file system. We found that strong scaling performance was impeded by stragglers, MPI processes that were slower than the typical process and that therefore dominated the overall run time. Stragglers were less prevalent for compute-bound workloads, thus pointing to file reading as a crucial bottleneck for scaling. However, a more complicated picture emerged in which both the computation and the ingestion of data exhibited close to ideal strong scaling behavior whereas stragglers were primarily caused by either large MPI communication costs or long times to open the single shared trajectory file. We improved overall strong scaling performance by two different approaches to file access, namely subfiling (splitting the trajectory into as many trajectory segments as number of processes) and MPI-IO with Parallel HDF5 trajectory files. Applying these strategies, we obtained near ideal strong scaling on up to 384 cores (16 nodes). 
We summarize our lessons-learned in guidelines and strategies on how to take advantage of the available HPC resources to gain good scalability and potentially reduce trajectory analysis times by two orders of magnitude compared to the prevalent serial approach.},\n bibtype = {techreport},\n author = {Khoshlessan, Mahzad and Paraskevakos, Ioannis and Fox, Geoffrey C and Jha, Shantenu and Beckstein, Oliver}\n}","author_short":["Khoshlessan, M.","Paraskevakos, I.","Fox, G., C.","Jha, S.","Beckstein, O."],"urls":{"Paper":"https://bibbase.org/service/mendeley/42d295c0-0737-38d6-8b43-508cab6ea85d/file/1da40a71-4074-6b8b-3396-8bae2e4737de/Khoshlessan_et_al___Unknown___Parallel_Performance_of_Molecular_Dynamics_Trajectory_Analysis.pdf.pdf"},"biburl":"https://bibbase.org/service/mendeley/42d295c0-0737-38d6-8b43-508cab6ea85d","bibbaseid":"khoshlessan-paraskevakos-fox-jha-beckstein-parallelperformanceofmoleculardynamicstrajectoryanalysis-2019","role":"author","keyword":["Big Data","Global Arrays","HDF5","HPC","MDAnalysis","MPI","MPI I/O","Molecular Dynamics","Python","Straggler","Trajectory Analysis"],"metadata":{"authorlinks":{}},"downloads":0},"bibtype":"techreport","biburl":"https://bibbase.org/service/mendeley/42d295c0-0737-38d6-8b43-508cab6ea85d","creationDate":"2019-07-12T00:55:38.798Z","downloads":0,"keywords":["big data","global arrays","hdf5","hpc","mdanalysis","mpi","mpi i/o","molecular dynamics","python","straggler","trajectory analysis"],"search_terms":["parallel","performance","molecular","dynamics","trajectory","analysis","khoshlessan","paraskevakos","fox","jha","beckstein"],"title":"Parallel Performance of Molecular Dynamics Trajectory Analysis","year":2019,"dataSources":["zgahneP4uAjKbudrQ","ya2CyA73rpZseyrZ8","2252seNhipfTmjEBQ"]}