A distributed workflow for an astrophysical OpenMP application: Using the data capacitor over WAN to enhance productivity. Henschel, R., Michael, S., & Simms, S. In HPDC 2010 - Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing, pages 644-650, 2010.
Astrophysical simulations of protoplanetary disks and gas giant planet formation are being performed with a variety of numerical methods. Some of the codes in use today have been producing scientifically significant results for several years, or even decades. Each must simulate millions of resolution elements for millions of time steps, capture and store output data, and rapidly and efficiently analyze this data. To do this effectively, a parallel code is needed that scales to tens or hundreds of processors. Furthermore, an efficient workflow for the transport, analysis, and interpretation of the output data is needed to achieve scientifically meaningful results. Since such simulations are usually performed on moderate to large parallel systems, the compute system is generally located at a remote institution. However, analysis of results is typically performed interactively, and because most supercomputing centers do not offer dedicated interactive nodes, the transfer of simulation output data to local resources becomes necessary. Even if interactive resources were available, typical network latencies make X-forwarded displays nearly impossible to work with. Since data sets can be quite large and traditional transfer mechanisms such as scp and sftp offer relatively low throughput, this transfer of data sets becomes a bottleneck in the research workflow. In this article we measure the scalability of the Computational HYdrodynamics with MultiplE Radiation Algorithms (CHYMERA) code on the SGI Altix architecture. We find that it scales well up to 64 threads for moderate- and large-sized problems. We also present a novel approach to enable rapid transfer and analysis of simulation data via the Data Capacitor (DC) and Lustre WAN (Wide Area Network) [17]. The use of a WAN file system to tie batch-system-operated compute resources and interactive analysis and visualization resources together is of general interest and can be applied broadly. Copyright 2010 ACM.
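The workflow described in the abstract has two technical ingredients: OpenMP loop-level parallelism inside the simulation code, and a Lustre file system mounted over the WAN so that analysis machines read simulation output in place rather than staging it with scp or sftp. The C/OpenMP fragment below is a minimal sketch of that pattern, not the CHYMERA code itself; the grid sizes, the toy relaxation update, and the /wan-lustre mount point are all hypothetical.

/* Minimal sketch (not CHYMERA): an OpenMP-parallel grid update whose output
 * lands on a directory assumed to sit on a WAN-mounted Lustre file system,
 * so an analysis machine mounting the same file system reads it directly
 * instead of copying it with scp/sftp.  All names and sizes are illustrative. */
#include <omp.h>
#include <stdio.h>

#define NR 256          /* radial cells   (illustrative size) */
#define NZ 64           /* vertical cells (illustrative size) */

int main(void)
{
    static double rho[NR][NZ], rho_new[NR][NZ];

    /* Initialize a toy density field. */
    for (int i = 0; i < NR; i++)
        for (int j = 0; j < NZ; j++)
            rho[i][j] = 1.0 + 0.01 * i;

    /* The thread count comes from OMP_NUM_THREADS; a scaling study like the
     * one in the paper would vary this from 1 up to 64 threads. */
    printf("running with %d threads\n", omp_get_max_threads());

    for (int step = 0; step < 100; step++) {
        /* Work per time step is shared across threads: each thread handles a
         * block of radial cells via OpenMP loop worksharing. */
        #pragma omp parallel for schedule(static)
        for (int i = 1; i < NR - 1; i++)
            for (int j = 1; j < NZ - 1; j++)
                rho_new[i][j] = 0.25 * (rho[i - 1][j] + rho[i + 1][j] +
                                        rho[i][j - 1] + rho[i][j + 1]);

        #pragma omp parallel for schedule(static)
        for (int i = 1; i < NR - 1; i++)
            for (int j = 1; j < NZ - 1; j++)
                rho[i][j] = rho_new[i][j];
    }

    /* Write the snapshot to the shared WAN file system (hypothetical mount
     * point).  An interactive analysis or visualization machine mounting the
     * same Lustre file system opens this file in place; no transfer stage. */
    FILE *out = fopen("/wan-lustre/scratch/run01/snapshot_0100.bin", "wb");
    if (out) {
        fwrite(rho, sizeof rho, 1, out);
        fclose(out);
    }
    return 0;
}

Compiled with an OpenMP-capable compiler (for example, cc -fopenmp), the same binary can be rerun with different OMP_NUM_THREADS settings to reproduce the shape of a thread-scaling measurement such as the one reported in the paper.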
@inproceedings{Henschel2010644,
 title = {A distributed workflow for an astrophysical OpenMP application: Using the data capacitor over WAN to enhance productivity},
 type = {inproceedings},
 year = {2010},
 keywords = {Application programming interfaces (API); Astroph,Data capacitor; File systems; Lustre; OpenMP; Tera,Wide area networks},
 pages = {644-650},
 websites = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-78649988288&doi=10.1145%2F1851476.1851571&partnerID=40&md5=8b963adcab6d7eee29876b97e4e3da99},
 city = {Chicago, IL},
 id = {b917fdcd-4412-3b59-adcd-04217599d90e},
 created = {2018-02-27T18:07:25.511Z},
 file_attached = {false},
 profile_id = {42d295c0-0737-38d6-8b43-508cab6ea85d},
 group_id = {27e0553c-8ec0-31bd-b42c-825b8a5a9ae8},
 last_modified = {2018-02-27T18:07:25.511Z},
 read = {false},
 starred = {false},
 authored = {false},
 confirmed = {false},
 hidden = {false},
 citation_key = {Henschel2010644},
 source_type = {conference},
 notes = {cited By 3; Conference of 19th ACM International Symposium on High Performance Distributed Computing, HPDC 2010 ; Conference Date: 21 June 2010 Through 25 June 2010; Conference Code:82622},
 private_publication = {false},
 abstract = {Astrophysical simulations of protoplanetary disks and gas giant planet formation are being performed with a variety of numerical methods. Some of the codes in use today have been producing scientifically significant results for several years, or even decades. Each must simulate millions of resolution elements for millions of time steps, capture and store output data, and rapidly and efficiently analyze this data. To do this effectively, a parallel code is needed that scales to tens or hundreds of processors. Furthermore, an efficient workflow for the transport, analysis, and interpretation of the output data is needed to achieve scientifically meaningful results. Since such simulations are usually performed on moderate to large parallel systems, the compute system is generally located at a remote institution. However, analysis of results is typically performed interactively, and because most supercomputing centers do not offer dedicated interactive nodes, the transfer of simulation output data to local resources becomes necessary. Even if interactive resources were available, typical network latencies make X-forwarded displays nearly impossible to work with. Since data sets can be quite large and traditional transfer mechanisms such as scp and sftp offer relatively low throughput, this transfer of data sets becomes a bottleneck in the research workflow. In this article we measure the scalability of the Computational HYdrodynamics with MultiplE Radiation Algorithms (CHYMERA) code on the SGI Altix architecture. We find that it scales well up to 64 threads for moderate- and large-sized problems. We also present a novel approach to enable rapid transfer and analysis of simulation data via the Data Capacitor (DC) and Lustre WAN (Wide Area Network) [17]. The use of a WAN file system to tie batch-system-operated compute resources and interactive analysis and visualization resources together is of general interest and can be applied broadly. Copyright 2010 ACM.},
 bibtype = {inproceedings},
 author = {Henschel, R. and Michael, S. and Simms, S.},
 doi = {10.1145/1851476.1851571},
 booktitle = {HPDC 2010 - Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing}
}
