Scalable and Efficient Whole-exome Data Processing Using Workflows on the Cloud. Cala, J., Marei, E., Yu, Y., Takeda, K., & Missier, P. Future Generation Computer Systems, 2016.

Abstract: Dataflow-style workflows offer a simple, high-level programming model for flexible prototyping of scientific applications as an attractive alternative to low-level scripting. At the same time, workflow management systems (WFMS) may support data parallelism over big datasets by providing scalable, distributed deployment and execution of the workflow over a cloud infrastructure. In theory, the combination of these properties makes workflows a natural choice for implementing Big Data processing pipelines, common for instance in bioinformatics. In practice, however, correct workflow design for parallel Big Data problems can be complex and very time-consuming. In this paper we present our experience in porting a genomics data processing pipeline from an existing scripted implementation deployed on a closed HPC cluster, to a workflow-based design deployed on the Microsoft Azure public cloud. We draw two contrasting and general conclusions from this project. On the positive side, we show that our solution based on the e-Science Central WFMS and deployed in the cloud clearly outperforms the original HPC-based implementation achieving up to 2.3x speed-up. However, in order to deliver such performance we describe the importance of optimising the workflow deployment model to best suit the characteristics of the cloud computing infrastructure. The main reason for the performance gains was the availability of fast, node-local SSD disks delivered by D-series Azure VMs combined with the implicit use of local disk resources by e-Science Central workflow engines. These conclusions suggest that, on parallel Big Data problems, it is important to couple understanding of the cloud computing architecture and its software stack with simplicity of design, and that further efforts in automating parallelisation of complex pipelines are required.
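The abstract describes a data-parallel, scatter/gather structure: each exome sample is processed independently and the per-sample results are combined at the end, with intermediates kept on fast node-local disks. The sketch below illustrates that pattern only; it is not the authors' e-Science Central workflow or pipeline. The stage functions, sample IDs, and file names are hypothetical placeholders, and standard library process pools stand in for distributed workflow engines.

# Minimal sketch of the scatter/gather data parallelism described in the
# abstract. align_and_call and merge_results are hypothetical stand-ins for
# the per-sample and combining stages of a whole-exome pipeline; intermediates
# are written to node-local scratch to mimic the role of local SSDs.
import tempfile
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def align_and_call(sample_id: str, scratch_root: str) -> str:
    """Hypothetical per-sample stage: align reads and call variants.

    Writes intermediates to node-local scratch and returns the path of the
    per-sample result (a stub file standing in for a VCF).
    """
    scratch = Path(tempfile.mkdtemp(prefix=f"{sample_id}_", dir=scratch_root))
    result = scratch / f"{sample_id}.vcf"
    result.write_text(f"# variants for {sample_id}\n")  # placeholder output
    return str(result)

def merge_results(result_paths: list[str]) -> str:
    """Hypothetical gather stage: concatenate per-sample outputs."""
    merged = Path("cohort_merged.vcf")
    merged.write_text("".join(Path(p).read_text() for p in result_paths))
    return str(merged)

if __name__ == "__main__":
    samples = ["NA12878", "NA12891", "NA12892"]   # example sample IDs
    scratch_root = tempfile.gettempdir()          # stands in for a local SSD
    # Scatter: one worker per sample, mirroring one workflow invocation per sample.
    with ProcessPoolExecutor(max_workers=len(samples)) as pool:
        per_sample = list(pool.map(align_and_call, samples,
                                   [scratch_root] * len(samples)))
    # Gather: combine the independent per-sample results.
    print(merge_results(per_sample))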
@article{cala_scalable_2016,
title = {Scalable and {Efficient} {Whole}-exome {Data} {Processing} {Using} {Workflows} on the {Cloud}},
volume = {In press},
abstract = {Dataflow-style workflows offer a simple, high-level programming model for flexible prototyping of scientific applications as an attractive alternative to low-level scripting. At the same time, workflow management systems (WFMS) may support data parallelism over big datasets by providing scalable, distributed deployment and execution of the workflow over a cloud infrastructure. In theory, the combination of these properties makes workflows a natural choice for implementing Big Data processing pipelines, common for instance in bioinformatics. In practice, however, correct workflow design for parallel Big Data problems can be complex and very time-consuming. In this paper we present our experience in porting a genomics data processing pipeline from an existing scripted implementation deployed on a closed HPC cluster, to a workflow-based design deployed on the Microsoft Azure public cloud. We draw two contrasting and general conclusions from this project. On the positive side, we show that our solution based on the e-Science Central WFMS and deployed in the cloud clearly outperforms the original HPC-based implementation achieving up to 2.3x speed-up. However, in order to deliver such performance we describe the importance of optimising the workflow deployment model to best suit the characteristics of the cloud computing infrastructure. The main reason for the performance gains was the availability of fast, node-local SSD disks delivered by D-series Azure VMs combined with the implicit use of local disk resources by e-Science Central workflow engines. These conclusions suggest that, on parallel Big Data problems, it is important to couple understanding of the cloud computing architecture and its software stack with simplicity of design, and that further efforts in automating parallelisation of complex pipelines are required.},
	number = {Special Issue: Big Data in the Cloud},
	note = {Best paper award at the FGCS forum 2016},
journal = {Future Generation Computer Systems},
author = {Cala, Jacek and Marei, Eyad and Yu, Yaobo and Takeda, Kenji and Missier, Paolo},
year = {2016},
	keywords = {workflow, Performance analysis, Cloud computing, HPC, Whole-exome sequencing, Workflow-based application, genomics},
}
{"_id":"uQiGFwiJreq6X9oyz","bibbaseid":"cala-marei-yu-takeda-missier-scalableandefficientwholeexomedataprocessingusingworkflowsonthecloud-2016","downloads":0,"creationDate":"2016-01-19T08:53:21.418Z","title":"Scalable and Efficient Whole-exome Data Processing Using Workflows on the Cloud","author_short":["Cala, J.","Marei, E.","Yu, Y.","Takeda, K.","Missier, P."],"year":2016,"bibtype":"article","biburl":"https://bibbase.org/f/MTSG9SdhWPisKNpZX/MyPublications-bibbase.bib","bibdata":{"bibtype":"article","type":"article","title":"Scalable and Efficient Whole-exome Data Processing Using Workflows on the Cloud","volume":"In press","abstract":"Dataflow-style workflows offer a simple, high-level programming model for flexible prototyping of scientific applications as an attractive alternative to low-level scripting. At the same time, workflow management systems (WFMS) may support data parallelism over big datasets by providing scalable, distributed deployment and execution of the workflow over a cloud infrastructure. In theory, the combination of these properties makes workflows a natural choice for implementing Big Data processing pipelines, common for instance in bioinformatics. In practice, however, correct workflow design for parallel Big Data problems can be complex and very time-consuming. In this paper we present our experience in porting a genomics data processing pipeline from an existing scripted implementation deployed on a closed HPC cluster, to a workflow-based design deployed on the Microsoft Azure public cloud. We draw two contrasting and general conclusions from this project. On the positive side, we show that our solution based on the e-Science Central WFMS and deployed in the cloud clearly outperforms the original HPC-based implementation achieving up to 2.3x speed-up. However, in order to deliver such performance we describe the importance of optimising the workflow deployment model to best suit the characteristics of the cloud computing infrastructure. The main reason for the performance gains was the availability of fast, node-local SSD disks delivered by D-series Azure VMs combined with the implicit use of local disk resources by e-Science Central workflow engines. These conclusions suggest that, on parallel Big Data problems, it is important to couple understanding of the cloud computing architecture and its software stack with simplicity of design, and that further efforts in automating parallelisation of complex pipelines are required.","number":"Special Issue: Big Data in the Cloud - Best paper award at the FGCS forum 2016","journal":"Future Generation Computer Systems","author":[{"propositions":[],"lastnames":["Cala"],"firstnames":["Jacek"],"suffixes":[]},{"propositions":[],"lastnames":["Marei"],"firstnames":["Eyad"],"suffixes":[]},{"propositions":[],"lastnames":["Yu"],"firstnames":["Yaobo"],"suffixes":[]},{"propositions":[],"lastnames":["Takeda"],"firstnames":["Kenji"],"suffixes":[]},{"propositions":[],"lastnames":["Missier"],"firstnames":["Paolo"],"suffixes":[]}],"year":"2016","keywords":"workflow, Performance analysis, Cloud computing, HPC, Whole-exome sequencing, Workflow-based application, cloud, genomics, ?","bibtex":"@article{cala_scalable_2016,\n\ttitle = {Scalable and {Efficient} {Whole}-exome {Data} {Processing} {Using} {Workflows} on the {Cloud}},\n\tvolume = {In press},\n\tabstract = {Dataflow-style workflows offer a simple, high-level programming model for flexible prototyping of scientific applications as an attractive alternative to low-level scripting. 
At the same time, workflow management systems (WFMS) may support data parallelism over big datasets by providing scalable, distributed deployment and execution of the workflow over a cloud infrastructure. In theory, the combination of these properties makes workflows a natural choice for implementing Big Data processing pipelines, common for instance in bioinformatics. In practice, however, correct workflow design for parallel Big Data problems can be complex and very time-consuming. In this paper we present our experience in porting a genomics data processing pipeline from an existing scripted implementation deployed on a closed HPC cluster, to a workflow-based design deployed on the Microsoft Azure public cloud. We draw two contrasting and general conclusions from this project. On the positive side, we show that our solution based on the e-Science Central WFMS and deployed in the cloud clearly outperforms the original HPC-based implementation achieving up to 2.3x speed-up. However, in order to deliver such performance we describe the importance of optimising the workflow deployment model to best suit the characteristics of the cloud computing infrastructure. The main reason for the performance gains was the availability of fast, node-local SSD disks delivered by D-series Azure VMs combined with the implicit use of local disk resources by e-Science Central workflow engines. These conclusions suggest that, on parallel Big Data problems, it is important to couple understanding of the cloud computing architecture and its software stack with simplicity of design, and that further efforts in automating parallelisation of complex pipelines are required.},\n\tnumber = {Special Issue: Big Data in the Cloud - Best paper award at the FGCS forum 2016},\n\tjournal = {Future Generation Computer Systems},\n\tauthor = {Cala, Jacek and Marei, Eyad and Yu, Yaobo and Takeda, Kenji and Missier, Paolo},\n\tyear = {2016},\n\tkeywords = {workflow, Performance analysis, Cloud computing, HPC, Whole-exome sequencing, Workflow-based application, cloud, genomics, ?},\n}\n\n","author_short":["Cala, J.","Marei, E.","Yu, Y.","Takeda, K.","Missier, P."],"key":"cala_scalable_2016","id":"cala_scalable_2016","bibbaseid":"cala-marei-yu-takeda-missier-scalableandefficientwholeexomedataprocessingusingworkflowsonthecloud-2016","role":"author","urls":{},"keyword":["workflow","Performance analysis","Cloud computing","HPC","Whole-exome sequencing","Workflow-based application","cloud","genomics","?"],"metadata":{"authorlinks":{}},"downloads":0},"search_terms":["scalable","efficient","whole","exome","data","processing","using","workflows","cloud","cala","marei","yu","takeda","missier"],"keywords":["workflow","performance analysis","cloud computing","hpc","whole-exome sequencing","workflow-based application","cloud","genomics","?"],"authorIDs":[],"dataSources":["zh27EpT9RPew3MWSE","nF6KkFb4XxGruanwy","BDjqJntjXzyBmLxhv","oiWqtmpFQ6ZtiMEK2","k75vCTghu54BjX5qH","j9tnaL2u4rifwAc2v","NCorZq2vkXK6BnhLF","ze2X9uz8Dcv2oGipf","afppXLgSuddAzAL9e","wJE4ynGem9MRsXBRn","9zrgMZfGdRkdkNXfZ","qTQGxWDYeue2pHBus"]}