Evaluating FAIR Maturity Through a Scalable, Automated, Community-Governed Framework. Wilkinson, M. D., Dumontier, M., Sansone, S.-A., da Silva Santos, L. O. B., Prieto, M., Batista, D., McQuilton, P., Kuhn, T., Rocca-Serra, P., Crosas, M., & Schultes, E. bioRxiv, Cold Spring Harbor Laboratory, 2019.
@article{Wilkinson649202,
 title = {Evaluating FAIR Maturity Through a Scalable, Automated, Community-Governed Framework},
 year = {2019},
 url = {https://www.biorxiv.org/content/early/2019/05/28/649202},
 publisher = {Cold Spring Harbor Laboratory},
 abstract = {Transparent evaluations of FAIRness are increasingly required by a wide range of stakeholders, from scientists to publishers, funding agencies and policy makers. We propose a scalable, automatable framework to evaluate digital resources that encompasses measurable indicators, open source tools, and participation guidelines, which come together to accommodate domain relevant community-defined FAIR assessments. The components of the framework are: (1) Maturity Indicators - community-authored specifications that delimit a specific automatically-measurable FAIR behavior; (2) Compliance Tests - small Web apps that test digital resources against individual Maturity Indicators; and (3) the Evaluator, a Web application that registers, assembles, and applies community-relevant sets of Compliance Tests against a digital resource, and provides a detailed report about what a machine ``sees'' when it visits that resource. We discuss the technical and social considerations of FAIR assessments, and how this translates to our community-driven infrastructure. We then illustrate how the output of the Evaluator tool can serve as a roadmap to assist data stewards to incrementally and realistically improve the FAIRness of their resources.},
 author = {Wilkinson, Mark D and Dumontier, Michel and Sansone, Susanna-Assunta and da Silva Santos, Luiz Olavo Bonino and Prieto, Mario and Batista, Dominique and McQuilton, Peter and Kuhn, Tobias and Rocca-Serra, Philippe and Crosas, Mercè and Schultes, Erik},
 journal = {bioRxiv}
}
