Evaluating FAIR-Compliance Through an Objective, Automated, Community-Governed Framework. Wilkinson, M. D., Dumontier, M., Sansone, S.-A., Bonino da Silva Santos, L. O., Prieto, M., Gautier, J., McQuilton, P., Murphy, D., Crosas, M., & Schultes, E. bioRxiv, 2018.
With the increased adoption of the FAIR Principles, a wide range of stakeholders, from scientists to publishers, funding agencies, and policy makers, are seeking ways to transparently evaluate resource FAIRness. We describe the FAIR Evaluator, a software infrastructure to register and execute tests of compliance with the recently published FAIR Metrics. The Evaluator enables digital resources to be assessed objectively and transparently. We illustrate its application to three widely used generalist repositories (Dataverse, Dryad, and Zenodo) and report their feedback. Evaluations allow communities to select relevant Metric subsets to deliver FAIRness measurements in diverse and specialized applications. Evaluations are executed in a semi-automated manner through Web Forms filled in by a user, or through a JSON-based API. A comparison of manual vs. automated evaluation reveals that automated evaluations are generally stricter, resulting in lower, though more accurate, FAIRness scores. Finally, we highlight the need for enhanced infrastructure, such as standards registries like FAIRsharing, as well as additional community involvement in the creation of domain-specific data infrastructure.
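The abstract notes that evaluations can be driven through a JSON-based API as well as through Web Forms. The paper excerpt here does not specify the Evaluator's actual endpoints or payload schema, so the following is a minimal hypothetical sketch: the field names (`resource`, `metrics`), the metric identifiers, and the simple pass/fail scoring convention are all assumptions for illustration, not the FAIR Evaluator's real interface.

```python
import json


def build_evaluation_request(resource_guid, metric_ids):
    """Assemble a hypothetical JSON evaluation request.

    The field names are illustrative; the real FAIR Evaluator API
    may use a different schema entirely.
    """
    return json.dumps({"resource": resource_guid, "metrics": metric_ids})


def fairness_score(results):
    """Reduce per-metric pass/fail results to a fraction in [0, 1].

    `results` maps a metric id to a bool (whether its compliance test
    passed). This scoring convention is an assumption for illustration.
    """
    if not results:
        return 0.0
    return sum(results.values()) / len(results)


# Example: request an evaluation of one resource against two
# hypothetical metric ids, then score the (mocked) results.
payload = build_evaluation_request(
    "https://doi.org/10.5281/zenodo.1234567",  # example resource GUID
    ["FM-F1A", "FM-A1.1"],                     # hypothetical metric ids
)
score = fairness_score({"FM-F1A": True, "FM-A1.1": False})
```

Reducing results to a single fraction mirrors the abstract's point that stricter automated tests yield lower scores: each metric test that fails under automation directly lowers the fraction, whereas a lenient manual reviewer might have counted it as passing.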
@article{Wilkinson2018,
 title = {Evaluating FAIR-Compliance Through an Objective, Automated, Community-Governed Framework},
 type = {article},
 year = {2018},
 websites = {http://biorxiv.org/content/early/2018/09/16/418376.abstract},
 month = {9},
 day = {16},
 citation_key = {Wilkinson2018},
 abstract = {With the increased adoption of the FAIR Principles, a wide range of stakeholders, from scientists to publishers, funding agencies, and policy makers, are seeking ways to transparently evaluate resource FAIRness. We describe the FAIR Evaluator, a software infrastructure to register and execute tests of compliance with the recently published FAIR Metrics. The Evaluator enables digital resources to be assessed objectively and transparently. We illustrate its application to three widely used generalist repositories (Dataverse, Dryad, and Zenodo) and report their feedback. Evaluations allow communities to select relevant Metric subsets to deliver FAIRness measurements in diverse and specialized applications. Evaluations are executed in a semi-automated manner through Web Forms filled in by a user, or through a JSON-based API. A comparison of manual vs. automated evaluation reveals that automated evaluations are generally stricter, resulting in lower, though more accurate, FAIRness scores. Finally, we highlight the need for enhanced infrastructure, such as standards registries like FAIRsharing, as well as additional community involvement in the creation of domain-specific data infrastructure.},
 bibtype = {article},
 author = {Wilkinson, Mark D and Dumontier, Michel and Sansone, Susanna-Assunta and Bonino da Silva Santos, Luiz Olavo and Prieto, Mario and Gautier, Julian and McQuilton, Peter and Murphy, Derek and Crosas, Merce and Schultes, Erik},
 journal = {bioRxiv}
}
