Scalable Agile Framework for Execution in AI for Medical AI Ethics Policy Design in Small- and Medium-Sized Enterprises. Nemteanu, I., Mancebo, A., Jr., Joe, L., Lopez, R., Lopez, P., & Pettine, W. W. Journal of Medical Internet Research, 28(1):e80028, JMIR Publications Inc., Toronto, Canada, February, 2026.
Artificial intelligence (AI) is transforming patient care, but it also raises ethical questions, such as bias and transparency. While a range of well-established frameworks exist to guide responsible AI practice, most were designed for academic or regulatory settings and can be hard to operationalize within fast-moving, resource-limited small and medium-sized enterprises (SMEs). We report on the collaborative design of the SAFE-AI (Scalable Agile Framework for Execution in AI), an approach that embeds ethical safeguards, including fairness, transparency, responsibility metrics, and continuous monitoring, directly into standard Agile development cycles. In keeping with established Agile principles, SAFE-AI provides “just enough structure” to integrate ethical oversight into existing workflows without prescribing extensive new governance layers. Similar to other Agile frameworks, such as Scrum, which is described as a “lightweight framework” designed to help teams solve complex problems through iterative learning and minimal process overhead, SAFE-AI aims to remain practical for organizations that may not have dedicated ethics or compliance staff. Rather than simplifying technical methods, SAFE-AI simplifies when and how ethical review is triggered and documented, making responsible AI practices feasible even in environments with limited ethics, governance, or compliance resources. SAFE-AI assumes the presence of qualified data scientists and engineers, and it does not replace the need for statistical or technical expertise but instead provides a lightweight structure for coordinating and documenting work that those experts already perform. We followed a design-science, practice-oriented approach over 20 weeks. After a discovery workshop, a cross-functional team was assembled that included SME employees, ethics researchers, and academic partners.
The SME’s role was limited to informing design constraints and feasibility considerations during the co-design phase. No operational pilot or production deployment was conducted as part of this study. To reduce the risk of internal design bias and improve generalizability, we also consulted external stakeholders through structured feedback sessions, including clinicians, health care domain experts, and regulatory specialists. Their feedback was incorporated into each prototype-feedback cycle, ensuring that priorities reflected not only the SME’s immediate context but also broader clinical and regulatory perspectives. The co-design process produced a 4-phase SAFE-AI life cycle: discovery, assessment, development, and monitoring. SAFE-AI’s phase-specific checklists meld acceptance, fairness, and transparency metrics into each Agile sprint. A novel scenario-based probability analogy mapping method was added to translate model risk and uncertainty into plain-language narratives for nontechnical stakeholders, forming the framework’s core “responsibility metrics” layer. SAFE-AI is presented as a proposed framework showing that meaningful ethical safeguards can be embedded easily within common workflows used by SMEs that already use basic Agile or iterative development practices. Its checklist-driven phases and automatic review triggers provide a defensible way to track fairness, transparency, and responsibility throughout the model lifecycle.
@article{nemteanu_scalable_2026,
title = {Scalable {Agile} {Framework} for {Execution} in {AI} for {Medical} {AI} {Ethics} {Policy} {Design} in {Small}- and {Medium}-{Sized} {Enterprises}},
volume = {28},
url = {https://www.jmir.org/2026/1/e80028},
doi = {10.2196/80028},
abstract = {Artificial intelligence (AI) is transforming patient care, but it also raises ethical questions, such as bias and transparency. While a range of well-established frameworks exist to guide responsible AI practice, most were designed for academic or regulatory settings and can be hard to operationalize within fast-moving, resource-limited small and medium-sized enterprises (SMEs). We report on the collaborative design of the SAFE-AI (Scalable Agile Framework for Execution in AI), an approach that embeds ethical safeguards, including fairness, transparency, responsibility metrics, and continuous monitoring, directly into standard Agile development cycles. In keeping with established Agile principles, SAFE-AI provides “just enough structure” to integrate ethical oversight into existing workflows without prescribing extensive new governance layers. Similar to other Agile frameworks, such as Scrum, which is described as a “lightweight framework” designed to help teams solve complex problems through iterative learning and minimal process overhead, SAFE-AI aims to remain practical for organizations that may not have dedicated ethics or compliance staff. Rather than simplifying technical methods, SAFE-AI simplifies when and how ethical review is triggered and documented, making responsible AI practices feasible even in environments with limited ethics, governance, or compliance resources. SAFE-AI assumes the presence of qualified data scientists and engineers, and it does not replace the need for statistical or technical expertise but instead provides a lightweight structure for coordinating and documenting work that those experts already perform. We followed a design-science, practice-oriented approach over 20 weeks. After a discovery workshop, a cross-functional team was assembled that included SME employees, ethics researchers, and academic partners. 
The SME’s role was limited to informing design constraints and feasibility considerations during the co-design phase. No operational pilot or production deployment was conducted as part of this study. To reduce the risk of internal design bias and improve generalizability, we also consulted external stakeholders through structured feedback sessions, including clinicians, health care domain experts, and regulatory specialists. Their feedback was incorporated into each prototype-feedback cycle, ensuring that priorities reflected not only the SME’s immediate context but also broader clinical and regulatory perspectives. The co-design process produced a 4-phase SAFE-AI life cycle: discovery, assessment, development, and monitoring. SAFE-AI’s phase-specific checklists meld acceptance, fairness, and transparency metrics into each Agile sprint. A novel scenario-based probability analogy mapping method was added to translate model risk and uncertainty into plain-language narratives for nontechnical stakeholders, forming the framework’s core “responsibility metrics” layer. SAFE-AI is presented as a proposed framework showing that meaningful ethical safeguards can be embedded easily within common workflows used by SMEs that already use basic Agile or iterative development practices. Its checklist-driven phases and automatic review triggers provide a defensible way to track fairness, transparency, and responsibility throughout the model lifecycle.},
language = {EN},
number = {1},
urldate = {2026-02-26},
journal = {Journal of Medical Internet Research},
publisher = {JMIR Publications Inc., Toronto, Canada},
author = {Nemteanu, Ion and Mancebo, Jr., Adir and Joe, Leslie and Lopez, Ryan and Lopez, Patricia and Pettine, Warren Woodrich},
month = feb,
year = {2026},
pages = {e80028},
}