A Psychology-based Unified Dynamic Framework for Curriculum Learning. Meng, G., Zeng, Q., Lalor, J. P., & Yu, H. August 2024. arXiv:2408.05326 [cs].

Paper: http://arxiv.org/abs/2408.05326

Abstract: Directly learning from examples of random difficulty levels is often challenging for both humans and machine learning models. A more effective strategy is to expose learners to examples in a progressive order, from easy to difficult. Curriculum Learning (CL) has been proposed to implement this strategy in machine learning model training. However, two key challenges persist in CL framework design: defining the difficulty of training data and determining the appropriate amount of data to input at each training step. This paper presents a Psychology-based Unified Dynamic Framework for Curriculum Learning (PUDF), drawing inspiration from psychometrics. We quantify the difficulty of training data by applying Item Response Theory (IRT) to responses from Artificial Crowds (AC). This theory-driven IRT-AC approach yields global (i.e., model-independent) and interpretable difficulty values. Leveraging IRT, we propose a Dynamic Data Selection via Model Ability Estimation (DDS-MAE) strategy to schedule the appropriate amount of data during model training. Since our difficulty labeling and model ability estimation are grounded in the same theory, namely IRT, their values lie on the same scale and are directly comparable, potentially leading to faster convergence than other CL methods. Experimental results demonstrate that fine-tuning pre-trained language models with PUDF enhances their performance on the GLUE benchmark. Moreover, PUDF surpasses other state-of-the-art (SOTA) CL methods on GLUE. We further explore the components of PUDF, namely the difficulty measurer (IRT-AC) and the training scheduler (DDS-MAE), qualitatively and quantitatively. Lastly, we conduct an ablation study to clarify which components of PUDF contribute to faster convergence and higher accuracy.
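For illustration, the sketch below shows the two ideas from the abstract in miniature, under stated assumptions: a one-parameter (Rasch) IRT model fitted by gradient ascent on a binary response matrix from an artificial crowd of models, and a selection rule that keeps training items whose estimated difficulty does not exceed the current model's estimated ability. The paper's actual IRT variant, estimation procedure, and scheduling details may differ; the function names (fit_rasch, estimate_ability, select_training_data) are hypothetical, not from the paper.

	import numpy as np

	def sigmoid(x):
	    return 1.0 / (1.0 + np.exp(-x))

	def fit_rasch(responses, n_iters=500, lr=0.1):
	    """Jointly estimate respondent abilities (theta) and item difficulties (b)
	    for a 1PL/Rasch model by gradient ascent on the Bernoulli log-likelihood.
	    `responses` is a binary matrix of shape (n_models, n_items):
	    1 = correct, 0 = incorrect. (Assumed setup, not the paper's exact method.)"""
	    n_models, n_items = responses.shape
	    theta = np.zeros(n_models)
	    b = np.zeros(n_items)
	    for _ in range(n_iters):
	        p = sigmoid(theta[:, None] - b[None, :])  # predicted P(correct)
	        err = responses - p                       # gradient of the log-likelihood
	        theta += lr * err.sum(axis=1) / n_items
	        b -= lr * err.sum(axis=0) / n_models
	        b -= b.mean()                             # anchor the scale (identifiability)
	    return theta, b

	def estimate_ability(correct, b, n_iters=200, lr=0.1):
	    """MLE of a single model's ability given its binary responses on items
	    whose difficulties b are held fixed (the MAE step of DDS-MAE)."""
	    theta = 0.0
	    for _ in range(n_iters):
	        p = sigmoid(theta - b)
	        theta += lr * (correct - p).mean()
	    return theta

	def select_training_data(b, theta):
	    """DDS-MAE-style selection rule, as described in the abstract: train on
	    the items whose IRT difficulty does not exceed the current ability."""
	    return np.where(b <= theta)[0]

	# Toy usage: an artificial crowd of 50 models answering 200 items.
	rng = np.random.default_rng(0)
	true_theta = rng.normal(size=50)
	true_b = rng.normal(size=200)
	responses = (rng.random((50, 200))
	             < sigmoid(true_theta[:, None] - true_b[None, :])).astype(float)
	_, b_hat = fit_rasch(responses)             # IRT-AC: global, interpretable difficulties
	probe = (rng.random(200) < sigmoid(0.5 - true_b)).astype(float)
	theta_hat = estimate_ability(probe, b_hat)  # current model's ability estimate
	batch_ids = select_training_data(b_hat, theta_hat)
	print(f"ability ~ {theta_hat:.2f}, selected {batch_ids.size} / {b_hat.size} items")

Because difficulty and ability live on the same latent scale in IRT, the comparison b <= theta in the selection step is meaningful without any per-model calibration; this shared scale is the property the abstract credits for faster convergence.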
@misc{meng_psychology-based_2024,
title = {A {Psychology}-based {Unified} {Dynamic} {Framework} for {Curriculum} {Learning}},
url = {http://arxiv.org/abs/2408.05326},
abstract = {Directly learning from examples of random difficulty levels is often challenging for both humans and machine learning models. A more effective strategy involves exposing learners to examples in a progressive order, from easy to difficult. Curriculum Learning (CL) has been proposed to implement this strategy in machine learning model training. However, two key challenges persist in CL framework design: defining the difficulty of training data and determining the appropriate amount of data to input at each training step. This paper presents a Psychology-based Unified Dynamic Framework for Curriculum Learning (PUDF), drawing inspiration from psychometrics. We quantify the difficulty of training data by applying Item Response Theory (IRT) to responses from Artificial Crowds (AC). This theory-driven IRT-AC approach leads to global (i.e., model-independent) and interpretable difficulty values. Leveraging IRT, we propose a Dynamic Data Selection via Model Ability Estimation (DDS-MAE) strategy to schedule the appropriate amount of data during model training. Since our difficulty labeling and model ability estimation are based on a consistent theory, namely IRT, their values are comparable within the same scope, potentially leading to a faster convergence compared to the other CL methods. Experimental results demonstrate that fine-tuning pre-trained language models with PUDF enhances their performance on the GLUE benchmark. Moreover, PUDF surpasses other state-of-the-art (SOTA) CL methods on the GLUE benchmark. We further explore the components of PUDF, namely the difficulty measurer (IRT-AC) and the training scheduler (DDS-MAE) qualitatively and quantitatively. Lastly, we conduct an ablation study to clarify which components of PUDF contribute to faster convergence and higher accuracy.},
urldate = {2024-09-03},
publisher = {arXiv},
author = {Meng, Guangyu and Zeng, Qingkai and Lalor, John P. and Yu, Hong},
month = aug,
year = {2024},
note = {arXiv:2408.05326 [cs]},
keywords = {Computer Science - Computation and Language},
}
{"_id":"aC68HK3ZCjDXPbjmL","bibbaseid":"meng-zeng-lalor-yu-apsychologybasedunifieddynamicframeworkforcurriculumlearning-2024","author_short":["Meng, G.","Zeng, Q.","Lalor, J. P.","Yu, H."],"bibdata":{"bibtype":"misc","type":"misc","title":"A Psychology-based Unified Dynamic Framework for Curriculum Learning","url":"http://arxiv.org/abs/2408.05326","abstract":"Directly learning from examples of random difficulty levels is often challenging for both humans and machine learning models. A more effective strategy involves exposing learners to examples in a progressive order, from easy to difficult. Curriculum Learning (CL) has been proposed to implement this strategy in machine learning model training. However, two key challenges persist in CL framework design: defining the difficulty of training data and determining the appropriate amount of data to input at each training step. This paper presents a Psychology-based Unified Dynamic Framework for Curriculum Learning (PUDF), drawing inspiration from psychometrics. We quantify the difficulty of training data by applying Item Response Theory (IRT) to responses from Artificial Crowds (AC). This theory-driven IRT-AC approach leads to global (i.e., model-independent) and interpretable difficulty values. Leveraging IRT, we propose a Dynamic Data Selection via Model Ability Estimation (DDS-MAE) strategy to schedule the appropriate amount of data during model training. Since our difficulty labeling and model ability estimation are based on a consistent theory, namely IRT, their values are comparable within the same scope, potentially leading to a faster convergence compared to the other CL methods. Experimental results demonstrate that fine-tuning pre-trained language models with PUDF enhances their performance on the GLUE benchmark. Moreover, PUDF surpasses other state-of-the-art (SOTA) CL methods on the GLUE benchmark. We further explore the components of PUDF, namely the difficulty measurer (IRT-AC) and the training scheduler (DDS-MAE) qualitatively and quantitatively. Lastly, we conduct an ablation study to clarify which components of PUDF contribute to faster convergence and higher accuracy.","urldate":"2024-09-03","publisher":"arXiv","author":[{"propositions":[],"lastnames":["Meng"],"firstnames":["Guangyu"],"suffixes":[]},{"propositions":[],"lastnames":["Zeng"],"firstnames":["Qingkai"],"suffixes":[]},{"propositions":[],"lastnames":["Lalor"],"firstnames":["John","P."],"suffixes":[]},{"propositions":[],"lastnames":["Yu"],"firstnames":["Hong"],"suffixes":[]}],"month":"August","year":"2024","note":"arXiv:2408.05326 [cs]","keywords":"Computer Science - Computation and Language","bibtex":"@misc{meng_psychology-based_2024,\n\ttitle = {A {Psychology}-based {Unified} {Dynamic} {Framework} for {Curriculum} {Learning}},\n\turl = {http://arxiv.org/abs/2408.05326},\n\tabstract = {Directly learning from examples of random difficulty levels is often challenging for both humans and machine learning models. A more effective strategy involves exposing learners to examples in a progressive order, from easy to difficult. Curriculum Learning (CL) has been proposed to implement this strategy in machine learning model training. However, two key challenges persist in CL framework design: defining the difficulty of training data and determining the appropriate amount of data to input at each training step. This paper presents a Psychology-based Unified Dynamic Framework for Curriculum Learning (PUDF), drawing inspiration from psychometrics. 
We quantify the difficulty of training data by applying Item Response Theory (IRT) to responses from Artificial Crowds (AC). This theory-driven IRT-AC approach leads to global (i.e., model-independent) and interpretable difficulty values. Leveraging IRT, we propose a Dynamic Data Selection via Model Ability Estimation (DDS-MAE) strategy to schedule the appropriate amount of data during model training. Since our difficulty labeling and model ability estimation are based on a consistent theory, namely IRT, their values are comparable within the same scope, potentially leading to a faster convergence compared to the other CL methods. Experimental results demonstrate that fine-tuning pre-trained language models with PUDF enhances their performance on the GLUE benchmark. Moreover, PUDF surpasses other state-of-the-art (SOTA) CL methods on the GLUE benchmark. We further explore the components of PUDF, namely the difficulty measurer (IRT-AC) and the training scheduler (DDS-MAE) qualitatively and quantitatively. Lastly, we conduct an ablation study to clarify which components of PUDF contribute to faster convergence and higher accuracy.},\n\turldate = {2024-09-03},\n\tpublisher = {arXiv},\n\tauthor = {Meng, Guangyu and Zeng, Qingkai and Lalor, John P. and Yu, Hong},\n\tmonth = aug,\n\tyear = {2024},\n\tnote = {arXiv:2408.05326 [cs]},\n\tkeywords = {Computer Science - Computation and Language},\n}\n\n","author_short":["Meng, G.","Zeng, Q.","Lalor, J. P.","Yu, H."],"key":"meng_psychology-based_2024","id":"meng_psychology-based_2024","bibbaseid":"meng-zeng-lalor-yu-apsychologybasedunifieddynamicframeworkforcurriculumlearning-2024","role":"author","urls":{"Paper":"http://arxiv.org/abs/2408.05326"},"keyword":["Computer Science - Computation and Language"],"metadata":{"authorlinks":{}},"html":""},"bibtype":"misc","biburl":"http://fenway.cs.uml.edu/papers/pubs-all.bib","dataSources":["TqaA9miSB65nRfS5H"],"keywords":["computer science - computation and language"],"search_terms":["psychology","based","unified","dynamic","framework","curriculum","learning","meng","zeng","lalor","yu"],"title":"A Psychology-based Unified Dynamic Framework for Curriculum Learning","year":2024}