{"_id":"GL5xhuzh6ETBSBpm7","bibbaseid":"lalor-wu-yu-ciftcrowdinformedfinetuningtoimprovemachinelearningability-2017","author_short":["Lalor, J","Wu, H","Yu, H"],"bibdata":{"bibtype":"inproceedings","type":"inproceedings","title":"CIFT: Crowd-Informed Fine-Tuning to Improve Machine Learning Ability.","abstract":"tem Response Theory (IRT) allows for measuring ability of Machine Learning models as compared to a human population. However, it is difficult to create a large dataset to train the ability of deep neural network models (DNNs). We propose Crowd-Informed Fine-Tuning (CIFT) as a new training process, where a pre-trained model is fine-tuned with a specialized supplemental training set obtained via IRT model-fitting on a large set of crowdsourced response patterns. With CIFT we can leverage the specialized set of data obtained through IRT to inform parameter tuning in DNNs. We experiment with two loss functions in CIFT to represent (i) memorization of fine-tuning items and (ii) learning a probability distribution over potential labels that is similar to the crowdsourced distribution over labels to simulate crowd knowledge. Our results show that CIFT improves ability for a state-of-the-art DNN model for Recognizing Textual Entailment (RTE) tasks and is generalizable to a large-scale RTE test set.","author":[{"propositions":[],"lastnames":["Lalor"],"firstnames":["J"],"suffixes":[]},{"propositions":[],"lastnames":["Wu"],"firstnames":["H"],"suffixes":[]},{"propositions":[],"lastnames":["Yu"],"firstnames":["H"],"suffixes":[]}],"month":"February","year":"2017","bibtex":"@inproceedings{lalor_cift:_2017,\n\ttitle = {{CIFT}: {Crowd}-{Informed} {Fine}-{Tuning} to {Improve} {Machine} {Learning} {Ability}.},\n\tabstract = {tem Response Theory (IRT) allows for measuring ability of Machine Learning models as compared to a human population. However, it is difficult to create a large dataset to train the ability of deep neural network models (DNNs). We propose Crowd-Informed Fine-Tuning (CIFT) as a new training process, where a pre-trained model is fine-tuned with a specialized supplemental training set obtained via IRT model-fitting on a large set of crowdsourced response patterns. With CIFT we can leverage the specialized set of data obtained through IRT to inform parameter tuning in DNNs. We experiment with two loss functions in CIFT to represent (i) memorization of fine-tuning items and (ii) learning a probability distribution over potential labels that is similar to the crowdsourced distribution over labels to simulate crowd knowledge. Our results show that CIFT improves ability for a state-of-the-art DNN model for Recognizing Textual Entailment (RTE) tasks and is generalizable to a large-scale RTE test set.},\n\tauthor = {Lalor, J and Wu, H and Yu, H},\n\tmonth = feb,\n\tyear = {2017},\n}\n\n","author_short":["Lalor, J","Wu, H","Yu, H"],"key":"lalor_cift:_2017","id":"lalor_cift:_2017","bibbaseid":"lalor-wu-yu-ciftcrowdinformedfinetuningtoimprovemachinelearningability-2017","role":"author","urls":{},"metadata":{"authorlinks":{}},"html":""},"bibtype":"inproceedings","biburl":"http://fenway.cs.uml.edu/papers/pubs-all.bib","dataSources":["TqaA9miSB65nRfS5H"],"keywords":[],"search_terms":["cift","crowd","informed","fine","tuning","improve","machine","learning","ability","lalor","wu","yu"],"title":"CIFT: Crowd-Informed Fine-Tuning to Improve Machine Learning Ability.","year":2017}