Training Complex Models with Multi-Task Weak Supervision. Ratner, A., Hancock, B., Dunnmon, J., Sala, F., Pandey, S., & Ré, C. (2018). arXiv:1810.02840.
As machine learning models continue to increase in complexity, collecting large hand-labeled training sets has become one of the biggest roadblocks in practice. Instead, weaker forms of supervision that provide noisier but cheaper labels are often used. However, these weak supervision sources have diverse and unknown accuracies, may output correlated labels, and may label different tasks or apply at different levels of granularity. We propose a framework for integrating and modeling such weak supervision sources by viewing them as labeling different related sub-tasks of a problem, which we refer to as the multi-task weak supervision setting. We show that by solving a matrix completion-style problem, we can recover the accuracies of these multi-task sources given their dependency structure, but without any labeled data, leading to higher-quality supervision for training an end model. Theoretically, we show that the generalization error of models trained with this approach improves with the number of unlabeled data points, and characterize the scaling with respect to the task and dependency structures. On three fine-grained classification problems, we show that our approach leads to average gains of 20.2 points in accuracy over a traditional supervised approach, 6.8 points over a majority vote baseline, and 4.1 points over a previously proposed weak supervision method that models tasks separately.
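The estimation step described in the abstract can be made concrete in its simplest special case. The NumPy sketch below is an illustration, not the authors' code: it assumes binary labels and conditionally independent sources with symmetric noise, simulates hypothetical labeling sources, recovers their unknown accuracies from pairwise agreement statistics alone (no labeled data), and uses those estimates to weight the combined vote.

# A minimal, self-contained sketch (not the authors' released code) of the core
# idea in its simplest special case: binary labels, conditionally independent
# weak sources with symmetric noise. Agreement statistics between sources,
# computable without ground truth, determine the source accuracies.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_p = np.array([0.9, 0.8, 0.7, 0.65, 0.6])  # hypothetical per-source accuracies
m = len(true_p)

# Simulate a ground-truth label y in {-1, +1} and weak sources that flip it
# independently with probability 1 - p_i.
y = rng.choice([-1, 1], size=n)
L = np.stack([np.where(rng.random(n) < p, y, -y) for p in true_p])  # shape (m, n)

# With a_i := E[lam_i * y], conditional independence gives
# E[lam_i * lam_j] = a_i * a_j, so the label-free second-moment matrix of L has
# rank-one off-diagonal structure; recovering the a_i from it is the matrix
# completion-style step. In this toy case a "triplet" of sources suffices:
M = (L @ L.T) / n
a_hat = np.empty(m)
for i in range(m):
    j, k = [s for s in range(m) if s != i][:2]
    a_hat[i] = np.sqrt(abs(M[i, j] * M[i, k] / M[j, k]))

print("estimated a_i:", np.round(a_hat, 3))
print("true      a_i:", 2 * true_p - 1)

# Weight each source by its estimated log-odds and compare with an unweighted
# majority vote.
w = np.log((1 + a_hat) / (1 - a_hat))
print("weighted-vote accuracy:", np.mean(np.sign(w @ L) == y))
print("majority-vote accuracy:", np.mean(np.sign(L.sum(axis=0)) == y))

The paper generalizes well beyond this toy setting: sources may label different but related sub-tasks at different granularities, and their dependency structure is taken as given rather than assumed to be full conditional independence.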
@article{ratnerTrainingComplexModels2018,
  title = {Training Complex Models with Multi-Task Weak Supervision},
  author = {Ratner, Alexander and Hancock, Braden and Dunnmon, Jared and Sala, Frederic and Pandey, Shreyash and Ré, Christopher},
  date = {2018-10-05},
  archiveprefix = {arXiv},
  eprint = {1810.02840},
  primaryclass = {cs, stat},
  url = {http://arxiv.org/abs/1810.02840},
  urldate = {2019-04-16},
  keywords = {Computer Science - Machine Learning, Statistics - Machine Learning},
  abstract = {As machine learning models continue to increase in complexity, collecting large hand-labeled training sets has become one of the biggest roadblocks in practice. Instead, weaker forms of supervision that provide noisier but cheaper labels are often used. However, these weak supervision sources have diverse and unknown accuracies, may output correlated labels, and may label different tasks or apply at different levels of granularity. We propose a framework for integrating and modeling such weak supervision sources by viewing them as labeling different related sub-tasks of a problem, which we refer to as the multi-task weak supervision setting. We show that by solving a matrix completion-style problem, we can recover the accuracies of these multi-task sources given their dependency structure, but without any labeled data, leading to higher-quality supervision for training an end model. Theoretically, we show that the generalization error of models trained with this approach improves with the number of unlabeled data points, and characterize the scaling with respect to the task and dependency structures. On three fine-grained classification problems, we show that our approach leads to average gains of 20.2 points in accuracy over a traditional supervised approach, 6.8 points over a majority vote baseline, and 4.1 points over a previously proposed weak supervision method that models tasks separately.}
}
