Data Programming: Creating Large Training Sets, Quickly. Ratner, A., De Sa, C., Wu, S., Selsam, D., & Ré, C. (2016). arXiv:1605.07723.
Large labeled training sets are the critical building blocks of supervised learning methods and are key enablers of deep learning techniques. For some applications, creating labeled training sets is the most time-consuming and expensive part of applying machine learning. We therefore propose a paradigm for the programmatic creation of training sets called data programming in which users express weak supervision strategies or domain heuristics as labeling functions, which are programs that label subsets of the data, but that are noisy and may conflict. We show that by explicitly representing this training set labeling process as a generative model, we can "denoise" the generated training set, and establish theoretically that we can recover the parameters of these generative models in a handful of settings. We then show how to modify a discriminative loss function to make it noise-aware, and demonstrate our method over a range of discriminative models including logistic regression and LSTMs. Experimentally, on the 2014 TAC-KBP Slot Filling challenge, we show that data programming would have led to a new winning score, and also show that applying data programming to an LSTM model leads to a TAC-KBP score almost 6 F1 points over a state-of-the-art LSTM baseline (and into second place in the competition). Additionally, in initial user studies we observed that data programming may be an easier way for non-experts to create machine learning models when training data is limited or unavailable.
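To make the core ideas concrete, here is a minimal, hypothetical Python sketch (not code from the paper): two hand-written labeling functions that can conflict, a soft-label combiner that stands in for the paper's learned generative model, and an expected-loss form of a noise-aware logistic objective. All function names and heuristics below are illustrative assumptions.

import math

# Illustrative sketch of data programming's labeling-function interface.
# The paper learns per-function accuracies with a generative model; the
# unweighted vote below is a crude stand-in for that learned combiner.

POSITIVE, NEGATIVE, ABSTAIN = 1, -1, 0

def lf_mentions_spouse(text: str) -> int:
    # Weak positive heuristic: "spouse" suggests the relation holds.
    return POSITIVE if "spouse" in text.lower() else ABSTAIN

def lf_mentions_ex(text: str) -> int:
    # Conflicting negative heuristic: "ex-wife"/"ex-husband" suggests it does not.
    t = text.lower()
    return NEGATIVE if ("ex-wife" in t or "ex-husband" in t) else ABSTAIN

LABELING_FUNCTIONS = [lf_mentions_spouse, lf_mentions_ex]

def soft_label(text: str) -> float:
    # Combine noisy, possibly conflicting votes into P(y = +1); 0.5 = no signal.
    votes = [v for v in (lf(text) for lf in LABELING_FUNCTIONS) if v != ABSTAIN]
    if not votes:
        return 0.5
    return (1 + sum(votes) / len(votes)) / 2

def noise_aware_logistic_loss(score: float, p_pos: float) -> float:
    # Expected logistic loss under the soft label:
    # p_pos * loss(score, +1) + (1 - p_pos) * loss(score, -1).
    return p_pos * math.log1p(math.exp(-score)) + (1 - p_pos) * math.log1p(math.exp(score))

print(soft_label("Barack is Michelle's spouse."))      # 1.0
print(soft_label("He met his ex-wife at the gala."))   # 0.0
print(noise_aware_logistic_loss(2.0, 1.0))             # ~0.13

In the paper, the combiner's weights (the labeling functions' accuracies and coverages) are estimated without ground-truth labels by fitting the generative model to the observed pattern of agreements and conflicts among the functions; the resulting probabilistic labels then train the discriminative model through the noise-aware loss.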
@article{ratnerDataProgrammingCreating2016,
  archivePrefix = {arXiv},
  eprinttype = {arxiv},
  eprint = {1605.07723},
  primaryClass = {cs, stat},
  title = {Data Programming: Creating Large Training Sets, Quickly},
  url = {http://arxiv.org/abs/1605.07723},
  shorttitle = {Data Programming},
  urldate = {2019-04-16},
  date = {2016-05-25},
  keywords = {Statistics - Machine Learning,Computer Science - Artificial Intelligence,Computer Science - Machine Learning},
  author = {Ratner, Alexander and De Sa, Christopher and Wu, Sen and Selsam, Daniel and Ré, Christopher}
}
