Measuring Compositional Generalization: A Comprehensive Method on Realistic Data. Keysers, D., Schärli, N., Scales, N., Buisman, H., Furrer, D., Kashubin, S., Momchev, N., Sinopalnikov, D., Stafiniak, L., Tihon, T., Tsarkov, D., Wang, X., van Zee, M., & Bousquet, O. In International Conference on Learning Representations, 2020.
State-of-the-art machine learning methods exhibit limited compositional generalization. At the same time, there is a lack of realistic benchmarks that comprehensively measure this ability, which makes it challenging to find and evaluate improvements. We introduce a novel method to systematically construct such benchmarks by maximizing compound divergence while guaranteeing a small atom divergence between train and test sets, and we quantitatively compare this method to other approaches for creating compositional generalization benchmarks. We present a large and realistic natural language question answering dataset that is constructed according to this method, and we use it to analyze the compositional generalization ability of three machine learning architectures. We find that they fail to generalize compositionally and that there is a surprisingly strong negative correlation between compound divergence and accuracy. We also demonstrate how our method can be used to create new compositionality benchmarks on top of the existing SCAN dataset, which confirms these findings.
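The split-construction idea in the abstract rests on a divergence between the atom (or compound) frequency distributions of the train and test sets; the paper measures this with a Chernoff-coefficient-based divergence, D_α(P‖Q) = 1 − Σ_k p_k^α q_k^(1−α), using α = 0.5 for atoms and α = 0.1 for compounds. A minimal sketch of that measure, with illustrative toy counts (the atom names below are made up for the example):

```python
from collections import Counter

def chernoff_divergence(p_counts, q_counts, alpha):
    """1 minus the Chernoff coefficient between two frequency distributions.

    Returns 0.0 for identical distributions and 1.0 for distributions
    with disjoint support.
    """
    p_total = sum(p_counts.values())
    q_total = sum(q_counts.values())
    keys = set(p_counts) | set(q_counts)
    return 1.0 - sum(
        (p_counts.get(k, 0) / p_total) ** alpha
        * (q_counts.get(k, 0) / q_total) ** (1 - alpha)
        for k in keys
    )

# Toy atom counts for a train/test split (hypothetical values).
train_atoms = Counter({"cube": 5, "largest": 5, "who directed": 4})
test_atoms = Counter({"cube": 4, "largest": 5, "who directed": 5})

atom_div = chernoff_divergence(train_atoms, test_atoms, alpha=0.5)
```

A compositional split would then be searched for by maximizing this quantity over compound distributions (α = 0.1) while keeping the atom version (α = 0.5) below a small threshold.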
@inproceedings{keysersMeasuringCompositionalGeneralization2020,
  title = {Measuring {{Compositional Generalization}}: {{A Comprehensive Method}} on {{Realistic Data}}},
  shorttitle = {Measuring {{Compositional Generalization}}},
  booktitle = {International {{Conference}} on {{Learning Representations}}},
  author = {Keysers, Daniel and Sch{\"a}rli, Nathanael and Scales, Nathan and Buisman, Hylke and Furrer, Daniel and Kashubin, Sergii and Momchev, Nikola and Sinopalnikov, Danila and Stafiniak, Lukasz and Tihon, Tibor and Tsarkov, Dmitry and Wang, Xiao and van Zee, Marc and Bousquet, Olivier},
  year = {2020},
  url = {https://openreview.net/forum?id=SygcCnNKwr},
  urldate = {2024-03-18},
  abstract = {State-of-the-art machine learning methods exhibit limited compositional generalization. At the same time, there is a lack of realistic benchmarks that comprehensively measure this ability, which makes it challenging to find and evaluate improvements. We introduce a novel method to systematically construct such benchmarks by maximizing compound divergence while guaranteeing a small atom divergence between train and test sets, and we quantitatively compare this method to other approaches for creating compositional generalization benchmarks. We present a large and realistic natural language question answering dataset that is constructed according to this method, and we use it to analyze the compositional generalization ability of three machine learning architectures. We find that they fail to generalize compositionally and that there is a surprisingly strong negative correlation between compound divergence and accuracy. We also demonstrate how our method can be used to create new compositionality benchmarks on top of the existing SCAN dataset, which confirms these findings.},
  langid = {english},
}
