Adversarial evaluation for open-domain dialogue generation. Bruni, E. & Fernández, R. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 284–288, Saarbrücken, Germany, August 2017. Association for Computational Linguistics.
Abstract: We investigate the potential of adversarial evaluation methods for open-domain dialogue generation systems, comparing the performance of a discriminative agent to that of humans on the same task. Our results show that the task is hard, both for automated models and humans, but that a discriminative agent can learn patterns that lead to above-chance performance.
