Data-efficient goal-oriented conversation with dialogue knowledge transfer networks. Shalyminov, I., Lee, S., Eshghi, A., & Lemon, O. 2019. arXiv.

Abstract: Goal-oriented dialogue systems are now being widely adopted in industry, where maintaining a rapid prototyping cycle for new products and domains is of key importance. Data-driven dialogue system development has to be adapted to meet this requirement; reducing the amount of data and annotations needed to train such systems is therefore a central research problem. In this paper, we present the Dialogue Knowledge Transfer Network (DiKTNet), a state-of-the-art approach to goal-oriented dialogue generation that uses only a few example dialogues (i.e. few-shot learning), none of which has to be annotated. We achieve this with two-stage training. First, we perform unsupervised dialogue representation pre-training on a large source of goal-oriented dialogues in multiple domains, the MetaLWOz corpus. Second, at the transfer stage, we train DiKTNet using this representation together with two other textual knowledge sources of differing generality: the ELMo encoder and the source domains of the main dataset, the Stanford Multi-Domain dialogue corpus. We evaluate our model on this dataset in terms of BLEU and Entity F1 scores, and show that our approach significantly and consistently improves upon a series of baseline models as well as upon the previous state-of-the-art dialogue generation model, ZSDG. The improvement upon the latter, up to 10% in Entity F1 and an average of 3% in BLEU, is achieved using only the equivalent of 10% of ZSDG's in-domain training data.
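Since the reported results use Entity F1, which is less widely known than BLEU, here is a minimal sketch of one common formulation of that metric: F1 over the task entities mentioned in a generated response versus the reference response. This is an illustration under that assumption, not the paper's implementation; the entity values in the usage example are made up, and the paper's exact entity extraction and matching rules may differ.

def entity_f1(predicted_entities, gold_entities):
    """Precision, recall, and F1 over the entities in one response pair."""
    pred, gold = set(predicted_entities), set(gold_entities)
    if not pred and not gold:
        return 1.0, 1.0, 1.0  # nothing to find and nothing claimed
    tp = len(pred & gold)  # entities the system produced correctly
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return precision, recall, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)

# Hypothetical example from a navigation-style dialogue: the system
# mentions two of the three entities present in the reference response.
p, r, f = entity_f1(["201_miller_avenue", "no_traffic"],
                    ["201_miller_avenue", "no_traffic", "home"])
print(f"P={p:.2f} R={r:.2f} F1={f:.2f}")  # -> P=1.00 R=0.67 F1=0.80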
@misc{shalyminov2019diktnet,
  title = {Data-efficient goal-oriented conversation with dialogue knowledge transfer networks},
  author = {Shalyminov, I. and Lee, S. and Eshghi, A. and Lemon, O.},
  year = {2019},
  source = {arXiv}
}
{"_id":"un2KjhC2fxcLpFyxz","bibbaseid":"shalyminov-lee-eshghi-lemon-dataefficientgoalorientedconversationwithdialogueknowledgetransfernetworks-2019","author_short":["Shalyminov, I.","Lee, S.","Eshghi, A.","Lemon, O."],"bibdata":{"title":"Data-efficient goal-oriented conversation with dialogue knowledge transfer networks","type":"misc","year":"2019","source":"arXiv","id":"aa4adda5-5691-30ec-a0ae-17c626fef900","created":"2020-11-06T23:59:00.000Z","file_attached":false,"profile_id":"d7d2e6da-aa5b-3ab3-b3f2-a5350adf574a","last_modified":"2020-11-10T23:36:50.345Z","read":false,"starred":false,"authored":"true","confirmed":false,"hidden":false,"private_publication":false,"abstract":"Copyright © 2019, arXiv, All rights reserved. Goal-oriented dialogue systems are now being widely adopted in industry where it is of key importance to maintain a rapid prototyping cycle for new products and domains. Data-driven dialogue system development has to be adapted to meet this requirement — therefore, reducing the amount of data and annotations necessary for training such systems is a central research problem. In this paper, we present the Dialogue Knowledge Transfer Network (DiKTNet), a state-of-the-art approach to goal-oriented dialogue generation which only uses a few example dialogues (i.e. few-shot learning), none of which has to be annotated. We achieve this by performing a 2-stage training. Firstly, we perform unsupervised dialogue representation pre-training on a large source of goal-oriented dialogues in multiple domains, the MetaLWOz corpus. Secondly, at the transfer stage, we train DiKTNet using this representation together with 2 other textual knowledge sources with different levels of generality: ELMo encoder and the main dataset’s source domains. Our main dataset is the Stanford Multi-Domain dialogue corpus. We evaluate our model on it in terms of BLEU and Entity F1 scores, and show that our approach significantly and consistently improves upon a series of baseline models as well as over the previous state-of-the-art dialogue generation model, ZSDG. The improvement upon the latter — up to 10% in Entity F1 and the average of 3% in BLEU score — is achieved using only the equivalent of 10% of ZSDG’s in-domain training data.","bibtype":"misc","author":"Shalyminov, I. and Lee, S. and Eshghi, A. and Lemon, O.","bibtex":"@misc{\n title = {Data-efficient goal-oriented conversation with dialogue knowledge transfer networks},\n type = {misc},\n year = {2019},\n source = {arXiv},\n id = {aa4adda5-5691-30ec-a0ae-17c626fef900},\n created = {2020-11-06T23:59:00.000Z},\n file_attached = {false},\n profile_id = {d7d2e6da-aa5b-3ab3-b3f2-a5350adf574a},\n last_modified = {2020-11-10T23:36:50.345Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {false},\n hidden = {false},\n private_publication = {false},\n abstract = {Copyright © 2019, arXiv, All rights reserved. Goal-oriented dialogue systems are now being widely adopted in industry where it is of key importance to maintain a rapid prototyping cycle for new products and domains. Data-driven dialogue system development has to be adapted to meet this requirement — therefore, reducing the amount of data and annotations necessary for training such systems is a central research problem. In this paper, we present the Dialogue Knowledge Transfer Network (DiKTNet), a state-of-the-art approach to goal-oriented dialogue generation which only uses a few example dialogues (i.e. few-shot learning), none of which has to be annotated. 
We achieve this by performing a 2-stage training. Firstly, we perform unsupervised dialogue representation pre-training on a large source of goal-oriented dialogues in multiple domains, the MetaLWOz corpus. Secondly, at the transfer stage, we train DiKTNet using this representation together with 2 other textual knowledge sources with different levels of generality: ELMo encoder and the main dataset’s source domains. Our main dataset is the Stanford Multi-Domain dialogue corpus. We evaluate our model on it in terms of BLEU and Entity F1 scores, and show that our approach significantly and consistently improves upon a series of baseline models as well as over the previous state-of-the-art dialogue generation model, ZSDG. The improvement upon the latter — up to 10% in Entity F1 and the average of 3% in BLEU score — is achieved using only the equivalent of 10% of ZSDG’s in-domain training data.},\n bibtype = {misc},\n author = {Shalyminov, I. and Lee, S. and Eshghi, A. and Lemon, O.}\n}","author_short":["Shalyminov, I.","Lee, S.","Eshghi, A.","Lemon, O."],"biburl":"https://bibbase.org/service/mendeley/d7d2e6da-aa5b-3ab3-b3f2-a5350adf574a","bibbaseid":"shalyminov-lee-eshghi-lemon-dataefficientgoalorientedconversationwithdialogueknowledgetransfernetworks-2019","role":"author","urls":{},"metadata":{"authorlinks":{}}},"bibtype":"misc","biburl":"https://bibbase.org/service/mendeley/d7d2e6da-aa5b-3ab3-b3f2-a5350adf574a","dataSources":["ya2CyA73rpZseyrZ8","TcQYToyGTfqDApS68","BEu9hLf9unY5A9Pwe","gQ3XnmcCc7p5JYvBa","pB6WyiWKHxyFpemKA","2252seNhipfTmjEBQ"],"keywords":[],"search_terms":["data","efficient","goal","oriented","conversation","dialogue","knowledge","transfer","networks","shalyminov","lee","eshghi","lemon"],"title":"Data-efficient goal-oriented conversation with dialogue knowledge transfer networks","year":2019}