Multiturn dialogue generation by modeling sentence-level and discourse-level contexts.

Sci Rep

State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing, 100024, China.

Published: November 2022

Current multiturn dialogue models generate human-like responses from a dialogue history using pretrained language models. However, most existing models simply concatenate dialogue histories, which makes it difficult to maintain a high degree of consistency throughout the generated text. We speculate that this is because the encoder ignores the hierarchical structure between sentences. In this paper, we propose a novel multiturn dialogue generation model that captures contextual information at both the sentence level and the discourse level during encoding. Contextual semantic information is dynamically modeled through a difference-aware module. A sentence order prediction training task is also designed to learn sentence representations by reconstructing the order of shuffled sentences with a learning-to-rank algorithm. Experiments on the multiturn dialogue dataset DailyDialog demonstrate that our model substantially outperforms the baseline on both automatic and human evaluation metrics, generating more fluent and informative responses.
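The abstract does not give the details of the sentence order prediction objective, but a learning-to-rank formulation of it can be sketched as follows: each shuffled sentence is assigned a scalar score, a pairwise margin loss pushes sentences that appear earlier in the true order to score higher, and the order is reconstructed by sorting scores. All function names and the margin value here are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a sentence-order-prediction objective with a
# pairwise learning-to-rank loss. The real model would produce `scores`
# from sentence encodings; here they are plain floats for illustration.

def pairwise_ranking_loss(scores, true_positions, margin=1.0):
    """Average hinge loss over ordered sentence pairs.

    For every pair (i, j) where sentence i truly precedes sentence j,
    penalize the model unless scores[i] exceeds scores[j] by `margin`.
    """
    loss, pairs = 0.0, 0
    n = len(scores)
    for i in range(n):
        for j in range(n):
            if true_positions[i] < true_positions[j]:
                loss += max(0.0, margin - (scores[i] - scores[j]))
                pairs += 1
    return loss / max(pairs, 1)

def reconstruct_order(scores):
    """Recover a predicted sentence order by sorting scores descending."""
    return sorted(range(len(scores)), key=lambda k: -scores[k])
```

For example, with `scores = [2.5, 0.1, 1.3]` and true positions `[0, 2, 1]`, sorting by score recovers the order `[0, 2, 1]` and the pairwise loss is zero, since every correctly ordered pair is separated by more than the margin.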


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9701771
DOI: http://dx.doi.org/10.1038/s41598-022-24787-1


