Knowledge distillation improves student model performance. However, using a larger teacher model does not necessarily yield better distillation gains, because of the significant architecture and output gaps between large teachers and smaller student networks. To address this issue, we reconsider teacher outputs and find that categories for which the teacher is highly confident benefit distillation more, while categories with weaker teacher certainty contribute less.
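For context, the following is a minimal sketch of the standard distillation objective that such work builds on: a temperature-softened KL term on teacher soft targets plus a hard-label cross-entropy term. Function names and hyperparameter values are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard KD loss: soft-target KL (scaled by T^2) plus hard-label CE."""
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```

The abstract's observation concerns which categories in the teacher's soft distribution carry useful signal; a category-confidence reweighting of the KD term would sit inside this loss, but its exact form is specific to the paper.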
Currently, multiturn dialogue models built on pretrained language models generate human-like responses given a dialogue history. However, most existing models simply concatenate the dialogue history, which makes it difficult to maintain a high degree of consistency throughout the generated text. We speculate that this is because the encoder ignores the hierarchical structure between sentences.
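A sketch of the plain-concatenation baseline the abstract criticizes, using a generic pretrained causal LM; the model name and turn separator are assumptions for illustration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Flat concatenation of turns with a separator token: no hierarchical structure
# between sentences is preserved, which is the limitation discussed above.
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

history = ["Hi, how are you?", "I'm fine, thanks. And you?", "Great. Any weekend plans?"]
prompt = tokenizer.eos_token.join(history) + tokenizer.eos_token

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=40,
                            pad_token_id=tokenizer.eos_token_id)
reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```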
Generating fluent, coherent, and informative text from structured data is called table-to-text generation. Copying words from the table is a common way to address the out-of-vocabulary problem, but accurate copying is difficult to achieve. To overcome this problem, we propose an auto-regressive, transformer-based framework that combines a copying mechanism with language modeling to generate target texts.
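To illustrate the general idea of a copying mechanism (a generic pointer-generator style sketch, not the paper's exact architecture; class and variable names are hypothetical), the decoder's vocabulary distribution can be mixed with a copy distribution obtained from attention over the source table tokens:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CopyGate(nn.Module):
    """Mixes a generation distribution with a copy distribution over source tokens."""

    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.vocab_proj = nn.Linear(hidden_size, vocab_size)
        self.gate = nn.Linear(2 * hidden_size, 1)  # p_gen from [decoder state; context]

    def forward(self, dec_state, context, attn_weights, src_token_ids):
        # dec_state: (B, H), context: (B, H),
        # attn_weights: (B, S) over source positions, src_token_ids: (B, S) long
        p_vocab = F.softmax(self.vocab_proj(dec_state), dim=-1)                     # (B, V)
        p_gen = torch.sigmoid(self.gate(torch.cat([dec_state, context], dim=-1)))   # (B, 1)
        # Scatter attention mass onto the vocabulary ids of the source tokens.
        p_copy = torch.zeros_like(p_vocab).scatter_add_(1, src_token_ids, attn_weights)
        return p_gen * p_vocab + (1.0 - p_gen) * p_copy                             # (B, V)
```

The learned gate decides, per decoding step, how much probability mass to take from generation versus copying, which is the standard remedy for out-of-vocabulary table entries.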