Attention in Natural Language Processing.

IEEE Trans Neural Netw Learn Syst

Published: October 2021

Attention is an increasingly popular mechanism used in a wide range of neural architectures. The mechanism itself has been realized in a variety of formats. However, because of the fast-paced advances in this domain, a systematic overview of attention is still missing. In this article, we define a unified model for attention architectures in natural language processing, with a focus on those designed to work with vector representations of the textual data. We propose a taxonomy of attention models according to four dimensions: the representation of the input, the compatibility function, the distribution function, and the multiplicity of the input and/or output. We present examples of how prior information can be exploited in attention models and discuss ongoing research efforts and open challenges in the area, providing the first extensive categorization of the vast body of literature in this exciting domain.
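The four dimensions above map directly onto the moving parts of a generic attention step. As a minimal NumPy sketch (not the paper's notation: the function names, the scaled dot-product scoring choice, and the toy dimensions are illustrative assumptions), a single attention computation over vector representations of the input could look like this:

```python
import numpy as np

def softmax(x):
    # Distribution function: maps compatibility scores ("energies")
    # to a probability distribution of attention weights.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def dot_compatibility(q, K):
    # Scaled dot-product compatibility function: scores each key
    # against the query.
    return K @ q / np.sqrt(q.shape[0])

def attend(q, K, V, compatibility=dot_compatibility):
    # Core attention step shared by the architectures in the taxonomy:
    # 1. score the input representations (keys) against the query,
    # 2. normalize the scores with the distribution function,
    # 3. return the weights and the weighted sum of values (the context).
    energies = compatibility(q, K)
    weights = softmax(energies)
    context = weights @ V
    return weights, context

# Toy example: 5 input annotations of dimension 8 (hypothetical sizes).
rng = np.random.default_rng(0)
K = rng.normal(size=(5, 8))  # keys: vector representations of the input
V = K                        # values: often the input representations themselves
q = rng.normal(size=8)       # query: e.g., a decoder hidden state

weights, context = attend(q, K, V)
print(weights.round(3))      # nonnegative, sums to 1
print(context.shape)         # (8,)
```

Swapping dot_compatibility for an additive (Bahdanau-style) scoring function, or softmax for a sparse alternative such as sparsemax, moves this sketch between cells of the proposed taxonomy; handling multiple queries or producing multiple outputs corresponds to the multiplicity dimension.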


Source
http://dx.doi.org/10.1109/TNNLS.2020.3019893

