Fine-tuning pre-trained deep learning models on a downstream task with a large training set significantly improves named entity recognition performance. Large language models are recent Transformer-based models that can be conditioned on a new task through in-context learning, by providing a series of instructions, or prompt. These models require only a few examples; this approach is known as few-shot learning.
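The in-context learning setup described above can be sketched as follows: instead of updating model weights, the task instruction and a handful of labelled demonstrations are assembled into a prompt, and the model is asked to complete the final, unlabelled input. The example texts, entity labels, and helper name below are invented for illustration.

```python
# Hypothetical few-shot NER prompt construction (all examples invented).
FEW_SHOT_EXAMPLES = [
    ("Acme Corp hired Jane Doe in Paris.",
     "ORG: Acme Corp | PER: Jane Doe | LOC: Paris"),
    ("Bob visited Berlin last week.",
     "PER: Bob | LOC: Berlin"),
]

def build_prompt(query, examples=FEW_SHOT_EXAMPLES):
    """Assemble an instruction, labelled demonstrations, and the query."""
    lines = ["Extract named entities (PER, ORG, LOC) from the text."]
    for text, labels in examples:
        lines.append(f"Text: {text}\nEntities: {labels}")
    # The model is expected to continue after the final "Entities:".
    lines.append(f"Text: {query}\nEntities:")
    return "\n\n".join(lines)

prompt = build_prompt("Alice works at Initech in Austin.")
print(prompt)
```

The resulting string would then be sent to a large language model; no gradient updates or task-specific training set are involved, only the two in-prompt demonstrations.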