Motivation: Training domain-specific named entity recognition (NER) models requires high-quality, hand-curated gold-standard datasets, which are time-consuming and expensive to create. Furthermore, the storage and memory required to deploy NLP models can be prohibitive when the number of tasks is large. In this work, we explore multi-task learning as a way to reduce the amount of training data needed to train new domain-specific models. We evaluate our system across 22 distinct biomedical NER datasets and use two forms of ablation to measure the extent to which transfer learning helps task performance.

Results: We found that multitask models generally do not improve performance, but in many cases they perform on par with single-task models. However, we show that in some cases a new, unseen task can be trained as a single-task model using less data, and with improved performance, by initializing it with weights from a multitask model.

Availability: The software underlying this article is available at: https://github.com/NLPatVCU/multitasking_bert-1.
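The following is a minimal sketch (not the authors' implementation) of the warm-starting idea described above: loading encoder weights from a previously trained multitask BERT checkpoint and attaching a fresh token-classification head for a new NER task. It assumes a Hugging Face Transformers setup; the checkpoint path and label set are hypothetical placeholders.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical path to a BERT encoder previously trained on multiple
# biomedical NER tasks (e.g., saved with save_pretrained()).
MULTITASK_CHECKPOINT = "checkpoints/multitask_bert_encoder"

# Hypothetical BIO label set for a new, unseen NER task.
labels = ["O", "B-Chemical", "I-Chemical"]

tokenizer = AutoTokenizer.from_pretrained(MULTITASK_CHECKPOINT)

# Load the multitask encoder and attach a new token-classification head.
# Only the head is randomly initialized, so the new task starts from the
# transferred encoder weights rather than from scratch.
model = AutoModelForTokenClassification.from_pretrained(
    MULTITASK_CHECKPOINT,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

# The model can then be fine-tuned on the (smaller) new-task dataset with a
# standard training loop or the Trainer API.
```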

DOI: 10.1016/j.jbi.2022.104062
