Sentimental Analysis of COVID-19 Tweets Using Deep Learning Models.

Infect Dis Rep

Telemedicine and Telepharmacy Centre, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy.

Published: April 2021

The novel coronavirus disease (COVID-19) is an ongoing pandemic that has attracted large-scale global attention. However, the spread of false news on social media sites such as Twitter is creating unnecessary anxiety about the disease. The aim of this study is to analyse tweets posted by Indian netizens during the COVID-19 lockdown. The data comprise tweets collected between 23 March 2020 and 15 July 2020, with each text labelled as fear, sad, anger, or joy. Data analysis was conducted with the Bidirectional Encoder Representations from Transformers (BERT) model, a recent deep-learning model for text analysis, and its performance was compared with three other models: logistic regression (LR), support vector machines (SVM), and long short-term memory (LSTM). Accuracy was calculated separately for each sentiment. The BERT model achieved 89% accuracy, while the other three models achieved 75%, 74.75%, and 65%, respectively. Per-sentiment classification accuracy ranged from 75.88% to 87.33%, with a median of 79.34%, a relatively strong result for text-mining algorithms. Our findings show the high prevalence of particular keywords and associated terms in Indian tweets during COVID-19. Further, this work clarifies public opinion on the pandemic and can guide public health authorities towards a better society.
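As an illustration of the approach the abstract describes, the sketch below fine-tunes a pretrained BERT classifier on tweets labelled with the four emotions (fear, sad, anger, joy) using the Hugging Face transformers library. It is not the authors' code: the file name tweets.csv, its text/label columns, and all hyperparameters are assumptions made for illustration only.

```python
# Minimal illustrative sketch (not the authors' code): fine-tuning a pretrained
# BERT classifier on tweets labelled with the four emotions from the study.
# Assumes a file "tweets.csv" with "text" and "label" columns; the file name,
# column names, and hyperparameters are hypothetical placeholders.
import pandas as pd
import torch
from torch.utils.data import Dataset
from transformers import (BertForSequenceClassification, BertTokenizerFast,
                          Trainer, TrainingArguments)

LABELS = ["fear", "sad", "anger", "joy"]           # emotion classes from the abstract
label2id = {name: i for i, name in enumerate(LABELS)}

class TweetDataset(Dataset):
    """Tokenised tweets paired with integer emotion labels."""
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.enc.items()}
        item["labels"] = self.labels[idx]
        return item

df = pd.read_csv("tweets.csv")                     # hypothetical labelled tweet file
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

train_ds = TweetDataset(df["text"].tolist(),
                        [label2id[l] for l in df["label"]], tokenizer)

args = TrainingArguments(output_dir="bert-covid-sentiment",
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```

The classical baselines compared in the paper (LR and SVM) would typically be trained on bag-of-words or TF-IDF features of the same tweets rather than contextual embeddings.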


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8167749
DOI: http://dx.doi.org/10.3390/idr13020032

Publication Analysis

Top Keywords

bert model (8)
three models (8)
sentimental analysis (4)
covid-19 (4)
analysis covid-19 (4)
tweets (4)
covid-19 tweets (4)
tweets deep (4)
deep learning (4)
learning models (4)

Similar Publications

CLEFT: Language-Image Contrastive Learning with Efficient Large Language Model and Prompt Fine-Tuning.

Med Image Comput Comput Assist Interv

October 2024

Department of Biomedical Engineering, Yale University, New Haven, CT, USA.

Recent advancements in Contrastive Language-Image Pre-training (CLIP) [21] have demonstrated notable success in self-supervised representation learning across various tasks. However, existing CLIP-like approaches often demand extensive GPU resources and prolonged training times due to the considerable size of the model and dataset, making them poorly suited for medical applications, in which large datasets are not always available. Meanwhile, the language-model prompts are mainly derived manually from labels tied to images, potentially overlooking the richness of information within training samples.
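For context, below is a minimal sketch of the symmetric image-text contrastive objective that CLIP-like pre-training optimises, assuming L2-normalised batch embeddings. It is not taken from the CLEFT paper, and the batch size, embedding dimension, and temperature are illustrative values.

```python
# Illustrative sketch of the symmetric image-text contrastive objective used in
# CLIP-style pre-training; not code from the CLEFT paper. Embedding size, batch
# size, and temperature are arbitrary illustrative values.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Cosine-similarity logits between every image and every text in the batch
    # (embeddings are assumed to be L2-normalised).
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Matching pairs lie on the diagonal; penalise both retrieval directions.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Example with random normalised embeddings for a batch of 8 image-text pairs.
img = F.normalize(torch.randn(8, 512), dim=-1)
txt = F.normalize(torch.randn(8, 512), dim=-1)
print(clip_contrastive_loss(img, txt))
```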


Digital transformation has significantly impacted public procurement, improving operational efficiency, transparency, and competition. This transformation has allowed the automation of data analysis and oversight in public administration. Public procurement involves various stages and generates a multitude of documents.


Objective: Brief hospital course (BHC) summaries are clinical documents that summarize a patient's hospital stay. While large language models (LLMs) show remarkable capabilities in automating real-world tasks, their potential for healthcare applications such as synthesizing BHCs from clinical notes has not been demonstrated. We introduce a novel preprocessed dataset, the MIMIC-IV-BHC, encapsulating clinical note and BHC pairs to adapt LLMs for BHC synthesis.


Background: The increasing prevalence of cognitive impairment and dementia threatens global health, necessitating the development of accessible tools for detection of cognitive impairment. This study explores using a transformer-based approach to detect cognitive impairment using acoustic markers of spontaneous speech.

Method: Recordings of unstructured interviews from baseline visits were obtained from participants of The 90+ Study, a longitudinal study of individuals older than 90 years.


Background: Spontaneous speech is easily obtainable and has the potential to become an accessible and low-cost marker for cognitive function. The time-consuming and labor-intensive nature of speech analysis has been a major obstacle to utilizing this promising tool. This study uses a novel transformer-based methodology to explore associations between spontaneous speech language features and global cognition.
