During the Covid-19 pandemic, the widespread use of social media platforms facilitated the dissemination of information, fake news, and propaganda, while also serving as a vital source of self-reported Covid-19 symptoms. Graph-based models such as Graph Neural Networks (GNNs) have achieved notable success in Natural Language Processing (NLP). However, applying GNN-based models to propaganda detection remains difficult because of the challenges of mining distinct word interactions and capturing nonconsecutive, long-range contextual information. In this study, we propose a Hierarchical Graph-based Integration Network (H-GIN) for detecting propaganda in text within a defined domain using multilabel classification. H-GIN builds a bi-layer inter- and intra-channel graph with two components, Residual-driven Enhancement and Processing (RDEP) and Attention-driven Multichannel feature Fusing (ADMF), assigning suitable labels at two distinct classification levels. First, RDEP facilitates information interaction between distant nodes. Second, ADMF standardizes the tri-channel 3-S (sequence, semantic, and syntactic) layer, enabling effective propaganda detection by propagating related and unrelated information from news representations into a classifier; experiments use the existing ProText, Qprop, and PTC datasets, which remain available to the public. The H-GIN model demonstrated strong performance, achieving 82% accuracy and surpassing current leading models. Notably, its capacity to identify previously unseen examples across diverse openness scenarios at 82% accuracy on the ProText dataset was particularly significant.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11730963
DOI: http://dx.doi.org/10.1038/s41598-024-74126-9
Sci Rep
January 2025
EIAS Data Science Lab, College of Computer and Information Sciences, Prince Sultan University, 11586, Riyadh, Saudi Arabia.
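The abstract above sketches H-GIN's architecture at a high level only. As a rough illustration of the kind of mechanism described (attention-based fusion of sequence, semantic, and syntactic channels with a residual link, in the spirit of ADMF and RDEP), the following PyTorch snippet shows a minimal tri-channel fusion head. It is not the authors' implementation; the layer choices, dimensions, and 14-label output are assumptions.

```python
# Minimal sketch (not the authors' code): a tri-channel fusion block in the spirit of
# H-GIN's ADMF step, where sequence, semantic, and syntactic node features are fused
# with attention and a residual connection (RDEP-style). Sizes are illustrative.
import torch
import torch.nn as nn


class TriChannelFusion(nn.Module):
    def __init__(self, dim: int = 128, num_labels: int = 14):
        super().__init__()
        # one projection per channel (sequence, semantic, syntactic)
        self.proj = nn.ModuleList([nn.Linear(dim, dim) for _ in range(3)])
        # attention scores over the three channels
        self.attn = nn.Linear(dim, 1)
        self.norm = nn.LayerNorm(dim)
        self.classifier = nn.Linear(dim, num_labels)

    def forward(self, seq_x, sem_x, syn_x):
        # stack per-channel features: (batch, 3, dim)
        channels = torch.stack(
            [p(x) for p, x in zip(self.proj, (seq_x, sem_x, syn_x))], dim=1
        )
        # attention weights over the three channels: (batch, 3, 1)
        weights = torch.softmax(self.attn(torch.tanh(channels)), dim=1)
        fused = (weights * channels).sum(dim=1)   # (batch, dim)
        fused = self.norm(fused + seq_x)          # residual link to the sequence channel
        return self.classifier(fused)             # multilabel logits


if __name__ == "__main__":
    batch, dim = 4, 128
    model = TriChannelFusion(dim=dim, num_labels=14)
    logits = model(torch.randn(batch, dim), torch.randn(batch, dim), torch.randn(batch, dim))
    print(logits.shape)  # torch.Size([4, 14])
```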
JMIR Infodemiology
January 2025
Computational Social Science DataLab, University Institute of Research for Sustainable Social Development (INDESS), University of Cadiz, Jerez de la Frontera, Spain.
Background: During the COVID-19 pandemic, social media platforms have been a venue for the exchange of messages, including those related to fake news. There are also accounts programmed to disseminate and amplify specific messages, which can affect individual decision-making and present new challenges for public health.
Objective: This study aimed to analyze how social bots use hashtags compared to human users on topics related to misinformation during the outbreak of the COVID-19 pandemic.
J Forensic Sci
January 2025
Department of Electronics & Communication Engineering, Jaypee Institute of Information & Technology Noida, Noida, India.
PLoS One
July 2024
EIAS Data Science Lab, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia.
Social media platforms serve as communication tools where users freely share information regardless of its accuracy. Propaganda on these platforms refers to the dissemination of biased or deceptive information aimed at influencing public opinion, encompassing various forms such as political campaigns, fake news, and conspiracy theories. This study introduces a Hybrid Feature Engineering Approach for Propaganda Identification (HAPI), designed to detect propaganda in text-based content like news articles and social media posts.
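The HAPI abstract describes a hybrid feature engineering pipeline without implementation details. The snippet below is an illustrative sketch only, combining TF-IDF text features with a couple of handcrafted lexical features before a linear classifier, which is one common way to realise such a hybrid approach; the toy texts, labels, and feature choices are assumptions, not the published pipeline.

```python
# Illustrative sketch only (not the published HAPI pipeline): TF-IDF features
# concatenated with simple handcrafted lexical features, fed to a linear classifier.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "Miracle cure stops the virus overnight, experts silenced!",
    "Health agency publishes updated vaccination schedule for 2021.",
    "They do not want you to know the real numbers.",
    "Peer-reviewed study reports efficacy estimates with confidence intervals.",
]
labels = [1, 0, 1, 0]  # 1 = propaganda-like, 0 = neutral (toy labels)

# statistical features: TF-IDF over word unigrams and bigrams
tfidf = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X_tfidf = tfidf.fit_transform(texts)

# handcrafted lexical features: exclamation count and all-caps word ratio
def lexical(text):
    words = text.split()
    caps = sum(1 for w in words if w.isupper() and len(w) > 1)
    return [text.count("!"), caps / max(len(words), 1)]

X_lex = csr_matrix(np.array([lexical(t) for t in texts]))
X = hstack([X_tfidf, X_lex])  # hybrid feature matrix

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```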
PLoS One
July 2024
NLP & IR Group, Dpto. Lenguajes y Sistemas Informáticos, Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain.
Disinformation in the medical field is a growing problem that carries a significant risk. Therefore, it is crucial to detect and combat it effectively. In this article, we provide three elements to aid in this fight: 1) a new framework that collects health-related articles from verification entities and facilitates their check-worthiness and fact-checking annotation at the sentence level; 2) a corpus generated using this framework, composed of 10,335 sentences annotated for these two concepts and grouped into 327 articles, which we call KEANE (faKe nEws At seNtence lEvel); and 3) a new model for verifying fake news that combines specific identifiers of the medical domain with subject-predicate-object triplets, using Transformers and feedforward neural networks at the sentence level.
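The third element above combines Transformer sentence representations with subject-predicate-object triplets in a feedforward network. The sketch below shows one minimal way such a fusion head could look; it is not the authors' released model, and the embedding sizes, binary output, and the way triplets are encoded here are assumptions.

```python
# Minimal sketch, not the authors' model: a sentence-level classifier that concatenates
# a Transformer sentence embedding with an embedding of a (subject, predicate, object)
# triplet and passes both through a feedforward head.
import torch
import torch.nn as nn


class SentenceTripletClassifier(nn.Module):
    def __init__(self, sent_dim: int = 768, triplet_dim: int = 96, hidden: int = 256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(sent_dim + triplet_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden, 2),  # fake vs. not fake at the sentence level
        )

    def forward(self, sent_emb, triplet_emb):
        return self.head(torch.cat([sent_emb, triplet_emb], dim=-1))


if __name__ == "__main__":
    # stand-ins for a BERT-style sentence embedding and a precomputed triplet embedding
    sent_emb = torch.randn(8, 768)
    triplet_emb = torch.randn(8, 96)
    model = SentenceTripletClassifier()
    print(model(sent_emb, triplet_emb).shape)  # torch.Size([8, 2])
```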