The rapid dissemination of information has been accompanied by the proliferation of fake news, posing significant challenges in discerning authentic news from fabricated narratives. This study addresses the urgent need for effective fake news detection mechanisms. The spread of fake news on digital platforms has necessitated the development of sophisticated tools for accurate detection and classification. Deep learning models, particularly Bi-LSTM and attention-based Bi-LSTM architectures, have shown promise in tackling this issue. This research utilized Bi-LSTM and attention-based Bi-LSTM models, integrating an attention mechanism to assess the significance of different parts of the input data. The models were trained on an 80% subset of the data and tested on the remaining 20%, employing comprehensive evaluation metrics including Recall, Precision, F1-Score, Accuracy, and Loss. Comparative analysis with existing models revealed the superior efficacy of the proposed architectures. The attention-based Bi-LSTM model demonstrated remarkable proficiency, outperforming other models in terms of accuracy (97.66%) and other key metrics. The study highlighted the potential of integrating advanced deep learning techniques in fake news detection. The proposed models set new standards in the field, offering effective tools for combating misinformation. Limitations such as data dependency, potential for overfitting, and language and context specificity were acknowledged. The research underscores the importance of leveraging cutting-edge deep learning methodologies, particularly attention mechanisms, in fake news identification. The innovative models presented pave the way for more robust solutions to counter misinformation, thereby preserving the veracity of digital information. Future research should focus on enhancing data diversity, model efficiency, and applicability across various languages and contexts.
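The abstract above does not reproduce the model's implementation. As a minimal illustrative sketch, the attention step it describes, scoring each timestep's Bi-LSTM hidden state and pooling them into a weighted context vector for classification, can be written in plain NumPy. All dimensions, weights, and function names below are hypothetical stand-ins, not the authors' code:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(hidden_states, w, b, u):
    """Additive attention over Bi-LSTM outputs.

    hidden_states: (timesteps, 2*units) array, one row per timestep
    Returns the attention-weighted context vector and the weights.
    """
    scores = np.tanh(hidden_states @ w + b) @ u   # (timesteps,) relevance scores
    weights = softmax(scores)                     # normalized; sums to 1
    context = weights @ hidden_states             # (2*units,) pooled representation
    return context, weights

# Illustrative shapes: 6 timesteps, Bi-LSTM output dimension 8
rng = np.random.default_rng(0)
T, d = 6, 8
h = rng.standard_normal((T, d))   # stand-in for Bi-LSTM hidden states
w = rng.standard_normal((d, d))
b = np.zeros(d)
u = rng.standard_normal(d)

context, weights = attention_pool(h, w, b, u)
```

The context vector would then feed a dense classification layer; the attention weights indicate which parts of the input the model treats as most significant, matching the mechanism the abstract describes.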
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10800750
DOI: http://dx.doi.org/10.3389/fdata.2023.1320800
R Soc Open Sci
January 2025
Arizona State University, Glendale, AZ, USA.
Numerous psychological findings have shown that incidental exposure to ideas makes those ideas seem more true, a finding commonly referred to as the 'illusory truth' effect. Under many accounts of the illusory truth effect, initial exposure to a statement provides a metacognitive feeling of 'fluency' or familiarity that, upon subsequent exposure, leads people to infer that the statement is more likely to be true. However, genuine beliefs do not only affect truth judgements about individual statements, they also imply other beliefs and drive decision-making.
Public Health Rep
January 2025
Department of Population Medicine, University of Guelph, Guelph, ON, Canada.
Objectives: Communication plays a pivotal role in addressing modern and complex public health challenges. Our study assessed the extent to which communication-related course outlines in Canadian master of public health (MPH) programs aligned with national and international public health competency frameworks in their coverage of communication competencies.
Methods: We conducted an environmental scan and content analysis of MPH courses relevant to public health communication in 2022 and 2023.
Ann Ig
January 2025
Department of Experimental Medicine, University of Salento, Lecce, Complesso Ecotekne, Lecce, Italy.
Background: Correct information is an essential tool for guiding thoughts, attitudes, and daily choices, as well as more consequential decisions such as those regarding health. Today, a vast array of information sources and media is available. Growing access to data also demands comprehension and orientation skills, in particular the ability to navigate the ocean of information and to choose what is best without becoming overwhelmed.
Advances in the use of AI have led to the emergence of a greater variety of forms disinformation can take and channels for its proliferation. In this context, the future of legal mechanisms to address AI-powered disinformation remains to be determined. Additional complexity for legislators working in the field arises from the need to harmonize national legal frameworks of democratic states with the need for regulation of potentially dangerous digital content.
Front Artif Intell
January 2025
Alma Sistemi Srl, Rome, Italy.
This study explores the evolving role of social media in the spread of misinformation during the Ukraine-Russia conflict, with a focus on how artificial intelligence (AI) contributes to the creation of deceptive war imagery. Specifically, the research examines the relationship between color patterns (LUTs) in war-related visuals and their perceived authenticity, highlighting the economic, political, and social ramifications of such manipulative practices. AI technologies have significantly advanced the production of highly convincing, yet artificial, war imagery, blurring the line between fact and fiction.