Objective: Thyroid-associated ophthalmopathy (TAO) is a common orbital disease that significantly affects patients' daily lives. TikTok has emerged as a novel channel for health information, and many individuals search it for disease information before seeking professional consultation, yet the videos it hosts have not been adequately assessed. This study aimed to evaluate the quality of TAO-related TikTok videos and the correlations between video variables and quality.
Methods: The top 150 TikTok videos retrieved with the keyword TAO were collected. Duplicate, excessively short, or irrelevant videos, as well as similar videos from the same source, were excluded. Two raters evaluated the included videos' overall quality, reliability, understandability, and actionability, stratified by source and content focus.
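For context, a minimal sketch of how PEMAT-A/V understandability and actionability percentages are typically derived from item-level ratings is shown below; the item responses are illustrative placeholders, not the study's data.

```python
# Sketch of PEMAT-A/V percentage scoring: Agree = 1, Disagree = 0,
# "Not Applicable" items (None) are excluded from the denominator.
# The item vectors below are hypothetical, not data from this study.

def pemat_score(item_ratings):
    """Return the PEMAT percentage: agreed items / applicable items * 100."""
    applicable = [r for r in item_ratings if r is not None]
    if not applicable:
        return None
    return 100.0 * sum(applicable) / len(applicable)

understandability_items = [1, 1, 0, 1, 1, None, 1, 1, 1, 1, 0, 1, 1]  # 13 U items
actionability_items = [1, 0, 1, None]                                  # 4 A items

print(f"Understandability: {pemat_score(understandability_items):.1f}%")
print(f"Actionability:     {pemat_score(actionability_items):.1f}%")
```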
Results: The 90 included videos had received nearly 15,000 likes and 2,000 shares in total. Ophthalmologists were the most common source and treatment the most common content focus, and these categories scored significantly higher than the others. The mean Patient Education Materials Assessment Tool for Audiovisual Materials (PEMAT-A/V), Global Quality Score (GQS), and DISCERN scores indicated that the videos were understandable (87.6%), actionable (74.5%), and of fair quality (DISCERN 44.97). The number of hashtags was positively correlated with understandability. Popularity was negatively correlated with overall quality, whereas video length was positively correlated with reliability and negatively correlated with days since upload.
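The abstract does not state which correlation test was used; as a sketch, rank correlations (robust to skewed engagement counts) between video variables and quality scores could be computed as follows. The file name and column names are assumptions for illustration, not the study's actual dataset.

```python
# Hypothetical sketch: Spearman rank correlations between video variables
# and quality scores for the included videos.
import pandas as pd
from scipy.stats import spearmanr

videos = pd.read_csv("tao_tiktok_videos.csv")  # hypothetical export of the 90 videos

pairs = [
    ("hashtag_count", "pemat_understandability"),
    ("likes", "gqs"),                  # popularity vs. overall quality
    ("duration_s", "discern"),         # video length vs. reliability
    ("duration_s", "days_since_upload"),
]
for x, y in pairs:
    rho, p = spearmanr(videos[x], videos[y])
    print(f"{x} vs {y}: rho = {rho:.2f}, p = {p:.3f}")
```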
Conclusion: Most TAO videos were uploaded by certified healthcare professionals, resulting in acceptable quality with minimal misinformation. To serve as a qualified source of patient education material, TikTok should promote longer disease-related videos while improving their reliability and understandability.
Download full-text PDF:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11638993
DOI: http://dx.doi.org/10.1177/20552076241304594
Sensors (Basel)
December 2024
School of Electronic Information Engineering, Taiyuan University of Science and Technology, Taiyuan 030024, China.
Human pose estimation is an important research direction in computer vision that aims to accurately identify the positions and postures of human body keypoints from images or videos. However, multi-person pose estimation suffers from false or missed detections in dense crowds, and small targets remain difficult to detect. In this paper, we propose a Mamba-based human pose estimation method.
Sensors (Basel)
December 2024
Centre for Sleep Medicine Kempenhaeghe, 5590 AB Heeze, The Netherlands.
Continuous respiration monitoring is an important tool in assessing the patient's health and diagnosing pulmonary, cardiovascular, and sleep-related breathing disorders. Various techniques and devices, both contact and contactless, can be used to monitor respiration. Each of these techniques can provide different types of information with varying accuracy.
Sensors (Basel)
December 2024
Australian Urban Research Infrastructure Network (AURIN), University of Melbourne, Melbourne, VIC 3052, Australia.
Public transportation systems play a vital role in modern cities, but they face growing security challenges, particularly related to incidents of violence. Detecting and responding to violence in real time is crucial for ensuring passenger safety and the smooth operation of these transport networks. To address this issue, we propose an advanced artificial intelligence (AI) solution for identifying unsafe behaviours in public transport.
Sensors (Basel)
December 2024
Department of Industrial Engineering, Chosun University, Gwangju 61452, Republic of Korea.
In human activity recognition, accurate and timely fall detection is essential in healthcare, particularly for monitoring the elderly, where quick responses can prevent severe consequences. This study presents a new fall detection model built on a transformer architecture, which focuses on the movement speeds of key body points tracked using the MediaPipe library. By continuously monitoring these key points in video data, the model calculates real-time speed changes that signal potential falls.
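As a rough illustration of the feature this snippet describes (per-frame keypoint speeds from video, not the authors' transformer model), the sketch below uses MediaPipe Pose to track landmarks and flag frames whose mean keypoint displacement exceeds an arbitrary threshold; the threshold and video path are assumptions.

```python
# Minimal sketch: per-frame keypoint speeds with MediaPipe Pose (legacy Solutions API).
# This only extracts a speed signal; it is not the transformer fall-detection model
# described above. The threshold and input path are illustrative assumptions.
import cv2
import mediapipe as mp

SPEED_THRESHOLD = 0.15  # normalized units per frame, arbitrary for illustration

cap = cv2.VideoCapture("input_video.mp4")  # hypothetical input video
pose = mp.solutions.pose.Pose()
prev = None
frame_idx = 0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks:
        # Normalized (x, y) coordinates of the 33 pose landmarks.
        pts = [(lm.x, lm.y) for lm in result.pose_landmarks.landmark]
        if prev is not None:
            # Mean Euclidean displacement of landmarks between consecutive frames.
            speed = sum(((x - px) ** 2 + (y - py) ** 2) ** 0.5
                        for (x, y), (px, py) in zip(pts, prev)) / len(pts)
            if speed > SPEED_THRESHOLD:
                print(f"Frame {frame_idx}: rapid motion (mean speed {speed:.3f})")
        prev = pts
    frame_idx += 1

cap.release()
pose.close()
```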
Sensors (Basel)
December 2024
Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 13120, Republic of Korea.
Generating accurate and contextually rich captions for images and videos is essential for various applications, from assistive technology to content recommendation. However, challenges such as maintaining temporal coherence in videos, reducing noise in large-scale datasets, and enabling real-time captioning remain significant. We introduce MIRA-CAP (Memory-Integrated Retrieval-Augmented Captioning), a novel framework designed to address these issues through three core innovations: a cross-modal memory bank, adaptive dataset pruning, and a streaming decoder.