AI Article Synopsis

  • A significant majority (91.9%) of the students displayed poor knowledge regarding CVS and its implications, highlighting a gap in awareness.
  • Key risk factors associated with increased odds of developing CVS included having refractive errors (wearing glasses), poor sitting posture, and close eye-screen distance, suggesting the need for better education on preventive measures for digital device users.

Article Abstract

Computer vision syndrome has been an issue of concern among students who use digital devices continuously. This study aimed to determine the prevalence of computer vision syndrome, the level of knowledge about it, and its relationship with associated factors among undergraduate students at a public university in Malaysia. The study, a cross-sectional survey of 208 undergraduate students, was conducted between 26 May and 23 June 2022 at the National University of Malaysia. Data were captured with a self-reported questionnaire administered via Google Forms. The prevalence and associated factors of computer vision syndrome were evaluated using the validated Computer Vision Syndrome Questionnaire and the Computer Vision Syndrome Survey Form 3, respectively, while knowledge of computer vision syndrome was assessed using a validated questionnaire from a previous study. All data were analyzed using the Statistical Package for Social Sciences version 26.0 software (IBM Corp.; Armonk, NY, USA). The prevalence of computer vision syndrome among undergraduates was 63.0% (n=131), and 91.9% had poor knowledge of the syndrome. Significant associations with computer vision syndrome were found among undergraduates who had refractive errors/wore glasses (69.3%), placed the screen edge at or above horizontal eye level (79.4%), sat in uncomfortable postures (79.4%), or kept a close eye-screen distance (82.0%). Multivariable analysis showed that having refractive errors/wearing glasses (aOR: 1.93; CI: 1.05, 3.57), uncomfortable sitting postures (aOR: 2.01; CI: 1.08, 3.74), and a close eye-screen distance (aOR: 2.81; CI: 1.31, 6.05) were associated with higher odds of developing computer vision syndrome. The findings indicate that digital device users need greater knowledge of computer vision syndrome and should practice preventive measures such as a proper viewing distance and angle, an upright sitting posture, appropriate screen and ambient illuminance, and regular eye check-ups.
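
The adjusted odds ratios (aORs) with confidence intervals reported above imply a multivariable logistic regression. The authors ran their analysis in SPSS 26.0; purely as an illustration of how such estimates are derived, here is a minimal Python sketch using statsmodels, with hypothetical variable names (cvs, refractive_error, poor_posture, close_distance) and simulated data standing in for the study's dataset.

```python
# Illustrative only: the study used SPSS 26.0. Variable names and data
# below are hypothetical stand-ins for the study's actual coding.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cvs":              rng.binomial(1, 0.63, 208),  # outcome: CVS present
    "refractive_error": rng.binomial(1, 0.50, 208),
    "poor_posture":     rng.binomial(1, 0.50, 208),
    "close_distance":   rng.binomial(1, 0.50, 208),
})

X = sm.add_constant(df[["refractive_error", "poor_posture", "close_distance"]])
fit = sm.Logit(df["cvs"], X).fit(disp=0)

# Exponentiating the coefficients and their confidence-interval bounds
# (95% by statsmodels' default) yields adjusted odds ratios with CIs.
table = pd.concat(
    [np.exp(fit.params).rename("aOR"),
     np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
    axis=1,
)
print(table.drop(index="const"))
```

With random data the estimates hover near 1; applied to the study's data, the same procedure would reproduce the reported aORs.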

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11332467
DOI: http://dx.doi.org/10.5152/FNJN.2024.23037

Publication Analysis

Top Keywords

computer vision (48)
vision syndrome (48)
knowledge computer (16)
associated factors (12)
computer (12)
vision (12)
syndrome (12)
undergraduate students (12)
sitting postures (12)
factors computer (8)

Similar Publications

Alzheimer's disease (AD), a progressive neurodegenerative condition, notably impacts cognitive function and daily activities. One method of detecting dementia involves a task in which participants describe a given picture, and extensive research has been conducted using the participants' speech and transcribed text. However, very few studies have explored the modality of the image itself.

Visual attribution in medical imaging seeks to make evident the diagnostically relevant components of a medical image, in contrast to the more common detection of diseased tissue deployed in standard machine vision pipelines (which are less straightforwardly interpretable/explainable to clinicians). Here we present a novel generative visual attribution technique that leverages latent diffusion models in combination with domain-specific large language models to generate normal counterparts of abnormal images. The discrepancy between the two then yields a map indicating the diagnostically relevant image components.
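
As a rough sketch of the discrepancy idea only (not the paper's pipeline), the attribution map can be computed as a normalized per-pixel difference; here a Gaussian blur is a placeholder for the latent-diffusion generator of normal counterparts.

```python
# Minimal sketch of discrepancy-based visual attribution. A Gaussian blur
# stands in, purely as a placeholder, for the paper's latent-diffusion
# generator of "normal counterpart" images.
import numpy as np
from scipy.ndimage import gaussian_filter

def generate_normal_counterpart(abnormal: np.ndarray) -> np.ndarray:
    # Placeholder generator: smooths away local anomalies. The actual
    # method uses a latent diffusion model guided by a domain-specific LLM.
    return gaussian_filter(abnormal, sigma=5)

def attribution_map(abnormal: np.ndarray) -> np.ndarray:
    # The attribution map is the per-pixel discrepancy between the
    # abnormal image and its generated normal counterpart.
    normal = generate_normal_counterpart(abnormal)
    diff = np.abs(abnormal - normal)
    return diff / (diff.max() + 1e-8)   # normalize to [0, 1] for display

# Toy usage: a flat image with one bright square "lesion".
img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0
print(attribution_map(img).argmax())   # peaks inside the lesion region
```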

CFPLncLoc: A multi-label lncRNA subcellular localization prediction based on Chaos game representation and centralized feature pyramid.

Int J Biol Macromol

January 2025

National Center for Applied Mathematics in Hunan, Xiangtan University, Hunan 411105, China; Key Laboratory of Intelligent Computing and Information Processing of Ministry of Education, Xiangtan University, Hunan 411105, China.

There is increasing evidence that the subcellular localization of long noncoding RNAs (lncRNAs) can provide valuable insights into their biological functions. Across transcriptomes, lncRNAs are often found in multiple subcellular localizations. Although several computational methods have been developed to predict the subcellular localization of lncRNAs, few of them were designed for lncRNAs with multiple subcellular localizations.
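
Chaos game representation (CGR), named in the title, encodes a nucleotide sequence as a 2D image by repeatedly stepping halfway toward the corner assigned to each base; such images can then feed a vision model. A minimal sketch follows; the corner layout and image size are assumptions, not the paper's configuration.

```python
# Minimal chaos game representation of an RNA sequence (assumed layout).
import numpy as np

def cgr_image(seq: str, size: int = 64) -> np.ndarray:
    # Assign each base to a unit-square corner (this layout is an assumption).
    corners = {"A": (0, 0), "C": (0, 1), "G": (1, 1), "U": (1, 0)}
    x, y = 0.5, 0.5                        # start at the square's center
    img = np.zeros((size, size))
    for base in seq:
        if base not in corners:
            continue                       # skip ambiguous bases
        cx, cy = corners[base]
        x, y = (x + cx) / 2, (y + cy) / 2  # move halfway toward the corner
        img[min(int(y * size), size - 1), min(int(x * size), size - 1)] += 1
    return img

print(cgr_image("AUGGCUACGUAGC").sum())    # 13 bases -> 13 plotted points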

Purpose: Accurate identification of radiographic landmarks is fundamental to characterizing glenohumeral relationships before and sequentially after shoulder arthroplasty, but manual annotation of these radiographs is laborious. We report on the use of artificial intelligence, specifically computer vision and deep learning models (DLMs), in determining the accuracy of DLM-identified and surgeon-identified (SI) landmarks before and after anatomic shoulder arthroplasty.

Materials & Methods: 240 true anteroposterior radiographs were annotated using 11 standard osseous landmarks to train a deep learning model.
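
Comparing model-identified with surgeon-identified landmarks commonly reduces to a mean point-to-point distance. This minimal sketch assumes (n_images, n_landmarks, 2) coordinate arrays and simulated positions; it is not the authors' evaluation code.

```python
# Mean Euclidean error between paired landmark sets (illustrative only).
import numpy as np

def mean_landmark_error(dlm: np.ndarray, si: np.ndarray) -> float:
    # dlm, si: (n_images, n_landmarks, 2) arrays of (x, y) positions.
    return float(np.linalg.norm(dlm - si, axis=-1).mean())

# Toy example sized from the abstract: 240 radiographs, 11 landmarks each.
rng = np.random.default_rng(0)
si = rng.uniform(0, 512, size=(240, 11, 2))        # surgeon-identified
dlm = si + rng.normal(0, 2.0, size=si.shape)       # model close to surgeon
print(f"mean error: {mean_landmark_error(dlm, si):.2f} px")
```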

Reactive lymphocytes are an important type of leukocyte, morphologically transformed from lymphocytes. An increase in these cells is usually a sign of certain viral infections, so their detection plays an important role in the fight against disease. Manual detection of reactive lymphocytes is undoubtedly time-consuming and labor-intensive, requiring a high level of professional knowledge.
