Human Activity Recognition (HAR) is the process of automatically detecting human actions from data collected by different types of sensors. Research on HAR has devoted particular attention to monitoring and recognizing the activities of a single occupant in a home environment, under the assumption that only one person is present at any given time. The recognized activities are then used to identify abnormalities in the routine activities of daily living. Despite this assumption in the published literature, living environments are commonly occupied by more than one person and/or shared with pet animals. In this paper, a novel method based on different entropy measures, including Approximate Entropy (ApEn), Sample Entropy (SampEn), and Fuzzy Entropy (FuzzyEn), is explored to detect and identify a visitor in a home environment. The research focuses mainly on the situation in which another individual visits the main occupier, so that their movement activities cannot be distinguished from one another. The goal of this research is to assess whether entropy measures can be used to detect and identify the visitor in a home environment. Once the presence of the main occupier has been distinguished from that of others, existing activity recognition and abnormality detection processes can be applied to the main occupier. The proposed method is tested and validated on two different datasets. The experimental results show that the proposed method can detect and identify a visitor in a home environment with a high degree of accuracy from data collected by occupancy sensors.
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7514904 | PMC
http://dx.doi.org/10.3390/e21040416 | DOI Listing
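The abstract above names Approximate, Sample, and Fuzzy Entropy as the measures applied to occupancy-sensor data. As a rough illustration only, here is a minimal Sample Entropy sketch in Python; the tolerance `r = 0.2`, the hourly-count encoding, and the idea of treating high-entropy periods as possible visitor activity are assumptions made for this sketch, not the paper's actual pipeline.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Minimal Sample Entropy: -ln(A/B), where B counts matching pairs of
    length-m templates and A counts matching pairs of length-(m+1) templates."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    tol = r * np.std(x)  # tolerance as a fraction of the series' standard deviation

    def match_count(dim):
        # Overlapping templates of length `dim`.
        templates = np.array([x[i:i + dim] for i in range(n - dim)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance to all later templates (self-matches excluded).
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= tol))
        return count

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

# Hypothetical usage: hourly counts of occupancy-sensor firings over one day.
# More irregular activity (higher entropy) might hint at an additional occupant.
single_occupant = [0, 0, 1, 2, 2, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 0, 0, 0]
busy_day        = [0, 1, 3, 1, 5, 2, 6, 1, 4, 7, 2, 5, 1, 6, 3, 8, 2, 5, 1, 7, 3, 2, 1, 0]
print(sample_entropy(single_occupant), sample_entropy(busy_day))
```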
J Assist Reprod Genet
January 2025
Vrije Universiteit Brussel (VUB), Universitair Ziekenhuis Brussel (UZ Brussel), Clinical Sciences, Research Group Genetics, Reproduction and Development, Centre for Medical Genetics, Laarbeeklaan 101, 1090, Brussels, Belgium.
Purpose: Primary ovarian insufficiency (POI) is an important cause of female infertility, stemming from follicle dysfunction or premature oocyte depletion. Pathogenic variants in genes such as NOBOX, GDF9, BMP15, and FSHR have been linked to POI. NOBOX, a transcription factor expressed in oocytes and granulosa cells, plays a pivotal role in folliculogenesis.
J Imaging Inform Med
January 2025
Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Disease, Shanghai, 200080, China.
The objectives of this study are to construct a deep convolutional neural network (DCNN) model to diagnose and classify meibomian gland dysfunction (MGD) from in vivo confocal microscopy (IVCM) images, and to evaluate the performance of the DCNN model and its auxiliary value for clinical diagnosis and treatment. We extracted 6643 IVCM images from three hospitals' IVCM databases as the training set for the DCNN model and 1661 IVCM images from two other hospitals' IVCM databases as the test set to examine the performance of the model. The DCNN model was built on DenseNet-169.
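The snippet above only states that the DCNN was built on DenseNet-169. As a hedged sketch (the class count, pretrained weights, and training details below are assumptions, not taken from the study), a DenseNet-169 classifier head can be swapped in with torchvision roughly like this:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # assumed number of MGD categories; the study's label scheme may differ

# Start from ImageNet-pretrained DenseNet-169 and replace the classifier head.
model = models.densenet169(weights=models.DenseNet169_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch; real IVCM images would be
# preprocessed and replicated to 3 channels before being fed to the network.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```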
J Imaging Inform Med
January 2025
School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ, USA.
Vision transformers (ViTs) and convolutional neural networks (CNNs) each possess distinct strengths in medical imaging: ViTs excel at capturing long-range dependencies through self-attention, while CNNs are adept at extracting local features via spatial convolution filters. ViTs may struggle to capture the detailed local spatial information that is critical for tasks such as anomaly detection in medical imaging, whereas shallow CNNs often fail to abstract global context effectively. This study aims to explore and evaluate hybrid architectures that integrate ViTs and CNNs to leverage their complementary strengths for enhanced performance in medical vision tasks, such as segmentation, classification, reconstruction, and prediction.
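As a rough, hedged sketch of the hybrid idea described above (not one of the architectures evaluated in that study), a small CNN stem can tokenize an image into local patch features that a transformer encoder then mixes globally:

```python
import torch
import torch.nn as nn

class TinyHybrid(nn.Module):
    """Illustrative CNN-stem + transformer-encoder classifier (assumed design)."""

    def __init__(self, num_classes=2, dim=128, depth=2, heads=4):
        super().__init__()
        # CNN stem: local feature extraction, 224x224 input -> 14x14 token grid.
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(14),
        )
        # Transformer encoder: global mixing of the 14*14 = 196 tokens.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.pos = nn.Parameter(torch.zeros(1, 14 * 14, dim))
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        tokens = self.stem(x).flatten(2).transpose(1, 2)  # (B, 196, dim)
        tokens = self.encoder(tokens + self.pos)
        return self.head(tokens.mean(dim=1))              # mean-pool, then classify

logits = TinyHybrid()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```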
J Imaging Inform Med
January 2025
College of Engineering, Department of Computer Engineering, Koç University, Rumelifeneri Yolu, 34450, Sarıyer, Istanbul, Turkey.
This study explores a transfer learning approach with vision transformers (ViTs) and convolutional neural networks (CNNs) for classifying retinal diseases, specifically diabetic retinopathy, glaucoma, and cataracts, from ophthalmoscopy images. Using a balanced subset of 4217 images and ophthalmology-specific pretrained ViT backbones, this method demonstrates significant improvements in classification accuracy, offering potential for broader applications in medical imaging. Glaucoma, diabetic retinopathy, and cataracts are common eye diseases that can cause vision loss if not treated.
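As an illustration of the transfer-learning setup described above (the backbone name, class count, and frozen layers here are assumptions rather than the study's configuration, which uses ophthalmology-specific pretrained ViT backbones), a pretrained ViT from `timm` can be adapted to a small ophthalmoscopy dataset like this:

```python
import timm
import torch
import torch.nn as nn

# Assumed backbone and 4 classes (normal plus the three diseases named above).
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=4)

# Freeze everything except the classification head for few-sample fine-tuning.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("head")

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=3e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of fundus-sized images.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 4, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```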
Clin Exp Nephrol
January 2025
Reach-J Steering Committee, Tsukuba, Ibaraki, Japan.
Background: Although several studies have examined the Kidney Disease Quality of Life (KDQOL) in patients with chronic kidney disease (CKD), the factors associated with kidney-related symptoms have not been fully explored.
Methods: This nationwide multicenter cohort study enrolled 2248 patients. To identify the factors associated with each item or each of the three KDQOL domains, namely burden of kidney disease, symptoms/problems of kidney disease, and impact of kidney disease on daily life, multiple regression analysis was performed using baseline data.
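As a generic, hedged sketch of this kind of analysis (the variable names below are invented for illustration, not the cohort's actual covariates), a multiple linear regression of a KDQOL domain score on baseline factors can be run with `statsmodels`:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical baseline data; the columns are illustrative only.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "symptom_score": rng.normal(70, 15, 200),   # e.g. symptoms/problems domain
    "age": rng.normal(65, 10, 200),
    "egfr": rng.normal(35, 12, 200),
    "hemoglobin": rng.normal(11.5, 1.5, 200),
})

# Ordinary least squares: domain score regressed on baseline factors.
X = sm.add_constant(df[["age", "egfr", "hemoglobin"]])
result = sm.OLS(df["symptom_score"], X).fit()
print(result.summary())  # coefficients indicate factors associated with the score
```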