Quantitative analysis of biological images demands precise extraction of specific organelles or cells, yet this remains challenging in broad-field grayscale images, where traditional thresholding methods are hampered by complex image features. Rapidly advancing artificial intelligence technology is overcoming these obstacles. We previously reported a fine-tuned apodized phase-contrast microscopy system that captures high-resolution, label-free images of organelle dynamics in unstained living cells (Shimasaki, K. et al. (2024). Cell Struct. Funct., 49: 21-29). Here we present machine learning-based segmentation models for targeted subcellular objects in phase-contrast images, using fluorescent markers as the source of ground-truth masks. This method enables accurate segmentation of organelles in high-resolution phase-contrast images, providing a practical framework for studying cellular dynamics in unstained living cells.
Key words: label-free imaging, organelle dynamics, apodized phase contrast, deep learning-based segmentation.
DOI: http://dx.doi.org/10.1247/csf.24036
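The abstract includes no code, but the pipeline it describes — a fluorescence channel serving as the source of ground-truth masks for a label-free segmentation model — can be sketched. The following is a minimal, hypothetical example, not the authors' released method; the file names, Otsu thresholding, and min_size value are all assumptions:

```python
# Minimal sketch, NOT the authors' code: derive a binary ground-truth mask
# for an organelle from its fluorescence channel and pair it with the
# corresponding phase-contrast frame. File names and min_size are hypothetical.
import numpy as np
from skimage import io, filters, morphology

phase = io.imread("cell_phase_contrast.tif")  # label-free input image
fluor = io.imread("cell_fluorescence.tif")    # fluorescent organelle marker

# Otsu thresholding on the marker channel gives a rough ground-truth mask;
# removing small objects suppresses shot-noise speckles.
mask = fluor > filters.threshold_otsu(fluor)
mask = morphology.remove_small_objects(mask, min_size=32)

# (phase, mask) pairs like this can then train a network that predicts the
# organelle mask from the phase-contrast image alone.
np.savez("train_pair.npz", image=phase, mask=mask)
```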
MethodsX
June 2025
Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Nigdi, Pune 411044, India.
Recent advancements in artificial intelligence (AI) have increased interest in intelligent transportation systems, particularly autonomous vehicles. Safe navigation in traffic-heavy environments requires accurate road scene segmentation, yet traditional computer vision methods struggle with complex scenarios. This study emphasizes the role of deep learning in improving semantic segmentation using datasets like the Indian Driving Dataset (IDD), which captures the unique challenges of chaotic road conditions.
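As an illustration of the inference pattern such road-scene work builds on — not the study's own IDD-fine-tuned model, which is not shown in this excerpt — here is a hedged sketch using torchvision's pretrained DeepLabV3; the input file name is hypothetical:

```python
# Illustrative sketch only: generic semantic-segmentation inference with a
# pretrained torchvision model, standing in for the paper's IDD-trained model.
import torch
from PIL import Image
from torchvision.models.segmentation import (
    DeepLabV3_ResNet50_Weights,
    deeplabv3_resnet50,
)

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()

img = Image.open("road_scene.jpg").convert("RGB")  # hypothetical input file
x = weights.transforms()(img).unsqueeze(0)         # model-matched preprocessing

with torch.no_grad():
    logits = model(x)["out"]                       # (1, num_classes, H, W)
pred = logits.argmax(dim=1).squeeze(0)             # per-pixel class indices
```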
Sci Technol Adv Mater
December 2024
JST-CREST, Saitama, Japan.
In this review, we present a new set of machine learning-based materials research methodologies for polycrystalline materials developed through the Core Research for Evolutionary Science and Technology project of the Japan Science and Technology Agency. We focus on the constituents of polycrystalline materials (i.e. …)
J Imaging Inform Med
January 2025
Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA.
Deep neural networks (DNNs) have demonstrated exceptional performance across various image segmentation tasks. However, the process of preparing datasets for training segmentation DNNs is both labor-intensive and costly, as it typically requires pixel-level annotations for each object of interest. To mitigate this challenge, alternative approaches such as using weak labels (e.g. …)
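One common weak-label strategy of the kind this abstract alludes to is converting cheap bounding-box annotations into coarse pseudo-masks. A minimal sketch follows, assuming this is the intended sense of "weak labels"; shapes and box coordinates are made up:

```python
# Hedged sketch of a weak-label strategy: a bounding box becomes a coarse
# pseudo-mask, far cheaper than pixel-level labeling. Coordinates are made up.
import numpy as np

def box_to_pseudo_mask(shape, box):
    """Fill the box region with 1s as a weak stand-in for a true pixel mask."""
    y0, x0, y1, x1 = box
    mask = np.zeros(shape, dtype=np.uint8)
    mask[y0:y1, x0:x1] = 1
    return mask

pseudo = box_to_pseudo_mask((256, 256), (40, 60, 180, 200))
print(pseudo.sum(), "weakly labeled pixels")
```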
Front Neurol
January 2025
School of Acu-Mox and Tuina, Chengdu University of Traditional Chinese Medicine, Chengdu, China.
Objective: To develop a machine learning-based model for predicting the clinical efficacy of acupuncture intervention in patients with upper limb dysfunction following ischemic stroke, and to assess its potential role in guiding clinical practice.
Methods: Data from 1,375 ischemic stroke patients with upper limb dysfunction were collected from two hospitals, including medical records and Digital Subtraction Angiography (DSA) reports. All patients received standardized acupuncture treatment.
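The study's actual model is not described in this excerpt, so the following is only a generic sketch of the stated task — predicting a binary treatment-response label from tabular clinical features — using scikit-learn on synthetic stand-in data; all feature semantics are assumptions:

```python
# Generic sketch only, not the study's pipeline: a gradient-boosted classifier
# predicting a binary "responder" label from tabular clinical features.
# The data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1375, 12))                        # stand-in clinical features
y = (X[:, 0] + rng.normal(size=1375) > 0).astype(int)  # synthetic outcome label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```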
J Bone Oncol
February 2025
School of Mathematics and Computer Science, Quanzhou Normal University, Quanzhou, 362001, China.
Objective: Segmenting and reconstructing 3D models of bone tumors from 2D image data is of great significance for assisting disease diagnosis and treatment. However, because tumors and surrounding tissues are poorly distinguishable in images, existing methods lack accuracy and stability. This study proposes a U-Net model based on double dimensionality reduction and a channel attention gating mechanism, the DCU-Net model, for oncological image segmentation.
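The DCU-Net internals are not given in this excerpt; as a speculative sketch, a channel attention gate is commonly implemented as a squeeze-and-excitation block, shown below in PyTorch. The reduction ratio of 8 is an assumption, not the paper's design:

```python
# Speculative sketch: standard squeeze-and-excitation form of a channel
# attention gate, standing in for DCU-Net's (unseen) exact gate design.
import torch
import torch.nn as nn

class ChannelAttentionGate(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze: global stats per channel
            nn.Conv2d(channels, channels // reduction, 1),  # bottleneck
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # restore channel count
            nn.Sigmoid(),                                   # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)                             # reweight feature channels

feat = torch.randn(1, 64, 32, 32)
out = ChannelAttentionGate(64)(feat)                        # same shape, gated channels
```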