Ultrasound computed tomography (USCT) is an emerging noninvasive, radiation-free imaging modality with high sensitivity, making it promising for the early detection and diagnosis of breast cancer. The speed-of-sound (SOS) parameter plays a crucial role in distinguishing benign masses from breast cancer. However, traditional SOS reconstruction methods struggle to balance resolution against computational efficiency: their high computational complexity and long reconstruction times hinder clinical application. In this paper, we propose a novel and efficient approach for direct SOS image reconstruction based on an improved conditional generative adversarial network. The generator reconstructs SOS images directly from time-of-flight information, eliminating the need for intermediate steps. Residual spatial-channel attention blocks are integrated into the generator to adaptively weight the relevance of the arrival time from each transducer pair to each pixel in the SOS image. An ablation study verifies the effectiveness of this module. Qualitative and quantitative evaluations on breast phantom datasets demonstrate that the method rapidly reconstructs high-quality SOS images, yielding better generation results and image quality than existing approaches. We therefore believe the proposed algorithm represents a new direction in USCT SOS reconstruction research.
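The spatial-channel attention idea described above can be illustrated outside any deep-learning framework. The NumPy block below is a minimal sketch only: the paper's actual layer shapes, learned convolutions, and gating functions are not given in the abstract, so the pooling-based weights here are assumptions standing in for learned parameters.

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_sc_attention(x):
    """Residual spatial-channel attention on a feature map x of shape (C, H, W).

    Channel weights come from global average pooling over each channel;
    spatial weights come from the per-pixel mean across channels. The
    attended features are added back onto the input (residual connection).
    """
    c, h, w = x.shape
    # Channel attention: one weight per channel from its global mean.
    channel_w = _sigmoid(x.mean(axis=(1, 2))).reshape(c, 1, 1)
    # Spatial attention: one weight per pixel from the channel-wise mean.
    spatial_w = _sigmoid(x.mean(axis=0)).reshape(1, h, w)
    return x + x * channel_w * spatial_w

feat = np.random.default_rng(0).normal(size=(4, 8, 8))
out = residual_sc_attention(feat)
print(out.shape)
```

The residual connection is the key design choice: even if the attention weights are uninformative early in training, the identity path keeps gradients flowing through the generator.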
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10770017
DOI: http://dx.doi.org/10.1007/s13534-023-00310-x
J Microsc
January 2025
Department of Mechanical, Materials and Aerospace Engineering, University of Liverpool, Liverpool, UK.
Electron backscatter diffraction (EBSD) has developed over the last few decades into a valuable crystallographic characterisation method for a wide range of sample types. Despite these advances, issues such as the complexity of sample preparation, relatively slow acquisition, and damage in beam-sensitive samples still limit the quantity and quality of interpretable data that can be obtained. To mitigate these issues, we propose a method based on subsampling of probe positions and subsequent reconstruction of the incomplete data set.
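The subsample-then-reconstruct idea can be sketched on a toy scan. The abstract does not specify the reconstruction algorithm, so the nearest-neighbour interpolation below is only a placeholder for whatever inpainting scheme the authors use, and the synthetic "scan" stands in for one scalar EBSD map channel.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ground truth standing in for a full scan (one scalar map channel).
h, w = 32, 32
yy, xx = np.mgrid[0:h, 0:w]
truth = np.sin(yy / 5.0) + np.cos(xx / 7.0)

# Subsample: visit only ~25% of probe positions, reducing dose and dwell time.
mask = rng.random((h, w)) < 0.25
samples = np.argwhere(mask)          # (N, 2) visited probe coordinates
values = truth[mask]                 # measured values at those positions

# Reconstruct every pixel from its nearest visited position.
coords = np.stack([yy.ravel(), xx.ravel()], axis=1)            # (H*W, 2)
d2 = ((coords[:, None, :] - samples[None, :, :]) ** 2).sum(-1)  # squared distances
recon = values[d2.argmin(axis=1)].reshape(h, w)

err = np.abs(recon - truth).mean()
print(f"mean abs error: {err:.3f}")
```

Visited positions are reproduced exactly; the interesting trade-off is how reconstruction error grows as the sampling fraction shrinks.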
Small Methods
January 2025
Dept. Chemical Engineering and Biotechnology, University of Cambridge, Cambridge, CB3 0AS, UK.
The integration of Machine Learning (ML) with super-resolution microscopy represents a transformative advancement in biomedical research. Recent advances in ML, particularly deep learning (DL), have significantly enhanced image processing tasks, such as denoising and reconstruction. This review explores the growing potential of automation in super-resolution microscopy, focusing on how DL can enable autonomous imaging tasks.
Sensors (Basel)
January 2025
Department of Control and Computer Engineering (DAUIN), Politecnico di Torino, Corso Duca degli Abruzzi, 24, 10129 Torino, Italy.
The increasing demand for hazelnut kernels is driving an upsurge in hazelnut cultivation worldwide, but ongoing climate change threatens this crop, causing yield decreases and leaving orchards subject to uncontrolled pathogen and parasite attacks. Technical advances in precision agriculture are expected to help farmers control the physio-pathological status of their crops more efficiently. Here, we report a straightforward approach to monitoring hazelnut trees in an open field using aerial multispectral pictures taken by drones.
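Multispectral canopy monitoring of this kind typically reduces each image to vegetation indices. The abstract does not name the indices used, so the NDVI computation below is an assumed, commonly used example rather than the authors' pipeline.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalised Difference Vegetation Index from NIR and red reflectance bands.

    Healthy canopy reflects strongly in the near-infrared and absorbs red,
    so NDVI approaches 1; bare soil or stressed vegetation sits near 0.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance patches: top row healthy canopy, bottom row bare soil.
nir = np.array([[0.60, 0.55], [0.20, 0.18]])
red = np.array([[0.08, 0.10], [0.15, 0.16]])
print(ndvi(nir, red).round(2))
```

In a drone survey the same computation runs per pixel over the orthomosaic, and per-tree statistics are then pooled from segmented crowns.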
Sensors (Basel)
January 2025
Department of AI & Big Data, Honam University, Gwangju 62399, Republic of Korea.
This study proposes an advanced plant disease classification framework leveraging the Attention Score-Based Multi-Vision Transformer (Multi-ViT) model. The framework introduces a novel attention mechanism to dynamically prioritize relevant features from multiple leaf images, overcoming the limitations of single-leaf-based diagnoses. Building on the Vision Transformer (ViT) architecture, the Multi-ViT model aggregates diverse feature representations by combining outputs from multiple ViTs, each capturing unique visual patterns.
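The multi-view aggregation step can be sketched numerically. The details of the Multi-ViT attention mechanism are not given in the abstract, so the block below assumes the simplest form: one feature vector and one scalar relevance score per leaf image, fused by a softmax-weighted sum.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def aggregate_views(view_feats, view_scores):
    """Fuse per-leaf features with softmax-normalised attention scores.

    view_feats:  (V, D) one feature vector per leaf image (e.g. ViT CLS tokens)
    view_scores: (V,)   scalar relevance score per view
    Returns the (D,) attended representation fed to the classifier head.
    """
    w = softmax(view_scores)   # (V,) non-negative weights summing to 1
    return w @ view_feats

feats = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
scores = np.array([2.0, 0.5, 0.5])
fused = aggregate_views(feats, scores)
print(fused)
```

The point of scoring views rather than averaging them is that a single clearly symptomatic leaf can dominate the fused representation instead of being diluted by healthy-looking leaves.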
Sensors (Basel)
January 2025
Instituto de Telecomunicações (IT), Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisbon, Portugal.
Shrimp farming is a growing industry, and automating certain processes within aquaculture tanks is becoming increasingly important to improve efficiency. This paper proposes an image-based system designed to address four key tasks in an aquaculture tank: estimating shrimp length and weight, counting shrimps, and evaluating feed pellet attractiveness. A setup was designed, including a camera connected to a Raspberry Pi computer, to capture high-quality images around a feeding plate during feeding moments.
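The length and weight tasks reduce to pixel-to-millimetre calibration followed by a length-weight relation. The sketch below is illustrative only: the reference-object dimensions and the allometric constants `a` and `b` are hypothetical placeholders, not values from the paper, and in practice both would be calibrated per setup and fitted per species.

```python
def mm_per_pixel(ref_len_mm, ref_len_px):
    """Scale factor from a reference object of known size in the scene
    (e.g. the feeding plate visible in every frame)."""
    return ref_len_mm / ref_len_px

def shrimp_length_mm(length_px, scale):
    """Convert a measured body length in pixels to millimetres."""
    return length_px * scale

def shrimp_weight_g(length_mm, a=0.01, b=3.0):
    """Allometric length-weight relation W = a * L^b, with L in cm.
    a and b are illustrative defaults, not fitted values."""
    return a * (length_mm / 10.0) ** b

scale = mm_per_pixel(200.0, 400.0)   # assumed plate: 200 mm spans 400 px
L = shrimp_length_mm(160.0, scale)   # a 160 px shrimp at 0.5 mm/px
print(L, shrimp_weight_g(L))
```

On a fixed rig like the Raspberry Pi camera over the feeding plate, the scale factor can be calibrated once, which is what makes single-camera length estimation workable without depth sensing.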