Training deep learning models for image registration or segmentation of dynamic contrast-enhanced (DCE) MRI data is challenging, mainly because of the wide variation in contrast enhancement within and between patients. Training an effective model requires a large dataset, which is expensive and time-consuming to acquire. Style transfer offers an alternative: new images can be generated from existing ones. In this study, our objective is to develop a style transfer method that incorporates spatio-temporal information to either add or remove contrast enhancement from an existing image.

We propose a temporal image-to-image style transfer network (TIST-Net), consisting of an auto-encoder combined with convolutional long short-term memory networks. This enables disentanglement of the content and style latent spaces of the time-series data, using spatio-temporal information to learn and predict key structures. To generate new images, we use deformable and adaptive convolutions, which allow fine-grained control over how the content and style latent spaces are combined. We evaluate our method using popular metrics and a previously proposed contrast-weighted structural similarity index measure (SSIM). We also perform a clinical evaluation, in which experts rank images generated by multiple methods.

Our model achieves state-of-the-art performance on three datasets (kidney, prostate and uterus), achieving an SSIM of 0.91 ± 0.03, 0.73 ± 0.04 and 0.88 ± 0.04, respectively, when performing style transfer between a non-enhanced image and a contrast-enhanced image. Similarly, SSIM results for style transfer from a contrast-enhanced image to a non-enhanced image were 0.89 ± 0.03, 0.82 ± 0.03 and 0.87 ± 0.03. In the clinical evaluation, our method was consistently ranked higher than the other approaches.

TIST-Net can be used to generate new DCE-MRI data from existing images. In future, this may improve models for tasks such as image registration or segmentation by allowing small training datasets to be expanded.
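The spatio-temporal component is the part of the architecture most easily illustrated in code. Below is a minimal PyTorch sketch of a convolutional LSTM cell of the kind TIST-Net pairs with its auto-encoder; the layer sizes and the `encode_sequence` helper are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell: an LSTM whose gates are
    computed by convolutions, so hidden state keeps spatial layout."""

    def __init__(self, in_ch, hid_ch, kernel_size=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One convolution produces all four gates at once:
        # input (i), forget (f), output (o) and candidate (g).
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)  # memory update
        h = torch.sigmoid(o) * torch.tanh(c)                         # new hidden state
        return h, c

def encode_sequence(cell, frames):
    """Roll the cell over a DCE-MRI series given as a list of
    (B, C, H, W) frames; returns a spatio-temporal summary map."""
    b, _, hgt, wid = frames[0].shape
    h = frames[0].new_zeros(b, cell.hid_ch, hgt, wid)
    c = torch.zeros_like(h)
    for x in frames:
        h, c = cell(x, (h, c))
    return h

# Usage: six single-channel frames of a dynamic series.
frames = [torch.randn(2, 1, 64, 64) for _ in range(6)]
summary = encode_sequence(ConvLSTMCell(in_ch=1, hid_ch=32), frames)
```

In the full model, such a summary would be split into content and style codes and recombined via the deformable and adaptive convolutions described above; those components are not sketched here.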

Source: http://dx.doi.org/10.1088/1361-6560/ad4193

Publication Analysis

Top Keywords

style transfer (24); dynamic contrast (8); contrast enhanced (8); image registration (8); registration segmentation (8); contrast enhancement (8); generate images (8); existing images (8); content style (8); style latent (8)

Similar Publications

To improve the expressiveness and realism of illustration images, this work combines an attention mechanism with a cycle-consistency adversarial network and proposes an efficient style transfer method for illustrations. The model exploits the image restoration and style transfer capabilities of both components and introduces an improved attention module that adaptively highlights the key visual elements of an illustration, preserving artistic integrity during the style transfer process. A series of quantitative and qualitative experiments demonstrates high-quality style transfer while retaining the original features of the illustration.
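For readers unfamiliar with the cycle-consistency objective mentioned above, the following PyTorch sketch shows the standard CycleGAN reconstruction loss; the generator names `G_ab` and `G_ba` are hypothetical placeholders, and the attention module described in the abstract would sit inside those generators.

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G_ab, G_ba, real_a, real_b, lam=10.0):
    """Standard CycleGAN cycle loss: an image must survive a round
    trip through both generators. G_ab maps domain A -> B and
    G_ba maps B -> A."""
    rec_a = G_ba(G_ab(real_a))  # A -> B -> A
    rec_b = G_ab(G_ba(real_b))  # B -> A -> B
    return lam * (l1(rec_a, real_a) + l1(rec_b, real_b))
```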

Transferring knowledge learned from standard GelSight sensors to other visuotactile sensors is appealing for reducing data collection and annotation. However, such cross-sensor transfer is challenging due to differences between sensors in internal light sources, imaging effects, and elastomer properties. By treating the data collected from each type of visuotactile sensor as a domain, we propose a few-sample-driven style-to-content unsupervised domain adaptation method to reduce cross-sensor domain gaps.
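The abstract does not give the exact objective, but a common way to shrink such cross-sensor gaps is to align feature statistics between domains. The sketch below is a generic alignment term, not the paper's method: per-channel means and standard deviations serve as a crude proxy for sensor "style" such as lighting.

```python
import torch

def channel_stats(feat):
    """Per-channel mean and std of a (B, C, H, W) feature map; these
    first- and second-order statistics capture appearance ('style')
    differences between sensors."""
    return feat.mean(dim=(0, 2, 3)), feat.std(dim=(0, 2, 3))

def sensor_alignment_loss(feat_src, feat_tgt):
    """Penalize mismatched feature statistics between the source
    (e.g. GelSight) and target sensor domains."""
    mu_s, sd_s = channel_stats(feat_src)
    mu_t, sd_t = channel_stats(feat_tgt)
    return (mu_s - mu_t).pow(2).mean() + (sd_s - sd_t).pow(2).mean()
```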

Table Extraction with Table Data Using VGG-19 Deep Learning Model.

Sensors (Basel)

January 2025

Faculty of Science and Environmental Studies, Department of Computer Science, Lakehead University, Thunder Bay, ON P7B 5E1, Canada.

In recent years, significant progress has been achieved in understanding and processing tabular data. However, existing approaches often rely on task-specific features and model architectures, posing challenges in accurately extracting table structures amidst diverse layouts, styles, and noise contamination. This study introduces a comprehensive deep learning methodology tailored to the precise identification and extraction of rows and columns from document images containing tables.
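As a point of reference, the torchvision API makes it straightforward to reuse a pretrained VGG-19 as a feature backbone for document images. The snippet below is a generic sketch under that assumption; the paper's row/column detection heads are not shown.

```python
import torch
from torchvision import models

# Pretrained VGG-19 convolutional backbone (torchvision >= 0.13 weights API).
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()

@torch.no_grad()
def table_features(page):
    """page: (B, 3, H, W) ImageNet-normalized document image.
    Returns a (B, 512, H/32, W/32) feature map on which row/column
    prediction heads (not shown) could operate."""
    return vgg(page)
```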

Recent literature on positive youth development through sports has consistently emphasized the role of parents in developing and transferring life skills of athletes. However, related research findings are still lacking, especially within Asia. This study aimed to validate a structural relationship of perceived positive and negative parenting attitudes, basic psychological needs, life skills development, and transfer among student-athletes in South Korea.

Predicting cell morphological responses to perturbations using generative modeling.

Nat Commun

January 2025

Department of Computational Health, Institute of Computational Biology, Helmholtz Zentrum München, Munich, Germany.

Advancements in high-throughput screenings enable the exploration of rich phenotypic readouts through high-content microscopy, expediting the development of phenotype-based drug discovery. However, analyzing large and complex high-content imaging screenings remains challenging due to incomplete sampling of perturbations and the presence of technical variations between experiments. To tackle these shortcomings, we present IMage Perturbation Autoencoder (IMPA), a generative style-transfer model predicting morphological changes of perturbations across genetic and chemical interventions.
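A toy sketch of the general idea behind such a generative style-transfer model: content comes from the image, "style" from a learned embedding of the perturbation, and the decoder predicts the perturbed morphology. All module sizes and names below are hypothetical, not IMPA's architecture.

```python
import torch
import torch.nn as nn

class PerturbationAutoencoder(nn.Module):
    """Toy content/style autoencoder: encode the image, append a
    learned perturbation embedding, decode the predicted phenotype."""

    def __init__(self, n_perturbations, emb_dim=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU())
        self.embed = nn.Embedding(n_perturbations, emb_dim)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64 + emb_dim, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1))

    def forward(self, img, perturbation_id):
        z = self.enc(img)                                # content code
        s = self.embed(perturbation_id)                  # perturbation 'style'
        s = s[:, :, None, None].expand(-1, -1, *z.shape[2:])
        return self.dec(torch.cat([z, s], dim=1))        # predicted morphology
```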
