Aims And Objectives: The aim of this study was to compare the single matrix approach and the slice-by-slice approach for computing the singular value decomposition (SVD) to achieve near-lossless compression of PET/CT images.

Materials And Methods: The parameters used for comparison were SVD computation time, percentage compression, and percentage difference between ROI counts on compressed and original images. The SVD of 49 F-18-FDG PET/CT studies (33 370 PET/CT images) was computed using both approaches. The smaller singular values contributing insignificant information to the image were truncated, and the compressed image was then reconstructed. A mask (101 × 101 pixels) was used to extract the ROI counts from the compressed and original images. Two nuclear medicine physicians compared the compressed images with their corresponding original images for loss of clinical detail and the presence of generated artifacts. Structural Similarity Index Measure, blur, brightness, contrast per pixel, and global contrast factor were used for objective assessment of image quality. The Wilcoxon test was applied to test for statistically significant differences between the comparison parameters at alpha = 0.05.
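The truncation-and-reconstruction step described above can be sketched per slice with NumPy. This is a minimal illustration, not the authors' implementation: the energy-retention threshold used to decide which singular values are "insignificant" is an assumption here, since the abstract does not state the exact truncation criterion, and the all-ones ROI mask is a stand-in for the study's 101 × 101 mask.

```python
import numpy as np

def compress_slice(img, energy=0.9999):
    """Truncated-SVD compression of one 2-D image slice.

    Keeps the smallest rank k whose singular values retain `energy`
    of the total spectral energy (assumed criterion; the paper does
    not specify its exact threshold). Returns the SVD factors only,
    which is what would actually be stored.
    """
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cum, energy) + 1)
    return U[:, :k], s[:k], Vt[:k, :]

def reconstruct(U, s, Vt):
    """Rebuild the compressed slice from its truncated factors."""
    return (U * s) @ Vt

# Synthetic Poisson "counts" slice standing in for a PET slice.
rng = np.random.default_rng(0)
slice_ = rng.poisson(50, size=(101, 101)).astype(float)

U, s, Vt = compress_slice(slice_)
rec = reconstruct(U, s, Vt)

# Percentage difference between ROI counts on compressed and
# original images, as in the study (mask here covers the whole slice).
mask = np.ones_like(slice_, dtype=bool)
pct_diff = 100 * abs(rec[mask].sum() - slice_[mask].sum()) / slice_[mask].sum()
```

The slice-by-slice approach applies this routine to each 2-D slice independently, so each slice gets its own truncation rank; the single matrix approach instead factorizes one large matrix assembled from all slices.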

Results: The nuclear medicine physicians found the compressed images identical to their corresponding original images. The values of the comparison parameters were significantly larger for the single matrix approach than for the slice-by-slice approach. The maximum percentage error between the compressed and original images was less than 5%.

Conclusions: Up to 64% and 44% near-lossless compression of PET and CT images, respectively, was achieved using the slice-by-slice approach, and up to 58% and 53%, respectively, using the single matrix approach.
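The percentage compression figures follow from simple storage arithmetic: a rank-k truncated SVD of an m × n slice stores k(m + n + 1) values instead of mn. The sketch below shows that arithmetic; the slice sizes and ranks are purely illustrative assumptions, not values taken from the study.

```python
def pct_compression(m, n, k):
    """Percentage compression from storing a rank-k truncated SVD
    (U_k: m*k values, s_k: k values, V_k: n*k values) instead of
    the full m*n pixel matrix."""
    stored = k * (m + n + 1)
    return 100 * (1 - stored / (m * n))

# Hypothetical sizes: a 128 x 128 PET slice kept at rank 25,
# a 512 x 512 CT slice kept at rank 120 (illustrative only).
pet = pct_compression(128, 128, 25)
ct = pct_compression(512, 512, 120)
```

Note that compression is only achieved while k(m + n + 1) < mn, i.e. roughly k < mn / (m + n); beyond that rank the truncated factors take more space than the original slice.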

Source: http://dx.doi.org/10.1097/MNM.0000000000001603


Similar Publications

State-of-the-Art Trends in Data Compression: COMPROMISE Case Study.

Entropy (Basel)

November 2024

Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, SI-2000 Maribor, Slovenia.

After a boom that coincided with the advent of the internet, digital cameras, digital video and audio storage and playback devices, the research on data compression has rested on its laurels for a quarter of a century. Domain-dependent lossy algorithms of the time, such as JPEG, AVC, MP3 and others, achieved remarkable compression ratios and encoding and decoding speeds with acceptable data quality, which has kept them in common use to this day. However, recent computing paradigms such as cloud computing, edge computing, the Internet of Things (IoT), and digital preservation have gradually posed new challenges, and, as a consequence, development trends in data compression are focusing on concepts that were not previously in the spotlight.

Article Synopsis
  • Video-based point cloud compression (V-PCC) is a new MPEG standard that effectively compresses both static and dynamic point clouds with various levels of quality loss.
  • In scenarios where the original point cloud isn't available, it’s important to create reduced-reference quality metrics, which can evaluate visual quality without direct comparison to the original.
  • The study proposes a new metric called PCQAML, which uses a set of 19 selected features related to various aspects of point clouds and demonstrates superior performance against existing metrics in multiple statistical measures.

Lossless and Near-Lossless Compression Algorithms for Remotely Sensed Hyperspectral Images.

Entropy (Basel)

April 2024

Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia.

Rapid and continuous advancements in remote sensing technology have resulted in finer resolutions and higher acquisition rates of hyperspectral images (HSIs). These developments have triggered a need for new processing techniques brought about by the confined power and constrained hardware resources aboard satellites. This article proposes two novel lossless and near-lossless compression methods, employing our recent seed generation and quadrature-based square rooting algorithms, respectively.


Several lossy compressors have achieved superior compression rates for mass spectrometry (MS) data at the cost of storage precision. Currently, the impacts of precision losses on MS data processing have not been thoroughly evaluated, which is critical for the future development of lossy compressors. We first evaluated different storage precision (32 bit and 64 bit) in lossless mzML files.


Lossless and near-lossless image compression is of paramount importance to professional users in many technical fields, such as medicine, remote sensing, precision engineering and scientific research. But despite rapidly growing research interests in learning-based image compression, no published method offers both lossless and near-lossless modes. In this paper, we propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.

