Medical imaging is an essential data source leveraged by healthcare systems worldwide. In pathology, histopathology images are used for cancer diagnosis, but these images are highly complex, and their analysis by pathologists requires large amounts of time and effort. Meanwhile, although convolutional neural networks (CNNs) have produced near-human results in image-processing tasks, they demand long processing times and high computational power. In this paper, we implement a quantized ResNet model on two histopathology image datasets to optimize inference power consumption. We evaluate our method using classification accuracy, energy estimation, and hardware-utilization metrics. First, the original RGB-colored images are used for training; compression methods such as channel reduction and sparsity are then applied. Our results show a 6% accuracy increase from the 32-bit RGB baseline to the optimized representation, sparsity on RGB at a lower bit-width, i.e., <8:8>. For energy estimation on the CNN model used, we found that 32-bit RGB mode consumes considerably more energy than the lower bit-width and compressed color modes. Moreover, we show that lower bit-width implementations yield higher resource utilization and a lower memory-bottleneck ratio. This work is suitable for inference on energy-limited devices, which are increasingly used in Internet of Things (IoT) systems that support healthcare.
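As a rough illustration of the low-bit-width inference the abstract describes, the sketch below applies PyTorch's eager-mode post-training INT8 quantization to a quantization-ready torchvision ResNet. It is a minimal stand-in, not the paper's pipeline: the exact <8:8> fixed-point format, channel reduction, and sparsity transforms are not reproduced, and the calibration tiles here are random placeholders for real histopathology patches.

```python
import torch
import torchvision

# Minimal sketch: torchvision's quantization-ready ResNet variants replace
# the residual adds with quantization-friendly ops, so eager-mode INT8
# post-training quantization works end to end.
model = torchvision.models.quantization.resnet18(weights=None, quantize=False)
model.eval()
model.fuse_model()  # fuse conv + bn (+ relu) so they quantize as one op
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")  # x86 INT8 backend
prepared = torch.quantization.prepare(model)  # insert range observers

# Calibrate the observers on representative tiles.
# (Random tensors stand in for real 224x224 RGB histopathology tiles here.)
with torch.no_grad():
    for _ in range(8):
        prepared(torch.randn(1, 3, 224, 224))

int8_model = torch.quantization.convert(prepared)  # fold observers into INT8 kernels
with torch.no_grad():
    logits = int8_model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000]) with the default classification head
```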


Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9415388
DOI: http://dx.doi.org/10.3390/mi13081364

Similar Publications

An empirical study of LLaMA3 quantization: from LLMs to MLLMs.

Vis Intell

December 2024

Department of Information Technology and Electrical Engineering, ETH Zurich, Sternwartstrasse 7, Zürich, Switzerland.

The LLaMA family, a collection of foundation language models ranging from 7B to 65B parameters, has become one of the most powerful open-source large language model (LLM) series and a popular LLM backbone for multi-modal large language models (MLLMs), widely used in computer-vision and natural-language-understanding tasks. In particular, the recently released LLaMA3 models have achieved impressive performance across domains with super-large-scale pre-training on over 15T tokens of data. Given the wide application of low-bit quantization for LLMs in resource-constrained scenarios, we explore LLaMA3's capabilities when quantized to low bit-widths.
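As a concrete, illustrative example of one such low-bit setting, the sketch below loads a LLaMA-3 checkpoint with 4-bit weight quantization through the Hugging Face transformers/bitsandbytes integration. The model id is a gated repository that requires access approval; the recipe shown (NF4 storage with bf16 compute) is one common configuration, not necessarily the one the study benchmarks.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit load of a LLaMA-3 checkpoint (gated repo; requires
# access approval and a CUDA device with bitsandbytes installed).
model_id = "meta-llama/Meta-Llama-3-8B"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # store weights in 4 bits
    bnb_4bit_quant_type="nf4",               # NormalFloat4 data type
    bnb_4bit_compute_dtype=torch.bfloat16,   # dequantize to bf16 for matmuls
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

inputs = tokenizer("Low-bit quantization trades accuracy for", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```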


Deployable mixed-precision quantization with co-learning and one-time search.

Neural Netw

January 2025

University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, 611731, China.

Article Synopsis
  • Mixed-precision quantization helps deploy deep neural networks on devices with limited resources, but optimally configuring bit-widths for different layers is still a challenge that hasn't been fully addressed.
  • This study introduces Cobits, a new framework that intelligently assigns bit-widths based on the ranges of input and quantized values, and uses a co-learning strategy to manage both shared and specific quantization parameters.
  • Experiments show that Cobits significantly outperforms existing quantization methods on popular datasets while maintaining efficiency, and it can easily adapt to different deployment scenarios; the code will be available online.
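The Cobits code itself is not reproduced here; as a toy stand-in, the sketch below implements a greedy heuristic in the same spirit, giving more bits to layers whose observed value ranges are wider, subject to an average-bit budget. The scoring rule, function names, and layer ranges are all illustrative assumptions.

```python
# Toy heuristic (not the Cobits algorithm): wider-range layers get upgraded
# to higher bit-widths first, under a fixed average-bit budget.
def assign_bitwidths(ranges, choices=(2, 4, 8), avg_budget=5.0):
    """ranges: {layer_name: observed activation span}. Returns {name: bits}."""
    order = sorted(ranges, key=ranges.get, reverse=True)  # widest range first
    bits = {name: min(choices) for name in order}         # start everyone low
    for b in sorted(choices)[1:]:                         # try 4-bit, then 8-bit
        for name in order:
            trial = dict(bits, **{name: b})
            if sum(trial.values()) / len(trial) <= avg_budget:
                bits = trial                              # upgrade fits the budget
    return bits

ranges = {"conv1": 12.3, "layer1.0.conv1": 4.1, "layer1.0.conv2": 7.8, "fc": 1.9}
print(assign_bitwidths(ranges))
# {'conv1': 8, 'layer1.0.conv2': 4, 'layer1.0.conv1': 4, 'fc': 4}
```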

To reduce the power consumption of digital signal processing (DSP) in coherent optical communication systems, a low-complexity equalization scheme for the DSP flow of a 400 Gb/s DP-16QAM system is proposed. The scheme is based on the Fermat number transform (FNT) and sequentially performs static equalization (SE) and dynamic equalization (DE) in the transform domain. For different transmission distances, it finds the optimal operating point under the mutual constraint between transform length and data bit width, achieving low complexity together with optimal performance.
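To make the FNT idea concrete, the sketch below implements a number-theoretic transform over the Fermat prime 2^16 + 1 and uses it for transform-domain filtering via pointwise products, the same mechanism that underlies SE/DE in a transform-domain equalizer. With omega = 2 as the root of unity, every twiddle factor is a power of two, i.e., a bit shift in hardware, which is the source of the complexity savings. This is a textbook illustration, not the paper's equalizer.

```python
# Illustrative FNT over the Fermat prime F4 = 2^16 + 1 = 65537.
P = (1 << 16) + 1          # Fermat prime F4
OMEGA = 2                  # 2 has multiplicative order 32 mod F4
N = 32                     # transform length must divide ord(OMEGA)

def fnt(x, root):
    """Naive O(N^2) number-theoretic transform mod P."""
    return [sum(x[n] * pow(root, n * k, P) for n in range(N)) % P
            for k in range(N)]

def ifnt(X):
    inv_n = pow(N, P - 2, P)             # modular inverse of N (Fermat's little theorem)
    inv_root = pow(OMEGA, P - 2, P)      # omega^-1 mod P
    return [(v * inv_n) % P for v in fnt(X, inv_root)]

def circular_convolve(x, h):
    """Length-N circular convolution via pointwise products in the FNT domain."""
    X, H = fnt(x, OMEGA), fnt(h, OMEGA)
    return ifnt([(a * b) % P for a, b in zip(X, H)])

x = [1, 2, 3, 4] + [0] * (N - 4)         # toy received samples
h = [5, 6, 7] + [0] * (N - 3)            # toy equalizer taps
print(circular_convolve(x, h)[:6])       # [5, 16, 34, 52, 45, 28]
```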


High-Throughput Polar Code Decoders with Information Bottleneck Quantization.

Entropy (Basel)

May 2024

Microelectronic Systems Design Research Group, RPTU Kaiserslautern-Landau, 67663 Kaiserslautern, Germany.

Article Synopsis
  • The forward error correction (FEC) unit is a key component in digital baseband processing that demands high computational effort and power, making its efficient implementation critical for future mobile broadband standards.
  • Quantization affects the area, power consumption, and throughput of FEC decoders; while lower bit widths help with efficiency, they can reduce error correction performance.
  • This paper introduces optimized Fast Simplified Successive-Cancellation (Fast-SSC) polar code decoder implementations using a non-uniform quantization method based on the Information Bottleneck, achieving improvements in area and energy efficiency compared to existing decoders.
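The Information Bottleneck construction itself is involved; as a plainly named stand-in, the sketch below fits a Lloyd-Max-style non-uniform quantizer to simulated channel LLRs. It shares the motivation, a few-bit alphabet with levels placed where the LLR distribution carries the most information, without reproducing the IB design; the toy channel and all names are assumptions.

```python
import numpy as np

# Lloyd-Max-style non-uniform quantizer as a stand-in for the IB design.
def lloyd_max(samples, n_levels=8, iters=50):
    """Iteratively refine n_levels reconstruction levels for `samples`."""
    levels = np.quantile(samples, np.linspace(0.05, 0.95, n_levels))
    edges = (levels[:-1] + levels[1:]) / 2                # decision boundaries
    for _ in range(iters):
        idx = np.searchsorted(edges, samples)             # nearest-level index
        for j in range(n_levels):                         # centroid update
            sel = samples[idx == j]
            if sel.size:
                levels[j] = sel.mean()
        levels.sort()
        edges = (levels[:-1] + levels[1:]) / 2
    return levels, edges

# Toy channel: BPSK over AWGN, LLR = 2y / sigma^2.
rng = np.random.default_rng(0)
sym = rng.choice([-1.0, 1.0], size=20_000)
sigma = 0.8
llr = 2.0 * (sym + sigma * rng.standard_normal(sym.size)) / sigma**2

levels, edges = lloyd_max(llr, n_levels=8)                # 3-bit LLR alphabet
quantized = levels[np.searchsorted(edges, llr)]
print(np.round(levels, 2))  # levels cluster around the two LLR modes
```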
