Cancer instils fear in many people worldwide because of its lethal nature. In most cases, however, cancer can be cured if it is detected early and treated properly. Computer-aided diagnosis (CAD) is gaining traction as an initial screening tool for many illnesses, including cancer. Deep learning (DL), an artificial intelligence (AI) approach widely used in CAD, attempts to mimic the cognitive processes of the human brain. Various DL algorithms have been applied to breast cancer diagnosis and have achieved good accuracy thanks to DL's strong feature-learning capability. For real-time applications, however, deep neural networks (NNs) impose high computational costs in power, speed, and resource usage. With this in mind, this work proposes a miniaturised NN that reduces the number of parameters and the computational complexity for hardware deployment. The quantised NN is then accelerated on field-programmable gate arrays (FPGAs) to increase detection speed and minimise power consumption while maintaining high accuracy, offering a new avenue for assisting radiologists in breast cancer diagnosis from digital mammograms. Evaluated on the benchmark datasets DDSM, MIAS, and INbreast, the proposed method achieves high classification rates, reaching an accuracy of 99.38% on the combined dataset.
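The quantisation step behind such hardware deployment can be illustrated with a minimal sketch of symmetric post-training quantisation, which maps float weights to small integers before they are burned into FPGA fabric. The function names and example weights below are hypothetical, not taken from the paper:

```python
# Minimal sketch of symmetric post-training quantisation (hypothetical
# helper names and weights; the paper's exact scheme may differ).

def quantize(weights, bits=8):
    """Map float weights to signed integers with a shared scale factor."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return [v * scale for v in q]

w = [0.52, -0.98, 0.03, 0.74]
q, s = quantize(w)
print(q)   # → [67, -127, 4, 96]: each weight now fits in 8 bits
```

The reconstruction error per weight is bounded by half the scale factor, which is the accuracy/size trade-off the bitwidth controls.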
DOI: http://dx.doi.org/10.1007/s11517-023-02883-2
Philos Trans A Math Phys Eng Sci
January 2025
Indian Institute of Technology Gandhinagar, Gandhinagar, Gujarat, India.
Modern language models such as bidirectional encoder representations from transformers (BERT) have revolutionized natural language processing (NLP) tasks but are computationally intensive, limiting their deployment on edge devices. This paper presents an energy-efficient accelerator design tailored for encoder-based language models, enabling their integration into mobile and edge computing environments. The data-flow-aware hardware accelerator, inspired by Simba, employs approximate fixed-point POSIT-based multipliers and high-bandwidth memory (HBM) to achieve significant improvements in computational efficiency, power consumption, area, and latency compared with the hardware-realized scalable accelerator Simba.
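The kind of arithmetic such a multiplier replaces can be sketched with a generic fixed-point multiply, where shifting away low-order product bits is exactly where approximate multipliers save area. This is an illustrative Q4.12 fixed-point example, not the paper's POSIT design:

```python
# Illustrative Q4.12 fixed-point multiply, the operation a hardware MAC
# unit performs; a generic sketch, not the paper's approximate POSIT design.

FRAC_BITS = 12

def to_fixed(x):
    """Encode a float as an integer with 12 fractional bits."""
    return int(round(x * (1 << FRAC_BITS)))

def fixed_mul(a, b):
    # The raw product carries 2*FRAC_BITS fractional bits; shifting back
    # truncates the low bits (where approximate designs cut hardware cost).
    return (a * b) >> FRAC_BITS

def to_float(x):
    return x / (1 << FRAC_BITS)

a, b = to_fixed(1.5), to_fixed(-0.25)
print(to_float(fixed_mul(a, b)))   # → -0.375
```

POSIT formats go further by using a variable-length exponent (the "regime") so precision concentrates near 1.0, but the multiply-shift-truncate pattern above is the common core.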
Curr Med Imaging
January 2025
School of Life Sciences, Tiangong University, Tianjin 300387, China.
Objective: The objective of this research is to enhance pneumonia detection in chest X-rays by leveraging a novel hybrid deep learning model that combines Convolutional Neural Networks (CNNs) with modified Swin Transformer blocks. This study aims to significantly improve diagnostic accuracy, reduce misclassifications, and provide a robust, deployable solution for underdeveloped regions where access to conventional diagnostics and treatment is limited.
Methods: The study developed a hybrid architecture in which CNN layers and modified Swin Transformer blocks operate seamlessly within a single model.
Comput Biol Med
January 2025
Department of Computer Science, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran. Electronic address:
Tiny machine learning (TinyML) and edge intelligence have emerged as pivotal paradigms for enabling machine learning on resource-constrained devices situated at the extreme edge of networks. In this paper, we explore the transformative potential of TinyML in facilitating pervasive, low-power cardiovascular monitoring and real-time analytics for patients with cardiac anomalies, leveraging wearable devices as the primary interface. To begin with, we provide an overview of TinyML software and hardware enablers, accompanied by an examination of networking solutions such as low-power wide-area networks (LPWANs) that facilitate the seamless deployment of TinyML frameworks.
Sensors (Basel)
December 2024
Intelligent Embedded Systems of Computer Science, University of Duisburg-Essen, 47057 Duisburg, Germany.
This study presents a comprehensive workflow for developing and deploying Multi-Layer Perceptron (MLP)-based soft sensors on embedded FPGAs, addressing diverse deployment objectives. The proposed workflow extends our prior research by introducing greater model adaptability. It supports various configurations, spanning layer counts, neuron counts, and quantization bitwidths, to accommodate the constraints and capabilities of different FPGA platforms.
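The design space the workflow explores (layer counts times neuron counts) can be pictured with a tiny pure-Python MLP forward pass parameterized by its layer shapes. The weights and topology below are placeholders for illustration, not values from the study:

```python
# Pure-Python MLP forward pass parameterized by layer shapes, mirroring
# the configurable (layers x neurons) design space; weights are made up.

def relu(x):
    return x if x > 0.0 else 0.0

def mlp_forward(x, layers):
    """layers: list of (weight_matrix, bias_vector) pairs, one per layer."""
    for W, b in layers:
        x = [relu(sum(w * xi for w, xi in zip(row, x)) + bi)
             for row, bi in zip(W, b)]
    return x

# A 2-2-1 topology: two inputs, one hidden layer of two neurons, one output.
layers = [
    ([[0.5, -0.2], [0.1, 0.4]], [0.0, 0.1]),   # 2 -> 2
    ([[1.0, -1.0]], [0.0]),                     # 2 -> 1
]
print(mlp_forward([1.0, 2.0], layers))
```

Swapping in wider matrices or more `(W, b)` pairs changes the topology without touching the forward-pass code, which is the adaptability an FPGA generator has to reproduce in hardware.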
Nat Comput Sci
January 2025
IBM Research Europe, Rüschlikon, Switzerland.
Large language models (LLMs), with their remarkable generative capacities, have greatly impacted a range of fields, but they face scalability challenges due to their large parameter counts, which result in high costs for training and inference. The trend of increasing model sizes exacerbates these challenges, particularly in terms of memory footprint, latency, and energy consumption. Here we explore the deployment of 'mixture of experts' (MoE) networks, which use conditional computing to keep computational demands low despite having many parameters, on three-dimensional (3D) non-volatile memory (NVM)-based analog in-memory computing (AIMC) hardware.
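The conditional-computing idea behind MoE networks can be sketched with toy top-k routing: a gate scores all experts, but only the k best actually run, so compute stays low even though total parameter count is large. The gate scores and expert functions below are invented for illustration:

```python
# Toy top-k mixture-of-experts routing: only k experts execute per input,
# which is how MoEs decouple compute cost from total parameter count.
# Gate scores and expert functions here are made up for illustration.

def top_k_route(scores, k=2):
    """Pick the k highest-scoring experts and normalize their weights."""
    idx = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    total = sum(scores[i] for i in idx)
    return [(i, scores[i] / total) for i in idx]

# Four "experts", each a simple function standing in for a feed-forward block.
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x - 3, lambda x: x * x]

def moe(x, gate_scores, k=2):
    # Only the routed experts are evaluated; the other two never run.
    return sum(w * experts[i](x) for i, w in top_k_route(gate_scores, k))

print(moe(3.0, [0.1, 0.6, 0.05, 0.3]))   # experts 1 and 3 fire
```

On AIMC hardware the appeal is that each expert's weights can sit in its own non-volatile crossbar tile, so unselected experts cost neither data movement nor computation.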