Cancer is an illness that instils fear in many people throughout the world because of its lethal nature. In most cases, however, cancer can be cured if it is detected early and treated properly. Computer-aided diagnosis (CAD) is gaining traction as an initial screening test for many illnesses, including cancer. Deep learning (DL), an artificial intelligence (AI) approach that attempts to mimic the cognitive processes of the human brain, powers many CAD systems. Various DL algorithms have been applied to breast cancer diagnosis and have achieved adequate accuracy thanks to DL's strong feature-learning capability. In real-time applications, however, deep neural networks (NNs) carry a high computational cost in terms of power, speed, and resource usage. With this in mind, this work proposes a miniaturised, quantised NN that reduces the parameter count and computational complexity for hardware deployment. The quantised NN is then accelerated on field-programmable gate arrays (FPGAs) to increase detection speed and minimise power consumption while preserving high accuracy, providing a new avenue for assisting radiologists in diagnosing breast cancer from digital mammograms. Evaluated on the benchmark datasets DDSM, MIAS, and INbreast, the proposed method achieves high classification rates, reaching an accuracy of 99.38% on the combined dataset.
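To make the quantisation step concrete, the sketch below applies post-training weight quantisation to a deliberately small CNN, assuming PyTorch. The architecture, the 8-bit symmetric per-tensor scheme, and all names are illustrative assumptions, not the paper's actual model or toolflow.

```python
# Minimal sketch of 8-bit weight quantisation for a small CNN classifier.
# Assumes PyTorch; layer sizes and the symmetric per-tensor scheme are
# illustrative, not the architecture or scheme reported in the paper.
import torch
import torch.nn as nn

class TinyMammogramNet(nn.Module):
    """A deliberately small CNN, in the spirit of a miniaturised model.
    Assumes 224x224 single-channel mammogram patches."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 56 * 56, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def quantize_weights_int8(model: nn.Module):
    """Symmetric per-tensor post-training quantisation of all parameters.
    Stores int8 tensors plus a float scale, as one would before
    exporting to fixed-point FPGA logic."""
    quantized = {}
    for name, p in model.named_parameters():
        scale = p.detach().abs().max() / 127.0
        q = torch.clamp((p.detach() / scale).round(), -127, 127).to(torch.int8)
        quantized[name] = (q, scale)  # dequantise with q.float() * scale
    return quantized

model = TinyMammogramNet()
qweights = quantize_weights_int8(model)
print({k: v[0].dtype for k, v in qweights.items()})  # all torch.int8
```

In an actual FPGA flow, the int8 tensors and their scales would be exported to fixed-point logic rather than dequantised in software.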


Source: http://dx.doi.org/10.1007/s11517-023-02883-2

Publication Analysis

Top Keywords

hardware deployment (8)
deep learning (8)
breast cancer (8)
cancer diagnosis (8)
computational complexity (8)
cancer (5)
deployment deep (4)
learning model (4)
model classification (4)
classification breast (4)

Similar Publications

Modern language models such as bidirectional encoder representations from transformers (BERT) have revolutionized natural language processing (NLP) tasks but are computationally intensive, limiting their deployment on edge devices. This paper presents an energy-efficient accelerator design tailored to encoder-based language models, enabling their integration into mobile and edge computing environments. The data-flow-aware design, inspired by Simba, uses approximate fixed-point POSIT-based multipliers and high-bandwidth memory (HBM), achieving significant improvements in computational efficiency, power consumption, area, and latency over the hardware-realized scalable accelerator Simba.
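As an illustration of the arithmetic such accelerators cheapen, here is a minimal fixed-point multiply with bit truncation. The Q-format and the truncation-based approximation are assumptions for illustration; real POSIT arithmetic is more involved than plain fixed-point.

```python
# Illustrative sketch of an approximate fixed-point multiply of the kind
# hardware accelerators substitute for float multipliers. The Q-format and
# truncation scheme are assumptions, not the paper's exact POSIT arithmetic.
def to_fixed(x: float, frac_bits: int = 8) -> int:
    """Encode a float as a signed fixed-point integer with frac_bits
    fractional bits (Qn.8 here)."""
    return int(round(x * (1 << frac_bits)))

def approx_fixed_mul(a: int, b: int, frac_bits: int = 8) -> int:
    """Multiply two fixed-point values, truncating (not rounding) the
    low-order bits -- a common cheap approximation in hardware."""
    return (a * b) >> frac_bits  # arithmetic shift discards low bits

x, y = to_fixed(0.75), to_fixed(-1.5)
print(approx_fixed_mul(x, y) / (1 << 8))  # -1.125, within truncation error
```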


Objective: The objective of this research is to enhance pneumonia detection in chest X-rays by leveraging a novel hybrid deep learning model that combines Convolutional Neural Networks (CNNs) with modified Swin Transformer blocks. This study aims to significantly improve diagnostic accuracy, reduce misclassifications, and provide a robust, deployable solution for underdeveloped regions where access to conventional diagnostics and treatment is limited.

Methods: The study developed a hybrid architecture that integrates CNNs with modified Swin Transformer blocks so that the two operate seamlessly within a single model.
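A minimal sketch of the general pattern, assuming PyTorch: a convolutional front end extracts local features, and its flattened feature map is fed as tokens to a transformer encoder for global context. nn.TransformerEncoderLayer stands in for the paper's modified Swin blocks (which use windowed attention); all shapes and names are illustrative.

```python
# Minimal sketch of a CNN front end feeding a transformer encoder block.
# A stock TransformerEncoderLayer stands in for the paper's modified Swin
# blocks; sizes are illustrative, not the published architecture.
import torch
import torch.nn as nn

class HybridCNNTransformer(nn.Module):
    def __init__(self, num_classes: int = 2, dim: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(  # local feature extractor
            nn.Conv2d(1, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.encoder = nn.TransformerEncoderLayer(  # global context
            d_model=dim, nhead=4, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        f = self.cnn(x)                        # (B, dim, H/4, W/4)
        tokens = f.flatten(2).transpose(1, 2)  # (B, H*W/16, dim)
        tokens = self.encoder(tokens)
        return self.head(tokens.mean(dim=1))   # pooled classification

logits = HybridCNNTransformer()(torch.randn(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 2])
```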


Tiny machine learning (TinyML) and edge intelligence have emerged as pivotal paradigms for enabling machine learning on resource-constrained devices situated at the extreme edge of networks. In this paper, we explore the transformative potential of TinyML in facilitating pervasive, low-power cardiovascular monitoring and real-time analytics for patients with cardiac anomalies, leveraging wearable devices as the primary interface. To begin with, we provide an overview of TinyML software and hardware enablers, accompanied by an examination of networking solutions, such as low-power wide-area networks (LPWANs), that facilitate the seamless deployment of TinyML frameworks.


This study presents a comprehensive workflow for developing and deploying Multi-Layer Perceptron (MLP)-based soft sensors on embedded FPGAs, addressing diverse deployment objectives. The proposed workflow extends our prior research by introducing greater model adaptability. It supports various configurations, spanning layer counts, neuron counts, and quantization bitwidths, to accommodate the constraints and capabilities of different FPGA platforms.
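A minimal sketch of the configuration knobs such a workflow sweeps, assuming PyTorch: a width list controls layer and neuron counts, and a uniform quantizer emulates the bitwidth setting. The helper names and the quantizer are illustrative assumptions, not the authors' toolchain.

```python
# Minimal sketch of a configurable MLP of the kind such a workflow sweeps:
# layer count, neurons per layer, and a weight quantization bitwidth.
# Names and the uniform quantizer are assumptions, not the paper's flow.
import torch
import torch.nn as nn

def build_mlp(in_dim: int, hidden: list, out_dim: int) -> nn.Sequential:
    """Stack Linear+ReLU layers from a configurable width list."""
    dims, layers = [in_dim] + hidden, []
    for a, b in zip(dims, dims[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    layers.append(nn.Linear(dims[-1], out_dim))
    return nn.Sequential(*layers)

def quantize_weights(model: nn.Module, bits: int) -> None:
    """In-place symmetric uniform quantization to `bits` (e.g. 4, 6, 8),
    mimicking the bitwidth knob of an FPGA deployment flow."""
    qmax = 2 ** (bits - 1) - 1
    with torch.no_grad():
        for p in model.parameters():
            scale = p.abs().max() / qmax
            p.copy_((p / scale).round().clamp(-qmax, qmax) * scale)

mlp = build_mlp(in_dim=8, hidden=[32, 16], out_dim=1)  # 2 hidden layers
quantize_weights(mlp, bits=6)
```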


Large language models (LLMs), with their remarkable generative capacities, have greatly impacted a range of fields, but they face scalability challenges due to their large parameter counts, which result in high costs for training and inference. The trend of increasing model sizes is exacerbating these challenges, particularly in terms of memory footprint, latency, and energy consumption. Here we explore the deployment of mixture-of-experts (MoE) networks, which use conditional computation to keep computational demands low despite having many parameters, on three-dimensional (3D) non-volatile memory (NVM)-based analog in-memory computing (AIMC) hardware.
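The conditional computation at the heart of an MoE layer can be sketched in a few lines: a learned gate scores the experts and only the top-k are run per input, so most parameters stay idle. This is ordinary digital PyTorch with illustrative sizes, not the paper's analog in-memory hardware.

```python
# Minimal sketch of top-k mixture-of-experts routing: each input activates
# only k of the experts, so a fraction of the parameters is exercised per
# token. Sizes are illustrative; this is plain digital PyTorch, not AIMC.
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    def __init__(self, dim: int = 32, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)
        self.k = k

    def forward(self, x):                        # x: (batch, dim)
        scores = self.gate(x)                    # (batch, num_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = weights.softmax(dim=-1)        # renormalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):               # run only selected experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

y = TopKMoE()(torch.randn(4, 32))
print(y.shape)  # torch.Size([4, 32])
```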

