This study presents a comprehensive workflow for developing and deploying Multi-Layer Perceptron (MLP)-based soft sensors on embedded FPGAs, addressing diverse deployment objectives. The proposed workflow extends our prior research by introducing greater model adaptability. It supports various configurations, spanning layer counts, neuron counts, and quantization bitwidths, to accommodate the constraints and capabilities of different FPGA platforms. The workflow incorporates a custom-developed, open-source toolchain that facilitates quantization-aware training, integer-only inference, automated accelerator generation using VHDL templates, and synthesis alongside performance estimation. A case study on fluid flow estimation was conducted on two FPGA platforms: the AMD Spartan-7 XC7S15 and the Lattice iCE40UP5K. For precision-focused and latency-sensitive deployments, a six-layer, 60-neuron MLP accelerator quantized to 8 bits on the XC7S15 achieved an MSE of 56.56, an MAPE of 1.61%, and an inference latency of 23.87 μs. Moreover, for low-power and energy-constrained deployments, a five-layer, 30-neuron MLP accelerator quantized to 8 bits on the iCE40UP5K achieved an inference latency of 83.37 μs, a power consumption of 2.06 mW, and an energy consumption of just 0.172 μJ per inference. These results confirm the workflow's ability to identify optimal FPGA accelerators tailored to specific deployment requirements, achieving a balanced trade-off among precision, inference latency, and energy efficiency.
Sensors (Basel), December 2024. Intelligent Embedded Systems, Department of Computer Science, University of Duisburg-Essen, 47057 Duisburg, Germany.

| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11722680 | PMC |
| http://dx.doi.org/10.3390/s25010083 | DOI Listing |
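The abstract above centers on quantization-aware training and integer-only inference. As a rough illustration of what integer-only MLP inference entails, here is a minimal sketch assuming symmetric per-tensor quantization with a power-of-two fixed-point rescale; the function names, scales, and layer sizes are hypothetical and not taken from the paper's open-source toolchain.

```python
import numpy as np

def quantize(x, scale, bits=8):
    """Offline step: map floats to signed integers, q = round(x / scale)."""
    qmax = 2 ** (bits - 1) - 1
    return np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)

def requant_params(s_in, s_w, s_out, shift=16):
    """Fold the float rescale factor (s_in * s_w / s_out) into a
    fixed-point multiplier so inference needs no floating point."""
    return int(round(s_in * s_w / s_out * (1 << shift))), shift

def int_layer(q_in, q_w, q_bias, mult, shift, relu=True):
    """One dense layer: int32 accumulate, optional ReLU, requantize to int8."""
    acc = q_in.astype(np.int32) @ q_w.astype(np.int32).T + q_bias
    if relu:
        acc = np.maximum(acc, 0)
    out = (acc * mult) >> shift          # integer fixed-point rescale
    return np.clip(out, -128, 127).astype(np.int8)

# Toy end-to-end run: two hidden layers of 30 neurons, random weights.
rng = np.random.default_rng(0)
s_act, s_w = 0.05, 0.01                  # shared activation scale, weight scale
h = quantize(rng.normal(size=8), s_act)  # quantized input vector
for width in (30, 30):
    w = quantize(rng.normal(size=(width, h.size)), s_w)
    b = (rng.normal(size=width) / (s_act * s_w)).astype(np.int32)
    mult, shift = requant_params(s_act, s_w, s_act)
    h = int_layer(h, w, b, mult, shift)
print(h)  # int8 activations; the inference path used integers only
```

The reported low-power result is also internally consistent, since energy per inference is simply power times latency: 2.06 mW × 83.37 μs ≈ 0.172 μJ.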
Sensors (Basel), January 2025. Department of Mechanical and Aerospace Engineering, Politecnico di Torino, 10129 Turin, Italy.
This study investigates the potential of deploying a neural network model on an advanced programmable logic controller (PLC), specifically the Finder Opta™, for real-time inference within the predictive maintenance framework. In the context of Industry 4.0, edge computing aims to process data directly on local devices rather than relying on a cloud infrastructure.
Micromachines (Basel), December 2024. Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China.
Reconfigurable processor-based acceleration of deep convolutional neural network (DCNN) algorithms has become a widely adopted technique, with sparse neural network acceleration an especially active research area. However, many computing devices that claim high computational power still struggle to execute neural network algorithms with high efficiency, low latency, and minimal power consumption. Consequently, there remains significant room to improve the efficiency, latency, and power consumption of neural network accelerators across diverse computational scenarios.
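The core of sparse neural network acceleration is skipping zero weights entirely, so compute scales with the number of nonzeros rather than the layer dimensions. The sketch below is illustrative only, a plain compressed sparse row (CSR) matrix-vector product, not the cited accelerator's design.

```python
import numpy as np

def to_csr(dense):
    """Convert a dense weight matrix to CSR arrays (values, columns, row pointers)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        nz = np.flatnonzero(row)         # positions of nonzero weights
        values.extend(row[nz])
        col_idx.extend(nz)
        row_ptr.append(len(values))
    return np.array(values), np.array(col_idx), np.array(row_ptr)

def csr_matvec(values, col_idx, row_ptr, x):
    """y = W @ x touching only the stored nonzeros."""
    y = np.zeros(len(row_ptr) - 1)
    for r in range(len(y)):
        lo, hi = row_ptr[r], row_ptr[r + 1]
        y[r] = values[lo:hi] @ x[col_idx[lo:hi]]
    return y

# 90%-sparse layer: the sparse product matches the dense reference.
rng = np.random.default_rng(1)
w = rng.normal(size=(64, 64)) * (rng.random((64, 64)) < 0.1)
x = rng.normal(size=64)
vals, cols, ptrs = to_csr(w)
assert np.allclose(csr_matvec(vals, cols, ptrs, x), w @ x)
```

At 90% sparsity this performs roughly one tenth of the multiply-accumulates of the dense product, which is the saving sparse accelerators try to capture in hardware without the irregular memory accesses becoming the bottleneck.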
Philos Trans A Math Phys Eng Sci, January 2025. Microsystems Group, School of Engineering, Newcastle University, Newcastle upon Tyne NE1 7RU, UK.
The increasing demand for processing large volumes of data for machine learning (ML) models has pushed data bandwidth requirements beyond the capability of the traditional von Neumann architecture. In-memory computing (IMC) has recently emerged as a promising solution to address this gap by enabling distributed data storage and processing at the micro-architectural level, significantly reducing both latency and energy. In this article, we present In-Memory comPuting architecture based on Y-FlAsh technology for Coalesced Tsetlin machine inference (IMPACT), underpinned by a cutting-edge memory device, Y-Flash, fabricated in a 180 nm complementary metal oxide semiconductor (CMOS) process.
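For context, a Tsetlin machine classifies by evaluating clauses, each a logical AND over a learned subset of Boolean literals, and summing positive-polarity clause votes against negative ones. The sketch below is a conceptual software rendering with hypothetical random clause masks; IMPACT's point is to evaluate such clauses inside Y-Flash memory arrays rather than in a CPU loop like this one.

```python
import numpy as np

def clause_outputs(literals, include_mask):
    """A clause fires iff every literal it includes is 1.
    literals: (2n,) Booleans [x1..xn, ~x1..~xn]; mask: (clauses, 2n)."""
    # all(included -> literal) is equivalent to all(literal | ~included)
    return np.all(literals | ~include_mask, axis=1)

def classify(x, masks_pos, masks_neg):
    """Per-class score = positive clause votes minus negative clause votes."""
    literals = np.concatenate([x, ~x])   # literals and their negations
    scores = [clause_outputs(literals, mp).sum() - clause_outputs(literals, mn).sum()
              for mp, mn in zip(masks_pos, masks_neg)]
    return int(np.argmax(scores))

# Toy example: 2 classes, 4 input bits, 3 clauses of each polarity per class.
rng = np.random.default_rng(2)
x = rng.random(4) < 0.5
masks_pos = rng.random((2, 3, 8)) < 0.3
masks_neg = rng.random((2, 3, 8)) < 0.3
print(classify(x, masks_pos, masks_neg))
```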
Philos Trans A Math Phys Eng Sci, January 2025. RPTU Kaiserslautern-Landau, Kaiserslautern, Germany.
The advent of in-memory computing has introduced a new paradigm of computation, offering significant improvements in latency and power consumption for emerging embedded AI accelerators. Nevertheless, hardware variations and non-idealities in emerging memory technologies may significantly compromise the accuracy of inferred neural networks and result in malfunctions in safety-critical applications. This article addresses the issue from three different perspectives.
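A common way to quantify such sensitivity is a Monte Carlo study that injects multiplicative noise into a network's weights, emulating device variation, and measures how output drift grows with the variation level. The sketch below is an assumed setup for illustration, not the article's method.

```python
import numpy as np

def mlp(x, weights):
    """Plain float MLP: ReLU hidden layers, linear output."""
    for w in weights[:-1]:
        x = np.maximum(w @ x, 0.0)
    return weights[-1] @ x

rng = np.random.default_rng(3)
weights = [rng.normal(size=(32, 16)), rng.normal(size=(32, 32)),
           rng.normal(size=(4, 32))]
x = rng.normal(size=16)
ref = mlp(x, weights)                   # ideal (variation-free) output

for sigma in (0.01, 0.05, 0.10):        # relative device-variation levels
    drifts = []
    for _ in range(100):                # Monte Carlo over variation samples
        noisy = [w * rng.normal(1.0, sigma, w.shape) for w in weights]
        drifts.append(np.linalg.norm(mlp(x, noisy) - ref))
    print(f"sigma={sigma:.2f}  mean output drift={np.mean(drifts):.3f}")
```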