In this paper, an ultrafast steady-state genetic algorithm processor (GAP) is presented. Because of their heavy computational load, genetic algorithms (GAs) usually take a long time to find optimum solutions. Hardware implementation is an effective way to overcome this problem by speeding up the GA procedure. Hence, we designed a digital CMOS implementation of a GA in a [Formula: see text] process. The proposed processor is not bound to a specific application; it is a general-purpose processor capable of performing optimization in a wide range of applications. By employing speed-boosting techniques such as pipelining, parallel coarse-grained processing, parallel fitness computation, parallel selection of parents, a dual-population scheme, and support for pipelined fitness computation, the proposed processor significantly reduces processing time. Furthermore, a built-in discard operator allows the hardware to be used in constrained problems, which are very common in control applications. In the proposed design, a large search space is achievable by connecting 32-bit GAPs to extend the bit-string length of individuals in the genetic population. In addition, the proposed processor supports parallel processing, in which the GA procedure runs on several connected processors simultaneously.
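
For reference, the steady-state GA flow that such a processor accelerates can be modeled in a few lines of software. The sketch below is illustrative only, not a description of the hardware: it assumes a 32-bit binary encoding (one GAP word), binary tournament selection, single-point crossover, bit-flip mutation, and a discard operator that rejects offspring violating a user-supplied constraint, mirroring the built-in discard operator described above. All names and parameter values are hypothetical.

```python
import random

BITS = 32  # one GAP handles 32-bit individuals; chaining GAPs extends this


def steady_state_ga(fitness, feasible, pop_size=64, steps=10_000,
                    mutation_rate=0.01):
    """Minimal software model of a steady-state GA with a discard operator.

    fitness:  maps a BITS-bit integer to a score (higher is better)
    feasible: constraint check; infeasible offspring are discarded,
              mirroring the processor's built-in discard operator
    """
    pop = [random.getrandbits(BITS) for _ in range(pop_size)]
    for _ in range(steps):
        # Binary tournament selection (the processor selects both
        # parents in parallel)
        p1 = max(random.sample(pop, 2), key=fitness)
        p2 = max(random.sample(pop, 2), key=fitness)
        # Single-point crossover on the bit strings
        point = random.randrange(1, BITS)
        mask = (1 << point) - 1
        child = (p1 & mask) | (p2 & ~mask)
        # Bit-flip mutation
        for b in range(BITS):
            if random.random() < mutation_rate:
                child ^= 1 << b
        # Discard operator: reject offspring that violate the constraint
        if not feasible(child):
            continue
        # Steady-state replacement: child replaces the worst individual
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child
    return max(pop, key=fitness)


# Example: maximize closeness to a target value subject to an even-value
# constraint (both the objective and the constraint are made up)
best = steady_state_ga(fitness=lambda x: -abs(x - 123456),
                       feasible=lambda x: x % 2 == 0)
```

In the hardware, the selection, fitness-evaluation, and replacement stages run in parallel and are pipelined; this sequential loop serves only to fix the semantics of one GA step.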

Source: http://dx.doi.org/10.1109/TCYB.2015.2451595

Similar Publications

An Improved Speed Sensing Method for Drive Control.

Sensors (Basel)

January 2025

Departamento de Ingeniería Electrónica, Universidad de Sevilla, 41092 Seville, Spain.

Variable-speed electrical drive control typically relies on a two-loop scheme: one loop for torque/speed and another for stator current control. Modern drive control methods need the actual mechanical speed for both loops. In practical applications, the speed is often acquired using incremental rotary encoders, as illustrated by the baseline sketch below.
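
For background, the simplest way to derive speed from an incremental encoder is the pulse-counting (M) method: count pulses over a fixed sampling window and scale by the encoder resolution. The sketch below shows only this textbook baseline, not the improved method proposed in the paper; the resolution and window values are assumed.

```python
def encoder_speed_rpm(pulse_count, pulses_per_rev, window_s):
    """Pulse-counting (M-method) speed estimate from an incremental encoder.

    pulse_count:    pulses counted during the sampling window
    pulses_per_rev: encoder resolution (assumed here, e.g. 1024 lines)
    window_s:       sampling window length in seconds
    Returns mechanical speed in revolutions per minute.
    """
    revolutions = pulse_count / pulses_per_rev
    return revolutions / window_s * 60.0


# Example: 512 pulses from a 1024-line encoder in 10 ms -> 3000 RPM
print(encoder_speed_rpm(512, 1024, 0.010))
```

The baseline's quantization error grows as speed falls, since fewer pulses arrive per window; that limitation is what motivates improved speed-sensing methods.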

Relay protection devices must operate continuously throughout the year without anomalies. As advanced-technology process chips are integrated into secondary equipment, new risks must be addressed to ensure the reliability of these devices. One such risk is α-particle-induced single event effects (SEEs) in the secondary equipment.

Sparse Convolution FPGA Accelerator Based on Multi-Bank Hash Selection.

Micromachines (Basel)

December 2024

Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China.

Reconfigurable processor-based acceleration of deep convolutional neural network (DCNN) algorithms has become a widely adopted technique, with sparse neural network acceleration receiving particular attention as an active research area. However, many computing devices that claim high computational power still struggle to execute neural network algorithms with high efficiency, low latency, and minimal power consumption. Consequently, there remains significant room to improve the efficiency, latency, and power consumption of neural network accelerators across diverse computational scenarios.
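
The payoff of sparse acceleration is that multiply-accumulates with zero-valued weights are skipped entirely. The sketch below shows that principle in plain software using a compressed per-channel list of nonzero weights; it does not model the paper's multi-bank hash selection (which distributes nonzero weights across memory banks to avoid bank conflicts), and all names and shapes are hypothetical.

```python
import numpy as np


def sparse_conv2d(x, nonzeros, out_channels):
    """2-D convolution that iterates only over nonzero weights.

    x:         input feature map, shape (C, H, W)
    nonzeros:  per-output-channel list of (in_ch, dy, dx, weight)
               tuples, i.e. a 3x3 kernel with its zeros removed
    Returns an output map of shape (out_channels, H-2, W-2)
    (3x3 kernel, stride 1, no padding).
    """
    C, H, W = x.shape
    out = np.zeros((out_channels, H - 2, W - 2))
    for oc in range(out_channels):
        for in_ch, dy, dx, w in nonzeros[oc]:
            # One shifted-and-scaled slice per nonzero weight; zero
            # weights contribute nothing and are never visited
            out[oc] += w * x[in_ch, dy:dy + H - 2, dx:dx + W - 2]
    return out
```

At, say, 90% weight sparsity only about a tenth of the dense kernel's multiply-accumulates are performed; the hardware challenge is feeding those irregular memory accesses without bank conflicts, which is what the multi-bank hash selection targets.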

Standards for data generation and collection are important for integration and for achieving data-driven actionable insights in dairy farming. Data integration and analysis are critical for advancing the dairy industry, enabling better decision-making, and improving operational efficiencies. This commentary paper discusses the challenges of and proposes pathways for standardizing data generation and collection based on insights from a multidisciplinary group of stakeholders.

ShaderNN: A Lightweight and Efficient Inference Engine for Real-time Applications on Mobile GPUs.

Neurocomputing (Amst)

January 2025

Department of Electrical and Computer Engineering, University of Maryland at College Park, 8223 Paint Branch Dr, College Park, MD, 20740, USA.

Inference using deep neural networks on mobile devices has been an active area of research in recent years. A deep learning inference framework targeted at mobile devices must account for various factors, such as the devices' limited computational capacity, low power budgets, varied memory access methods, and the I/O bus bandwidth governed by the underlying processor's architecture. Furthermore, integrating an inference framework with time-sensitive applications, such as games and video-based software performing tasks like ray-tracing denoising and video processing, introduces the need to minimize data movement between processors and increase data locality in the target processor.
