The growth of the mobile industry has created demand for high-performance embedded systems that can meet the requirements of user-centered applications. Because memory resources are limited, working with compressed data is an efficient strategy for an embedded system; however, the workload of data decompression becomes a severe bottleneck for the embedded processor. One way to alleviate this bottleneck is to integrate a hardware accelerator alongside the processor, forming a system-on-chip (SoC) for the embedded system. In this paper, we propose a lossless decompression accelerator for an embedded processor that supports LZ77 decompression and static Huffman decoding for the inflate algorithm. The accelerator is implemented on a field-programmable gate array (FPGA) to verify its functional suitability and fabricated in a Samsung 65 nm complementary metal-oxide-semiconductor (CMOS) process. Its performance is evaluated with the Canterbury corpus benchmark, achieving a throughput of up to 20.7 MB/s at a 50 MHz system clock frequency.
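
As context for the abstract above, the following is a minimal software sketch of the LZ77 back-reference step that such an accelerator offloads from the processor. It is an illustrative model only: the token structure (lz77_token_t), the function name lz77_decode, and the assumption that static Huffman decoding of the length/distance symbols has already happened upstream are assumptions made for this sketch, not the accelerator's actual interface or the DEFLATE bitstream format.

```c
/*
 * Toy model of LZ77 back-reference decoding (the history-copy step of
 * the inflate algorithm). The token format below is a simplification
 * for illustration; it is NOT the DEFLATE bit layout.
 */
#include <stddef.h>
#include <stdint.h>

typedef struct {
    int      is_literal;  /* 1: emit one byte, 0: copy from history */
    uint8_t  literal;     /* valid when is_literal == 1 */
    uint16_t length;      /* match length (3..258 in DEFLATE) */
    uint16_t distance;    /* backward distance into decoded output */
} lz77_token_t;

/* Decode a token stream into 'out'; returns bytes written or -1 on error. */
ptrdiff_t lz77_decode(const lz77_token_t *tok, size_t ntok,
                      uint8_t *out, size_t out_cap)
{
    size_t pos = 0;
    for (size_t i = 0; i < ntok; i++) {
        if (tok[i].is_literal) {
            if (pos + 1 > out_cap) return -1;
            out[pos++] = tok[i].literal;
        } else {
            /* Back-reference: copy 'length' bytes starting 'distance'
             * bytes behind the current write position. The copy must be
             * byte by byte because source and destination may overlap
             * (distance < length), e.g. run-length style matches. */
            if (tok[i].distance == 0 || tok[i].distance > pos) return -1;
            if (pos + tok[i].length > out_cap) return -1;
            for (uint16_t n = 0; n < tok[i].length; n++) {
                out[pos] = out[pos - tok[i].distance];
                pos++;
            }
        }
    }
    return (ptrdiff_t)pos;
}
```

For scale, the reported 20.7 MB/s at a 50 MHz clock works out to roughly 0.4 decoded bytes per processor cycle; offloading the per-byte copy work sketched above to dedicated hardware is what spares the embedded processor those cycles.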


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7911039
DOI: http://dx.doi.org/10.3390/mi12020145

Publication Analysis

Top Keywords: embedded processor (12), lossless decompression (8), decompression accelerator (8), accelerator embedded (8), embedded system (8), embedded (6), accelerator (5), processor (4), processor gui (4), gui development (4)

Similar Publications

A high performance heterogeneous hardware architecture for brain computer interface.

Biomed Eng Lett

January 2025

School of Chemistry and Chemical Engineering, Tianjin University of Technology, Tianjin 300384, People's Republic of China.

Brain-computer interfaces (BCIs) are widely used in human-computer interaction, and the introduction of artificial intelligence has further improved the performance of BCI systems. In recent years, BCI development has gradually shifted from personal computers to embedded devices, which offer lower power consumption and smaller size but have limited resources and computing speed, making it difficult to support complex algorithms.


Physics-based Ising machines (IMs) have been developed as dedicated processors for solving hard combinatorial optimization problems with higher speed and better energy efficiency. Generally, such systems employ local search heuristics to traverse energy landscapes in searching for optimal solutions. Here, we quantify and address some of the major challenges met by IMs by extending energy-landscape geometry visualization tools known as disconnectivity graphs.
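
To make the phrase "local search heuristics to traverse energy landscapes" concrete, here is a toy greedy single-spin-flip search on an Ising energy function. The problem size, random couplings, and purely greedy acceptance rule are illustrative assumptions for this sketch; they are not the dynamics of any actual physics-based Ising machine or of the visualization method described in this paper.

```c
/*
 * Toy local search on an Ising energy E(s) = -sum_{i<j} J[i][j]*s[i]*s[j].
 * Greedy single-spin flips until no flip lowers the energy (a local minimum).
 */
#include <stdio.h>
#include <stdlib.h>

#define N 8  /* number of spins (toy size, an assumption for illustration) */

/* Energy change from flipping spin k: dE = 2 * s[k] * sum_{j!=k} J[k][j]*s[j]. */
static double flip_delta(double J[N][N], const int s[N], int k)
{
    double field = 0.0;
    for (int j = 0; j < N; j++)
        if (j != k) field += J[k][j] * s[j];
    return 2.0 * s[k] * field;
}

int main(void)
{
    double J[N][N] = {{0.0}};
    int s[N];

    /* Random symmetric couplings and a random initial spin configuration. */
    srand(1);
    for (int i = 0; i < N; i++) {
        s[i] = (rand() & 1) ? 1 : -1;
        for (int j = i + 1; j < N; j++)
            J[i][j] = J[j][i] = (double)rand() / RAND_MAX - 0.5;
    }

    /* Greedy descent: keep flipping any spin whose flip lowers the energy. */
    int improved = 1;
    while (improved) {
        improved = 0;
        for (int k = 0; k < N; k++) {
            if (flip_delta(J, s, k) < 0.0) {
                s[k] = -s[k];
                improved = 1;
            }
        }
    }

    for (int i = 0; i < N; i++)
        printf("s[%d] = %+d\n", i, s[i]);
    return 0;
}
```

A greedy search of this kind stops at the first local minimum it reaches, which is exactly the kind of landscape feature that disconnectivity-graph analysis is meant to expose.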


High performance Si-MoS2 heterogeneous embedded DRAM.

Nat Commun

November 2024

State Key Laboratory of ASIC and System, Fudan University, Shanghai, P. R. China.

Article Synopsis
  • Embedded dynamic RAM (eDRAM) is crucial for high-performance processors; a new heterogeneous 2T-eDRAM combines silicon and molybdenum disulfide (MoS2) to solve retention issues.
  • The low OFF current of the MoS2 write transistor greatly improves data retention, while the Si read transistor provides a high drive current, yielding 1000x better retention and a 100x higher sense margin than previous designs.
  • A novel 3D design stacking MoS2 on Si increases integration density, achieving 6000 s data retention, a 35 μA/μm sense margin, and 5 ns speed, a major advance in memory technology.

The explosive growth in computation and energy cost of artificial intelligence has spurred interest in alternative computing modalities to conventional electronic processors. Photonic processors, which use photons instead of electrons, promise optical neural networks with ultralow latency and power consumption. However, existing optical neural networks, limited by their designs, have not achieved the recognition accuracy of modern electronic neural networks.

