Neural networks offer iterative decoding capability for low-density parity-check (LDPC) codes with superior transmission performance. However, as code length and rate increase, the complexity of the neural network grows significantly, because a large amount of feature extraction is required to maintain the error-correction capability. To address this gap, we design a new iterative LDPC decoding technique named graph model neural network-belief propagation (GMNN-BP). GMNN-BP uses graph models as a link between deep learning and belief propagation (BP) algorithms, combining the advantages of both. Compared with traditional fully connected neural network decoders, GMNN-BP decoding has the substantial benefit of avoiding learning and judging codeword categories directly from a large amount of data, and it requires less training data as well. The proposed algorithm is verified by simulation and experiment using LDPC codewords from the IEEE 802.3ca standard. The results show that the GMNN-BP decoding algorithm outperforms the BP-based iterative decoding method under the same number of iterations, with a maximum gain of 1.9 dB. To achieve the same performance, the GMNN-BP decoding algorithm requires only half as many iterations as the other algorithms.
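The BP component that GMNN-BP builds on is the standard sum-product algorithm operating on the Tanner graph of the parity-check matrix. The sketch below is a minimal, generic LLR-domain BP decoder, not the paper's method: the function name, message clipping, and the (7,4) Hamming example in the usage note are all illustrative assumptions.

```python
import numpy as np

def bp_decode(H, llr, max_iters=10):
    """Generic sum-product BP decoding on the Tanner graph of H (a sketch,
    not the GMNN-BP algorithm).

    H   : (m, n) binary parity-check matrix
    llr : (n,) channel log-likelihood ratios (positive favours bit 0)
    """
    m, n = H.shape
    # Variable-to-check messages, initialised with the channel LLRs;
    # entry (c, v) holds the message v -> c wherever H[c, v] = 1.
    v2c = H * llr
    for _ in range(max_iters):
        # Check-to-variable update via the tanh rule, clipped for stability.
        tanh_msgs = np.tanh(np.clip(v2c, -20, 20) / 2.0)
        c2v = np.zeros_like(v2c, dtype=float)
        for c in range(m):
            idx = np.flatnonzero(H[c])
            for v in idx:
                others = [u for u in idx if u != v]
                prod = np.prod(tanh_msgs[c, others])
                c2v[c, v] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        # Posterior LLRs and extrinsic variable-to-check messages.
        total = llr + c2v.sum(axis=0)
        v2c = H * (total - c2v)  # exclude each check's own contribution
        hard = (total < 0).astype(int)
        if not np.any(H @ hard % 2):
            return hard, True  # all parity checks satisfied
    return hard, False
```

For example, on the (7,4) Hamming code with an all-zeros transmitted codeword and one weakly unreliable bit, a single iteration typically restores the correct hard decision; GMNN-BP's claimed advantage is reaching a given error rate in roughly half as many such iterations.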
DOI: http://dx.doi.org/10.1364/OE.534637