This repository contains the code for the paper *Iterative Error Decimation for Syndrome-Based Neural Network Decoders*, accepted for publication in the Journal of Communication and Information Systems (JCIS).
In this project, we introduce a new syndrome-based decoder in which a deep neural network (DNN) estimates the error pattern from the reliability and syndrome of the received vector. The proposed algorithm works by iteratively selecting the positions it is most confident about as the error bits of the error pattern, updating the received vector whenever a new position of the error pattern is selected.
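To make that loop concrete, here is a minimal NumPy sketch of one plausible reading of the procedure; the function name `ied_decode`, the callable `error_net`, and the one-bit-per-iteration flip are illustrative assumptions, not the repository's exact implementation:

```python
import numpy as np

def ied_decode(y, H, error_net, T=3):
    """Illustrative sketch of the IED loop (not the repository's exact code).

    y         : real-valued received vector (BPSK: bit 0 -> +1, bit 1 -> -1)
    H         : binary parity-check matrix, shape (n - k, n)
    error_net : callable mapping (|y|, syndrome) to per-bit error probabilities
    T         : maximum number of decimation iterations
    """
    y = np.array(y, dtype=float)
    for _ in range(T):
        z = (y < 0).astype(int)          # hard decisions
        s = H.dot(z) % 2                 # syndrome of the current vector
        if not s.any():                  # zero syndrome: stop early
            break
        p = error_net(np.abs(y), s)      # DNN estimate of the error pattern
        k = int(np.argmax(p))            # most confident error position
        y[k] = -y[k]                     # flip it and re-decode on the update
    return (y < 0).astype(int)           # estimated codeword bits
```

Here `T` plays the role of the number of IED iterations; the paper's actual selection rule may decimate more than one position per pass.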
If the code or the paper has been useful in your research, please add a citation to our work:
```bibtex
@article{kamassury_ied,
  title={Iterative Error Decimation for Syndrome-Based Neural Network Decoders},
  author={Kamassury, Jorge K S and Silva, Danilo},
  journal={Journal of Communication and Information Systems},
  year={2021}
}
```
For an overview of the project, follow the steps in the main_code module (a hypothetical walk-through follows this list):
- Get the parity-check matrix (H): `bch_par`
- Build the neural network: `models_nets`
- Train the model: `training_nn`
- Run model inference with the IED decoder: `BER_FER`
- Plot the inference results: `inference`
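Every import and signature below is an assumption made for illustration; the real entry points live in main_code:

```python
# Hypothetical glue code; consult main_code for the actual names and arguments.
from bch_par import bch_par                  # assumed: builds H for a BCH code
from models_nets import get_training_model   # mentioned in this README
from training_nn import training_nn

H = bch_par(63, 45)                          # parity-check matrix (assumed call)
model = get_training_model(H)                # build the DNN (assumed signature)
```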
The default configuration (via the function in `get_training_model`) trains a model with binary cross-entropy as the loss function. The important training parameters are the arguments of
`training_nn(model, H, loss, lr, batch_size, spe, epochs, EbN0_dB, tec)`

where (an example call is sketched after this list):
- `model`: neural network for the short-length BCH code
- `H`: parity-check matrix
- `loss`: loss function (by default, binary cross-entropy)
- `lr`: learning rate
- `batch_size`: batch size for training
- `spe`: steps per epoch
- `epochs`: number of training epochs
- `EbN0_dB`: ratio of energy per bit to noise power spectral density
- `tec`: technique for changing the learning rate (`ReduceLROnPlateau` or `CyclicalLearningRate`)
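Continuing the sketch above, a hedged example call; all numeric values are plausible placeholders, not the repository's official defaults:

```python
# All numeric values below are assumptions for illustration only.
training_nn(
    model, H,
    loss="binary_crossentropy",   # the stated default
    lr=1e-3,                      # assumed learning rate
    batch_size=512,               # assumed batch size
    spe=1000,                     # assumed steps per epoch
    epochs=100,                   # assumed number of epochs
    EbN0_dB=4.0,                  # assumed training SNR in dB
    tec="ReduceLROnPlateau",      # one of the two options listed above
)
```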
Important routines can be found in the uteis module, especially:
- `training_generator`: simulates the transmission of codewords over the AWGN channel for model training
- `getfer`: computes metrics such as BLER and BER
- `biawgn`: simulates codewords for inference
- `custom_loss`: custom loss function combining binary cross-entropy with a syndrome loss (a hedged sketch follows this list)
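As an illustration of what such a combined loss can look like, here is a hedged TensorFlow sketch; the weight `alpha`, the soft-syndrome relaxation, and the function name are assumptions, and `uteis.custom_loss` may differ:

```python
import tensorflow as tf

def make_custom_loss(H, alpha=0.5):
    """Sketch: binary cross-entropy plus a differentiable syndrome penalty."""
    Hf = tf.constant(H, dtype=tf.float32)              # (n - k, n)

    def loss(e_true, e_pred):
        bce = tf.keras.losses.binary_crossentropy(e_true, e_pred)
        # Soft syndrome: map probabilities to [-1, 1] and take the product of
        # the bits participating in each check (a smooth stand-in for mod-2).
        soft = 1.0 - 2.0 * e_pred                      # (batch, n)
        masked = 1.0 - Hf * (1.0 - soft[:, None, :])   # equals 1 where H is 0
        checks_pred = tf.reduce_prod(masked, axis=-1)  # (batch, n - k)
        s_true = tf.math.floormod(
            tf.matmul(e_true, Hf, transpose_b=True), 2.0)
        checks_true = 1.0 - 2.0 * s_true               # target check signs
        syn = tf.reduce_mean(tf.square(checks_pred - checks_true), axis=-1)
        return bce + alpha * syn
    return loss
```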
All pre-trained models are in the models folder (a hedged loading example follows this list):
- `model_63_45`: trained model for the BCH(63, 45) code;
- `model_relu_63_36`: trained model for the BCH(63, 36) code using ReLU as the activation function;
- `model_sigmoid_63_36`: trained model for the BCH(63, 36) code using sigmoid as the activation function;
- `model_BN_sigmoid_63_36`: trained model for the BCH(63, 36) code using sigmoid as the activation function and batch normalization layers.
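Loading one of them is standard Keras usage; the exact file name and format on disk are assumptions here:

```python
import tensorflow as tf

# Path and format are assumptions; match them to the files in models/.
model = tf.keras.models.load_model("models/model_63_45", compile=False)
model.summary()
```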
To perform model inference for the BER and BLER metrics, use the ber_fer_result module, whose main parameters are listed below (a sweep skeleton follows the list):
- `max_nfe`: maximum number of block errors per simulated point
- `T`: number of IED iterations
- `p_initial`: initial `EbN0_dB` value for inference
- `p_end`: final `EbN0_dB` value for inference
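A hedged skeleton of how these parameters typically interact in a Monte Carlo sweep; the stand-in `one_frame` and all numeric values are placeholders, not the repository's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_frame(ebn0_db):
    """Placeholder for 'decode one frame, return 1 on a block error'."""
    return int(rng.random() < 10 ** (-ebn0_db / 2))   # fake error rate

max_nfe, T = 100, 3                        # T would be passed to the IED decoder
for ebn0_db in np.arange(1.0, 7.5, 0.5):   # p_initial = 1.0, p_end = 7.0 (assumed)
    nfe = frames = 0
    while nfe < max_nfe:                   # stop after enough block errors
        nfe += one_frame(ebn0_db)
        frames += 1
    print(f"Eb/N0 = {ebn0_db:.1f} dB -> BLER ~ {nfe / frames:.3e}")
```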
If you just want to load a pre-trained model, run inference, and plot the results, use the load_infer_plot script.
Performance results for BCH codes using the IED decoder are in the results folder:
- BLER and BER curves for the BCH(63, 45) code;
- BLER and BER curves for the BCH(63, 36) code.