Playing with Low-density parity-check codes

To study LDPC codes, I've started implementing a soft-decision decoder using floating-point operations only.

For better speed (at almost the same decoding performance) I've added support for saturating fixed-point operations.
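
As a rough illustration of what saturating fixed-point arithmetic means here (this is only a sketch, not the code used in this repository): 8-bit values can be widened to 16 bits, operated on, and clamped back to the int8_t range.

	#include <cstdint>
	#include <algorithm>

	// Sketch only: saturating int8_t addition via a widening int16_t intermediate.
	// A decoder might prefer clamping to the symmetric range [-127, 127].
	int8_t sat_add(int8_t a, int8_t b)
	{
		int16_t sum = int16_t(a) + int16_t(b); // cannot overflow in 16 bits
		sum = std::max<int16_t>(INT8_MIN, std::min<int16_t>(sum, INT8_MAX));
		return int8_t(sum);
	}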

Parallel decoding of multiple blocks using SIMD is available for all variations of the min-sum algorithm.

You can switch between two decoder schedules:

  • flooding schedule: numerically stable but also slow.
  • layered schedule: numerical stability is traded for speed.

You can switch between six belief propagation algorithms (a rough sketch of the two basic check node update rules follows the list):

  • min-sum algorithm: using minimum and addition
  • offset-min-sum algorithm: using minimum, addition and a constant offset
  • min-sum-c algorithm: using minimum, addition and a correction factor
  • sum-product algorithm: using the tanh and atanh functions, addition and multiplication
  • log-sum-product algorithm: using log and exp functions to replace the above multiplication with addition in the log domain
  • lambda-min algorithm: same as log-sum-product, but using only the lambda smallest magnitudes
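
Here is a rough sketch of the two basic check node update rules in plain double precision (illustrative only: no SIMD and not the data layout used in this repository). Combining all incoming messages except one is exactly the exclusive reduction described further below.

	#include <cmath>
	#include <algorithm>

	// Sketch only: check node update for one outgoing edge, combining all
	// incoming messages in[0..n-1] except the one at index 'exclude'.

	// sum-product: 2 * atanh( product of tanh(in[j] / 2) )
	double check_sum_product(const double *in, int n, int exclude)
	{
		double prod = 1.0;
		for (int j = 0; j < n; ++j)
			if (j != exclude)
				prod *= std::tanh(in[j] / 2.0);
		return 2.0 * std::atanh(prod);
	}

	// min-sum: (product of signs) * (minimum magnitude)
	double check_min_sum(const double *in, int n, int exclude)
	{
		double sign = 1.0, mag = HUGE_VAL;
		for (int j = 0; j < n; ++j) {
			if (j == exclude)
				continue;
			if (in[j] < 0.0)
				sign = -sign;
			mag = std::min(mag, std::abs(in[j]));
		}
		return sign * mag;
	}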

The following applies to the flooding schedule only:
You can enable the self-corrected update for any of the algorithms listed above to further boost their decoding performance. It works by erasing unreliable bit nodes whose signs fluctuate between updates. As shown in the BER plots below, the min-sum algorithm benefits the most from the erasures.
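
A rough sketch of the self-corrected rule in scalar form (after Savin's paper listed below; not the code used in this repository):

	// Sketch only: a freshly computed message whose sign flipped compared to the
	// previous iteration is considered unreliable and erased (set to zero).
	double self_corrected(double new_msg, double old_msg)
	{
		bool flipped = old_msg != 0.0 && (new_msg < 0.0) != (old_msg < 0.0);
		return flipped ? 0.0 : new_msg;
	}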

Decoding time varies from about 10 ms (no errors) to 300 ms (maximum number of errors) for the rate-1/2, N=64800 code using self-corrected min-sum on my workstation.

Here are some good reads:

  • Low-Density Parity-Check Codes
    by Robert G. Gallager - 1963
  • Near Shannon Limit Performance of Low Density Parity Check Codes
    by David J.C. MacKay and Radford M. Neal - 1996
  • An introduction to LDPC codes
    by William E. Ryan - 2003
  • An efficient message-passing schedule for LDPC decoding
    by Eran Sharon, Simon Litsyn and Jacob Goldberger - 2004
  • DVB-S2 Low Density Parity Check Codes with near Shannon Limit Performance
    by Mustafa Eroz, Feng-Wen Sun and Lin-Nan Lee - 2004
  • Reduced-Complexity Decoding of LDPC Codes
    by J. Chen, A. Dholakia, E. Eleftheriou, M. Fossorier and X.-Y. Hu - 2005
  • Self-Corrected Min-Sum decoding of LDPC codes
    by Valentin Savin - 2008

Here are some DVB standards whose LDPC code tables are used here:

  • DVB-S2
  • DVB-S2X
  • DVB-T2

BER comparison of the various algorithms

The following plots were made by computing MS (min-sum), MSC (min-sum-c), SCMS (self-corrected min-sum) and SCMSC (self-corrected min-sum-c) with saturating fixed-point arithmetic using a factor of 2, while SP (sum-product) and SCSP (self-corrected sum-product) were computed using double-precision floating-point arithmetic.

The DVB-S2 B4 table and QPSK modulation were used, averaged over 1000 blocks.
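
Assuming the factor is the scale applied to the floating-point LLRs before quantizing them to the saturating 8-bit range, the quantization could look roughly like this (illustrative only, not the code used in this repository):

	#include <cstdint>
	#include <cmath>
	#include <algorithm>

	// Sketch only: scale an LLR by a factor (here 2), round and saturate to int8_t.
	int8_t quantize_llr(float llr, float factor = 2.0f)
	{
		float v = std::nearbyint(llr * factor);
		v = std::max(float(INT8_MIN), std::min(v, float(INT8_MAX)));
		return int8_t(v);
	}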

To better see the behaviour at low SNR, here with a linear BER scale: [plot: comparison, linear scale]

To better see the waterfall region and the boundary to quasi-errorless decoding, here with a logarithmic BER scale: [plot: comparison, logarithmic scale]

Impact of the varying degrees of the bit nodes on their convergence behaviour

The colors in the following three plots are to be interpreted like this:

  • Red: parity bit nodes with degree two
  • Green: message bit nodes with degree eight
  • Blue: message bit nodes with degree three

This is the second fastest algorithm, min-sum-c, but it needs a few more iterations to converge: [plot: min-sum-c]

The sum-product algorithms converge much faster than the min-sum algorithms, but they involve transcendental functions: [plot: log-sum-product]

Here we see the fastest convergence, where bit nodes go to minus or plus infinity (and sometimes come back): [plot: sum-product]

Getting soft information from symbols

For the LDPC codes to work best, one needs soft reliability information for each bit.
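
A rough sketch of how such per-bit soft information can be computed with the max-log approximation, by brute force over a constellation table (illustrative only; the names here are not the interface of psk.hh or qam.hh):

	#include <complex>
	#include <limits>
	#include <algorithm>

	// Sketch only: max-log LLR of one bit of a received symbol r, given the
	// constellation points, their bit labels and the noise variance sigma2:
	// LLR ~ (min dist^2 over symbols with bit=1 - min dist^2 over symbols with bit=0) / sigma2
	float bit_llr(std::complex<float> r, const std::complex<float> *points,
			const int *labels, int symbols, int bit, float sigma2)
	{
		float min0 = std::numeric_limits<float>::max();
		float min1 = std::numeric_limits<float>::max();
		for (int s = 0; s < symbols; ++s) {
			float d = std::norm(r - points[s]); // squared Euclidean distance
			if ((labels[s] >> bit) & 1)
				min1 = std::min(min1, d);
			else
				min0 = std::min(min0, d);
		}
		return (min1 - min0) / sigma2; // positive means bit 0 is more likely here
	}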

Here we see the log-likelihood ratios of the different bits of many 8PSK modulated symbols, disturbed by AWGN:

LLR of bit 0 in 8PSK

LLR of bit 1 in 8PSK

LLR of bit 2 in 8PSK

exclusive_reduce.hh

Reduce N times while excluding the ith input element

It computes the following, but with only O(N) complexity and O(1) extra storage:

	// naive O(N^2) reference: after these loops,
	// output[i] = op-reduction over all input[j] with j != i
	output[0] = input[1];
	output[1] = input[0];
	for (int i = 2; i < N; ++i)
		output[i] = op(input[0], input[1]);
	for (int i = 0; i < N; ++i)
		for (int j = 2; j < N; ++j)
			if (i != j)
				output[i] = op(output[i], input[j]);
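
One way to get the O(N) complexity with O(1) extra storage is a forward prefix pass followed by a backward suffix pass. This is only a sketch of the technique (assuming an associative, commutative op and N >= 2), not necessarily how exclusive_reduce.hh does it:

	// Sketch only: the forward pass writes the prefix reductions into output,
	// the backward pass folds in the suffix reductions.
	template <typename TYPE, typename OP>
	void exclusive_reduce_sketch(const TYPE *input, TYPE *output, int N, OP op)
	{
		output[1] = input[0]; // output[i] = reduction of input[0..i-1] for i >= 1
		for (int i = 2; i < N; ++i)
			output[i] = op(output[i-1], input[i-1]);
		TYPE suffix = input[N-1]; // reduction of input[i+1..N-1], going backwards
		for (int i = N-2; i >= 1; --i) {
			output[i] = op(output[i], suffix);
			suffix = op(suffix, input[i]);
		}
		output[0] = suffix; // reduction of input[1..N-1]
	}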