8PSK constellation diagram #58

Closed
Koltice opened this issue Oct 11, 2022 · 6 comments

Koltice commented Oct 11, 2022

Hi!

I wanted to compare the bit error rates of different modulations (QPSK, 8PSK, and 16QAM) for the uncoded and the coded case (LDPC).
The system is the same; only the modulation differs.

[Images: BER plots "output1" and "output2"]

I'm not sure if the 8PSK plot is correct, especially in the coded case, as the BER doesn't seem to improve with the encoder.
For QPSK and 16QAM the coded plots look better: an improvement over the uncoded case is clearly visible.
I think the mistake may be in the setup of the constellation diagram for the mapper and demapper:

8PSK:
[Image: constellation plot "const8PSK"]

NUM_BITS_PER_SYMBOL = 3
BLOCK_LENGTH = 3**8
n = 12*3

real = 1 * np.cos(np.pi/4)
imag = 1 * np.sin(np.pi/4)
CONST_SHAPE = np.array([-1, 1, 1j, -1j, complex(real, imag), complex(real, -imag), complex(-real, imag), complex(-real, -imag)]) # set the shape of the constellation diagram
constellation = sn.mapping.Constellation("custom", NUM_BITS_PER_SYMBOL, CONST_SHAPE, normalize=True, center=True)
constellation.show()
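
As a quick check (plain NumPy, nothing Sionna-specific), all eight points above already lie on the unit circle, so the average symbol energy is 1 and normalize=True should leave them unchanged:

import numpy as np

real = np.cos(np.pi/4)
imag = np.sin(np.pi/4)
points = np.array([-1, 1, 1j, -1j,
                   complex(real, imag), complex(real, -imag),
                   complex(-real, imag), complex(-real, -imag)])
print(np.abs(points))             # all magnitudes are 1.0
print(np.mean(np.abs(points)**2)) # average symbol energy: 1.0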

16QAM:
[Image: constellation plot "const16qam"]

NUM_BITS_PER_SYMBOL = 4
BLOCK_LENGTH = 2**8
n = 2**8

constellation = sn.mapping.Constellation("qam", NUM_BITS_PER_SYMBOL, normalize=True, center=True)
constellation.show();

QPSK:
[Image: constellation plot "constqpsk"]

NUM_BITS_PER_SYMBOL = 2
BLOCK_LENGTH = 2**8
n = 2**8

constellation = sn.mapping.Constellation("qam", NUM_BITS_PER_SYMBOL, normalize=True, center=True)
constellation.show();

Any help or ideas would be greatly appreciated, thanks!

jhoydis closed this as completed Oct 11, 2022
jhoydis reopened this Oct 11, 2022

jhoydis (Collaborator) commented Oct 11, 2022

Hi,

I have done some 8PSK simulations with your constellation in this notebook:
https://colab.research.google.com/drive/1QoE7I3pAymRqWgggz832B-TP72uO-dZw?usp=sharing

It seems to work as expected. Maybe you have some error in your Mapper/Demapper?

Hope this helps.
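
One way to isolate such an error (a sketch reusing the Sionna components from the snippets above): a noiseless round trip through the Mapper/Demapper should reproduce the input bits exactly.

import numpy as np
from sionna.mapping import Constellation, Mapper, Demapper
from sionna.utils import BinarySource, compute_ber

real = np.cos(np.pi/4)
imag = np.sin(np.pi/4)
points = np.array([-1, 1, 1j, -1j,
                   complex(real, imag), complex(real, -imag),
                   complex(-real, imag), complex(-real, -imag)])
constellation = Constellation("custom", 3, points, normalize=True, center=True)
mapper = Mapper(constellation=constellation)
demapper = Demapper("app", constellation=constellation, hard_out=True)

bits = BinarySource()([100, 30])   # 100 frames of 30 bits (divisible by 3)
x = mapper(bits)                   # bits -> 8PSK symbols
bits_hat = demapper([x, 0.01])     # demap at a very low noise level
print(compute_ber(bits, bits_hat)) # should print 0.0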

jhoydis closed this as completed Oct 11, 2022

Koltice (Author) commented Oct 11, 2022

Thanks for the help!
I had a mistake in my code; it's fixed now and the plots look like this:

[Images: BER plots "output1" and "output2"]

I hope you can help me with one question I still have:
Why does 16QAM have a worse BER than 8PSK? Shouldn't the curves be the other way around?
Thanks!

jhoydis (Collaborator) commented Oct 12, 2022

I am not sure I understand your observation correctly. 16QAM is clearly better with a channel code, which shows that you get a higher mutual information at the output of the demapper. For the uncoded results, it might be because my example code wrongly uses the ebnodb2no function for uncoded transmissions. It should be:

no = ebnodb2no(ebno_db, num_bits_per_symbol=self.num_bits_per_symbol, coderate=1)
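
For an AWGN link with a unit-energy constellation (and no OFDM resource grid), this conversion boils down to the following relationship (a sketch of the math, not the library source):

import numpy as np

def ebno_db_to_no(ebno_db, num_bits_per_symbol, coderate):
    # Es/No = Eb/No * coderate * num_bits_per_symbol; with Es = 1 this yields No
    ebno_lin = 10.0**(ebno_db/10.0)
    return 1.0/(ebno_lin * coderate * num_bits_per_symbol)

print(ebno_db_to_no(6.0, num_bits_per_symbol=3, coderate=1.0)) # ~0.0837

Passing the coded rate instead of 1 for an uncoded transmission therefore increases the noise power at every Eb/No point and shifts the whole uncoded curve.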

Koltice (Author) commented Oct 12, 2022

Thanks again for your help!
Sorry, I got it mixed up there. What I don't understand is:
Why does 8PSK have a worse BER than 16QAM? Shouldn't the plot show the order (from best to worst) 8PSK - 16QAM?
In the uncoded case it's pretty close, but I would have thought that in the coded case, the coded 8PSK performs a bit better than the coded 16QAM... or is there something I'm overlooking?

[Images: BER plots "output" and "output1"]

This is the code I used:

import numpy as np
import tensorflow as tf
import time # for throughput measurements

# Configure the notebook to use only a single GPU and allocate only as much memory as needed
# For more details, see https://www.tensorflow.org/guide/gpu
gpus = tf.config.list_physical_devices('GPU')
print('Number of GPUs available :', len(gpus))
if gpus:
    gpu_num = 0 # Number of the GPU to be used
    try:
        #tf.config.set_visible_devices([], 'GPU')
        tf.config.set_visible_devices(gpus[gpu_num], 'GPU')
        print('Only GPU number', gpu_num, 'used.')
        tf.config.experimental.set_memory_growth(gpus[gpu_num], True)
    except RuntimeError as e:
        print(e)
        
# Import Sionna
try:
    import sionna
except ImportError as e:
    # Install Sionna if package is not already installed
    import os
    os.system("pip install sionna")
    import sionna
    

# For the implementation of the Keras models
from tensorflow.keras import Model

# Import required Sionna components
from sionna.mapping import Constellation, Mapper, Demapper
from sionna.utils import BinarySource, compute_ber, ebnodb2no, PlotBER
from sionna.channel import AWGN
from sionna.fec.ldpc import LDPC5GEncoder, LDPC5GDecoder

class UncodedSystemAWGN(Model):
    # Uncoded transmission over an AWGN channel
    def __init__(self, n, constellation, num_bits_per_symbol):
        super().__init__()
        self.num_bits_per_symbol = num_bits_per_symbol
        self.n = n
        self.k = n # no code: k = n
        self.coderate = 1
        self.constellation = constellation
        self.mapper = Mapper(constellation=self.constellation)
        self.demapper = Demapper("app", constellation=self.constellation, hard_out=True)
        self.binary_source = BinarySource()
        self.awgn_channel = AWGN()

    @tf.function(jit_compile=True) # activate graph execution to speed things up
    def __call__(self, batch_size, ebno_db):
        no = ebnodb2no(ebno_db, num_bits_per_symbol=self.num_bits_per_symbol, coderate=1)
        bits = self.binary_source([batch_size, self.n]) # uncoded: the bits are mapped directly
        x = self.mapper(bits)
        y = self.awgn_channel([x, no])
        bits_hat = self.demapper([y, no]) # hard decisions
        return bits, bits_hat

class CodedSystemAWGN(Model):
    # LDPC-coded transmission over an AWGN channel
    def __init__(self, n, coderate, constellation, num_bits_per_symbol):
        super().__init__()
        self.num_bits_per_symbol = num_bits_per_symbol
        self.n = n
        self.k = int(n*coderate)
        self.coderate = coderate
        self.constellation = constellation
        self.mapper = Mapper(constellation=self.constellation)
        self.demapper = Demapper("app", constellation=self.constellation) # soft output (LLRs)
        self.binary_source = BinarySource()
        self.awgn_channel = AWGN()
        self.encoder = LDPC5GEncoder(self.k, self.n)
        self.decoder = LDPC5GDecoder(self.encoder, hard_out=True)

    @tf.function(jit_compile=True) # activate graph execution to speed things up
    def __call__(self, batch_size, ebno_db):
        no = ebnodb2no(ebno_db, num_bits_per_symbol=self.num_bits_per_symbol, coderate=self.coderate)
        bits = self.binary_source([batch_size, self.k])
        codewords = self.encoder(bits)
        x = self.mapper(codewords)
        y = self.awgn_channel([x, no])
        llr = self.demapper([y, no])
        bits_hat = self.decoder(llr)
        return bits, bits_hat

# 16QAM reference constellation
NUM_BITS_PER_SYMBOL_qam = 4
constellation2 = Constellation("qam", NUM_BITS_PER_SYMBOL_qam)
constellation2.show();

# Custom 8PSK constellation
NUM_BITS_PER_SYMBOL_psk = 3
real = np.cos(np.pi/4)
imag = np.sin(np.pi/4)
CONST_SHAPE = np.array([-1, 1, 1j, -1j,
                        complex(real, imag), complex(real, -imag),
                        complex(-real, imag), complex(-real, -imag)]) # set the shape of the constellation diagram
constellation1 = Constellation("custom", NUM_BITS_PER_SYMBOL_psk, CONST_SHAPE)
constellation1.show();

# Code parameters shared by both systems
n = 1200
k = 800
coderate = k/n

model_coded_awgn = CodedSystemAWGN(n, coderate, constellation1, NUM_BITS_PER_SYMBOL_psk)
model_uncoded_awgn = UncodedSystemAWGN(n, constellation1, NUM_BITS_PER_SYMBOL_psk)
model_coded_awgn_2 = CodedSystemAWGN(n, coderate, constellation2, NUM_BITS_PER_SYMBOL_qam)
model_uncoded_awgn_2 = UncodedSystemAWGN(n, constellation2, NUM_BITS_PER_SYMBOL_qam)

ber_plots = PlotBER("AWGN")

ber_plots.simulate(model_uncoded_awgn,
                  ebno_dbs=tf.range(0,15),
                  batch_size=256,
                  num_target_block_errors=100, # simulate until 100 block errors occurred
                  legend="Uncoded 8PSK",
                  soft_estimates=True,
                  max_mc_iter=100, # run 100 Monte-Carlo simulations (each with batch_size samples)
                  show_fig=False);

ber_plots.simulate(model_uncoded_awgn_2,
                  ebno_dbs=tf.range(0,15),
                  batch_size=256,
                  num_target_block_errors=100, # simulate until 100 block errors occurred
                  legend="Uncoded 16QAM",
                  soft_estimates=True,
                  max_mc_iter=100, # run 100 Monte-Carlo simulations (each with batch_size samples)
                  show_fig=True);

ber_plots = PlotBER("AWGN")

ber_plots.simulate(model_coded_awgn,
                  ebno_dbs=tf.range(0,9),
                  batch_size=256,
                  num_target_block_errors=100, # simulate until 100 block errors occurred
                  legend="Coded 8PSK",
                  soft_estimates=True,
                  max_mc_iter=100, # run 100 Monte-Carlo simulations (each with batch_size samples)
                  show_fig=False);

ber_plots.simulate(model_coded_awgn_2,
                  ebno_dbs=tf.range(0,9),
                  batch_size=256,
                  num_target_block_errors=100, # simulate until 100 block errors occurred
                  legend="Coded 16QAM",
                  soft_estimates=True,
                  max_mc_iter=100, # run 100 Monte-Carlo simulations (each with batch_size samples)
                  show_fig=True);

jhoydis (Collaborator) commented Oct 12, 2022

I think you might be overlooking that you plot BER versus Eb/No and not versus SNR. It is not surprising that QAM (with Gray labelling) does better than PSK.
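
One can check this for the custom 8PSK labelling from above: walking around the circle and counting the bit flips between neighbouring labels gives an average of 1.75 instead of the 1.0 a Gray labelling would achieve (a small NumPy sketch):

import numpy as np

real = np.cos(np.pi/4)
imag = np.sin(np.pi/4)
points = np.array([-1, 1, 1j, -1j,
                   complex(real, imag), complex(real, -imag),
                   complex(-real, imag), complex(-real, -imag)])

order = np.argsort(np.angle(points))      # indices sorted by phase
flips = [bin(int(a) ^ int(b)).count("1")  # Hamming distance of the 3-bit labels
         for a, b in zip(order, np.roll(order, -1))]
print(flips, np.mean(flips))              # mean is 1.75; a Gray labelling gives 1.0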

Koltice (Author) commented Oct 13, 2022

Alright, I think I got it! Thanks again for your help!
