TDL channel normalization #12
Comments
Your understanding of how (optional) normalisation is done in Sionna is correct. Optional per-batch-example normalisation was implemented as it allows convenient control of the SNR of a resource grid or block by fixing the transmit power or the noise power. Note that the TDL channel model provides access to the path powers through the property
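For concreteness, the per-batch-example normalization described above can be sketched in NumPy. This is a hedged illustration of the underlying math, not Sionna's actual implementation; the shapes and values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch of channel frequency responses: [batch, num_subcarriers]
batch, num_sc = 4, 64
h = (rng.standard_normal((batch, num_sc))
     + 1j * rng.standard_normal((batch, num_sc))) / np.sqrt(2)
h *= 3.0  # give the raw channel an arbitrary non-unit average energy

# Per-example normalization: scale each batch example so that its
# average energy per resource element equals one
energy = np.mean(np.abs(h) ** 2, axis=-1, keepdims=True)
h_norm = h / np.sqrt(energy)

# Every example now has exactly unit average energy, so the SNR is
# fixed by the transmit power and the noise power alone
print(np.mean(np.abs(h_norm) ** 2, axis=-1))  # -> [1. 1. 1. 1.]
```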
Thanks for your answer. For the TDL I have actually created a custom version of Sionna's TDL channel model, so that I can normalize the channel as you propose (and also set a custom JSON model). But I wasn't sure it was fully theoretically correct to do it that way. I add:

```python
if self._normalize_taps:
    self._mean_powers = mean_powers / tf.reduce_sum(mean_powers)  # no sqrt() needed, as these are powers
else:
    self._mean_powers = mean_powers
```

Sionna's per-batch-example way to normalize should be OK most of the time, but I think it may create some strange behavior if the sample size is too small. For instance, the noise power is not normalized on a per-sample basis: the noise distribution is set once, and the Monte Carlo simulation will provide results corresponding to the desired noise distribution. You do not need to normalize the noise power for each sample, and you should be able to do the same with the channel distribution. From this code I obtain the following figure (sorry for the custom class; you should be able to reproduce the first curve with a normalized TDL model A JSON and the standard Sionna TDL channel model):

Sionna's normalization seems here a little unbalanced across the spectrum. But this is not always the case, depending on the FFT size. Again, I think this is not a major issue, or not even an issue, but I feel like it could be done more correctly, if that makes sense. Anyway, thanks for the great job!
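The alternative described above, normalizing the power delay profile itself to unit total power, can be checked with a quick Monte Carlo. This is a NumPy sketch; the tap powers are made up and the variable names are illustrative, not Sionna's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary (unnormalized) TDL mean tap powers, linear scale
mean_powers = np.array([1.0, 0.5, 0.25, 0.1])

# Normalize the PDP to unit total power (no sqrt: these are powers)
mean_powers = mean_powers / mean_powers.sum()

# Draw Rayleigh taps: a_l ~ CN(0, p_l) per tap, many realizations
n = 200_000
a = (rng.standard_normal((n, 4))
     + 1j * rng.standard_normal((n, 4))) / np.sqrt(2)
a *= np.sqrt(mean_powers)

# The ensemble-average total channel energy is one, without any
# per-example rescaling
print(np.mean(np.sum(np.abs(a) ** 2, axis=-1)))  # ≈ 1.0
```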
The observed artefact is expected when normalizing the channel frequency responses of each batch example separately. Starting from version 0.11, the CDL and TDL power delay profiles are normalized to have unit total power. For the system-level models (UMi, UMa, and RMa), the power delay profiles were already normalized. Note that per-example normalization is still very useful to get a clean SNR definition, since codewords are contained within one slot.
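With a unit-total-power PDP, the ensemble-average energy of the channel frequency response is one on every subcarrier, with no per-example rescaling needed, which is the effect of the version-0.11 change described above. A NumPy sketch (the tap powers, delays, and FFT size are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

fft_size = 32
powers = np.array([0.6, 0.25, 0.1, 0.05])  # PDP with unit total power
delays = np.array([0, 2, 5, 9])            # tap positions in samples

# Independent Rayleigh taps, a_l ~ CN(0, p_l), many realizations
n = 100_000
a = (rng.standard_normal((n, 4))
     + 1j * rng.standard_normal((n, 4))) / np.sqrt(2)
a *= np.sqrt(powers)

# Frequency response per subcarrier: H[k] = sum_l a_l e^{-j2πk d_l / N}
k = np.arange(fft_size)
phase = np.exp(-2j * np.pi * np.outer(k, delays) / fft_size)  # [fft_size, 4]
H = a @ phase.T  # [n, fft_size]

# Average energy per subcarrier is one for every k: since the taps are
# independent, E|H[k]|^2 = sum_l p_l = 1, flat across the spectrum
print(np.mean(np.abs(H) ** 2, axis=0))  # ≈ 1.0 on each subcarrier
```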
Hello,
The following question / remark concerns only the TDL channel models, as they are the ones I have been using. This may also apply to other channel models, but I don't know.
To my understanding, the current channel normalization process is done in either `cir_to_time_channel()` or `cir_to_ofdm_channel()`. The normalization is a per-sample normalization, as each channel sample per batch is normalized (also per tx/rx pair). From the documentation:

> normalize (bool) – If set to True, the channel is normalized over the block size to ensure unit average energy per time step.

and

> normalize (bool) – If set to True, the channel is normalized over the resource grid to ensure unit average energy per resource element.

Instead of a per-sample normalization, you could compute the normalization once in the class init with an arbitrarily large number of samples. This way you wouldn't need to compute the normalization factor at each step, and could apply the scaling only if needed?
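The suggestion above, estimating the normalization constant once at construction time rather than per batch, could be sketched as follows. The class and the `sample_channel` callback are hypothetical, not Sionna code:

```python
import numpy as np

class PrecomputedNormalizer:
    """Estimates a fixed channel scaling factor once, from a large
    number of Monte Carlo samples, instead of per batch example."""

    def __init__(self, sample_channel, num_samples=100_000, seed=0):
        rng = np.random.default_rng(seed)
        h = sample_channel(rng, num_samples)       # [num_samples, ...]
        mean_energy = np.mean(np.abs(h) ** 2)      # average energy per element
        self._scale = 1.0 / np.sqrt(mean_energy)   # computed once, at init

    def __call__(self, h):
        # Apply the same fixed scaling at every simulation step
        return h * self._scale


# Example: a raw channel with average energy 4 per element
def sample_channel(rng, n):
    return 2.0 * (rng.standard_normal((n, 16))
                  + 1j * rng.standard_normal((n, 16))) / np.sqrt(2)

norm = PrecomputedNormalizer(sample_channel)
h = sample_channel(np.random.default_rng(123), 8)
# Average energy of the scaled channel is ≈ 1 in expectation
print(np.mean(np.abs(norm(h)) ** 2))
```

The trade-off is that the ensemble is normalized on average rather than each realization exactly, which is precisely the behavior argued for above.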