
Question on power calculation (dBm) #1923

mweyland opened this issue Feb 3, 2022 · 7 comments


mweyland commented Feb 3, 2022

Hey guys

I looked at the code used to calculate the power within the filter bandwidth because I am trying to do something similar in a different (completely unrelated) project. I read the wiki article about it and also a number of threads where "dBm" vs. "S-Meter" is debated ... passionately. Thankfully my question is not on that debate at all. All line numbers refer to the current ui_spectrum.c in the default branch:

On line 1383, arm_cfft_f32() is called. I presume this is computing the complex DFT of the voltage samples obtained from the QSD.

On line 1390, arm_cmplx_mag_f32() is called to turn the FFT result into magnitudes. Since the samples z are sampled voltages and this function is computing sqrt( Re(z)² + Im(z)² ), those magnitudes are signal amplitudes of the respective sinusoids. These sinusoids are still a representation of voltage.

In UiSpectrum_CalculateDBm(), the frequency bins that fall within the filter bandwidth of the receiver are determined, taking care of all edge cases. The boundary bins are stored in Lbin and Ubin, respectively.

Starting from line 2084, I believe the samples are reshuffled so that the subsequent operation is easy, but I am not entirely sure about that. In any case, the voltage magnitudes are copied from FFT_MagData to FFT_Samples, which I believe is used as some kind of working buffer(?)

Then in the loop on line 2101, the magnitudes between Lbin and Ubin are summed. These still represent voltages.

Finally on line 2114 this sum of voltages (magnitudes) is converted into the logarithmic domain.
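
For my own understanding, here is a minimal numpy sketch of that pipeline as I read it (numpy stand-ins for arm_cfft_f32 / arm_cmplx_mag_f32; Lbin, Ubin, slope and the offset are just illustrative placeholders, not the actual UHSDR values):

    import numpy as np

    def dbm_estimate(iq, Lbin, Ubin, slope=20.0, offset=0.0):
        # complex DFT of the I/Q voltage samples (stand-in for arm_cfft_f32)
        spectrum = np.fft.fft(iq)
        # sqrt(Re^2 + Im^2), i.e. voltage magnitudes (stand-in for arm_cmplx_mag_f32)
        magnitudes = np.abs(spectrum)
        # sum of the (non-squared) magnitudes inside the filter passband
        passband_sum = np.sum(magnitudes[Lbin:Ubin + 1])
        # conversion into the logarithmic domain with a calibration slope/offset
        return slope * np.log10(passband_sum) + offset

    # example: a single complex tone that falls exactly onto one bin
    fs, n = 48000, 2048
    t = np.arange(n) / fs
    iq = np.exp(2j * np.pi * 1500.0 * t)
    print(dbm_estimate(iq, Lbin=0, Ubin=n // 2 - 1))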

Now here is my question: A power-equivalent to the voltage would be obtained by squaring the voltage (P ~ U²). In the logarithmic domain, we have log(U²) = 2*log(U), thus this factor of 2 would be accounted for in the calibration procedure when the variable slope is determined. In the present case, however, U is a sum of the voltages (or rather, magnitudes that represent voltages in the digital domain). Naively, I would assume that the conversion to power/dBm would call for the RMS value of the voltage. According to Parseval's theorem, this could be achieved by squaring the magnitudes before summing. Thus we would have log(U_1² + U_2² + ... + U_n²), which is NOT equal to 2*log(U_1 + U_2 + ... + U_n).
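
A tiny numeric check of that inequality (my own illustration): with two equal bins the two expressions already differ by about 3 dB.

    import numpy as np

    x = np.array([1.0, 1.0])                 # two equal-magnitude bins
    print(10 * np.log10(np.sum(x**2)))       # log of the power sum:    10*log10(2) ≈ 3.01
    print(20 * np.log10(np.sum(x)))          # 2*log of the voltage sum: 20*log10(2) ≈ 6.02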

I'm not claiming to have found a bug in the code, but I'm wondering about the mathematics that allow for the simple summing of magnitudes during the aforementioned loop...


DD4WH commented Feb 4, 2022

@mweyland
Thanks for the question!
I had to think about that a little more, because the code was written more than 5 years ago ;-).

  • If I remember correctly, the magnitude values calculated in the FFT represent the RMS spectrum, magnitudes being proportional to RMS voltages (calculated as sqrt(I * I + Q * Q) by the arm_cmplx_mag_f32 function).
  • The code sums up these voltages (as you pointed out)
  • then we calculate log10 of this sum --> this is the power of the signal in the desired filter passband
  • coherent power gain (influence of the FFT window) and FFT processing gain (influenced by the size of the FFT) are accounted for by a simple calibration factor (a small sketch of this idea follows below)
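
To illustrate that last point (a sketch with illustrative numbers and one common convention for the processing gain, not the UHSDR constants): in the log domain both gains are fixed offsets for a given window and FFT size, so a single calibration constant can absorb them.

    import numpy as np

    fft_size = 2048
    window = np.hanning(fft_size)

    # coherent (amplitude) gain of the window, expressed in dB (about -6 dB for Hann)
    coherent_gain_db = 20 * np.log10(np.sum(window) / fft_size)
    # FFT processing gain, which depends only on the FFT size (one common convention)
    processing_gain_db = 10 * np.log10(fft_size)

    # both are constants, so one calibration offset determined against a known
    # signal level absorbs them (together with any analog front-end gain)
    calibration_offset_db = -(coherent_gain_db + processing_gain_db)
    print(coherent_gain_db, processing_gain_db, calibration_offset_db)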

We achieved astonishing accuracy when compared with exact power measurements (there are several of those measurements reported on the web/in forums).

Not sure whether this makes sense; I treated it in the spirit of these sources:
https://kluedo.ub.uni-kl.de/files/4293/exact_fft_measurements.pdf
https://www.sjsu.edu/people/burford.furman/docs/me120/FFT_tutorial_NI.pdf


mweyland commented Feb 4, 2022

Thank you so much for that quick and comprehensive response. Those two documents are a great resource for what I'm trying to pull off... thanks :-)

Regarding UHSDR, there is one confusing thing left: I understand that we are in agreement that the code sums up the voltages. However, both documents you linked seem to suggest that the summing is to be done on the squared voltages. I don't have a UHSDR rig (although, particularly after this, there is little doubt that I have to get one...), but I did some simulations, and the results are very accurate and linear with the sum of squared voltages, and become very nonlinear when the voltages are not squared.

Now I cannot find the squaring in the UHSDR code and you seem to confirm that the summing is of (non-squared) voltages. But then again, the procedure was demonstrated to be accurate and I do not understand how that is possible.


DD4WH commented Feb 4, 2022

Yes, you are right:

  • we sum the sqrt (I * I + Q * Q)
  • then we log10 that sum
  • I agree that for exact results we should not take the sqrt of (I * I + Q * Q) before summing and applying the logarithm; that looks like a bug

Could you share your simulations? I calculated the correct and the incorrect way for two different examples and the results do not differ much. But I agree: taking the sqrt is one operation too many.

I am a little hesitant about fixing this, though. All users would need to calibrate their rigs again . . . So before we do that, it should be clear how much error we can expect, even if we keep the "wrong" calculation.
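
As a rough feel for the size of the error (my own sketch): if the passband power is spread over K bins of roughly equal magnitude, the sum-of-magnitudes reading sits about 10*log10(K) dB above the sum-of-squares reading, and for a single dominant bin the two agree exactly.

    import numpy as np

    for k in (1, 2, 4, 10, 100):
        x = np.ones(k)                          # K bins of equal magnitude
        current = 20 * np.log10(np.sum(x))      # present approach (factor of 2 in the slope)
        parseval = 10 * np.log10(np.sum(x**2))  # sum of squared magnitudes
        print(k, current - parseval, 10 * np.log10(k))   # discrepancy equals 10*log10(K)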

Will think about that a little more! Thanks for spotting this!


mweyland commented Feb 4, 2022

This gets more interesting every time I look at it... I just did some math and it may be equivalent after all, because some term factors out and can be pulled out of the log. When I said "simulations" I actually meant that I tried it with another front end and some Python code to do the summing etc., with the results mentioned above. So the math may check out and my code may be wrong... I will try to sort this out over the weekend and report back. Most likely everything is just fine. Thank you for your time!


mweyland commented Feb 5, 2022

OK I think I figured it out. I want to state these findings super carefully, though, because I came here as somebody who was looking for how this is done "right", lacking the confidence to do so. Now I find myself in a position where I'm starting to claim things may potentially be wrong and I want to do that in the most humble way possible. Also I really hope I am not wasting anyone's time.

I believe there are cases where the lack of squaring is absorbed in the constant and slope calibration factors, but there may also be cases where it is not. I provide examples of the latter at the bottom. All tabular results are generated with power_detector_simu.py (attached), which is a modified version of the program I used before with a real front end; this one, however, generates the quadrature signals in software. There are a number of simplifying assumptions:

  • No "external" noise, so I ignore potential corrections regarding noise. These may make things even worse, but I don't think that accounting for noise will fix the potential issue described below.
  • I use a rectangular window, so I don't have to correct for CPG. You can activate an FFT window by uncommenting the corresponding line in the code; that only adds some fixed gain, so the findings remain valid when a nontrivial window function is used.

The code generates a signal at power levels that increase in steps of 10 dB and calculates:

  1. Power in the time domain using v_RMS. This is the baseline.
  2. Power obtained by squaring and bin-summing.
  3. "Power" obtained by bin-summing without squaring.

In the tables below, all figures are given in dB (i.e. in the logarithmic domain), in the aforementioned order (1-3). For the third case, where no squaring is done, there is an extra multiplicative factor of 2 (20*log10(...) instead of 10*log10(...)). Any multiplicative factor gets absorbed in the slope calibration factor, so doing that does no harm; indeed the code sets slope to a value close to 20, which will produce somewhat correct results in some cases (see below).
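
Since the attachment is not inlined here, a minimal reconstruction of the three computations (my own sketch under the same simplifying assumptions, not the attached power_detector_simu.py):

    import numpy as np

    fs, n = 48000, 4096
    t = np.arange(n) / fs
    amplitude = 10.0                                           # 10 dB above the unit case
    z = np.sqrt(amplitude) * np.exp(2j * np.pi * 1500.0 * t)   # unmodulated complex carrier

    # (1) power in the time domain via the RMS value (the baseline)
    p_time = 10 * np.log10(np.mean(np.abs(z) ** 2))

    # (2) power from squaring and bin-summing (Parseval)
    X = np.fft.fft(z) / n
    p_squared = 10 * np.log10(np.sum(np.abs(X) ** 2))

    # (3) "power" from bin-summing without squaring, with the extra factor of 2
    p_magnitude = 20 * np.log10(np.sum(np.abs(X)))

    print(p_time, p_squared, p_magnitude)   # a single on-bin tone: all three agree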

Single, unmodulated carrier (CW)

(1) time domain    (2) squared bins    (3) non-squared bins (x2)
0.0                0.0                 2.2855139553492143e-10
10.0               10.0                10.000000000228553
20.0               20.0                20.000000000228553
30.0               30.0                30.000000000228553
40.0               40.0                40.00000000022855
50.0               50.0                50.00000000022855

As you can see, all three methods yield the same power. This looks good so far. Why does that work? Assuming that just a single bin is non-zero (I will relax this assumption in the next example), the bin sum is a sum of many zeros plus that one non-zero bin. So log(0 + ... + 0 + x + 0 + ...) = log(x), and log(x) vs. log(x²) = 2*log(x) is just a matter of a multiplicative factor, which I did introduce for the voltage case.

Frequency modulated signal
I chose this example because I am most familiar with the nature of FM signals and it is what threw me off initially. You can replicate the results by changing the baseband signal:

    #signal = amplitude**0.5 * get_carrier_iq(t, baseband_freq)
    signal = amplitude**0.5 * get_fm_iq(t, baseband_freq, 1e3) 

Power calculations for different power levels and with the 3 methods are:

(1) time domain    (2) squared bins           (3) non-squared bins (x2)
0.0                -9.643274665532873e-16     5.66480488927558
10.0               10.000000000000002         15.664804889275583
20.0               20.0                       25.664804889275583
30.0               30.0                       35.66480488927558
40.0               40.0                       45.66480488927559
50.0               50.0                       55.66480488927559

Conclusion: Within that series of measurements, the dB power is consistent, but an offset of 5.66 dB appears when the squaring is omitted. This had me confused yesterday and prompted me to write the prior comment, where I thought that things may be OK after all. Before discussing the results, let's consider how this is possible. In the bin sum log(sum(x_i²)), you cannot simply replace the square with a multiplicative factor of 2 (there is just no arithmetic law that allows it, and a counterexample is trivial to produce). So why does it still work? The answer is that when comparing the same signal at different power levels, every Fourier coefficient is scaled by the same amplification factor a, and that factor can be pulled out of the sum and out of the log: log(sum(a*x_i²)) = log(a) + log(sum(x_i²)) and, likewise, log(sum(a*x_i)) = log(a) + log(sum(x_i)). In other words, as a is varied both readings change by the same amount; squaring the x_i only changes a constant offset (see the formula below). I believe this holds not merely for FM, but for any signal with multiple non-zero Fourier coefficients.
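
To put that argument into a single formula (my own restatement): write the passband magnitudes as x_i and scale the signal power by a (i.e. the amplitudes by sqrt(a)); then

$$10\log_{10}\Big(\sum_i a\,x_i^2\Big) = 10\log_{10}(a) + 10\log_{10}\Big(\sum_i x_i^2\Big),\qquad 20\log_{10}\Big(\sum_i \sqrt{a}\,x_i\Big) = 10\log_{10}(a) + 20\log_{10}\Big(\sum_i x_i\Big),$$

so both readings track power changes identically; they differ by the constant

$$\Delta = 20\log_{10}\Big(\sum_i x_i\Big) - 10\log_{10}\Big(\sum_i x_i^2\Big),$$

which depends only on how the power is distributed over the bins (0 dB for a single non-zero bin, the 5.66 dB seen above for the FM example).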

Is it bad news? Imagine this: you are tuned to an FM station. The station transmits a bare carrier without modulation. Regardless of the computation method, the computed power is 0 dB (first row in the first table above). Now the carrier is frequency modulated with a 1 kHz tone at any reasonable deviation (first row of the second table above). The power of the FM signal remains unchanged. This is reflected in the time-domain calculation (0 dB) and in the sum-of-squared-bins calculation (0 dB), but the calculation where the squaring is omitted yields 5.66 dB. And I don't think this is something that you can calibrate out, because this "error" changes as a function of the modulation index, while the actual signal power does not, as long as the signal essentially remains within the detector bandwidth.

This should be easy enough to test with real hardware.

Stretching the love for mathematics

One last thing: FM carrier and sideband magnitudes are given by Bessel functions of the first kind. Wikipedia has a handy table. With a modulation index of zero (CW), the carrier voltage is 1, we have no sidebands, and the signal power is 1² = 1. With a modulation index of 0.25, the carrier voltage drops to 0.98 and two sidebands of 0.12 "V" pop up (along with an infinite number of other sidebands that we don't care about because they are tiny in magnitude). Summing voltages:

0.98 + 2* 0.12 = 1.22 (we are off by 22%)

Summing power:
0.98² + 2*0.12² ≈ 0.989 (off by about 1%).

I claim that the 22% error in the first calculation originates in the lack of squaring, while the roughly 1% error in the second calculation simply comes from the rounding and from disregarding the energy in the low-magnitude sidebands. If we take more sidebands (at full precision) into account, the first error grows while the second error appears to converge to 0:

>>> import scipy.special
>>> import numpy as np
>>> e=1;m=0.5; scipy.special.jv(0,m)**e+2*np.sum(scipy.special.jv((np.arange(1,10000)),m)**e)
1.489680506646045
>>> e=2;m=0.5; scipy.special.jv(0,m)**e+2*np.sum(scipy.special.jv((np.arange(1,10000)),m)**e)
1.0

I am sure that some mathematical trickery exists to prove that the sum of all the squared Bessel functions is 1, but my math is not good enough to pull it off. I believe that what I showed is enough to demonstrate that the problem is real, or to let someone disprove me. In the latter case I am very sorry for having wasted your time.
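
For the record, the identity the snippet above verifies numerically is the standard Bessel sum, which follows from applying Parseval's theorem to the Jacobi-Anger expansion of exp(j*m*sin(θ)):

$$J_0(m)^2 + 2\sum_{n=1}^{\infty} J_n(m)^2 = 1 \quad\text{for every real modulation index } m,$$

which is exactly the statement that frequency modulation only redistributes the carrier power over the sidebands without changing the total.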

I am now eager to actually test this on some real hardware but I have yet to figure out which kit to get...


DD4WH commented Feb 5, 2022

@mweyland Thanks for your very detailed analysis and your Python code! As I mentioned earlier in this thread, I fully confirm (again) that the UHSDR code has a bug in that it takes the sqrt of (I * I + Q * Q) to calculate the power in dBm for the S-meter. You have wonderfully demonstrated why this is incorrect and you have also given an estimate of the expected errors for CW and FM signals. As I am not familiar with Python and not that familiar with the mathematics, I cannot comment on it in detail, sorry for that!

  • that also explains why the S-meter is so accurate in its present state when testing with a signal generator (producing a single frequency, hence a CW signal)!
  • I doubt that anybody has tested the S-meter with FM signals produced by a signal generator :-)
  • So you have answered all the questions, I think!

Now it is up to us to decide whether we change the UHSDR S-meter code to skip the sqrt.

  • PRO: it should make the S-meter reading more accurate (at least for FM input signals; as you have shown, the S-meter is quite accurate for CW signals, also see my notes below)
  • CON: the user would have to re-calibrate his/her rig (could be relevant for a few thousand of them . . .)

Just for the record: this is my simple back-of-the-envelope mathematics for this problem, which I had drawn up yesterday (two different signals; the first example is uniform noise, the second a CW tone) ;-):

(attached image: hand-drawn calculation of the two examples)


mweyland commented Feb 9, 2022

First of all, thank you for having that discussion and sharing your thoughts on an issue that originally was not very much related to the UHSDR project. That really made my day(s)!

As far as the path forward is concerned, I guess that is indeed up to you, also because (1) I don't know much about operating on HF (i.e. how operators use the S- or dBm meter in practice) and (2) I am unclear about the implications of recalibrating a couple of thousand rigs...

There is, however, one last thing I wish to add, hopefully contributing to the information on which you will base the decision: I am presuming that the calibration is done with a signal source emitting a single, continuous, unmodulated tone ("CW"). The links that I found on the actual calibration done by, e.g., DL8MBY are all dead, so I don't know for certain how this was done. Obviously, the dBm meter will be accurate in absolute and relative terms when measuring such CW tones after calibration. When measuring any other signal, there will be a loss of absolute accuracy. For example, if you split the single tone into two individual tones that together have the same total power as the original single tone, the dBm display will be off by some amount (see the check below). If you attenuate that signal by 10 dB, the dBm display will drop by 10 dB. So in relative terms this is still perfectly fine; it's just that the actual power, in absolute terms, is not what is displayed.
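
A quick check of the two-tone example under the same summing model as above (my own sketch): splitting one tone into two equal tones with the same total power leaves the sum-of-squares reading unchanged but raises the sum-of-magnitudes reading by 10*log10(2) ≈ 3 dB.

    import numpy as np

    one_tone  = np.array([1.0])                     # one bin carrying all the power
    two_tones = np.array([1.0, 1.0]) / np.sqrt(2)   # same total power, split over two bins

    for x in (one_tone, two_tones):
        print(10 * np.log10(np.sum(x**2)),          # squared sum: 0 dB in both cases
              20 * np.log10(np.sum(x)))             # magnitude sum: 0 dB vs. about +3 dB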

I believe that this problem gets worse as more and more frequency components are introduced. I.e. an "SSB" signal with a ton of spectral content will have a total power that is potentially quite different from what is being displayed. Noise, I think, will be even worse. If you attenuate that noise by 10 dB, the dBm display will drop accordingly, but it just won't show the actual total power in dBm. My point is that this is not just an issue with FM. I chose FM as a counterexample because I am very familiar with the math and the SDR processing of such signals, but I guess it would have been better to provide an example with, e.g., noise.

Again, I really don't know how people use this dBm display in practice, but I could very well see them comparing some kind of noise floor somewhere in a "free spot" of the band with the power of an incoming signal, and this kind of "manual" SNR measurement may(!) be very, very off. It would be off on all UHSDR units by the same amount, so the units would remain comparable amongst each other. But assume somebody like me comes along with a totally different piece of equipment; it would be very misleading to compare our measurements. Also, if somebody decides to quantify, e.g., the amount of noise at a particular site -- perhaps to compare against the VERON noise floor measurements with a calibrated antenna -- that number would be comparable between UHSDR users, but it would not be an accurate power estimate in dBm. Personally, that would bother me very much :-).

All that said, there may be an easy way out: if the calibration was done with a single tone, it is correct for single tones. Regardless of whether you square or not, you just have to divide slope by 2. I guess we'd have to think a bit more about that, but fixing the squaring issue without a recalibration could potentially retain the calibration for single-tone signals, and for any other signal (FM/AM/noise/SSB/whatever else there is) the dBm reading would just become more correct(?)
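
A small sketch of that idea (my own illustration; slope and const here are just placeholder names, not the actual UHSDR variables): squaring the magnitudes before summing and halving slope reproduces the old reading for single tones, so a single-tone calibration would carry over, while multi-bin signals would read differently (more correctly).

    import numpy as np

    def dbm_old(mags, slope=20.0, const=0.0):
        # present computation: sum the magnitudes, then take the log
        return slope * np.log10(np.sum(mags)) + const

    def dbm_new(mags, slope=20.0, const=0.0):
        # proposed computation: sum the squared magnitudes, halve the slope
        return (slope / 2) * np.log10(np.sum(mags ** 2)) + const

    tone = np.array([3.0])                      # single non-zero bin (any amplitude)
    print(dbm_old(tone), dbm_new(tone))         # identical -> single-tone calibration kept

    fm_like = np.array([0.98, 0.12, 0.12])      # multi-bin signal (the FM example above)
    print(dbm_old(fm_like), dbm_new(fm_like))   # readings now differ, as expected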

Probably the best way forward would be to test a modified firmware and observe the impact in practice with "real" signals instead of some sandbox games in Python. I also think that the problem gets worse as the number of FFT bins is increased. In your pen-and-paper solution you only have a handful of bins -- far fewer than I used in my simulation. If the number of bins is small, maybe the issue can be neglected. It is worth a try, in my opinion.
