
Noise issues #30

Closed · windytan opened this issue Sep 30, 2016 · 9 comments

@windytan (Owner) commented Sep 30, 2016

I've gotten some reports that redsea requires a stronger FM signal than other decoders. Possible reasons are discussed below.

1) rtl_fm

  • Are the parameters for rtl_fm optimal?
  • Is there a poor-quality resampling phase somewhere?
  • Is the bandwidth (171 kHz) right?

2) PLL

There's jitter in the 57 kHz PLL (realized as nco_crcf_pll_step in liquid-pll), especially when the signal is noisy.

  • Is this an issue?
  • What could affect this? Loop filter bandwidth?
  • What about the phase error multiplier? (See the sketch after this list.)
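
For reference, here is a minimal sketch of where these two knobs sit in liquid's NCO-based PLL. This is not redsea's actual chain; the bandwidth, the gain, and the simple arg()-based phase detector are all illustrative:

```c++
// Minimal sketch of a liquid-dsp PLL update, one input sample at a time.
// The loop filter bandwidth and the phase-error multiplier below are the
// two knobs in question; both values are illustrative only.
#include <complex>
#include <liquid/liquid.h>

void track_subcarrier(const std::complex<float>* x, unsigned int n,
                      float fc_rad_per_sample) {   // e.g. 2*pi*57000/fs
  nco_crcf nco = nco_crcf_create(LIQUID_VCO);
  nco_crcf_set_frequency(nco, fc_rad_per_sample);
  nco_crcf_pll_set_bandwidth(nco, 0.01f);          // loop filter bandwidth
  const float kPhaseErrorGain = 0.5f;              // phase error multiplier

  for (unsigned int i = 0; i < n; i++) {
    std::complex<float> y;
    nco_crcf_mix_down(nco, x[i], &y);  // mix input toward baseband
    // Residual phase error; valid for an unmodulated tone. A real RDS
    // chain needs a modulation-tolerant error detector here.
    float dphi = std::arg(y);
    nco_crcf_pll_step(nco, kPhaseErrorGain * dphi);
    nco_crcf_step(nco);                // advance NCO phase
  }
  nco_crcf_destroy(nco);
}
```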

Below, the PLL tracks a good-quality RDS subcarrier. 99.9 % of blocks were received. Time is in seconds.

[plot: 57 kHz PLL frequency vs. time, good-quality signal]

Here's a noisy signal, with 60.1 % of all blocks received.

[plot: 57 kHz PLL frequency vs. time, noisy signal]

Average spectral power of the two signals, good signal in green and the noisy one in red:

[plot: average spectral power; good signal in green, noisy signal in red]

Looking at the graph, there's a 27 dB difference in SNR. Is it realistic to receive error-free data in the noisy case?

3) Symbol synchronizer

  • Is liquid's symbol synchronizer being used correctly?
  • What are the correct values for bandwidth, delay, and the excess bandwidth factor? (See the sketch after this list.)
  • Do we really need a separate PLL and symbol synchronizer? Couldn't they be fused somehow? After all, the PLL already gives us a multiple of the symbol rate (57,000 / 48 = 1187.5).
  • What about symtrack in liquid-dsp 1.3? It seems to perform much worse than the current processing chain + symsync, but could its parameters be adjusted?
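
A minimal sketch of how those three parameters enter liquid's polyphase symsync; the values here are illustrative, not recommendations:

```c++
// Minimal sketch of liquid's polyphase symbol synchronizer. k follows from
// the sample rate and the 1187.5 baud RDS symbol rate; e.g. fs = 19000 Hz
// gives k = 16 samples/symbol. All values here are illustrative.
#include <complex>
#include <vector>
#include <liquid/liquid.h>

void sync_symbols(std::complex<float>* x, unsigned int nx) {
  const unsigned int k    = 16;    // samples per symbol
  const unsigned int m    = 5;     // filter delay, in symbols
  const float        beta = 0.8f;  // excess bandwidth factor
  const unsigned int M    = 32;    // number of filters in the bank

  symsync_crcf q =
      symsync_crcf_create_rnyquist(LIQUID_FIRFILT_RRC, k, m, beta, M);
  symsync_crcf_set_lf_bw(q, 0.02f);        // timing loop bandwidth

  std::vector<std::complex<float>> y(nx);  // recovered symbols
  unsigned int ny = 0;
  symsync_crcf_execute(q, x, nx, y.data(), &ny);
  // y[0..ny-1] now holds one sample per symbol.

  symsync_crcf_destroy(q);
}
```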
@mvglasow commented

My setup’s slightly different, but it is based on the same code and thus likely to suffer from the same issues: audio samples are obtained with C code based on rtl_fm and then processed into an RDS binary stream by a Java implementation based on an early (ca. April 2015) version of redsea. This is implemented as a plugin to RDS Surveyor, which does all the higher-level RDS stuff in addition to visualizing signal strength and RDS block error rates. The plugin also downsamples audio to 48,000 Hz and plays it through the sound card—helpful because the amount of static is a rough indicator of signal quality.

I use a Logitech VG0002A (FC0013 tuner), no extra shielding, hooked up to my roof antenna. OS is Ubuntu MATE 16.04 (64-bit), running on an Intel Core2 Duo P8400 @ 2.26GHz × 2. For reference, I also use a Si4703-based dongle, i.e. a dedicated RDS FM tuner, wrapped in aluminium foil for shielding, with the same software setup.

I notice the Si4703 picks up fewer stations than the RTL2832, though RDS is usually crisp with hardly any block errors. With the RTL2832 occasionally I get good RDS data, other days things look quite messy. The antenna setup in my house probably isn’t the greatest either, I am also having issues with analog radio reception.

My observations:

  • Block errors kick in once CPU load is constantly above 80% on both cores; at 128K without audio, load is usually around 60–80% on both cores. Enabling audio eats a few extra CPU cycles; disabling it slightly improves RDS reception.
  • Sample rate matters. At 128K things look decent, but going up as far as 250K improved signal quality (this was before I implemented audio). With more CPU power, you might be able to go higher.
  • The atan algorithm in rtl_fm (-A command line option) makes a difference. std and fast seemed almost identical in terms of signal quality and CPU load, but with lut I immediately noticed extra static and didn’t get a single RDS group.
  • Automatic gain control on the FC0013 seems botched. On some occasions I’d get the strongest local station at 20+ dBm, on others it was somewhere around -5 dBm. This changed from time to time for no apparent reason—I suspect gain control to be the culprit. I’ve worked around this by controlling gain in software. This was implemented only today, hence no long-term test results yet, but so far gain has consistently gone up to the maximum value of 197 (the driver counts tuner gain in tenths of a dB, i.e. 19.7 dB).
  • It seems the system needs some time to stabilize. On startup, I frequently see patterns of good and bad blocks alternating quite reproducibly, then the error rate gradually improves. I’m wondering if this could be some kind of timing issue.
  • Disabling rtl_fm resampling resulted in signal strength improving by some 5 dBm.

@windytan (Owner) commented

Thanks, great observations!
Note that the current versions, based on liquid-dsp, are much more noise-resistant and efficient; CPU load is at 0.8 % on my 2.8 GHz Intel Core i7.

@mvglasow commented May 1, 2017

Thanks for the heads-up; this is where my implementation differs, as I needed the carrier demodulation part to be in pure Java and thus used Java DSP Collection. It’s a lot less feature-complete than liquid-dsp; modernizing that part of the code will take some more research.

I just enabled csv stats on my implementation and monitored the PLL frequency. On a good sample, it jumps about wildly for about a quarter of a second, then goes to 57002 and slowly decreases to around 56997.5 with minimal oscillation, though that takes a few seconds.

Sometimes, after things have stabilized, I have brief periods of mostly bad samples, or even temporary interruptions, which quickly return to normal. These seem to correspond to the frequency diverging from its stable value and then returning in the stats.

I’m wondering if the pilot tone is subject to similar jitter. If the pilot tone remains stable while the PLL frequency is all over the place, it may be a sign that the PLL is doing weird things (or something’s specifically messing up the frequency range of the subcarrier). If there is similar jitter in the pilot tone—I’d expect the changes in the PLL frequency to slightly lag behind—we may have some issue that affects all frequencies and the PLL jitter is just a symptom of it. Sporadically dropped samples come to mind, or maybe jitter in the sample rate.

In the latter case, it would seem logical to rely on the pilot tone as a tuning reference. However, I see that you dropped pilot tone recovery in e4edd12—what was the motivation for that? Did you observe any changes in noise resilience before and after?

@windytan (Owner) commented May 1, 2017

Pilot tone recovery was dropped because not all RDS-carrying stations are guaranteed to have a pilot tone. For instance, a local station here is monaural and thus has no stereo subcarrier or pilot tone, yet it transmits a PS name via RDS.

It could be possible to detect the presence of a pilot tone and base our choice of clock reference on that, or use a command-line switch. After all, per the RDS standard, the RDS subcarrier is supposed to be locked to the third harmonic of the 19 kHz pilot.
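
A sketch of that selection; pilot_present() is a hypothetical helper, not an existing redsea function (it could be a narrowband power test like the detector sketched in a later comment):

```c++
// Hypothetical sketch: prefer the 19 kHz pilot as clock reference when it
// is detectable, otherwise fall back to free-running 57 kHz recovery.
bool pilot_present(const float* mpx, int n, float fs);  // assumed helper

enum class ClockBase { kPilot19k, kSubcarrier57k };

ClockBase choose_clock_base(const float* mpx, int n, float fs) {
  return pilot_present(mpx, n, fs) ? ClockBase::kPilot19k
                                   : ClockBase::kSubcarrier57k;
}
```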

I didn't get to test the effect it had on noise resilience as I didn't have the test scripts that I now have.

@mvglasow commented May 2, 2017

Interesting—in fact I was wondering the other day if the pilot tone is present on monaural stations which transmit RDS.

Another thing I noticed here: when I turn the dial and pick up some noise before tuning into a good station, it takes a very long time before I get any RDS data. In one instance, I spent some 2–3 minutes listening to static before tuning into the strongest local station here, and got no RDS until I gave up 5 minutes later. Analysis showed that for most of the latter part, the PLL remained stubbornly locked to some 56600 Hz, with remarkable stability.

Conclusion: Noise can profoundly throw the PLL off, an effect which seems to increase with the duration of the noise, and it can take a long time to recover even when a good signal is received again.

Another approach might be to run an FFT or a DFT on the relevant frequency ranges of the baseband signal and look for the pattern of the RDS subcarrier around 57 kHz—we should see two peaks, some 1.65 kHz apart, and a local minimum in the middle, which would be the subcarrier frequency. If we don’t see a clear RDS subcarrier pattern, we’re probably listening to a non-RDS station (or even plain static) and should leave the subcarrier frequency alone. When we detect the RDS subcarrier again after having lost it, we re-initialize the PLL with the default frequency.
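
A hypothetical sketch of that test, using the Goertzel algorithm as a cheap single-bin DFT. The ±825 Hz offsets follow from the 1.65 kHz peak spacing; the peak-to-dip ratio threshold is an assumption to be tuned:

```c++
// Hypothetical detector for the pattern described above: measure power at
// 57 kHz and at the two expected sideband peaks (57 kHz ± 825 Hz), then
// require both sidebands to stand clearly above the middle.
#include <cmath>

// Power of a single DFT bin at frequency f (Hz), n samples at rate fs.
float goertzel_power(const float* x, int n, float f, float fs) {
  const float w = 2.0f * static_cast<float>(M_PI) * f / fs;
  const float coeff = 2.0f * std::cos(w);
  float s1 = 0.0f, s2 = 0.0f;
  for (int i = 0; i < n; i++) {
    float s0 = x[i] + coeff * s1 - s2;
    s2 = s1;
    s1 = s0;
  }
  return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}

bool rds_subcarrier_present(const float* mpx, int n, float fs) {
  const float lo  = goertzel_power(mpx, n, 57000.0f - 825.0f, fs);
  const float mid = goertzel_power(mpx, n, 57000.0f, fs);
  const float hi  = goertzel_power(mpx, n, 57000.0f + 825.0f, fs);
  const float kPeakToDip = 2.0f;  // assumed ratio; tune against real signals
  return lo > kPeakToDip * mid && hi > kPeakToDip * mid;
}
```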

While we’re at it, we could check for both the pilot tone and the subcarrier and, if both are present, see if they agree. Since the signal level of the pilot tone is higher than that of the RDS subcarrier (10% vs. 5%), I’d expect the pilot tone to have better noise resilience.

@mvglasow commented May 5, 2017

Results are discouraging.

I gave up on pilot tone recovery, as it saturated my CPU, negating any potential improvements in reliability. Maybe redsea lends itself better to that approach.

The DFT approach doesn’t seem to work—unless I’ve made an error somewhere. I collected two seconds of samples (which should give me 0.5-Hz frequency bins), then ran a DFT at 17/19/21 kHz as well as at 54/56.175/57/57.825 kHz and looked for the peak pattern.

No matter whether I was listening to a good station or static, the DFT analysis kept flipping happily between detecting and losing the pilot tone and/or the RDS subcarrier (independently of each other). Frequently, on the good station, DFT analysis would tell me it had lost the subcarrier but the decoder would happily continue spitting out data.

@mvglasow commented May 9, 2017

A few more ideas:

I see the signal level of the good MPX looks more “balanced” than that of the noisy one, which looks more jagged, with occasional peaks. Possibly as a result of those, AGC gain is lower. I wonder if it’s possible to set the AGC to a more “aggressive” setting (higher gain) and how this would affect the PLL and everything after.

Speaking of gain control—when I mentioned gain control in software, I meant “determine in software what gain value to set the tuner to”. For redsea, that would mean controlling rtl_fm parameters.

Frequency correction in the “homebrew” PLL looked like this:

```c++
fsc -= 0.5 * pll_beta * d_phi_sc;
```
I’m by no means an expert on signals, but would it be possible to factor signal quality into that equation, as a kind of confidence level, which would be lower when noise is detected? (And would this still be possible with liquid-dsp?)
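
As a pure sketch of that idea (the confidence input and its range are assumptions; with liquid-dsp the closest equivalent would be scaling the phase error passed to nco_crcf_pll_step()):

```c++
// Hypothetical variant: scale the correction by a confidence level in
// [0, 1] that drops when noise is detected (see the metric sketched below).
void pll_update(float& fsc, float pll_beta, float d_phi_sc, float confidence) {
  fsc -= 0.5f * pll_beta * d_phi_sc * confidence;
}
```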

As for detecting noise, I just worked something out for an FM seek algorithm and am quite happy with the results: because RSSI is a poor indicator of a valid FM station (there are weak transmitters as well as strong noise), I analyzed the demodulated spectrum up to the RDS subcarrier frequency. I ran an FFT across the spectrum, then determined the average power level (in dB) and the mean absolute deviation. For noise, the mean absolute deviation is lower than for a good station, but the average power level is higher. By experimenting with some good and some bad signals I established a threshold for each value, then computed a simple weighted difference, with weights chosen so that the result is zero at the thresholds. The greater the value, the better the signal. Maybe that’s a basis for a confidence level.
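
Sketched in code, following the description above; the thresholds and weights are placeholders, since the experimentally established values aren't given:

```c++
// Sketch of the metric described above. power_db holds per-bin FFT power
// in dB for the demodulated spectrum up to the RDS subcarrier frequency.
#include <cmath>
#include <numeric>
#include <vector>

float signal_quality(const std::vector<float>& power_db) {
  const float avg =
      std::accumulate(power_db.begin(), power_db.end(), 0.0f) / power_db.size();

  float mad = 0.0f;  // mean absolute deviation from the average
  for (float p : power_db) mad += std::fabs(p - avg);
  mad /= power_db.size();

  // Placeholder thresholds: noise shows a higher average power level but a
  // lower deviation than a good station.
  const float kAvgThreshold = -30.0f;
  const float kMadThreshold = 6.0f;
  const float kWeightMad = 1.0f, kWeightAvg = 1.0f;

  // Weighted difference, zero at the thresholds; larger means better.
  return kWeightMad * (mad - kMadThreshold) - kWeightAvg * (avg - kAvgThreshold);
}
```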

To work around issues with noise detuning the PLL beyond recovery, I ended up monitoring the PLL frequency and allowing it to drift within ±14 Hz of the subcarrier frequency (7 Hz is the tolerance per the specs, which I doubled to allow for similar inaccuracy on the receiving end). When the frequency goes outside that range, I simply reset it to 57 kHz—admittedly not the most elegant solution, but it dramatically improved behavior in scanning scenarios, alternating between good stations, noisy stations and pure static.
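
In code, that watchdog amounts to something like this minimal sketch:

```c++
// Minimal sketch of the watchdog just described: let the PLL drift within
// ±14 Hz of the nominal subcarrier and hard-reset it otherwise.
#include <cmath>

void clamp_pll_frequency(float& fsc) {  // current PLL frequency, Hz
  const float kNominalHz  = 57000.0f;
  const float kMaxDriftHz = 14.0f;  // 2 × the 7 Hz transmitter tolerance
  if (std::fabs(fsc - kNominalHz) > kMaxDriftHz)
    fsc = kNominalHz;  // reset and let the loop re-acquire
}
```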

@windytan (Owner) commented

PLL drift/detuning is no longer a problem, now that liquid is in use. The only remaining issue there is jitter with very noisy signals, as in the pictures above, but I'm not even sure if that causes any degradation in quality.

As for the homebrew PLL design in very old versions of redsea, I'm reluctant to comment on that, having no background in the theory myself.

@windytan (Owner) commented

After testing the current version of redsea against RDS Spy on a noisy MPX signal, it seems redsea performed better, recovering around twice as many groups. So I guess noise should not be a problem any more.
