Segmentation Fault #18
Forgot to mention, I'm running GNU Radio version 3.10.5.1 (Python 3.8.10).
Another note: I just noticed that I had the throttle block on the transmitter side. Actually, I did remove the throttle block while testing, and I got the same segmentation fault. Thanks!
Hi @aoweis,

Thanks for your interest in the project and for reporting the issue.

First of all, for your use case, I would recommend running the dvbs2-tx/dvbs2-rx command-line applications instead of the GRC flowgraph.

Regarding the segfault, I am aware that the symbol synchronizer implementation is not perfect. I have experienced problems before, especially when using high oversampling ratios. The difference between the two symbol synchronizer implementations comes down to a design tradeoff: the in-tree symbol sync is more flexible/configurable but slower, while the OOT implementation (within gr-dvbs2rx) is less flexible but faster for the DVB-S2 use case.

Regarding the throttle block, indeed, it won't make a difference for the problem you are seeing. It seems the problem happens on the Rx side.

I hope this helps. Thanks
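To give a concrete picture of what a symbol synchronizer's timing error detector computes, here is a minimal Gardner detector sketch in plain NumPy. This is an illustrative example only, not the code of either implementation discussed above (gr-dvbs2rx's block and GNU Radio's in-tree Symbol Sync may use different detectors and loop structures):

```python
import numpy as np

rng = np.random.default_rng(0)
sps = 2                 # samples per symbol (oversampling ratio)
num = 2000
symbols = rng.choice([-1.0, 1.0], num)  # random BPSK symbols

# Raised-cosine pulse (the combined Tx/Rx RRC response), beta = 0.5,
# truncated to 10 symbols on each side, at 2 samples/symbol.
span, beta = 10, 0.5
t = np.arange(-span * sps, span * sps + 1) / sps
with np.errstate(divide="ignore", invalid="ignore"):
    h = np.sinc(t) * np.cos(np.pi * beta * t) / (1 - (2 * beta * t) ** 2)
h[np.isclose(np.abs(2 * beta * t), 1.0)] = np.pi / 4 * np.sinc(1 / (2 * beta))

# Pulse-shaped waveform with *zero* timing offset.
upsampled = np.kron(symbols, np.r_[1.0, np.zeros(sps - 1)])
x = np.convolve(upsampled, h)

# Gardner TED: e[k] = (x[k] - x[k-1]) * x[k-1/2], evaluated at the correct
# symbol-spaced strobes (filter group delay = span * sps samples).
delay = span * sps
strobes = delay + np.arange(1, num) * sps
e = (x[strobes] - x[strobes - sps]) * x[strobes - sps // 2]
print(np.mean(e))  # close to zero, since the timing is already correct
```

With a timing offset, the mean error becomes nonzero, and its sign tells the loop which way to correct; the synchronizer's loop filter drives this average back toward zero.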
Hi again, @aoweis,

I have tried to add support for the bladeRF on the dvbs2-tx/dvbs2-rx applications. The code is on a separate branch.

Next, based on the screenshots you sent, I think you need to run the Tx side with the following options:
Similarly, you can run the Rx side with the following options:
Note:
Hi @igorauad, It will be my pleasure to help with this. I tried the command, but it seems something is missing with the GUI:
and I get this error:
Similarly, on the Rx side, it gives this error:
I then tried to run without the --gui option.

On the Tx side, the bladeRF seems to initialize properly, but then I get a message saying "buffer submission requested, but no transfers are currently available". Here are the last couple of messages:

[bladeRF sink] init: Buffers: 512, samples per buffer: 4096, active transfers: 32

If I run a spectrum analyzer using the other bladeRF, I can see some signal being transmitted.

On the Rx side, the receiver seems to be working fine, waiting for data. I modified the output file to received.ts.
Here is the output of the bladeRF:
I checked the size of the received.ts file, and it remains zero. Nothing seems to be received.
Hi @aoweis,

Thanks for the feedback. My bad, silly mistakes on the GUI implementation. I've pushed a fix to the branch.

Once you run the Rx in GUI mode, could you share a screenshot? Also, you could run the Rx with the debug option enabled to collect more information.
debug.zip

Here are the details. Note: the attached zip file contains the debug output for all trials.

Trial 1: standard OOT symbol synchronizer (start Rx, then Tx). Result: segfault (core dumped).

Trial 2: standard OOT symbol synchronizer (start Tx, then Rx). Result: seems to be synchronized, but the file size is still zero.

Trial 3: GNU Radio in-tree symbol synchronizer. Result: several Rx overrun messages from the bladeRF in the beginning only; the Rx file size is still zero.
Hi @aoweis , Cool, thanks for sending further info. I have a couple of comments:
Hi @igorauad,

It worked! I'm not 100% sure which change fixed it, but I was finally able to watch the movie on the Rx machine after applying all the corrections above. I was using GNU Radio's in-tree symbol sync with pilots. If I remove the pilots, I get a high bit error rate.

Regarding the clipping at the Tx side: using --gui-all, I found that the I/Q amplitudes were +/- 0.7. I did reduce the gain from 50 to 30, but I don't think this had any effect on the overall performance. I think the Tx gain acts on the RF signal, so the only problem with too much gain would be saturating the Tx amplifier. However, I'm using a Bias-Tee PA from Nuand, which should be compatible with the bladeRF. I'd assume that if a high gain could saturate the PA, Nuand would have warned users, which is not the case.

As for the DC offset, I built a BPSK flowgraph in GNU Radio, and I had to manually follow the same procedure as with the USRP: I tuned the RF frequency to (f_rf - offset) and used a frequency shift block with frequency = -offset. The bladeRF block in GNU Radio has an option to automatically correct for DC offset, but I couldn't confirm it works properly.

Thanks for your help!
@aoweis , Glad to know it worked!
Indeed, the current implementation needs more work to improve the pilotless mode. The current limitation is documented here: https://github.com/igorauad/gr-dvbs2rx/blob/master/docs/support.md. This task has been on my backlog for some time, but I still plan on working on it soon.
That is the expected range. However, I thought more about it and realized the scaling was still wrong, although good enough to prevent clipping issues. The previous implementation scaled the samples so that the complex IQ magnitude would be within +-1.0. However, what we want is for the independent I and Q amplitudes to range within +-1.0. So I have updated that on the master branch, which now includes the preliminary support for the bladeRF.
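The difference between the two scalings can be illustrated numerically. Below is a small sketch in plain NumPy (not the actual gr-dvbs2rx code) showing why magnitude-normalized QPSK leaves the I/Q range underused:

```python
import numpy as np

# QPSK constellation on the unit circle: magnitude 1.0, but each I/Q
# component only reaches 1/sqrt(2) ~= 0.707.
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))

# Scaling for unit complex magnitude (the old behavior) leaves per-component
# headroom unused: max |I| is only ~0.707, matching the +-0.7 observed.
mag_scaled = qpsk / np.max(np.abs(qpsk))
print(np.max(np.abs(mag_scaled.real)))  # ~0.7071

# Scaling so the independent I and Q amplitudes span +-1.0 uses the full
# DAC range (conceptually what the updated scaling does):
iq_scaled = qpsk / max(np.max(np.abs(qpsk.real)), np.max(np.abs(qpsk.imag)))
print(np.max(np.abs(iq_scaled.real)))   # ~1.0
```

This also explains the ±0.7 amplitudes reported earlier in the thread: magnitude-normalized QPSK has I and Q components at ±1/√2 ≈ ±0.707.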
Right. Perhaps you could try the Tx gain change on-the-fly by running
I had a quick look at the bladeRF source block. As far as I can tell, it does not do anything when the DC offset mode is set: https://github.com/Nuand/gr-bladeRF/blob/main/lib/bladerf/bladerf_source_c.cc#L434. I believe you may need to calibrate the DC offset as instructed in https://github.com/Nuand/bladeRF/wiki/DC-offset-and-IQ-Imbalance-Correction.

Your approach based on DSP also works, at the expense of CPU usage. The USRP does that on the FPGA. However, for experimental purposes, or when there are sufficient CPU resources, the approach you adopted shouldn't be a problem.

Let me know if there is anything more I can help with.
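For reference, the DSP approach described above (tuning the hardware to f_rf - offset and shifting digitally by -offset) can be sketched in a few lines of NumPy. The sample rate and offset below are arbitrary illustration values, not recommended settings:

```python
import numpy as np

fs = 1e6            # sample rate in Hz (illustrative value)
offset = 125e3      # hypothetical LO offset; fs/8 so the tone is bin-centered
n = np.arange(4096)

# Simulated capture: the wanted carrier appears at +offset because the LO
# was tuned 'offset' Hz below the target, plus an LO-leakage spike at DC.
tone = np.exp(2j * np.pi * (offset / fs) * n)  # wanted signal at +offset
dc_leak = 0.3 * np.ones(len(n))                # LO leakage at 0 Hz
rx = tone + dc_leak

# Digital shift by -offset recenters the wanted signal at 0 Hz and moves
# the DC spike to -offset, where the channel filter can reject it.
shifted = rx * np.exp(-2j * np.pi * (offset / fs) * n)

# The strongest FFT bin moves from the offset bin (512) to DC (0):
print(np.argmax(np.abs(np.fft.fft(rx))))       # 512
print(np.argmax(np.abs(np.fft.fft(shifted))))  # 0
```

This is exactly why the trick avoids the DC offset problem: the LO leakage never overlaps the signal of interest, at the cost of one complex multiply per sample on the CPU.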
Hi again @igorauad, Is there a way to reduce the end-to-end delay when running dvbs2-tx and dvbs2-rx over an RF link? I did some experiments based on our discussion above using the bladeRFs and two computers, and I measured a delay of approximately 11 seconds end to end. I was piping the webcam on the Tx machine through GStreamer into dvbs2-tx as follows:
and in another terminal: and using this command in the Rx machine:
Note that GStreamer itself causes around 1 second of delay at most (I figured this out by receiving the stream in another terminal on the same Tx machine and playing it). Is this delay caused by slow hardware? Can we achieve a lower delay? Thank you
Hi @aoweis,

Thanks for the question. That is an interesting investigation. Let's try to understand some possible sources of delay.

RRC Filtering on Tx

The root-raised cosine (RRC) filter used for pulse shaping on Tx is deliberately designed with a relatively significant delay. The motivation is the RRC response's dependence on the truncation length. In summary, the longer the filter, the lower the "truncation effects," resulting in sharper stopband attenuation with lower sidelobes. This is generally more important on the Tx (uplink) side than on the Rx side, so that the uplink transmitter minimizes interference to adjacent carriers.

That said, a delay of 50 symbols is not a lot. With 1 Msps, that is only 50 microseconds. Hence, even though the RRC delay is tunable, it is not the latency bottleneck here.

FEC

The inner LDPC decoder block waits until a few frames are collected before processing them all at once. By doing so, it can benefit from SIMD speed gains. The number of frames processed in one go depends on the SIMD architecture available on your host. The longest wait is with AVX2, which processes 32 frames at a time. So the worst-case latency introduced at this stage is that of 32 frames. In contrast, the outer BCH decoder currently processes one codeword/frame at a time, so it does not impose a latency bottleneck.

So, next, let's figure out how much latency the LDPC decoder can add. I see you are transmitting with the default frame parameters, which are QPSK with code rate 1/4, normal FECFRAMEs, and no pilots.
Hence, each PLFRAME has a length of 32490 symbols, which you can check in Python with the following:

from gnuradio import dvbs2rx
dvbs2rx.pl_info(constellation='qpsk', code='1/4', frame_size='normal', pilots=False)

With the default 1 Msps baud rate, each PLFRAME has a duration of roughly 32.5 ms. Hence, the FEC layer waiting for 32 frames may introduce a latency of up to ~1 second (or 1040 ms). Also, in this analysis, I'm only considering the minimum time to accumulate 32 frames, which is sample-rate-dependent, and I'm excluding the time to process such frames, which is CPU-dependent.

You can very loosely validate this delay by running a loopback simulation with real-time flows, like so:

cat example.ts | timeout -k 1s 4s dvbs2-tx --out-real-time | dvbs2-rx --in-real-time --sink file --out-file /tmp/decoded.ts

Each frame carries Kbch bits (the BCH message length), and Kbch = 16008 bits (2001 bytes) with QPSK 1/4 normal FECFRAMEs. So, if I'm not missing anything, by simulating for 5 seconds, at best, it would process 153 frames, which is enough to convey about 300 kB (153 × 2001 bytes).
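For convenience, the arithmetic above can be reproduced in plain Python (constants taken from the numbers quoted in this message; no gr-dvbs2rx install needed):

```python
# Back-of-the-envelope latency figures for QPSK, rate 1/4, normal
# FECFRAME, no pilots, at the default 1 Mbaud.
plheader_syms = 90                        # PLHEADER length in symbols
fecframe_bits = 64800                     # normal FECFRAME size in bits
bits_per_sym = 2                          # QPSK
plframe_syms = plheader_syms + fecframe_bits // bits_per_sym
print(plframe_syms)                       # 32490 symbols

baud_rate = 1e6
frame_duration = plframe_syms / baud_rate  # seconds per PLFRAME
print(frame_duration * 1e3)               # ~32.49 ms

# Worst-case LDPC batching latency with AVX2 (32 frames per batch):
print(32 * frame_duration)                # ~1.04 s

# Throughput sanity check: Kbch = 16008 bits = 2001 bytes per frame.
frames_in_5s = int(5.0 / frame_duration)
print(frames_in_5s, frames_in_5s * 2001)  # 153 frames, 306153 bytes
```

Changing the baud rate or MODCOD in this sketch shows how the 32-frame batching latency scales: higher symbol rates or shorter frames shrink the wait proportionally.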
TSP

Finally, I'd recommend checking the various tsp options on the player side. See, e.g., this thread. That may make a more significant difference in latency.

I'd be curious to know how this investigation goes. Regards
Hi @aoweis , I will close this issue for now. Please feel free to reopen if you have more questions. |
Hi,
I opened the dvbs2_tx_rx.grc simulation file, chose a TS file for the file source, and ran it successfully.
I then split the flow into two separate files: one for transmitting and one for receiving, as I wanted to test it by actually transmitting over the air using two bladeRF 2.0 boards.
For the transmitter, I fed the virtual sink directly into the bladeRF sink (and removed the throttle block) and deleted all the other blocks. I set up the RF parameters, etc.
On another machine running Ubuntu 20.04, I set up the receiver by removing the transmitting blocks and connecting the bladeRF source to the DVB-S2 Rx Hier block. The RF parameters were also set up to match the frequency on the Tx side.
Both bladeRF boards are equipped with suitable antennas and bias tees and are located within 30 cm of each other.
I ran the receiver, and I can see the frequency spectrum normally. Once I start the transmitter, after a few seconds, GNU Radio on the Rx side just stops. I tried to run the GNU Radio Python file from the command line and found that a segmentation fault is returned, possibly once the receiver detects a specific signal coming from the transmitter. I ran the Python file from gdb and got the following output:
Thread 21 "symbol_sync_cc8" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffd48e9700 (LWP 3933)]
0x00007ffff1454da0 in ?? () from /lib/x86_64-linux-gnu/libvolk.so.2.4
(gdb) backtrace
#0 0x00007ffff1454da0 in () at /lib/x86_64-linux-gnu/libvolk.so.2.4
#1  0x00007fffe7b8a65e in gr::dvbs2rx::symbol_sync_cc_impl::loop(std::complex<float> const*, std::complex<float>, int, int) ()
at /usr/local/lib/x86_64-linux-gnu/libgnuradio-dvbs2rx.so.1.0.0
#2  0x00007fffe7b8b0a1 in gr::dvbs2rx::symbol_sync_cc_impl::general_work(int, std::vector<int, std::allocator<int> >&, std::vector<void const*, std::allocator<void const*> >&, std::vector<void*, std::allocator<void*> >&) ()
at /usr/local/lib/x86_64-linux-gnu/libgnuradio-dvbs2rx.so.1.0.0
#3 0x00007ffff18188cd in gr::block_executor::run_one_iteration() ()
at /lib/x86_64-linux-gnu/libgnuradio-runtime.so.3.10.5
#4  0x00007ffff18879b7 in gr::tpb_thread_body::tpb_thread_body(std::shared_ptr<gr::block>, std::shared_ptr<boost::barrier>, int) ()
at /lib/x86_64-linux-gnu/libgnuradio-runtime.so.3.10.5
#5 0x00007ffff186ec7d in ()
at /lib/x86_64-linux-gnu/libgnuradio-runtime.so.3.10.5
#6 0x00007ffff186fa57 in ()
at /lib/x86_64-linux-gnu/libgnuradio-runtime.so.3.10.5
#7 0x00007ffff10da43b in ()
at /lib/x86_64-linux-gnu/libboost_thread.so.1.71.0
#8 0x00007ffff7d97609 in start_thread (arg=)
at pthread_create.c:477
#9 0x00007ffff7ed1133 in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95