getcol() returning inconsistent results with large variable-shaped columns #130

Open
o-smirnov opened this issue Jan 23, 2018 · 52 comments

@o-smirnov

o-smirnov commented Jan 23, 2018

I am rubbing my eyes trying to figure out what I'm missing here, but it seems a genuinely puzzling bug. @gervandiepen, @tammojan, how can this be?

(This is on a pretty vanilla Ubuntu 16.04 server, using the latest packages from KERN-3.)

In [3]: tab=table("avg-1chan.ms")
Successful readonly open of default-locked table avg-1chan.ms: 27 columns, 1223048 rows
In [4]: d=tab.getcol("DATA")
In [5]: f=tab.getcol("FLAG")
In [6]: f[69418]
Out[6]: 
array([[False, False, False, False],
       [False, False, False, False],
       [False, False, False, False],
       ..., 
       [False, False, False, False],
       [False, False, False, False],
       [False, False, False, False]], dtype=bool)
In [7]: tab.getcol("FLAG",69418,1)
Out[7]: 
array([[[ True,  True,  True,  True],
        [ True,  True,  True,  True],
        [ True,  True,  True,  True],
        ..., 
        [ True,  True,  True,  True],
        [ True,  True,  True,  True],
        [ True,  True,  True,  True]]], dtype=bool)
In [8]: d[69418]
Out[8]: 
array([[ 0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j],
       [ 0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j],
       [ 0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j],
       ..., 
       [ 0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j],
       [ 0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j],
       [ 0.+0.j,  0.+0.j,  0.+0.j,  0.+0.j]], dtype=complex64)
In [9]: tab.getcol("DATA",69418,1)
Out[9]: 
array([[[  1.20964111e+03 +1.52587891e-05j,
          -2.37663841e+00 -1.49164677e+00j,
          -2.37663794e+00 +1.49164653e+00j,
           3.89369751e+02 +7.62939453e-06j],
        [  1.22079517e+03 +0.00000000e+00j,
          -8.47115636e-01 -9.04934645e-01j,
          -8.47115695e-01 +9.04934704e-01j,
           3.92823456e+02 -7.62939453e-06j],
        [  1.23377441e+03 +1.52587891e-05j,
           4.08445597e-02 -1.52157629e+00j,
           4.08445895e-02 +1.52157617e+00j,
           3.95710052e+02 +0.00000000e+00j],
        ..., 
        [  5.70287842e+02 +4.57763672e-05j,
           3.22431064e+00 +4.90659904e+00j,
           3.22431159e+00 -4.90659761e+00j,
           2.56262512e+02 +7.62939453e-06j],
        [  5.69278503e+02 +1.52587891e-05j,
           2.94665289e+00 +4.95356131e+00j,
           2.94665217e+00 -4.95356178e+00j,
           2.55841415e+02 -3.81469727e-06j],
        [  5.69348999e+02 +4.57763672e-05j,
           2.95559502e+00 +4.40746403e+00j,
           2.95559382e+00 -4.40746403e+00j,
           2.55576401e+02 +1.14440918e-05j]]], dtype=complex64)
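A brute-force way to pin down where the two read paths diverge is to compare the bulk getcol() result against row-at-a-time reads. A minimal sketch (the helper name and the reader lambda are mine, not python-casacore API; only getcol() itself is real):

```python
import numpy as np

def first_mismatch(bulk, read_row):
    """Index of the first row where the bulk-read array disagrees with
    a row-at-a-time read, or None if the two agree everywhere.

    bulk     -- array returned by a whole-column getcol()
    read_row -- callable returning row i on its own, e.g.
                lambda i: tab.getcol("FLAG", i, 1)[0]
    """
    for i in range(len(bulk)):
        if not np.array_equal(bulk[i], read_row(i)):
            return i
    return None
```

Against the table above, `first_mismatch(f, lambda i: tab.getcol("FLAG", i, 1)[0])` should report row 69418 or earlier (slow, but it localizes the discrepancy).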
@gijzelaerr added this to the 4.0 milestone Nov 7, 2018
@o-smirnov
Author

I'm seeing this bug again with a large MeerKAT MS.

[screenshot from 2019-02-13 15-02-56]

Somebody help! Am I hallucinating? If I can't trust getcol() to return data from the MS properly, then all our software is basically undermined.

@cyriltasse

I had seen something similar in the past - did you try with getcolnp?

@cyriltasse

Like here #38

@o-smirnov
Author

Yep, same bug. So yeah, getcol() is definitely broken with large columns. :( getcolnp() seems to work...

In my case, I have 5715090 rows and a [474,4] data column... and if I read the whole thing in at once, everything from row 1184533 onwards is null. But if I read it with getcolnp(), things seem to be fine.
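For reference, the getcolnp() workaround boils down to allocating the output array yourself. A sketch (getcol_via_np is my name for it; getcolnp() and nrows() are real python-casacore table methods, and the dtype and cell shape must match the column):

```python
import numpy as np

def getcol_via_np(tab, colname, dtype, cellshape):
    """Read a whole column through getcolnp() into a caller-allocated
    array, sidestepping getcol()'s own allocation path.

    dtype/cellshape must match the column, e.g. np.complex64 and
    (474, 4) for the DATA column described above.
    """
    out = np.empty((tab.nrows(),) + tuple(cellshape), dtype=dtype)
    tab.getcolnp(colname, out)
    return out
```

Note this assumes a fixed cell shape across the rows being read; for genuinely variable-shaped rows you would need per-row getcellslice-style access instead.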

@bennahugo take note.

@cyriltasse

@gervandiepen @tammojan what would be the problem for getcol to actually be a wrapper to getcolnp?

@bennahugo

Note the possible related bug affecting 4 correlation data I have reported earlier casacore/casacore#756

@bennahugo

bennahugo commented Feb 13, 2019

That may point to something deeper than python-casacore, since it affects CASA as well (the ms module specifically)

@tammojan
Contributor

I'm having a look at this; it would be useful to have a reproducible error. Preferably (but not necessarily) without sending enormous measurement sets over.

@tammojan
Contributor

@gervandiepen @tammojan what would be the problem for getcol to actually be a wrapper to getcolnp?

I don't know; good question. getcol is a wrapper around getValueFromTable; getcolnp is a wrapper around getValueFromTableVH (VH is for ValueHolder, which is the C++ side of a python object, in this case a numpy array). The only difference between the functions seems to be who allocates the data.

The wrapper @cyriltasse suggests could just allocate a python object. Interested in @gervandiepen's ideas.

@gervandiepen
Contributor

gervandiepen commented Feb 13, 2019 via email

@o-smirnov
Author

My column is 5715090 × 474 × 4 × 8 = 40.36 GB. The first 1184533 rows (8.36 GB) are read in properly, then I get nulls. (Neither number seems particularly round or significant...)

Numpy arrays work fine, I can read the whole column with getcolnp().

@gervandiepen
Contributor

gervandiepen commented Feb 14, 2019 via email

@gervandiepen
Contributor

gervandiepen commented Feb 14, 2019 via email

@o-smirnov
Author

Excellent, thanks!

@bennahugo

Fantastic, thanks @gervandiepen. @SpheMakh this is a serious bugfix affecting most Python-based software in the pipeline. Please build casacore:master from source.

@cyriltasse

@gervandiepen nice nice! :)

@gijzelaerr
Member

@gervandiepen I guess this has been solved?

@gervandiepen
Contributor

I've closed it.

@o-smirnov
Author

Gah, I've been bitten by this bug again. Someone remind me, when does the fixed release (3.1.1) get into KERN and/or KERN-dev? And can I pip install it in the meantime?

@gijzelaerr
Member

This will land in KERN-dev / KERN-6 on the next KERN release sprint which has not been scheduled yet.

@o-smirnov
Author

Please reopen! I see it rearing its ugly head again.

I'm running in a venv with python-casacore 3.3.1. Here's an example:

In [67]: tab0 = table("../msdir/1557766852_sdp_l0-kgb_cal.ms")                                                                                          
Successful readonly open of default-locked table ../msdir/1557766852_sdp_l0-kgb_cal.ms: 28 columns, 1176690 rows

In [68]: tab1 = tab0.query("FIELD_ID==3")                                                                                                               

In [69]: cd1 = tab1.getcol("CORRECTED_DATA")                                                                                                            

In [70]: cd1.shape                                                                                                                                      
Out[70]: (406260, 4096, 4)

# column all null from row ~150000 
In [72]: cd1[150000:].max()                                                                                                                             
Out[72]: 0j

# but not if I read a subset from row 150000 onwards
In [76]: tab1.getcol("CORRECTED_DATA", 150000, 1000).max()                                                                                              
Out[76]: (543.7675-1.1396092j)

@tammojan reopened this Dec 2, 2020
@bennahugo

Same issue with getcolnp @o-smirnov ?

@o-smirnov
Author

An excellent question, @bennahugo! The plot thickens:

In [80]: cd2=np.empty_like(cd1)                                                                                                                         

In [81]: cd2[:] = 9999                                                                                                                                  

In [82]: tab1.getcolnp("CORRECTED_DATA", cd2)                                                                                                           

In [83]: cd2[150000:].max()                                                                                                                             
Out[83]: (9999+0j)

In [84]: tab1.nrows()                                                                                                                                   
Out[84]: 406260

In [85]: cd2.shape                                                                                                                                      
Out[85]: (406260, 4096, 4)

Doesn't look like it reads in the whole array, does it?
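The sentinel trick above generalizes into a small detector worth keeping around. A sketch (the helper name is mine; it just reports the first row that still holds the pre-fill value after a read):

```python
import numpy as np

def first_unfilled_row(arr, sentinel):
    """First row whose every element still equals the sentinel value
    (i.e. the read never touched it), or None if all rows were filled.

    Mirrors the fill-with-9999-then-getcolnp() trick above.
    """
    per_row = np.all(arr == sentinel, axis=tuple(range(1, arr.ndim)))
    hits = np.flatnonzero(per_row)
    return int(hits[0]) if hits.size else None
```

Here `first_unfilled_row(cd2, 9999)` should report roughly row 150000 if getcolnp() is also stopping short on the query subtable.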

@bennahugo

OK, then it is different from the previously known bug, and seems to indicate that TaQL selection is broken. Can you confirm these are variable-shaped columns?

@o-smirnov
Author

Variable-shaped. But why do you think this is a TaQL issue and not another large-array issue? It could be that the only influence of the TaQL query above is in determining whether the resulting table falls in the [broken] large-array or the [working] small-array regime.

In [87]: tab0.getcoldesc("CORRECTED_DATA")                                                                                                              
Out[87]: 
{'valueType': 'complex',
 'dataManagerType': 'TiledShapeStMan',
 'dataManagerGroup': 'TiledCorrected',
 'option': 0,
 'maxlen': 0,
 'comment': 'The data column',
 'ndim': 2,
 '_c_order': True,
 'keywords': {'UNIT': 'Jy'}}

@bennahugo

bennahugo commented Dec 2, 2020 via email

@o-smirnov
Author

o-smirnov commented Dec 14, 2020

@bennahugo the MS is /net/simon/home/oms/projects/OldDevils/msdir/1557766852_sdp_l0-kgb_cal.ms.

Reproducing is dead simple:

In [5]: tab = table("1557766852_sdp_l0-kgb_cal.ms")
Successful readonly open of default-locked table ../msdir/1557766852_sdp_l0-kgb_cal.ms: 26 columns, 1176690 rows

In [6]: tab1 = tab.query("FIELD_ID==3")

In [7]: cd1 = tab1.getcol("CORRECTED_DATA")

In [8]: cd1[144116:].max()
Out[8]: 0j

In [9]: cd1[:144116].max()
Out[9]: (22463.154+2217.4185j)

In [10]: cd1.shape
Out[10]: (406260, 4096, 4)

In [12]: import numpy as np
In [13]: cd2 = np.full_like(cd1, 99999)
In [14]: tab1.getcolnp("CORRECTED_DATA", cd2)

In [16]: cd2[:144116].max()
Out[16]: (22463.154+2217.4185j)

In [17]: cd2[144116:].min()
Out[17]: (99999+0j)

@bennahugo

@o-smirnov Good news: I cannot reproduce this on another multifield dataset after compiling 3.1.1 from source. Bad news: I cannot reproduce this on another multifield dataset after compiling 3.1.1 from source...... I'm copying your data over to my environment now. Either there is a difference in the storage manager (which I don't think is the case - both measurement sets are MeerKAT datasets dumped through ska-sa/katdal), or you are somehow binding to a broken casacore....

@bennahugo

bennahugo commented Dec 17, 2020

Just an update. I can reproduce this on @o-smirnov's dataset with a custom-compiled casacore and bound python-casacore, both at 3.3.1. The target he is attempting to load is a field whose scans are interleaved with many other scans, hundreds of thousands of rows apart. I tried reproducing the same issue on another dataset by loading the first few scans and the last scan of a target field at the native 208kHz resolution, again separated by hundreds of thousands of rows. There I cannot reproduce the bug. This makes it extremely hard to track down, because I can't just give you a command to simulate a large dataset at the spectro-temporal resolution of MeerKAT with a script. I do note that I have seen very strange artifacts popping up when imaging full-resolution MeerKAT datasets with the calibrator scans interleaved, where we use getcolnp and TaQL to select the target rows. When I split out and image only the target, the problem seemingly disappears!

I will attempt to track down the bug with gdb and @o-smirnov's data - unfortunately it is 500 GiB in size so shifting it around is not ideal.

@bennahugo

bennahugo commented Dec 17, 2020

may be related to cyriltasse/DDFacet#536 - that was a multi-target observation, so this could show because of the large separation of target scans for a particular field

@bennahugo

I've rewritten some of the observation logic in ratt-ru/simms#58 to simulate a multi-field interleaved dataset with the scans dispersed round-robin, hundreds of thousands of rows apart, to get something closer to @o-smirnov's edge case. Alas, it still does not reproduce, so I'm not sure how to construct a reproducible example, as it seemingly affects this dataset and this dataset only - perhaps there was some planetary alignment..... We will need to make the dataset available for download somewhere - unfortunately the ftp is too small to host this.

@o-smirnov
Author

I'll try to extract increasingly smaller subsets of this MS to see if I can get a smaller reproducer MS. Thanks for verifying @bennahugo. It's an extremely worrying bug, but thankfully doesn't seem to occur that easily, eh?

@o-smirnov
Author

Here's another datapoint for you. If I read the column in chunks like so:

cd1[:] = 9999
rowchunk = 100000
nrows = tab1.nrows()
for row0 in range(0, nrows, rowchunk):
    nr = min(rowchunk, nrows - row0)   # last chunk may be short
    tab1.getcolnp("CORRECTED_DATA", cd1[row0:row0+nr], row0, nr)
print(rowchunk, "unfilled:", np.where(cd1 == 9999))

...then the failure mode depends on the chosen chunk size:

  • <=45000 rows: array is completely filled (with the correct data, I will optimistically assume)

  • 47500-50000 rows: array is unfilled from row 135420 onwards (Numerology alert! 135420 is exactly the number of rows per scan here!)

  • 100000 rows: array is unfilled from row 167232 onwards (not a meaningful number to me...)

I could keep up a binary search to find the exact chunksize at which the bug starts appearing (somewhere between 45000 and 47500), but there's no meaningful power of 2 or anything like that in there, so I don't see the point. (For reference, 32768 rows is 2^29 visibilities, and 65536 rows is 2^30 visibilities.)

For now, I will implement a workaround in cubical to read/write in chunks of 2^29 data points or less. Stay tuned.
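The chunked workaround can be sketched generically. The helper names safe_rowchunk and read_in_chunks are mine, not CubiCal or python-casacore API; only the getcolnp (colname, array, startrow, nrow) signature is python-casacore's:

```python
import numpy as np

def safe_rowchunk(cellshape, max_cells=2**29):
    """Largest row count whose total cell count stays at or below
    max_cells (2**29 visibilities, per the workaround above)."""
    return max(1, max_cells // int(np.prod(cellshape)))

def read_in_chunks(getcolnp, colname, out, rowchunk):
    """Fill `out` rowchunk rows at a time, handling the short last chunk.

    getcolnp -- e.g. tab1.getcolnp, with python-casacore's
                (colname, array, startrow, nrow) signature
    """
    nrows = len(out)
    for row0 in range(0, nrows, rowchunk):
        nr = min(rowchunk, nrows - row0)
        getcolnp(colname, out[row0:row0 + nr], row0, nr)
```

For the 4096-channel, 4-correlation column above, `safe_rowchunk((4096, 4))` gives 32768 rows per chunk, which is comfortably inside the regime that works.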

@bennahugo

bennahugo commented Dec 18, 2020

Interesting.... that means we may be dealing with a signed 32-bit integer indexer somewhere. Good job. My gdb is now hooking into gdbserver on the server, but it is painfully slow to compile things with symbol paths relative to the host gdb session. I will probably only get to it next year after my vacation.

@o-smirnov
Author

that means we may be dealing with a signed 32-bit integer indexer

Well, 32768 rows is 2^29 visibilities is 2^32 bytes, and that works ok... and some time before we get up to 2^30 visibilities and 2^33 bytes, it falls to the floor in a heap...

So yes, we're suspiciously close to the dreaded 2^31/2^32 boundaries. Yet strangely the "breaking point" is somewhere in between whole powers of 2, so what gives?
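Spelling out the arithmetic for the thresholds quoted above (4096 channels × 4 correlations per row, 8 bytes per complex64 visibility):

```python
CELLS_PER_ROW = 4096 * 4      # visibilities per row = 2**14
VIS_BYTES = 8                 # sizeof(complex64)

def n_vis(rows):
    return rows * CELLS_PER_ROW

def n_bytes(rows):
    return n_vis(rows) * VIS_BYTES

assert n_vis(32768) == 2**29 and n_bytes(32768) == 2**32   # still works
assert n_vis(65536) == 2**30 and n_bytes(65536) == 2**33   # broken regime
```

So a 32-bit byte counter alone doesn't explain it: 32768 rows already crosses 2^32 bytes and works, which is consistent with the breaking point sitting between whole powers of 2.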

@o-smirnov
Author

@tammojan, @gervandiepen please feel free to chime in. This is an existential bug after all. :)

o-smirnov added a commit to ratt-ru/CubiCal that referenced this issue Dec 18, 2020
@o-smirnov
Author

OK, "good" news is, I averaged the MS down to 1024 channels (factor of 4), and I can still reproduce the bug.

Interestingly, the "breaking point" (in terms of the number of rows) has shifted, it is not simply x4 larger. The code above now works with rowchunks of 100,000, and breaks with rowchunks of 160,000.

But at least the "reproducer" MS is now only 73 GB! So, it is eminently practical to ship it to anyone interested. (@bennahugo: you can find it on /net/simon/home/oms/projects/OldDevils/cc-polcal/tmp-1k.ms/).

@gervandiepen
Contributor

I'm looking into the problem, but my time is limited. Too often my grandchildren require my attention, which is pretty nice :-)

@o-smirnov
Author

I'm sure they're a lot more fun than we are. 😀

Can I bribe them with some ice cream to take an interest in the table system?

@gervandiepen
Contributor

gervandiepen commented Dec 21, 2020 via email

@JSKenyon

A very random contribution, but have you tried mucking with the cache size and seeing if it changes anything @o-smirnov?

@o-smirnov
Author

How would I do that exactly? You mean setdmprop?

@JSKenyon

ms.setmaxcachesize(COL_NAME, SIZE)

Do not set it to zero, as that means no limit. Setting to 1 (one byte, I believe) would be an interesting experiment. Actually, maybe unlimited would be interesting if you have enough RAM.

@o-smirnov
Author

@Athanaseus has been bitten by this as well.

JSKenyon added a commit to ratt-ru/CubiCal that referenced this issue May 12, 2021
* partial fix for #93. Should allow for one spw at a time

* fixes #93. Adds workaround for casacore/python-casacore#130

* added future-fstrings

* correctly named future-fstrings

* ...and install it for all pythons

* back to mainstream sharedarray

* lowered message verbosity

* droppping support for python<3.6

* Remove two uses of future_fstrings.

Co-authored-by: JSKenyon <jskenyon007@gmail.com>
Co-authored-by: JSKenyon <jonosken@gmail.com>
o-smirnov added a commit to ratt-ru/CubiCal that referenced this issue May 12, 2021
JSKenyon added a commit to ratt-ru/CubiCal that referenced this issue Apr 13, 2023
o-smirnov added a commit to ratt-ru/CubiCal that referenced this issue Feb 12, 2024
sstansill added a commit to casangi/xradio that referenced this issue Apr 23, 2024
Jan-Willem added a commit to casangi/xradio that referenced this issue May 16, 2024