
Enable Tikhonov regularization in BASEX #225

Closed · DanHickstein opened this issue Aug 30, 2018 · 28 comments

@DanHickstein (Member) commented Aug 30, 2018

In a discussion with Mikhail Ryazanov, I realized that we have disabled the Tikhonov regularization factor in the basex algorithm. This occurs on line 184 of basex.py:

    q_vert, q_horz = 0, 0  # No Tikhonov regularization

As I recall, we originally thought that this factor didn't significantly affect the output. However, I think that this parameter can have a large effect on the transform in the presence of noise, greatly suppressing the noise at small r-values.

You can see here the difference between q_vert, q_horz = 0, 0 (left) and q_vert, q_horz = 100, 100 (right).

[image: transforms with q_vert, q_horz = 0, 0 (left) vs. q_vert, q_horz = 100, 100 (right)]

I think that many users of the old BASEX.exe enjoyed this smoothing, and this is one of the reasons that BASEX became so popular. So, I think that we need to re-activate this feature.

However, in our current implementation, the "basis set" that is saved (pre-cached) already has the Tikhonov regularization factor included. So, we would either need to change the naming convention of the saved files to include the Tikhonov factor in the file name (in addition to the image size), or we would need to slightly re-work the algorithm to save the basis sets at an earlier stage and include the Tikhonov factor in the computation of the transform (instead of in the computation of the basis sets). This would result in somewhat slower transform times, but would allow users the most flexibility to try several Tikhonov regularization factors. We should investigate how much this slows down the transform before making a decision.
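For reference, here is a minimal sketch of the second option, assuming the regularized transform matrix takes the standard Tikhonov form A(q) = (X Xᵀ + q·I)⁻¹ X with X the projected-basis matrix (the names are illustrative, not the current basex.py variables):

```python
import numpy as np

def transform_matrix(X, q):
    """Regularized transform matrix A(q) = (X X^T + q I)^(-1) X,
    built from the unregularized projected basis X and Tikhonov factor q."""
    n = X.shape[0]
    return np.linalg.solve(X @ X.T + q * np.eye(n), X)
```

Caching X before this step would mean that trying a different q only repeats this one solve, not the whole basis generation.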

Additionally, Mikhail suggested that we look into using one of the linear algebra "solve" functions instead of matrix inversion and other matrix algebra steps. I think that the appropriate numpy function is this one:
https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.linalg.solve.html

DanHickstein self-assigned this Aug 30, 2018
DanHickstein added this to the Version 0.9 milestone Aug 30, 2018
@MikhailRyazanov (Collaborator)

I mean that in principle x = numpy.linalg.solve(A, b) should work faster than Ai = numpy.linalg.inv(A); x = numpy.dot(Ai, b). But it works only for A x = b, while for x A = b you will need to transpose everything before and after (although transposes should be cheap). And for repetitive applications (isn't it the main goal of this project?) precomputing the inverse matrix (or an LU decomposition) once and then applying it to each image is still better.
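A minimal sketch of the comparison (with the transposition trick for the row-vector system):

```python
import numpy as np

A = np.random.rand(1000, 1000)
b = np.random.rand(1000)

x1 = np.linalg.solve(A, b)         # solves A x = b directly
x2 = np.linalg.inv(A).dot(b)       # explicit inverse: slower, less accurate

# for the "row" system x A = b, transpose before and after:
x3 = np.linalg.solve(A.T, b.T).T   # since x A = b  <=>  A^T x^T = b^T
```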

Regarding caching, it seems that you load the matrices from disk every time. Have you considered keeping them in memory (as globals in abel.basex?) after the first load? That way, a reasonable approach would be to store the basis matrices on disk "as is", and upon the first invocation calculate the regularized inverses and keep them in memory for further invocations.

@DanHickstein (Member, Author)

Hi Mikhail! Welcome to PyAbel :)

> In principle x = numpy.linalg.solve(A, b) should work faster than Ai = numpy.linalg.inv(A); x = numpy.dot(Ai, b).

This is an interesting observation! I'm not enough of a computer scientist or mathematician to argue either way about this from first principles. But I wrote some simple code to compare x = numpy.linalg.solve(A, b) with Ai = numpy.linalg.inv(A); x = numpy.dot(Ai, b) for randomly generated matrices A and b (see this gist).
The times look like this:

[image: timing comparison of numpy.linalg.solve vs. inv + dot]

So, you are correct! np.linalg.solve beats the alternative. But it still takes about 70% of the time of the inv + dot approach, so we're not talking about an orders-of-magnitude improvement.

> Regarding caching, it seems that you load the matrices from disk every time. Have you considered keeping them in memory (as globals in abel.basex?) after the first load?

This is a really good point! I think that this will generate some considerable discussion, so I opened a new issue: #226

Back to the issue of including the Tikhonov regularization:

> a reasonable approach would be to store the basis matrices on disk "as is", and upon the first invocation calculate the regularized inverses and keep them in memory for further invocations.

Okay, if I understand correctly, you are proposing a hybrid approach: we save the basis sets to disk before calculating the regularization or inversion (different from what we are doing now). Then, we load the basis set, calculate whatever regularization the user requires, and cache this in RAM (as a variable). Then, if a repeated transform with the same regularization factor is required, we can load the matrices from RAM.

This seems like a reasonable approach, but it has the disadvantage that, if someone wants to re-run their program from scratch, they lose the regularized matrices. And, it would take quite a bit of additional effort to implement. At this point, the easiest thing to implement would be to save the basis sets before including the regularization (like you are proposing), and not worry about caching to RAM. This would create a slower transform, but maybe it's an acceptable trade-off.
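For concreteness, the hybrid flow might look something like this (a sketch only; the cache key, file name, and function name are hypothetical):

```python
import numpy as np

_basis_cache = {}  # RAM cache: (n, reg) -> regularized transform matrix

def get_transform_matrix(n, reg, basis_dir='.'):
    key = (n, reg)
    if key not in _basis_cache:
        # the disk cache stores the basis *before* regularization/inversion
        X = np.load('%s/basex_basis_%i.npy' % (basis_dir, n))
        # regularize and invert once, then keep the result in RAM
        _basis_cache[key] = np.linalg.solve(X @ X.T + reg * np.eye(n), X)
    return _basis_cache[key]
```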

@MikhailRyazanov (Collaborator)

OK, I've just opened a pull request (#227) with this stuff implemented.

Here is a speed comparison of abel.Transform(...).transform (end-users' perspective, so to say):
[image: speed comparison]
Dashed lines here are the first call, solid with symbols are the sustained performance (average of 10 calls after the first).

Caching, as expected, does not help much for practically important cases, but was implemented mostly to avoid recalculating the transform matrices.

"reg var" means that the regularization parameter was changed (and thus the regularization reapplied) every time. When it remains constant, the sustained speed is better (more or less equal to "cached", usually better than "old"):
[image: speed comparison, constant regularization parameter]
I do not know why the speed depends on the regularization parameter. And especially why for larger images larger regularization parameters actually yield a faster transform! Maybe, scipy does not like ill-conditioned matrices. :–)

In any case, all speed differences are within a factor of 2:
[image: relative speed differences]
However, here even the first run uses only 1 loaded/calculated set of matrices and 3 cached sets, so the "raw" speed difference could be ~3 times larger. (I did not figure out how to run the included benchmarks without generating basis sets every time...)

@MikhailRyazanov (Collaborator)

There might be some more improvements. I did not look closely at the implementation, but I have noticed that M_vert is filled with zeros and never used. If this is not an error, we probably don't need to store it.
(And I would say that the vertical transform is not needed at all, as it makes no sense and probably amplifies noise.)

@DanHickstein (Member, Author)

Thanks for performing these comparisons! Honestly, if we are only talking about factors of 2 in speed, then I would say that we can potentially trade speed improvements for improvements in the readability/usability/hackability of the code.

That's interesting about M_vert being zeros! Yes, I don't quite understand what a "vertical transform" would do. Certainly, if you have large Tikhonov factors, weird things can happen in the vertical direction, and this seems inappropriate. We should get the same results applying the Abel transform row-by-row as we would applying it to the entire image.

@stggh (Collaborator) commented Sep 10, 2018

Still travelling.

The concept of basis memory storage is useful and interesting. Thanks for your work on this @MikhailRyazanov .

The implementation is really one of calling the basis-generation function get_bs_basex_cached() once and then the transform basex_core_transform() multiple times. In principle, this could be done by the user; however, I agree, it would be nice to perform the basis check automatically.

Note, ideally, the get_bs_basex() call should be moved into the common abel.tools.basis.py file.

@DanHickstein's idea of a global basis variable is a good one, preferably passed to each method, rather than accessed via global, just as basis_dir is. If the variable is not None and the right size(s), use it, bypassing the call to abel.tools.basis.get_basis_cached().

basis_dir could be repurposed to be either the basis tuple, as returned from abel.tools.get_bs_cached(method), or a string specifying the basis directory? Note, additional parameter information may need to be included in the tuple to ensure the correct basis is used (particularly for linbasex).

@MikhailRyazanov (Collaborator)

@DanHickstein, with no RAM caching it is indeed ~2 times slower:
[image: relative speed without RAM caching]
It might be not a big deal for small images, but on my computer (i5-4278U @ 2.60 GHz, DDR3-1600) processing a 1281×1281 image (a reasonable size for VMI) takes ~10 s with caching and ~14 s without, which is a noticeable difference for end users.
By the way, without the vertical transform it is ~5 s (about twice as fast, as expected). I need to look at the original BASEX code to see what it actually does and what the results are. (By the way, BASEX transforms it in 1–2 s, but it does only 1 quadrant.)

@MikhailRyazanov (Collaborator)

@stggh, I do not know what the use cases for this project are or how the architecture decisions were made, but I would probably have made Transform an abstract base class representing the operation rather than its results, and then derived from it actual classes implementing specific methods. Then such a "transform" object could be carried around, with all settings and caches embedded, and applied to different images. If for some reason different transforms are needed, they can be done by different objects, each holding its appropriate settings and caches.
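As an illustrative sketch of that design (the names and the stand-in matrix are hypothetical):

```python
from abc import ABC, abstractmethod
import numpy as np

class AbelTransform(ABC):
    """The operation itself; settings and caches live in the object."""
    @abstractmethod
    def __call__(self, image):
        ...

class BasexTransform(AbelTransform):
    def __init__(self, n, reg=0.0):
        self.reg = reg
        self._A = np.eye(n)  # stand-in for the real cached transform matrix

    def __call__(self, image):
        return image @ self._A  # apply the cached matrix to each image

# one object, carried around and applied to many images:
tr = BasexTransform(n=512, reg=100.0)
out1 = tr(np.random.rand(512, 512))
out2 = tr(np.random.rand(512, 512))
```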

But you probably do not want to refactor everything now, so do whatever fits in the current conventions or tell me how exactly it should look.

@MikhailRyazanov (Collaborator)

In principle, automatic caching can be done without globals, using some mad skills. _M and _LR can even be changed to "normal" user-suppliable arguments (the cached "default" values will still be accessible through inspect.getargspec()). And maybe even some decorators to do this already exist... But I think that using module variables is more understandable, and that a proper solution would be encapsulation in a class.
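Such a decorator does exist in the standard library: functools.lru_cache, keyed on the basis size and regularization parameter, would cache without any module globals. A minimal sketch, with an identity stand-in for the real basis:

```python
from functools import lru_cache
import numpy as np

@lru_cache(maxsize=4)
def _regularized_matrix(n, reg):
    # repeated calls with the same (n, reg) return the cached array
    X = np.eye(n)  # stand-in for loading/computing the real basis
    return np.linalg.solve(X @ X.T + reg * np.eye(n), X)
```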

@DanHickstein (Member, Author)

> with no RAM caching it is indeed ~2 times slower

Well, if there aren't many disadvantages to caching, then we may as well do it.

> I do not know what the use cases for this project are or how the architecture decisions were made

Unless @rth was involved, any architecture decisions were made by myself and @stggh, two experimentalists who don't know all that much about software engineering. So, I'm sure that nothing is perfect :)

Our main motivation for making Transform a class was so that we could access the various outputs that could be returned using nice "dot notation". Your idea about using classes to ease the implementation of new transform methods sounds very interesting. But yeah, it sounds like it might be some work to implement, so we'd want to do a careful pro/con analysis.

If the globals stay confined to the abel. namespace, then I think it's fine to use them.

@MikhailRyazanov (Collaborator)

I've gone through the BASEX version 2.0 (basex.exe) sources (Borland C++) to check what it actually does and how. Here is what I've learned, for future reference:

  1. The code is written poorly. There are no comments except very few in Main.h, which are also misleading.

  2. It does all calculations in single precision (C++ float).

  3. The matrices are transposed with respect to the article, and thus are multiplied backwards. The article uses the first index for x and r, and the second for z. The code has them reversed, so horizontal lines in the raw image correspond to matrix rows. But the matrices are stored as concatenated columns, so image matrices undergo "transposition" for input/output.

  4. The basis sets and other matrices were apparently generated in Matlab, since there is no C++ code for them. But there is no Matlab code either.

  5. The correspondence (hopefully, correct) between the article and code notations and the files:

| Article | Code | Files | Comment |
| --- | --- | --- | --- |
| | `initial_image` | | Raw input. Matrix rows = horizontal image lines |
| | `initial_raw` | | Input padded with zeros to 1001×1001 size, centered |
| Pᵀ | `raw` | | `initial_raw` folded left–right; for "2D" also top–bottom |
| I(r, z) | `raster`, `reconst_image` | | First index is z, second is r |
| Cᵀ | `Ci` | | Coefficients matrix |
| Zᵀ | `Mc` | narrow_c.bin, broad_c.bin | Image basis |
| Xᵀ | `M` | narrow.bin, broad.bin | Projected basis |
| (X Xᵀ)ᵀ | `mm` | mm.bin, mm_br.bin | |
| (Z⁻¹)ᵀ = Bᵀ | `a` | a.bin, a_br.bin | Vertical-transform matrix |
| [(X⁻¹)ᵀ]ᵀ = A(0)ᵀ | `b` | b.bin, b_br.bin | Horizontal-transform matrix without regularization |
| [(X Xᵀ + 20·I)⁻¹X]ᵀ = A(20)ᵀ | `b2` | b2.bin, b2_br.bin | Horizontal-transform matrix with regularization parameter = 20 |
  6. All calculations are performed with folded images (1 quadrant for "2D", one half for "1D"), so the matrices above correspond to 1 quadrant (no vertical transform is done for "1D").

  7. The regularization parameter (I'll call it "reg") corresponds to q₁².

  8. There are precomputed regularized matrices, which are used when reg = 20.0 is requested, although this does not seem to increase the speed noticeably.
    (This reg = 20 is also different from q₁² = 50 or 5 in the article.)

  9. The regularization is never applied to the vertical transform. This makes the vertical-transform computations quite pointless, since mathematically the whole chain raw → Ci → reconst_image is an identity transform (vertically).
    The speed and anisotropy distributions for "2D" are calculated from the coefficients matrix (and the explicit basis functions), but this is unlikely to be any better than using the reconstructed image, especially for basis functions broader than 1 pixel. The distributions for "1D" are calculated from the reconstructed image, so Ci is not used.

@MikhailRyazanov (Collaborator)

Important conclusions from my previous comment are:

  1. No vertical regularization should be used.
  2. The vertical transform is not needed at all.
  3. My reg does correspond to the "regularization parameter" in BASEX.

So I've done 1. and 2., plus combined the horizontal transform into a single matrix. The speed has improved a lot:
[image: relative speed after the changes]
The results now are not bitwise equal to the old ones, since the operations are performed differently, but they agree within numerical precision: the maximal intensity discrepancy is ~10⁻¹⁴ of the maximal image intensity.

But, as @DanHickstein has pointed out, this implementation transforms full-width images. This does not give any advantages, but only makes it slower. I think we should switch to a half-width transform, like in BASEX.

@DanHickstein (Member, Author)

Well done, Mikhail! It is great to know exactly how Basex.exe operates, and I'm very glad that you have figured out the issues with the vertical transform and vertical regularization.

At this point, I think that it is clear that you now have an excellent understanding of the basex code. You should feel free to edit the code as extensively as you like. I have never been completely happy with our basex implementation, because the old variable names and weird structures were carried over from the legacy Matlab code, when it would be much better to directly connect the variable names to the published paper.

A few responses to your specific points:

> The code is written poorly. There are no comments except very few in Main.h, which are also misleading.

Ha! Well, the current basex.py in PyAbel could probably use more comments as well.

> The matrices are transposed with respect to the article, and thus are multiplied backwards.

Gosh, that does seem really confusing!

> The basis sets and other matrices were apparently generated in Matlab, since there is no C++ code for them. But there is no Matlab code either.

As far as I can tell, the basis sets were generated using Matlab code. You actually sent me this Matlab code, as part of a big folder of BASEX-related Matlab scripts back in 2011, which spawned PyAbel :). I believe that the relevant script is "basis2.m".

> The correspondence (hopefully, correct) between the article and code notations and the files

This is great! Should we consider changing our notation to better match the manuscript?

> There are precomputed regularized matrices, which are used when reg = 20.0 is requested, although this does not seem to increase the speed noticeably.

So, with basex.exe, any regularization factor is possible, but reg=20.0 offers enhanced speeds? This is indeed very strange.

> But, as @DanHickstein has pointed out, this implementation transforms full-width images. This does not give any advantages, but only makes it slower. I think we should switch to a half-width transform, like in BASEX.

Oh, are the BASEX transforms still performed on the full image? Yes, it would be great to switch to the half-width image if possible, since this best matches the other transform methods and should provide the best performance.

@MikhailRyazanov (Collaborator)

Thanks, @DanHickstein! I'll then proceed to converting everything in abel.basex to half-width.

By the way, I was thinking about extracting basis sets for smaller images from a larger cached basis, if it is available, which should avoid this ridiculously long basis computation in most cases. I suspect that this can be done simply by slicing the array (discarding the outer part), but even if there are fringe effects, they could be treated much faster than a full basis computation. Moving to half-width operation should also make this easier.
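The slicing idea in a sketch, assuming the outer part of the cached basis can simply be discarded (the file name is hypothetical):

```python
import numpy as np

X_large = np.load('basex_basis_1000.npy')  # cached basis for a large image

n = 513                    # half-width needed for a smaller image
X_small = X_large[:n, :n]  # keep the inner part, discard the rest
```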

> Should we consider changing our notation to better match the manuscript?

I would like to, but there are two problems. First, that transposition (the article convention is bad). Second, I don't like using Z for the image basis (in the r direction!). I would use B and Bp for the basis and its projection, but B is already used in the article for a different purpose. So my thought is to leave the current notation (it is at least compatible with the Matlab and C++ implementations) and explain in the comments what is what. If there are no better ideas...

> As far as I can tell, the basis sets were generated using Matlab code. ... I believe that the relevant script is "basis2.m".

Partially. That script saves them as text files and does not produce mm, a, b and b2. The closest candidate is "makebasis1.m" (using the results of "basis2.m"), but it is still not what was used.

@MikhailRyazanov (Collaborator)

And a long-promised comparison of regularization effects.

Images ([−5, 5] scale):
[image: reconstructed images for various reg values]

Speed distributions:
[image: speed distributions for various reg values]

As can be seen, reg ~ 200 (in this case) almost completely removes the axial noise from the image and has a negligible effect on the extracted distribution. At reg ≳ 1000 the image starts to blur.

While regularization reduces the image noise, it almost does not reduce the speed noise because:

  1. The largest image noise is near the axis, where sin θ → 0.
  2. The noise has nearly zero average, so its integral is small.

That is, "denoising" the speed distributions requires relatively strong regularization (another example with reg = 1000 can be seen in my dissertation, Fig. 3.10), but for those who needs the images themselves, rather than their integral characteristics, even moderate regularization is a big advantage.

@DanHickstein (Member, Author)

> Thanks, @DanHickstein! I'll then proceed to converting everything in abel.basex to half-width.

Well, that's just great news!!

> By the way, I was thinking about extracting basis sets for smaller images from a larger cached basis, if it is available, which should avoid this ridiculously long basis computation in most cases.

Interesting! If that works, then it would be fantastic. We could simply generate the 1000x1000 basis set once and then crop all smaller sets from it. Another crazy idea: now that we have scrapped the vertical transform, does that mean that we can generate the basis set for a 1D transform and then somehow copy it into a larger matrix?

> I would use B and Bp for the basis and its projection, but B is already used in the article for a different purpose. So my thought is to leave the current notation (it is at least compatible with the Matlab and C++ implementations) and explain in the comments what is what. If there are no better ideas...

I suppose the alternative would be to write our own explanation of the math and follow those conventions. At a minimum, as you say, we should explain in the comments what everything is.

> The closest candidate is "makebasis1.m" (using the results of "basis2.m"), but it is still not what was used.

Oh, that's interesting. Kinda sketchy that the origin of the calculated basis sets is completely unknown for basex.exe :)

> And a long-promised comparison of regularization effects.

Fantastic! What colormap did you use?

And this is fairly convincing evidence that strong noise suppression is possible with minimal broadening of sharp features. This is clearly a big advantage of the BASEX method, and I'm really happy that we are finally getting this feature into PyAbel. Thank you so much!

@stggh (Collaborator) commented Sep 19, 2018

Back from my travels :-(

Very nice coding and some clever ideas @MikhailRyazanov!

My only concern (discussion point) is that global basex basis arrays remain even when the basex method is no longer used. This may not play well with the other Abel transform methods operating on large image sizes.

BTW, long is not part of Python 3:

    elif isinstance(nbf, (int, long)):

Any chance that you can implement the anisotropy parameter calculation that was available in the original basex program?

> extracting basis sets for smaller images from a larger cached basis

This is neat. I think this approach may also apply to the Dasch-method basis calculation.

@MikhailRyazanov (Collaborator)

> My only concern (discussion point) is that global basex basis arrays remain even when the basex method is no longer used.

Well, if we had Transform creating an object, they would be killed with it. ;–)

With the half-width transform, a 1023×1023 image requires three 512×512 arrays, which is 6 MiB + Python/numpy overhead. I think, this is not a big problem. But I can add a cleanup=False parameter to abel.Transform() and abel.basex.get_bs_basex_cached(), such that those who care could specify cleanup=True to get rid of these arrays. Then other methods that need caching can honor it as well.
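A sketch of how the cleanup parameter could work (hypothetical simplified signature, with an identity stand-in for the real basis computation):

```python
import numpy as np

_prm = None  # module-level cache of the transform matrices

def get_bs_basex_cached(n, reg=0.0, cleanup=False):
    global _prm
    if cleanup:
        _prm = None  # free the cached arrays
        return None
    if _prm is None:
        X = np.eye(n)  # stand-in for the real basis load/computation
        _prm = np.linalg.solve(X @ X.T + reg * np.eye(n), X)
    return _prm
```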

> BTW, long is not part of Python 3

Do we actually need that stuff? Non-"auto" bases never worked, I do not have plans to implement them (I do not really know the details, and "broad" bases are relatively useless), and it is unlikely that anybody will soon. The removal of the vertical transform has already changed the calling conventions (not the default ones, but it broke some tests), so maybe we should ignore nbf also (completely remove it, or print a "not used" warning if compatibility is needed)?

> Any chance that you can implement the anisotropy parameter calculation that was available in the original basex program?

It was painfully slow (~40 seconds on my computer). I think, calculating it from the reconstructed image using a common procedure is better in every sense.

@DanHickstein (Member, Author)

> With the half-width transform, a 1023×1023 image requires three 512×512 arrays, which is 6 MiB + Python/numpy overhead. I think, this is not a big problem. But I can add a cleanup=False parameter to abel.Transform() and abel.basex.get_bs_basex_cached(), such that those who care could specify cleanup=True to get rid of these arrays. Then other methods that need caching can honor it as well.

Yeah, I agree that it's probably not a huge problem, except if people are running benchmarks or something. I like the idea with cleanup=False.

> so maybe we should ignore nbf also (completely remove it, or print a "not used" warning if compatibility is needed)?

Wider basis sets worked in basex.exe, right? This seems like it might be useful for some people, so I am holding out hope that this eventually gets implemented. But yes, in its current state, a warning or error should be raised.

> It was painfully slow (~40 seconds on my computer). I think, calculating it from the reconstructed image using a common procedure is better in every sense.

Yeah, this seems fine to me as well. @stggh, in the case of linbasex, it makes sense to extract the anisotropy directly from the basis functions, since the broad functions closely resemble the photoelectron/photoion distributions. I don't think that we would see the same advantage when using narrow Gaussian functions.

@stggh (Collaborator) commented Sep 19, 2018

> if we had Transform creating an object

It does ;-). But I get your point. We decided to keep each PyAbel transform as a function to facilitate independent (of PyAbel) use. A wrapper class would be nice, but this is not relevant for your excellent(*) basex/PR improvements.

> cleanup=True to get rid of these arrays

A good solution. I often transform 8192x8192 pixel images from good quality (signal/noise) experimental photoelectron measurements.

> reconstructed image using a common procedure is better

Photoelectron people (aka Dan Neumark at UCB) like to publish continuous intensity × anisotropy-parameter plots. In PyAbel this is only available using linbasex, a method which, at the moment, produces a poorer-resolution transform than the other transform methods, e.g. example_anisotropy_parameter.py:

[image: anisotropy-parameter example plot]

One can cheat the product by using a higher-resolution intensity from another method. However, this is not ideal. I agree that, in the end, extracting anisotropy parameters from the reconstructed image is better.

One minor basex issue: the first two pixels (columns) of the transform are bad, e.g. 1-D profile 8:

[image: basex and three_point transforms of profile 8]

```python
import abel
import matplotlib.pyplot as plt

# analytical transform pair: profile 8 on a 101-point grid
f = abel.tools.analytical.TransformPair(101, 8)

# inverse Abel transforms of the analytical projection
b = abel.basex.basex_transform(f.abel, dr=f.dr)
t = abel.dasch.three_point_transform(f.abel, dr=f.dr)

plt.plot(f.r, f.func, label='source')
plt.plot(f.r, b, 'o', label='basex')
plt.plot(f.r, t, label='three_point')
plt.legend()
plt.title(f.label)

plt.savefig('profile8.png', dpi=75)
plt.show()
```

PS: (*) Now that basex is fast, and will be faster with 1/2-image processing, it nicely complements three_point. Here is a higher-resolution (4096x4096 pixel) transform of the O2− photoelectron spectrum:

[image: basex vs. three_point transforms of the O2− photoelectron spectrum]

@MikhailRyazanov (Collaborator)

I have updated #227, switching to the half-width approach.

Now even for a full image (4 independent quadrants) it works faster than the BASEX program (≲1 s on my computer) and reproduces its results for symmetrized images (within that mystic scale factor).

I think, I'm satisfied with this regularization stuff now, so could you please review #227 and run all the tests? (I'm very interested in the efficiency comparison.) Then it would be nice to tidy up and merge it, and close this issue.

I'll then handle cache cleanup and basis cropping in #226 separately.

I also have the forward transform, but that's another story...

@MikhailRyazanov (Collaborator)

@DanHickstein,

> Wider basis sets worked in basex.exe, right? This seems like it might be useful for some people, so I am holding out hope that this eventually gets implemented. But yes, in its current state, a warning or error should be raised.

There was only one "wider basis set", and yes, it worked. But I do not know whether anybody used it. I suspect that using wider or narrower basis sets (with regularization, you can have more basis functions than pixels) is equivalent to resampling the images. Anyway, I'm leaving nbf as it is now. Maybe long should be removed (@stggh should know better).

@stggh,

> I often transform 8192x8192 pixel images

If there are such brave people to wait for the basis generation... :–)

> ... produces a poorer resolution ...

In principle, images must have no features sharper than ~2 px (if we consider that Abel = Fourier⁻¹ Hankel, then this is a requirement of the sampling theorem). If linbasex underperforms, and there is no general method yet, I would rather implement it than what BASEX had.

> One minor basex issue: the first two pixels (columns) of the transform are bad.

Probably due to something in the basis and its projection. Since the axial pixel is actually a half-pixel, it needs a special treatment, and I'm not sure that this was done correctly. Perhaps, open another issue for implementing nbf ≠ n and investigating r = 0?

@MikhailRyazanov (Collaborator)

> > And a long-promised comparison of regularization effects.
>
> Fantastic! What colormap did you use?

'seismic':
[image: 'seismic' colormap]

@DanHickstein (Member, Author)

> I also have the forward transform, but that's another story...

I'm interested in this story :)

> I suspect that using wider or narrower basis sets (with regularization, you can have more basis functions than pixels) is equivalent to resampling the images.

Yeah, my impression is that using a wider basis set was essentially equivalent to blurring the image before processing, but maybe there is some advantage to the larger basis functions.

@MikhailRyazanov (Collaborator)

> > I also have the forward transform, but that's another story...
>
> I'm interested in this story :)

Basically, exchanging M and Mc produces the forward transform. I just need to look at how the forward/inverse selection is done in the other methods and code it accordingly.
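Schematically, something like this (only the swap is the point; the matrix algebra around it stays as before):

```python
def basex_matrices(M, Mc, direction='inverse'):
    # exchanging the projected basis M and the image basis Mc
    # flips the direction of the transform
    if direction == 'forward':
        M, Mc = Mc, M
    return M, Mc  # ...then build the transform matrix from these as usual
```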

But let's finish with this issue first.

@MikhailRyazanov (Collaborator)

@DanHickstein, thanks for merging #227!

Perhaps, this issue can be closed now?

@DanHickstein (Member, Author)

Sounds good. Mission accomplished!

@DanHickstein (Member, Author)

And a big THANK YOU to @MikhailRyazanov for implementing this!
