
Improved mapmri implementation with laplacian regularization and new … #740

Closed
wants to merge 73 commits into nipy:master from AthenaEPI:athena_mapmri

6 participants
@rutgerfick (Contributor) commented Oct 19, 2015

…q-space scalar indices: q-space inverse variance and mean squared displacement. Also includes generalized cross-validation to find optimal regularization weights. Other convenience functions have been added.

@arokem (Member) commented Oct 19, 2015

Nice one! @maurozucchelli : would this conflict in any way with the work you did on #733? I think we should probably merge that one first, before harmonizing with this. What do you both think?

@rutgerfick (Contributor) commented Oct 19, 2015

I took #733 as the 'dipy' mapmri template where I added my own code for laplacian regularization and other scalar indices. So there is no need to harmonize that one first.

@rutgerfick (Contributor) commented Oct 19, 2015

Ah sorry, I didn't see there were also some changes to a mapmri test in #733. In any case, I don't think the mapmri.py code here conflicts with #733.

@arokem (Member) commented Oct 19, 2015

OK - let's still do this one-by-one, just to keep things nice and tidy. I suggest we still merge #733 first (once the example is done there). Once that's done, you can rebase this one to absorb the changes to the tests, and then maybe also add some of these maps to the example.

A few comments for now:

  1. Testing: you will need to write unit tests for all these new methods. Try following the examples in test_mapmri.py. In particular, I see that you have found a bug in the ordering of the tensor eigenvectors. In addition to fixing the bug (great catch!), it is necessary to add a test that would fail with this bug, so that it doesn't somehow creep back into our code-base (bugs have the tendency to do that).
  2. Do I understand correctly your comments that this implementation no longer requires cvx? That would be excellent news! If that is indeed the case, you should remove the import of cvx at the top of the file, and you can scrub away all the warnings about cvx, GPL license and all that (lines 225-231 in your implementation). Well done on that!
@Garyfallidis (Member) commented Oct 19, 2015

Hi @rutgerfick, very happy to see progress here. Can't wait to start playing with this.
Before removing cvxopt we should see some timings and some result comparisons here. Is there any specific paper or application showing that your approach does a better job?
Also @maurozucchelli, I hope you are looking at this PR. We need your eyes on this too.

@arokem (Member) commented Oct 19, 2015

I'd say that if we don't see significant performance regression (I don't expect that, it's not like cvx is that fast...), and it passes this test: https://github.com/nipy/dipy/blob/master/dipy/reconst/tests/test_mapmri.py#L49, then we should be fine. What do you think @Garyfallidis?

@Garyfallidis (Member) commented Oct 19, 2015

Completely agree and maybe the same idea can be used for SHORE.

@arokem (Member) commented Oct 19, 2015

That would be just too beautiful.

@rutgerfick (Contributor) commented Oct 19, 2015

Sorry maybe I was unclear.

The positivity constraint still uses cvxopt; I didn't change anything with that.
The added value of this implementation is that you can use the laplacian regularization on its own and not use the positivity constraint. In a work that's under review now I show that using the Laplacian you get more accurate results than the positivity constraint in terms of ODF reconstruction. Also, it's much faster :)
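For context, Laplacian regularization of this kind amounts to adding a quadratic penalty to the least-squares fit, so the coefficients still have a closed-form solution (no convex-optimization solver needed). A minimal sketch, with illustrative matrix names rather than dipy's actual API:

```python
import numpy as np

def fit_laplacian_regularized(M, y, L, lam):
    """Solve min_c ||M c - y||^2 + lam * c^T L c in closed form.

    M   : (n_samples, n_coef) design matrix (hypothetical stand-in for the
          MAP-MRI basis evaluated at the q-space sample points)
    L   : (n_coef, n_coef) Laplacian regularization matrix
    lam : non-negative regularization weight
    """
    # Normal equations with the Laplacian penalty added to M^T M.
    return np.linalg.solve(M.T @ M + lam * L, M.T @ y)

# Smoke test: with lam = 0 this reduces to ordinary least squares.
rng = np.random.RandomState(0)
M = rng.randn(20, 5)
c_true = rng.randn(5)
y = M @ c_true
L = np.eye(5)  # identity stands in for a real Laplacian matrix
c_hat = fit_laplacian_regularized(M, y, L, 0.0)
```

Because the penalized problem stays linear, each voxel costs one small linear solve, which is consistent with the speed advantage over the constrained cvxopt fit described above.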

@arokem (Member) commented Oct 19, 2015

Hrrm - IANAL, but I think that we would still need to leave the GPL warning intact then, to be on the safe side.


@maurozucchelli (Contributor) commented Oct 20, 2015

I think the order of the merge is not important. My last pull request just added some scalar maps, while Rutger's improved the fitting.


@samuelstjean (Contributor) commented Oct 21, 2015

Do you still get strictly positive estimates when you only use regularisation without the constraint, or is it suggested to use both at the same time?

@rutgerfick (Contributor) commented Oct 21, 2015

The Laplacian only induces smoothness into the fitting of the signal, so it does not strictly impose positivity on the EAP in any way. However, this smoothness does reduce spurious behavior in the EAP (similarly to, but better than, the low-pass filters in dipy's 3D-SHORE code). Using both would make sure you recover all metrics as positive, but I would like to give you these side notes on the Laplacian on HCP data and a dMRI phantom:

  • In terms of scalar indices, it typically recovers smooth, positive maps in the white and grey matter, only finding negative values near the skull.
  • In terms of ODFs, the Laplacian makes the peaks a little less sharp, but reduces the underestimation of the crossing angle that is inherent to MAP-MRI.
  • In terms of signal fitting, it performs as well as or better than the positivity constraint, especially in extrapolation.
  • In terms of speed, it is a lot faster than the positivity constraint, especially for higher radial orders.

I'll put in a citation to these results as soon as my paper is out, but this was what I found. Still, you can use both and compare results yourself ;)

@arokem (Member) commented Oct 22, 2015

Hey @rutgerfick - please rebase this on top of master (let me know if you need any help doing that). Once you have added tests for the new functions, I'd be happy to take another look.
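For anyone needing the recipe: rebasing a PR branch onto the latest upstream master usually looks like the following (a sketch; the `upstream` remote name is an assumption, and the branch name is taken from this PR):

```shell
# One-time setup: register the main repository as a remote (name assumed).
git remote add upstream https://github.com/nipy/dipy.git

# Bring in the latest master and replay this branch's commits on top of it.
git fetch upstream
git checkout athena_mapmri
git rebase upstream/master

# If git reports conflicts: fix the files, `git add` them, then run
#   git rebase --continue

# The branch history was rewritten, so the push must be forced.
git push --force origin athena_mapmri
```

Note that force-pushing rewrites the published branch, which is why the commit list on a PR can look duplicated if a rebase goes wrong (as happens later in this thread).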

@rutgerfick (Contributor) commented Oct 22, 2015

Of course! I might have time to figure out how to do this and make some tests tomorrow :)


@Garyfallidis (Member) commented Dec 2, 2015

ping

@rutgerfick (Contributor) commented Feb 9, 2016

Hello @arokem @Garyfallidis. I have been working hard and added tests for all estimated q-space indices (RTOP, RTAP, RTPP, MSD, and QIV) and Laplacian regularization.

However, I have also added some new functionality, including tests for it. If you prefer, I can remove these so that this pull request is cleaner, but I would like to leave that decision to you.

More specifically, the added functionality includes:

  • Estimation of Non-Gaussianity (NG), perpendicular NG and parallel NG, with corresponding tests.
  • The spherical MAP-MRI implementation, which is automatically used when anisotropic_scaling is set to False, greatly boosting fitting speed.
  • Test showing equality of the fitted signal between Cartesian MAP-MRI using isotropic scaling and the spherical implementation (i.e., showing equality with 3D-SHORE).
  • The Laplacian regularization for the spherical implementation and its test.
  • Estimation of all q-space indices in the spherical implementation, with the corresponding tests.
  • Function odf_sh that analytically produces the spherical harmonic coefficients of the ODF instead of the discrete sphere function for the isotropic implementation.

Please let me know your preference :)
Rutger

@arokem (Member) commented Feb 9, 2016

Hey @rutgerfick - thanks for all your work on this!

Did you commit and push the tests? I can't see them here.

My tendency is to say that we plow ahead with the full PR, including all the additional functionality you mentioned. But bear with us, as this might take a little while to review. In particular, we might not have time to give this a proper review before the upcoming release (Friday!).

For the time being, could you please rebase this on top of the current master branch? There seem to be some conflicts.

@rutgerfick (Contributor) commented Feb 9, 2016

No problem! I didn't push it yet because I didn't know if you wanted all the extra stuff.

Could you please help me with how to rebase to the current master branch?
I suppose I'll then deal with the conflicts and push it for you guys :)

@arokem (Member) commented Feb 9, 2016

@rutgerfick (Contributor) commented Feb 11, 2016

Hi @arokem,
I rebased on the dipy_master branch and added even more tests to make sure everything is working as it should.

Can you tell me why it still says that this branch has conflicts if I have already rebased? What else should I do?

@arokem (Member) commented Feb 11, 2016

I don't know. Hard to tell from here... Just checking: did you push your rebased branch?

@rutgerfick (Contributor) commented Feb 11, 2016

When I push or pull it says I'm up to date either way. Also, in the large list of commits above my last post it shows my commits twice now. Maybe because I tried to rebase twice?

@arokem (Member) commented Feb 12, 2016

Yeah. There does seem to be something wrong with the rebase -- the history here includes commits that are already in master. Might require a more delicate approach... an interactive rebase, perhaps.

Might be more amenable to handling this face-to-face (over skype). Could you get in touch (gh-username@gmail.com)? Let's try to find a time that would work for both of us to do that.

rutgerfick added some commits Oct 19, 2015

Improved mapmri implementation with laplacian regularization and new …
…q-space scalar indices: q-space inverse variance and mean squared displacement. Also includes generalized cross-validation to find optimal regularization weights. Other convenience functions have been added.

@rutgerfick rutgerfick force-pushed the AthenaEPI:athena_mapmri branch from 164d853 to 76165b1 Feb 17, 2016

- Q-space Inverse Variance (QIV) is a measure of variance in the signal, which
is said to have higher contrast to white matter than the MSD
[Hosseinbor2013]_. We also showed that QIV has high sensitivity to tissue
composition change in a simulation study [Fick2016b].

@arokem (Member) commented Jun 10, 2016

=> "[Fick2016b]_" (with the underscore!)

the probability that a proton will be along the axis of the main eigenvector
of a diffusion tensor during both diffusion gradient pulses. RTAP has been
related to the apparent axon diameter [Ozarslan2013, Fick2016]_ under several
strong hypothesis on the tissue composition and acquisition protocol.

@arokem (Member) commented Jun 10, 2016

"hypothesis" => "assumptions"?

perpendicular NG and parallel NG. The NG ranges from 1 (completely
non-Gaussian) to 0 (completely Gaussian). The overall NG of a voxel is always
higher or equal than each of its components. It can be seen that NG has low
values in the CSF and higher in the white matter.

@arokem (Member) commented Jun 10, 2016

Would be interesting to compare NG and MK. Should be really easy to do once this is merged!

cc: @RafaelNH

self.radial_order, mu[0], constraint_grid)
K = K_dependent * self.pos_K_independent

data = np.asarray(data / data[self.gtab.b0s_mask].mean())

@arokem (Member) commented Jun 10, 2016

Is this where you are hitting #1074?

@arokem (Member) commented Jun 10, 2016

Should we add a np.isscalar check on the output here @matthew-brett? It would be really ugly to special-case this for what is essentially just a bug in one version of numpy.

@rutgerfick (Contributor) commented Jun 10, 2016

Yeah, this is exactly that. It's very ugly, so any suggestion to deal with it more generally would be great.

@arokem (Member) commented Jun 10, 2016

Would be good to add a test with a memmap here (and we might want to add this in other places as well...), so that we can be sure that we fix this.
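For reference, the numpy 1.11 quirk under discussion (#1074) is that reductions on a memory-mapped array can return a 0-d memmap rather than a plain scalar; coercing to a plain ndarray up front side-steps it. A minimal sketch of the kind of test suggested here (file name and shape are arbitrary):

```python
import os
import tempfile

import numpy as np

# Build a small memory-mapped array, standing in for data loaded from disk.
path = os.path.join(tempfile.mkdtemp(), "data.dat")
data = np.memmap(path, dtype=np.float64, mode="w+", shape=(4, 4))
data[:] = 2.0

# np.asarray strips the memmap subclass (it returns a base-class ndarray
# view), so subsequent reductions such as data[mask].mean() behave the
# same across numpy versions.
data = np.asarray(data)
```

A test along these lines would feed a memmap into the fitting code and assert that scalar outputs really are scalars, so the special-casing cannot silently regress.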

@coveralls commented Jun 13, 2016

Coverage Status

Changes Unknown when pulling b1e00c4 on AthenaEPI:athena_mapmri into nipy:master.

@samuelstjean (Contributor) commented Jun 13, 2016

Could be useful to have that directly in the reconst multivoxel fit part, if it does not break anything. I currently set all my topmost nibabel imports to convert to an array upon loading, since this issue was giving me problems in other places too, but fixing it there would make it work for any reconst object afterward.

@arokem (Member) commented Jun 13, 2016

But we'd need to check whether you are on numpy 1.11 or not, no? Which would be a pain in the neck. Maybe we can just ask people using numpy 1.11 not to do that? :-)

@samuelstjean (Contributor) commented Jun 14, 2016

Nope, the exact line I use is
data = np.asarray(vol.get_data(caching='unchanged'))  # force an ndarray instead of a memmap

Or, as here, just data = np.asarray(data) in this file: https://github.com/nipy/dipy/blob/master/dipy/reconst/base.py#L37

But then I thought doing that might lead to people complaining that processing their HCP dataset now takes 4 GB of RAM (remember the "DTI takes too much memory" issue?) and is plain impossible to run, so not a good idea in that perspective.

@rutgerfick (Contributor) commented Jun 14, 2016

So, any more corrections? :)

@samuelstjean (Contributor) commented Jun 14, 2016

Well, I have some data which gives me degenerate metrics near the corpus callosum and negative values near the ventricles, but I have not yet found out whether it's just bad data, or the fact that I did not enforce positivity that is causing the problem.

@rutgerfick (Contributor) commented Jun 14, 2016

Are degenerate and negative the same?

If you use the "GCV" option it'll find the optimal regularization weight for that particular voxel. However, this optimal weight can be very low in voxels that are very anisotropic or decay quickly (e.g. corpus callosum or ventricles). If you set the regularization weight a bit higher it will in almost all cases become positive.

However, the Laplacian does not guarantee positivity! I also note this in the example. When you care about q-space indices, the best results are obtained by combining the Laplacian (with either GCV or a lower weight) and the positivity constraint; in that case you can use fewer constraint points, since the Laplacian already smooths the solution.

However, if you only care about signal interpolation, for example, the Laplacian with GCV does a better job than the positivity constraint.
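For readers following along: generalized cross-validation chooses the weight by minimizing GCV(lam) = n * ||y - S_lam y||^2 / (n - tr(S_lam))^2, where S_lam is the smoother matrix of the regularized fit. A grid-search sketch (illustrative names, not dipy's implementation):

```python
import numpy as np

def gcv_score(M, y, L, lam):
    """GCV score for one candidate regularization weight lam."""
    n = len(y)
    # Smoother matrix of the Laplacian-regularized least-squares fit.
    S = M @ np.linalg.solve(M.T @ M + lam * L, M.T)
    resid = y - S @ y
    return n * (resid @ resid) / (n - np.trace(S)) ** 2

def gcv_optimal_weight(M, y, L, grid):
    """Pick the weight on a grid that minimizes the GCV score."""
    scores = [gcv_score(M, y, L, lam) for lam in grid]
    return grid[int(np.argmin(scores))]

# Tiny usage example on synthetic noisy data.
rng = np.random.RandomState(1)
M = rng.randn(30, 5)
y = M @ rng.randn(5) + 0.1 * rng.randn(30)
L = np.eye(5)  # identity stands in for a real Laplacian matrix
grid = np.array([1e-4, 1e-2, 1.0])
lam_opt = gcv_optimal_weight(M, y, L, grid)
```

Since GCV adapts the weight per voxel, it can come out very low in fast-decaying voxels, which matches the behavior described above where manually raising the weight restores positivity.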

@samuelstjean (Contributor) commented Jun 14, 2016

Actually a bit of both: near the CC I get values which are much higher than the rest (so high they screw up normalisation for visualisation, so they are easy to spot), and negative metric values (RTAP/RTOP) near the ventricles.

I knew it does not enforce positivity (it was mentioned long ago), but the two are also not mutually exclusive. I tried once to use both, but it was either taking ridiculously long (> 1 day, compared to around 2-3 hours for Laplacian regularization alone) or it most probably hanged in some computation with openmp/multiprocessing (which is a linalg issue, so not super relevant). I guess I could give it a retry later this weekend to see, but it should probably fix it.

I think I was on the default of 0.2 everywhere for this one (using the code you sent me, so no option to save the object, but according to the script it should be that).

@rutgerfick (Contributor) commented Jun 14, 2016

Once you guys decide to accept this pull request I'll do a second one right away that allows you to save fitted objects. That takes away a lot of the pressure to be 'faster', since you can fit once and save the result for later use.

Anyway, if you need me to change anything, let me know. Also, the 'coveralls' thing tells me I'm 84% covered. Should that be 100%? How do I get it there?

@arokem (Member) commented Jun 14, 2016

I believe the 84% from coveralls is for the entire package. Coverage of the mapmri module is at 87% (https://travis-ci.org/nipy/dipy/jobs/137204729#L2386). That's actually not a great number. If you want to know which statements are not currently covered by tests, you should run nosetests --with-coverage --cover-package=dipy (see also: http://nose.readthedocs.io/en/latest/plugins/cover.html).

@matthew-brett, @Garyfallidis : do you have any thoughts on the array/memmap situation? The current solution seems like overkill to me. I would like for this to be taken as a final recourse only for numpy 1.11. What do you think?

@Garyfallidis (Member) commented Jun 15, 2016

Hi @arokem and @rutgerfick. The problem of saving the fit results has been discussed many times in the past, and when it is implemented it should support as many reconstruction modules as possible, hopefully all of them. There is no point at this stage in allowing this only for MAPMRI; we need to think about the general design. Of course we need to save the fitted parameters. @MrBago had an interesting idea of how to parallelize the fit too. Memory, parallelization, and speed are all components that need to be considered carefully, as is temporarily saving the fit parameters to be used again at a later stage.

@arokem what do you mean by "do you have any thoughts on the array/memmap situation? The current solution seems like overkill to me"? Which solution? Please be more specific. Apologies that I don't remember the exact issue.

@Garyfallidis (Member) commented Jun 15, 2016

@rutgerfick these are your missing lines. The coverage is 89% on my machine. Please make sure that nothing critical is left uncovered, and try to go beyond 90%. Also @arokem, I noticed that the SFM tests have dropped for some reason.

Name                  Stmts   Miss  Cover   Missing
---------------------------------------------------
dipy.reconst.mapmri     758     84    89%   210-211, 222, 228, 234, 239-248, 256, 316, 355, 365-367, 370, 392-394, 437, 443, 501-503, 753-755, 781-783, 821-823, 854-863, 873, 881-882, 908-917, 1459-1479, 1485-1503, 1661, 1664, 1950

@rutgerfick your example has rendering issues. You need to add a blank line between all code snippets and comments in the example. See line 351, for example.

@Garyfallidis (Member) commented Jun 15, 2016

Also, cvxopt should be optional, with an alternative used instead. But maybe there is no alternative; if so, the tests need to be skipped when cvxopt is not available. Let us know if you need some help with that. Keep it up @rutgerfick! We are getting there!
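A common shape for an optional-dependency test skip looks like the following (dipy has its own helper for this, `dipy.utils.optpkg.optional_package`; the sketch below is the generic stdlib pattern, and the test body is hypothetical):

```python
import importlib
import unittest

# Try to import cvxopt once; record availability instead of failing at
# import time, so the rest of the module stays usable without it.
try:
    cvxopt = importlib.import_module("cvxopt")
    HAVE_CVXOPT = True
except ImportError:
    cvxopt = None
    HAVE_CVXOPT = False

class TestPositivityConstraint(unittest.TestCase):
    @unittest.skipUnless(HAVE_CVXOPT, "cvxopt is not installed")
    def test_positivity_constrained_fit(self):
        # Hypothetical body: would exercise the cvxopt-based constrained fit.
        self.assertIsNotNone(cvxopt)
```

With this pattern the Laplacian-only path needs no cvxopt at all, and only the positivity-constrained tests are skipped on machines without it.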

@arokem (Member) commented Jun 15, 2016

Re saving the fit results: this is not in this PR. Let's discuss it when it becomes relevant.

Re the array/memmap situation: #1074. And the solution in this commit: b1e00c4. This would solve the problem for numpy 1.11, but might be a burden in any other version of numpy (unless I am misunderstanding the situation).

Added error codes. I found that sometimes the data in a voxel is corrupted for whatever reason, and the matrix inversion would give a LinAlgError and quit the fitting. Now there are catches for these specific errors, and an error code is given to the fitted model to see which voxels failed and for what reason, without stopping the overall fitting.
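The commit described here catches per-voxel linear-algebra failures so that one corrupted voxel cannot abort the whole fit. A generic sketch of that pattern (the error-code convention below is hypothetical, not the one in the commit):

```python
import numpy as np

def fit_voxels(design, data_2d):
    """Fit each voxel, recording an error code instead of raising.

    Error codes (hypothetical convention): 0 = ok, 1 = LinAlgError.
    """
    n_vox = data_2d.shape[0]
    n_coef = design.shape[1]
    coefs = np.zeros((n_vox, n_coef))
    errors = np.zeros(n_vox, dtype=int)
    gram = design.T @ design
    for i, y in enumerate(data_2d):
        try:
            coefs[i] = np.linalg.solve(gram, design.T @ y)
        except np.linalg.LinAlgError:
            errors[i] = 1  # leave coefficients at zero, mark the voxel
    return coefs, errors
```

Returning the error array alongside the coefficients lets a caller inspect exactly which voxels failed after the fact, which is what the commit message asks for (and what the test requested below would verify).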
@arokem (Member) commented on 4bed7ea Jun 17, 2016

Good addition. Could you add tests for this? That is, tests that generate and verify the different error codes?

@Garyfallidis (Member) commented Nov 23, 2016

Hi @rutgerfick @arokem and @demianw. I would love to see this rebased and ready to be merged. Can we get back to it please? Many people would love to start using this great method and its updates. Thanks in advance.

@rutgerfick (Contributor) commented Nov 23, 2016

Hey Elef!!

Great to hear from you again!!
I want to let you know that finishing this contribution is a high priority for me. I'm currently preparing my thesis manuscript and will finish in the next couple of weeks. Once that is done I will make the final corrections to the code and the example file, as per your instructions, so it can finally be merged :)

Without dipy I definitely could not have accomplished as much as I have during my PhD. My sincere thanks for your vision!

I'll get back to you soon,

Rutger


@arokem arokem referenced this pull request Nov 25, 2016

Merged

Athena mapmri #1153

@arokem (Member) commented Nov 25, 2016

@Garyfallidis : what comments are still pending once this is rebased?

I created a rebased version of this on #1153

@arokem (Member) commented Nov 25, 2016

@Garyfallidis : does anything else need to happen apart from the rebase? I have a rebased version of this in #1153

@arokem (Member) commented Dec 13, 2016

Superseded by #1153

@arokem arokem closed this Dec 13, 2016
