WIP: New PIESNO example and small corrections #390

Closed
wants to merge 12 commits into
base: master
from

Conversation

8 participants
@mdesco
Contributor

mdesco commented Jul 9, 2014

The PIESNO method was merged into Dipy without an example. Here is an example for the Dipy Gallery.

At the same time, the PIESNO method was corrected by @samuelstjean to better match the Koay et al. 2009 paper. The other examples have also been made consistent with each other.
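
For readers who want to try it while the example is under review, here is a minimal sketch of the intended usage. The file name is a placeholder, and the return_mask keyword is assumed to match the piesno signature in dipy.denoise.noise_estimate:

import nibabel as nib
from dipy.denoise.noise_estimate import piesno

# Load a 4D diffusion-weighted dataset (placeholder file name).
data = nib.load('dwi.nii.gz').get_data()

# N is the effective number of receiver channels of the reconstruction
# (N=1 for SENSE, the number of coil groups for GRAPPA-like reconstructions).
sigma, background_mask = piesno(data, N=4, return_mask=True)
print(sigma)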

Noise estimation using PIESNO
=============================
Using PIESNO [Koay2009]_ one can detect the noise's standard deviation from


@arokem

arokem Jul 9, 2014

Member

=> "the standard deviation of the noise"

=============================
Using PIESNO [Koay2009]_ one can detect the noise's standard deviation from
diffusion-weighted imaging (DWI). PIESNO also works from multiple channel


@arokem

arokem Jul 9, 2014

Member

=> "...also works with multiple channel..."

DWI datasets that are acquired from N array coils for both SENSE and
GRAPPA reconstructions.
The PIESNO paper [Koay2009]_ is mathetically quite involved.


@arokem

arokem Jul 9, 2014

Member

typo : "mathematically"

the input number of coils N, PIESNO finds what sigma each Gaussian distributed
image profile from each of the N coils would have generated the observed Rician (N=1)
or non-central Chi (N>1) distributed noise profile in the DWI datasets.
[Koay2009]_ gives all the glory details.


@arokem

arokem Jul 9, 2014

Member

I think you mean "gory", but I would just say "all the details".

PIESNO makes an important assumption: the
Gaussian noise standard deviation is assumed to be uniform either
across multiple slice locations or across multiple images of the same location,
e.g., if the readout bandwidth is maintained at the same level for all the images.


@arokem

arokem Jul 9, 2014

Member

Could you break this sentence up a little bit? It's a bit hard to parse. Imagine that you are talking to a psychologist, and you want them to understand what this means :-)

for i in range(2):
    ax[i].set_axis_off()
#plt.show()


@arokem

arokem Jul 9, 2014

Member

If it's commented out, maybe it doesn't need to be here?


@mdesco

mdesco Jul 25, 2014

Contributor

I like to keep it there for people not familiar with fvtk: they can see what is needed to actually display the window without saving the PNG. I have added a comment.

But I can remove it if people don't like it.

ax[0].set_title('Axial slice of the b=0 data')
ax[1].imshow(axial_piesno, cmap='gray', origin='lower')
ax[1].set_title('Background voxels from the data')
for i in range(2):


@arokem

arokem Jul 9, 2014

Member

for a in ax:
    a.set_axis_off()
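
For reference, a self-contained version of the plotting block with this suggestion applied could look like the sketch below; axial and axial_piesno are assumed to be the 2D slices computed earlier in the example:

import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 2)
ax[0].imshow(axial, cmap='gray', origin='lower')
ax[0].set_title('Axial slice of the b=0 data')
ax[1].imshow(axial_piesno, cmap='gray', origin='lower')
ax[1].set_title('Background voxels from the data')

# Loop over the axes directly instead of indexing with range(2).
for a in ax:
    a.set_axis_off()

plt.savefig('piesno_example.png', bbox_inches='tight')
# plt.show()  # uncomment to open an interactive window instead of only saving the PNG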

"""
Here, we obtained a noise standard deviation of 7.26. For comparison, a simple
standard deviation of all voxels in the estimated mask (as done in the previous example)


@arokem

arokem Jul 9, 2014

Member

What previous example? Could you please link to that one with the :ref: directive?

SNR estimation
~~~~~~~~~~~~~~
Basic SNR estimation
~~~~~~~~~~~~~~~~~~~~
- :ref:`example_snr_in_cc`


@arokem

arokem Jul 9, 2014

Member

Is this the example you are referring to?


@mdesco

mdesco Jul 25, 2014

Contributor

Yes.

# this should be stable with more or less 50% of the guesses at the same value.
#print(sigma)
sigma, num = mode(sigma, axis=None)


@arokem

arokem Jul 9, 2014

Member

It seems to me that the mode might fail in some cases, depending on the tolerance of the mode calculation. Is this what Koay et al. did? How about the median, for robustness?


@samuelstjean

samuelstjean Jul 10, 2014

Contributor

I just used the mode because most of the time you have something like 60%-75% of the slices with exactly the same sigma. The median could also be used, I guess; I personally used it to double check that the mode worked correctly.

I don't think they actually use the same sigma value for the whole volume; it's just much less of a hassle than considering each slice separately for whatever purpose one might have afterward.
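
To illustrate the trade-off being discussed, here is a small standalone sketch (not the code under review) comparing the two ways of collapsing per-slice estimates into a single value:

import numpy as np
from scipy.stats import mode

# Hypothetical per-slice sigma estimates from a slice-wise PIESNO loop.
sigma_per_slice = np.array([7.3, 7.3, 7.3, 7.3, 7.1, 7.3, 9.8, 7.3])

# Mode: picks the most frequent value, which works when most slices agree exactly.
sigma_mode, count = mode(sigma_per_slice, axis=None)

# Median: more robust when the estimates are all slightly different.
sigma_median = np.median(sigma_per_slice)

print(sigma_mode, count, sigma_median)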


@samuelstjean

samuelstjean Jul 10, 2014

Contributor

Once again, the slice-by-slice for loop is mostly for end-user convenience and is not backed by theory in any way. One can call the _piesno_3D function directly to get a slice-by-slice estimate (as desired in spinal cord dMRI), so I'll think of a way to write that as a note in the docstring.

#print(sigma)
sigma, num = mode(sigma, axis=None)
#print(sigma, num)


@arokem

arokem Jul 9, 2014

Member

this is probably a debug statement (?). You can remove it or mark it with a #debug comment, so we know

Note
------
This function assumes two things : 1. The data has a noisy, non-masked


@arokem

arokem Jul 10, 2014

Member

Just out of curiosity (and the kind of data I have...): if this is not the case, for example if the data in the background gets masked out by the scanner, is it possible to use some of the brain data for this? For example, is it possible to use the ventricles?


@samuelstjean

samuelstjean Jul 10, 2014

Contributor

The best way to know is to try it; the method is fully automatic, so it will pick up what it can. Sometimes it will find the skull, it being a region of noise, but not the ventricles, from what I have seen.

Also, if there is no background, it might just not find anything qualifying as background/noise either.


@arokem

arokem Jul 10, 2014

Member

Another question about this: have you run this on the HCP data? What N are you putting in for that? cc: @klchan13


@samuelstjean

samuelstjean Jul 10, 2014

Contributor

Not personally, for the publicly released datasets (I'm playing with a spinal cord one currently). Do they list the hardware specs of their scanner somewhere? They have a ton of papers about that, for example http://onlinelibrary.wiley.com/doi/10.1002/mrm.24623/abstract;jsessionid=1A58CFCAB23677366853EE1BC46F011A.d03t02

They used a SENSE-based reconstruction with a 32-channel coil, so if they always used that one, and this is the right Human Connectome Project (there are two of them), that would mean N=1 because of SENSE.

Another way to check is to draw a manual ROI and then look at the noise profile. It's easy to see whether it's Rician or not by checking the curve. In the latter case, it's harder to guess the value of N (I can't, anyway).

@arokem


Member

arokem commented Jul 10, 2014

Looks good! I had a few comments/questions, and I think that the example could be a little more explanatory, so that users (such as myself) can better understand how/why this method is useful to them (and @Garyfallidis will come right along and say that it needs to be shorter...), but overall a great addition.

@samuelstjean


Contributor

samuelstjean commented Jul 10, 2014

Well, the next thing to do is probably to revamp the restore and nlmeans examples with this, since it's exactly made for that kind of thing.

@arokem


Member

arokem commented Jul 10, 2014

+1 on that!


same realisation of the volume, such as dMRI or fMRI data.
N : int
The number of phase array coils of the MRI scanner.


@samuelstjean

samuelstjean Jul 10, 2014

Contributor

The remark from lines 32 to 36 could also be added here.

PIESNO works in two steps. 1) First, it finds voxels that are most likely
background voxels. Intuitively, these voxels have very similar
diffusion-weighted intensities (up to some noise) in the fourth dimension
of the DWI dataset, as opposed to tissue voxels that have diffuison intensities


@samuelstjean

samuelstjean Jul 10, 2014

Contributor

diffuison -> diffusion

"""
Now that we have fetched a dataset, we must call PIESNO with right number N


@samuelstjean

samuelstjean Jul 10, 2014

Contributor

the right number

of coil used to acquire this dataset. It is also important to know what
was the parallel reconstruction algorithm used.
Here, the data comes from a GRAPPA reconstruction from
a 12-element head coil available on the Tim Trio Siemens, for which


@samuelstjean

samuelstjean Jul 10, 2014

Contributor

12-elementS

Here, the data comes from a GRAPPA reconstruction from
a 12-element head coil available on the Tim Trio Siemens, for which
the 12 coil elements are combined into 4 groups of 3 coil elements
each. These groups are received through 4 distinct receiver channels,


@samuelstjean

samuelstjean Jul 10, 2014

Contributor

Are the groups received? That seems weird; isn't it more that the signal is received through 4 distinct groups of receiver channels, or something like that?

@mdesco


Contributor

mdesco commented Jul 25, 2014

I have just pushed the corrections and modifications that were asked for.

Related to the following comment by @arokem: "I think that the example can be a little bit more explanatory, so that users (such as myself) can better understand how/why this method is useful to them (and @Garyfallidis will come right along and say that it needs to be shorter...)"

I have done everything I can to make this example clear. Have a go at reading the paper and you will see that it is very hard to understand. I don't think it is possible to make the example shorter unless I cut the explanations and the surrounding chatter.

I think that the following points come across:

  1. We can estimate the noise standard deviation automatically with PIESNO.
  2. It is robust to multi-channel acquisitions from parallel imaging.
  3. Background voxels can be identified thanks to the 4th dimension of the diffusion data: background voxels have a flat intensity profile along that dimension (they are just noise), as opposed to tissue voxels, which have a highly variable 4th-dimension profile (see the sketch below).

I'll let you guys take over from here.
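
As a rough illustration of point 3 (a toy sketch on synthetic data, not the actual PIESNO algorithm), one can look at how each voxel behaves along the 4th dimension; background voxels have a low, noise-only profile while tissue voxels carry a strong, direction-dependent signal:

import numpy as np

# Synthetic 4D volume: noise everywhere, plus a bright "tissue" block with a
# signal that varies across the 60 simulated gradient directions.
rng = np.random.RandomState(0)
data = rng.normal(0, 5, size=(32, 32, 10, 60))
data[8:24, 8:24, :, :] += 400 + 100 * rng.rand(60)

# Mean intensity of each voxel's profile along the 4th (diffusion) dimension.
profile_mean = data.mean(axis=-1)

# Background voxels: flat, low-intensity profiles that are essentially pure noise.
rough_background = profile_mean < 50
print(rough_background.sum(), 'voxels flagged as background')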

return sigma
def _piesno_3D(data, N, alpha=0.01, l=100, itermax=100, eps=1e-5):


@samuelstjean

samuelstjean Jul 25, 2014

Contributor

Another nitpick from me: the version in the paper is applied to 2D images, as opposed to the 3D volumes now used in the example and the new function. The mode/median part is just a hack I added myself for simplicity and does not rely on any real theoretical background.

I'll try to specify that more explicitly in the functions, for example by removing the _ from the _piesno_3D function name.

Anyway, it is more of a convenience than anything else and it seems to work so far in all our tests, but for spinal cord imaging, for example, it is better to use the slice-by-slice version because of the huge gaps along the Z axis, which lead to high variability in the standard deviation estimate of each slice.
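
A sketch of the slice-by-slice usage described here, assuming data is a 4D array of shape (x, y, z, n_directions) and that _piesno_3D, whose signature appears above, estimates the noise for one slice at a time:

from dipy.denoise.noise_estimate import _piesno_3D

# One noise estimate per axial slice, instead of a single pooled value.
# This is the strategy suggested for spinal cord data with large Z gaps.
sigmas = [_piesno_3D(data[:, :, z, :], N=4) for z in range(data.shape[2])]
print(sigmas)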

@arokem


Member

arokem commented Nov 13, 2014

Yep - good point.

On Thu, Nov 13, 2014 at 11:45 AM, Matthew Brett notifications@github.com
wrote:

Sure, but if there was some way of guessing that the background had been masked from the data, it would be reasonable to either 'warn' or, better, raise an error that can be overridden with something like 'check_background=False'.



@jchoude


Contributor

jchoude commented Nov 13, 2014

I must agree here that having at least some heuristic check with a warning would be ideal. Sam, you say that if there are assumptions, we should assume that the user is going to follow them... I disagree with that, for multiple reasons:

  1. People don't necessarily read the full doc, and can be unaware of some assumptions.
  2. Not everyone knows what a "correct" background is.
  3. It is not necessarily easy to see whether your background is correct or not.
  4. If there is a way to identify potential failures, at least in some cases, I think it would be a good plus for the project to be able to warn the user.
  5. Not everyone knows what a good sigma is... Just checking the sigmas is not a foolproof way for the user to see whether the estimation was correct.


@samuelstjean


Contributor

samuelstjean commented Nov 13, 2014

Well, actually, instead of providing a switch to check for the background (where a masked background means a bunch of zeros), the user can simply check the mask. If it contains data instead of background noise (and in that case it should simply not find anything and return zeros), it is easy to see visually.

As long as there are randomly behaving voxels (which are noise), it will pick them up. I think the problem only lies with brain-masked data, which probably never comes out of the scanner as is. In that case, neither of the two available methods will do a good job, in my opinion.

@matthew-brett


Member

matthew-brett commented Nov 13, 2014

The problem is that there will be many users who will not check the mask.

If the problem is easy to see visually, surely there is some heuristic to pick it up?

@samuelstjean


Contributor

samuelstjean commented Nov 13, 2014

Well, the returned array should be a bunch of zeros if nothing is found. I don't have any data on hand, but from past experience no voxels should fit the internally set threshold if they are all data. The best way would be to test it on a bunch of datasets from various scanners and check the results. It usually takes less than a minute; I can do the checking if people are willing to run it on their datasets.

@samuelstjean


Contributor

samuelstjean commented Nov 13, 2014

Just did a quick run on a masked-background dataset:

[ 25.62295723 25.62295723 25.62295723 34.16394424 34.16394424
34.16394424 34.16394424 34.16394424 34.16394424 34.16394424
34.16394424 34.16394424 34.16394424 34.16394424 34.16394424
25.62295723 25.62295723 25.62295723 25.62295723 25.62295723
25.62295723 25.62295723 25.62295723 25.62295723 25.62295723
25.62295723 25.62295723 25.62295723 25.62295723 25.62295723
25.62295723 25.62295723 25.62295723 25.62295723 25.62295723
25.62295723 25.62295723 25.62295723 25.62295723 25.62295723
25.62295723 25.62295723 25.62295723 17.08197212 25.62295723
17.08197212 17.08197212 17.08197212 17.08197212 17.08197212
17.08197212 17.08197212 17.08197212 17.08197212 17.08197212
17.08197212 17.08197212 17.08197212 17.08197212 17.08197212
17.08197212 17.08197212 17.08197212 17.08197212 17.08197212
17.08197212 17.08197212 17.08197212 17.08197212 17.08197212
17.08197212 8.54098606 8.54098606 8.54098606 8.54098606
8.54098606 8.54098606 8.54098606 8.54098606 0. 0. 0.

0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. ]

As expected, it returned the noise standard deviation for each slice; empty slices were identified as having no noisy voxels, and keeping the full array informs us about spatially varying noise.
As for an HCP dataset (since they are fully masked), we can check whether most of the slices are returned as zeros, with some noise picked up occasionally.

[ 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 128.83085632 129.95111084
126.03018188 122.66937256 130.51124573 132.19165039 138.91326904
124.34977722 134.43218994 135.55245972 133.31192017 124.34977722
131.07138062 136.1125946 130.51124573 131.6315155 119.86870575
116.50789642 124.90991211 123.78964233 129.39099121 131.07138062
129.39099121 141.15379333 135.55245972 137.23286438 138.35313416
133.87205505 138.35313416 134.43218994 138.35313416 137.23286438
133.86416626 140.03353882 129.95111084 138.84295654 138.91326904
134.43218994 133.31192017 128.83085632 129.39099121 126.59031677
1004.32049561 131.07138062 128.27072144 977.99420166 961.19018555
135.55245972 137.23286438 130.51124573 141.71392822 144.51460266
148.99568176 142.834198 869.88830566 132.19165039 132.37098694
138.91326904 132.75178528 135.97987366 131.07138062 130.51124573
126.03018188 119.30857086 122.10923767 116.37518311 117.06803131
122.10923767 126.59031677 118.74843597 123.22950745 117.06803131
124.34977722 117.6281662 109.78629303 110.90655518 107.54575348
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. ]

There are 4 slices with values around 1000, but other than that nothing seems far-fetched. Hence, keeping the full array gives more information than the median or the mode did previously.

@matthew-brett


Member

matthew-brett commented Nov 13, 2014

For the heuristic, presumably we can't depend on zeros being returned? I guess these are from slices with no brain? There may be no such slices in a dataset.

@samuelstjean


Contributor

samuelstjean commented Nov 14, 2014

Same HCP dataset, but cropped to the minimal bounding box.

[ 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
111.46669006 103.06468201 109.78629303 103.06468201 117.6281662
115.38763428 121.54910278 118.74843597 119.30857086 114.82749939
119.86870575 112.02682495 117.6281662 108.10588837 111.46669006
110.34642029 115.38763428 112.58695984 115.38763428 111.2700119
110.34642029 105.86534882 100.26400757 116.50789642 115.38763428
115.94776154 112.58695984 120.98897552 116.50789642 115.94776154
120.98897552 124.34977722 123.78964233 121.54910278 119.30857086
119.86870575 114.6308136 116.54057312 108.66602325 100.82414246
106.4254837 120.42884064 126.59031677 8.69328213 109.22615814
117.6281662 8.73809242 111.56021881 124.90991211 112.41056061
120.42884064 113.70722961 116.04129028 113.14709473 126.03018188
117.6281662 132.19165039 112.9706955 8.27318096 8.18355942
117.41120148 118.18830109 113.70722961 8.00431633 114.2673645
109.58960724 109.22615814 106.98561859 105.30521393 108.66602325
101.14702606 101.20787811 6.26229954 95.60653687 107.54575348
104.74507904 106.98561859 109.22615814 108.10588837 102.50454712
105.30521393 100.26400757 102.50454712 107.54575348 100.26400757
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. ]

Now the extreme values are below 10 instead of being close to 1000. Looking at the bulk of the array, both runs give values around 100 to 140, albeit a little lower for the cropped dataset.

@samuelstjean


Contributor

samuelstjean commented Nov 14, 2014

As for flagging things, checking for a zeroed-out background could be done by looking at the median, for example: if it's zero, then most of the array is probably masked. As for cropped datasets, I don't know how it could easily be done.
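
A sketch of that median heuristic, assuming sigmas holds the per-slice estimates shown in the arrays above:

import warnings
import numpy as np

sigmas = np.asarray(sigmas)

# If the median per-slice estimate is zero, more than half of the slices found
# no usable background, so the data has probably been masked.
if np.median(sigmas) == 0:
    warnings.warn("Most slices returned sigma == 0; the background appears to "
                  "be masked and the PIESNO estimate is likely unreliable.")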

@samuelstjean


Contributor

samuelstjean commented Jan 18, 2015

Well, since this seems to be only collecting dust, should we get the discussion going again? I toyed a bit with some things regarding the background on the HCP data, and here are some findings:

  1. Since the initial estimation is based on a quantile, if more than half of the values are zeros it will give back zero and just bail out of the heavy computations.
  2. Even if most of the background is masked, it still nicely picks up the ring of garbage/noise left around the skull in the HCP data.
  3. I think we should return a stack of arrays after all, instead of relying on the mode; the user can then choose to use it as is. The interest lies in having a spatially variable noise estimate, which is more robust for diffusion MRI than the single background-based value the other method currently provides (that approach is fine for lightly noisy data like T1-weighted images).
@samuelstjean


Contributor

samuelstjean commented Feb 4, 2015

@arokem @matthew-brett @Garyfallidis

So, before this gets forgotten forever, is there anything else you want to add? I kind of lost track of what happened after a while, but in any case the background-detection question is handled by the quantile code itself: if the detected quantile value is 0 (meaning more than half of the selected slice of the volume is zero), the returned value is 0, since that is the first guess.

@arokem


Member

arokem commented Feb 4, 2015

Well, for starters, a rebase on master is needed. Second, how is this related to #572? I haven't done a thorough review of that one yet, just a couple of clarifying comments. The truth is that I don't feel like I have a full picture yet, but that's probably because I don't have the time to dig into this right at this moment.

@samuelstjean


Contributor

samuelstjean commented Feb 4, 2015

#572 will allow you to supply a noise volume computed with piesno (or any other fancy method, for that matter) as opposed to a single value. This PR, if my memory serves correctly, adds a wrapper function so you can pass a 4D array, which is then iterated over slice by slice. The old version relied on the user wrapping it in a for loop (as intended in the paper), and for some reason that point did not get through to people.
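
To make the connection concrete, here is a rough sketch of the workflow #572 is meant to enable; the array-valued sigma accepted by nlmeans is the feature under review there, so treat this as an assumption rather than the final API:

import numpy as np
from dipy.denoise.nlmeans import nlmeans

# per_slice_sigma: one noise estimate per axial slice, e.g. from the wrapper
# added in this PR. Broadcast it into a full 3D sigma volume.
sigma_vol = np.ones(data.shape[:3]) * per_slice_sigma[np.newaxis, np.newaxis, :]

# Denoise with the spatially varying sigma (assumes #572 is merged and that
# nlmeans accepts an array for sigma).
denoised = nlmeans(data, sigma=sigma_vol)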

@Garyfallidis


Member

Garyfallidis commented Feb 14, 2015

@samuelstjean, @arokem is telling you that this needs a rebase. Are you saying that we should merge #572 first?

@samuelstjean


Contributor

samuelstjean commented Feb 15, 2015

They are not directly related in any way, so I don't think that is needed. It's just that #572 enables you to use the result of the piesno function directly, instead of using hackery to supply a variable noise estimate.

@arokem


Member

arokem commented Feb 15, 2015

Cool. @mdesco - could you please rebase this one on top of current master?

@samuelstjean


Contributor

samuelstjean commented Feb 15, 2015

We still need to check that everything is in working order as expected before someone merges it early; this thing is so old I've kind of lost track of it.


@arokem


Member

arokem commented Feb 15, 2015

OK. I would be happy to try this out on some data before merging. Still needs a rebase for me to be able to do that...

@samuelstjean


Contributor

samuelstjean commented Feb 17, 2015

I added a check so that if the initial quantile estimate is zero (meaning that more than half of the slice is zero), a warning is raised and the function returns zero in that case.

I also rebased on master.
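
The check amounts to something like the following sketch of the logic; slice_data stands for one slice of the 4D volume, and the exact quantile used inside piesno may differ:

import warnings
import numpy as np

# Initial guess: a quantile of the slice intensities. If it is exactly zero,
# more than half of the slice is zero, i.e. the background was masked out.
initial_guess = np.percentile(slice_data, 50)

if initial_guess == 0:
    warnings.warn("More than half of the voxels are zero; returning sigma = 0.")
    sigma = 0.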

@arokem


Member

arokem commented Feb 17, 2015

You sure about that rebase?

I am still getting a "can't merge due to conflict" message on the GitHub interface.


@samuelstjean


Contributor

samuelstjean commented Feb 17, 2015

Yes and no. I actually made a PR on Maxime's branch, with the rebase and everything, so he needs to merge that beforehand into his personal repo. I posted too early and moved on to something else XD We will review the code tomorrow. I also thought I hadn't sent that message, silly me.

@arokem arokem referenced this pull request Feb 19, 2015

Merged

Piesno example #576

@stefanv stefanv modified the milestone: 0.9 Mar 4, 2015

@fmorency


fmorency commented Mar 19, 2015

I've been working lately with the dipy denoise module, and one of my conclusions is that the piesno() noise estimation is (almost) unusable in its current state. The documentation currently lacks details on how to properly use it with 4D data.

By chance, I stumbled upon this pull request, which seems to be exactly what's missing (together with #572) to be able to use piesno easily. Those PRs seem (almost) ready to be merged, but they haven't been updated since mid-February. Is there anything we can do to help push the merge?

@samuelstjean


Contributor

samuelstjean commented Mar 19, 2015

It has been superseded by #576, since it was becoming a mess as the master branch was updated, but all the relevant discussion is indeed still here.

@arokem


Member

arokem commented Mar 19, 2015

Hi Felix - thanks for looking at this. We have indeed been very close to merging this for a while now. The most helpful thing you could do at this point is to look at #576, try it out, review it, and let me know whether you think it is missing something, or if you think it works well. If you think it is good, we can then go ahead and merge that one. Sorry for the slow progress on this issue, and thanks for your help with this!


@arokem


Member

arokem commented Mar 19, 2015

Closed in favor of #576
