Fiber to bundle coherence measures #828

Merged
merged 17 commits into nipy:master on Jun 16, 2016

Conversation

@stephanmeesters
Contributor

stephanmeesters commented Jan 7, 2016

Extends the previous pull request #762 with fiber to bundle coherence (FBC) measures used to clean up tractography results. Uses the lookup-table computations of pull request #762.

Files added:
-- dipy/tracking/fbcmeasures.pyx
Computes the FBC measures.
-- doc/examples/fiber_to_bundle_coherence.py
Demonstration and explanation of FBC measures applied to tractography of the optic radiation.
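
For orientation, here is a minimal, self-contained usage sketch. The synthetic fibers and the kernel parameters are illustrative only, and the three-value return of get_points_rfbc_thresholded is assumed from the example and test discussed further down.

import numpy as np
from dipy.denoise.enhancement_kernel import EnhancementKernel
from dipy.tracking.fbcmeasures import FBCMeasures

# Two synthetic parallel fibers of 20 points each, purely for illustration.
x = np.linspace(0, 10, 20)
streamlines = [np.column_stack((x, np.zeros(20), np.zeros(20))),
               np.column_stack((x, np.ones(20), np.zeros(20)))]

# Lookup table of the contextual kernel from pull request #762
# (D33=1.0, D44=0.02, t=1 are illustrative parameter values).
kernel = EnhancementKernel(1.0, 0.02, 1)

# Compute the FBC measures and keep only the points whose relative FBC
# (RFBC) exceeds the threshold.
fbc = FBCMeasures(streamlines, kernel)
points, colors, rfbc = fbc.get_points_rfbc_thresholded(0.125, emphasis=0.01)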

in HARDI by Combining Contextual PDE flow with
Constrained Spherical Deconvolution. PLoS One.
"""
self.compute(streamlines, kernel)


@arokem

arokem Mar 30, 2016

Member

I believe you need to pass num_threads here as well?


@stephanmeesters

stephanmeesters May 5, 2016

Contributor

Fixed!

cdef int [:] streamline_length
cdef double [:, :, :] streamline_points
cdef double [:, :] streamlines_lfbc
cdef double [:] streamlines_rfbc


@arokem

arokem Mar 30, 2016

Member

Why are these variables defined in this scope? Would it make sense to do these initializations inside the __init__ instead?


@stephanmeesters

stephanmeesters Mar 31, 2016

Contributor

I believe that the class variables need to be defined this way in Cython since they require an extension type. Check out this Cython docpage


@arokem

arokem Apr 1, 2016

Member

Awesome. Thanks for the pointer!

"""
self.compute(streamlines, kernel)

def get_points_rfbc_thresholded(self, threshold, showInfo=False, emphasis=.5):


@arokem

arokem Mar 30, 2016

Member

I prefer verbose to showInfo.

Either way, please don't use camelCase.


@stephanmeesters

stephanmeesters May 5, 2016

Contributor

It's been changed to verbose

# remove these.
streamlines_length = np.array([len(x) for x in py_streamlines],
dtype=np.int32)
minLength = min(streamlines_length)


@arokem

arokem Mar 30, 2016

Member

minLength => min_length


@arokem

arokem Mar 30, 2016

Member

If you are using an array, maybe call np.min here?


@stephanmeesters

stephanmeesters May 5, 2016

Contributor

For some reason np.min led to some problems when I tried it, so I stuck with Python's min.

streamlines_length = np.array([len(x) for x in py_streamlines],
dtype=np.int32)
minLength = min(streamlines_length)
if minLength < 10:


@arokem

arokem Mar 30, 2016

Member

Maybe make this threshold a key-word argument (in the __init__ and in the input to compute) with a reasonable default (e.g., 10)?

EDITED: added a question mark.


@stephanmeesters

stephanmeesters May 5, 2016

Contributor

It's been added as a parameter


# if the fibers are too short FBC measures cannot be applied,
# remove these.
streamlines_length = np.array([len(x) for x in py_streamlines],


@arokem

arokem Mar 30, 2016

Member

I believe using the shape attribute of the streamlines might be (marginally) faster than calling len on each one.


@Garyfallidis

Garyfallidis Apr 26, 2016

Member

true story


@stephanmeesters

stephanmeesters May 5, 2016

Contributor

The streamlines are not of equal length, so unfortunately calling shape on py_streamlines won't work. To gain speed, I converted the Python list py_streamlines, which holds fibers of variable length, into a fixed-size NumPy array sized to the longest fiber. The undefined fiber points of the shorter fibers are set to NaN.
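
For context, a minimal NumPy sketch of the padding scheme described above (function and variable names are illustrative, not the PR's actual code):

import numpy as np

def pad_streamlines(py_streamlines):
    """Pack variable-length streamlines (N_i x 3 arrays) into one fixed-size
    array of shape (n_fibers, max_length, 3), padding the unused tail of the
    shorter fibers with NaN."""
    lengths = np.array([s.shape[0] for s in py_streamlines], dtype=np.int32)
    max_length = lengths.max()
    out = np.full((len(py_streamlines), max_length, 3), np.nan)
    for i, s in enumerate(py_streamlines):
        out[i, :lengths[i], :] = s
    return out, lengths

# Example: two fibers of different lengths.
fibers = [np.zeros((12, 3)), np.ones((20, 3))]
points, lengths = pad_streamlines(fibers)
print(points.shape)   # (2, 20, 3); fiber 0 is NaN beyond index 11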


@arokem

arokem May 5, 2016

Member

Oh - sorry. Should've been more specific. I meant something like:

 [x.shape[0] for x in py_streamlines]


@arokem

arokem May 5, 2016

Member

And I don't think that we want to allocate the fixed-size numpy array. That will be a huge memory hog for large sets of streamlines!


@stephanmeesters

stephanmeesters May 7, 2016

Contributor

In case of outliers in fiber length that is true. I will have a look at changing it to an array of arrays implementation. Sorry for delayed reactions -- I am currently attending the ISMRM

streamlines_length = np.array([len(x) for x in py_streamlines],
dtype=np.int32)
minLength = min(streamlines_length)
numberOfFibers = len(py_streamlines)


@arokem

arokem Mar 30, 2016

Member

numberOfFibers => num_fibers (or some-such, but no camelCase, please).


@stephanmeesters

stephanmeesters May 5, 2016

Contributor

Fixed

minLength = min(streamlines_length)
numberOfFibers = len(py_streamlines)
self.streamline_length = streamlines_length
maxLength = max(streamlines_length)


@arokem

arokem Mar 30, 2016

Member

max_length?


@arokem

arokem Mar 30, 2016

Member

Also, might want to use np.max instead?

streamlines_nearestp = np.zeros((numberOfFibers, maxLength),
dtype=np.int32)
streamline_scores = np.zeros((numberOfFibers, maxLength),
dtype=np.float64) * np.nan


@arokem

arokem Mar 30, 2016

Member

Can any of these arrays be preallocated using cdef? Might make this even faster?


@stephanmeesters

stephanmeesters Mar 31, 2016

Contributor

Not completely sure, but I think these arrays are already taking advantage of the Cython speedup, because all those variables were defined with cdef (see the cdef declarations at the top of the function). I checked them with cython -a fbcmeasures.pyx and they appeared OK.

score_mp = np.zeros(numberOfFibers)
xd_mp = np.zeros(numberOfFibers, dtype=np.int32)
yd_mp = np.zeros(numberOfFibers, dtype=np.int32)
zd_mp = np.zeros(numberOfFibers, dtype=np.int32)


@arokem

arokem Mar 30, 2016

Member

Again, I wonder whether these arrays need to be numpy, or can be allocated using cdef instead? I am really not sure, but thought I would ask.


@stephanmeesters

stephanmeesters May 5, 2016

Contributor

See the previous answer regarding cdef

zd_mp = np.zeros(numberOfFibers, dtype=np.int32)

if have_openmp:
print("Running in parallel!")


@arokem

arokem Mar 30, 2016

Member

Print messages only if verbose (or whatever it's called) is true.


@stephanmeesters

stephanmeesters May 5, 2016

Contributor

I've added a verbose parameter

streamline_scores[lineId, pointId] = score_mp[lineId]

if have_openmp and num_threads is not None:
openmp.omp_set_num_threads(all_cores)


@arokem

arokem Mar 30, 2016

Member

Out of curiosity: what does this do for you here?


@stephanmeesters

stephanmeesters Mar 31, 2016

Contributor

This line resets the number of OpenMP threads back to the default (all cores). I will add a comment there.

"""

# finds the region of the fiber with a minimal length of 7 points in which
# the LFBC is the lowest


@arokem

arokem Mar 30, 2016

Member

Where does the number 7 come from? Does that also need to be set as a key-word argument?


@stephanmeesters

stephanmeesters May 5, 2016

Contributor

The number 7 was something we found experimentally a long time ago; unfortunately I don't have good evidence for why. However, it's now available as a parameter to the user.
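
For reference, a minimal sketch of such a windowed minimum, written here as a plain moving average via np.convolve; the min_moving_average function in the PR may differ in its details:

import numpy as np

def min_moving_average(values, window):
    """Return the smallest mean over any contiguous window of `window` points."""
    values = values[~np.isnan(values)]        # drop padded NaN entries
    kernel = np.ones(window) / window
    moving_avg = np.convolve(values, kernel, mode='valid')
    return moving_avg.min()

# Example: the weakest 7-point stretch of a per-point LFBC profile.
lfbc = np.array([0.9, 0.8, 0.85, 0.2, 0.25, 0.22, 0.3, 0.28, 0.26, 0.9, 0.95])
print(min_moving_average(lfbc, 7))   # mean LFBC of the weakest 7-point window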

1, streamline_scores)
averageTotal = np.mean(np.apply_along_axis(
lambda x:np.mean(np.extract(x[~np.isnan(x)] >= 0, x[~np.isnan(x)])), 1, streamline_scores))
if not averageTotal == 0:


@arokem

arokem Mar 30, 2016

Member

I might be missing something, but isn't averageTotal (or whatever it will be named, once it's not camelCased...) a 1D array? Doesn't apply_along_axis reduce it down only by one dimension?


@stephanmeesters

stephanmeesters Mar 31, 2016

Contributor

If I recall correctly, apply_along_axis is applied to streamline_scores which is a 2D array. It is applied to the first axis and by doing so loops over each fiber, and then applies the lambda function. This results in a 1D array of moving average values, and finally np.mean is used to get the total average of all moving average values (confusing I know..). In the end average_total is a number.
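
To make the two-step reduction concrete, here is a small sketch with dummy data (NaN marks the padded tail of the shorter fiber):

import numpy as np

# streamline_scores: one row of per-point LFBC values per fiber,
# padded with NaN to a common length.
streamline_scores = np.array([[0.5, 0.7, 0.6, np.nan],
                              [0.2, 0.3, 0.4, 0.5]])

# Per-fiber mean over the valid (non-NaN, non-negative) entries -> 1D array.
per_fiber = np.apply_along_axis(
    lambda x: np.mean(np.extract(x[~np.isnan(x)] >= 0, x[~np.isnan(x)])),
    1, streamline_scores)
print(per_fiber)        # per-fiber means: [0.6, 0.35]

# Mean of the per-fiber means -> a single scalar (average_total).
average_total = np.mean(per_fiber)
print(average_total)    # 0.475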


@arokem

arokem Apr 1, 2016

Member

Gotcha: I missed the effect of that call to np.mean! Makes perfect sense.


@arokem


Member

arokem commented Mar 30, 2016

Exciting to have this follow-up! I haven't had an opportunity to look closely at the example yet. For now, two general comments:

  • I haven't commented everywhere, but there are a lot of camelCased variables. Please follow PEP8 as much as possible in the variable naming.
  • Tests?

Great work!

@stephanmeesters


Contributor

stephanmeesters commented Apr 13, 2016

Thanks for the review. Based on it, I've made a lot of formatting changes and added parameters. A new test has been added that runs the FBC code on two fibers and checks whether the calculated RFBC value is correct.
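
For readers following along, here is a sketch of the shape such a test can take. The synthetic fibers, the kernel parameters and the reproducibility check are illustrative assumptions, not the committed test itself, and the three-value return of get_points_rfbc_thresholded is assumed from the snippet below.

import numpy as np
import numpy.testing as npt
from dipy.denoise.enhancement_kernel import EnhancementKernel
from dipy.tracking.fbcmeasures import FBCMeasures

def test_rfbc_two_fibers():
    # Two synthetic parallel fibers of 20 points each.
    x = np.linspace(0, 10, 20)
    streamlines = [np.column_stack((x, np.zeros(20), np.zeros(20))),
                   np.column_stack((x, np.ones(20), np.zeros(20)))]
    k = EnhancementKernel(1.0, 0.02, 1)

    # Compute the measures twice with identical input.
    rfbc_1 = FBCMeasures(streamlines, k).get_points_rfbc_thresholded(
        0, emphasis=0.01)[2]
    rfbc_2 = FBCMeasures(streamlines, k).get_points_rfbc_thresholded(
        0, emphasis=0.01)[2]

    # The mean RFBC should be finite and reproducible; the committed test
    # additionally pins np.mean(rfbc) to a value saved from a verified run,
    # which is what the discussion below is about.
    npt.assert_equal(np.isfinite(np.mean(rfbc_1)), True)
    npt.assert_almost_equal(np.mean(rfbc_1), np.mean(rfbc_2))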

fbc.get_points_rfbc_thresholded(0, emphasis=0.01)

# check RFBC against tested value
npt.assert_almost_equal(np.mean(rfbc_orig), 1.0549502181194517)


@arokem

arokem Apr 13, 2016

Member

Where did that number come from? Is this test passing on your machine (it's not passing on Travis currently: https://travis-ci.org/nipy/dipy/builds/122739850). Could this just be a typo?


@stephanmeesters

stephanmeesters Apr 14, 2016

Contributor

The test passes on my Mac, so I'm not really sure why the Travis bots have a different outcome...

The Travis bots are running the code in parallel, whereas I'm testing it single-threaded. Maybe there's a bug somewhere that causes the values to be slightly different; I'll have to investigate.


@stephanmeesters

stephanmeesters Apr 14, 2016

Contributor

Running the code in parallel on my Mac has the same outcome. I've tested it on a Windows machine too with the exact same outcome. So I don't know why the Travis bots return 1.0441612054810014 instead of the desired number. Any ideas?


@arokem

arokem Apr 14, 2016

Member

I can confirm that the test passes on my machine as well. Just to understand: is the number in the test (`1.05...`) based on running the code, or did you do a separate calculation to come up with it?


@stephanmeesters

stephanmeesters Apr 14, 2016

Contributor

Yes, the number was obtained by running this test code and saving its output (the mean RFBC for two fibers).


@arokem

arokem Apr 16, 2016

Member

Out of curiosity -- are you working on Anaconda Python? I suspect this could be the source of the difference... Otherwise, or in addition, could you insert a few print statements into your code and push that, so that we can figure out where the divergence starts happening?


@stephanmeesters

stephanmeesters Apr 16, 2016

Contributor

I'm not using Anaconda on my Mac and the Windows machine probably isn't either, but I will have to check. Adding print statements is a good idea, I'll look into it.

@arokem


Member

arokem commented Apr 14, 2016

BTW - for some reason that is as yet unclear to me, I am having trouble building the Cython code on Python 3.5 on my machine, with this error:


cythoning dipy/tracking/fbcmeasures.pyx to dipy/tracking/fbcmeasures.c
Traceback (most recent call last):
  File "setup.py", line 245, in <module>
    main(**extra_setuptools_args)
  File "setup.py", line 238, in main
    **extra_args
  File "/Users/arokem/anaconda/lib/python3.5/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/Users/arokem/anaconda/lib/python3.5/distutils/dist.py", line 955, in run_commands
    self.run_command(cmd)
  File "/Users/arokem/anaconda/lib/python3.5/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/Users/arokem/anaconda/lib/python3.5/site-packages/Cython/Distutils/build_ext.py", line 164, in run
    _build_ext.build_ext.run(self)
  File "/Users/arokem/anaconda/lib/python3.5/distutils/command/build_ext.py", line 338, in run
    self.build_extensions()
  File "/Users/arokem/source/dipy/setup_helpers.py", line 194, in build_extensions
    build_ext_class.build_extensions(self)
  File "/Users/arokem/anaconda/lib/python3.5/site-packages/Cython/Distutils/build_ext.py", line 171, in build_extensions
    ext.sources = self.cython_sources(ext.sources, ext)
  File "/Users/arokem/anaconda/lib/python3.5/site-packages/Cython/Distutils/build_ext.py", line 324, in cython_sources
    full_module_name=module_name)
  File "/Users/arokem/anaconda/lib/python3.5/site-packages/Cython/Compiler/Main.py", line 682, in compile
    return compile_single(source, options, full_module_name)
  File "/Users/arokem/anaconda/lib/python3.5/site-packages/Cython/Compiler/Main.py", line 635, in compile_single
    return run_pipeline(source, options, full_module_name)
  File "/Users/arokem/anaconda/lib/python3.5/site-packages/Cython/Compiler/Main.py", line 492, in run_pipeline
    err, enddata = Pipeline.run_pipeline(pipeline, source)
  File "/Users/arokem/anaconda/lib/python3.5/site-packages/Cython/Compiler/Pipeline.py", line 365, in run_pipeline
    data = phase(data)
  File "/Users/arokem/anaconda/lib/python3.5/site-packages/Cython/Compiler/Pipeline.py", line 130, in inject_utility_code_stage
    tree = utilcode.get_tree(cython_scope=context.cython_scope)
TypeError: get_tree() takes no keyword arguments

This problem does not appear on master, so seems specific to something here.

@arokem


Member

arokem commented Apr 14, 2016

Oh - scratch that. Seems to be a general problem (just tried rebuilding with the -f flag)

[Edited to be more comprehensible]

@Garyfallidis


Member

Garyfallidis commented Apr 14, 2016

What do you mean when you say a general problem?

@arokem


Member

arokem commented Apr 14, 2016

I mean I am getting this error on Python 3.5 (but not 2.7) on master.


@Garyfallidis


Member

Garyfallidis commented Apr 14, 2016

What is your cython version?

@Garyfallidis


Member

Garyfallidis commented Apr 14, 2016

If it's not the latest, I would suggest upgrading.

@arokem


Member

arokem commented Apr 14, 2016

#1029


@@ -0,0 +1,49 @@
from dipy.denoise.enhancement_kernel import EnhancementKernel


@Garyfallidis

Garyfallidis Apr 26, 2016

Member

We do not have automated coverage for cython files. Can you make a list of the functions that are being tested by this test? We need to make sure that all functions are tested.


@stephanmeesters

stephanmeesters May 4, 2016

Contributor

EnhancementKernel

  • __init__
  • create_lookup_table
  • estimate_kernel_size
  • k2
  • coordinate_map
  • kernel
  • euler_angles
  • R
  • get_lookup_table
  • get_orientations
  • get_sphere (not tested here, but tested in test_kernel.py)

FBCMeasures

  • __init__
  • get_points_rfbc_thresholded
  • compute
  • compute_rfbc
  • min_moving_average

All functions are tested


@Garyfallidis

Garyfallidis May 5, 2016

Member

Nice thank you for this.

tractography algorithms, since low FBCs indicate which fibers are isolated and
poorly aligned with their neighbors, see Fig. 1.
.. figure:: fbc_illustration.png


@Garyfallidis

Garyfallidis Apr 26, 2016

Member

This figure is not available.


@stephanmeesters

stephanmeesters May 5, 2016

Contributor

I've included it in the recent commit.

Here we implement FBC measures based on kernel density estimation in the
non-flat 5D position-orientation domain. First we compute the kernel density
estimator induced by the full lifted output of the tractography. Then, the


@stephanmeesters

stephanmeesters May 5, 2016

Contributor

I've added some extra details here. Lifted means that it's defined in the space of positions and orientations.

Acknowledgements
~~~~~~~~~~~~~~~~~~~~~~
Funded by the European Research Council under the European Community's Seventh


@Garyfallidis

Garyfallidis Apr 26, 2016

Member

We usually do not put acknowledgements about funding in the tutorials. People can have a look at the papers for the specifics. If you have specifically received money for software development in DIPY, then this needs to be added on another page. I hope this is okay.


@stephanmeesters

stephanmeesters May 5, 2016

Contributor

I have removed the funding acknowledgement here. If we make a future paper about this we can mention it there I guess, but it's already in the abstracts.

@stephanmeesters


Contributor

stephanmeesters commented May 4, 2016

Thanks for checking it out, @Garyfallidis. I've made some changes and added the figure.

@arokem


Member

arokem commented May 4, 2016

There are still a couple of comments outstanding, but I think it's very close to done!

@Garyfallidis


Member

Garyfallidis commented May 5, 2016

@stephanmeesters are you on it? Have you seen @arokem's last message?

@stephanmeesters


Contributor

stephanmeesters commented May 5, 2016

Sorry, I forgot to respond to a couple of comments; I think I've got them all now. Let me know if I missed something.

if min_length < min_fiberlength:
print("The minimum fiber length is 10 points. \
Shorter fibers were found and removed.")
py_streamlines = [x for x in py_streamlines if len(x) >= min_fiberlength]


@arokem

arokem May 5, 2016

Member

By the way, here as well:

[x for x in py_streamlines if x.shape[0] >= min_fiberlength]

would be faster

EDIT: I had a typo in the code, added ] :-)


@stephanmeesters

stephanmeesters May 7, 2016

Contributor

OK I see, thanks!

@stephanmeesters


Contributor

stephanmeesters commented May 10, 2016

Regarding the fixed-size array for the streamlines, what I've found so far is that NumPy is really designed for contiguous memory. An array of pointers to ndarrays doesn't appear to be a straightforward thing.

Streamlines in DIPY are generally represented by a Python list of ndarrays, but that doesn't work well for Cython since it requires the GIL (basically losing all speed benefits of Cython).

So the options are:

  • Rewrite everything to C-style arrays; however, this will make interfacing with NumPy functions harder
  • Rewrite it to use a flat 1D array plus an index map (offsets) to reference each fiber; this may lead to a performance loss since I'm not sure how expensive selecting part of a 1D array is (see the sketch after this list)

Let me know what you think and if you have any other ideas.
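
For illustration, a minimal sketch of the second option, a flat array plus an offset index (the names are mine, not the PR's); this is roughly the layout that the nibabel streamlines work mentioned below ends up standardizing:

import numpy as np

# Flatten a list of (N_i x 3) streamlines into one contiguous 2D array plus
# an offsets vector, so that fiber i lives in points[offsets[i]:offsets[i + 1]].
streamlines = [np.zeros((12, 3)), np.ones((20, 3)), 2 * np.ones((15, 3))]
lengths = np.array([s.shape[0] for s in streamlines], dtype=np.intp)
offsets = np.concatenate(([0], np.cumsum(lengths)))
points = np.concatenate(streamlines, axis=0)   # shape (sum(lengths), 3)

# Recover fiber 1 as a cheap view into the flat buffer.
fiber_1 = points[offsets[1]:offsets[2]]
assert fiber_1.shape == (20, 3)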

@arokem


Member

arokem commented May 10, 2016

This will probably all become easier once nipy/nibabel#391 is merged, and we start refactoring things here.

For now, my take is that we should punt on this, and use the nipy convention. What are your thoughts, @Garyfallidis ?

@stephanmeesters


Contributor

stephanmeesters commented May 27, 2016

Perhaps we can address this issue in a later pull request? The HBM conference is coming up and it would be great if I can put a link to the demo on my poster.

@arokem


Member

arokem commented May 27, 2016

This looks really close to ready to me. From my point of view the only thing remaining is the change of those two calls to len to references to the array shape attributes. @Garyfallidis : was there anything else you wanted addressed here before the merge?

Regarding link on your poster -- you should know that this will not appear on the dipy website until a release is made. You'll be able to link to the documentation examples on the master branch though.

@stephanmeesters


Contributor

stephanmeesters commented May 27, 2016

I've made the requested changes, please see this new commit. Regarding the link, would I be linking directly to the source code or is there perhaps a parsed HTML page available somewhere of the master branch?

@arokem


Member

arokem commented May 27, 2016

This is good to go from my point of view. @Garyfallidis: did you have any other comments here?

The link would be something like this: https://github.com/nipy/dipy/blob/master/doc/examples/contextual_enhancement.py

For the time being, we don't have a rendered html of the documentation as it is currently on master, but that's probably going to change soon with the GSoC project on that.

@@ -0,0 +1,280 @@
# -*- coding: utf-8 -*-


@Garyfallidis

Garyfallidis May 30, 2016

Member

Isn't the same figure already available at http://nipy.org/dipy/_images/stochastic_process.jpg?
Please delete the figure from this PR. Although tutorials can generate pictures, you cannot add external pictures to the examples folder. Please use the existing picture as you did in the other example!
Thanks in advance!


@stephanmeesters

stephanmeesters May 30, 2016

Contributor

Woops, I uploaded the wrong figure actually. I've now added fbc_illustration.png to doc/examples_built/_static, which is the same folder where stochastic_process.jpg was also placed. Let me know if this is OK.

@Garyfallidis


Member

Garyfallidis commented May 31, 2016

I still cannot merge this because of the many PEP8 errors. Look, for example, at fiber_to_bundle_coherence.py.

Please correct all PEP8 errors as soon as possible, so we can merge this hopefully before the conference and the release.

@stephanmeesters


Contributor

stephanmeesters commented May 31, 2016

I've run it through a PEP8 checker and cleaned up the code. The trailing white spaces in figure captions are kept since those were giving problems before when rendering the HTML.

@stephanmeesters


Contributor

stephanmeesters commented Jun 8, 2016

Just checking if you noticed the last commit. Do you think it's ready for merging?

@arokem


Member

arokem commented Jun 10, 2016

@Garyfallidis : what are your thoughts?

@arokem


Member

arokem commented Jun 15, 2016

@Garyfallidis : this is waiting for your confirmation.

@Garyfallidis


Member

Garyfallidis commented Jun 16, 2016

Thank you, Stephan. I am merging this now. But please, @stephanmeesters, make another small PR afterwards in which you remove lines 45, 222 and 223 of the tutorial, as they are not used. Also, in the same tutorial, correct PEP8 for line 156.

@Garyfallidis Garyfallidis merged commit 82573c7 into nipy:master Jun 16, 2016

1 check passed

continuous-integration/travis-ci/pr The Travis CI build passed
@stephanmeesters


Contributor

stephanmeesters commented Jun 16, 2016

Thanks. Will do.
