This repository has been archived by the owner on Sep 1, 2023. It is now read-only.

Create a simple tutorial app for using nupic #654

Closed
rhyolight opened this issue Feb 15, 2014 · 36 comments

@rhyolight
Member

This should be extremely newbie-friendly and as simple as possible. It should include a flat file that will be used as input data, plus step-by-step instructions (assuming NuPIC is already installed) on how to write a Python program that feeds that data into NuPIC, analyzes the output predictions, and does something useful with them.

This could start as a wiki, but should eventually go into some official docs on numenta.org or within the repo.
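
For a sense of scale, the flat input file could be something as small as the sketch below. The file contents and field names are invented for illustration, and io.StringIO stands in for a file on disk:

```python
import csv
import io

# Hypothetical flat input file for the tutorial: one timestamped reading per
# row. In a real tutorial this would be a file on disk (e.g. "data.csv");
# io.StringIO keeps this sketch self-contained. Field names are invented.
FLAT_FILE = io.StringIO(
    "timestamp,consumption\n"
    "2014-02-15 00:00,21.2\n"
    "2014-02-15 01:00,19.5\n"
    "2014-02-15 02:00,22.8\n"
)

# Each row becomes one record that a NuPIC model would consume per step.
rows = []
for row in csv.DictReader(FLAT_FILE):
    rows.append({"timestamp": row["timestamp"],
                 "consumption": float(row["consumption"])})

print(len(rows))                  # → 3
print(rows[0]["consumption"])     # → 21.2
```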

@kevinmartinjos
Contributor

I would like to help. I suppose I can get some help on IRC.

@rhyolight
Member Author

Sure thing, @Lonesword. I am usually on IRC (rhyolight). I'll assign this issue to you once you've signed our contributor license. Let's discuss your ideas here on this ticket.

@kevinmartinjos
Contributor

Contributor license signed.

@rhyolight rhyolight added this to the Sprint 16 milestone Feb 19, 2014
@rhyolight
Member Author

Thanks @Lonesword, you've been assigned this issue. Do you have any thoughts about the tutorial?

@kevinmartinjos
Contributor

For the hello world equivalent, how about making NuPIC learn an "ABCD" sequence (from a flat file, maybe?) and predict the next output when the user inputs a letter?

@subutai
Member

subutai commented Feb 19, 2014

@Lonesword An interactive tutorial like that sounds like a great idea.

Brainstorming a bit, there are two different levels we could go with for a hello world tutorial:

  1. We could do a tutorial that is algorithm focused, like your suggestion. The best way to do this is to interact with the spatial pooler and temporal pooler classes directly. This is lower level and requires some algorithmic understanding. There's an early version of something like your suggestion in examples/tp/hello_tp.py - this code could be used as a starting point for the tutorial.

  2. We could do a tutorial that is dataset or application focused. This tutorial would focus on how to run data through the OPF, understand the results, etc. hotgym would be an example of this. This would be higher level and require less algorithm knowledge. It would be easier to see how nupic could be used in a real world scenario.

I think both are valuable. Any thoughts on this? Other approaches? Do you want to work on 1) for now?

@kevinmartinjos
Contributor

@subutai I would like to work on 1). I believe such a low-level example does not exist for the spatial pooler. I've been fiddling around with the hotgym (client) example yesterday, and I was wondering what the bare minimum fields required in MODEL_PARAMS are. The dictionary was quite huge. I checked the wiki, but the documentation is incomplete.


@kevinmartinjos
Contributor

Also, the TP class is defined in some Python file, I believe, but I could not find it in the code base (probably because I'm not looking hard enough). I'd like to know where the source code of the SP and TP resides.


@subutai
Member

subutai commented Feb 21, 2014

@Lonesword The SP is in py/nupic/research/spatial_pooler.py. The TP is in py/nupic/research/TP.py. I'll see if I can find a simple SP example.

Great that you want to work on 1). I'll create a separate issue to track task 2) then. This issue will track 1).

@subutai
Member

subutai commented Feb 21, 2014

I created issue #667 to track 2).

@kevinmartinjos
Contributor

@subutai
I went through the SP source code. I think it would be better to do an example with the flat spatial pooler. I attempted a really simple program, but it throws an error. I hosted the code here:

https://github.com/lonesword/nupichelloworld

The error was:

  File "/home/kevin/nta/eng/lib/python2.7/site-packages/nupic/research/flat_spatial_pooler.py", line 186, in compute
    assert (numpy.size(inputArray) == self._numInputs)
  AssertionError

I don't know why this should happen. The numpy array I created is 1024 long, and self._numInputs too must be 1024. Or isn't it? The corresponding line in flat_spatial_pooler.py regarding numInputs is this:

  numInputs = numpy.array(inputShape).prod()

and:

  inputShape = (32, 32)

When I ran numpy.array((32,32)).prod() in the Python interactive shell, it gave me 1024. I do not know what went wrong.


@kevinmartinjos
Contributor

I would also like to add that I did not understand the last parameter of compute(self, inputArray, learn, activeArray).

What is activeArray? The first two had explanations as comments.

@breznak
Member

breznak commented Feb 22, 2014

Just a quick glance:


I'd think the active array is a list of the active columns. It is documented for sure; check the C++/Python versions of the SP for details.

@kevinmartinjos
Contributor

The compute() function in the spatial_pooler.py file has activeArray documented. It says:

  an array whose size is equal to the number of columns. Before the function returns, this array will be populated with 1's at the indices of the active columns, and 0's everywhere else.

Thank you.
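
That contract can be sketched without NuPIC at all; the column count and winning indices below are invented for illustration:

```python
# Illustration of the activeArray contract described above: compute() fills an
# array with one slot per column, 1 at the active columns and 0 elsewhere.
# The column count and winning indices here are invented for the sketch.
numColumns = 16
activeColumns = [2, 5, 11]        # hypothetical winners after inhibition

activeArray = [0] * numColumns
for c in activeColumns:
    activeArray[c] = 1

# Recovering the active indices back out of the array:
recovered = [i for i, v in enumerate(activeArray) if v == 1]
print(recovered)                  # → [2, 5, 11]
```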


@kevinmartinjos
Contributor

Sorry, I made a mistake. I had initialized the FlatSpatialPooler to work with a 4-element array and then supplied it a 1024-element array. Fixed now.


@kevinmartinjos
Contributor

@subutai @rhyolight

I have written a simple program that accepts an input vector and prints the list of columns that are active after inhibition. Any suggestions on how to proceed from here? The link:

https://github.com/lonesword/nupichelloworld/blob/master/helloworld.py

@subutai
Member

subutai commented Feb 23, 2014

@Lonesword That's great! I also saw your post on the mailing list. This weekend has been very busy for me, but I will take a look at it soon, hopefully later today or tomorrow.

@subutai
Member

subutai commented Feb 24, 2014

@Lonesword We are actually in the process of removing FlatSpatialPooler (see issue #627) so using the SpatialPooler would be ideal. You are right that the same input vector should usually result in the same output, but since the system is always learning that property is not always guaranteed. For example, with boosting you could have some columns "steal" patterns from other columns.

Looking through the code in the github repo I notice that in these lines a new random vector gets created before each call to example.run:

for i in range(10):  # We supply 10 different input vectors
  example.create_input()
  example.run()

In this case you would definitely see new output for each input. A couple of other suggestions:

You can use the SpatialPooler.getXXX() methods to figure out the actual size of the input, number of columns, etc. For example, self.flat.getNumColumns() returns the number of columns so you don't have to hardcode any numbers.

Also with numpy you can do print self.activeArray.nonzero() to print the indices of non-zero elements, so you don't need the print_results method.

@kevinmartinjos
Contributor

Thanks for the help. Yes, a random input vector gets created in the code pointed to by the link. What I meant is that I appended these lines to the end of the code:

for i in range(10):
  example.flat.compute(testinput, True, active)
  for j in range(4096):
    if active[j] != 0:
      print j,
  print " "
  active[0:] = 0

Here the same testinput is given as the inputArray 10 times, and I got different results. Since the FlatSpatialPooler is going to be scrapped, I guess I shouldn't be worried about it much. One thing I noticed is that the SpatialPooler was many times slower than the FlatSpatialPooler; Scott explained that the latter is optimized for the no-topology case.


@kevinmartinjos
Contributor

The hello world program [1] now works with the SpatialPooler. The code has been restructured to follow Numenta's code style.

[1] https://github.com/lonesword/nupichelloworld/blob/master/helloworld.py


@rhyolight
Member Author

Great job, Kevin! :)



@subutai
Member

subutai commented Feb 26, 2014

Hi @Lonesword - nice! I understand your earlier comments now. I pulled your latest code and made a couple of changes. There are a couple of options to set when you want to use a flat SP (no topology); in those cases it is much faster. The options are globalInhibition and potentialRadius. With these options you can pass in a one-dimensional shape, such as (1,1024), for inputShape.

I also changed a couple of other things. First, I set the sparsity to be about 2%, which is what we typically use (by setting numActiveColumnsPerInhArea). Second, the default learning rate in the class is actually way too high. With such a high learning rate the synapses of the first set of winning columns get fully connected, and they will tend to win for everything after that. I set synPermActiveInc to be much smaller than the default.

Try it out and see if this makes sense! Here are my changes:

  def __init__(self, inputShape, columnDimensions):
     :
    self.sp = SP(self.inputShape,
                 self.columnDimensions,
                 potentialRadius=self.inputSize,
                 numActiveColumnsPerInhArea=int(0.02 * self.columnNumber),  # 2% sparsity
                 globalInhibition=True,  # Don't use topology
                 synPermActiveInc=0.01
                 )

:

example = Example((1,1024), (1,2048))
print "Trying different vectors"
for i in range(3):
  example.create_input()
  example.run()
print "Trying identical vectors"
for i in range(4):
  example.run()
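
For reference, the 2% sparsity setting works out as follows with the snippet's own numbers (2048 columns); this is nothing beyond the arithmetic already in the constructor call:

```python
# Working through the 2% sparsity setting with the snippet's own numbers
# (2048 columns); this repeats the arithmetic from the constructor above.
columnNumber = 2048
numActiveColumnsPerInhArea = int(0.02 * columnNumber)
print(numActiveColumnsPerInhArea)           # → 40 active columns per step

sparsity = numActiveColumnsPerInhArea / float(columnNumber)
print(round(sparsity * 100, 2))             # → 1.95 (percent of columns)
```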

@kevinmartinjos
Contributor

I have made the suggested changes. The active columns are listed much faster now. Since learning is turned on, this example can be extended to show next-step predictions, right? That would involve the use of encoders, though, and it could be a separate tutorial. The reader might be curious to know what he/she can do with the active set of columns.


@subutai
Member

subutai commented Feb 26, 2014

@Lonesword Great! Yes, we can extend it to show the next prediction. To do this we will need to include the temporal pooler and have it learn sequences. It turns out you can actually bypass the encoders if you want and just train it on random sequences. (The code in examples/tp/hello_tp.py shows how to use the temporal pooler class.)

If you want to stick with the spatial pooler you can try adding some noise to the input pattern and see how the columns change. For example, if you add 10% noise, how much do the columns change?
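
One way to sketch the noise idea without NuPIC: flip a random fraction of bits in a binary input vector and count how many changed. The add_noise helper below is illustrative only, not a NuPIC API:

```python
import random

def add_noise(bits, noiseLevel, rng):
    """Return a copy of a binary vector with a noiseLevel fraction of its
    bits flipped. Illustrative helper only, not a NuPIC API."""
    noisy = list(bits)
    numFlips = int(noiseLevel * len(bits))
    for i in rng.sample(range(len(bits)), numFlips):  # distinct positions
        noisy[i] = 1 - noisy[i]
    return noisy

rng = random.Random(42)                       # fixed seed for repeatability
inputVector = [rng.randint(0, 1) for _ in range(1024)]
noisyVector = add_noise(inputVector, 0.10, rng)

flipped = sum(1 for a, b in zip(inputVector, noisyVector) if a != b)
print(flipped)                                # → 102 bits flipped (10% of 1024)
```

Feeding both vectors through the SP and comparing the resulting active columns would then show how robust the learned representation is.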

@rhyolight rhyolight modified the milestones: Sprint 17, Sprint 16 Feb 28, 2014
@rhyolight rhyolight modified the milestones: Sprint 16, Sprint 17 Feb 28, 2014
@subutai
Member

subutai commented Mar 4, 2014

@Lonesword Hi Kevin - just wondering if you plan to continue this? Even as-is I think your code could be very helpful to NuPIC users and useful to have in our examples directory. If you add in code to explore the effect of noise (or some other property) it could be really interesting.

@kevinmartinjos
Contributor

Oh, I'm sorry that I kept everyone waiting. It has been a busy week for me. I will definitely add the noise factor within two days.


@subutai
Member

subutai commented Mar 4, 2014

@Lonesword No worries. Take your time - happy to have your stuff in whenever!

@kevinmartinjos
Contributor

The effect of noise has been added. Please check the link [1].

[1] https://github.com/lonesword/nupichelloworld/blob/master/helloworld.py


@subutai
Member

subutai commented Mar 5, 2014

@Lonesword Great! I ran it and it worked well. I think this will be very helpful to others, as you've probably guessed from some of the recent mailing list messages. @rhyolight Should we add this as an example under examples/sp? What would be the next steps? Once it's a pull request I'll probably have some minor feedback on the code.

@rhyolight
Member Author

@Lonesword Can you fork nupic and add your example to the examples directory and submit a PR? If you need help with this, email me.

@tleyden

tleyden commented Mar 8, 2014

@Lonesword I played around with this a little, and when I bumped up the number of iterations on this line to for i in range(15):, I was getting identical SDRs even after adding the 20% noise. That is, more training makes it more robust to noise.

Great example btw!
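
A simple way to make that observation quantitative is to compare the active-column index sets from a clean run and a noisy run; a fully noise-robust SP would score 1.0. The index sets below are invented for illustration:

```python
# Quantifying how close two SDRs are: compare the active-column index sets
# from a clean run and a noisy run. The index sets below are invented.
sdrClean = {12, 40, 77, 103, 512, 900}    # active columns, clean input
sdrNoisy = {12, 40, 77, 103, 512, 901}    # active columns, 20% noise

overlap = len(sdrClean & sdrNoisy)
score = overlap / float(len(sdrClean))
print(overlap)                            # → 5 shared columns
print(round(score, 2))                    # → 0.83
```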

@kevinmartinjos
Contributor

@tleyden Thank you! I did not know that before.

@rhyolight: I'm having issues with my internet usage limits. I'll add it to the repo as soon as I can. I have already forked the nupic repo, but I guess I will have to clone it onto my hard drive before I can add the file to the examples directory. Is there a way to do it without having to download the entire repo?


@breznak
Member

breznak commented Mar 8, 2014

Is there a way to do it without having to download the entire repo?

Check the "shallow clone" option for git clone (git clone --depth 1).

@kevinmartinjos
Contributor

Here's what I have done:

  1. Forked the nupic repo
  2. Created a branch add_example
  3. Added the example program to the corresponding directory
  4. Made a pull request

Has anything happened at the other end?


@subutai
Member

subutai commented Mar 9, 2014

Great, thanks! Yes, I saw the pull request - looks like you did everything correctly. I'll probably have a few minor comments regarding code format, which I'll do on that PR.

@kevinmartinjos
Contributor

Sure. Glad that I got it right :)

