This repository has been archived by the owner on Jul 10, 2021. It is now read-only.

Construction from Layer specification objects. #19

Merged
Merged 9 commits into master from params on Apr 27, 2015

Conversation

alexjc
Member

@alexjc alexjc commented Apr 27, 2015

New syntax for specifying layers that makes it easier to pass in (and document) parameters. It also provides error checking by default.

Updates #3, #6, #12, #18.
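The idea can be sketched as follows. This is a hypothetical illustration of a layer-specification object with construction-time validation; the class name matches the PR title, but the exact attributes and valid type names here are assumptions, not necessarily the API this PR introduces.

```python
# Hypothetical sketch of a Layer specification object (illustrative only).
class Layer:
    """Declarative layer spec with error checking at construction time."""

    VALID_TYPES = {"Rectifier", "Sigmoid", "Tanh", "Softmax", "Linear"}

    def __init__(self, type, units=None, dropout=None):
        if type not in self.VALID_TYPES:
            raise ValueError("Unknown layer type: %r" % (type,))
        self.type = type
        self.units = units
        self.dropout = dropout  # per-layer override; global setting is the fallback

# Old tuple style (converted automatically, per the commit notes):
legacy = [("Rectifier", 100), ("Softmax",)]

# New object style: parameters are named, documented, and validated.
layers = [Layer("Rectifier", units=100), Layer("Softmax")]
```

The named-parameter form is what enables the "error checking by default": a typo in a layer type or parameter fails loudly at construction rather than deep inside training.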

…ral network, also used internally. The previous tuple types are currently converted automatically. All tests pass.
…onstruction parameters. A global construction parameter called dropout (optionally a float, or default 0.5) is used as the fallback.

Updates #12.
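The global-dropout fallback described in the commit message could look like this. A minimal sketch, assuming the resolution rule "per-layer value wins, otherwise the global parameter applies, where the global may be a float or simply enabled with the 0.5 default"; the helper name is invented for illustration.

```python
# Sketch of the dropout fallback rule (illustrative names).
DEFAULT_DROPOUT = 0.5

def effective_dropout(layer_dropout, global_dropout):
    """Resolve the dropout probability for one layer."""
    if layer_dropout is not None:
        return layer_dropout          # explicit per-layer setting wins
    if global_dropout is True:
        return DEFAULT_DROPOUT        # enabled globally with the default of 0.5
    if isinstance(global_dropout, float):
        return global_dropout         # explicit global float
    return None                       # dropout disabled
```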
…n layers, and expects a 2D tuple for the size of the pools. Default is no pooling, i.e. (1,1), but it can be (2,2) or (4,4).

Updates #3.
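The pool-size parameter from that commit could be validated roughly like this. A sketch only; the function name is hypothetical, and the rule (2D tuple of positive integers, defaulting to no pooling) comes from the commit message above.

```python
# Sketch of validating the 2D pool-size tuple on convolution layers.
# (1, 1) means no pooling; (2, 2) or (4, 4) downsample. Illustrative name.
def check_pool_shape(pool_shape=(1, 1)):
    if (not isinstance(pool_shape, tuple) or len(pool_shape) != 2
            or any(p < 1 for p in pool_shape)):
        raise ValueError("pool_shape must be a 2D tuple of positive "
                         "integers, got %r" % (pool_shape,))
    return pool_shape
```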
@coveralls

Coverage Status

Coverage remained the same at 100.0% when pulling 99f9155 on params into c638eef on master.

@coveralls

Coverage Status

Coverage remained the same at 100.0% when pulling bea3bba on params into c638eef on master.

@alexjc
Member Author

alexjc commented Apr 27, 2015

I think this one is ready for review, @ssamot. Thanks :-)

@coveralls

Coverage Status

Coverage remained the same at 100.0% when pulling 28bbc16 on params into c638eef on master.

@ssamot
Contributor

ssamot commented Apr 27, 2015

I think this is overall great, but having just one Layer type might prove too confusing: new users might be confused about having to define a softmax layer with a kernel. On the other hand, a separate class for each layer type might also be too much, with too many objects flying around. I think the proper level of abstraction is Layer, PoolingLayer and ConvolutionLayer; what do you think?

@alexjc
Member Author

alexjc commented Apr 27, 2015

I'm not sure about having a separate Pooling layer, because pooling is implemented as part of the convolution in pylearn2, so decoupling it might be a bit fragile; we'd have to check that Pooling is only ever added after Convolution, and only once.

For the Convolution, that sounds reasonable... If we can support more convolution layer types at the same time, why not! I'll make a separate ticket though.
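The ordering check described above, if pooling were ever split into its own layer type, could be a one-pass scan over the layer types. This is a sketch of that hypothetical check, not code from the PR; the function name is invented.

```python
# Sketch: if Pooling were a standalone layer type, it must directly follow a
# Convolution, which also rules out two Pooling layers in a row.
def validate_layers(types):
    """types: list of strings, e.g. ["Convolution", "Pooling", "Rectifier"]."""
    for i, t in enumerate(types):
        if t == "Pooling":
            if i == 0 or types[i - 1] != "Convolution":
                raise ValueError("Pooling at index %d must directly follow "
                                 "a Convolution layer." % i)
    return True
```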

@coveralls

Coverage Status

Coverage remained the same at 100.0% when pulling fcc28a5 on params into c638eef on master.

@ssamot
Contributor

ssamot commented Apr 27, 2015

Agreed; the rest looks great and can be pulled.

@alexjc
Member Author

alexjc commented Apr 27, 2015

Created new ticket #20 for improving the convolution support.

ssamot added a commit that referenced this pull request Apr 27, 2015
Construction from Layer specification objects.
@ssamot ssamot merged commit e0e2b89 into master Apr 27, 2015
@ssamot ssamot deleted the params branch April 27, 2015 16:01