Support for greedy layer-wise pretraining #35

I'm trying to build a deep-belief network. Is there a way to use a set of pretrained weights to initialize the parameters of the MLP? Is there a plan to implement RBMs and autoencoders?

Comments
I believe PyLearn2 supports auto-encoders, or would require minor additions to do so. However, it's likely those architectures are being deprecated in the code... My biggest question is: what problem are you trying to solve? Conventional wisdom for "modern" NN training is that you're probably better off doing supervised training only, either with your own dataset or a related one if necessary.
I am using an autoencoder because only a portion of my dataset is labeled. I want to use the unlabeled portion to do unsupervised training and then finetune using the labeled portion. Pylearn2 does support autoencoders and RBMs; however, since they are trained using the TransformerDataset, I am not sure how to do this in scikit-neuralnetwork.
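For illustration, here is a minimal NumPy sketch of that recipe: pretrain one tied-weight autoencoder layer on the unlabeled data, then reuse the learned encoder weights to initialize the first hidden layer before supervised finetuning. All shapes, hyperparameters, and variable names here are made up for the example; this is not the library's API.

```python
import numpy as np

rng = np.random.RandomState(0)
X_unlabeled = rng.rand(1000, 64)   # unlabeled portion of the dataset
X_labeled = rng.rand(100, 64)      # labeled portion
y_labeled = rng.randint(0, 2, 100)

n_hidden = 32
W = rng.normal(scale=0.01, size=(64, n_hidden))
b_enc = np.zeros(n_hidden)
b_dec = np.zeros(64)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Unsupervised pretraining: minimise reconstruction error with tied weights.
lr = 0.1
for epoch in range(20):
    H = sigmoid(X_unlabeled @ W + b_enc)   # encode
    R = H @ W.T + b_dec                    # decode (tied weights)
    err = R - X_unlabeled                  # reconstruction error
    dH = (err @ W) * H * (1 - H)           # backprop through the encoder
    gW = X_unlabeled.T @ dH + err.T @ H    # gradient w.r.t. the tied W
    W -= lr * gW / len(X_unlabeled)
    b_enc -= lr * dH.mean(axis=0)
    b_dec -= lr * err.mean(axis=0)

# Supervised finetuning would start from here: W and b_enc now initialise
# the first hidden layer of whatever MLP is trained on the labeled data.
H_labeled = sigmoid(X_labeled @ W + b_enc)
```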
Ah, I see. Thanks for clarifying!
Thank you for the quick answer. I think I will try to build a prototype, and I will open a pull request if I have something.
@leconteur: Started looking into this... I presume the code will need to create a separate AutoEncoder model, train that, then extract the weights into an MLP model?
That is what I have so far. Pylearn2 has a pretrained layer type in its mlp module. I created a separate class that trains the autoencoder layers and returns the pretrained layer, and I use those layers in the mlp constructor. The only addition to the mlp class is a condition in the _create_layer method that calls the pylearn2 pretrained layer constructor.
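For readers following along, a hedged sketch of what that condition might look like, using pylearn2's PretrainedLayer class. The helper's name, the `spec` object, and its `pretrained` attribute are hypothetical stand-ins for whatever scikit-neuralnetwork's _create_layer actually receives.

```python
from pylearn2.models.mlp import PretrainedLayer

def create_layer(name, spec):
    """Build one MLP layer; `spec.pretrained` (hypothetical attribute)
    holds an already-trained pylearn2 autoencoder, or None."""
    if getattr(spec, "pretrained", None) is not None:
        # Wrap the trained autoencoder so pylearn2's MLP accepts it as a
        # hidden layer whose parameters come from unsupervised pretraining.
        return PretrainedLayer(layer_name=name, layer_content=spec.pretrained)
    # ...otherwise fall through to the normal layer constructors
    # (Sigmoid, Tanh, Softmax, etc.)...
    raise NotImplementedError
```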
@leconteur: Thanks! I found your fork; did you commit those changes to it?
Not yet; I have to clean up my code first. I plan to do this next week.
I can cope with messy code; ask @ssamot :-) Just let me know when you're ready!
For reference: prototype by @leconteur available as #50, and first-pass integration of just the auto-encoder in #52.
OK, #52 has landed; now going to prototype the pre-training code.
Added new PR #58 that does weight transfer from the existing auto-encoder module; see the PRETRAIN conditional in that PR. @leconteur, if you have any feedback about the API or functionality, let me know.
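A possible usage sketch of the resulting workflow, assuming the sknn.ae auto-encoder from #52 and a weight-transfer step like the one #58 describes. The `transfer` method name and the exact constructor parameters are assumptions, not confirmed by this thread.

```python
from sknn import ae, mlp
import numpy as np

rng = np.random.RandomState(0)
X_unlabeled = rng.rand(1000, 64)                # unlabeled portion
X_labeled = rng.rand(100, 64)                   # labeled portion
y_labeled = rng.randint(0, 2, 100)

# 1. Unsupervised pretraining on the unlabeled data.
pretrainer = ae.AutoEncoder(layers=[ae.Layer("Tanh", units=128)],
                            learning_rate=0.002, n_iter=10)
pretrainer.fit(X_unlabeled)

# 2. Build a supervised network with a matching hidden layer, copy the
#    pretrained weights across, then finetune on the labeled portion.
net = mlp.Classifier(layers=[mlp.Layer("Tanh", units=128),
                             mlp.Layer("Softmax")], n_iter=25)
pretrainer.transfer(net)
net.fit(X_labeled, y_labeled)
```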
To continue the discussion from the previous pull request: I'm not using the pylearn2 [...]. I think support for Rectifiers would be very welcome here, but it can be a separate PR.
Closed by #58.