Load Experiment #18
When you create your Experiment class to load in the saved model, you need to pass the same keyword arguments (the layers, in particular) that you used when you created the model you saved.
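As a concrete sketch of that pattern (reusing the `input_layer`, `hidden1`, and `output_layer` sizes from the script quoted below; the `Experiment`/`load` calls follow the usage shown elsewhere in this thread, and module paths may differ between theanets versions):

```python
import theanets
from theanets.feedforward import Regressor  # module path assumed

# Rebuild the Experiment with the same constructor arguments used for training...
e = theanets.Experiment(Regressor, layers=(input_layer, hidden1, output_layer))

# ...then load the saved parameters into the already-constructed graph.
e.load('network.dat')
network = e.network
```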
Oh, ok, thanks. Just as feedback, I would like to add that it would be nice if the saved experiment also remembered those arguments, so they would not have to be passed again when loading.
Ok, I'll reopen this as a feature request. It's a bit tricky to do, because currently theanets constructs the computation graph before the loading takes place, and then loads the parameters into the computation graph. I think the experiment saving/loading could just store all of the kwargs along with the parameters, though.
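That behaviour did not exist in theanets at the time of this comment; purely as a hypothetical sketch (function and key names invented for illustration), storing the constructor kwargs next to the parameter arrays could look like this:

```python
import pickle

def save_experiment(path, kwargs, params):
    """Write the constructor kwargs and the parameter arrays side by side."""
    with open(path, 'wb') as handle:
        pickle.dump({'kwargs': kwargs, 'params': params}, handle)

def load_experiment(path):
    """Read back the kwargs (to rebuild the graph) and the params (to fill it)."""
    with open(path, 'rb') as handle:
        state = pickle.load(handle)
    return state['kwargs'], state['params']
```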
I think this is working now! If you get a chance to test and it works, please close the issue.
Closing after a month, please reopen if needed.
I have an array that is 1024 columns by 100 rows, and I want to use an autoencoder with 3 layers (input, hidden, output). Should the size of the neural network be 1024, N, 1024?
Yes. The rows in a dataset are treated as training examples, and the columns are treated as variables.
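A rough sketch of that setup, assuming the Autoencoder class and the Experiment usage shown elsewhere in this thread (N = 256 is an arbitrary illustrative choice):

```python
import numpy as np
import theanets
from theanets.feedforward import Autoencoder  # module path assumed

data = np.random.randn(100, 1024).astype('float32')  # 100 examples (rows), 1024 variables (columns)
N = 256  # hidden-layer size, chosen only for illustration

e = theanets.Experiment(Autoencoder, layers=(1024, N, 1024))
e.run(data, data)  # train and validate on the same array
```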
Thank you for the prompt reply.
There's not a command for that, but it's easy to do yourself.
I get such a statement at the end of training the autoencoder. Is it something I should worry about?
No, this is how training ends using any of the SGD-based trainers (NAG, Rprop, RmsProp). It indicates that the validation loss improved by less than the minimum required amount for several consecutive validations, so the trainer ran out of patience and stopped.
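For reference, the point at which training "runs out of patience" can be tuned. A sketch, assuming `patience` and `min_improvement` are accepted as keyword arguments the same way `optimize` and `num_updates` are elsewhere in this thread (names and placement may differ between theanets versions):

```python
import numpy as np
import theanets
from theanets.feedforward import Autoencoder  # module path assumed

data = np.random.randn(100, 1024).astype('float32')

e = theanets.Experiment(
    Autoencoder,
    layers=(1024, 256, 1024),
    optimize='nag',
    patience=10,           # validations to wait for an improvement (assumed name)
    min_improvement=0.01,  # relative improvement that counts as progress (assumed name)
)
e.run(data, data)
```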
Thank you, Mr. Johnson, for the reply and the assistance you have offered.
I am interested in autoencoders and I prefer to use layerwise optimization. It would be good to know how small the learning rate can go. Thanks for the help in advance.
During training, are the weights and biases updated one row of training data at a time, or in batches? And in the case of autoencoders, is the "cost function" that is minimized just the reconstruction error between the input and the output?
Hi - I'll answer these questions here, but in the future would you mind posting usage questions like this on the mailing list for the project? You can sign up at https://groups.google.com/forum/#!forum/theanets.

For the learning rate and momentum: the default learning rate is 1e-4, and the default momentum is 0.9. You can see the default values by invoking your training script with the --help flag.

For the training question: the weights and biases are updated after every minibatch. If you are using the logging output during training, then one line gets printed out after each training iteration (a fixed number of minibatches).
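To make those defaults concrete, a sketch of overriding them, again following the constructor-keyword style used in this thread (whether these exact keyword names are accepted may depend on the theanets version):

```python
import numpy as np
import theanets
from theanets.feedforward import Autoencoder  # module path assumed

data = np.random.randn(100, 1024).astype('float32')

e = theanets.Experiment(
    Autoencoder,
    layers=(1024, 256, 1024),
    optimize='sgd',
    learning_rate=1e-4,  # the default mentioned above
    momentum=0.9,        # the default mentioned above
)
e.run(data, data)
```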
I save out a network like so:
```python
e = Experiment(Regressor, layers=(input_layer, hidden1, output_layer), optimize='hf', num_updates=30, verbose='True')
e.run(dataset, dataset)
e.save('network.dat')
```
Then when I'm trying to load it back in:
```python
network = theanets.Experiment(theanets.Network).load('network.dat')
```
I get the following error message, and I'm not sure what I am doing wrong.
```
Traceback (most recent call last):
  File "test.py", line 10, in <module>
    network = theanets.Experiment(theanets.Network).load('network.dat')
  File "/usr/local/lib/python2.7/dist-packages/theanets/main.py", line 90, in __init__
    self.network = self._build_network(network_class, **kw)
  File "/usr/local/lib/python2.7/dist-packages/theanets/main.py", line 103, in _build_network
    return network_class(activation=activation, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/theanets/feedforward.py", line 107, in __init__
    self.x.tag.test_value = np.random.randn(DEBUG_BATCH_SIZE, layers[0])
TypeError: 'NoneType' object has no attribute '__getitem__'
```