
Global name 'hidden_inputs' is not defined #4

Closed
Robert0812 opened this issue Dec 16, 2013 · 3 comments

Comments

@Robert0812

When running `optimizer.run(100)`, an error occurred: global name 'hidden_inputs' is not defined, at line 323 of ./hebel/hebel/models/neurals_net.py.

Where should the global variable 'hidden_inputs' be defined? Thanks!
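For context, this kind of `NameError` is raised the first time execution reaches a reference to a name that was never bound in any enclosing scope, which usually points to a typo inside the library code rather than a variable the caller is expected to define. A minimal sketch of the failure mode (toy function, not Hebel's actual code):

```python
def feed_forward(input_data):
    # The function binds 'hidden_activations' but then references the
    # misspelled name 'hidden_inputs'. The NameError fires only when
    # this line actually executes, not at import time.
    hidden_activations = [2 * x for x in input_data]
    return sum(hidden_inputs)

try:
    feed_forward([1, 2, 3])
except NameError as err:
    print(err)  # e.g. "name 'hidden_inputs' is not defined"
```

Because the error surfaces only at runtime, it can hide in a code path (here, the training loop) that is not exercised until `optimizer.run()` is called.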

@Robert0812
Author

I suspect it should be 'hidden_activations' or simply 'input_data'.

@hannes-brt
Owner

Can you post the script you were trying to run?

@Robert0812
Author

Following the documentation at http://hebel.readthedocs.org/en/latest/getting_started.html, I ran the following code, and the error occurs at `optimizer.run(100)`:

```python
import pycuda.autoinit
from hebel.models import NeuralNet
from hebel.optimizers import SGD
from hebel.parameter_updaters import MomentumUpdate
from hebel.data_providers import MNISTDataProvider
from hebel.monitors import ProgressMonitor
from hebel.schedulers import exponential_scheduler, linear_scheduler_up

# Initialize data providers
train_data = MNISTDataProvider('train', batch_size=100)
validation_data = MNISTDataProvider('val')
test_data = MNISTDataProvider('test')

D = train_data.D    # Dimensionality of inputs
K = 10              # Number of classes

# Create model object
model = NeuralNet(n_in=train_data.D, n_out=K,
                  layers=[1000, 500, 500],
                  activation_function='relu',
                  dropout=True)

# Create optimizer object
progress_monitor = ProgressMonitor(
    experiment_name='mnist',
    save_model_path='examples/mnist',
    save_interval=5,
    output_to_log=True)

optimizer = SGD(model, MomentumUpdate, train_data, validation_data,
                learning_rate_schedule=exponential_scheduler(1., .995),
                momentum_schedule=linear_scheduler_up(.5, .9, 10))

# Run model
optimizer.run(100)

# Evaluate error on test set
test_error = model.test_error(test_data)
print "Error on test set: %.3f" % test_error
```
