
For those who are struggling to find positions for many optimized parameters #140

Closed
mingfeisun opened this issue Dec 15, 2017 · 5 comments


@mingfeisun

How do you find which position in the model each auto-named optimized parameter corresponds to?

The tuning process prints some output that helps you locate them.

First, check the definition of the get_space function in the "Hyperas search space" section of the tuning output. You will find something like this:

[screenshot of the printed get_space function]

This tells you the name Hyperas assigned to each parameter being tuned.
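
For reference, a sketch of what that printed get_space typically looks like (the exact keys and expressions depend on your template; the entries below are illustrative):

def get_space():
    return {
        # keys are derived from the identifier to the left of each {{...}}
        'Dropout': hp.uniform('Dropout', 0, 1),
        'Dense': hp.choice('Dense', [256, 512, 1024]),
        'Dropout_1': hp.uniform('Dropout_1', 0, 1),
    }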

Second, check the "Resulting replaced keras model" section of the tuning output. It looks something like this:

[screenshot of the replaced model code]

You will see that each {{...}} template has been replaced by a lookup such as space['Dropout'] or space['Dropout_1']. These lookups mark the exact positions of the optimized parameters, so at this point it is easy to match the reported values to the model.
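
As an illustration (a sketch of the idea, not the verbatim output), the replaced model pairs each space[...] key with its position in the original template:

model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout(space['Dropout']))    # was {{uniform(0, 1)}}
model.add(Dense(space['Dense']))        # was {{choice([256, 512, 1024])}}
model.add(Dropout(space['Dropout_1']))  # was the second {{uniform(0, 1)}}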

Hope this helps.

@sanesanyo

sanesanyo commented Sep 2, 2018

I have the following parameters in my model:
model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout({{uniform(0, 1)}}))
model.add(Dense({{choice([256, 512, 1024])}}))
model.add(Activation({{choice(['relu', 'sigmoid'])}}))
model.add(Dropout({{uniform(0, 1)}}))

model.add(Dense({{choice([128, 256, 512])}}))
model.add(Activation({{choice(['relu', 'sigmoid'])}}))
model.add(Dropout({{uniform(0, 1)}}))

model.add(Dense(10))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', metrics=['accuracy'],
              optimizer={{choice(['rmsprop', 'adam', 'sgd'])}})

model.fit(x_train, y_train,
          batch_size={{choice([64, 128])}},
          epochs=1,
          verbose=2,
          validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, verbose=0)

After Hyperas finishes searching for the best hyperparameters, it reports the following:
{'Activation': 1, 'Activation_1': 1, 'Dense': 2, 'Dense_1': 0, 'Dropout': 0.1602501347478713, 'Dropout_1': 0.11729755246044238, 'Dropout_2': 1, 'Dropout_3': 0.41266207281071243, 'add': 1, 'batch_size': 1, 'optimizer': 1}

I follow the entries up to 'Dropout_1', but after that I don't understand 'Dropout_2': 1 and 'add': 1. I added only three tunable Dense layers (the first Dense layer, which takes the input, is fixed), plus the output layer, yet I am getting Dropout entries for four layers, with 'Dropout_2' being 1. I am probably missing some caveat, so I hope someone can take a look and help me out.

Thanks a lot in advance.

@TBomberman

TBomberman commented Feb 1, 2019

I also get 'Dropout_2': 1. What does that mean? That all the neurons in the next layer should be dropped?

@TBomberman

I found out that Dropout_2 may not actually refer to the second dropout value. You have to check the generated create-model code in the output to see what Dropout_2 is actually referring to.
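
To sketch why the numbering can shift (my understanding of Hyperas's naming, so treat the details as assumptions): each key is derived from the identifier immediately to the left of the {{...}} expression, counted in source order across the whole template, including lines inside conditionals. The variable name n_layers below is made up for illustration:

model.add(Dropout({{uniform(0, 1)}}))   # key 'Dropout', then 'Dropout_1', ... for later repeats
model.add({{choice([Dropout(0.5), Activation('linear')])}})  # key 'add'; the value 1 is a choice index
n_layers = {{choice(['three', 'four'])}}   # key 'n_layers', from the identifier to the left
if n_layers == 'four':
    model.add(Dropout({{uniform(0, 1)}}))  # another 'Dropout_n', even when this branch is never taken

That would explain seeing more 'Dropout_n' keys than visible Dropout layers, and integer choice indices where you expected a uniform draw.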

@dumkar

dumkar commented Mar 9, 2019

If you want optim.minimize to print the actual values instead of the indices of the best parameters, pass eval_space=True as an extra argument.
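
A minimal usage sketch (create_model and data stand in for your own hyperas model and data functions):

from hyperopt import Trials, tpe
from hyperas import optim

best_run, best_model = optim.minimize(model=create_model,
                                      data=data,
                                      algo=tpe.suggest,
                                      max_evals=10,
                                      trials=Trials(),
                                      eval_space=True)  # report values, not choice indices
print(best_run)  # e.g. 'Dense': 512 instead of 'Dense': 1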

@dumkar

dumkar commented Mar 9, 2019

@mingfeisun if a name is specified for a layer or dropout, Hyperas could use that name as a prefix for the parameter. That would make things much easier to trace when you have a lot of parameters.

Right now it doesn't take the name into account:

model.add(Dropout(space['Dropout_1'], name='Dropout2'))
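
Under that proposal (hypothetical behavior, not what Hyperas currently does), the layer name would drive the key:

model.add(Dropout({{uniform(0, 1)}}, name='Dropout2'))
# proposed: would appear in the space as 'Dropout2' instead of the positional 'Dropout_1'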
