Not getting good results while using my own Condition Network #11
Comments
@asadabbas09: Thanks for your question! What is the architecture of your condition network? (e.g. GoogleNet or something?)
It's a VGG16 network trained for face recognition.
@asadabbas09: usually, for hidden neurons it's very easy to make them reach high activations / probabilities. Are you able to get high probabilities at all with e3=0?
@anguyen8 I've tried using e3=0 as well, but still no luck. I've also tried a couple of other Caffe models; only one of them (with two output classes) converges, and only for output neurons. I must be missing something.
@asadabbas09: One of the known problems is that optimization becomes less effective when a neuron is in a deep layer (e.g. in ResNet); it's harder to get such a neuron highly activated. However, VGG is not that deep, so I guess it must be something specific to the model you're using.
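For context on the e3=0 question above, the conditional sampling update being tuned here combines a prior term, a condition-network gradient, and a noise term scaled by the three epsilons. The sketch below is illustrative only: the function name and gradient arguments are assumptions, not the repo's actual API, and the gradients would come from the DAE prior and the condition network respectively.

```python
import numpy as np

# Hedged sketch of one conditional sampling step on the hidden code h:
#   h <- h + eps1 * g_prior + eps2 * g_condition + N(0, eps3^2)
# g_prior pulls h toward the image prior; g_condition is the gradient of
# the target neuron's activation (or log-probability) w.r.t. h, computed
# by backprop through the condition network.
def ppgn_step(h, g_prior, g_condition, eps1, eps2, eps3, rng):
    noise = rng.normal(0.0, eps3, size=h.shape) if eps3 > 0 else 0.0
    return h + eps1 * g_prior + eps2 * g_condition + noise

# With eps3 = 0 the update is deterministic gradient ascent, which is why
# it is a useful first check when a new condition network fails to converge.
rng = np.random.default_rng(0)
h = np.zeros(4)
h = ppgn_step(h, g_prior=np.ones(4), g_condition=np.ones(4),
              eps1=1e-5, eps2=1.0, eps3=0.0, rng=rng)
```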
I'm trying to use my own condition network and visualize some neurons in the `conv5_2` layer. The network had different layer names, so I changed `self.fc_layers` and `self.conv_layers` in `sampling_class.py`, and updated `3_hidden_conditional_sampling.sh` accordingly. I tried sweeping across the epsilon1, epsilon2, epsilon3 and learning-rate parameters and ran 5000 iterations, but the network fails to generate good images with high output probabilities.
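The layer-list edit described above is a common source of silent failures, since the names must match the custom network's `.prototxt` exactly. The helper and layer names below are hypothetical (not part of `sampling_class.py`), just a sketch of a sanity check one could run before sampling:

```python
# check_layers is a hypothetical helper that verifies every requested
# layer name actually exists in the network before sampling starts;
# a typo here would otherwise surface later as a confusing backward-pass
# failure or a silently wrong target blob.
def check_layers(requested, available):
    missing = [name for name in requested if name not in available]
    if missing:
        raise ValueError(f"layers not in prototxt: {missing}")
    return True

# Example with assumed VGG16-face layer names; with a real Caffe net one
# would pass something like set(net.blobs.keys()) as `available`.
fc_layers = ["fc6", "fc7", "fc8"]
conv_layers = ["conv5_1", "conv5_2", "conv5_3"]
available = {"conv5_1", "conv5_2", "conv5_3", "fc6", "fc7", "fc8"}
check_layers(fc_layers + conv_layers, available)
```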
I'm not sure if I'm sweeping across the right parameters.
Is there anything else I need to change to get this working with my own condition network, or how would you recommend proceeding in this case?