I have some problem. #1

Open
mostoo45 opened this issue Feb 28, 2018 · 19 comments

Comments

@mostoo45

I am currently using your code from ProtoNet-Omniglot.ipynb.

I have not changed the code, but the accuracy and loss values do not change during training.

I am using TensorFlow 1.3.

@abdulfatir
Owner

Hi @mostoo45

How do I reproduce the issue?

@mostoo45
Author

Hi @abdulfatir,
Thank you; I am studying your code.
I got the following error in cell In [7]:

ValueError: Variable encoder/conv_1/conv2d/kernel already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

File "", line 3, in conv_block
conv = tf.layers.conv2d(inputs, out_channels, kernel_size=3, padding='SAME')
File "", line 3, in encoder
net = conv_block(x, h_dim, name='conv_1')
File "", line 16, in
emb_x = encoder(tf.reshape(x, [num_classes * num_support, im_height, im_width, channels]), h_dim, z_dim)

So I added the following line at the top of that cell, before the placeholders:

tf.reset_default_graph()  # added line
x = tf.placeholder(tf.float32, [None, None, im_height, im_width, channels])
q = tf.placeholder(tf.float32, [None, None, im_height, im_width, channels])
x_shape = tf.shape(x)
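For context, a minimal sketch of why the reset helps, assuming the TF 1.x graph-building style used in the notebook (the conv_block and scope names come from the traceback above; the rest is illustrative, not the repository's exact code):

import tensorflow as tf  # TF 1.x

def conv_block(inputs, out_channels, name):
    # Each call registers variables such as <name>/conv2d/kernel in the default graph.
    with tf.variable_scope(name):
        return tf.layers.conv2d(inputs, out_channels, kernel_size=3, padding='SAME')

# Re-running the graph-building cell in a notebook adds the same variable names to the
# still-live default graph, which raises the "already exists" ValueError.
# Clearing the graph first avoids that:
tf.reset_default_graph()
x = tf.placeholder(tf.float32, [None, 28, 28, 1])
net = conv_block(x, 64, name='conv_1')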

@mijung-kim

@mostoo45, I just ran this on my server and it worked flawlessly. FYI, I use TF 1.6.

@bdutta19

I have problems reproducing the results too. I tried to reproduce them, but the loss value doesn't change. I thought it could be an initializer issue and tried a few different initializers, but no luck. I also tried TF 1.3 and TF 1.6; neither converges.

@mostoo45
Author

Instead of the .ipynb, I use a .py script.
By using the .py script instead of the notebook, I got loss and accuracy values similar to those reported for the existing code.

@bdutta19

Hi, are you on CPU or GPU? I just tried converting the code to a .py file; below are the losses. As you can see, they are not changing at all.
Also, here is a gist of my .py file: proto-net-omniglot

I don't see anything wrong with the code, so I will keep looking.

(tf16) ➜ Experiments python proto-nets-omnoglot.py
(4112, 20, 28, 28)
2018-03-27 10:00:22.417629: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
[epoch 1/20, episode 50/100] => loss: 2.30259, acc: 0.10000
[epoch 1/20, episode 100/100] => loss: 2.30259, acc: 0.10000
[epoch 2/20, episode 50/100] => loss: 2.30259, acc: 0.10000
[epoch 2/20, episode 100/100] => loss: 2.30259, acc: 0.10000
[epoch 3/20, episode 50/100] => loss: 2.30259, acc: 0.10000
[epoch 3/20, episode 100/100] => loss: 2.30259, acc: 0.10000
[epoch 4/20, episode 50/100] => loss: 2.30259, acc: 0.10000
[epoch 4/20, episode 100/100] => loss: 2.30259, acc: 0.10000
[epoch 5/20, episode 50/100] => loss: 2.30259, acc: 0.10000
[epoch 5/20, episode 100/100] => loss: 2.30259, acc: 0.10000
[epoch 6/20, episode 50/100] => loss: 2.30259, acc: 0.10000
[epoch 6/20, episode 100/100] => loss: 2.30259, acc: 0.10000

@mostoo45
Author

I use both of them: TF 1.3 and Python 3.

@PytaichukBohdan

@mostoo45 Restarting the kernel and clearing the output worked for me.

@themis0888

Same problem for me.
I ran Proto-MiniImagenet and got the following:
[epoch 93/100, episode 100/100] => loss: 2.99573, acc: 0.05000
[epoch 94/100, episode 50/100] => loss: 2.99573, acc: 0.05000
[epoch 94/100, episode 100/100] => loss: 2.99573, acc: 0.05000
[epoch 95/100, episode 50/100] => loss: 2.99573, acc: 0.05000
[epoch 95/100, episode 100/100] => loss: 2.99573, acc: 0.05000
[epoch 96/100, episode 50/100] => loss: 2.99573, acc: 0.05000
[epoch 96/100, episode 100/100] => loss: 2.99573, acc: 0.05000
[epoch 97/100, episode 50/100] => loss: 2.99573, acc: 0.05000
[epoch 97/100, episode 100/100] => loss: 2.99573, acc: 0.05000
[epoch 98/100, episode 50/100] => loss: 2.99573, acc: 0.05000
[epoch 98/100, episode 100/100] => loss: 2.99573, acc: 0.05000
[epoch 99/100, episode 50/100] => loss: 2.99573, acc: 0.05000
[epoch 99/100, episode 100/100] => loss: 2.99573, acc: 0.05000
[epoch 100/100, episode 50/100] => loss: 2.99573, acc: 0.05000
[epoch 100/100, episode 100/100] => loss: 2.99573, acc: 0.05000

The accuracy is the same for every episode.
The weird thing is that this wasn't happening for Proto-Omniglot.
My TensorFlow is the GPU build, version 1.3.

@ylfzr

ylfzr commented Jun 5, 2018

@themis0888, I think the problem is that you put your data in the wrong place, so the data is not actually fed into the model.
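A minimal sanity check for that, assuming train_dataset is the NumPy array built by the notebook's data-loading cell (the "(4112, 20, 28, 28)" print in the log above appears to come from such an array):

# Hypothetical check on the loaded Omniglot array before training starts.
print(train_dataset.shape)  # expected roughly (n_classes, n_examples_per_class, 28, 28)
assert train_dataset.shape[0] > 0, 'No classes loaded; the data directory is probably wrong.'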

@abdulfatir
Owner

Many people are facing this issue. Can someone look into it?

@themis0888

themis0888 commented Sep 27, 2018

@guohan950106 Hello, this is the student who faced this issue #1.
Actually, I didn't face that problem anymore after that day. I did nothing, but it just suddenly went away, so I could not figure out what the problem was.
Maybe you can reimplement this on your own to solve the problem.

@ankishb

ankishb commented Jan 19, 2019

@abdulfatir
I am facing the same issue. This is the output after running your code:

[epoch 1/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 1/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 2/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 2/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 3/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 3/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 4/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 4/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 5/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 5/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 6/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 6/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 7/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 7/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 8/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 8/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 9/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 9/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 10/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 10/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 11/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 11/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 12/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 12/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 13/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 13/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 14/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 14/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 15/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 15/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 16/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 16/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 17/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 17/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 18/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 18/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 19/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 19/20, episode 100/100] => loss: 4.09434, acc: 0.01667
[epoch 20/20, episode 50/100] => loss: 4.09434, acc: 0.01667
[epoch 20/20, episode 100/100] => loss: 4.09434, acc: 0.01667

I found that the problem is that the gradient is not flowing backward; it is zero at each step.

Did you find any solution? Any suggestions?
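One way to check a diagnosis like this in TF 1.x, as a rough sketch (loss, sess, and feed_dict are assumed to be the loss tensor, session, and episode feed dict already defined in the notebook):

# Inspect the overall gradient magnitude for one episode.
grads = tf.gradients(loss, tf.trainable_variables())
grad_norm = tf.global_norm([g for g in grads if g is not None])
print(sess.run(grad_norm, feed_dict=feed_dict))  # a value near 0.0 would confirm no gradient signal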

@NanYoMy

NanYoMy commented Mar 13, 2019

awesome job!

@sebastianpinedaar

@abdulfatir
I am facing the same issue. This is the output after running your code:

[epoch 1/20, episode 50/100] => loss: 4.09434, acc: 0.01667
…
[epoch 20/20, episode 100/100] => loss: 4.09434, acc: 0.01667

I found that the problem is that the gradient is not flowing backward; it is zero at each step.

Did you find any solution? Any suggestions?

I found that what @ylfzr mentioned is the issue. I was getting the same numbers. It turns out that managing folders in Colab can be a little messy, and if you don't pay attention you can miss the right data location (that was my case).

@wdayang

wdayang commented Jul 4, 2020

I also faced the problem of the accuracy and loss not changing.

@wdayang

wdayang commented Jul 4, 2020

@themis0888, I think the problem is that you put your data in the wrong place, so the data is not actually fed into the model.

Yes, after fixing the location of the data, the accuracy and loss change.

@ali7amdi

ali7amdi commented Sep 22, 2020

@themis0888, I think the problem is that you put your data in the wrong place, so the data is not actually fed into the model.

Yes, after fixing the location of the data, the accuracy and loss change.

Hi @wdayang,
How did you fix the location of the data so that the accuracy and loss change?

@HonFii

HonFii commented Jul 5, 2022

If your accuracy and loss do not change at all after multiple episodes, it is most likely because your dataset is misplaced. The correct location should be: prototypical-networks-tensorflow-master\data\omniglot\data\Alphabet_of_the_Magi, and so on for the other alphabet folders (a quick check is sketched after the log below).

[epoch 1/20, episode 5/100] => loss: 3.60291, acc: 0.43667
[epoch 1/20, episode 10/100] => loss: 3.25432, acc: 0.55667
[epoch 1/20, episode 15/100] => loss: 3.09199, acc: 0.57333
[epoch 1/20, episode 20/100] => loss: 2.91092, acc: 0.60333
[epoch 1/20, episode 25/100] => loss: 2.78092, acc: 0.59000
[epoch 1/20, episode 30/100] => loss: 2.63616, acc: 0.62667
[epoch 1/20, episode 35/100] => loss: 2.50083, acc: 0.61333
[epoch 1/20, episode 40/100] => loss: 2.40846, acc: 0.69000
[epoch 1/20, episode 45/100] => loss: 2.27202, acc: 0.72667
[epoch 1/20, episode 50/100] => loss: 2.05044, acc: 0.79000
[epoch 1/20, episode 55/100] => loss: 2.03263, acc: 0.78667
[epoch 1/20, episode 60/100] => loss: 1.90013, acc: 0.79667
[epoch 1/20, episode 65/100] => loss: 1.90940, acc: 0.74000
[epoch 1/20, episode 70/100] => loss: 1.69886, acc: 0.80333
[epoch 1/20, episode 75/100] => loss: 1.66013, acc: 0.81000
[epoch 1/20, episode 80/100] => loss: 1.66992, acc: 0.83333
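A rough way to verify that layout before training, assuming the repository root is the working directory (the folder names are taken from the path above):

import os

# Hypothetical check: confirm the Omniglot alphabets sit where the notebook expects them.
data_root = os.path.join('data', 'omniglot', 'data')
n_alphabets = len(os.listdir(data_root)) if os.path.isdir(data_root) else 0
print('alphabet folders found:', n_alphabets)
assert os.path.isdir(os.path.join(data_root, 'Alphabet_of_the_Magi')), \
    'Dataset not found where the notebook expects it.'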
