Each time I run Keras, I get a different result. #2743
By default, Keras's model.compile() sets the shuffle argument to True. You should set the NumPy seed before importing Keras, e.g.:
Most of the provided Keras examples follow this pattern.
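The snippet referenced above did not survive extraction; a minimal sketch of the pattern (the seed value and the commented Keras imports are illustrative, not from the original comment):

```python
# Minimal sketch (seed value is arbitrary): fix NumPy's global RNG
# *before* Keras is imported, so any layer initializer that draws from
# it is deterministic across runs.
import numpy as np
np.random.seed(1337)

# Keras imports come only after the seed is set, e.g.:
# from keras.models import Sequential
# from keras.layers import Dense

# With the seed fixed, "random" draws repeat exactly:
np.random.seed(1337)
first = np.random.rand(4)
np.random.seed(1337)
second = np.random.rand(4)
assert (first == second).all()
```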
Hi amirothman, thanks for the prompt response. model.compile does not take a shuffle argument, whereas model.fit does, so I passed False to that argument. Even after that, it gives me inconsistent results. I would be highly grateful for any other way to obtain a consistent result. FYI, please find my code
The model weights are initialised randomly according to the initialization scheme.
Hi Antreas, thank you for the response. Thanks in advance.
I also read somewhere (the link escapes me right now) that your version of Theano may ignore your seed. Make sure you have the latest version of Theano.
Hi @amirothman Thanks for the information.
@spraveengupta Possibly related to this: #2479?
Hi Joel, I see that issue #2479 is closed. However, my Theano is a bleeding-edge version and I have the latest version of Keras installed on my machine. Regards,
You could recreate the same result at will by saving the model weights immediately upon initialization and then loading them for future learning experiments. Aka: initialize once, reuse.
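A hedged sketch of that initialize-once-reuse idea, with NumPy arrays standing in for the model's weight tensors (with Keras itself the same pattern is model.save_weights(...) right after construction and model.load_weights(...) before each run; the file name here is made up):

```python
# Sketch of "initialize once, reuse": persist the freshly initialized
# weights, then reload them at the start of every experiment so all runs
# begin from the identical starting point.
import os
import tempfile
import numpy as np

rng = np.random.default_rng()
init_w = rng.standard_normal((8, 8))      # one fresh "layer" initialization

path = os.path.join(tempfile.mkdtemp(), "init_w.npy")
np.save(path, init_w)                     # save immediately ...

reloaded = np.load(path)                  # ... reload for every later run
assert np.array_equal(init_w, reloaded)   # identical starting weights
```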
@spraveengupta In your example, you don't include the Keras import statements. Have you really set the seed before the imports?
Hi @mbollmann, thanks for the suggestion. I have set the seed before importing the Keras libraries; I edited the example that way just to show that I had set it. Anyway, as there is a way to produce a consistent result, I am closing the issue. Thanks once again for all your valuable suggestions.
It seems that I solved this problem in this way: |
I have fixed the random seed like this:
and disabled shuffle like this:
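The two snippets did not survive extraction; a plausible reconstruction of the pattern (seed values and the commented fit arguments are placeholders, not the original code):

```python
# Step 1: fix the seeds before anything random happens.
import random
import numpy as np

random.seed(1)
np.random.seed(1)

# Step 2: disable shuffling so batches arrive in the same order each run
# (the fit arguments below are placeholders):
# model.fit(x_train, y_train, epochs=10, batch_size=32, shuffle=False)

# With the seed fixed, any shuffling that remains is also repeatable:
np.random.seed(1)
order_a = np.random.permutation(10)
np.random.seed(1)
order_b = np.random.permutation(10)
assert (order_a == order_b).all()
```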
I don't use the IPython notebook (I use Python 2) and I'm using the TensorFlow backend, but I still get slightly different results on each run. Running in CPU-only mode didn't help, and setting the TensorFlow seed didn't help either:
All the above answers look good, but I have given a snippet of complete code. This code is for the TensorFlow backend.
In addition to everything that has already been mentioned here, keep
Hey everyone, I've tried all the settings from this post but I still cannot get the same results. I don't know what's wrong or what I'm missing.
@jruivo-dev If you use SGD instead of Adam, do you get reproducible results? My preliminary results indicate that Adam, RMSProp, etc. contain some unknown random source which I have not yet locked down; with SGD I get reproducible results, which I don't get if I use any of the more complex optimizers instead.
Yes, SGD does seem to be more stable, i.e. I get "much more reproducible" results with SGD, but sadly they are still not identical.
Write the line below just before the model.fit line:
Could anyone figure out how to correct this? I get different accuracies and loss values for every run.
It is because the Keras layers use different values as initialization weights for the parameters on every run.
To fix it, use the line below just before the model.add() line:
np.random.seed(0)
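To see why seeding before layer construction matters, here is a NumPy-only sketch of a Glorot-style uniform initialization, the kind Keras Dense layers perform by default (the helper function and the layer sizes are illustrative, not Keras code):

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, seed):
    """Glorot/Xavier uniform init: U(-limit, limit), limit = sqrt(6/(fan_in+fan_out))."""
    rng = np.random.default_rng(seed)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

# Same seed -> bit-identical initial weights on every run:
w1 = glorot_uniform(64, 32, seed=0)
w2 = glorot_uniform(64, 32, seed=0)
assert np.array_equal(w1, w2)

# Different (or unset) seeds are where run-to-run variation comes from:
w3 = glorot_uniform(64, 32, seed=1)
assert not np.array_equal(w1, w3)
```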
…On Thu, Dec 27, 2018 at 10:38 PM sri9s wrote:
Could anyone figure out how to correct this? I get different accuracies and loss values for every run.
Like you said, I tried it, but I still get different scores on each run. Here is my code:
When I use model.save I also encounter this strange problem: it yields random results when predicting. So I saved the model architecture to .yaml and the weights to .h5, then reloaded the model, but faced the same problem. An exciting thing happened, though: after I made the change, it works and yields the same prediction results!
Still doesn't work for me :(
It seems the PyTorch docs say that reproducibility is not guaranteed.
@mrgloom @nvinayvarma189 Thanks for your help. Following your instructions, I've set the random seeds of NumPy, TensorFlow, and random at the beginning of my script test.py. Then I run CUDA_VISIBLE_DEVICES="" PYTHONHASHSEED=0 python test.py, and I finally get the same result every time (repeated almost 100 times).
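The effect of setting PYTHONHASHSEED in the environment (rather than inside the script) can be checked from the shell; a small sketch, with an inline script standing in for the user's test.py:

```shell
# PYTHONHASHSEED must be set before the interpreter starts; an empty
# CUDA_VISIBLE_DEVICES hides all GPUs so execution stays on the CPU.
# The inline -c script stands in for test.py from the comment above.
CUDA_VISIBLE_DEVICES="" PYTHONHASHSEED=0 python3 -c 'print(hash("keras"))'
# Re-running the exact same command prints the same hash value; without
# PYTHONHASHSEED, string hashes differ between interpreter invocations.
```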
It seems reproducible results are not guaranteed in the case of GPU:
Simple test that shows results are not reproducible:
V2 with added fixed random seed before other imports (for tensorflow, numpy, python random):
For single epoch on CPU:
V2 (single core and fixed random seeds):
V3 (multi-core and fixed random seeds):
So the result is only reproducible on a single CPU core.
I found a solution from #2280. The following worked for me: import os; os.environ['CUDA_VISIBLE_DEVICES'] = '-1'; import numpy as np; import random as rn; rn.seed(1); from keras import backend as K
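Only fragments of the #2280 recipe survive above; a hedged reconstruction of its usual shape follows (seed values are illustrative, and the TensorFlow 1.x session lines are left as comments because they assume the old tf.ConfigProto/tf.Session API):

```python
# Hedged reconstruction of a #2280-style recipe: pin the hash seed,
# hide the GPU, and seed NumPy and Python's `random` before Keras loads.
import os
os.environ["PYTHONHASHSEED"] = "0"
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"   # "-1" matches no GPU -> CPU only

import numpy as np
import random as rn

np.random.seed(42)
rn.seed(1)

# The TF 1.x backend part of the recipe (single-threaded, seeded session):
# import tensorflow as tf
# from keras import backend as K
# session_conf = tf.ConfigProto(intra_op_parallelism_threads=1,
#                               inter_op_parallelism_threads=1)
# tf.set_random_seed(1234)
# K.set_session(tf.Session(graph=tf.get_default_graph(), config=session_conf))
```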
@vamshi-1 Can I ask what os.environ['CUDA_VISIBLE_DEVICES']='-1' is used for? Does -1 assign the load to all GPUs, since typically you see a non-negative value such as 0 or 1?
It disables the use of the GPU.
I had the same problem; I solved it by fixing the seeds before importing Keras and adding shuffle=False to model.fit. The result is the same if you run the code many times, or if you restart the kernel to reinitialize the weights: import numpy as np ... history = model.fit(X_train, dummy_Y_train, batch_size=1200, epochs=5, verbose=2, shuffle=False)
None of these suggestions work for me.
I did not check with GPU, but for CPU it seems not to work when we fix the seeds as recommended above with TensorFlow 1 as the Keras backend. Therefore, we need to change TensorFlow 1 to TensorFlow 2; then fixing the seeds will work. For example, this works for me.
Uhm, I use TensorFlow 2 on a CPU
Hi, I had a similar question when I was running Keras models using a TF backend. The answer, in my opinion, is that reproducing the exact same results depends on the dataset and parameters that you use. Yes, setting things like the seed will help when Keras pseudo-randomly selects data for training, but if you are finding that you get very different results after each run, perhaps the learning rate for your model is too high.
Why would a high learning rate produce different results? I tried os.environ['PYTHONHASHSEED'] = '0' and replaced the initializer with tf.keras.initializers.GlorotUniform(seed=100). I am playing with the pix2pix GAN from the TF official website.
@llodds, I would have to say it's because there is another hidden randomness in TensorFlow. The reason a high learning rate could cause different results is that, when the model gets a prediction wrong during training, the error value is multiplied by the learning rate. If the learning rate is high, the model could correct itself too far. On the other hand, if the learning rate is lower, the chances of the weight updates 'over-shooting' are smaller.
@jozi98 I still don't get the randomness here. Even if we got 'over-shooting', it should over-shoot to the same degree in every epoch, then get the same error, same learning rate, and hence the same correction for the gradient. Yesterday I found this: Yet after setting
This may be the up-to-date solution |
Hope this helps someone. I got exactly the same results on each run on GPU with TensorFlow version 2.4.1 using the link mentioned by @llodds above:
At the very beginning:
And one line before model.fit: I got the same results after each run of the pair. Example:
I found and tested this using the function
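The comment's snippets are missing; a hedged sketch of the usual TF 2.4-era GPU-determinism setup (assuming the approach behind the linked page; the env-var name applies to TF >= 2.1 and the seed values are illustrative, and the TensorFlow lines are comments since they assume tensorflow is installed):

```python
# Hedged sketch: request deterministic GPU kernels and seed everything
# at the very beginning of the script.
import os
os.environ["TF_DETERMINISTIC_OPS"] = "1"   # deterministic cuDNN/GPU ops
os.environ["PYTHONHASHSEED"] = "0"

import random
import numpy as np

random.seed(0)
np.random.seed(0)

# import tensorflow as tf       # assumes tensorflow >= 2.1 installed
# tf.random.set_seed(0)         # and call it again right before fit()
```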
This is working for me: https://keras.io/getting_started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development from keras.layers import Input, Dropout, Flatten, Conv2D, MaxPooling2D, Dense, Activation, Lambda, GlobalAveragePooling2D batch_size = 32 train_datagen = ImageDataGenerator( validation_datagen = ImageDataGenerator( test_datagen = ImageDataGenerator( train_generator = train_datagen.flow_from_directory( validation_generator = validation_datagen.flow_from_directory( from collections import Counter os.environ['PYTHONHASHSEED'] = '0' conv_base = VGG16(weights='imagenet', include_top=False, input_shape=img_full_size) adam = Adam(lr=0.0001) train_samples = train_generator.samples
Each time I run Keras, I get an inconsistent result.
Is there any way to make it converge to the same solution, like the 'random_state' in sklearn that gives us the same solution however many times we run it?
Please help me resolve this issue.