model.fit ValueError: I/O operation on closed file #2110
Comments
I had the same problem (also with IPython), and I solved it by adding a …
This looks like an I/O bug with IPython; you might want to file a bug with them.
@fchollet yeah, that's what I found as well. Thanks for looking into it.
@fchollet Right, I'm experiencing the same with Spyder (because it uses the IPython kernel too). With each epoch completing in about 2 s, it results in the I/O operation error too.
@rilut thanks for the trick!!
@artix41 - Thanks! Adding the …
@rilut Setting verbose=0 strangely results in my run terminating after a single epoch. Any idea what is happening here? I added time.sleep(0.1) too; it isn't helping much. I am still trying to find a workaround :(
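Where exactly the `time.sleep(0.1)` goes is not stated above; one plausible placement (a sketch only, with `model`, `X_train`, and `y_train` assumed to exist) is a small callback that pauses after each epoch so IPython's stdout machinery can settle before the next burst of progress-bar writes:

```python
import time
from keras.callbacks import Callback

class EpochPause(Callback):
    """Sleep briefly after each epoch so IPython's stdout can catch up."""
    def on_epoch_end(self, epoch, logs=None):
        time.sleep(0.1)

model.fit(X_train, y_train, callbacks=[EpochPause()])
```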
Instead of writing to sys.stdout directly, could those calls be wrapped in an auxiliary function with a try/except block to help prevent the entire learning run from breaking down if/when this happens? Although it is clear this is not the best solution, I have to say it is very depressing to find that after 50 h of training your model was lost simply because some text could not be printed 😞 I am glad ModelCheckpoint works very well, though!
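A minimal sketch of that suggestion, assuming all progress text is funneled through one helper (`safe_write` is a hypothetical name, not a Keras API):

```python
import sys

def safe_write(text):
    """Write progress text to stdout without letting a closed stream
    ("ValueError: I/O operation on closed file") abort training."""
    try:
        sys.stdout.write(text)
        sys.stdout.flush()
    except ValueError:
        # stdout was closed (e.g. by the IPython kernel); drop this update
        pass
```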
This is a workaround for keras-team#2110, where calling `model.fit` with `verbose=1` under IPython can intermittently raise "ValueError: I/O operation on closed file". The exception appears to be caused by an unknown I/O bug in IPython that is triggered when updating the `ProgbarLogger` progress bar. To work around this bug and prevent users from unexpectedly losing their model:

- The minimum progress bar refresh interval is now 0.1 seconds (up from 0.01 seconds).
- Progress bar updates are now wrapped in `try/except` blocks that ignore `ValueError` exceptions raised when calling `progbar.update`.

An ideal solution would fix the IPython bug at its source; in the meantime, this is an important workaround for users who want to use IPython with `verbose=1`.
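A minimal sketch of the guarded update that commit describes (the real change lives inside Keras's `ProgbarLogger`; `safe_update` here is just an illustrative helper):

```python
def safe_update(progbar, current, values=None):
    """Update a Keras Progbar, swallowing the intermittent
    "ValueError: I/O operation on closed file" raised under IPython."""
    try:
        progbar.update(current, values)
    except ValueError:
        # losing one progress-bar refresh is better than losing the model
        pass
```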
…le"" This reverts commit ae6933c.
I get the same error when calling a Convolutional Neural Network (CNN) via the Keras library using the best estimator from GridSearchCV. I cannot set the verbose parameter to false, since it is called via GridSearchCV, and I cannot set the time lag either, for the same reason. I am pasting my code snippet for reference:

```python
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import train_test_split
from sklearn.metrics import make_scorer, f1_score
from keras.wrappers.scikit_learn import KerasClassifier

# Grid search epochs, batch size and optimizer
optimizers = ['rmsprop', 'adam']

# Create the parameters to tune (epochs, batches, init defined elsewhere)
parameters = dict(optimizer=optimizers, nb_epoch=epochs, batch_size=batches, init=init)

# Initialize the classifier
clf = KerasClassifier(build_fn=ConvulatedNeuralNet)

# Make an f1 scoring function using 'make_scorer'
f1_scorer = make_scorer(f1_score, pos_label=1)

# Perform grid search on the classifier using the f1_scorer as the scoring method
grid_obj = GridSearchCV(estimator=clf, param_grid=parameters, verbose=0)

# Get the estimator
# clf = grid_obj.best_estimator_
# Hot-encode Yes/No to 1/0, else the library would throw an error

# Fit the grid search object to the training data and find the optimal parameters
datasets = [train_test_split(X_all, y_encoded, train_size=x, test_size=95) for x in [100, 200, 300]]
```
Where should I use …
I don't think you can set the timeout if you are using GridSearchCV, since the classifier is called from within GridSearchCV and not by the user... I tried the latest version of IPython, released two days ago, and still have this issue. You can try a few alternatives… (one possibility is sketched below).
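One possibility (a sketch only, reusing the `ConvulatedNeuralNet` build function from the snippet above; this may not be the alternative the commenter had in mind): the scikit-learn wrapper forwards its extra constructor arguments to `fit()`, so `verbose=0` can be set on the classifier itself rather than on the `fit` call that GridSearchCV makes.

```python
from keras.wrappers.scikit_learn import KerasClassifier

# KerasClassifier forwards sk_params such as verbose on to model.fit(),
# so the grid search runs silently even though we never call fit() ourselves.
clf = KerasClassifier(build_fn=ConvulatedNeuralNet, verbose=0)
```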
I also have the same problem in IPython, but it only arises if I do something in the notebook (create, delete, or modify a cell) during the execution of the fit function. Otherwise it works fine.
I fixed this problem by adding …
Any update on this? I've been seeing the same problem on 64-bit Windows with Anaconda (Python 2.7), in both Jupyter Notebook and Lab.
@Givemeyourgits you may try updating Jupyter Notebook; the current version is 4+ and some say the problem was fixed there, but it didn't help for me. So I recommend setting verbose=0.
Running the latest version of Notebook for Python 2.7.x.
As the IPython issue (ipython/ipython#9168) says, this seems to be fixed in IPython trunk, but not yet in a released version (it will be in version 4.4). In the meantime, I've made a quick hack simply by wrapping the …
4.4 is out and indeed fixed this. I think this can be closed.
I just ran into this issue with ipykernel 4.4.1. I am working from this example with a 2 GB CSV file. I only get around 2-3 million lines in before I get …
I had this problem too, and setting verbose=0 in the arguments of model.fit() seems to have fixed it.
I used `conda install ipykernel` as advised by @shachar-i, and my problem is fixed.
Thanks @shachar-i, my problem is fixed too. One point that may be helpful to others: you need to check whether your Jupyter was installed by conda or by pip. You can check with: … For me, I used pip to install Jupyter, so I used pip to install ipykernel as well: …
`conda install ipykernel` solved it for me too, thanks @shachar-i and @jf003320018.
I experienced this problem too. As a workaround I set verbose=2 in the arguments of model.fit(), and it fixed it. Moreover, verbose=2 at least logs the epoch and training accuracy, whereas verbose=0 doesn't log anything.
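For concreteness, a minimal sketch (`model`, `X_train`, and `y_train` are assumed from context): `verbose=2` prints one summary line per epoch instead of the per-batch progress bar whose frequent stdout writes trigger the error.

```python
# One line per epoch, no live progress bar: avoids the per-batch
# stdout updates that raise the closed-file ValueError under IPython.
model.fit(X_train, y_train, epochs=10, batch_size=32, verbose=2)
```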
I can confirm that …
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs, but feel free to re-open it if needed. |
```
IndexError                                Traceback (most recent call last)
~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in _process_training_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, steps_per_epoch, validation_split, validation_data, validation_steps, shuffle, distribution_strategy, max_queue_size, workers, use_multiprocessing)
~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\data_adapter.py in select_data_adapter(x, y)
~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\data_adapter.py in (.0)
~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\data_adapter.py in can_handle(x, y)
~\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\data_adapter.py in _is_list_of_scalars(inp)

IndexError: list index out of range
```

I am also getting the same error. Can someone please help me?
Editing in Google Colab and getting an error in the model.fit part of the code; it shows "ValueError: in user code". Does anyone know the solution?
Any idea what could be causing this error? I've been trying to solve this for a week. Thanks in advance.