
EOFError: Ran out of input on Windows when training with CSV #967

Closed

devedse opened this issue Apr 11, 2019 · 4 comments

devedse commented Apr 11, 2019

I'm currently running into an issue when I run the following command:

retinanet-train C:/XGitPrivate/*****/ImagesCarsExport/annotations.csv C:/XGitPrivate/*****/ImagesCarsExport/classes.csv

The log output from this command:

classification (Concatenate)    (None, None, 1)      0           classification_submodel[1][0]
                                                                 classification_submodel[2][0]
                                                                 classification_submodel[3][0]
                                                                 classification_submodel[4][0]
                                                                 classification_submodel[5][0]
==================================================================================================
Total params: 36,382,957
Trainable params: 36,276,717
Non-trainable params: 106,240
__________________________________________________________________________________________________
None
Epoch 1/50
Exception in thread Thread-2:
Traceback (most recent call last):
  File "C:\Users\****\AppData\Local\conda\conda\envs\POMP\lib\threading.py", line 916, in _bootstrap_inner
    self.run()
  File "C:\Users\****\AppData\Local\conda\conda\envs\POMP\lib\threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\****\AppData\Local\conda\conda\envs\POMP\lib\site-packages\keras\utils\data_utils.py", line 565, in _run
    with closing(self.executor_fn(_SHARED_SEQUENCES)) as executor:
  File "C:\Users\****\AppData\Local\conda\conda\envs\POMP\lib\site-packages\keras\utils\data_utils.py", line 548, in <lambda>
    initargs=(seqs,))
  File "C:\Users\****\AppData\Local\conda\conda\envs\POMP\lib\multiprocessing\context.py", line 119, in Pool
    context=self.get_context())
  File "C:\Users\****\AppData\Local\conda\conda\envs\POMP\lib\multiprocessing\pool.py", line 174, in __init__
    self._repopulate_pool()
  File "C:\Users\****\AppData\Local\conda\conda\envs\POMP\lib\multiprocessing\pool.py", line 239, in _repopulate_pool
    w.start()
  File "C:\Users\****\AppData\Local\conda\conda\envs\POMP\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\****\AppData\Local\conda\conda\envs\POMP\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\****\AppData\Local\conda\conda\envs\POMP\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\****\AppData\Local\conda\conda\envs\POMP\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle generator objects

Using TensorFlow backend.
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\****\AppData\Local\conda\conda\envs\POMP\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\****\AppData\Local\conda\conda\envs\POMP\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

I already found a workaround in #857: adding --workers 0 to the command.

However, since this is more of a workaround than a real solution, I would like to leave this issue open to see whether it can be fixed at the underlying level.
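For reference, with the paths from my command above, the workaround looks like this (as I understand it, with --workers 0 Keras runs the generator on the main thread, so no worker processes are spawned and nothing needs to be pickled):

retinanet-train --workers 0 C:/XGitPrivate/*****/ImagesCarsExport/annotations.csv C:/XGitPrivate/*****/ImagesCarsExport/classes.csv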


hgaiser commented Apr 18, 2019

However, since this is more of a workaround than a real solution, I would like to leave this issue open to see whether it can be fixed at the underlying level.

The underlying issue seems to come from Keras; there is not much we can do about that in this repository.
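From the traceback, the failure appears to happen when Keras' enqueuer builds a multiprocessing.Pool and passes the shared sequences as initargs: on Windows, multiprocessing uses the spawn start method, so those arguments must be pickled, and generator objects are not picklable. A minimal sketch that seems to reproduce the same failure without Keras (batches and init_worker are made-up illustrative names; get_context('spawn') forces the Windows start method on any platform):

import multiprocessing

def batches():
    # A plain generator object, standing in for the data generator
    # handed to Keras' enqueuer.
    while True:
        yield None

def init_worker(shared):
    # Keras' pool initializer stores the shared sequences per worker.
    pass

if __name__ == '__main__':
    gen = batches()
    # 'spawn' is the default start method on Windows; forcing it here
    # reproduces the behavior on Linux/macOS as well. Pool pickles
    # initargs to send them to each child, which raises
    # "TypeError: can't pickle generator objects" in the parent; the
    # spawned child then dies with "EOFError: Ran out of input" when
    # it reads the truncated pipe.
    ctx = multiprocessing.get_context('spawn')
    pool = ctx.Pool(processes=1, initializer=init_worker, initargs=(gen,))
    pool.close()
    pool.join()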


devedse commented Apr 18, 2019

Would you be able to help produce a minimal sample to reproduce this issue so we could pass it on to the Keras team?


hgaiser commented Apr 18, 2019

I'm not using Windows (it seems to be an issue only on Windows); I think you're better equipped to work out a minimal example.


hgaiser commented Jun 6, 2019

Closing this, as it seems there is no issue in keras-retinanet itself.

hgaiser closed this as completed Jun 6, 2019