
RuntimeError during the training example with OIM #4

Closed

GBJim opened this issue Jun 14, 2017 · 21 comments

GBJim commented Jun 14, 2017

Hi all

After I executed the command
python examples/resnet.py -d viper -b 64 -j 2 --loss oim --logs-dir logs/resnet-viper-oim
I encountered the following errors:

Process Process-4:
Traceback (most recent call last):
File "/root/miniconda2/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/root/miniconda2/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/root/miniconda2/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 45, in _worker_loop
data_queue.put((idx, samples))
File "/root/miniconda2/lib/python2.7/multiprocessing/queues.py", line 392, in put
return send(obj)
File "/root/miniconda2/lib/python2.7/site-packages/torch/multiprocessing/queue.py", line 17, in send
ForkingPickler(buf, pickle.HIGHEST_PROTOCOL).dump(obj)
File "/root/miniconda2/lib/python2.7/pickle.py", line 224, in dump
self.save(obj)
File "/root/miniconda2/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda2/lib/python2.7/pickle.py", line 554, in save_tuple
save(element)
File "/root/miniconda2/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda2/lib/python2.7/pickle.py", line 606, in save_list
self._batch_appends(iter(obj))
File "/root/miniconda2/lib/python2.7/pickle.py", line 639, in _batch_appends
save(x)
File "/root/miniconda2/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda2/lib/python2.7/multiprocessing/forking.py", line 67, in dispatcher
self.save_reduce(obj=obj, *rv)
File "/root/miniconda2/lib/python2.7/pickle.py", line 401, in save_reduce
save(args)
File "/root/miniconda2/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda2/lib/python2.7/pickle.py", line 554, in save_tuple
save(element)
File "/root/miniconda2/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda2/lib/python2.7/multiprocessing/forking.py", line 66, in dispatcher
rv = reduce(obj)
File "/root/miniconda2/lib/python2.7/site-packages/torch/multiprocessing/reductions.py", line 113, in reduce_storage
fd, size = storage.share_fd()
RuntimeError: unable to write to file </torch_29225_1654046705> at /py/conda-bld/pytorch_1493669264383/work/torch/lib/TH/THAllocator.c:267

When switching to the xentropy loss with
python examples/resnet.py -d viper -b 64 -j 1 --loss xentropy --logs-dir logs/resnet-viper-xentropy
the following error occurred:

Exception in thread Thread-1:
Traceback (most recent call last):
File "/root/miniconda2/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/root/miniconda2/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/root/miniconda2/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 51, in _pin_memory_loop
r = in_queue.get()
File "/root/miniconda2/lib/python2.7/multiprocessing/queues.py", line 378, in get
return recv()
File "/root/miniconda2/lib/python2.7/site-packages/torch/multiprocessing/queue.py", line 22, in recv
return pickle.loads(buf)
File "/root/miniconda2/lib/python2.7/pickle.py", line 1388, in loads
return Unpickler(file).load()
File "/root/miniconda2/lib/python2.7/pickle.py", line 864, in load
dispatch[key](self)
File "/root/miniconda2/lib/python2.7/pickle.py", line 1139, in load_reduce
value = func(*args)
File "/root/miniconda2/lib/python2.7/site-packages/torch/multiprocessing/reductions.py", line 68, in rebuild_storage_fd
fd = multiprocessing.reduction.rebuild_handle(df)
File "/root/miniconda2/lib/python2.7/multiprocessing/reduction.py", line 155, in rebuild_handle
conn = Client(address, authkey=current_process().authkey)
File "/root/miniconda2/lib/python2.7/multiprocessing/connection.py", line 169, in Client
c = SocketClient(address)
File "/root/miniconda2/lib/python2.7/multiprocessing/connection.py", line 308, in SocketClient
s.connect(address)
File "/root/miniconda2/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
error: [Errno 111] Connection refused

In both situations, the terminal freezes after these errors appear, and I have to kill the corresponding Python process to exit.
Any suggestions for solving this?
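
The "unable to write to file </torch_...>" error usually means the DataLoader worker processes could not allocate shared memory (for example, a small /dev/shm inside a container). A minimal sketch of one possible workaround, assuming a standard PyTorch install, is to switch the tensor sharing strategy before building the data loaders:

import torch.multiprocessing

# Share tensors through files in the filesystem instead of shared-memory
# file descriptors; this sidesteps a small /dev/shm, at the cost that stray
# files can be left behind if a worker crashes.
torch.multiprocessing.set_sharing_strategy('file_system')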

Cysu commented Jun 14, 2017

I wonder if the official MNIST example runs fine for you?

GBJim commented Jun 15, 2017

Hi @Cysu
After going through the MNIST example, no errors occurred.

I also tried to train the Inception net in the examples: python examples/inception.py -d viper -b 64 -j 2 --loss xentropy --logs-dir logs/inception-viper-xentropy
No errors occurred there either.

The interesting thing is that when I tried to train ResNet again, the training process froze as shown below, but with no errors.

Files already downloaded and verified
VIPeR dataset loaded
  subset   | # ids | # images
  ---------------------------
  train    |   216 |      432
  val      |   100 |      200
  trainval |   316 |      632
  query    |   316 |      632
  gallery  |   316 |      632
Epoch: [0][1/7] Time 160.275 (160.275) Data 0.446 (0.446) Loss 5.375 (5.375) Prec 0.00% (0.00%)
Epoch: [0][2/7] Time 0.563 (80.419) Data 0.001 (0.223) Loss 10.057 (7.716) Prec 0.00% (0.00%)

Could this be caused by GPU resource contention?
Currently, some Caffe processes are also using my GPUs.

Cysu commented Jun 15, 2017

I'm not sure if it is caused by a deadlock between PyTorch and Caffe, especially when both are using NCCL. You may try running it again once the Caffe experiments have finished.

GBJim commented Jun 19, 2017

Hi @Cysu
Sorry for the late response.
I tried it again after my Caffe process was terminated.

Training freezes when the -j (workers) argument is set greater than 1.
If -j is set to 1, I get error: [Errno 111] Connection refused

Cysu commented Jun 19, 2017

@GBJim Could you please change the num_workers in the official mnist example and see if it has the same problem?
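
For reference, the worker count in a PyTorch data loader is controlled by the num_workers argument of torch.utils.data.DataLoader. A minimal sketch (with a toy dataset standing in for MNIST, not the actual example script):

import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy data; only the num_workers setting matters for this test.
dataset = TensorDataset(torch.randn(256, 1, 28, 28), torch.zeros(256).long())
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=2, pin_memory=True)

for images, labels in loader:
    pass  # if the worker processes are healthy, this loop finishes without hanging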

GBJim commented Jun 19, 2017

@Cysu:

I tested the MNIST example with 16 workers. Everything works correctly.

Cysu commented Jun 19, 2017

Sorry, but currently I have no idea why this happens. There should not be much difference between our data loader and the MNIST one. I'm not sure if it is related to running as root instead of a normal user on Linux.

GBJim commented Jun 19, 2017

Thanks @Cysu
I will try to figure it out!

Cysu commented Jul 4, 2017

@GBJim any luck on this?

GBJim commented Jul 4, 2017

Hi @Cysu
I've built a new environment for Open-ReID and cloned the latest commit,
but it seems that resnet.py and inception.py have been removed from the examples folder.

Is there a new tutorial on how to run training or testing?
Thanks!

GBJim commented Jul 4, 2017

It seems the code has been reorganized into oim_loss.py, softmax_loss.py, and triplet_loss.py.
Let me check whether these scripts work.

GBJim commented Jul 4, 2017

@Cysu

I tried these commands: python examples/oim_loss.py -d viper, python examples/softmax_loss.py -d viper, and python examples/triplet_loss.py -d viper.
The following output is printed and then the process freezes. I need to use Ctrl+Z to exit the process.

root@e50f76502ce4:~/open-reid# python examples/oim_loss.py -d viper
Files already downloaded and verified
VIPeR dataset loaded
  subset   | # ids | # images
  ---------------------------
  train    |   216 |      432
  val      |   100 |      200
  trainval |   316 |      632
  query    |   316 |      632
  gallery  |   316 |      632

Cysu commented Jul 4, 2017

@GBJim Oh, I forgot to update the tutorials. Just finished. Please check here.

Does the previous error still occur when -j 1 is used?

GBJim commented Jul 4, 2017

@Cysu

The process still freezes when I use a single worker. (Maybe I should wait longer.)

I set -j to 1 and tried the following combinations:

OIM + ResNet --> Frozen

OIM + Inception --> RuntimeError: The expanded size of the tensor (128) must match the existing size (64) at non-singleton dimension 1. at /root/pytorch/torch/lib/THC/generic/THCTensor.c:323

SOFTMAX + ResNet --> Frozen

SOFTMAX + Inception --> Works Normally

And thank you for updating the documentation!

Cysu commented Jul 5, 2017

That's weird... What's the script for OIM + Inception?

GBJim commented Jul 5, 2017

@Cysu
python examples/oim_loss.py -d viper -a inception -j 1

lzj322 commented Jul 17, 2017

I met the same issue. The problems that @GBJim had happen to me as well. In particular, inception.py runs without problems, but resnet.py freezes.

GBJim commented Jul 17, 2017

@lzj322 Do you use Nvidia-docker to host the environment?

lzj322 commented Jul 17, 2017

@GBJim Yes. Would that be a problem? I don't know much about it. I asked the administrator to reset the Docker container, and now it gives normal results, but we don't know why.
I am afraid this issue could happen again someday.

lzj322 commented Jul 17, 2017

@GBJim, @Cysu I guess that PyTorch's DataParallel doesn't work well with nvidia-docker. Or maybe it is caused by PyTorch itself (see the PyTorch forum).
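
One cause frequently reported for DataLoader workers failing inside (nvidia-)docker is the container's default 64 MB /dev/shm, which is too small for sharing tensors between worker processes. If you control how the container is started, a commonly suggested remedy (an assumption here, not something verified in this thread) is to give it more shared memory or use the host IPC namespace, e.g.:

nvidia-docker run --shm-size=8g ...
nvidia-docker run --ipc=host ...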

Cysu commented Jul 18, 2017

@lzj322 Yeah, two programs cannot run on the same device if using NCCL.
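
If the Caffe and PyTorch jobs have to share one machine, one way to keep them off the same device (assuming at least two GPUs, with the Caffe job on GPU 0) is to restrict the PyTorch process to another GPU via CUDA_VISIBLE_DEVICES, e.g.:

CUDA_VISIBLE_DEVICES=1 python examples/oim_loss.py -d viper -a inception -j 1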

Cysu closed this as completed Sep 14, 2017