
about self.orig_loss #12

Closed
FingerRec opened this issue May 1, 2019 · 4 comments

Comments

@FingerRec

I found that baseline_exp/async_tf_i3d_charades.py cannot run directly, so I modified line 81 in models/criteria/async_tf_criterion.py as follows:

```python
idtime = []
for i in range(len(meta)):
    idtime.append((meta[i]['id'], meta[i]['time']))
```

I was also confused about line 105 in models/criteria/async_tf_criterion.py:

```python
loss += self.loss(torch.nn.Sigmoid()(a), target) * self.orig_loss
```

What does self.orig_loss mean?

@gsig
Owner

gsig commented May 7, 2019

Hi!

This baseline definitely needed some updating; I just added fixes in commit ded24bd, and it's now running on 4 GPUs.

self.orig_loss was just a legacy parameter that had been set to 1, so it can safely be removed. Historically it was there to adjust for the difference between the original softmax loss and the new sigmoid loss.
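For reference, here is a minimal sketch of what line 105 reduces to once that multiplier is dropped (assuming self.orig_loss is 1, so the behaviour is unchanged):

```python
# Sketch only: line 105 of models/criteria/async_tf_criterion.py with the
# legacy multiplier removed; with self.orig_loss == 1 the result is identical.
loss += self.loss(torch.nn.Sigmoid()(a), target)
```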

This baseline includes my experiments with simplifying asynchronous temporal fields, extending to a multi-label sigmoid loss, using an I3D base architecture, etc. I hope it helps! Let me know if you have any questions.

@FingerRec
Author

FingerRec commented May 7, 2019

Thanks for your reply!

The code works very well now; just two small problems. As I use the pretrained model, at the beginning Prec@5 is often larger than 100, like below:

Train Epoch: [0][60/2660(2660)] Time 1.629 (2.227) Data 0.032 (0.119) Loss 0.0362 (0.0438) Prec@1 2.051 (47.684) Prec@5 168.718 (135.191)

Another question is this error:

ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).

Maybe I need to lower the memory_size or the video_size?

@gsig
Owner

gsig commented May 7, 2019

That's just due to how I extended Prec@1 and Prec@5 to work with multi-label ground truth. It's easy to add your own metrics under metrics/ and then include them under --metrics in the config. My extension simply counts everything that is correct, either in the top 1 or the top 5, so a sample with several true labels in the top k can contribute more than one hit, which is why the value can exceed 100. I only use it for analyzing training and over/underfitting; for all proper evaluations I use mAP.
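Roughly, the idea is something like the sketch below; this is not necessarily the exact code under metrics/, just an illustration of why the number can exceed 100%:

```python
import torch

def multilabel_topk_prec(output, target, k=5):
    # output: (batch, num_classes) scores; target: (batch, num_classes) binary labels.
    # Every true label that appears in the top-k predictions counts as a hit, so a
    # sample with several correct labels in its top k contributes more than one,
    # and the per-sample average can exceed 100%.
    _, topk_idx = output.topk(k, dim=1)
    hits = target.gather(1, topk_idx).sum()
    return 100.0 * hits.item() / output.size(0)
```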

This error is due to the memory usage of the data loading workers. The way multithreading works in pytorch/python requires duplicating some of the data across the workers, and furthermore the images are queued in memory while they wait to be used; the number of queued images is proportional to the number of workers (2x, perhaps). The easiest fix is to reduce the number of --workers. You can also try optimizing the dataloader by using torch.Tensors where possible (I believe they aren't duplicated the way lists of strings/numpy arrays/etc. are).
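As a hypothetical illustration of the --workers fix (the actual loader setup in this repo may differ; dataset and args are placeholders here):

```python
from torch.utils.data import DataLoader

# Fewer workers means fewer batches sitting in the shared-memory queue,
# at the cost of slower data loading.
train_loader = DataLoader(dataset,
                          batch_size=args.batch_size,
                          shuffle=True,
                          num_workers=2,   # lowered, e.g. from 8
                          pin_memory=True)
```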

If this error is happening at the start of the val_video phase, you can try changing the number of workers in the val_video phase (datasets/get.py), either by manually setting a number there or by creating a new args parameter for it. This is because each dataloader loads a much larger batch (a whole video) in the val_video phase, and thus requires much more memory to store the queue of images.
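Something along these lines in datasets/get.py would do it (val_video_workers is a hypothetical new argument, and val_video_dataset is a placeholder name):

```python
import torch

# Hypothetical: give the val_video loader its own, smaller worker count, since
# each of its batches is a whole video and therefore much larger in memory.
num_val_video_workers = getattr(args, 'val_video_workers', 1)
val_video_loader = torch.utils.data.DataLoader(
    val_video_dataset,
    batch_size=1,
    shuffle=False,
    num_workers=num_val_video_workers,
    pin_memory=True)
```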

Hope that helps!

@FingerRec
Author

Fixed, thanks a lot!
