Saving checkpoint at 1 epochs! #10

why228430 opened this issue Sep 23, 2020 · 2 comments
When I run `sh rtools/train.sh`, the code stops here and does not continue running.

, sr0.loss_cls: 0.3226, sr0.loss_bbox: 0.4130, sr1.loss_cls: 0.4546, sr1.loss_bbox: 0.3601, loss: 2.5557, grad_norm: 11.3823
2020-09-22 21:20:43,374 - mmdet - INFO - Epoch [1][4400/4502] lr: 3.567e-03, eta: 1 day, 1:22:21, time: 0.874, data_time: 0.006, memory: 6929, s0.loss_cls: 0.4921, s0.loss_bbox: 0.5438, sr0.loss_cls: 0.3270, sr0.loss_bbox: 0.4025, sr1.loss_cls: 0.3991, sr1.loss_bbox: 0.3454, loss: 2.5098, grad_norm: 10.8762
2020-09-22 21:21:27,289 - mmdet - INFO - Epoch [1][4450/4502] lr: 3.603e-03, eta: 1 day, 1:21:34, time: 0.878, data_time: 0.006, memory: 6929, s0.loss_cls: 0.5008, s0.loss_bbox: 0.5919, sr0.loss_cls: 0.3339, sr0.loss_bbox: 0.4640, sr1.loss_cls: 0.4932, sr1.loss_bbox: 0.4270, loss: 2.8106, grad_norm: 10.2408
2020-09-22 21:22:11,102 - mmdet - INFO - Epoch [1][4500/4502] lr: 3.639e-03, eta: 1 day, 1:20:44, time: 0.876, data_time: 0.006, memory: 6929, s0.loss_cls: 0.4992, s0.loss_bbox: 0.5868, sr0.loss_cls: 0.3604, sr0.loss_bbox: 0.4742, sr1.loss_cls: 0.5154, sr1.loss_bbox: 0.4449, loss: 2.8810, grad_norm: 10.0068
2020-09-22 21:22:12,855 - mmdet - INFO - Saving checkpoint at 1 epochs
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 458/458, 0.5 task/s, elapsed: 962s, ETA: 0s

zhuruihe commented Nov 2, 2020

Have you solved this problem?

zhuruihe commented Nov 5, 2020

I have figured out how to solve this problem.
It is related to mmdet/core/evaluation/rmean_ap.py: when it computes tp and fp, the call to `pool.starmap` can sometimes hang in multiprocessing.

        # per-image tp/fp computation is dispatched to the worker pool;
        # this starmap call is where the hang occurs
        tpfp = pool.starmap(
            rtpfp_default,
            zip(cls_dets, cls_gts, cls_gts_ignore,
                [iou_thr for _ in range(num_imgs)],
                [area_ranges for _ in range(num_imgs)]))
        tp, fp = tuple(zip(*tpfp))

So I simply gave up on multiprocessing: I replaced `pool.starmap` with a plain `for` loop and the problem was solved.

        # compute tp/fp for each image sequentially instead of via the pool
        mytp = []
        myfp = []
        for cdt, cgt, cgti, iout, arear in zip(cls_dets, cls_gts, cls_gts_ignore,
                                               [iou_thr for _ in range(num_imgs)],
                                               [area_ranges for _ in range(num_imgs)]):
            tp, fp = rtpfp_default(cdt, cgt, cgti, iout, arear)
            mytp.append(tp)
            myfp.append(fp)
        tp, fp = tuple(mytp), tuple(myfp)
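
For what it's worth, the same single-process behaviour can also be written as a near drop-in replacement for the original call using `itertools.starmap` from the standard library, which keeps the tuple-unpacking shape of the pool version. This is only a minimal sketch under the same assumptions, i.e. that `cls_dets`, `cls_gts`, `cls_gts_ignore`, `iou_thr`, `area_ranges`, `num_imgs`, and `rtpfp_default` are the surrounding names in rmean_ap.py:

        # single-process alternative to pool.starmap (sketch, not from the repo);
        # itertools.starmap applies rtpfp_default to each argument tuple in turn
        from itertools import repeat, starmap

        tpfp = list(starmap(
            rtpfp_default,
            zip(cls_dets, cls_gts, cls_gts_ignore,
                repeat(iou_thr, num_imgs),
                repeat(area_ranges, num_imgs))))
        tp, fp = tuple(zip(*tpfp))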
