
add PoolDataLoader for parrots #134

Merged
merged 7 commits into open-mmlab:master on Sep 3, 2020

Conversation

magicdream2222 (Contributor)

To speed up data loading, this PR adds PoolDataLoader for the parrots backend.

codecov bot commented Sep 1, 2020

Codecov Report

Merging #134 into master will decrease coverage by 0.11%.
The diff coverage is 17.64%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #134      +/-   ##
==========================================
- Coverage   82.44%   82.32%   -0.12%     
==========================================
  Files         143      143              
  Lines        6609     6621      +12     
  Branches      979      986       +7     
==========================================
+ Hits         5449     5451       +2     
- Misses       1062     1071       +9     
- Partials       98       99       +1     
Flag         Coverage Δ
#unittests   82.32% <17.64%> (-0.12%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files               Coverage Δ
mmedit/apis/train.py         19.51% <0.00%> (-2.11%) ⬇️
mmedit/datasets/builder.py   89.47% <60.00%> (-2.98%) ⬇️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 4c2345f...85ee4e5.

magicdream2222 (Contributor, Author)

To speed up data loading, add PoolDataLoader for the parrots backend:


from functools import partial

import torch
from torch.utils.data import DataLoader

from .dataset_wrappers import RepeatDataset
from .registry import DATASETS
from .samplers import DistributedSampler

if torch.__version__ == 'parrots':
    # under the parrots backend, rebind DataLoader to PoolDataLoader
    from torch.utils.data import PoolDataLoader
    DataLoader = partial(PoolDataLoader, prefetch_num=2)
Collaborator

The prefetch_num could be set from the config instead of being hard-coded.
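
A minimal sketch of that suggestion, assuming a dict-like dataloader config; the names dataloader_cfg and get_dataloader_cls below are illustrative, not part of the merged code:

from functools import partial

import torch


def get_dataloader_cls(dataloader_cfg):
    """Choose the loader class, reading prefetch_num from config (default 2)."""
    if torch.__version__ == 'parrots':
        from torch.utils.data import PoolDataLoader
        return partial(PoolDataLoader,
                       prefetch_num=dataloader_cfg.get('prefetch_num', 2))
    from torch.utils.data import DataLoader
    return DataLoader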

@@ -133,6 +140,8 @@ def build_dataloader(dataset,
        worker_init_fn, num_workers=num_workers, rank=rank,
        seed=seed) if seed is not None else None

    if torch.__version__ == 'parrots':
        # force pin_memory off under the parrots backend
        pin_memory = False
Collaborator

pin_memory can also be set from the config.
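
A similar sketch for this suggestion, again assuming an illustrative dict-like dataloader_cfg and a hypothetical default argument:

import torch


def resolve_pin_memory(dataloader_cfg, default=True):
    """Prefer an explicit config value over the hard-coded parrots behavior."""
    if 'pin_memory' in dataloader_cfg:
        return dataloader_cfg['pin_memory']
    # fall back to the behavior in the diff above: pinning forced off on parrots
    return False if torch.__version__ == 'parrots' else default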

nbei merged commit 7253ba8 into open-mmlab:master on Sep 3, 2020
Yshuo-Li pushed a commit to Yshuo-Li/mmediting that referenced this pull request on Jul 15, 2022
OpenMMLab-Assistant-007

Hi @magicdream2222!
First of all, we want to express our gratitude for your significant PR to the OpenMMLab project. Your contribution is highly appreciated, and we are grateful for the effort you put into improving this open-source project in your personal time. We believe that many developers will benefit from your PR.

We would also like to invite you to join our Special Interest Group (SIG) private channel on Discord, where you can share your experiences and ideas and build connections with like-minded peers. To join the SIG channel, simply message the moderator, OpenMMLab, on Discord, or briefly share your open-source contributions in the #introductions channel and we will assist you. We look forward to seeing you there! Join us at https://discord.gg/UjgXkPWNqA

If you have a WeChat account, you are welcome to join our community on WeChat by adding our assistant: openmmlabwx. Please add "mmsig + GitHub ID" as a note when sending the friend request. :)
Thank you again for your contribution! ❤


3 participants