
[Fix] Fix bug in nondistributed multi-gpu training #1406

Merged
merged 2 commits into open-mmlab:master from fix_non_dist on Feb 8, 2022

Conversation

kennymckormick
Member

No description provided.

@michael-camilleri
Contributor

If someone can approve this, I think it is a useful update. I have had issues with non-distributed training failing at the end of a run (although I could recover all the data).

@kennymckormick
Member Author

> If someone can approve this, I think it is a useful update. I have had issues with non-distributed training failing at the end of a run (although I could recover all the data).

Hi, Michael. We merged this PR and added a message to encourage users to use distributed training instead.
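
As a rough illustration of the kind of message mentioned above (this is a hypothetical sketch, not the actual PR diff; the function name `warn_if_non_distributed_multi_gpu` and the wording are assumptions), a training entry point could warn when several GPUs are requested without a distributed launcher:

```python
import warnings

import torch


def warn_if_non_distributed_multi_gpu(distributed: bool) -> None:
    """Warn when multiple GPUs are used without a distributed launcher.

    Hypothetical helper: encourages users to prefer distributed training,
    which is the behaviour this PR's added message is described as promoting.
    """
    num_gpus = torch.cuda.device_count()
    if not distributed and num_gpus > 1:
        warnings.warn(
            f'Running non-distributed training on {num_gpus} GPUs; this mode '
            'is less robust, so consider launching the job with '
            'torch.distributed instead.')
```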

@kennymckormick kennymckormick merged commit 01bde69 into open-mmlab:master Feb 8, 2022
@kennymckormick kennymckormick deleted the fix_non_dist branch February 16, 2022 06:48