
tools: fixed remove broken copyset after remove peer #373

Merged
merged 1 commit into from
Jun 4, 2021
Conversation

Wine93
Contributor

@Wine93 Wine93 commented May 31, 2021

What problem does this PR solve?

Issue Number: close #xxx

Problem Summary:

What is changed and how it works?

What's Changed:

How it Works:

Side effects (Breaking backward compatibility? Performance regression?):

Check List

  • Relevant documentation/comments are changed or added
  • I acknowledge that all my contributions will be made under the project's license

if (!copysetNodeManager_->IsExist(poolId, copysetId)) {
response->set_status(COPYSET_OP_STATUS_COPYSET_NOTEXIST);
LOG(WARNING) << "Remove copyset node " << groupId << " not exist";
} else if (!copysetNodeManager_->PurgeCopysetNodeData(poolId, copysetId)) {
Contributor

Should we check the copyset status before purging its data?

Contributor Author

@Wine93 Wine93 Jun 1, 2021

Should we check the copyset status before purging its data?

I think there is a bug in my implementation. Actually, the copyset node will not exist in copysetNodeManager_ if its data is broken, because copysetNodeManager_ ignores copysets with broken data during the load phase when the chunkserver starts. So deleting the copyset node would never succeed.

Contributor Author

@Wine93 Wine93 Jun 2, 2021

Should we check the copyset status before purging its data?

In the latest implementation, I will only delete copysets whose data is broken.

@Wine93 Wine93 changed the title tools: fixed remove copyset after remove peer tools: fixed remove broken copyset after remove peer Jun 2, 2021
@@ -30,6 +30,9 @@ namespace chunkserver {

using ::google::protobuf::RpcController;
using ::google::protobuf::Closure;
using COPYSET_OP_STATUS::COPYSET_OP_STATUS_COPYSET_IS_HEALTHY;
Contributor

For a legacy C-style enum definition, the enumerators are already exposed to the enclosing namespace, so you can use COPYSET_OP_STATUS_COPYSET_IS_HEALTHY directly without the COPYSET_OP_STATUS:: prefix; there is no need for these using-declarations.

Contributor Author

For a legacy C-style enum definition, the enumerators are already exposed to the enclosing namespace, so you can use COPYSET_OP_STATUS_COPYSET_IS_HEALTHY directly without the COPYSET_OP_STATUS:: prefix; there is no need for these using-declarations.

Yeah, it's a bad use of plain enums; I will improve it.

@@ -35,6 +35,8 @@ DEFINE_string(peer,
"", "Id of the operating peer");
DEFINE_string(new_conf,
"", "new conf to reset peer");
DEFINE_bool(remove_copyset, false, "Whether need to remove broken copyset "
Contributor

Why is the default false? If the broken copyset data stays there, it will cause problems.

Contributor Author

@Wine93 Wine93 Jun 3, 2021

Why is the default false? If the broken copyset data stays there, it will cause problems.

Actually, deleting a copyset is a dangerous action even if it is broken; we want the user to know what they are doing, instead of deleting it silently.

BTW: a copyset contains multiple kinds of content: chunk data and the raft { log, meta, snapshot }.

const CopysetRequest* request,
CopysetResponse* response,
Closure* done) {
LOG(INFO) << "Receive delete broken copyset request";
Contributor

Add groupId in this LOG?

Contributor Author

Add groupId in this LOG?

The logic below already guarantees that the group id will be printed.

@ilixiaocui ilixiaocui merged commit 184ba59 into opencurve:master Jun 4, 2021