
cephadm: eliminate duplication of sections #50318

Merged
merged 3 commits into ceph:main Mar 8, 2023

Conversation

Svelar
Member

@Svelar Svelar commented Mar 1, 2023

cephadm: eliminate duplication of sections when applying set-extra-ceph-conf

In the past, multiple identical sections in one conf file were allowed, but now they cause an error. Even though the minimal conf currently only contains a [global] section, a custom minimal conf should be considered. Thanks to Adam King's advice and code sample, I now combine the minimal conf with the extra conf by matching sections.

Fixes: https://tracker.ceph.com/issues/58948
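
To illustrate the idea (a minimal sketch only, not the code merged here; `merge_confs` and the sample conf strings are invented for this example), a section-wise merge of two INI-style confs could look like this:

```python
# Minimal sketch (not the PR's actual implementation): merge two INI-style
# confs so that options belonging to the same section end up under a single
# section header. Assumes every option line appears below a [section] header.
from typing import Dict, List


def merge_confs(conf1: str, conf2: str) -> str:
    sections: Dict[str, List[str]] = {}
    for conf in (conf1, conf2):
        section = ''
        for line in conf.split('\n'):
            stripped = line.strip()
            if not stripped or stripped.startswith('#'):
                continue  # skip blank lines and comments
            if stripped.startswith('[') and stripped.endswith(']'):
                section = stripped[1:-1]
                sections.setdefault(section, [])
            else:
                sections.setdefault(section, []).append(stripped)
    out: List[str] = []
    for name, options in sections.items():
        out.append(f'[{name}]')
        out.extend(options)
    return '\n'.join(out) + '\n'


minimal_conf = '[global]\nfsid = 01234567-89ab-cdef-0123-456789abcdef\nmon_host = 10.0.0.1\n'
extra_conf = '[global]\ndebug_ms = 1\n[osd]\nosd_max_backfills = 2\n'
print(merge_confs(minimal_conf, extra_conf))
# [global] is emitted once, with options from both inputs, followed by [osd].
```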

Contribution Guidelines

Checklist

  • Tracker (select at least one)
    • References tracker ticket
    • Very recent bug; references commit where it was introduced
    • New feature (ticket optional)
    • Doc update (no ticket needed)
    • Code cleanup (no ticket needed)
  • Component impact
    • Affects Dashboard, opened tracker ticket
    • Affects Orchestrator, opened tracker ticket
    • No impact that needs to be tracked
  • Documentation (select at least one)
    • Updates relevant documentation
    • No doc update is appropriate
  • Tests (select at least one)
Show available Jenkins commands
  • jenkins retest this please
  • jenkins test classic perf
  • jenkins test crimson perf
  • jenkins test signed
  • jenkins test make check
  • jenkins test make check arm64
  • jenkins test submodules
  • jenkins test dashboard
  • jenkins test dashboard cephadm
  • jenkins test api
  • jenkins test docs
  • jenkins render docs
  • jenkins test ceph-volume all
  • jenkins test ceph-volume tox
  • jenkins test windows

…extra-ceph-conf

Signed-off-by: Rongqi Sun <sunrongqi@huawei.com>
@Svelar Svelar requested a review from a team as a code owner March 1, 2023 08:23
Signed-off-by: Rongqi Sun <sunrongqi@huawei.com>
@Svelar Svelar changed the title from "cephadm: eliminate duplication of [global] section" to "cephadm: eliminate duplication of sections" Mar 7, 2023
Signed-off-by: Rongqi Sun <sunrongqi@huawei.com>
@adk3798
Contributor

adk3798 commented Mar 8, 2023

https://pulpito.ceph.com/adking-2023-03-08_02:46:19-orch:cephadm-wip-adk-testing-2023-03-07-1710-distro-default-smithi/

Reruns of failed/dead jobs: https://pulpito.ceph.com/adking-2023-03-08_13:40:40-orch:cephadm-wip-adk-testing-2023-03-07-1710-distro-default-smithi/

After reruns, 1 failed and 2 dead jobs:

  • both dead jobs were due to "Failed to download packages: qemu-img-15:6.2.0-28.module_el8.8.0+1257+0c3374ae.x86_64: Cannot download, all mirrors were already tried without success", which is not relevant to us (didn't want to have to do another set of reruns)
  • 1 failed job was https://tracker.ceph.com/issues/58535

Overall, nothing in that run would block merging

@adk3798 adk3798 merged commit 88ef4e2 into ceph:main Mar 8, 2023
@adk3798
Contributor

adk3798 commented Mar 8, 2023

@Svelar do you want this backported to quincy or pacific?

@Svelar
Member Author

Svelar commented Mar 9, 2023

Yes please, should I create an issue first?

@adk3798
Contributor

adk3798 commented Mar 9, 2023

> Yes please, should I create an issue first?

typically, we would have had a tracker beforehand and then set it to pending backport which would cause some automation to automatically open backport trackers. We can then feed those backport tracker numbers into a tool that automatically attempts to open a backport. (e.g. https://tracker.ceph.com/issues/58465 put to pending_backport caused https://tracker.ceph.com/issues/58776 to be created). Since we've already merged this, you could either create a tracker after the fact, link this PR to it ("Pull request ID" field in first tracker I linked) and then set it to pending backport, OR I could just manually open the backports since we've already merged this and it wouldn't be too much work for me. Whatever you prefer.

@Svelar
Member Author

Svelar commented Mar 9, 2023

> > Yes please, should I create an issue first?
>
> typically, we would have had a tracker beforehand and then set it to pending backport which would cause some automation to automatically open backport trackers. We can then feed those backport tracker numbers into a tool that automatically attempts to open a backport. (e.g. https://tracker.ceph.com/issues/58465 put to pending_backport caused https://tracker.ceph.com/issues/58776 to be created). Since we've already merged this, you could either create a tracker after the fact, link this PR to it ("Pull request ID" field in first tracker I linked) and then set it to pending backport, OR I could just manually open the backports since we've already merged this and it wouldn't be too much work for me. Whatever you prefer.

I've never done this before, so maybe I could try it for the first time. Thanks for the detailed guide.

@Svelar
Member Author

Svelar commented Mar 10, 2023

I have created a tracker (https://tracker.ceph.com/issues/58948), but I can't set it to 'pending backport' status. Once its status is changed, I can use 'ceph-backport.sh' to backport it.

        continue
    section = ''
    for line in conf.split('\n'):
        if line.strip().startswith('#') or not line.strip():
Contributor

Do we really have to build our own config parser for this? I guess we have to, right? But is this the right place to start building a config parser in Python for ceph?

Member Author

Actually, for now the minimal conf can't be created by the user, so the config parser is simple. The main purpose is to merge the two confs. Any suggestions?

Contributor

Probably once we start refactoring cephadm (in Reef, by using the new packaged version) we can add/reuse support for parsing the conf and use it to perform this merging, moving this code outside of cephadm.
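
For illustration of that suggestion, a merge built on the standard library's configparser might look roughly like the sketch below. This is a hypothetical alternative (the function name is made up for the example), not what cephadm or this PR actually uses:

```python
# Hypothetical alternative using the standard library instead of a hand-rolled
# parser; not the code cephadm ships. Later sources override earlier ones per
# key, and each section is emitted exactly once.
import configparser


def merge_with_configparser(conf1: str, conf2: str) -> str:
    parser = configparser.ConfigParser(strict=False)  # tolerate duplicate entries
    parser.read_string(conf1)
    parser.read_string(conf2)
    out = []
    for section in parser.sections():
        out.append(f'[{section}]')
        for key, value in parser.items(section):
            out.append(f'{key} = {value}')
    return '\n'.join(out) + '\n'
```

One caveat with this approach: Ceph treats option names like "mon host" and "mon_host" as the same key, while configparser does not, so a real implementation would still need some normalization on top.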
