quincy: cephadm: only pull host info from applied spec, don't try to parse yaml #49854
Conversation
We don't need to actually try to properly parse the yaml spec for our purposes, just pull the host info out from it.

Fixes: https://tracker.ceph.com/issues/57870
Signed-off-by: Adam King <adking@redhat.com>
(cherry picked from commit 5db03a6)
Conflicts:
	src/cephadm/tests/test_cephadm.py
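As a rough illustration of the approach the commit message describes (a hypothetical sketch, not the actual cephadm code; the function name and key list are assumptions), host info can be pulled out of an applied host spec with a simple line scan instead of a full YAML parse:

```python
# Hypothetical sketch: scan "key: value" lines of a host spec without a
# YAML parser, keeping only the fields cephadm needs.
def extract_host_info(spec_text):
    info = {}
    for line in spec_text.splitlines():
        line = line.strip()
        # Skip blanks, comments, and document separators.
        if not line or line.startswith('#') or line == '---':
            continue
        if ':' in line:
            key, _, value = line.partition(':')
            key, value = key.strip(), value.strip()
            if key in ('service_type', 'hostname', 'addr'):
                info[key] = value
    return info

spec = """service_type: host
hostname: myhostname
addr: 10.0.0.10
"""
print(extract_host_info(spec))
# {'service_type': 'host', 'hostname': 'myhostname', 'addr': '10.0.0.10'}
```

This sidesteps needing a YAML library, at the cost of only handling plain scalar values, which is what the discussion below turns on.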
1 failure in the test_cephadm task due to a mistake in another PR that made the task bootstrap with the main image rather than quincy. 1 dead job in an nfs-ingress test due to a failure to re-image the machine. The other nfs-ingress tests in the run passed okay, so I didn't bother with a rerun.
jenkins test api
Is there a reason the yaml parsing library isn't being used here? Seems like reinventing the wheel...

EDIT: I'm here because I'm getting "Unable to parse" when I pass
@adk3798 Okay, looking closely at this PR, I think it will solve my "Unable to parse" issue (which was actually due to a blank line at the end of the YAML file). I still think that the yaml parsing library should be used here, as it's already a dependency of the orchestrator module anyway; then all of these edge cases can be avoided. Just to further the example, this is valid YAML that would fail to parse, even with this PR:

```yaml
service_type: host
hostname: |
  myhostname
addr: >
  10.0.0.10
```

Or even something as simple as

I'll try to get a PR in to use the yaml library to solve these problems.
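To make the edge case concrete (using a hypothetical naive parser for illustration, not cephadm's actual code), a simple `key: value` line scan mishandles the block-scalar indicators `|` and `>` in the YAML above:

```python
# A naive "key: value" scan of a spec using YAML block scalars.
doc = """service_type: host
hostname: |
  myhostname
addr: >
  10.0.0.10
"""

naive = {}
for line in doc.splitlines():
    # Only treat unindented "key: value" lines as fields.
    if ':' in line and not line.startswith(' '):
        key, _, value = line.partition(':')
        naive[key.strip()] = value.strip()

print(naive['hostname'])  # '|'  -- the block-scalar marker, not the hostname
print(naive['addr'])      # '>'  -- same problem with the folded scalar
```

A real YAML parser would instead resolve `hostname` to `myhostname` and `addr` to `10.0.0.10` (each with a trailing newline), which is the commenter's point.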
@tnyeanderson we can't use third-party libraries like pyyaml here; the cephadm binary has to be a standalone script with no dependencies outside the python standard library.
Oof, that's disappointing :( but wouldn't it essentially be as straightforward as adding a

Obviously you are much more familiar with the codebase than I am! For example, I have no idea how cephadm actually gets distributed/packaged on the various distros, and that would be an integral part of this discussion. I just find it odd that such a critical part of ceph administration (the supported tool, all others being essentially deprecated) has such a significant limitation. And frankly, that limitation appears to have led to some complicated, unreliable, and not-so-great ways of doing things... I'm more than happy to help get the tool "packaged" so these third-party libraries can be used. Thanks for the insight!
@tnyeanderson We'll be looking into adding dependencies for the cephadm binary a bit later this year. The build machinery that's there now is actually fairly new, and currently only on the main branch. On quincy and earlier, the cephadm binary is just a standalone python script that was often pulled in by curling the file from github, which is why we had to avoid having any dependencies within it.

What we were hoping for is to keep the new "build" aspect extremely simple for the initial reef release (it's literally just a single python file put through python zipapp right now), so that if we messed it up it could be easily worked around by users: they could just rename the "cephadm.py" python script to "cephadm" and it would effectively be the same as before. If there are no issues around it after the initial reef release (the "built" version properly gets published on download.ceph.com, users have no problems, etc.), we were planning to start doing a lot more around it, including breaking the 10000+ line file into multiple, more manageable files, and introducing dependencies for things like this. We have an etherpad talking about it here: https://pad.ceph.com/p/cephadm-refactoring
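The zipapp step mentioned above can be sketched with the standard library alone (a minimal illustration; the file names and the printed message are made up, this is not the actual ceph build script):

```python
# Build a single-file executable archive with python's stdlib zipapp,
# the same mechanism described for the reef cephadm "build".
import pathlib
import subprocess
import sys
import tempfile
import zipapp

tmp = pathlib.Path(tempfile.mkdtemp())
src = tmp / 'app'
src.mkdir()
# Stand-in for the single-file cephadm.py script.
(src / '__main__.py').write_text("print('hello from a zipapp')\n")

target = tmp / 'cephadm'
zipapp.create_archive(src, target=target,
                      interpreter='/usr/bin/env python3')

# The resulting archive runs just like the original script did.
out = subprocess.run([sys.executable, str(target)],
                     capture_output=True, text=True).stdout
print(out.strip())  # hello from a zipapp
```

Once the tool is "built" this way, dependencies could be bundled into the archive alongside `__main__.py`, which is what makes third-party libraries feasible later.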
Makes sense, I'll follow along on the etherpad! And so glad to hear that there are plans to break up the 10k line file into more manageable chunks... Now I see why this is more complex than it seems, as is usually the case! :) I do have a draft PR #50269 in to use the pyyaml library. Do you think I should keep it open with a note that it is dependent on the refactor, or just close it and wait for the refactor to complete? Thanks again for the explanation!
I think whether you keep it open is up to you. If you do, you might end up with the stalebot complaining about lack of activity, but if you're willing to rebase it every once in a while, I personally don't care whether it's kept open or closed and remade later. The reef release is a while out (I think May or June? I don't remember off the top of my head) because it's been pushed back due to issues in our testing lab, but I've linked your PR in the etherpad to try to remember it when the time comes.
I'll leave it open and keep track of rebases as needed (I don't think there will be many conflicts in this area). It will be a good litmus test of the new packaging process as it becomes ready :) thanks again! |
backport tracker: https://tracker.ceph.com/issues/58454
backport of #48496
parent tracker: https://tracker.ceph.com/issues/57870
this backport was staged using ceph-backport.sh version 16.0.0.6848
find the latest version at https://github.com/ceph/ceph/blob/main/src/script/ceph-backport.sh