operate first a pulp+python_plugin instance (or more) #176
Comments
|
I'm gonna move this around a bit.
|
adding @mikedep333 as he is the SME on https://github.com/pulp/pulp-operator
|
we currently provide 3 ways of installing pulp:
We have a brief explanation of them here:
|
Ohh, long time no see, Pulp team! 🙂 How are you doing these days? 🙂 Welcome! I think the
|
Hi @tumido, We would love for you to adopt pulp-operator. What internals do you see as important / remaining to be abstracted away?
|
Hey @mikedep333, we do have one other operator (https://github.com/observatorium/operator) deployed which is in active development. We have set up the crds/clusterroles/bindings in a central location here and other required resources in a separate directory like here.
I don't think there's anything remaining to be abstracted away in the case of the operator. That's why I prefer it as the solution here. 🙂 I think we may get an idea of what might be improved once we start using it. Right now my comment was directed mostly at a comparison of the 3 methods @fao89 outlined above - the operator abstracts away tons of complexity compared to the other installers and is declarative. And we can appreciate that.

I'm gonna go ahead and start creating a namespace for the operator to live in, and we will automate this as a custom deployment of the operator (custom meaning directly deploying the manifests). I'm also gonna create a new user group for you with full access to this new namespace so you can manage and monitor the operator yourself if you want.

The deployment of the operator will be managed via ArgoCD using the manifests copied/referenced from here. Once the operator is available in the community operator hub we can either switch to a deployment from there or keep using a custom "manual" deployment for more rapid dev cycles on it if you want.
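For illustration only, a rough sketch of what such an ArgoCD Application could look like; the ArgoCD namespace, target namespace, and repo path are assumptions, not the actual manifests used:

```bash
# Hypothetical sketch only - namespace, repo path and revision are assumed,
# not taken from the actual operate-first/apps manifests.
cat <<'EOF' | oc apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: pulp-operator
  namespace: argocd                  # assumed ArgoCD namespace
spec:
  project: default
  source:
    repoURL: https://github.com/operate-first/apps
    targetRevision: HEAD
    path: pulp-operator              # assumed path holding the copied manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: pulp                  # assumed namespace for the operator
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
EOF
```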
|
@tumido I'm a noob in the k8s world, I never worked with ArgoCD. I have "CI knowledge" of pulp-operator, meaning I only used pulp-operator in these cases: https://github.com/pulp/pulp-operator/actions/runs/728145612
|
Just a friendly ping here. What is the current state of this? We are monitoring this work in the package index meeting with the Pulp team. Thanks in advance. CC @ipanova
|
Yeah, sorry we had no update on this so far, we've been hammered by a ton of work elsewhere. @fridex I see the operator didn't reach OperatorHub yet, but you have a CSV available. It also seems to me that the cluster role/role specified in the direct manifests is not yet prepared for a cluster-wide install.

@fridex do you want to have the operator scoped only to its own namespace, or available to multiple namespaces? I assume you'd rather have the operator available globally, is that correct? If so, we either have to change the direct manifests a bit or create our own operator catalog source image and install via CSV.
Ideally, the operator could be available globally. Short-term, it would be great for us to have just one instance of pulp in one namespace for a selected group of people; small steps could work here. The very first outcome for us is the fact that we can run pulp on op1st and can experiment with the features it provides. The cluster-scoped operator can be done in parallel (low priority for us now).
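For context, the namespace-scoped vs. globally available distinction usually comes down to the OperatorGroup. A minimal sketch, assuming OLM is used and the operator lives in a `pulp` namespace (all names are illustrative):

```bash
# Sketch only: a single-namespace OperatorGroup keeps the operator scoped
# to its own namespace; dropping targetNamespaces makes it cluster-wide.
cat <<'EOF' | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: pulp-operator-group
  namespace: pulp                    # assumed namespace
spec:
  targetNamespaces:
    - pulp                           # omit this list for an AllNamespaces install
EOF
```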
|
I'm sorry for the constant delays on this. I'm prioritizing this now; I hope I can get something in place in a few days.
|
Hey folks, so.. I can offer you 2 options. I think it's up to you to decide which way is more maintainable for you. Note - either of these solutions is temporary. Once you submit your operator to OperatorHub, this model changes - we would consume the operator manifest via subscription from community-operators.

Option 1 - Direct manifests

Implemented in operate-first/apps#663. The Pulp team would need to track all the cluster-scoped resources for changes and copy any updates into our repository via PRs.
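A rough sketch of what the direct-manifest layout could look like on our side; file names and paths are made up for illustration and are not the actual ones from operate-first/apps#663:

```bash
# Hypothetical layout: cluster-scoped resources copied from pulp-operator
# are listed in a kustomization and applied by ArgoCD / oc.
cat <<'EOF' > kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: pulp
resources:
  - crds.yaml                 # copied CRDs
  - cluster-roles.yaml        # copied ClusterRoles / bindings
  - operator-deployment.yaml  # the operator itself
EOF
oc apply -k .
```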
Option 2 - Install via OLM via a custom catalog

Implemented in operate-first/apps#664. This PR is based on your CSV. My custom catalog is available for you, and there's even an updater script that will keep the catalog up to date with your latest CSV. This option is much easier to migrate once you submit your operator to OperatorHub, since we would just point the subscription at the community-operators catalog instead (see the catalog/subscription sketch after the summary).

Summary

The decision is up to you, both approaches are valid. Either you want to maintain an OLM catalog for your dev purposes (you already have the CSV up to date, so the overhead is not that big), or you'd rather copy and paste the cluster-scoped resources into our repository via PRs. Either is fine with us I think. 🙂
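For reference, a rough sketch of the CatalogSource + Subscription pair that Option 2 describes; the catalog image, channel, and namespaces are assumptions:

```bash
# Sketch only - catalog image, channel and namespaces are assumed.
cat <<'EOF' | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: pulp-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/example/pulp-catalog:latest   # hypothetical catalog image
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: pulp-operator
  namespace: pulp                              # assumed operator namespace
spec:
  channel: alpha                               # assumed channel
  name: pulp-operator
  source: pulp-catalog
  sourceNamespace: openshift-marketplace
EOF
```

Migrating to OperatorHub later would then mostly be a matter of repointing `source` at the community catalog.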
|
we are planning to submit our operator to OperatorHub, so I would vote for option 2
|
Yeah, using the custom catalog/subscription sounds good to me 👍
Thanks, Fabricio. What is needed on the deployment side to apply the change on the Operate First instance? (I'm still getting task errors; I understand your fix is related.)
|
You need to pull the images again, like this:
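The exact command was lost in the quote above; as a rough, assumed equivalent, restarting the rollouts forces the nodes to pull the images again, provided the deployments use `imagePullPolicy: Always` (names below are illustrative):

```bash
# Assumed equivalent only - deployment names and namespace are illustrative,
# and this relies on imagePullPolicy: Always to actually re-pull the images.
oc -n pulp rollout restart deployment/pulp-api deployment/pulp-content deployment/pulp-worker
```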
|
I tried to reprovision the whole pulp instance after some experimental changes. It looks like the whole deployment was brought up with the operator. Unfortunately, when trying to create a pulp python repository, I'm getting the following error on the API:

Thanks for any pointers on how to fix this.
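For anyone retracing this, creating a Python repository goes through the pulp_python REST endpoint; a minimal sketch assuming admin credentials and the public route (the repository name is made up):

```bash
# Sketch: create a python repository via the pulp_python API.
# Assumes admin credentials in $PULP_ADMIN_PASSWORD; the name is illustrative.
curl -u admin:"$PULP_ADMIN_PASSWORD" \
     -H "Content-Type: application/json" \
     -d '{"name": "operate-first"}' \
     https://pulp.operate-first.cloud/pulp/api/v3/repositories/python/python/
```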
|
@gerrod3 I believe we need to pair on this ^
|
Yesterday we had a debugging session with @gerrod3 and @fao89. The issue I reported above does not look relevant anymore after reprovisioning the pulp instance. twine uploads work now; we are able to upload built packages. However, the download is not operational. @gerrod3 suggested adjusting
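For reference, the download path can be checked by pointing pip at the Pulp-hosted index; a sketch assuming a distribution with base path `operate-first` and a pulp_python version that serves the PyPI-compatible `/pypi/` endpoint (older releases serve the simple index from the content app instead):

```bash
# Sketch only - base path and package name are assumptions.
pip install \
    --index-url https://pulp.operate-first.cloud/pypi/operate-first/simple/ \
    example-package   # hypothetical package name
```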
|
@fridex thank you for the summary, that was helpful to understand where the problem is and that there is progress on it.
|
@tumido I've been seeing this in the pulp-operator logs:

I suspect it may be related to NFS settings, could you please take a look?
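A couple of commands that might help narrow this down, assuming the operator runs in a `pulp` namespace under a `pulp-operator` deployment (both names are guesses):

```bash
# Assumed names: "pulp" namespace, "pulp-operator" deployment.
oc -n pulp logs deployment/pulp-operator --tail=100
# Check which storage class (NFS or not) backs the claims:
oc -n pulp get pvc -o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName,STATUS:.status.phase
```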
|
Hey, pulp was migrated to a new cluster - Smaug - because Zero went down. Is there anything to be done for this issue, or can we consider the onboarding complete as of now? I think we should close this issue once the initial setup is done and rather open new issues if any problems show up.
|
Thanks for the work and migration 👍🏻 Checking https://pulp.operate-first.cloud, the index is not yet reachable.
Sounds like a good idea. Let's finish the deployment and get the MVP up so we can close this and report any issues along the way. Thanks again.
|
@asmacdo You might find this interesting, if you want to go back to your pulp roots
|
Facing an issue with getting the pods on the same node, so I created an issue upstream with pulp-operator.
|
With the release of pulp-operator v0.6.1, the read-write issue with node selection is fixed. A newer issue causing the hindrance:
|
Opened upstream issue: pulp/pulp-operator#308
|
I've hit a similar issue to this in the past during development: if I didn't delete the persistent volume claim for postgres (which isn't deleted when you delete your pulp custom resource) and then created a new instance with a new postgres deploy on top of the same PVC, the generated password (secret) for the new pulp instance's postgres didn't match what had been initialized previously, and the db user couldn't connect. Just a thought, in case that was the issue.
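In case it helps others hitting the same thing, the cleanup described above might look roughly like this; the resource names are assumptions, not the actual ones on the cluster:

```bash
# Sketch only - CR, PVC and secret names are assumed for illustration.
oc -n pulp delete pulp example-pulp             # remove the old custom resource
oc -n pulp delete pvc postgres-example-pulp     # the PVC is not removed by the operator
oc -n pulp delete secret example-pulp-postgres  # stale generated postgres password
```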
|
Thank you, that was the issue.
|
The setup is running. Status can be seen here: https://pulp.operate-first.cloud/pulp/api/v3/status/
|
The pulp instance is working and running on the operate-first cluster. The status can be seen here: https://pulp.operate-first.cloud/pulp/api/v3/status/

An admin can create the index and upload packages via twine:
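The actual commands were lost in the quote above; a hedged sketch of what the twine upload might look like, assuming a distribution with base path `operate-first` and pulp_python's PyPI-compatible upload endpoint:

```bash
# Sketch only - base path and credentials handling are assumptions.
twine upload \
    --repository-url https://pulp.operate-first.cloud/pypi/operate-first/legacy/ \
    -u admin -p "$PULP_ADMIN_PASSWORD" \
    dist/*
```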
|
I think this can be closed and we can track issues separately as discussed above. Thanks a lot to all for delivering this solution. 👏🏻 /close
|
@fridex: Closing this issue. In response to this:
@harshad16 @fridex @tumido this needs refinement
@fridex could you add the Pulp team?