
operate first a pulp+python_plugin instance (or more) #176

Closed
goern opened this issue Mar 29, 2021 · 82 comments · Fixed by operate-first/apps#664
Labels: onboarding (Requesting onboarding to a cluster)
goern (Member) commented Mar 29, 2021

Feature: op1st is operating one Pulp instance

Scenario: multi-index pulp
  Given we can deploy Pulp via an Operator and kustomize manifests
  When pip installs a module from it
  And there are multiple variants of the same package version
  Then pip should separate the multiple variants by multiple index URLs

Scenario: use RBAC on multi-index pulp
  Given I have access to Pulp
  When I publish a module to an index URL
  And when I am not the 'owner' of that index
  Then I should be denied from publishing the module

Scenario: publish module
  Given I am a Tekton pipeline user
  When I publish a module to an index URL
  Then I see the module on the simple index
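A minimal sketch of how the multi-index separation could look from the consumer side; the index URLs, variant names, and helper below are hypothetical, assuming each package variant is published to its own Pulp distribution:

```python
# Sketch: one simple-index URL per package variant keeps pip's resolution
# unambiguous. All URLs and names here are hypothetical examples.
def pip_install_cmd(package, index_url):
    """Return the argv for installing `package` from a single simple index."""
    return ["pip", "install", "--index-url", index_url, package]

variants = {
    "cpu": "https://pulp.example.com/pypi/tensorflow-cpu/simple/",
    "gpu": "https://pulp.example.com/pypi/tensorflow-gpu/simple/",
}
commands = {name: pip_install_cmd("tensorflow", url) for name, url in variants.items()}
```

Pointing each consumer at exactly one variant's index avoids pip picking an arbitrary build when several variants share the same version number.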

@harshad16 @fridex @tumido this needs refinement

@fridex could you add the Pulp team?

tumido (Member) commented Mar 29, 2021

I'm gonna move this around a bit.

@tumido tumido transferred this issue from operate-first/apps Mar 29, 2021
@tumido tumido added this to Backlog in Master Board via automation Mar 29, 2021
@tumido tumido added the onboarding Requesting onboarding to a cluster label Mar 29, 2021
fridex commented Mar 29, 2021

> @fridex could you add the Pulp team?

CC @ipanova @fao89 @dralley

Feel free to add others you find relevant. As discussed in the meeting, we would like to deploy pulp with the pulp_python plugin.

fao89 commented Mar 29, 2021

adding @mikedep333 as he is the SME on https://github.com/pulp/pulp-operator

fao89 commented Mar 29, 2021

tumido (Member) commented Mar 30, 2021

Ohh, long time no see, Pulp team! 🙂 How are you doing these days? 🙂 Welcome!

I think the pulp-operator serves our purpose the best. I think we'd like to be abstracted from the internals of Pulp as much as possible. If the experience of running an operator in active development in a prod-like environment would benefit the Pulp team, I see that as a plus as well.

@HumairAK HumairAK moved this from Backlog to To do/Next in Master Board Mar 31, 2021
@HumairAK HumairAK moved this from To do/Next to Backlog in Master Board Mar 31, 2021
mikedep333 commented Apr 1, 2021

Hi @tumido,

We would love for you to adopt pulp-operator.

What internals do you see as important / remaining to be abstracted away?

4n4nd commented Apr 1, 2021

Hey @mikedep333, we do have one other operator (https://github.com/observatorium/operator) deployed which is in active development. We have set up the crds/clusterroles/bindings in a central location here and other required resources in a separate directory like here.
You should be able to follow the same structure for setting up the pulp-operator. If you have any suggestions or questions please lmk.

tumido (Member) commented Apr 6, 2021

@mikedep333

> What internals do you see as important / remaining to be abstracted away?

I don't think there's anything remaining to be abstracted away in the case of the operator. That's why I prefer it as the solution here. 🙂 I think we may get an idea of what might be improved once we start using it. Right now my comment was directed mostly at comparing the 3 methods @fao89 outlined above - the operator abstracts away tons of complexity compared to the other installers and is declarative. And we can appreciate that.

I'm gonna go ahead and start creating a namespace for the operator to live in - and we will automate this as a custom deployment of the operator (custom meaning directly deploying the Deployment resource, creating the service account and so on) into this new namespace - similar to the observatorium operator @4n4nd linked above.

I'm also gonna create a new user group for you with full access to this new namespace so you can manage and monitor the operator yourself if you want.

The deployment of the operator will be managed via ArgoCD using the manifests copied/referenced from here:
https://github.com/pulp/pulp-operator/tree/main/deploy

Once the operator is available in the community operator hub we can either switch to a deployment from there or keep using a custom "manual" deployment for more rapid dev cycles on it if you want.

fao89 commented Apr 8, 2021

@tumido I'm a noob in the k8s world; I've never worked with ArgoCD. I have "CI knowledge" of pulp-operator, meaning I've only used pulp-operator in these cases: https://github.com/pulp/pulp-operator/actions/runs/728145612
But I see a great opportunity for us to improve our docs: https://pulp-operator.readthedocs.io/en/latest/
Let me know how I can help, or at least what you are missing from the docs.

fridex commented May 6, 2021

Just a friendly ping here. What is the current state of this? We are monitoring this work on the package index meeting with Pulp team. Thanks in advance.

CC @ipanova

tumido (Member) commented May 6, 2021

Yeah, sorry we had no update on this so far; we've been hammered by a ton of work elsewhere. @fridex

I see the operator hasn't reached OperatorHub yet, but you have a CSV available. It also seems to me that the cluster role/role specified in the direct manifests is not yet prepared for an AllNamespaces deployment, and if we deploy the operator this way the scope is limited to its current namespace only - is that a correct observation?

@fridex do you want the operator scoped only to its own namespace, or available to multiple namespaces? I assume you'd rather have the operator available globally, is that correct? If so, we either have to change the direct manifests a bit or create our own operator catalog source image and install via CSV.

fridex commented May 6, 2021

> @fridex do you want the operator scoped only to its own namespace, or available to multiple namespaces? I assume you'd rather have the operator available globally, is that correct? If so, we either have to change the direct manifests a bit or create our own operator catalog source image and install via CSV.

Ideally, the operator would be available globally. Short-term, it would be great for us to have just one instance of pulp in one namespace for a selected group of people; small steps could work here. The very first outcome for us is being able to run pulp on op1st and experiment with the features it provides. The cluster-scoped operator can be done in parallel (low priority for us now).

@tumido tumido removed this from Backlog in Master Board May 14, 2021
@tumido tumido added this to Backlog (undiscussed) in Undecided via automation May 14, 2021
@tumido tumido moved this from Backlog (undiscussed) to High Priority in Undecided May 14, 2021
tumido (Member) commented May 14, 2021

I'm sorry for the constant delays on this. I'm prioritizing it now; I hope I can get something in place in a few days.

tumido (Member) commented May 18, 2021

Hey folks, so... I can offer you 2 options. I think it's up to you to decide which way is more maintainable for you. Note: either of these solutions is temporary. Once you submit your operator to OperatorHub, this model changes - we would consume the operator manifest via a subscription from community-operators.

Option 1 - Direct manifests

Implemented in operate-first/apps#663

The Pulp team would need to track changes to all the CustomResourceDefinitions, ClusterRoles... basically any resource defined in that PR.

  1. You would need to maintain it and update all the manifests defined within the cluster-scope/ path in our repos - basically copy and paste those resources back here whenever they change in your repos.
  2. The resources within the pulp-operator/ path in that PR are transferable and can be deployed from any repo, since they are namespace scoped and you already have full control over the pulp-operator namespace.

Option 2 - Install via OLM via a custom catalog

Implemented in operate-first/apps#664

This PR is based on your ClusterServiceVersion and defines a CatalogSource. Right now it points to my image, but the intention is that you own this image and keep the content of the catalog updated: every time you change the CSV in your repos, you also update the catalog image. You can either use your own catalog or base it on the one I've created for this purpose.

My custom catalog is available for you, there's even an updater script that will keep the catalog up to date with pulp-operator repository master branch. Once you push an updated catalog image, the rest of the update in cluster happens automatically.

This option is much easier to migrate once you submit your operator to OperatorHub since we would just point the Subscription resource to a different catalog.

Summary

The decision is up to you, both approaches are valid. Either you want to maintain an OLM catalog for your dev purposes (you already have CSV up to date, so the overhead is not that big) or you'd rather copy and paste the cluster-scoped resources into our repository via PRs. Either is fine with us I think. 🙂

cc @fao89 @fridex @ipanova

fao89 commented May 18, 2021

We are planning to submit our operator to OperatorHub, so I would vote for option 2.

tumido (Member) commented May 19, 2021

cc @HumairAK @4n4nd are we also good on using the custom catalog/subscription for the time being (until the pulp operator reaches OperatorHub)?

4n4nd commented May 19, 2021

yeah using the custom catalog/subscription sounds good to me 👍

fridex commented Jul 15, 2021

> The twine support should be fixed, thanks to @gerrod3 - now waiting for pulp/pulp-operator#165 to get merged.

> I just merged it!

Thanks, Fabricio. What is needed on the deployment side to apply the change on the Operate First instance? (I'm still getting task errors; I understand your fix is related.)

fao89 commented Jul 15, 2021

you need to pull the images again, like this
#176 (comment)

fridex commented Jul 19, 2021

I tried to reprovision the whole pulp instance after some experimental changes. It looks like the whole deployment was brought up by the operator. Unfortunately, when trying to create a pulp python repository, I'm getting the following error from the API:

Internal Server Error: /pulp/api/v3/repositories/python/python/
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
psycopg2.errors.UndefinedColumn: column "retained_versions" of relation "core_repository" does not exist
LINE 1: ...thoth', "description" = NULL, "next_version" = 0, "retained_...
                                                             ^


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 34, in inner
    response = get_response(request)
  File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 115, in _get_response
    response = self.process_exception_by_middleware(e, request)
  File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 113, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
    return view_func(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
    return self.dispatch(request, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
    response = self.handle_exception(exc)
  File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 469, in handle_exception
    self.raise_uncaught_exception(exc)
  File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
    raise exc
  File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
    response = handler(request, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py", line 19, in create
    self.perform_create(serializer)
  File "/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py", line 24, in perform_create
    serializer.save()
  File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 205, in save
    self.instance = self.create(validated_data)
  File "/usr/local/lib/python3.9/site-packages/pulpcore/app/serializers/base.py", line 161, in create
    instance = super().create(validated_data)
  File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 939, in create
    instance = ModelClass._default_manager.create(**validated_data)
  File "/usr/local/lib/python3.9/site-packages/django/db/models/manager.py", line 82, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 422, in create
    obj.save(force_insert=True, using=self.db)
  File "/usr/local/lib/python3.9/site-packages/pulpcore/app/models/repository.py", line 90, in save
    super().save(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/pulpcore/app/models/base.py", line 149, in save
    return super().save(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/django_lifecycle/mixins.py", line 134, in save
    save(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/django/db/models/base.py", line 743, in save
    self.save_base(using=using, force_insert=force_insert,
  File "/usr/local/lib/python3.9/site-packages/django/db/models/base.py", line 779, in save_base
    parent_inserted = self._save_parents(cls, using, update_fields)
  File "/usr/local/lib/python3.9/site-packages/django/db/models/base.py", line 808, in _save_parents
    updated = self._save_table(
  File "/usr/local/lib/python3.9/site-packages/django/db/models/base.py", line 853, in _save_table
    updated = self._do_update(base_qs, using, pk_val, values, update_fields,
  File "/usr/local/lib/python3.9/site-packages/django/db/models/base.py", line 903, in _do_update
    return filtered._update(values) > 0
  File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 760, in _update
    return query.get_compiler(self.db).execute_sql(CURSOR)
  File "/usr/local/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 1471, in execute_sql
    cursor = super().execute_sql(result_type)
  File "/usr/local/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 1142, in execute_sql
    cursor.execute(sql, params)
  File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 99, in execute
    return super().execute(sql, params)
  File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 67, in execute
    return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
  File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
    return executor(sql, params, many, context)
  File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
  File "/usr/local/lib/python3.9/site-packages/django/db/utils.py", line 89, in __exit__
    raise dj_exc_value.with_traceback(traceback) from exc_value
  File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: column "retained_versions" of relation "core_repository" does not exist
LINE 1: ...thoth', "description" = NULL, "next_version" = 0, "retained_...
                                                             ^

pulp [1f49704d05e94a3fba0980b84bd254ee]: django.request:ERROR: Internal Server Error: /pulp/api/v3/repositories/python/python/

pulp [d073c5b6fcbd47a5997cc442f1ff3863]: pulpcore.app.views.status:ERROR: Connection to Redis failed during status check!
pulp [b6e1dc5d4d5645eab6202b9576fe5615]: pulpcore.app.views.status:ERROR: Connection to Redis failed during status check!

Thanks for any pointers on how to fix this.

fao89 commented Jul 19, 2021

@gerrod3 I believe we need to pair on this ^

fridex commented Jul 23, 2021

Yesterday we had a debugging session with @gerrod3 and @fao89. The issue I reported above no longer looks relevant after reprovisioning the pulp instance. twine uploads work now; we are able to upload built packages. However, downloads are not operational. @gerrod3 suggested adjusting the PULP_CONTENT_ORIGIN (see docs) environment variable for the api service so it correctly redirects to the content service. We are also hitting limits on the resources allocated for the namespace - see the #321 request.
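As a rough illustration of why that setting matters: pulpcore composes artifact download URLs from the content origin plus a path prefix. The sketch below is simplified, and the values are examples, not this deployment's actual configuration:

```python
# Simplified sketch: pulpcore builds download URLs from CONTENT_ORIGIN
# (settable via PULP_CONTENT_ORIGIN) plus CONTENT_PATH_PREFIX. If the
# origin advertises the wrong host, the api service redirects clients to
# an unreachable content URL. Values below are illustrative examples.
def content_url(content_origin, content_path_prefix, relative_path):
    return content_origin.rstrip("/") + content_path_prefix + relative_path

url = content_url(
    "https://pulp.operate-first.cloud",  # externally reachable origin
    "/pulp/content/",                    # pulpcore's default path prefix
    "hello-world/example-1.0-py3-none-any.whl",
)
```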

ipanova commented Jul 23, 2021

@fridex thank you for the summary; that was helpful for understanding where the problem is and that there is progress on it.

fao89 commented Aug 31, 2021

@tumido I've been seeing this in the pulp-operator logs:

TASK [pulp-api : pulp-file-storage persistent volume claim] ********************
task path: /opt/ansible/roles/pulp-api/tasks/main.yml:21
failed: [localhost] (item=pulp-file-storage) => {"ansible_loop_var": "item", "changed": false, "error": 403, "item": "pulp-file-storage", "msg": "Failed to patch object: persistentvolumeclaims \"example-pulp-file-storage\" is forbidden: exceeded quota: thoth-pulp-experiments-custom, requested: requests.storage=50Gi, used: requests.storage=59Gi, limited: requests.storage=60Gi", "reason": "Forbidden", "status": 403}

I suspect it may be related to NFS settings, could you please take a look?

tumido (Member) commented Oct 18, 2021

Hey, pulp was migrated to a new cluster - Smaug - because Zero went down. Is there anything left to be done for this issue, or can we consider the onboarding complete as of now?

I think we should consider closing this issue if the initial setup is done, and instead open new issues if any problems show up.

fridex commented Oct 18, 2021

Thanks for the work and migration 👍🏻

Checking https://pulp.operate-first.cloud, the index is not yet reachable.

> I think we should consider closing this issue if the initial setup is done, and instead open new issues if any problems show up.

Sounds like a good idea. Let's finish the deployment and get the MVP up so we can close this and report any issues along the way.

Thanks again.

schwesig (Contributor) commented

@asmacdo You might find this interesting, if you want to go back to your pulp roots

harshad16 (Member) commented

We are facing an issue with getting the pods onto the same node, so I created an upstream issue with pulp-operator:
pulp/pulp-operator#289

harshad16 (Member) commented

With the release of pulp-operator v0.6.1, the read/write issue with node selection is fixed.

A newer issue is causing a hindrance:

  File "/usr/local/lib64/python3.9/site-packages/psycopg2/__init__.py", line 122, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: FATAL:  password authentication failed for user "pulp"

harshad16 (Member) commented

Opened an upstream issue: pulp/pulp-operator#308

chambridge commented

I've hit a similar issue in the past during development when I didn't delete the persistent volume claim for Postgres (which isn't deleted when you delete your Pulp custom resource) and then created a new instance with a new Postgres deployment on top of the same PVC. The generated password (secret) for the new Pulp instance's Postgres doesn't match what was initialized previously, so the DB user can't authenticate. Just a thought, in case that was the issue.
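The recovery chambridge describes could be sketched as the command sequence below; the resource names are hypothetical placeholders, not the actual objects in this cluster:

```python
# Hypothetical cleanup order: delete the Pulp CR, then the leftover
# Postgres PVC (which CR deletion does not remove), then recreate the
# instance so a fresh password secret matches a fresh database.
cleanup = [
    ["oc", "delete", "pulp", "example-pulp"],
    ["oc", "delete", "pvc", "postgres-example-pulp"],  # stale password lives here
    ["oc", "apply", "-f", "pulp-cr.yaml"],             # fresh instance, fresh secret
]
```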

harshad16 (Member) commented

Thank you, that was the issue. Deleting the old PVC helped.

harshad16 (Member) commented Dec 9, 2021

The setup is running.
The API is having trouble connecting to the Redis DB:

pulp [eff4343985714d8bb0b891b0b6e3e5ce]: pulpcore.app.views.status:ERROR: Connection to Redis failed during status check!

status can be seen here: https://pulp.operate-first.cloud/pulp/api/v3/status/

harshad16 (Member) commented

The pulp instance is working and running on the operate-first cluster.
https://console-openshift-console.apps.smaug.na.operate-first.cloud/k8s/ns/opf-pulp/pods

The status can be seen here: https://pulp.operate-first.cloud/pulp/api/v3/status/
Example upload: https://pulp.operate-first.cloud/pypi/hello-world/simple/aicoe-hello-world/

Admin can create the index:

pulp python repository create --name hello-world
pulp python distribution create --base-path hello-world --name hello-world --repository hello-world

and upload the package via twine:

twine upload dist/* --repository-url="https://pulp.operate-first.cloud/pypi/hello-world/simple/"
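From the consumer side, the simple-index URL follows from the distribution's base path. A small sketch; the helper function is mine for illustration, not part of the pulp CLI:

```python
# Derive the simple-index URL for a pulp_python distribution from its
# base path, then build the matching pip invocation. Illustrative only.
def simple_index_url(pulp_base, base_path):
    return f"{pulp_base}/pypi/{base_path}/simple/"

index = simple_index_url("https://pulp.operate-first.cloud", "hello-world")
pip_cmd = ["pip", "install", "--index-url", index, "aicoe-hello-world"]
```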

fridex commented Jan 17, 2022

I think this can be closed and we can track issues separately as discussed above.

Thanks a lot to all for delivering this solution. 👏🏻

/close

@sesheta sesheta closed this as completed Jan 17, 2022
sesheta (Member) commented Jan 17, 2022

@fridex: Closing this issue.

In response to this:

> I think this can be closed and we can track issues separately as discussed above.
>
> Thanks a lot to all for delivering this solution. 👏🏻
>
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
