
Increase memoryLimit on PVC cleanup job to prevent failures #304

Merged: 2 commits merged into devfile:main on Mar 9, 2021

Conversation

@amisevsk (Collaborator) commented Mar 5, 2021

What does this PR do?

This PR is kind of a shot in the dark for resolving #301: after some searching, there are a few Bugzilla reports suggesting that the process used to create the pod may be bumping into the memory limit (e.g. here). I've increased the memoryLimit to 100Mi and tested a few times on one of the clusters that reproduces the issue consistently, and did not see the failure.

Please test thoroughly to make sure this fixes the problem.
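
Concretely, the fix amounts to raising the memory limit on the cleanup job's container. A minimal sketch of the resulting resource stanza, assuming a standard Kubernetes pod template for the job (this is an illustration, not the literal diff):

# Sketch only: the surrounding job/pod template is assumed,
# not copied from the PR. Only the 100Mi value reflects this change.
resources:
  limits:
    memory: 100Mi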

What issues does this PR fix or reference?

Fixes #301

Is it tested? How?

  1. Apply samples/flattened_theia-next.yaml
  2. Delete the theia DevWorkspace and make sure the job completes successfully and the finalizer is cleared (see the commands sketched below).
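
A hedged sketch of these steps, assuming the sample creates a DevWorkspace named theia-next (the workspace name is an assumption, not taken from this PR):

# Step 1: create the workspace from the sample.
kubectl apply -f samples/flattened_theia-next.yaml

# Step 2: delete it; with the fix, this should return rather than hang,
# since the cleanup job can complete and the finalizer gets cleared.
kubectl delete devworkspace theia-next

# Verify nothing is stuck: no workspace left in Terminating, and the
# cleanup job (if still present) shows as completed.
kubectl get devworkspaces
kubectl get jobs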

Setting too low a memory limit on a pod can cause the pod to fail to be
created on the cluster. This results in failures to delete DevWorkspaces
that use storage, as the cleanup job can never complete.

Signed-off-by: Angel Misevski <amisevsk@redhat.com>
@amisevsk (Collaborator, Author) commented Mar 5, 2021

Testing the memoryLimit directly using the following spec:

apiVersion: v1
kind: Pod
metadata:
  name: "test"
  labels:
    app: test
spec:
  containers:
    - image: quay.io/fedora/fedora:34
      name: test
      volumeMounts:
        - mountPath: /tmp/claim-devworkspace
          name: claim-devworkspace
      command: ["tail"]           # keep the container alive
      args: ["-f", "/dev/null"]
      resources:
        limits:
          memory: 100Mi           # the limit under test (compare 32Mi vs 100Mi)
  volumes:
    - name: claim-devworkspace
      persistentVolumeClaim:
        claimName: claim-devworkspace

I see failures when the memory limit is 32Mi and success when it's 100Mi.
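
To reproduce the comparison, save the spec above to a file (test-pod.yaml below is an arbitrary name), make sure a PVC named claim-devworkspace exists in the current namespace, and watch the pod come up:

kubectl apply -f test-pod.yaml
kubectl get pod test -w    # starts with a 100Mi limit; fails with 32Mi
kubectl delete pod test    # clean up afterwards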

@amisevsk (Collaborator, Author) commented Mar 6, 2021

/test v5-devworkspaces-operator-e2e

@sleshchenko (Member) left a comment


I did not test it, but the changes LGTM; who could have guessed it was a memory limit issue =)

@openshift-ci-robot (Collaborator) commented

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: amisevsk, sleshchenko

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [amisevsk,sleshchenko]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@sleshchenko (Member) commented

Merging, but @JPinkney feel free to provide feedback if you have any.

@sleshchenko sleshchenko merged commit 5a72c81 into devfile:main Mar 9, 2021
@amisevsk amisevsk deleted the pvc-cleanup-memory-limit branch February 8, 2023 15:46