tar archive is created on /home/k6 instead of tmp #160

Closed
rgordill opened this issue Oct 24, 2022 · 5 comments · Fixed by #202
Labels
bug (Something isn't working), good first issue (Good for newcomers)

Comments

@rgordill

Brief summary

The tar archive is created in /home/k6 instead of /tmp, which means the user has to be 65534 or otherwise have write access to that folder.

https://github.com/grafana/k6-operator/blob/main/pkg/resources/jobs/initializer.go#L57

If it were created in the /tmp folder instead, any user could run the tests without needing any particular UID privilege.
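
For illustration only, a rough sketch of that idea in Go. This is not the actual code in pkg/resources/jobs/initializer.go; the helper name and archive path are placeholders, and -O/--archive-out is the flag k6 archive uses to set the output filename.

// Hypothetical sketch, assuming the initializer builds a shell command
// around `k6 archive`; only the "write under /tmp" idea is the point here.
package jobs

import "fmt"

// buildArchiveCommand writes the archive under /tmp (mode 1777, writable by
// any UID) instead of the runner image's /home/k6, which is owned by 65534.
func buildArchiveCommand(scriptPath string) []string {
    const archivePath = "/tmp/k6.archive.tar" // placeholder path
    return []string{
        "sh", "-c",
        // -O sets the archive output filename.
        fmt.Sprintf("k6 archive %s -O %s", scriptPath, archivePath),
    }
}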

k6-operator version or image

latest (8.0.0rc3)

K6 YAML

apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-sample-with-extensions
  namespace: k6-tests
spec:
  parallelism: 4
  script:
    configMap:
      name: sample-stress-test
      file: test.js
  arguments: --http-debug -o output-prometheus-remote
  runner:
    image: xxxx:latest

Other environment details (if applicable)

No response

Steps to reproduce the problem

Run k6 tests from a namespace with restricted users, for example:

securityContext:
  runAsUser: 1000

Expected behaviour

The archive is stored and read back successfully.

Actual behaviour

Permission denied errors.

rgordill added the bug label on Oct 24, 2022
yorugac added the good first issue label on Nov 4, 2022
@rgordill
Author

Any update on this? In a secured cluster, administrators usually do not allow running containers with a specific, predefined user.

Thanks.

@yorugac
Collaborator

yorugac commented Jan 26, 2023

Sadly, not yet. I don't think this is a hard issue to fix or test, hence the 'good first issue' label - I was kind of wondering if someone would want to push a contribution, and then forgot about it myself... @rgordill, thanks for the ping; I'll see if we can move it up and fix it sooner.

@yorugac
Collaborator

yorugac commented Feb 9, 2023

@rgordill I've grokked this problem a bit more and realized that I probably misunderstood it initially 😅 More precisely, I'm no longer sure what you mean by a "restricted namespace". The initializer pod is not using a security context at all at the moment: shouldn't it be used for all pods in your setup? E.g. what you described in the "steps to reproduce" is not actually sufficient to reproduce the bug.

We can definitely add a /tmp folder to the archive path, as here, if that would help your use case, but for my understanding, could you please clarify a bit more what "restricted namespace" means?

@rgordill
Author

rgordill commented Feb 9, 2023

Hi, @yorugac.

When a container writes a file to a particular folder, it needs privileges to do so; in this case, it needs to run as user 65534. Although the initializer does not set a security context, some security products can, through webhooks, enforce that the user is randomized, to be more secure.

That leads to this issue, and the security constraints then have to be relaxed just because of the need to write to that folder. If the selected folder is writable by anyone (like /tmp), then the container can run as any user and security stays at its maximum.

@yorugac
Collaborator

yorugac commented Feb 9, 2023

"some security products can, through webhooks, enforce that the user is randomized, to be more secure."

This is the part I didn't realize. It sounds like something along the lines of the OpenShift guidelines, I suppose.

OK, I'll finish the fix and merge it in. Thank you for the clarification!
