
extraVolumeMounts not mounting volumes #369

Closed
amitlt opened this issue Feb 19, 2020 · 3 comments
Labels
bug Something isn't working

Comments

@amitlt
Contributor

amitlt commented Feb 19, 2020


Describe the bug:
Volumes specified under fluentbit.extraVolumeMounts in a logging object do not get mounted.

  1. I installed the operator using the process described in the quickstart here

  2. I created a Logging object from the example given here (except that controlNamespace was set to monitoring)

apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-tls
spec:
  fluentd: {}
  fluentbit:
    extraVolumeMounts:
    - source: /opt/docker
      destination: /opt/docker
      readOnly: true
  controlNamespace: monitoring

Expected behaviour:
A hostPath volume to be created and added to every fluentbit pod in the daemonset.
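For reference, this is roughly the pod spec fragment I expected the operator to render into the fluentbit daemonset (a sketch only; the volume and container names here are my own placeholders, not necessarily what the operator generates):

# sketch of the expected rendered pod spec fragment; names are placeholders
volumes:
- name: extra-volume-0
  hostPath:
    path: /opt/docker
containers:
- name: fluent-bit
  volumeMounts:
  - name: extra-volume-0
    mountPath: /opt/docker
    readOnly: true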

Steps to reproduce the bug:

  1. install the operator using the process described in the quickstart here

  2. create a Logging object from the example given here

apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-tls
spec:
  fluentd: {}
  fluentbit:
    extraVolumeMounts:
    - source: /opt/docker
      destination: /opt/docker
      readOnly: true
  controlNamespace: logging
  3. describe any fluentbit pod to inspect the mounted volumes: kubectl describe pod default-logging-tls-fluentbit-xxxx

in my case, no extra volume was mounted:

Volumes:
  varlibcontainers:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/docker/containers
    HostPathType:
  varlogs:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:
  config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-logging-tls-fluentbit
    Optional:    false
  positiondb:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  buffers:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  default-logging-tls-fluentbit-token-smc5q:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-logging-tls-fluentbit-token-smc5q
    Optional:    false
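
A quicker way to check just the volume names and mount paths (standard kubectl jsonpath; the pod name is the one from step 3, run in the controlNamespace):

kubectl get pod default-logging-tls-fluentbit-xxxx \
  -o jsonpath='{.spec.volumes[*].name}'
kubectl get pod default-logging-tls-fluentbit-xxxx \
  -o jsonpath='{.spec.containers[0].volumeMounts[*].mountPath}'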



Environment details:

  • Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T23:42:50Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2020-01-14T00:09:19Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud-provider/provisioner: kind/minikube (did not work on either)
  • logging-operator version: 2.7.0 (the default in the Helm chart values)
  • Install method: helm
  • Logs from the misbehaving component:
  • Resource definition (possibly in YAML format) that caused the issue, without sensitive data: provided above

/kind bug

amitlt added the bug label Feb 19, 2020
@pepov
Member

pepov commented Feb 19, 2020

hi @amitlt, sorry for the confusion, but that is currently not available in 2.7.0; the docs reflect the state of the master branch. If you need this feature, you have the following options:

  1. Use the master tag for the operator image (see the sketch after this list). In this case watch out, because some previously deprecated fields have been removed; check out: KubernetesStorage type to operator-tools #332
  2. Wait for the next release, which should be coming soon
  3. Backport the commit 37fa29c to the 2.7.x branch and we can build an image from there
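
For option 1, switching the operator image to the master tag would look roughly like this with Helm (a sketch, assuming the chart exposes the usual image.tag value; please double check against the chart's values.yaml):

# sketch only: assumes the operator image tag is overridden via the image.tag value
helm upgrade --install logging-operator banzaicloud-stable/logging-operator \
  --namespace logging \
  --set image.tag=master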

@amitlt
Contributor Author

amitlt commented Feb 23, 2020

Thanks for the speedy reply @pepov
I've set the operator image tag to master and the mounts work wonderfully!

amitlt closed this as completed Feb 23, 2020
@pepov
Member

pepov commented Feb 24, 2020

@amitlt you're welcome. I recommend using 3.0.0-rc.1 instead of master since it's out now.
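
Upgrading to the release candidate would be the same kind of override (same assumption about the image.tag value as in the earlier sketch):

helm upgrade logging-operator banzaicloud-stable/logging-operator \
  --namespace logging \
  --set image.tag=3.0.0-rc.1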
