This repository has been archived by the owner on May 16, 2023. It is now read-only.

Cannot mount elasticsearch keystore in pod, device busy #90

Closed
jhouston1604 opened this issue Apr 8, 2019 · 19 comments

@jhouston1604

I'm trying to mount the elasticsearch keystore per the documentation but I keep getting the error below. I've verified that the elasticsearch keystore file is valid (by adding it to a test container and testing the values).

Error:

Exception in thread "main" java.nio.file.FileSystemException: /usr/share/elasticsearch/config/elasticsearch.keystore.tmp -> /usr/share/elasticsearch/config/elasticsearch.keystore: Device or resource busy

Steps taken:

  1. Command to create the secret:
    kubectl create secret generic elasticsearch-keystore --from-file=./elasticsearch.keystore

  2. SecretMounts in my yaml configuration:

  secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/elasticsearch/config/certs
  - name: elastic-license
    secretName: elastic-license
    path: /usr/share/elasticsearch/config/license
  - name: elasticsearch-keystore
    secretName: elasticsearch-keystore
    path: /usr/share/elasticsearch/config/elasticsearch.keystore
    subPath: elasticsearch.keystore 

Is there something I am missing?

@Crazybus
Contributor

It looks like you did everything right to me.

Can you give me the output of the following commands:

You can attach to one of the containers by running kubectl exec -ti elasticsearch-master-0 bash

head -n1 /usr/share/elasticsearch/config/elasticsearch.keystore
ls -lhatr /usr/share/elasticsearch/config/
df -h
elasticsearch-keystore list
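
If it is easier, the same checks can be run in one go from outside the pod. A convenience sketch (adjust the pod name to match your release):

kubectl exec -ti elasticsearch-master-0 -- bash -c '
  head -n1 /usr/share/elasticsearch/config/elasticsearch.keystore
  ls -lhatr /usr/share/elasticsearch/config/
  df -h
  elasticsearch-keystore list
'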

Could you also give me the following:

  1. The rest of your configuration
  2. Output of kubectl get events after attempting to deploy
  3. Kubernetes provider and version (e.g. Google Kubernetes Engine 1.12)

@ofaz

ofaz commented Jun 2, 2019

I'm also running into this same issue after following the instructions for using the keystore in the readme - created the same way as in this issue.

This is running on GKE 1.11.8-gke-6 and I'm not seeing anything of note in kubectl get events; it just appears to create the container, pull the image, etc., and then "back off restarting failed container".

Full error log from the container is:

Exception in thread "main" java.nio.file.FileSystemException: /usr/share/elasticsearch/config/elasticsearch.keystore.tmp -> /usr/share/elasticsearch/config/elasticsearch.keystore: Device or resource busy
	at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:100)
	at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
	at java.base/sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:417)
	at java.base/sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:267)
	at java.base/java.nio.file.Files.move(Files.java:1424)
	at org.elasticsearch.common.settings.KeyStoreWrapper.save(KeyStoreWrapper.java:500)
	at org.elasticsearch.common.settings.AddStringKeyStoreCommand.execute(AddStringKeyStoreCommand.java:97)
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124)
	at org.elasticsearch.cli.MultiCommand.execute(MultiCommand.java:77)
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124)
	at org.elasticsearch.cli.Command.main(Command.java:90)
	at org.elasticsearch.common.settings.KeyStoreCli.main(KeyStoreCli.java:41)

@Crazybus
Contributor

Crazybus commented Jun 3, 2019

@ofaz Thanks for the report! Could you give me the following information to try to reproduce it:

  1. Output of helm get elasticsearch (or whatever your release name is)
  2. Output of kubectl get pod -o yaml elasticsearch-master-0
  3. Output of kubectl get events
  4. The output of these commands run from within one of the containers:
head -n1 /usr/share/elasticsearch/config/elasticsearch.keystore
ls -lhatr /usr/share/elasticsearch/config/
df -h
elasticsearch-keystore list

@Crazybus
Contributor

Crazybus commented Jun 3, 2019

I just bumped one of our internal clusters to use 7.1.0 and got the same error. So this seems to be an issue with 7.1.0 accessing the keystore differently from previous releases. No need to send any more debug information now that I can reproduce it.

@Crazybus Crazybus self-assigned this Jun 3, 2019
@Crazybus
Contributor

Crazybus commented Jun 3, 2019

Well that was a fun journey. I found it though!

In elastic/elasticsearch#41701 the internal format for the keystore was bumped from version 3 to version 4. On startup Elasticsearch notices this and attempts to upgrade the format of the keystore. All mounted secrets and configmaps in Kubernetes are always read-only, so when Elasticsearch tries to make the change you get the error seen above.

So the fix is to either:

  1. Make sure you are creating the keystore with the right version of Elasticsearch. So if you are using 7.1.1 in the cluster, you should use the same version when creating the keystore.
  2. If you already have an existing keystore, run elasticsearch-keystore upgrade with the right Elasticsearch version to get it upgraded to the right format (see the sketch below).
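
A rough sketch of both options, assuming you build or upgrade the keystore inside the official Docker image so the versions match (adjust the image tag and paths to your setup, and note the mounted directory needs to be writable by the container user):

# Option 1: create a fresh keystore with the same image version that runs in the cluster
docker run --rm -v "$PWD:/keystore" docker.elastic.co/elasticsearch/elasticsearch:7.1.1 \
  bash -c 'elasticsearch-keystore create && cp config/elasticsearch.keystore /keystore/'

# Option 2: upgrade an existing keystore to the current format
docker run --rm -v "$PWD:/keystore" docker.elastic.co/elasticsearch/elasticsearch:7.1.1 \
  bash -c 'cp /keystore/elasticsearch.keystore config/ && elasticsearch-keystore upgrade && cp config/elasticsearch.keystore /keystore/'

# Recreate the secret from the updated file so the pods pick it up
kubectl create secret generic elasticsearch-keystore --from-file=./elasticsearch.keystore --dry-run -o yaml | kubectl apply -f -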

@JanKowalik

JanKowalik commented Jul 17, 2019

Hi,
I have the same issue with version 6.8.1. Keystore was created using the same version, so the fix above does not do the trick for me.

@Crazybus
Contributor

@JanKowalik if it really was made with the same version then this sounds like a different issue. Can you give me the exact commands you used to create the keystore and the output of helm get elasticsearch (replace elasticsearch with your release name).

@JanKowalik

I will try it again to make sure, and if it does not work I will provide the information you are asking for.
Thank you.

@JanKowalik

JanKowalik commented Jul 18, 2019

It did not work this time either.
I generated the keystore within docker using Elasticsearch 6.8.1 and then created a secret out of it.
The command I ran within the docker image is:
elasticsearch-keystore create
I moved the created file out of the docker container and used this command to create the secret:
kubectl create secret generic elasticsearch-keystore --from-file=./elasticsearch-keystore/elasticsearch.keystore -o yaml --dry-run > manifests/elasticsearch-keystore-config-secret.yaml

I used the helm chart to generate the manifests only. I can attach the manifest files and values I used if that helps.

elasticsearch-master-nodes.txt
values-data-nodes.txt
values-master-nodes.txt
elasticsearch-data-nodes.txt

I did not include secrets here, and the image needs to be replaced with:
docker.elastic.co/elasticsearch/elasticsearch:6.8.1
I use kustomize for that.

The error message I get:

Exception in thread "main" java.nio.file.FileSystemException: /usr/share/elasticsearch/config/elasticsearch.keystore.tmp -> /usr/share/elasticsearch/config/elasticsearch.keystore: Device or resource busy
        at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:100)
        at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
        at java.base/sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:417)
        at java.base/sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:267)
        at java.base/java.nio.file.Files.move(Files.java:1424)
        at org.elasticsearch.common.settings.KeyStoreWrapper.save(KeyStoreWrapper.java:500)
        at org.elasticsearch.common.settings.AddStringKeyStoreCommand.execute(AddStringKeyStoreCommand.java:97)
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124)
        at org.elasticsearch.cli.MultiCommand.execute(MultiCommand.java:77)
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124)
        at org.elasticsearch.cli.Command.main(Command.java:90)
        at org.elasticsearch.common.settings.KeyStoreCli.main(KeyStoreCli.java:41)

@Crazybus
Contributor

@JanKowalik Thank you for providing the extra details. I think that the issue you are running into is because you have the $ELASTIC_PASSWORD set but aren't adding it to your keystore as the bootstrap password. The docker image startup script tries to add this to the keystore when it is set.

You want to add it with:

elasticsearch-keystore add -x bootstrap.password

Once #154 is finished off there won't be any need to manually create and update the keystore anymore.
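
In the meantime, a minimal sketch of that workflow, assuming the keystore is built inside a docker.elastic.co/elasticsearch/elasticsearch:6.8.1 container and $ELASTIC_PASSWORD holds the same value the chart deploys (the secret name is illustrative):

# inside the 6.8.1 container
elasticsearch-keystore create
echo "$ELASTIC_PASSWORD" | elasticsearch-keystore add -x bootstrap.password

# then copy elasticsearch.keystore out of the container and recreate the secret from it
kubectl create secret generic elasticsearch-keystore --from-file=./elasticsearch.keystore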

@JanKowalik

Yeah, I did not add the bootstrap.password. I did not think it was necessary if I am using the default password.
I will give it a go and report back.

Thank you for your help!

@JanKowalik

Nearly works. But I think this is a different problem now.
How does setting bootstrap.password influence the elastic user password? I have 2 out of 3 nodes in each of the master-nodes and data-nodes groups connecting fine, but the third in each is complaining about authentication.

[2019-07-18T16:20:48,202][INFO ][o.e.x.s.a.AuthenticationService] [elasticsearch-data-2] Authentication of [elastic] was terminated by realm [reserved] - failed to authenticate user [elastic]

@Crazybus
Contributor

Nearly works. But I think this is a different problem now.
How does setting bootstrap.password influence the elastic user password? I have 2 out of 3 nodes in each of the master-nodes and data-nodes groups connecting fine, but the third in each is complaining about authentication.

Which version of Elasticsearch are you running? In the manifest output I see the image is image: "elasticsearch:pulse". The reason that I ask is that in Elasticsearch versions before security was included in basic (before the 6.8 release) you needed to activate the license for the cluster to fully form. What would happen is that the first 2 nodes would start bootstrapping the cluster, and the third one would fail to join until the license was enabled. The output from curl -u elastic:$ELASTIC_PASSWORD localhost:9200/ will show the version.

@JanKowalik

It uses version 6.8.1

@Crazybus
Contributor

How does setting bootstrap.password influence the elastic user password?

To be clear, the bootstrap.password should be set to what you have for $ELASTIC_PASSWORD. Have all nodes in the cluster been restarted with the new keystore btw?

@JanKowalik

It is all working fine now. I deleted everything and recreated a cluster from scratch and it worked.

I tried scaling everything down to 0 and then back up, but it did not help. Not sure why. After that all nodes had authentication errors. Maybe adding bootstrap.password to an existing cluster is not advisable.

Thank you for your time and help.

@JanKowalik

JanKowalik commented Jul 19, 2019

I now have an issue with the kibana-keystore when it is mounted as a k8s secret.

Configuring logger failed: { Error: EISDIR: illegal operation on a directory, read
    at Object.readSync (fs.js:494:3)
    at tryReadSync (fs.js:333:20)
    at readFileSync (fs.js:370:19)
    at Keystore.load (/usr/share/kibana/src/server/keystore/keystore.js:97:45)
    at new Keystore (/usr/share/kibana/src/server/keystore/keystore.js:46:10)
    at readKeystore (/usr/share/kibana/src/cli/serve/read_keystore.js:40:20)
    at applyConfigOverrides (/usr/share/kibana/src/cli/serve/serve.js:186:41)
    at applyConfigOverrides (/usr/share/kibana/src/cli/serve/serve.js:50:42)
    at config_1.RawConfigService.rawConfig (/usr/share/kibana/src/core/server/bootstrap.js:33:134)
    at MapSubscriber.RawConfigService.config$.rawConfigFromFile$.pipe.operators_1.map.rawConfig [as project] (/usr/share/kibana/src/core/server/config/raw_config_service.js:41:24) errno: -21, syscall: 'read', code: 'EISDIR' }

 FATAL  Error: EISDIR: illegal operation on a directory, read

@Crazybus: Shall I open another ticket for that?

Crazybus added a commit that referenced this issue Jul 22, 2019
Closes: #90

Adds a kubernetes native way to add strings and files to the
Elasticsearch keystore.

Previously you needed to manually create the keystore and upload
it as a secret. There were a couple of issues with this approach.

1. The Elasticsearch keystore has an internal version for its format. If
this changed, every keystore needed to be recreated.

2. Adding a single new value meant recreating the entire keystore.
@Crazybus
Contributor

@Crazybus: Shall I open another ticket for that?

Yes please! If I'm honest I have never actually used the keystore for Kibana with the helm-charts. My bet is that it's going to be failing for the same issue as Elasticsearch (the docker image trying to automatically add the ELASTICSEARCH_PASSWORD to the keystore on startup).

@Crazybus
Contributor

The changes in #90 will also be ported to the other charts which will make this a lot easier to manage.
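
Once that lands, the chart values should only need to reference Kubernetes secrets instead of a pre-built keystore file. A hypothetical sketch of what that configuration could look like (the secret name is illustrative; check the chart README for the actual schema once the change is released):

keystore:
  - secretName: elasticsearch-keystore-values

where each key/value in the referenced secret would end up as an entry in the Elasticsearch keystore.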

Crazybus added a commit that referenced this issue Aug 1, 2019
Crazybus added a commit that referenced this issue Aug 2, 2019