
Kibana server is not ready yet #25806

Open
st3rling opened this Issue Nov 16, 2018 · 32 comments

st3rling commented Nov 16, 2018

After upgrading ELK from 6.4.3 to 6.5 I get this error message in the browser:
Kibana server is not ready yet

elasticsearch -V
Version: 6.5.0, Build: default/rpm/816e6f6/2018-11-09T18:58:36.352602Z, JVM: 1.8.0_161

https://www.elastic.co/guide/en/kibana/current/release-notes-6.5.0.html#known-issues-6.5.0 doesn't apply since I don't have X-Pack

Kibana log:

{"type":"log","@timestamp":"2018-11-16T16:14:02Z","tags":["info","migrations"],"pid":6147,"message":"Creating index .kibana_1."}
{"type":"log","@timestamp":"2018-11-16T16:14:02Z","tags":["warning","migrations"],"pid":6147,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."}

Deleting .kibana_1 and restarting Kibana just recreates .kibana_1 and produces the same error.
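
For reference, one way to check the migration state is to list the Kibana-related indices and their health (this assumes Elasticsearch is reachable over HTTP; mine is on 10.10.99.144:9200):

# list all Kibana-related indices together with their health/status
curl -X GET "http://10.10.99.144:9200/_cat/indices/.kibana*?v"

A red .kibana_1 here means its primary shard was never allocated, which can leave the migration stuck waiting.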

tylersmalley (Member) commented Nov 16, 2018

How many instances of Kibana do you have running? Are you sure no other Kibana instances are pointing to this index?

st3rling (Author) commented Nov 16, 2018

Pretty sure only one instance is running:
[root@rconfig-elk-test tmp]# sudo systemctl status kibana
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2018-11-16 11:13:52 EST; 1h 41min ago
Main PID: 6147 (node)
Tasks: 10
Memory: 292.0M
CGroup: /system.slice/kibana.service
└─6147 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana....
Nov 16 11:13:52 rconfig-elk-test systemd[1]: Started Kibana.
Nov 16 11:13:52 rconfig-elk-test systemd[1]: Starting Kibana...

[root@rconfig-elk-test tmp]# ps -ef | grep -i kibana
root 6081 3357 0 11:10 pts/1 00:00:00 tail -f /var/log/kibana/error.log
kibana 6147 1 0 11:13 ? 00:00:46 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
root 7078 1763 0 12:55 pts/0 00:00:00 grep --color=auto -i kibana

I'm attaching the full error log from Kibana. Somewhere in the middle you will see a fatal error:
"message":"Request Timeout after 30000ms"}

Then it goes through a bunch of plugin messages again and ends with:

"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."}

Elasticsearch is running:

[root@rconfig-elk-test tmp]# curl -GET http://10.10.99.144:9200
{
"name" : "node-1",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "x-Am75D7TuCpewv7H9_45A",
"version" : {
"number" : "6.5.0",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "816e6f6",
"build_date" : "2018-11-09T18:58:36.352602Z",
"build_snapshot" : false,
"lucene_version" : "7.5.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
kibana error log.txt

tylersmalley (Member) commented Nov 16, 2018

Can you try deleting both .kibana_1 and .kibana_2 then restarting?

st3rling (Author) commented Nov 16, 2018

I only have the .kibana_1 index:

[root@rconfig-elk-test tmp]# curl -GET http://10.10.99.144:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .monitoring-es-6-2018.11.13 bKC7vTkYQ5KJITjDR5ERdA 1 0 0 0 261b 261b
green open .monitoring-kibana-6-2018.11.13 VaXQT5fHQrSbtTiuYeQK5w 1 0 0 0 261b 261b
red open .kibana_1 oQH-qDGaTLWGqngLWLv0fg 1 0
green open .monitoring-es-6-2018.11.15 L4_xGqabTDqCa1YGP9ZoQw 1 0 428 64 483.8kb 483.8kb
green open ha-network-t3-2018.11 o9FlJzUjTUi59gT-ahAZQQ 5 0 15505 0 4.2mb 4.2mb
red open .monitoring-es-6-2018.11.16 UNszj2VJTHuhk670gjo6Zg 1 0
green open .monitoring-kibana-6-2018.11.15 J797uGhLRVO709cayCE2hQ 1 0 0 0 261b 261b
green open undefined-undefined-t2-2018.11 blNcjE-MTVWfFLAtkQ5eTw 5 0 514 0 462.6kb 462.6kb
[root@rconfig-elk-test tmp]#

I did try deleting .kibana_1 and restarting Kibana - it produces the exact same log.

tylersmalley referenced this issue Nov 16, 2018: Tracking of migration related issues #25821 (open, 0 of 5 tasks complete)
tylersmalley (Member) commented Nov 16, 2018

Can I get the Elasticsearch logs during this time?

st3rling (Author) commented Nov 16, 2018

Attached the full log (for some reason the Elasticsearch logs have proper timestamps while Kibana's timestamps are ahead).
elasticsearch.zip

CRCinAU commented Nov 20, 2018

I've just come across this same thing. CentOS 7.5, upgraded to ES 6.5.0 today.

Everything else restarts except Kibana.

cdoggyd commented Nov 20, 2018

I'm having the same issue on CentOS 7.5.1804 after upgrading ELK to 6.5.0 last night.

littlebunch commented Nov 20, 2018

I too had the same issue after a CentOS 7.5 upgrade. After verifying that .kibana had been migrated to .kibana_1, I deleted .kibana and then manually aliased .kibana to .kibana_1. Seems to have resolved the issue.

CRCinAU commented Nov 20, 2018

Any chance of posting a step-by-step for myself and probably others that have been or will be hit by this?

littlebunch commented Nov 20, 2018

First, try deleting the versioned indices and then restart as suggested above:

curl -XDELETE http://localhost:9200/.kibana_1
systemctl restart kibana

This worked for me on other servers and I don't know why it didn't on this particular one -- I'm no Kibana expert.

If that doesn't work, then verify that a versioned index has actually been created and populated (e.g. the document/byte counts match; see the check after this comment). Only after that, delete the original .kibana:

curl -XDELETE http://localhost:9200/.kibana

then alias it:

curl -X POST "localhost:9200/_aliases" -H 'Content-Type: application/json' -d' { "actions" : [ { "add" : { "index" : ".kibana_1", "alias" : ".kibana" } } ] }'

Then restart kibana.
HTH
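
For the verification step, a rough sanity check (assuming Elasticsearch on localhost:9200) is to compare the old .kibana index with the versioned copy before deleting anything:

# show just the columns needed to compare the original index with the versioned copy
curl -X GET "http://localhost:9200/_cat/indices/.kibana*?v&h=index,health,docs.count,store.size"

If .kibana_1 shows roughly the same docs.count as .kibana, the copy is complete and it should be safe to proceed.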

tylersmalley (Member) commented Nov 20, 2018

@littlebunch if you have a versioned index, .kibana should not exist and it should already be an alias.

For anyone still experiencing this - IF the original .kibana index still exists, and there is a message telling you another instance is still running - try deleting .kibana_1 and .kibana_2 if they exist. Then restart Kibana and provide the logs here, plus any logs from Elasticsearch. I presume something is failing along the way.
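
If you're not sure which case you're in, one way to check whether .kibana is already an alias (and which index it points to) is something along these lines, assuming a default localhost setup:

# prints a row with the target index if .kibana is an alias; no rows means .kibana is still a concrete index
curl -X GET "http://localhost:9200/_cat/aliases/.kibana?v"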

tylersmalley (Member) commented Nov 20, 2018

@st3rling in your logs there are a lot of primary shard timeouts. Not sure if that is related.

[2018-11-16T11:13:59,384][WARN ][o.e.x.m.e.l.LocalExporter] [node-1] unexpected error while indexing monitoring document
org.elasticsearch.xpack.monitoring.exporter.ExportException: UnavailableShardsException[[.monitoring-es-6-2018.11.16][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-es-6-2018.11.16][0]] containing [25] requests]]
	at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:128) ~[?:?]
littlebunch commented Nov 20, 2018

@tylersmalley Yes, I agree. As I stated, the suggested "fix" worked for me on other servers, just not this particular one. Don't know why -- just reporting what I had to do to get it to work.

tylersmalley (Member) commented Nov 20, 2018

Yes, and thank you for sharing your experience. Just making sure folks don't accidentally delete their data.

littlebunch commented Nov 20, 2018

It was a desperation move, yes, and it was on a dev machine. ;) I will poke around the logs a bit more later today and see if I find anything that might shed some light.

st3rling (Author) commented Nov 20, 2018

@tylersmalley Those errors were due to:
"no allocations are allowed due to cluster setting [cluster.routing.allocation.enable=none]"

Fixed it by setting:

cluster.routing.allocation.enable: "all"

and then restarted Elasticsearch and Kibana.
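
For anyone else hitting this, re-enabling allocation is just a cluster settings update; roughly something like this (using my node's address, adjust to your setup):

# re-enable shard allocation (setting the value to null instead of "all" resets it to the default)
curl -X PUT "http://10.10.99.144:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": "all"
  }
}'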
After running:
curl -GET http://10.10.99.144:9200/_cluster/allocation/explain
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"unable to find any unassigned shards to explain [ClusterAllocationExplainRequest[useAnyUnassignedShard=true,includeYesDecisions?=false]"}],"type":"illegal_argument_exception","reason":"unable to find any unassigned shards to explain [ClusterAllocationExplainRequest[useAnyUnassignedShard=true,includeYesDecisions?=false]"},"status":400}

The error is gone from the Elasticsearch log but I'm having the same issues and the same messages in the Kibana log. Both logs are attached.
kibana error log.txt
elasticsearch.log

pjhampton (Contributor) commented Nov 20, 2018

I got this issue after there was a version mismatch between elasticsearch and kibana. Maybe I'm being captain obvious here, but make sure both kibana and elasticsearch are 6.5.0

st3rling (Author) commented Nov 20, 2018

Yep, they are:
/usr/share/kibana/bin/kibana -V
6.5.0
/usr/share/elasticsearch/bin/elasticsearch -V
Version: 6.5.0, Build: default/rpm/816e6f6/2018-11-09T18:58:36.352602Z, JVM: 1.8.0_161

tylersmalley (Member) commented Nov 20, 2018

@st3rling I believe that allocation setting is what initially caused the migration issue in Kibana. If that is the case, you should be able to delete the .kibana_1 and .kibana_2 indices and try again.

CRCinAU commented Nov 21, 2018

OK, with a few hints from this topic, I found we didn't have a .kibana_1 index, but we did have .kibana_2.

I removed this via:

curl -XDELETE http://localhost:9200/.kibana_2

Restarted Kibana and everything was happy again.

st3rling (Author) commented Nov 21, 2018

@tylersmalley Yes, that did it and all is happy now. User error - I missed step 8 (re-enabling shard allocation) in the rolling upgrades guide:
https://www.elastic.co/guide/en/elasticsearch/reference/6.4/rolling-upgrades.html
Thank you Tyler and all others for your help and input.

rubpa commented Nov 30, 2018

Faced this issue on one (single-node) cluster. Here's what happened:

Error:

{"type":"log","@timestamp":"2018-11-30T05:51:01Z","tags":["warning","migrations"],"pid":24358,"message":"Another Kibana instance appears to
be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_7 and restarting Kibana."}

List of indices:

curl -X GET "localhost:9200/_cat/indices/.kib*?v&s=index"
health status index     uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana-6 MSm8voH-RrOkZsihyEZpRw   1   1        123            3    269.4kb        269.4kb
green  open   .kibana_7 hJ4NByb6SJOKpUGCNqtRoA   1   0          0            0       261b           261b

Deleted .kibana_7:

curl -X DELETE "localhost:9200/.kibana_7"

Restart kibana:

systemctl restart kibana
{"type":"log","@timestamp":"2018-11-30T05:58:45Z","tags":["info","migrations"],"pid":24883,"message":"Creating index .kibana_7."}
{"type":"log","@timestamp":"2018-11-30T05:58:45Z","tags":["info","migrations"],"pid":24883,"message":"Migrating .kibana-6 saved objects to .kibana_7"}
{"type":"log","@timestamp":"2018-11-30T05:58:46Z","tags":["info","migrations"],"pid":24883,"message":"Pointing alias .kibana to .kibana_7."}
{"type":"log","@timestamp":"2018-11-30T05:58:46Z","tags":["info","migrations"],"pid":24883,"message":"Finished in 1307ms."}
{"type":"log","@timestamp":"2018-11-30T05:58:46Z","tags":["listening","info"],"pid":24883,"message":"Server running at http://x.x.x.x:5601"}
jorgelbg commented Dec 8, 2018

Found a similar issue, related to a closed index named .tasks #25464 (comment)

oligarx commented Dec 12, 2018

Same issue. Anyone have a workaround?

uhlhosting commented Jan 10, 2019

Had the same issue and this is what we did:
What I did to fix this was check all indices that were only about 4-6 KB in size and remove them. I did not have a .kibana index, only .kibana_1, and the one that caught my attention was .triggered_watches. Once that and .kibana_1 were removed, everything worked.

Note: on the first attempt I removed only .kibana_1 and that did not help (the .kibana_1 index was simply rebuilt upon restart); I had to double-check my indices, and that is when I noticed the size correlation.

curl -GET http://127.0.0.1:9200/_cat/indices?v
curl -XDELETE http://localhost:9200/.triggered_watches
curl -XDELETE http://localhost:9200/.kibana_1

Followed by:

sudo systemctl restart kibana

which brought my instance back.

Hope this helps!

josephsmartinez commented Jan 10, 2019

(quoting @littlebunch's step-by-step instructions above)

Thank you. This fixed my issue loading Kibana. (Kibana server is not ready yet)

The below command fixed the issue.

curl -XDELETE http://localhost:9200/.kibana

shoeb240 commented Jan 23, 2019

Can you try deleting both .kibana_1 and .kibana_2 then restarting?

I had a similar issue; this worked for me.

shoeb240 commented Jan 23, 2019

Also, what I had to do was clear the read_only_allow_delete block on the index, with something like:

curl -X PUT "localhost:9200/.kibana_2/_settings" -H 'Content-Type: application/json' -d'
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}'

sureshthagunna commented Jan 31, 2019

I got this issue after there was a version mismatch between elasticsearch and kibana. Maybe I'm being captain obvious here, but make sure both kibana and elasticsearch are 6.5.0

Thanks a ton!

makibroshett commented Feb 21, 2019

While migrating from ELK 6.3.2 to 6.5.0 I ran into this issue, but none of the suggested fixes worked, and I too don't have security, so the known issue is not applicable...

The way I worked around this was to:

  • shutdown kibana
  • delete the .kibana_1 and .kibana_2
  • re-index my .kibana as .kibana_3
  • create an alias .kibana_main pointing to .kibana_3
  • change my kibana config so that KIBANA_INDEX is set to .kibana_main
  • restart kibana

It looks like a stunt but it worked (see logs below).
The fix by elastic assumes we're using security, but I suggest it be extended to cover users not using security.
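
A rough sketch of those steps with the stock APIs (the index and alias names mirror my setup, and kibana.index in kibana.yml is the non-Docker equivalent of the KIBANA_INDEX environment variable):

# 1. with Kibana stopped, remove the half-created versioned indices
curl -X DELETE "http://localhost:9200/.kibana_1"
curl -X DELETE "http://localhost:9200/.kibana_2"

# 2. copy the old saved objects into a fresh index
curl -X POST "http://localhost:9200/_reindex" -H 'Content-Type: application/json' -d'
{ "source": { "index": ".kibana" }, "dest": { "index": ".kibana_3" } }'

# 3. point a new alias at the copy
curl -X POST "http://localhost:9200/_aliases" -H 'Content-Type: application/json' -d'
{ "actions": [ { "add": { "index": ".kibana_3", "alias": ".kibana_main" } } ] }'

# 4. point Kibana at the new alias (kibana.yml: kibana.index: ".kibana_main"), then restart Kibana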

Hope this helps

Olivier

Logs:

{"type":"log","@timestamp":"2019-02-21T09:02:10Z","tags":["warning","stats-collection"],"pid":1,"message":"Unable to fetch data from kibana collector"} {"type":"log","@timestamp":"2019-02-21T09:02:10Z","tags":["license","info","xpack"],"pid":1,"message":"Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active"} {"type":"log","@timestamp":"2019-02-21T09:02:12Z","tags":["reporting","warning"],"pid":1,"message":"Enabling the Chromium sandbox provides an additional layer of protection."} {"type":"log","@timestamp":"2019-02-21T09:02:12Z","tags":["info","migrations"],"pid":1,"message":"Creating index .kibana_main_4."} {"type":"log","@timestamp":"2019-02-21T09:02:13Z","tags":["info","migrations"],"pid":1,"message":"Migrating .kibana_3 saved objects to .kibana_main_4"} {"type":"log","@timestamp":"2019-02-21T09:02:13Z","tags":["info","migrations"],"pid":1,"message":"Pointing alias .kibana_main to .kibana_main_4."} {"type":"log","@timestamp":"2019-02-21T09:02:14Z","tags":["info","migrations"],"pid":1,"message":"Finished in 1353ms."} {"type":"log","@timestamp":"2019-02-21T09:02:14Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"} {"type":"log","@timestamp":"2019-02-21T09:02:14Z","tags":["status","plugin:spaces@6.5.0","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
