This is a good answer on SO, but I'll give you a complete guide here.
If you're on Elasticsearch 2.x like me, the Curator 4.x docs are the right ones to read :)
- If you have Elasticsearch 5.x, install Curator 5.x via
sudo pip install -Iv elasticsearch-curator
- If you have Elasticsearch 2.x, install Curator 4.3.1 via
sudo pip install -Iv elasticsearch-curator==4.3.1
- If you have Elasticsearch 1.x, install Curator 3.5.1 via
sudo pip install -Iv elasticsearch-curator==3.5.1
If Curator isn't working (e.g. you have an already installed but corrupt Curator package), remove the package with
sudo rm -rf /usr/local/lib/python2.7/dist-packages/curator
(see https://stackoverflow.com/a/14572899/4964553)
You can copy the two files below; you only need to change unit_count: 14
to the number of days you want to keep (everything older gets deleted).
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
client:
  hosts:
    - 127.0.0.1
  port: 9200
  url_prefix:
  use_ssl: False
  ssl_no_validate: False
  timeout: 30
  master_only: False
logging:
  loglevel: INFO
  logfile:
  logformat: default
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 14 days (based on index name), for logstash-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 14
      exclude:
Save both files to a folder, e.g. /home/userName/.curator
Start with a dry-run:
curator --dry-run --config curator-configfile.yml curator-actionfile.yml
If that looks good, delete your indices with:
curator --config curator-configfile.yml curator-actionfile.yml
Put a delete-logstash-indices bash script into /etc/cron.daily
and you're done with that issue!
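A minimal sketch of such a cron script, assuming the two YAML files live in /home/userName/.curator and curator was installed to /usr/local/bin (script name and both paths are my assumptions, adjust them to your setup):

```shell
#!/bin/bash
# /etc/cron.daily/delete-logstash-indices -- hypothetical name and paths,
# adjust CONFIG_DIR to wherever you saved the two Curator YAML files
CONFIG_DIR=/home/userName/.curator
/usr/local/bin/curator \
    --config "$CONFIG_DIR/curator-configfile.yml" \
    "$CONFIG_DIR/curator-actionfile.yml"
```

Don't forget to chmod +x the script, otherwise run-parts will skip it.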
Some useful commands to check the state of your cluster:
curl -XGET 'http://localhost:9200/_all/_settings?pretty=true'
curl -XGET 'localhost:9200'
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
curl -XGET 'http://localhost:9200/_cat/nodes?pretty=true'
curl -XGET http://localhost:9200/_cat/shards
curl -XGET 'http://localhost:9200/_cat/indices?v'
To allocate one specific unassigned shard:
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
"commands" : [ {
"allocate" : {
"index" : "logstash-2017.12.09",
"shard" : 0,
"node" : "Human Torch II",
"allow_primary" : true
}
}
]
}'
To allocate all unassigned shards:
curl -XGET http://localhost:9200/_cat/shards | grep UNASSIGNED | awk '{print $1, $2}' | while read var_index var_shard; do
    curl -XPOST 'localhost:9200/_cluster/reroute' -d "{
        \"commands\" : [ {
            \"allocate\" : {
                \"index\" : \"$var_index\",
                \"shard\" : $var_shard,
                \"node\" : \"Human Torch II\",
                \"allow_primary\" : true
            }
        } ]
    }"
    sleep 5
done
Disable shard allocation:
curl -XPUT http://localhost:9200/_cluster/settings -d '{
    "persistent": {
        "cluster.routing.allocation.enable": "none"
    }
}'
Re-enable it when you are done:
curl -XPUT http://localhost:9200/_cluster/settings -d '{
    "persistent": {
        "cluster.routing.allocation.enable": "all"
    }
}'
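To see which value is currently active, read the cluster settings back (a plain GET on the same endpoint):

```shell
# Show the persistent and transient cluster settings, pretty-printed;
# an empty "persistent" : { } means no override is set
curl -XGET 'http://localhost:9200/_cluster/settings?pretty=true'
```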
Another example, allocating a single shard of one index on the node "Dormammu":
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
    "commands" : [ {
        "allocate" : {
            "index" : "logstash-2016.09.21",
            "shard" : 5,
            "node" : "Dormammu",
            "allow_primary" : true
        }
    } ]
}'
To allocate all unassigned shards of one specific index:
for shard in $(curl -XGET http://localhost:9200/_cat/shards | grep logstash-2016.06.02 | grep UNASSIGNED | awk '{print $2}'); do
    curl -XPOST 'localhost:9200/_cluster/reroute' -d "{
        \"commands\" : [ {
            \"allocate\" : {
                \"index\" : \"logstash-2016.06.02\",
                \"shard\" : $shard,
                \"node\" : \"Assassin\",
                \"allow_primary\" : true
            }
        } ]
    }"
    sleep 5
done
Perform a synced flush:
curl -XPOST 'http://localhost:9200/_flush/synced'
Set the number of replicas to 0 for all indices:
curl -XPUT localhost:9200/_settings -d '{
    "index" : {
        "number_of_replicas" : 0
    }
}'
Manually create an index with 3 shards and no replicas:
curl -XPUT 'http://localhost:9200/logstash-2016.09.24/' -d '{
    "settings" : {
        "index" : {
            "number_of_shards" : 3,
            "number_of_replicas" : 0
        }
    }
}'
Show the current logstash template:
curl -XGET localhost:9200/_template/logstash?pretty=true
Enable debug logging for discovery (a transient setting):
curl -XPUT 'http://localhost:9200/_cluster/settings/' -d '{
    "transient" : {
        "logger.discovery" : "DEBUG"
    }
}'
Only start the upgrade process if the cluster status is green! Check with:
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
If it's not green, first make sure it gets there, then start.
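Instead of re-running the health check manually, you can let Elasticsearch wait for the green state (wait_for_status and timeout are standard parameters of the _cluster/health API):

```shell
# Blocks until the cluster is green, or gives up after 120s
# (the response then contains "timed_out" : true)
curl -XGET 'http://localhost:9200/_cluster/health?wait_for_status=green&timeout=120s&pretty=true'
```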
Disable shard allocation:
curl -XPUT http://localhost:9200/_cluster/settings -d '{
    "persistent": {
        "cluster.routing.allocation.enable": "none"
    }
}'
Perform a synced flush and stop the node:
curl -XPOST 'http://localhost:9200/_flush/synced'
sudo service elasticsearch stop
Point your apt sources at the new version (replace the {{ elk_elasticsearch.version }} templating placeholder with the concrete version) and install the package:
deb http://packages.elasticsearch.org/elasticsearch/{{ elk_elasticsearch.version }}/debian stable main
sudo apt-get update && sudo apt-get install elasticsearch
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
Re-enable shard allocation:
curl -XPUT http://localhost:9200/_cluster/settings -d '{
    "persistent": {
        "cluster.routing.allocation.enable": "all"
    }
}'
The official upgrade guide: https://www.elastic.co/guide/en/logstash/current/_upgrading_logstash_and_elasticsearch_to_2_0.html
Because of a known issue (http://stackoverflow.com/questions/32761038/elk-unable-to-fetch-mapping-do-you-have-indices-matching-the-pattern, https://discuss.elastic.co/t/elasticseach-2-geoip-problem/33424/4), you have to manually change the logstash Elasticsearch template. First, look at your current logstash template with:
curl -XGET localhost:9200/_template/logstash?pretty=true
If it contains a line "path" : "full", like this:
{
"order" : 0,
"template" : "logstash-*",
"settings" : {
"index" : {
"refresh_interval" : "5s"
}
},
"mappings" : {
"_default_" : {
"dynamic_templates" : [ {
"string_fields" : {
"mapping" : {
"index" : "analyzed",
"omit_norms" : true,
"type" : "string",
"fields" : {
"raw" : {
"ignore_above" : 256,
"index" : "not_analyzed",
"type" : "string"
}
}
},
"match_mapping_type" : "string",
"match" : "*"
}
} ],
"_all" : {
"enabled" : true
},
"properties" : {
"geoip" : {
"path" : "full",
"dynamic" : true,
"type" : "object",
"properties" : {
"location" : {
"type" : "geo_point"
}
}
},
"@version" : {
"index" : "not_analyzed",
"type" : "string"
}
}
}
},
"aliases" : { }
}
then you have to remove that line by uploading the template again without it:
curl -XPUT http://localhost:9200/_template/logstash -d '{
"order" : 0,
"template" : "logstash-*",
"settings" : {
"index" : {
"refresh_interval" : "5s"
}
},
"mappings" : {
"_default_" : {
"dynamic_templates" : [ {
"string_fields" : {
"mapping" : {
"index" : "analyzed",
"omit_norms" : true,
"type" : "string",
"fields" : {
"raw" : {
"ignore_above" : 256,
"index" : "not_analyzed",
"type" : "string"
}
}
},
"match_mapping_type" : "string",
"match" : "*"
}
} ],
"_all" : {
"enabled" : true
},
"properties" : {
"geoip" : {
"dynamic" : true,
"type" : "object",
"properties" : {
"location" : {
"type" : "geo_point"
}
}
},
"@version" : {
"index" : "not_analyzed",
"type" : "string"
}
}
}
},
"aliases" : { }
}'
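Afterwards you can check that the "path" line is really gone by fetching the template again and grepping for it:

```shell
# Grep the re-uploaded template for the removed line;
# grep prints nothing and exits with status 1 if the line is gone
curl -XGET 'localhost:9200/_template/logstash?pretty=true' | grep '"path"'
```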
Then add the line template_overwrite => true to the output section of your logstash.conf:
output {
elasticsearch {
hosts => [ "localhost:9200" ]
template_overwrite => true
}
}
To debug logstash itself, watch its own log:
tail -f /var/log/logstash/logstash.log
Add the following to your logstash output section to dump events to a debug file:
output {
file {
path => "/var/log/logstash/jsonparsefailure.debug.log"
codec => "rubydebug"
}
}
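The dumped events then end up in the configured file; inspect the most recent ones with (path as configured above):

```shell
# Print the last dumped events from the debug file configured above
tail -n 100 /var/log/logstash/jsonparsefailure.debug.log
```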
A typical warning in that log looks like this (note the empty :raw=>"" — logstash received an empty message): {:timestamp=>"2016-09-29T11:10:05.559000+0200", :message=>"Parsed JSON object/hash requires a target configuration option", :source=>"message", :raw=>"", :level=>:warn}
If Kibana says "unable to fetch mapping" when you want to create an index pattern, then you have to manually create a logstash index in Elasticsearch:
curl -XPUT 'http://localhost:9200/logstash-2016.09.24/' -d '{
"settings" : {
"index" : {
"number_of_shards" : 3,
"number_of_replicas" : 0
}
}
}'
If the Settings/Indices setup page shows an empty "Time-field name" dropdown, follow these steps: http://stackoverflow.com/a/29535262/4964553
Further reading:
- https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html
- https://www.elastic.co/guide/en/elasticsearch/reference/1.4/cluster-nodes-shutdown.html
- http://blog.florian-hopf.de/2015/02/fixing-elasticsearch-allocation-issues.html
- http://blog.kiyanpro.com/2016/03/06/elasticsearch/reroute-unassigned-shards/
- https://t37.net/how-to-fix-your-elasticsearch-cluster-stuck-in-initializing-shards-mode.html