This repository has been archived by the owner on Apr 12, 2022. It is now read-only.

Kibana instances increasing on Docker #12

Closed
mathewvino opened this issue Jan 7, 2017 · 4 comments

Comments

@mathewvino

mathewvino commented Jan 7, 2017

I am trying to run kibana5.1 and elasticsearch5.1 on docker using docker-compose.

  1. Using volumes for elastic search data
  2. Using volume for kibana config file

The problem I am facing: when I use Monitoring on the Kibana dashboard, the number of Kibana instances increases each time I bounce docker-compose.

• The first time, after I clear the Elasticsearch data volumes, the number of instances shows as 1.
• The second time it increases to 2, and so on.
• If I clear the Elasticsearch volume data and bounce again, the count resets to 1, and then increases again with each restart.

Any help is really appreciated.

Thanks

docker-compose:

version: '2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.1.1
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - /c/Users/dockeroffice/elknode1/data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elk_docker
  kibana:
    image: docker.elastic.co/kibana/kibana:5.1.1
    ports:
      - 5601:5601
    networks:
      - elk_docker
    volumes:
      - /c/Users/dockeroffice/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    depends_on:
      - elasticsearch1
  
 
networks:
  elk_docker:
    driver: bridge

(screenshot of the Kibana Monitoring page attached)

@jarpy
Contributor

jarpy commented Jan 9, 2017

Hi,

Thanks for the detailed report. Unfortunately, I haven't been able to replicate the behaviour.

Could you provide your kibana.yml? I'm guessing that the ID of your Kibana instance is changing with each docker-compose up.

Thanks.

@skearns64

Upon startup, Kibana generates an instance UUID if one doesn't exist, which it stores in the Kibana data directory. If that directory is being cleared upon restart, then it will generate a new UUID.

Monitoring will show one "Kibana instance" per UUID, though note that it will only display Kibana Instance/UUIDs that have sent data in the currently selected time window. So if you are looking at the last 1 hour window (the default), it will show all nodes that had data in that window. If you wait an hour after your last restart of Kibana, you will see just one node displayed, I believe.
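To illustrate, the usual way to keep the UUID stable is to persist Kibana's data directory on the host. The following is a minimal sketch, not a definitive config; the host path is a placeholder, and the in-container path `/usr/share/kibana/data` is assumed from the 5.x image defaults:

```yaml
# docker-compose fragment (sketch): bind-mount a host directory over
# Kibana's data directory so the generated instance UUID survives
# container restarts instead of being regenerated each time.
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:5.1.1
    volumes:
      - ./kibanadata:/usr/share/kibana/data  # UUID file is stored here
```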

@mathewvino
Author

mathewvino commented Jan 9, 2017

Thanks jarpy/sk

I don't have a custom kibana.yml. I am using the default one that ships inside the Kibana Docker image, with no volume mapping for it.

Based on your suggestion, I now provide a volume mapping for the Kibana data directory, where the UUID is created, so my configuration looks like:

  • /c/Users/dockeroffice/kibanadata/data:/usr/share/kibana/data

Could you please confirm that this is the right approach? I have tested it multiple times: after stopping docker-compose and restarting, it seems to work and shows only one instance every time.

Below is the complete docker-compose file:

version: '2'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:5.1.1
    ports:
      - 5601:5601
    networks:
      - esnet
    environment:
      SERVER_NAME: kibana.docker
      ELASTICSEARCH_URL: http://192.168.99.100:9200/
    volumes:
      - /c/Users/dockeroffice/kibanadata/data:/usr/share/kibana/data
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.1.1
    container_name: elasticsearch1
    environment:
      - node.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 1g
    cap_add:
      - IPC_LOCK
    volumes:
      - /c/Users/dockeroffice/elknode1/data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
 
networks:
  esnet:
    driver: bridge

@jarpy
Contributor

jarpy commented Jan 9, 2017

I think that's a great solution. Nice work, @mathewvino, and thank you, @skearns64 for the tip.

@jarpy jarpy closed this as completed Jan 10, 2017