This repository has been archived by the owner on Sep 20, 2023. It is now read-only.

Docker-Compose Setup "None of the configured nodes are available" #8

Deastrom opened this issue Aug 19, 2017 · 8 comments

@Deastrom

https://github.com/CERT-BDF/TheHiveDocs/blob/master/installation/docker-guide.md

The instructions in the above link read as though you either need to use docker-compose OR manually install and configure Elasticsearch.

I'm pretty new to this, so I went the docker-compose route, and following the instructions gives me the error below...

thehive_1 | [error] a.a.OneForOneStrategy - None of the configured nodes are available: [{#transport#-1}{127.0.0.1}{127.0.0.1:9200}]
thehive_1 | org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{127.0.0.1}{127.0.0.1:9200}]
thehive_1 |     at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:290)
thehive_1 |     at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:207)
thehive_1 |     at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55)
thehive_1 |     at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:288)
thehive_1 |     at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
thehive_1 |     at org.elasticsearch.client.support.AbstractClient.search(AbstractClient.java:582)
thehive_1 |     at com.sksamuel.elastic4s.SearchDsl$SearchDefinitionExecutable$$anonfun$apply$1.apply(SearchDsl.scala:40)
thehive_1 |     at com.sksamuel.elastic4s.SearchDsl$SearchDefinitionExecutable$$anonfun$apply$1.apply(SearchDsl.scala:40)
thehive_1 |     at com.sksamuel.elastic4s.Executable$class.injectFutureAndMap(Executable.scala:21)
thehive_1 |     at com.sksamuel.elastic4s.SearchDsl$SearchDefinitionExecutable$.injectFutureAndMap(SearchDsl.scala:37)

The instructions may need to be modified to be more clear.

@npratley

npratley commented Sep 6, 2017

Hi, did you get this working? Did you modify docker-compose.yml at all?

I interpret the installation docs the same way you do: either use docker-compose or manually install and configure Elasticsearch. But I just copied the docker-compose.yml file to an empty directory and ran 'sudo docker-compose up', and I don't get the same error; it seems to work fine.

Can you provide the exact steps you went through and the full output?

@Deastrom
Author

Deastrom commented Sep 6, 2017 via email

@Theory5

Theory5 commented Jan 29, 2018

OK, for those who don't want to follow @Deastrom in laboriously setting up Elasticsearch the hard way just to see whether TheHive and its associated Cortex components can do what you need (before putting them into production): buried in the docker output you may find a message about a virtual memory setting being too low.

An error mentioning vm.max_map_count means Elasticsearch doesn't have enough virtual memory map areas available.

If you have this error, you may also notice that when you bring up docker-compose, TheHive doesn't ask you to set up an account when you first access it in your browser; it just asks for a username and password (and then says the Elasticsearch cluster isn't reachable).

The fix is simple. Either run this on the command line (tested, works on CentOS 7):

sysctl -w vm.max_map_count=262144

Or put vm.max_map_count=262144 in /etc/sysctl.conf and then reboot.
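
If you'd rather make it persistent without a reboot, something like this should work (a sketch; it assumes your distro reads /etc/sysctl.conf and that you have sudo):

# Append the setting so it survives reboots
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
# Reload settings from /etc/sysctl.conf without rebooting
sudo sysctl -p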

NOTE: Also, the installation documentation IS telling you it's one or the other. If you use docker-compose, you do NOT have to build out a separate Elasticsearch image, as the docker-compose.yml file provides that too.

@Deastrom
Author

I did get it working using that exact command. You'll also need to give at least 2 GB of RAM to your Docker for Windows VM (if that's what you're using).
The hardest lesson in this process was taking the IP addresses of the Elasticsearch and Cortex instances out of your conf file. TheHive docker image is designed to look up these services by name and connect automatically; see the excerpt below.
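
For example, a minimal excerpt of TheHive's application.conf that points at Elasticsearch by its compose service name (the service is called elasticsearch in the compose files in this thread):

search {
  # Resolve Elasticsearch by its docker-compose service name, not an IP address
  index   = the_hive
  cluster = hive
  host    = ["elasticsearch:9300"]
}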

@Deastrom
Author

version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.5.2
    environment:
      - http.host=0.0.0.0
      - transport.host=0.0.0.0
      - xpack.security.enabled=false
      - cluster.name=hive
      - script.inline=true
      - thread_pool.index.queue_size=100000
      - thread_pool.search.queue_size=100000
      - thread_pool.bulk.queue_size=100000
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data
  cortex:
    image: certbdf/cortex:latest
    ports:
      - "0.0.0.0:9001:9000"
    volumes:
      - ./cortex/conf/application.conf:/etc/cortex/application.conf
  thehive:
    image: certbdf/thehive:latest
    depends_on:
      - elasticsearch
      - cortex
    ports:
      - "0.0.0.0:9000:9000"
    volumes:
      - ./thehive/conf/application.conf:/etc/thehive/application.conf

@Theory5

Theory5 commented Jan 30, 2018

Thanks for adding that file! It looks great and cleared up a few other things I was looking into!

@cellango

Error:

[info] o.e.ErrorHandler - POST /api/login returned 500
org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{PBRk09qYSgumacUfFfRNVg}{0.0.0.0}{0.0.0.0:9300}]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:347)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:245)
at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:59)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:366)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:408)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1256)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:80)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:54)
at com.sksamuel.elastic4s.admin.IndexAdminExecutables$IndexExistsDefinitionExecutable$.apply(IndexAdminExecutables.scala:53)
at com.sksamuel.elastic4s.admin.IndexAdminExecutables$IndexExistsDefinitionExecutable$.apply(IndexAdminExecutables.scala:50)

Docker compose:
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.5.2
    ports:
      - "0.0.0.0:9200:9200"
    networks:
      - thehive
    environment:
      - http.host=0.0.0.0
      - transport.host=0.0.0.0
      - xpack.security.enabled=false
      - cluster.name=hive
      - script.inline=true
      - thread_pool.index.queue_size=100000
      - thread_pool.search.queue_size=100000
      - thread_pool.bulk.queue_size=100000
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data
  cortex:
    #image: certbdf/cortex:latest
    image: thehiveproject/cortex:latest
    ports:
      - "0.0.0.0:9001:9001"
    networks:
      - thehive
    #volumes:
    #  - ./cortex/conf/application.conf:/etc/cortex/application.conf
  thehive:
    #image: certbdf/thehive:latest
    #image: thehiveproject/thehive:latesta
    build: .
    depends_on:
      - elasticsearch
      - cortex
    ports:
      - "0.0.0.0:9000:9000"
    networks:
      - thehive
    volumes:
      - ./thehive/conf/application.conf:/etc/thehive/application.conf
networks:
  thehive:
    driver: "bridge"

Tried with and without this Dockerfile:
FROM thehiveproject/thehive:latest
USER root
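# Recursively change the group of /opt to GID 0 (the root group) before dropping privileges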
RUN chgrp -R 0 /opt
USER daemon

My config in thehive/conf/application.conf:

# Secret Key
# The secret key is used to secure cryptographic functions.
# WARNING: If you deploy your application on several servers, make sure to use the same key.
play.http.secret.key="VLROlUsB5yVvZFBGM3KRRHO4ihFaat8wpNfwjsQWzVmcL6c8jspbb2pTL6SvhyLT"

# Elasticsearch
search {
  # Name of the index
  index = the_hive
  # Name of the Elasticsearch cluster
  cluster = hive
  # Address of the Elasticsearch instance
  host = ["0.0.0.0:9300"]
  # Scroll keepalive
  keepalive = 1m
  # Size of the page for scroll
  pagesize = 50
  # Number of shards
  nbshards = 5
  # Number of replicas
  nbreplicas = 1
  # Arbitrary settings
  settings {
    # Maximum number of nested fields
    mapping.nested_fields.limit = 100
  }

  # XPack SSL configuration
  # Username for XPack authentication
  #search.username
  # Password for XPack authentication
  #search.password
  # Enable SSL to connect to ElasticSearch
  search.ssl.enabled = false
  client.transport.sniff = true
  # Path to certificate authority file
  #search.ssl.ca
  # Path to certificate file
  #search.ssl.certificate
  # Path to key file
  #search.ssl.key

  # SearchGuard configuration
  # Path to JKS file containing client certificate
  #search.guard.keyStore.path
  # Password of the keystore
  #search.guard.keyStore.password
  # Path to JKS file containing certificate authorities
  #search.guard.trustStore.path
  # Password of the truststore
  #search.guard.trustStore.password
  # Enforce hostname verification
  #search.guard.hostVerification
  # If hostname verification is enabled specify if hostname should be resolved
  #search.guard.hostVerificationResolveHostname
}

# Authentication
auth {
  # "provider" parameter contains authentication provider. It can be multi-valued (useful for migration)
  # available auth types are:
  # services.LocalAuthSrv : passwords are stored in user entity (in Elasticsearch). No configuration is required.
  # ad : use ActiveDirectory to authenticate users. Configuration is under "auth.ad" key
  # ldap : use LDAP to authenticate users. Configuration is under "auth.ldap" key
  provider = [local]

  # By default, basic authentication is disabled. You can enable it by setting "method.basic" to true.
  method.basic = false

  ad {
    # The name of the Microsoft Windows domain using the DNS format. This parameter is required.
    #domainFQDN = "mydomain.local"

    # Optionally you can specify the host names of the domain controllers. If not set, TheHive uses "domainFQDN".
    #serverNames = [ad1.mydomain.local, ad2.mydomain.local]

    # The Microsoft Windows domain name using the short format. This parameter is required.
    #domainName = "MYDOMAIN"

    # Use SSL to connect to the domain controller(s).
    #useSSL = true
  }

  ldap {
    # LDAP server name or address. Port can be specified (host:port). This parameter is required.
    #serverName = "ldap.mydomain.local:389"

    # If you have multiple ldap servers, use the multi-valued settings.
    #serverNames = [ldap1.mydomain.local, ldap2.mydomain.local]

    # Use SSL to connect to directory server
    #useSSL = true

    # Account to use to bind on LDAP server. This parameter is required.
    #bindDN = "cn=thehive,ou=services,dc=mydomain,dc=local"

    # Password of the binding account. This parameter is required.
    #bindPW = "***secret*password***"

    # Base DN to search users. This parameter is required.
    #baseDN = "ou=users,dc=mydomain,dc=local"

    # Filter to search user {0} is replaced by user name. This parameter is required.
    #filter = "(cn={0})"
  }
}

# Maximum time between two requests without requesting authentication
session {
  warning = 5m
  inactivity = 1h
}

# Streaming
stream.longpolling {
  # Maximum time a stream request waits for new element
  refresh = 1m
  # Lifetime of the stream session without request
  cache = 15m
  nextItemMaxWait = 500ms
  globalMaxWait = 1s
}

# Max textual content length
play.http.parser.maxMemoryBuffer=1M
# Max file size
play.http.parser.maxDiskBuffer=1G

# Cortex
# TheHive can connect to one or multiple Cortex instances. Give each
# Cortex instance a name and specify the associated URL.
# In order to use Cortex, first you need to enable the Cortex module by uncommenting the next line.

# Enable the Cortex module
play.modules.enabled += connectors.cortex.CortexConnector
cortex {
  "CORTEX-SERVER-ID" {
    # URL of the Cortex server
    url = "http://cortex:9001"
    # Key of the Cortex user, mandatory for Cortex 2
    key = "SqQ28IbY11D4t0QzlbP3VpZnkyNIGlAd"
  }

  # HTTP client configuration, more details in section 8
  ws {
    proxy {}
    ssl {}
  }

  # Check job update time interval
  #refreshDelay = "1 minute"

  # Maximum number of successive errors before give up
  #maxRetryOnError = 3

  # Check remote Cortex status time interval
  #statusCheckInterval = "1 minute"
}

# MISP
# TheHive can connect to one or multiple MISP instances. Give each MISP
# instance a name and specify the associated Authkey that must be used
# to poll events, the case template that should be used by default when
# importing events as well as the tags that must be added to cases upon
# import.
# Prior to configuring the integration with a MISP instance, you must
# enable the MISP connector. This will allow you to import events to
# and/or export cases to the MISP instance(s).

# Enable the MISP module (import and export)

# Datastore
datastore {
  name = data
  # Size of stored data chunks
  chunksize = 50k
  hash {
    # Main hash algorithm /!\ Don't change this value
    main = "SHA-256"
    # Additional hash algorithms (used in attachments)
    extra = ["SHA-1", "MD5"]
  }
  attachment.password = "malware"
}

webhooks {
  myLocalWebHook {
    url = "http://159.203.179.39:5000"
  }
}

What I know:
elasticsearch is defined in /etc/hosts
cxe@thehive:~/TheHive$ curl 'elasticsearch:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open .monitoring-es-6-2018.12.22 _F-w6qd1S1S6ueXmnYNr5w 1 1 46348 100 30.3mb 30.3mb
yellow open .monitoring-alerts-6 DsCpHe7xQRW9SzpzrH4e1w 1 1 1 0 12.4kb 12.4kb
yellow open .watcher-history-3-2018.12.22 O6dPES_GTS-mhZ2jx9tfRg 1 1 3835 0 5.7mb 5.7mb
yellow open .triggered_watches AhcQJ8skS_Wf_MluhDH6Qw 1 1 0 0 9.5kb 9.5kb
yellow open cortex_2 T_2-kX-QRvmcr39xRtoblw 5 1 2 0 12kb 12kb
yellow open .watches kWm96sv1Tx6sB2UnbV_c7Q 1 1 4 0 22.8kb 22.8kb

This also works from inside the "thehive" docker container
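
For reference, the same check can be run from the host without opening a shell in the container, assuming curl is present in the thehive image (it appears to be, given the above):

docker-compose exec thehive curl -s 'elasticsearch:9200/_cat/indices?v'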

No index created for thehive

cxe@thehive:~/TheHive$ curl 'localhost:9200'
{
  "name" : "RRI823_",
  "cluster_name" : "hive",
  "cluster_uuid" : "mVwIDm4kRhqrPXvvysBFww",
  "version" : {
    "number" : "5.5.2",
    "build_hash" : "b2f0c09",
    "build_date" : "2017-08-14T12:33:14.154Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}

Able to log into Cortex
Not able to log into thehive

When I do not use my config file I am able to log into thehive.

@cellango

I got it figured out.

In application.conf, in the search section, I changed the address of the Elasticsearch instance from

host = ["0.0.0.0:9300"]

to

host = ["elasticsearch:9300"]
