TLS Self-Signed Certs #12

Closed
Crispy1975 opened this issue Dec 20, 2016 · 15 comments

@Crispy1975
Contributor

We are using Search Guard SSL to secure the transport and REST endpoints of our ES install. When monstache starts up, what I guess is the Go ES client expects a validated cert, so it throws an error:

INFO 2016/12/20 17:07:23 GET request sent to https://storage.example.com:9200/
ERROR 2016/12/20 17:07:23 Unable to validate connection to elasticsearch using https://storage.example.com:9200: Get https://storage.example.com:9200/: x509: certificate signed by unknown authority
panic: Unable to validate connection to elasticsearch using https://storage.example.com:9200: Get https://storage.example.com:9200/: x509: certificate signed by unknown authority

goroutine 1 [running]:
panic(0x869240, 0xc4204a4500)
	/usr/local/go/src/runtime/panic.go:500 +0x1a1
log.Panicf(0x91e6a0, 0x43, 0xc42014bd68, 0x4, 0x4)
	/usr/local/go/src/log/log.go:327 +0xe3
main.main()
	/vagrant/Go/src/github.com/rwynn/monstache/monstache.go:752 +0x1884

With the JavaScript client you can tell it to accept non-validated certs; I guess this is possible with the Go client as well? If not, any idea where we would need to import the root CA cert so that the Go client does not complain?

@rwynn
Owner

rwynn commented Dec 20, 2016

Can you try supplying the argument elasticsearch-pem-file?

flag.StringVar(&configuration.ElasticPemFile, "elasticsearch-pem-file", "", "Path to a PEM file for secure connections to elasticsearch")

You would never know because I failed to document this.
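For example, something along these lines on the command line (a rough sketch; the PEM path is just a placeholder for wherever your root CA certificate lives, and you would keep whatever other options you already pass):

    monstache -elasticsearch-pem-file /path/to/root-ca.pem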

@Crispy1975
Contributor Author

Will give it a go this evening.

@Crispy1975
Contributor Author

@rwynn it worked perfectly, thanks. Now replicating over TLS. 👍

@rwynn
Owner

rwynn commented Dec 21, 2016

Great. Just FYI, you can put this in your TOML file or pass it as a flag on the command line; simple options work either way. I will close this issue once I've added the docs for the mongo and elasticsearch pem file options.
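For instance, a minimal TOML sketch (assuming the key names mirror the command-line flags; the URL and path are placeholders):

    elasticsearch-url = "https://storage.example.com:9200"
    elasticsearch-pem-file = "/path/to/root-ca.pem"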

@Crispy1975
Contributor Author

Crispy1975 commented Dec 22, 2016

Thanks. Slightly off-topic, but related to the config options: does the syntax for the ES connection string, as used by the Go client, remain the same in the TOML file?

"https://user:pass@es01.example.com:9200", "https://user:pass@es01.example.com:9200", "..."

Or will the client sniff out the other nodes given one?

@rwynn
Owner

rwynn commented Dec 22, 2016

The syntax is the same for the connection string in the TOML file or when passed on the command line.

According to https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html, it seems that all nodes know about one another and can forward requests appropriately, so I'm assuming you can pick any one.

If you are using ES 5 and GridFS replication, it seems it would be best to set up a dedicated ingest node (or nodes), according to that doc.
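So, for example, pointing the connection string at a single node in the TOML file should be enough (again assuming the option is named elasticsearch-url; credentials and host are placeholders):

    elasticsearch-url = "https://user:pass@es01.example.com:9200"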

@rwynn
Owner

rwynn commented Dec 22, 2016

I looked into this a bit more: the elastigo Go client connection supports a Hosts array in addition to the connection string. Monstache is not currently surfacing the ability to set that array, but I will take a look at putting this in. It would be an array of host names used alongside the connection string; elastigo then chooses a host from the pool based on some heuristics (previously failed?) for each request.
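For reference, a rough sketch of what that looks like against elastigo directly, outside of monstache (field and method names here are from memory of the elastigo API, so treat them as assumptions; hosts and credentials are placeholders):

    package main

    import (
        "log"

        elastigo "github.com/mattbaird/elastigo/lib"
    )

    func main() {
        conn := elastigo.NewConn()

        // Parse protocol, credentials, host, and port from a single connection string.
        if err := conn.SetFromUrl("https://user:pass@es01.example.com:9200"); err != nil {
            log.Fatal(err)
        }

        // The Hosts pool mentioned above; elastigo picks one host per request.
        conn.Hosts = []string{"es01.example.com:9200", "es02.example.com:9200"}

        // ... issue indexing requests with conn as usual ...
    }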

@Crispy1975
Contributor Author

We would be able to test this whenever it's available; we have a few clusters running. 👍

@rwynn
Owner

rwynn commented Dec 22, 2016

@Crispy1975 Are you looking for a copy of each index request to go to N hosts? Or are you looking to configure a pool of hosts and each index request would go to one from the pool?

@rwynn
Owner

rwynn commented Dec 22, 2016

I think the best way to handle multiple clusters would be to run a monstache process for each cluster. For nodes within a cluster, a single monstache process could send requests to a pool of nodes using the previously mentioned Hosts setting on the connection; that would be a new config option.

rwynn closed this as completed in e80e9f7 on Dec 22, 2016
@Crispy1975
Contributor Author

Yeah, we are running different installs of monstache for each cluster to partition data. This works best for us.

@rwynn
Owner

rwynn commented Dec 22, 2016

Sounds good. If you are using the resume feature, make sure to set the resume-name to something unique for each process (cluster); otherwise they will overwrite one another.

I need to fix an issue with combining multiple processes, the resume feature, and the multi-worker feature, but that's probably not a combination you are using.
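For example, each process could get its own TOML file with a unique resume-name (a sketch; the names and URL are placeholders, and I'm assuming the keys mirror the flag names):

    # cluster-1.toml
    elasticsearch-url = "https://user:pass@es01.cluster1.example.com:9200"
    resume = true
    resume-name = "monstache-cluster-1"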

@Crispy1975
Contributor Author

Thanks for the heads-up; we'll be sure to keep the resume-name unique.

@rwynn
Owner

rwynn commented Feb 25, 2017

@Crispy1975,
Just checking in to see how things are going with monstache. Have you had any pain points using it? Any suggestions for improvements? Thanks.

@Crispy1975
Contributor Author

All good so far, we haven't seen any issues at this point. :)
