Sometimes client requests fail with "no cached connection was available" #3203

Closed
jdeppe-pivotal opened this issue Aug 17, 2017 · 7 comments

@jdeppe-pivotal

Feature Request:

The Vault client library is being used in Concourse (http://concourse.ci - http://github.com/concourse/concourse). Occasionally the client code emits errors like this:

Finding variable 'docker-username': Get https://vault01.foo.com:8200/v1/concourse/main/docker-username: http2: no cached connection was available

This appears to be related to golang/go#16582 (an issue fixed in Go 1.8).

This request is to update your use of http2 (golang.org/x/net/http2) to the version included with Go 1.8 (by implication, that means only supporting Go >= 1.8).
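
For what it's worth, here is a minimal sketch of a possible client-side workaround, assuming the caller controls how the github.com/hashicorp/vault/api client is constructed: supply an http.Client whose transport never negotiates HTTP/2 (a non-nil, empty TLSNextProto map), which sidesteps the pre-1.8 x/net/http2 connection pool entirely at the cost of HTTP/1.1-only requests. The newHTTP1OnlyClient helper, the token, the address, and the secret path below are illustrative only.

package main

import (
	"crypto/tls"
	"log"
	"net/http"

	vault "github.com/hashicorp/vault/api"
)

func newHTTP1OnlyClient(addr string) (*vault.Client, error) {
	// A non-nil, empty TLSNextProto map tells net/http not to negotiate HTTP/2,
	// so the bundled x/net/http2 connection pool is never used.
	transport := &http.Transport{
		TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
	}

	cfg := vault.DefaultConfig()
	cfg.Address = addr
	cfg.HttpClient = &http.Client{Transport: transport}

	return vault.NewClient(cfg)
}

func main() {
	client, err := newHTTP1OnlyClient("https://vault01.foo.com:8200")
	if err != nil {
		log.Fatal(err)
	}
	client.SetToken("replace-with-a-real-token") // illustrative placeholder

	// Mirrors the failing Concourse lookup from the error above.
	secret, err := client.Logical().Read("concourse/main/docker-username")
	if err != nil {
		log.Fatal(err)
	}
	log.Println(secret)
}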

Thanks.

@vito

vito commented Aug 21, 2017

Is this fixed or something?

@jefferai
Member

@vito It's unclear this was ever a problem since multiple recent Vault versions have been built against Go 1.8. I think that's why the OP closed it.

@EugenMayer

EugenMayer commented Nov 19, 2017

It's not fixed for me; I am using Vault 0.9.0 and I see this pretty often right now, roughly every 30 minutes. It does not seem to be tied to a single request - all requests made in that timeframe fail with this error:

Finding variable 'docker.password': Get https://vault:8200/v1/secret/concourse/infra/docker_base/docker: http2: no cached connection was available

Looking at the Vault logs, nothing is logged at all. I'm coming from the same background as vito here, using Concourse.

Could this be a connection pooling issue on the client side (in the client implementation)?

These are the Vault logs for that whole day:

19/11/2017 00:36:32 2017/11/18 23:36:32.446484 [INFO ] core: vault is unsealed
19/11/2017 00:36:32 2017/11/18 23:36:32.447010 [INFO ] core: post-unseal setup starting
19/11/2017 00:36:32 2017/11/18 23:36:32.447309 [INFO ] core: loaded wrapping token key
19/11/2017 00:36:32 2017/11/18 23:36:32.447351 [INFO ] core: successfully setup plugin catalog: plugin-directory=
19/11/2017 00:36:32 2017/11/18 23:36:32.447791 [INFO ] core: successfully mounted backend: type=kv path=secret/
19/11/2017 00:36:32 2017/11/18 23:36:32.447991 [INFO ] core: successfully mounted backend: type=system path=sys/
19/11/2017 00:36:32 2017/11/18 23:36:32.448484 [INFO ] core: successfully mounted backend: type=identity path=identity/
19/11/2017 00:36:32 2017/11/18 23:36:32.448555 [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
19/11/2017 00:36:32 2017/11/18 23:36:32.450071 [INFO ] expiration: restoring leases
19/11/2017 00:36:32 2017/11/18 23:36:32.450166 [INFO ] rollback: starting rollback manager
19/11/2017 00:36:32 2017/11/18 23:36:32.452817 [INFO ] expiration: lease restore complete
19/11/2017 00:36:32 2017/11/18 23:36:32.457716 [INFO ] identity: entities restored
19/11/2017 00:36:32 2017/11/18 23:36:32.457784 [INFO ] identity: groups restored
19/11/2017 00:36:32 2017/11/18 23:36:32.457862 [INFO ] core: post-unseal setup complete
19/11/2017 01:18:31 2017/11/19 00:18:31.869472 [INFO ] expiration: revoked lease: lease_id=auth/cert/login/XXXXX

I'm running Vault as a Docker container, with the client on a local network (same Docker network, same host). The issue can be reproduced by running https://github.com/EugenMayer/concourseci-server-boilerplate for a while (docker-compose up).

It seems like this could indeed be the Go issue golang/go#16582, but in that case it looks like a problem in the client implementation, i.e. on the Concourse side. That also fits when it actually happens: when triggers are executed (scheduled tasks). Those triggers are probably scheduled at the same time, issuing a lot of requests concurrently.
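
If it really is the pre-1.8 http2 connection-pool bug surfacing under concurrent requests, one possible client-side mitigation is to retry reads that fail with this specific transient error. This is only a minimal sketch assuming the Concourse side can wrap its Vault reads; readWithRetry, the attempt count, and the backoff values are made up for illustration.

package vaultretry

import (
	"strings"
	"time"

	vault "github.com/hashicorp/vault/api"
)

// readWithRetry retries a logical read when the transient http2
// "no cached connection was available" error is returned; any other
// error fails immediately.
func readWithRetry(client *vault.Client, path string, maxAttempts int) (*vault.Secret, error) {
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		secret, err := client.Logical().Read(path)
		if err == nil {
			return secret, nil
		}
		if !strings.Contains(err.Error(), "no cached connection was available") {
			return nil, err // not the transient pool error, fail fast
		}
		lastErr = err
		time.Sleep(time.Duration(attempt) * 100 * time.Millisecond) // simple linear backoff
	}
	return nil, lastErr
}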

@jefferai
Member

Depending on the Concourse version, you may have one that is built with a Go version where this is not fixed. Remember, this is a client issue, not a server issue. The OP would probably need to provide any further advice as I don't know much about Concourse, and they closed this issue themselves.

@EugenMayer

EugenMayer commented Nov 19, 2017

@jefferai I'm using the newest Concourse stable, 3.6.0, so I would expect Go 1.9, but I am not sure. I understand that this is a client issue; reading up on more sources gave me exactly the same impression - thank you for clarifying that.

@jefferai
Member

Unfortunately, I think you need to file this as a bug against Concourse. I can honestly say I have no memory of hearing of this issue anywhere else, and they seem to think it's resolved.

@MayaKhan12

Is there any method to close the connection forcefully?
var microsoftSqlConnectionInfo = new MicrosoftSqlConnectionInfo
{
    // Verbatim string so the backslash in localhost\sqlexpress is not treated as an escape sequence.
    ConnectionString = @"server=localhost\sqlexpress;port=1433;user id=sa;password=****;database=master;app name=vault",
    MaximumOpenConnections = 5,
    VerifyConnection = true
};

await vaultClient.MicrosoftSqlConfigureConnectionAsync(microsoftSqlConnectionInfo, mountPoint);
On the last line, the application throws the following exception: "System.Exception: '400 BadRequest. {"errors":["Error validating connection info: failed to send SQL Batch: write tcp [::1]:9468-\u003e[::1]:1433: wsasend: An existing connection was forcibly closed by the remote host."]}"
