CloudSQL Proxy 175% slower than direct connection #87
Is the proxy expected to be 175% slower than a direct connection?
Every hour I'm also experiencing some spikes: the first query takes up to 1 second. Could it be some kind of reauthentication?
In my small test, the first query (which creates the connection) takes 5 ms without the proxy and more than 15 ms with it.
Can I do any configuration to reduce the latency?
|
Please try using connection pooling. It will drastically decrease the
amount of time it takes to execute queries, due to a decrease in the total
number of "new connection handshakes". You will see much shorter latency,
with or without the proxy, when using a connection pool.
Also, the first connection via the Proxy each hour will take a little longer,
as we have to request a certificate from our backend which is only valid
for 1 hour. That certificate is cached for that whole hour for all other
connections via the proxy (regardless of connection pooling).
In any case, please share the exact setup you used to get these numbers.
The Proxy is expected to have some overhead, but the numbers you quoted are
higher than I'd expect.
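[Editor's note: to make the pooling advice concrete, here is a minimal sketch using HikariCP. The pool library, JDBC URL, database name, and credentials are all illustrative choices, not something this thread prescribes.]

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PooledQueries {
        public static void main(String[] args) throws Exception {
            HikariConfig config = new HikariConfig();
            // The proxy listens locally, so the URL points at 127.0.0.1:3306.
            config.setJdbcUrl("jdbc:mysql://127.0.0.1:3306/mydb");
            config.setUsername("appuser");
            config.setPassword(System.getenv("DB_PASS"));
            // Idle connections are kept open, so most queries skip the
            // "new connection handshake" entirely.
            config.setMinimumIdle(2);
            config.setMaximumPoolSize(10);

            try (HikariDataSource pool = new HikariDataSource(config)) {
                // getConnection() hands back a pooled connection instead of
                // performing a fresh handshake through the proxy.
                try (Connection conn = pool.getConnection();
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT 1")) {
                    rs.next();
                    System.out.println(rs.getInt(1));
                }
            }
        }
    }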
|
I read your post a little more closely, and it does appear that you're
hitting some spikes during cert refresh, or at least that it's part of your
problem. The Proxy code could easily be smarter about refreshing its
certificate: instead of waiting for the hour to be up and blocking new
connections until a new certificate is retrieved, it could ask for a new
cert 5 minutes beforehand (in the background) so that the new cert can be
swapped in without any latency increase.
If the spike in latency is not your only problem, would you mind forking
off a new Issue to track that sort of feature request?
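[Editor's note: a rough sketch of the proactive-refresh scheme described above, in Java purely for illustration; the real proxy is written in Go, and fetchCertificate() is a hypothetical stand-in for the backend request that returns an hour-long certificate.]

    import java.time.Duration;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicReference;

    public class CertCache {
        private static final Duration LIFETIME = Duration.ofHours(1);
        private static final Duration HEADROOM = Duration.ofMinutes(5);

        private final AtomicReference<byte[]> current = new AtomicReference<>();
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        public void start() {
            current.set(fetchCertificate()); // initial, blocking fetch
            long period = LIFETIME.minus(HEADROOM).toMillis();
            // Refresh in the background 5 minutes before expiry, so new
            // connections never block waiting for a certificate.
            scheduler.scheduleAtFixedRate(
                    () -> current.set(fetchCertificate()),
                    period, period, TimeUnit.MILLISECONDS);
        }

        public byte[] certForNewConnection() {
            return current.get(); // always the freshest cached cert
        }

        private byte[] fetchCertificate() {
            // Hypothetical placeholder for the backend request.
            return new byte[0];
        }
    }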
|
Thanks for your suggestion @Carrotman42, but unfortunately PHP does not support connection pooling unless you use the ODBC driver, which I suspect is less efficient. The spike every hour is my primary problem, as it is triggering my monitoring alerts. Maybe these details are helpful. Note: the comparison might not be entirely fair, as I was using a non-encrypted direct connection, so the general overhead could be caused by the encryption. Removing the first query from the results makes the difference smaller: 22 ms vs. 31 ms for 29 simple SELECT queries.
|
In my case the Cloud SQL proxy is far slower than a direct connection using IP whitelisting.
I am using Hibernate to interact with MySQL, and the configuration below to connect for both deployments:
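[Editor's note: the configuration the comment refers to was lost in extraction. Purely as a hypothetical illustration of what such a Hibernate setup typically looks like, all names and values below are invented, not the poster's actual config.]

    # hibernate.properties (hypothetical example)
    hibernate.connection.driver_class=com.mysql.jdbc.Driver
    # direct connection: the instance's whitelisted public IP
    hibernate.connection.url=jdbc:mysql://<instance-ip>:3306/mydb
    # proxy deployment: same URL shape, but pointing at the local proxy
    # hibernate.connection.url=jdbc:mysql://127.0.0.1:3306/mydb
    hibernate.connection.username=appuser
    hibernate.connection.password=<password>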
|
Did you do a "warm up" before starting your benchmark? There is some overhead when the first connection is opened, so it would be good to know whether the latency is attributable to that. Do you have other percentiles, like the 95th? P.S. There's a native Java library that doesn't require installing the Go proxy: https://github.com/GoogleCloudPlatform/cloud-sql-mysql-socket-factory
|
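[Editor's note: for reference, that socket factory is used by adding its Maven dependency and pointing the JDBC URL at it; per that project's README the URL takes roughly this shape, with the angle-bracket placeholders filled in by you.]

    jdbc:mysql://google/<DATABASE_NAME>?cloudSqlInstance=<PROJECT>:<REGION>:<INSTANCE>&socketFactory=com.google.cloud.sql.mysql.SocketFactory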
@Laixer |
Double-checking: were all of the resources (GKE cluster, GCE VM, Cloud SQL
database) in the same zone?
Is it possible for you to share your test setup so I can try to reproduce
your numbers?
Also, the 99th percentile is telling, but not the whole story. Any
additional percentile data would be useful for investigating (especially
if you can't share your repro code).
…On May 19, 2017 1:19 AM, "rigalrock" ***@***.***> wrote:
@Laixer <https://github.com/laixer>
Yes, I warmed up the setup before doing my test. It was not a performance
test; I was sending only 2 requests per second. Network latency is not
included in these metrics, as they were captured on the server side.
After the test I deleted the Kubernetes cluster, so I don't have exact 95th
percentile data, but it was around the 99th percentile figure (roughly 10 ms lower).
I am getting an Insufficient Permission error while using
cloud-sql-mysql-socket-factory on the Kubernetes cluster. As far as I
understand, the reason is that this library uses application default
credentials, which are not provided by the Kubernetes cluster.
|
@rigalrock Application default credentials support reading credentials from a file [1]. Assuming you already have a secret mounted with the credentials, you can point GOOGLE_APPLICATION_CREDENTIALS at that file and the library should work.
[1] https://developers.google.com/identity/protocols/application-default-credentials#howtheywork
|
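[Editor's note: concretely, that amounts to a container spec fragment like the following; the mount path and file name are assumptions.]

    env:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /secrets/cloudsql/credentials.json   # path of the mounted secret file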
Closing due to lack of updates. Please feel free to reopen the issue if there are any other questions. I believe the summary here is "the Proxy has high new connection latency, but when using connection pooling the latency should not be significantly higher than using native mysql SSL connectivity" |
We are writing from a Java application running in Kubernetes to Cloud SQL. Both the Kubernetes cluster and the Cloud SQL instance are running in the same zone (europe-west1-b). Our application uses the Cloud SQL proxy, which runs alongside our Pod.
Unfortunately, write access (reads not measured yet) is around 20 times slower than when I run the application on my local machine, which has a comparable installation but no Kubernetes (a Java process communicating with a local MySQL on an SSD). We are using connection pooling, so the problem does not seem to be related to creating new connections. The amount of data sent is around 1.5 MB, and if I simply import the SQL file into Cloud SQL using my tool of choice (which connects without the proxy), it is very fast.
This is an excerpt of our Java application's Kubernetes YAML file. We also use this configuration file.
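[Editor's note: the YAML excerpt and configuration file did not survive extraction. A typical Cloud SQL proxy sidecar container of the kind described looks like the following; the image tag, instance name, and paths are assumptions.]

    - name: cloudsql-proxy
      image: gcr.io/cloudsql-docker/gce-proxy:1.11
      command: ["/cloud_sql_proxy",
                "-instances=<project>:<region>:<instance>=tcp:3306",
                "-credential_file=/secrets/cloudsql/credentials.json"]
      volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true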
|
Have you tried the native Java library? https://github.com/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory |
No, I did not try that yet. I will give it a try and come back to you, but it may take some time (~2-3 weeks). |
@prismec Did you find a solution, please? I'm having a similar issue and I'm quite confused. It connects, but after some time it takes a while to reconnect again. |
Finally got a solution with the socket factory. We loaded a service account with Cloud SQL permissions as a secret, set the GOOGLE_APPLICATION_CREDENTIALS environment variable on the pod, and used the socket factory to connect to the instance with connection pooling. |
@boogie4eva Can you please share the YAML files, and the connection strings for our reference? |
@vamsipkris will do once I get to my system |
@vamsipkris Sorry for the delay, but this is what we did:
1. We created a service account with the Cloud SQL Client permission.
2. Download the .json key file and use it to create a secret:

    kubectl create secret generic cloudsql-instance-credentials \
      --from-file=credentials.json=[actual path to service account json file]

3. Mount the json file into your container volume:

    env:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /secrets/cloudsql/mysql-sa.json
    volumeMounts:
      - name: service-account-credentials-volume
        mountPath: /secrets/cloudsql
        readOnly: true
    volumes:
      - name: service-account-credentials-volume
        secret:
          secretName: cloudsql-instance-credentials
          items:
            - key: credentials.json
              path: mysql-sa.json

Notice the GOOGLE_APPLICATION_CREDENTIALS environment variable: it is needed for the socket factory to work, as the socket factory loads the service account from this variable to authenticate with Cloud SQL.
4. Load the socket factory in your pom.xml and make the connection using the socket factory with connection pooling:

    <!-- https://mvnrepository.com/artifact/com.google.cloud.sql/mysql-socket-factory -->
    <dependency>
      <groupId>com.google.cloud.sql</groupId>
      <artifactId>mysql-socket-factory</artifactId>
      <version>1.0.5</version>
      <exclusions>
        <exclusion>
          <groupId>com.google.guava</groupId>
          <artifactId>guava-jdk5</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
|
@prismec, I'm curious what the bottleneck is that results in the limited write throughput. Is the proxy client using significant CPU? The main differences I can think of between the Java socket factory and the proxy are the SSL libraries used and the communication overhead of the local socket...
@hfwang The proxy uses a sidecar pattern. The response time using the proxy can be quite high; in my case queries sometimes time out and it takes time to reconnect. I read that this is because the proxy has to re-authenticate with Cloud SQL at intervals. This can be a pain in the neck for your user experience. Anyway, with the socket factory we don't have these issues. |
The socket factory does not proactively refresh the cert either iirc, so in
that respect there's no difference between the two solutions.
|
@Carrotman42 There is a huge difference in my personal observation. As I stated earlier, for me the proxy option causes delays in response times. |
@hfwang Our problem was not related to the proxy. The actual problem on our side was that the Java MySQL driver does not send batch requests to the server even if you use the JDBC batching support; the driver therefore sent thousands of single INSERT / UPDATE statements to the server instead of a single one. We've solved the problem by enabling the MySQL JDBC driver's rewriteBatchedStatements option. |
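[Editor's note: for anyone hitting the same issue, a minimal sketch of the pattern; the URL, table, and column names are made up. With rewriteBatchedStatements=true in the JDBC URL, Connector/J coalesces the batch below into multi-row INSERT packets instead of sending one statement per row.]

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchInsert {
        public static void main(String[] args) throws Exception {
            // Without rewriteBatchedStatements=true the driver still sends
            // one INSERT per addBatch() entry over the wire.
            String url = "jdbc:mysql://127.0.0.1:3306/mydb?rewriteBatchedStatements=true";
            try (Connection conn = DriverManager.getConnection(
                         url, "appuser", System.getenv("DB_PASS"));
                 PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO events (id, payload) VALUES (?, ?)")) {
                conn.setAutoCommit(false);
                for (int i = 0; i < 5000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row-" + i);
                    ps.addBatch();
                }
                ps.executeBatch(); // sent as a handful of multi-row INSERTs
                conn.commit();
            }
        }
    }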
Thanks for that update! |
Thanks for the examples. I solved my problems using the same method you mentioned.
|