Replies: 4 comments 7 replies
-
What are your machine's specs? Are users requesting concurrently, or is it one request after the other? Regardless of the answers, one way to increase performance: don't use Docker! Don't even use valhalla_service. Set up all workers as their own processes and have prime_server proxy between them. It's a bit more complex, but if you look at loki_worker.cc etc., it should become clearer.
-
Also, I assume you're using HTTP? Does the graph show the timings from the client or the server? Was it benchmarked with every request creating a new connection?
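The connection question matters because a benchmark that opens a fresh TCP connection per request measures connection setup on top of route computation. A self-contained way to see the difference (the tiny stand-in server and the `/route` path are placeholders, not Valhalla's endpoint; any HTTP/1.1 server shows the same effect):

```python
import http.client
import http.server
import threading
import time

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"          # enables keep-alive
    def do_GET(self):
        body = b"{}"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):          # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address
N = 50

# 1) New connection per request (what naive benchmark loops often do).
t0 = time.perf_counter()
for _ in range(N):
    conn = http.client.HTTPConnection(host, port)
    conn.request("GET", "/route")
    conn.getresponse().read()
    conn.close()
cold = time.perf_counter() - t0

# 2) One persistent keep-alive connection reused for every request.
t0 = time.perf_counter()
conn = http.client.HTTPConnection(host, port)
for _ in range(N):
    conn.request("GET", "/route")
    conn.getresponse().read()
conn.close()
warm = time.perf_counter() - t0

server.shutdown()
print(f"new connection each time: {cold:.3f}s, reused connection: {warm:.3f}s")
```

If the "new connection" column dominates, the fix is in the benchmark client (connection reuse), not in the routing engine.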
-
My tips:
-
Just a follow-up: it looks like we are hitting our preliminary numbers. It was not 4 million/hr but more like 1,000/min. We reconfigured our AWS setup in a much "smarter" way and added a few more instances.
-
Hello, and thank you for all of your hard work on Valhalla.
We have set up a server and are putting it through its paces, and we have found that the time to compute a route, compared to other systems, can be "long". I'm wondering whether there are settings or tweaks I can apply to decrease the time it takes to compute a route, or whether it's something fundamental.
I've included a graph of our results for a large number of queries of various lengths, showing the time it takes to return the results.
Thank you, again.