Add http connection pool between replicas #3594
Conversation
@@ -62,6 +62,7 @@
#define DEFAULT_HTTP_READ_BUFFER_TIMEOUT 1800
#define DEFAULT_HTTP_READ_BUFFER_CONNECTION_TIMEOUT 1
#define DEFAULT_COUNT_OF_HTTP_CONNECTIONS_PER_ENDPOINT 15
It's OK to make this macro more local.
If the number is unmotivated, you can state that fact explicitly in a comment.
This macro is used in both MergeTreeSettings and HTTPCommon, and I don't want to include those headers into each other.
A comment about the motivation is a good idea.
I also don't know. But I think that 4 parallel transfers should be enough to saturate a 10 GBit network, even with moderate packet loss.
dbms/src/IO/HTTPCommon.cpp
Outdated
std::tie(pool_ptr, std::ignore) = endpoints_pool.emplace(
    key, std::make_shared<SingleEndpointHTTPSessionPool>(host, port, https, max_connections_per_endpoint));

auto session = pool_ptr->second->get(-1);
What does this mean?
It means wait forever, until a session is released by another user. Actually, it's better to pass some positive number there: we will still wait forever, but will occasionally log messages about our retries.
return session;

PooledHTTPSessionPtr makePooledHTTPSession(const Poco::URI & uri, const ConnectionTimeouts & timeouts, size_t per_endpoint_pool_size)
{
    return HTTPSessionPool::instance().getSession(uri, timeouts, per_endpoint_pool_size);
Note: maybe this will lead to artificial replication delay when the pool happens to be full (due to the background pool task delay after an exception)? Maybe it will lead to 10-second delays when we suddenly want to download many small parts.
I hereby agree to the terms of the CLA available at: https://yandex.ru/legal/cla/?lang=en
I don't know what a better default size for the per-endpoint connection pools would be.