
Remove delay exiting node in request processor #1904

Merged

Conversation

3 participants
@wezrule
Collaborator

commented Apr 13, 2019

I noticed this when running the CLI command nano_node --network=live --debug_account_count (Debug/Windows), but I'd imagine it can be caused by other commands depending on timing. It can take a while for the node to actually stop, when it should be pretty much instant. The delay was caused by the request loop, so I now check whether stopped is true and break early out of some unnecessary processing.
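For context, a minimal sketch (assumed names and a made-up interval, not the actual nano-node request loop) of the early-exit pattern this PR adds: re-check the stop flag before falling through to the condition-variable wait, so a stop request is honoured promptly.

#include <chrono>
#include <condition_variable>
#include <mutex>

std::mutex mutex;
std::condition_variable condition;
bool stopped = false;

void request_loop ()
{
	std::unique_lock<std::mutex> lock (mutex);
	while (!stopped)
	{
		// ... per-iteration processing (broadcasts, election bookkeeping) ...
		if (stopped) // the added check: break early rather than sleeping one more interval
		{
			break;
		}
		condition.wait_for (lock, std::chrono::milliseconds (500));
	}
}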

@wezrule wezrule added this to the V19.0 milestone Apr 13, 2019

@wezrule wezrule requested a review from SergiySW Apr 13, 2019

@wezrule wezrule self-assigned this Apr 13, 2019

@wezrule wezrule added this to CP3 (2019-04-10) in V19 Apr 13, 2019

@guilhermelawless

Contributor

commented Apr 13, 2019

Had the same thing today with --wallet_list :)

@@ -2777,6 +2777,14 @@ void nano::active_transactions::confirm_frontiers (nano::transaction const & tra
	size_t elections_count (0);
	for (auto i (node.store.latest_begin (transaction_a, next_frontier_account)), n (node.store.latest_end ()); i != n && elections_count < max_elections; ++i)
	{
		{
			std::lock_guard<std::mutex> guard (mutex);
			if (stopped)

@SergiySW

SergiySW Apr 13, 2019

Collaborator

Maybe it's better to make stopped atomic, then it can just be for (auto i (node.store.latest_begin (transaction_a, next_frontier_account)), n (node.store.latest_end ()); i != n && !stopped && elections_count < max_elections; ++i)
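A small self-contained sketch of the suggestion above (illustrative names, not the actual confirm_frontiers code): declaring the flag as std::atomic<bool> lets the loop condition read it without holding the mutex.

#include <atomic>
#include <cstddef>

std::atomic<bool> stopped{ false };

void confirm_frontiers_sketch (std::size_t max_elections)
{
	std::size_t elections_count (0);
	// With an atomic flag, the loop condition can read `stopped` without taking the mutex
	for (std::size_t i (0); i < 1000 && !stopped && elections_count < max_elections; ++i)
	{
		// ... start an election for the frontier at position i ...
		++elections_count;
	}
}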

@wezrule

wezrule Apr 13, 2019

Author Collaborator

stopped still needs to be protected by a mutex in some scenarios, otherwise you can still have delays, e.g.:

std::unique_lock<std::mutex> lock (mutex);
...
const auto extra_delay (std::min (roots.size (), max_broadcast_queue) * node.network.broadcast_interval_ms * 2);
if (stopped) // reads stopped == false
{
  break;
}
// [another thread] sets `stopped` = true (without taking the lock)
// Still delays for the full interval
condition.wait_for (lock, std::chrono::milliseconds (node.network_params.network.request_interval_ms + extra_delay));

However, setting stopped = true is currently protected by a lock, so it should work as you've described, and that's what I've done.
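A sketch of the stop() side being relied on here (simplified, assumed to mirror nano-node's behaviour rather than copied from it): because stopped is set while holding the same mutex the loop uses, the loop either sees the flag before it waits or is woken by the notify, so it never sleeps the full interval.

#include <condition_variable>
#include <mutex>

std::mutex mutex;
std::condition_variable condition;
bool stopped = false;

void stop ()
{
	{
		std::lock_guard<std::mutex> guard (mutex); // same mutex the request loop holds around its check
		stopped = true;
	}
	condition.notify_all (); // wake the loop so it observes `stopped` without waiting out the interval
}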

@SergiySW

Collaborator

commented Apr 13, 2019

I guess request_loop shouldn't even start for inactive_node, as well as networking and some other threads.

@wezrule

Collaborator Author

commented Apr 13, 2019

I guess request_loop shouldn't even start for inactive_node, as well as networking and some other threads.

Yeah, probably :), but the daemon will likely encounter similar issues when stopped via RPC.

@wezrule wezrule merged commit adbb93a into nanocurrency:master Apr 13, 2019

2 checks passed

continuous-integration/appveyor/pr AppVeyor build succeeded
continuous-integration/travis-ci/pr The Travis CI build passed

@wezrule wezrule deleted the wezrule:remove_slow_node_exiting_request_loop branch Apr 13, 2019

guilhermelawless added a commit to guilhermelawless/nano-node that referenced this pull request Apr 15, 2019

Remove delay exiting node in request processor (nanocurrency#1904)
* Remove delay exiting node

* Make stopped atomic