This repository has been archived by the owner. It is now read-only.

Ticket28061 priority #281

Merged
merged 2 commits into from Oct 29, 2018

Conversation

Contributor

@juga0 juga0 commented Oct 26, 2018

Travis error comes from new flake8 version, not from changes in this branch

Member

@pastly pastly left a comment

I don't think this does anything to solve the problem.

rp.best_priority() still returns a generator. That generator is stored in relays. Then you iterate over the generator (just like before), but now a debug line is logged about how long it took to iterate through all the relays we were told to measure.

Is my understanding of how the code works wrong? Assuming my understanding is correct ...

The actual fix would:

  • go through all the relays the generator gives us
  • when we run out, wait until there are no more pending results <-- new thing
  • then go back to the beginning.
while True:
    for target in rp.best_priority():
        # As before, tell a thread to measure the target
        # and store the async_result in pending_results.
        if len(pending_results) >= max_pending_results:
            # As before, sleep and trim the list of pending results.
            pass
    while len(pending_results) > 0:
        # NEW: sleep and trim the list of pending results.
        pass
    # Go back to the top, now knowing all workers are idle.
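The loop above can be sketched as a runnable program. This is only an illustration of the drain-before-reprioritizing pattern, not the actual sbws code: best_priority, measure, drain, and run_rounds are hypothetical stand-ins, and concurrent.futures is used in place of sbws's own worker handling.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def best_priority():
    # Stand-in for rp.best_priority(): yields targets in priority order.
    yield from ["relayA", "relayB", "relayC"]


def measure(target):
    # Stand-in for a real measurement; just simulates work.
    time.sleep(0.01)
    return target


def drain(pending, results, keep):
    # Sleep and trim: collect finished futures until at most
    # `keep` measurements remain in flight.
    while len(pending) > keep:
        time.sleep(0.005)
        for f in [f for f in pending if f.done()]:
            results.append(f.result())
            pending.remove(f)


def run_rounds(rounds=2, max_pending=2):
    results = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        for _ in range(rounds):
            pending = []
            for target in best_priority():
                pending.append(pool.submit(measure, target))
                if len(pending) >= max_pending:
                    # As before: cap the number of in-flight measurements.
                    drain(pending, results, keep=max_pending - 1)
            # NEW: wait until no results are pending before going back
            # to the top, so re-prioritization sees every finished result.
            drain(pending, results, keep=0)
    return results
```

The key difference from the original loop is the final drain with keep=0: without it, a relay still being measured when priorities are recomputed looks unmeasured and gets a high priority again.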

@juga0
Contributor Author

@juga0 juga0 commented Oct 26, 2018

I didn't see this comment before.
I had thought that using iterators would cause the method to be called on each iteration (and I forgot to convert to a list in my changes).
The new thing you propose might make sense, but isn't that condition already reached before the last iteration?
And how is it that I didn't get results close to each other this time? https://trac.torproject.org/projects/tor/ticket/28061#comment:22

@juga0 juga0 force-pushed the ticket28061_priority branch from 976e90b to 49c6cea Oct 28, 2018
juga0 added 2 commits Oct 28, 2018
After all the relays in a prioritized subset have been measured, there
might still be pending results. Before a thread has finished measuring
a relay, its results have not been dumped, so prioritization is
calculated again without them and that relay gets a high priority.
@juga0 juga0 force-pushed the ticket28061_priority branch from 49c6cea to 565350c Oct 28, 2018
@juga0
Copy link
Contributor Author

@juga0 juga0 commented Oct 28, 2018

I realized that the change makes sense. I made it and left it running for a day. There are no longer relays measured within minutes of each other.
There are still relays being measured within one hour, but that's another issue.
Thanks for the suggestion.
Also rebased to master.

@juga0 juga0 merged commit 565350c into torproject:master Oct 29, 2018
1 check passed