
Fix over-estimated rate #96

Closed
wants to merge 3 commits into from

Conversation

@kcwu (Contributor) commented Jan 1, 2016

The average rate should be computed as something like 1/average(t for t in samples), but the current implementation is closer to average(1/t for t in samples), which over-estimates the rate.

The following code demonstrates the problem:

```
import random
import time
from tqdm import trange

for i in trange(10000, smoothing=0.01, leave=True, miniters=1, mininterval=0, ncols=0):
    time.sleep(random.random() / 100)
```

Output before this fix:
```
100% 10000/10000 [00:53<00:00, 413.96it/s]
```
After:
```
100% 10000/10000 [00:54<00:00, 187.06it/s]
```
10000/54 = 185.18, close to the reported 187.06.
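The gap between the two numbers follows from Jensen's inequality: averaging instantaneous rates 1/t weights fast iterations too heavily. A minimal standalone sketch of the bias (hypothetical demo code, not tqdm's implementation):

```python
import random

# With varying per-iteration times, average(1/t) over-estimates the
# true throughput 1/average(t), by Jensen's inequality.
random.seed(0)
samples = [random.random() / 100 + 1e-4 for _ in range(10000)]  # sec/iter

biased = sum(1.0 / t for t in samples) / len(samples)  # average(1/t)
true_rate = len(samples) / sum(samples)                # 1/average(t)

print(f"average(1/t): {biased:.1f} it/s  (over-estimate)")
print(f"1/average(t): {true_rate:.1f} it/s")
```

The more the iteration times vary, the larger the over-estimate, which matches the roughly 2x discrepancy seen above.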

@codecov-io

Current coverage is 100.00%

Merging #96 into master will not affect coverage as of 5f1d5ac

```
@@            master     #96   diff @@
======================================
  Files            5       5
  Stmts          239     239
  Branches        49      49
  Methods          0       0
======================================
  Hit            239     239
  Partial          0       0
  Missed           0       0
```

Review entire Coverage Diff as of 5f1d5ac


@casperdcl casperdcl self-assigned this Jan 1, 2016
```diff
@@ -194,7 +194,7 @@ def __iter__(self):
     ax.set_title(format_meter(
         n, total, elapsed, 0,
-        self.desc, ascii, unit, unit_scale, avg_rate,
+        self.desc, ascii, unit, unit_scale, 1 / avg_time,
```
Member review comment:

Should be `1 / avg_time if avg_time else None`, like in `_tqdm.py`
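The suggested guard avoids dividing by zero before any timing sample exists. A sketch of the idea (hypothetical helper, not tqdm's real function):

```python
def format_rate(avg_time):
    # Guard as suggested in review: avg_time may be None or 0 before the
    # first timed iteration, so fall back to an unknown-rate display.
    rate = 1 / avg_time if avg_time else None
    return f"{rate:.2f}it/s" if rate is not None else "?it/s"

print(format_rate(0.005))  # a 5 ms/iter average displays as 200.00it/s
print(format_rate(None))   # no sample yet: ?it/s
```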

@lrq3000 (Member) commented Jan 1, 2016

Indeed, the previous computation was biased (but only when smoothing was enabled); its primary goal was to allow the rate to adapt dynamically to irregularly long iterations. Your PR should fix the issue, but we first have to check that the dynamic adaptation still works (otherwise we will need to fix that too :) ).
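The dynamic adaptation can coexist with the fix by smoothing the per-iteration *time* rather than the instantaneous rate, then taking one reciprocal at display time. A sketch of that idea (illustrative code, not tqdm's exact implementation; `smoothing` here mirrors the parameter name used in the demo above):

```python
def ema(avg_time, dt, smoothing=0.3):
    # Exponential moving average of seconds-per-iteration: smoothing=1
    # tracks only the latest sample, smoothing -> 0 approaches a plain
    # running mean.
    return dt if avg_time is None else smoothing * dt + (1 - smoothing) * avg_time

avg = None
for dt in [0.01, 0.01, 0.05, 0.01]:  # one irregularly long iteration
    avg = ema(avg, dt)
rate = 1 / avg  # reciprocal of the averaged time, not an average of rates
```

Averaging times instead of rates keeps the estimate unbiased while still reacting to recent slow iterations.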

@kcwu (Contributor, author) commented Jan 1, 2016

Patch revised.

@lrq3000 (Member) commented Jan 1, 2016

Wow, thanks @kcwu, but you didn't need to; I intended these notes more as reminders for us tqdm devs before merging :)

Google asserts copyright, even on work I do in my own time. To be clear, Google only owns the copyright to the part of the patch I wrote, not to the whole project.
@kcwu (Contributor, author) commented Jan 6, 2016

FYI, I slightly revised the patch and added my employer to the copyright line.

@casperdcl (Sponsor Member) commented
Merged (and rebased) without the `delta_it == 0` assumption; `update(0)` will be interpreted as `update(1)` internally by default...
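The merge note above describes treating a zero-step update as a single step, so the timing average never divides by a zero increment. A minimal sketch of that behaviour (hypothetical class, not tqdm's real API):

```python
class Meter:
    # Illustration of the "update(0) interpreted as update(1)" convention
    # mentioned in the merge note.
    def __init__(self):
        self.n = 0

    def update(self, n=1):
        if n == 0:
            n = 1  # a zero-step update still counts as one step
        self.n += n

m = Meter()
m.update(0)  # counted as one step
m.update(3)
print(m.n)
```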
