
print() is 68 times faster than ic() #36

Open
offchan42 opened this issue Sep 27, 2019 · 18 comments

@offchan42

offchan42 commented Sep 27, 2019

Try the following code to test the speed of print() vs ic():

from icecream import ic
from time import time

start = time()
for i in range(1000):
    print("i:", i)
stop = time()
elapsed = stop - start

start = time()
for i in range(1000):
    ic(i)
stop = time()
elapsed2 = stop - start

print(elapsed)
print(elapsed2)
print(f"print() is {elapsed2 / elapsed} times faster than ic()")

I found that print is 68 times faster!
[screenshot of the timing output]

How can it be made faster? Why is it so slow? Is it because of the coloring?

offchan42 changed the title from "print() is 765 times faster than ic()" to "print() is 68 times faster than ic()" on Sep 27, 2019
@alexmojaki
Collaborator

Finding where ic was called from is a very complicated operation; Python doesn't offer it for free. I'm surprised the difference isn't bigger. In fact, you call print 100 times but ic 1000 times, which suggests print is really only 6.8 times faster per call. If you replace print with sys.stdout.write you might get a fairer comparison.

#33 should make repeated calls faster.
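
For reference, a minimal sketch of that raw-write baseline (the f-string formatting here is my own choice, not from the thread):

import sys
from time import time

# Baseline: time 1000 raw writes to stdout instead of going through print().
start = time()
for i in range(1000):
    sys.stdout.write(f"i: {i}\n")
elapsed_raw = time() - start
print(elapsed_raw)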

@offchan42
Author

offchan42 commented Sep 28, 2019

Sorry about the 100 vs 1000 iterations; the elapsed times were correct, I just pasted the wrong code.
I've changed it now to 1000 vs 1000, so print is still 68 times faster.
Note for future readers: you won't see 100 above because I've already modified the code to 1000, so the code as shown is correct and you can skip this comment.

@gruns
Owner

gruns commented Sep 30, 2019

Is ic()'s performance presenting a problem? If so, how and where?

IceCream is a debugging library and thus shouldn't be used in performance-critical code (e.g. in a tight loop in production). As @alexmojaki mentioned, ic() does a lot of work to determine output (open source files, parse Python code, etc).

If you want to dive in to improve IceCream's performance, I'd ebulliently welcome and merge such PRs. But performance is not my focus with IceCream until it presents itself as a problem.

As for tight loops and/or production, the forthcoming #33 (also a huge thanks to @alexmojaki there) will help.

@offchan42
Author

offchan42 commented Sep 30, 2019

I was interacting with UI elements where the UI reports some number to the command line, e.g. dragging a slider that then computes something. ic() makes the slider look unresponsive if you drag it quickly through many callback points (like from 0 to 100).
It's fine in that case because I switched to print(), but it would be better if I could use ic there.

@gruns
Owner

gruns commented Sep 30, 2019

Aha. That would do it. Great example of an ic() use case where performance matters.

A threaded implementation would solve this: do all ic() work in a non-UI thread, e.g. via the threading library. File I/O is requisite for ic(), and there's no easy way to speed that up.
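
As a rough illustration (my own sketch, not an icecream feature), the write can be pushed onto a background thread via ic.configureOutput; note that ic()'s source inspection still runs on the calling thread, so this only offloads the final write:

import queue
import sys
import threading

from icecream import ic

_lines = queue.Queue()

def _writer():
    # Drain formatted ic() output and write it to stderr, off the UI thread.
    # (Daemon thread: anything still queued at interpreter exit may be dropped.)
    while True:
        print(_lines.get(), file=sys.stderr)

threading.Thread(target=_writer, daemon=True).start()

# ic() hands its formatted string to outputFunction; queue.put() is cheap,
# so the calling (UI) thread no longer blocks on the actual write.
ic.configureOutput(outputFunction=_lines.put)

for i in range(3):
    ic(i)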

Is this something you could tackle?

@offchan42
Author

No. I don't know threading in Python yet.

@alexmojaki
Collaborator

Try using the pp function in snoop.

@offchan42
Author

I see that snoop is more extensive in its number of features, providing more options than ic, though ic is simple and easy to get started with.
I tried snoop and it looks good.
Are you suggesting the pp function because you think it's faster?

@alexmojaki
Collaborator

Yes. pp uses executing, which is more accurate than ic and has caching, which makes it much faster over many calls. #33 is to bring the same functionality to icecream.

Using pp should be the same experience as using ic. The rest of snoop is also great but serves different purposes.
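
A small sketch of that (assuming snoop is installed); pp is used the same way as ic:

from snoop import pp

x = {"a": 1, "b": 2}
pp(x)                  # prints the expression and its value, much like ic(x)
pp(x["a"] + x["b"])    # arbitrary expressions work too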

@offchan42
Author

@alexmojaki Yes. I tried experimenting with this code

from time import time

from icecream import ic
from snoop import pp


def process(print_func, n_iter):
    start = time()
    for i in range(n_iter):
        print_func(i)
    stop = time()
    elapsed = stop - start
    return elapsed


n_iter = 10_000
e1 = process(lambda i: print("i:", i), n_iter)
e2 = process(ic, n_iter)
e3 = process(pp, n_iter)

print("print elapsed:", e1)
print("ic elapsed:", e2)
print("pp elapsed:", e3)
print(f"print() is {e2 / e1} times faster than ic()")
print(f"print() is {e3 / e1} times faster than pp()")

And here is the result:

print elapsed: 0.5972144603729248
ic elapsed: 43.248204708099365
pp elapsed: 3.786238431930542
print() is 72.41653974870843 times faster than ic()
print() is 6.339830468214487 times faster than pp()

So yeah, pp is definitely faster than ic! I don't see why I wouldn't use it; it just needs a catchier name. It also doesn't have the issue I mentioned here: #35
It looks to me like it's better in every way, as of now.
[screenshot of the pp output]
It should print on one line, though, when the output fits on one line.

@offchan42
Author

offchan42 commented Oct 6, 2019

After testing both pp and ic, I've found that pp works better in cmd, but ic is better in JupyterLab.
[screenshot comparing pp and ic output in JupyterLab]
Notice that pp does not report the argument name; it only reports the argument name when called inside a function. And I like the one-line style more.

@alexmojaki
Collaborator

Thanks for pointing this out to me. IPython does some extra magic which makes it harder for executing to know that everything is working OK. If you pip install -U executing you should find that pp works again.

@offchan42
Author

It seems you know what you're doing!
[screenshot of pp output after upgrading executing]

@offchan42
Author

offchan42 commented Oct 6, 2019

I will close this issue for now, since it seems I can get around the slowness by using pp.
I think improving the speed of ic would need some significant core upgrades, so it's better to leave it as is for now.

@matthewdm0816

I don't know whether it's due to a problem with VSCode or somewhere else, but using ic to print something is literally much slower (as if printing character by character) in Jupyter in VSCode (remote).
But pprint works fine.
Also, ic works fine in Jupyter in the browser.

@gruns
Owner

gruns commented Jun 27, 2021

using ic to print something is literally much more slower (like print character by character)

this sounds like an issue, perhaps, outside icecream. behind the scenes icecream just calls print() itself:

def stderrPrint(*args):
    print(*args, file=sys.stderr)

can you

  1. record a video of the behavior so we can see exactly what you see

and

  2. try printing to stdout directly with
ic.configureOutput(outputFunction=print)

is the same character by character delay seen?
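
For example, a minimal version of that check (just a sketch wrapping the call above):

from icecream import ic

# Route ic()'s output through print() (stdout) instead of the default stderr
# writer, then see whether the character-by-character delay is still there.
ic.configureOutput(outputFunction=print)

for i in range(5):
    ic(i)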

@mikeseven

try printing to stdout directly with ic.configureOutput(outputFunction=print)

Not a niche issue.
On VSCode, using this configuration is sooooo fast and really useful!

@gruns
Owner

gruns commented Aug 18, 2021

On VSCode, using this configuration is sooooo fast and really useful!

to be explicit, using this configuration

ic.configureOutput(outputFunction=print)

right?

gruns reopened this on Aug 18, 2021