context manager for measuring duration of blocks of code #63694

Open
dmoore mannequin opened this issue Nov 4, 2013 · 15 comments
Assignees
Labels
3.8 only security fixes stdlib Python modules in the Lib dir type-feature A feature request or enhancement

Comments

@dmoore
Mannequin

dmoore mannequin commented Nov 4, 2013

BPO 19495
Nosy @tim-one, @birkenfeld, @rhettinger, @ncoghlan, @giampaolo, @rbtcollins, @ezio-melotti, @stevendaprano, @bitdancer, @cjrh, @vadmium, @csabella, @xgid

Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


GitHub fields:

assignee = 'https://github.com/stevendaprano'
closed_at = None
created_at = <Date 2013-11-04.18:07:31.225>
labels = ['3.8', 'type-feature', 'library']
title = 'context manager for measuring duration of blocks of code'
updated_at = <Date 2019-03-30.14:11:14.670>
user = 'https://bugs.python.org/dmoore'

bugs.python.org fields:

activity = <Date 2019-03-30.14:11:14.670>
actor = 'ncoghlan'
assignee = 'steven.daprano'
closed = False
closed_date = None
closer = None
components = ['Library (Lib)']
creation = <Date 2013-11-04.18:07:31.225>
creator = 'dmoore'
dependencies = []
files = []
hgrepos = []
issue_num = 19495
keywords = []
message_count = 15.0
messages = ['202151', '202153', '220402', '220526', '220546', '220688', '220691', '220692', '220693', '220703', '221065', '311294', '311295', '338797', '339201']
nosy_count = 15.0
nosy_names = ['tim.peters', 'georg.brandl', 'rhettinger', 'ncoghlan', 'giampaolo.rodola', 'rbcollins', 'ezio.melotti', 'steven.daprano', 'r.david.murray', 'THRlWiTi', 'cjrh', 'martin.panter', 'dmoore', 'cheryl.sabella', 'xgdomingo']
pr_nums = []
priority = 'normal'
resolution = None
stage = 'needs patch'
status = 'open'
superseder = None
type = 'enhancement'
url = 'https://bugs.python.org/issue19495'
versions = ['Python 3.8']

@dmoore
Mannequin Author

dmoore mannequin commented Nov 4, 2013

It would be useful if timeit had a class like timeblock below that would allow one to easily time code inside a with block:

import time

class timeblock:
    def __init__(self, descr=''):
        self.descr = descr
    def __enter__(self):
        self.t0 = time.time()
        return self
    def __exit__(self, type, value, traceback):
        self.t1 = time.time()
        self.elapsed = self.t1 - self.t0
        if self.descr:
            print(self.descr, 'took', self.elapsed, 'seconds')

This would be used as follows:

with timeblock('cumsum') as t:
    a=0
    for x in range(10000000):
        a+=x

and would output:
cumsum took 2.39 seconds

This is useful when trying to find bottlenecks in large programs without interfering with their operation, which would be harder to do with timeit.timeit and more verbose with time.

@dmoore dmoore mannequin added stdlib Python modules in the Lib dir type-feature A feature request or enhancement labels Nov 4, 2013
@giampaolo
Contributor

I think I have already proposed something similar in the past, but I can't find any reference on the tracker (so maybe I didn't after all).
BTW I have something like this [1] in a "utils" module I use across different projects and I would definitely welcome its inclusion in the stdlib.

[1] a modified version of http://dabeaz.blogspot.it/2010/02/function-that-works-as-context-manager.html

@stevendaprano
Member

I have been using something like this for many years now and it is very handy. I have an early version of the code posted here:

http://code.activestate.com/recipes/577896

Over the next week or so, I'll prepare a patch.

Because it's a new feature, it must be for 3.5 only. The other versions are in feature-freeze.

@stevendaprano stevendaprano self-assigned this Jun 13, 2014
@rhettinger
Contributor

This idea is a bit at odds with the design of the rest of the timeit module, which seeks to:

  • run small snippets of code in volume rather than "trying to find bottlenecks in large programs without interfering with their operation". The latter task is more suited to the profiling modules.

  • run code inside a function (local variables have different performance characteristics than global lookups).

  • take care of running the loop for you (including repeated measurements and setup code) instead of your writing your own loop, as done in the proposed code fragment.

  • return the output rather than print it.

  • minimize the overhead of the timing suite itself (using itertools.repeat to control the number of loops, etc.), unlike the class-based version, which introduces its own overhead.

It might still be worth having a time elapsed decorator, but I wanted to note that this isn't in the spirit of the rest of the module.
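For contrast, the intended timeit usage pattern looks like this — the module runs the snippet many times inside a generated function and returns the measurement rather than printing it:

```python
import timeit

# timeit compiles the statement into a function, runs it `number` times,
# and returns the total elapsed time as a float (it does not print).
total = timeit.timeit("a += 1", setup="a = 0", number=1_000_000)
print(total)

# repeat() performs several independent runs; taking the minimum is the
# conventional way to average out scheduling noise.
best = min(timeit.repeat("a += 1", setup="a = 0", number=100_000, repeat=5))
print(best)
```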

@giampaolo
Contributor

It might still be worth having a time elapsed decorator

This is exactly what I think. It's a very common task.

@rhettinger
Contributor

Perhaps a time-elapsed context manager would be a better fit in the contextlib module (which contains more general-purpose macro-level tools usable for many different tasks) rather than the timeit module (which is more narrowly tailored to high-quality reproducible in-vitro performance analysis at a fine-grained level).

It's a very common task.

That said, the task is sometimes solved in different ways. Hard-wiring this to "print" would preclude its use in cases where you want to save the result to a variable, log it, or build some cumulative total-time statistics.

Also, I think it ignores some of the realities of how difficult it is to do meaningful performance analysis.
The "cumsum" example isn't the norm. Most code is harder to time and doesn't have a high-volume tight loop that you can easily wrap a coarse-grained context manager around. Getting reproducible timings (creating a consistent setup, using repeated calls to average out noise, and isolating the part you want to test) can be tricky in the middle of real applications.

I occasionally have code where the proposed context manager would have been nice (saving me from typing the usual start=time() ... end=time()-start pairs). However, most of the time this technique is too simple, and something like Robert Kern's line-profiler or cProfile is a better fit for identifying hotspots.
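A minimal sketch of the stdlib profiling approach the comment above alludes to — cProfile attributes time to individual functions rather than to a wrapped block (the `work` function here is just a stand-in workload):

```python
import cProfile
import io
import pstats

def work():
    # Stand-in for a hotspot we want to locate.
    return sum(x * x for x in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Print the five hottest entries sorted by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

Unlike a timing context manager, this reports per-function call counts and times, which is what makes it suitable for hotspot identification.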

@stevendaprano
Member

On Mon, Jun 16, 2014 at 02:09:22AM +0000, Raymond Hettinger wrote:

Perhaps a time-elapsed context manager would be a better fit in the
contextlib module (which contains more general purpose macro-level
tools usable for many different tasks) rather than the timeit module
(which is more narrowly tailored to high-quality reproducible in-vitro
performance analysis at a fine-grained level).

Perhaps you're right, but "timeit" is a fairly broad description, and I
would expect anything related to the task "time it" to be found there.
I'm +0.9 on timeit and +0.4 on contextlib.

Does anyone else have an opinion on where this belongs?

@gvanrossum
Member

I agree with Raymond -- this is a common pattern but there are many variations that are hard to catch in a single implementation. E.g. at Dropbox we have a decorator like this that lets you specify an identifier for the block you name, and which logs the measured times to a special log file. Then separate analysis software plots the average time for each named block over time. Incredibly useful, but hard to share the code.

I think that Jeff Preshing's blog post giving the basic idea (to which Steven's recipe links) is more useful than an implementation in the stdlib could ever be. Steven's class already has three keyword arguments, and that feels like either not enough or already too many, depending on your needs. It's too easy to bikeshed it to death. (E.g. the "disable+restore gc" feature could easily be moved into a separate decorator; and why not make the verbose printing a little more flexible by adding another parameter to specify either the file to print to or the function to be used to print. Oh, and the message could be customizable. Then again, you might want to subclass this in order to do all those customizations, eventually allowing this to be used as the basis for e.g. the Dropbox version. Etc. Not that I propose to go that way -- I'm just bringing this up to indicate how hopeless it would be to try and put some version in the stdlib.)

@gvanrossum
Member

FWIW, I definitely don't think this belongs in the timeit module, unless you are going to treat that module as a "namespace" package, which I don't like much (though in Java I think it's a pretty common pattern). If we have to have it, contextlib sounds fine (it's a grab-bag already). The sub-idea of a context manager to disable gc (or perhaps manipulate other state of the gc module) sounds fine even if you follow my recommendation not to add the timer context manager to the stdlib.

@ncoghlan
Contributor

I'm with Guido on this one - I don't believe there's a clean sweet spot to hit. There *might* be a plausible case to be made for tracing functionality in the logging module, but even there the number of options multiplies fast enough that writing your own context manager would likely be easier than learning a new API with lots of settings to tweak.

@ezio-melotti
Member

I often write code like:
import time
start = time.time()
...
end = time.time()
print(end - start)

Usually I don't do this to accurately measure the performance of some piece of code; rather, I do it for tasks that take some time (e.g. downloading a file, or anything that I can leave there for a while and come back to later to see how long it took).

So I'm +1 on a simple context manager that replaces this common snippet, and -0 on something that tries to accurately measure some piece of code (if it takes a few seconds or more, high accuracy doesn't matter; if it takes a fraction of a second, I won't trust the result without repeating the measurement in a loop).

Regarding the implementation I can think about 2 things I might want:
1) a way to retrieve the time (possibly as a timedelta-like object [0]), e.g.:
    with elapsed_time() as t:
        ...
    print(t.seconds)
2) a way to print a default message (this could also be the default behavior, with a silent=True to suppress the output), e.g.:
    >>> with elapsed_time(print=True):
    ...     ...
    ...
    Task terminated after X seconds.

For the location I think that the "time" module would be the first place where I would look (since I would have to otherwise import time() from there). I would probably also look at "timeit" (since I'm going to time something), even though admittedly it doesn't fit too much with the rest. While it might fit nicely in "contextlib", I won't probably expect to find it there unless I knew it was there in the first place.

[0] would making timedelta a context manager be too crazy?
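A minimal sketch of the API described in point 1 above — the names `elapsed_time` and `.seconds` follow the comment and are hypothetical, and `time.perf_counter` stands in for whatever clock the real thing would use:

```python
import time
from contextlib import contextmanager

class _Elapsed:
    """Result object; .seconds is filled in when the block exits."""
    seconds = None

@contextmanager
def elapsed_time(print_result=False):
    result = _Elapsed()
    start = time.perf_counter()
    try:
        yield result
    finally:
        result.seconds = time.perf_counter() - start
        if print_result:
            print(f"Task terminated after {result.seconds} seconds.")

with elapsed_time() as t:
    sum(range(100_000))
print(t.seconds)
```

Yielding a mutable result object is the usual workaround for the fact that the elapsed time isn't known until the block exits.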

@rbtcollins rbtcollins changed the title Enhancement for timeit: measure time to run blocks of code using 'with' context manager for measuring duration of blocks of code Mar 17, 2015
@csabella
Contributor

Hello everyone,

I was interested in similar functionality for timeit and found this discussion. I know it's too late for 3.7, but I was wondering if it would be worth putting this on python-ideas or python-dev to try to get some consensus for a PEP or a patch? With Barry's addition of breakpoint(), I would find something similar for timeit (that is, a simpler way to invoke it) would be quite helpful.

Thanks!

@bitdancer
Member

FWIW, we just ran into the need for this on our project, and used 'Time' from the contexttimer package on PyPI. In our use case we want the number so we can log it, and what we care about is the actual elapsed time, not a precise measurement of the code block, because it's for system monitoring and metrics, not hardcore performance analysis.

Having this always print would indeed be sub-optimal :)

@bitdancer bitdancer added the 3.8 only security fixes label Jan 30, 2018
@cjrh
Mannequin

cjrh mannequin commented Mar 25, 2019

Somehow I missed that there's been an open issue on this. Like others, I've written a bunch of different incarnations of an "elapsed" context manager over the years. Always for the cruder "how long did this take" reason like David mentioned, never the microbenchmark scenario that timeit serves. The work is never quite substantial enough to make a PyPI package of it, but it is always annoying to have to do it again the next time the need arises.

The overall least fiddly scheme I've found is to just use a callback. It's the simplest option:

@contextmanager
def elapsed(cb: Callable[[float, float], None], counter=time.perf_counter):
    t0 = counter()
    try:
        yield
    finally:
        t1 = counter()
        cb(t0, t1)

The simple case, the one most people want when they just want a quick check of elapsed time on a chunk of code, is then quite easy:

with elapsed(lambda t0, t1: print(f'read_excel: {t1 - t0:.2f} s')):
    # very large spreadsheet
    df = pandas.read_excel(filename, dtype=str)

(I rarely need to use a timer decorator for functions, because the profiler tracks function calls. But within the scope of a function it can sometimes be difficult to get timing information, particularly if the calls made there are into native extensions)

One of the consequences of using a callback strategy is that an additional version might be required for async callbacks (I have used these in production also):

@asynccontextmanager
async def aioelapsed(acb: Callable[[float, float], Awaitable[None]],
                     counter=time.perf_counter):
    t0 = counter()
    try:
        yield
    finally:
        t1 = counter()
        await acb(t0, t1)

So, the interesting thing here is that there is a general form for which an "elapsed" function is just a special case:

T = TypeVar('T')


@contextmanager
def sample_before_and_after(cb: Callable[[T, T], None], sample: Callable[[], T]):
    before = sample()
    try:
        yield
    finally:
        after = sample()
        cb(before, after)

The version of "elapsed" given further above is just this with the kwarg sample=time.perf_counter. So it might be sufficient to cover the use-case of an "elapsed" context manager with something like the above, which is more general. However, I don't actually have any use cases for this more general form other than "elapsed", but I thought it was interesting.
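One hypothetical non-timing use of the general form, to make the point concrete — the same building block sampling allocated memory (via tracemalloc) instead of a clock:

```python
import tracemalloc
from contextlib import contextmanager
from typing import Callable, TypeVar

T = TypeVar('T')

@contextmanager
def sample_before_and_after(cb: Callable[[T, T], None], sample: Callable[[], T]):
    before = sample()
    try:
        yield
    finally:
        cb(before, sample())

# Sample currently-allocated bytes rather than elapsed time.
tracemalloc.start()
deltas = []
with sample_before_and_after(
    lambda before, after: deltas.append(after - before),
    sample=lambda: tracemalloc.get_traced_memory()[0],
):
    data = [bytearray(1000) for _ in range(100)]  # keep ~100 KB alive
tracemalloc.stop()
print(f"allocated {deltas[0]} bytes inside the block")
```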

Whether any of this merits being in the stdlib or not is hard to say. These code snippets are all short and easy to write. But I've written them multiple times to make "elapsed".

---

Once the "elapsed" abstraction is available, other cool ideas become a little bit easier to think about. These would be things that live in user code (not in the stdlib) but can make use of the "elapsed" cm; for example, a clever logger for slow code blocks (I've written a few of these too):

@contextmanager
def slow_code_logging(logger_name, msg, *args, threshold_sec=1.0, **kwargs):
    logger = logging.getLogger(logger_name)

    if logger.isEnabledFor(logging.INFO):
        def cb(t0: float, t1: float) -> None:
            dt = t1 - t0
            if dt < threshold_sec:
                # Change the logger level depending on threshold
                logger.debug(msg, dt, *args, **kwargs)
            else:
                logger.info(msg, dt, *args, **kwargs)

        cm = elapsed(cb)
    else:
        # Logger is not even enabled, do nothing.
        cm = nullcontext()

    with cm:
        yield


with slow_code_logging(__name__, 'Took longer to run than expected: %.4g s'):
    ...

And a super-hacky timing histogram generator (which would be quite interesting to use for measuring latency in network calls, e.g. with asyncio code):

@contextmanager
def histobuilder(counter, bin_size):
    def cb(t0, t1):
        dt = t1 - t0
        bucket = dt - dt % bin_size
        counter[bucket, bucket + bin_size] += 1

    with elapsed(cb, counter=time.perf_counter_ns):
        yield

counter = Counter()

for i in range(100):
    with histobuilder(counter, bin_size=int(5e4)):  # 50 us
        time.sleep(0.01)  # 10 ms

for (a, b), v in sorted(counter.items(), key=lambda _: _[0][0]):
    print(f'{a/1e6:6.2f} ms - {b/1e6:>6.2f} ms: {v:4} ' + '\u2588' * v)

output:

9.85 ms - 9.90 ms: 1 █
9.90 ms - 9.95 ms: 10 ██████████
9.95 ms - 10.00 ms: 17 █████████████████
10.00 ms - 10.05 ms: 8 ████████
10.05 ms - 10.10 ms: 12 ████████████
10.10 ms - 10.15 ms: 5 █████
10.15 ms - 10.20 ms: 4 ████
10.20 ms - 10.25 ms: 4 ████
10.25 ms - 10.30 ms: 6 ██████
10.30 ms - 10.35 ms: 9 █████████
10.35 ms - 10.40 ms: 3 ███
10.40 ms - 10.45 ms: 5 █████
10.45 ms - 10.50 ms: 12 ████████████
10.50 ms - 10.55 ms: 3 ███
10.55 ms - 10.60 ms: 1 █

Things like the histogram builder above become (for me) much simpler when "elapsed" is just a thing that already exists even though it's very simple on its own.

If the right call is that we shouldn't add such a thing to the stdlib, should we instead just add an example of an elapsed context manager to the contextlib docs? Perhaps copy-paste from the docs is good enough for most users (like we do with the itertools recipes for example). I'd be willing to add something to the docs if it's decided that's the way to go.

(On the off chance some version of "elapsed" goes into the stdlib, I think contextlib is right place rather than timeit, particularly because of how the "sample_before_and_after" pattern generalises)

@ncoghlan
Contributor

I think Caleb's "sample_before_and_after" idea hints that this may be an idea that could benefit from the ExitStack treatment, where contextlib provides a building block that handles the interaction with the context management machinery, and the documentation shows recipes for particular use cases (such as implementing timers for blocks of code).

For example:

    class ContextMonitor:
        """Invoke given sampling callbacks when a context is entered and exited, and optionally combine the sampling results.

        *on_entry*: zero-arg sampling function to call on context entry
        *on_exit*: zero-arg sampling function to call on context exit
        *combine_samples*: two-arg sample combination function. If not given, samples are combined as 2-tuples.
        *keep_results*: whether to keep results for retrieval via ``pop_combined_result()``

        Instances of this context manager are reusable and reentrant.
        """
        def __init__(self, on_entry=None, on_exit=None, combine_samples=None, *, keep_results=False):
            self._on_entry = (lambda: None) if on_entry is None else on_entry
            self._on_exit = (lambda: None) if on_exit is None else on_exit
            self._combine_samples = (lambda *args: args) if combine_samples is None else combine_samples
            self._entry_samples = []
            self._keep_results = keep_results
            self._combined_results = [] if keep_results else None

        @classmethod
        def sample(cls, on_event=None, combine_samples=None):
            """Context monitor that uses the same sampling callback on entry and exit"""
            return cls(on_event, on_event, combine_samples)

        def pop_combined_result(self):
            """Pops the last combined result. Raises RuntimeError if no results are available"""
            results = self._combined_results
            if not results:
                raise RuntimeError("No sample results to report")
            return results.pop()

        def __enter__(self):
            self._entry_samples.append(self._on_entry())
            return self

        def __exit__(self, *args):
            entry_sample = self._entry_samples.pop()
            exit_sample = self._on_exit()
            result = self._combine_samples(entry_sample, exit_sample)
            if self._keep_results:
                self._combined_results.append(result)
And then a recipe like the following (adapted from Caleb's example):

    def log_if_slow(logger_name, msg, *args, threshold_sec=1.0, **kwargs):
        """Context manager that logs monitored blocks that take too long"""
        logger = logging.getLogger(logger_name)
        if not logger.isEnabledFor(logging.INFO):
            # Avoid the timer overhead if the logger will never print anything
            return nullcontext()
        def _log_slow_blocks(start, end):
            duration = end - start
            if duration >= threshold_sec:
                logger.info(msg, duration, *args, **kwargs)
        return ContextMonitor.sample(time.perf_counter, _log_slow_blocks)
    
    with log_if_slow(__name__, 'Took longer to run than expected: %.4g s'):
        ...

The question is whether anyone would actually find it easier to learn an API like ContextMonitor than to just write their own generator-based context manager.

Depending on how familiar they are with the context management protocol, it's plausible that they would, as the construction API only asks a few questions:

  • whether to use the same sampling function on entry and exit or different ones
  • which sampling function(s) to use
  • how to combine the two samples into a single result (defaulting to producing a two-tuple)
  • whether to keep the results around for later retrieval (useful if you want to post-process the samples rather than dealing with them directly in the sample combination callback)

@ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022