Conversation

idiomaticrefactoring

Refactoring code with variable unpacking, which is more Pythonic, concise, readable and efficient. What do you think of this change? Does it have practical value?
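For context, a minimal sketch of the kind of change being discussed (the function name and list here are illustrative, taken from the test code later in this thread, not the actual PR diff):

def print_arg(*arg):
    print(*arg)

a = list(range(100))

# before: pass each element explicitly
print_arg(a[0], a[1], a[2], a[3])

# after: unpack a slice into the same call
print_arg(*a[0:4])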
@dlech

dlech commented Jan 20, 2022

Unpacking args requires extra memory allocation behind the scenes, which can be undesirable in some cases (e.g. can lead to memory fragmentation).
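A rough way to see this on a MicroPython build is to compare gc.mem_alloc() before and after the call (a minimal sketch; gc.mem_alloc() is MicroPython-specific, and the exact numbers depend on the port and the length of the argument list):

import gc

def takes_args(*args):
    return len(args)

data = list(range(11))

gc.collect()
before = gc.mem_alloc()
takes_args(*data[0:11])   # the slice and the implicit args tuple are allocated here
after = gc.mem_alloc()
print("heap bytes allocated by the call:", after - before)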

@idiomaticrefactoring
Author

idiomaticrefactoring commented Feb 17, 2022

Thank you @dlech. I am not clear about the extra memory allocation. Could you explain where it comes from? Is it the slice, or something else?

Unpacking args requires extra memory allocation behind the scenes, which can be undesirable in some cases (e.g. can lead to memory fragmentation).

@dlech

dlech commented Feb 17, 2022

It creates a tuple object.
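A quick way to convince yourself of this in plain Python: the args received by a *args function are a freshly built tuple on every call (a minimal sketch, not the PR code):

def collect(*args):
    # args is a brand-new tuple built for this call
    return args

a = list(range(11))

t1 = collect(*a[0:11])          # the slice makes a new list, then it is packed into a tuple
t2 = collect(a[0], a[1], a[2])  # explicit positional args are also packed into a tuple
print(type(t1), type(t2))       # both <class 'tuple'>
print(t1 == tuple(a))           # True, but t1 is a separate object from a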

@idiomaticrefactoring
Author

Thank you. Yes. However, when I test it, the output is strange: it shows no difference in memory usage.

from memory_profiler import profile
@profile
def print_arg(*arg):
    print(*arg)
# instantiating the decorator
@profile
# code for which memory has to
# be monitored
def my_func():
    a=[i for i in range(100)]
    print_arg(a[0],a[1],a[2],a[3],a[4],a[5],a[6],a[7],a[8],a[9],a[10])
    print_arg(*a[0:11:1])
   

if __name__ == '__main__':
    my_func()

The output is:


Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
     6     41.7 MiB     41.7 MiB           1   @profile
     7                                         # code for which memory has to
     8                                         # be monitored
     9                                         def my_func():
    10     41.7 MiB      0.0 MiB         103       a=[i for i in range(100)]
    11     41.7 MiB      0.0 MiB           1       print_arg(a[0],a[1],a[2],a[3],a[4],a[5],a[6],a[7],a[8],a[9],a[10])
    12     41.7 MiB      0.0 MiB           1       print_arg(*a[0:11:1])

@dlech

dlech commented Feb 17, 2022

It looks like you performed the test using Python, not MicroPython. CPython uses reference counting, so all of the implicit tuple and slice objects are most likely freed by the time the method returns. Also you would have to allocate at least 100KiB in order for anything to show up when the memory is being measured in MiB with one decimal place.
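If you want to observe those short-lived allocations in CPython, tracemalloc in the standard library has byte-level granularity, unlike memory_profiler's per-line snapshots in MiB (a minimal sketch; the exact peak depends on the interpreter version):

import tracemalloc

def print_arg(*arg):
    return arg

a = [i for i in range(100)]

tracemalloc.start()
print_arg(*a[0:11:1])   # the temporary slice and args tuple show up in the peak
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print("peak bytes traced during the call:", peak)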

@idiomaticrefactoring
Author

Thank you. However, I am a little confused: when I set breakpoints on the two lines of code in figure 1 (arg = print_arg_1(*a[0:10:1]) on line 17 and arg2 = print_arg_1(a[0], a[1], a[2], a[3], a[4], a[5], a[6], a[7], a[8], a[9]) on line 19), it seems that no additional tuple is created. Specifically, in figure 2 args is a tuple, and in figure 3 args is still a tuple.
[Figures 1–3: debugger screenshots showing the breakpoints and args displayed as a tuple at each stop]

@dpgeorge
Member

Doing a slice like ss[0:7:1] will allocate a new bytes/bytearray object. So this is not an efficient way to do it.
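If the goal is to avoid that copy, a memoryview slice is the usual approach in MicroPython: it still allocates a small memoryview object, but it does not copy the underlying bytes (a minimal sketch with a made-up buffer, not the code from the PR):

buf = bytearray(b"0123456789")

copied = buf[0:7]             # allocates a new bytearray and copies 7 bytes
view = memoryview(buf)[0:7]   # allocates only a small memoryview; no data copy

print(bytes(copied) == bytes(view))   # True: same contents, different allocation cost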

@dpgeorge dpgeorge closed this Jul 21, 2023