
Memory leak in ArrayBuilder/GrowableBuffer #567

Closed
jpivarski opened this issue Dec 3, 2020 · 20 comments · Fixed by #570
Labels
bug The problem described is something that must be fixed

Comments

@jpivarski
Member

My response to @sbuse's report on Scikit-HEP/awkward-array (Gitter).

I did a dirty but conclusive-enough experiment (watching htop while running commands in a Zoom meeting). The following consistently increased memory linearly: 100 MB in 80 seconds.

% python -i -c 'import awkward as ak; import numpy as np; import gc'
>>> for i in range(1000000):
...   a = ak.ArrayBuilder()
...   a.integer(i)
...   del a
...   tmp = gc.collect()
... 

and the following did not increase by even 10 MB in 80 seconds (where 10 MB is the level of noise—other applications allocating and freeing memory on my system).

% python -i -c 'import awkward as ak; import numpy as np; import gc'
>>> for i in range(1000000):
...   a = np.array([i])
...   b = ak.Array(a)
...   del a
...   del b
...   tmp = gc.collect()
... 

The first is a simplified version of yours: your example creates arrays from Python data, which internally invoke the ArrayBuilder. The second makes Awkward Arrays by wrapping NumPy arrays. Your example,

% python -i -c 'import awkward as ak; import numpy as np; import gc'
>>> for i in range(1000000):
...   a = ak.Array([i])
...   del a
...   tmp = gc.collect()
... 

is pretty much a combination of the two steps: ArrayBuilder, then wrap as ak.Array. Doing the above explicitly accumulated 120 MB in 80 seconds.
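For a more targeted measurement than watching htop, something like this rough sketch (assuming Linux, where ru_maxrss is reported in kilobytes) tracks the process's peak resident set size as the loop runs; a leak shows up as a number that keeps climbing:

% python -i -c 'import awkward as ak; import gc; import resource'
>>> for i in range(1000000):
...   a = ak.ArrayBuilder()
...   a.integer(i)
...   del a
...   tmp = gc.collect()
...   if i % 100000 == 0:
...     # peak RSS so far, in MB; it only ever grows, so a leak appears as a steady increase
...     print(i, resource.getrusage(resource.RUSAGE_SELF).ru_maxrss // 1024, "MB")
... 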

So it sounds like this is a memory leak in ArrayBuilder, very likely the GrowableBuffer that gets allocated is somehow not getting freed. The memory that GrowableBuffer allocates is in a std::shared_ptr that should be kept alive only by the fact that the ArrayBuilder is held as a Python reference. In my first example, del a should have dropped the Python reference count to zero and gc.collect() should have deleted the ArrayBuilder, then GrowableBuffer instance, which should have dropped the std::shared_ptr reference count to zero to immediately free the memory. That doesn't seem to be happening, but I know where to look.

(For future triage: memory leaks are bugs, not performance issues.)

@jpivarski jpivarski added the bug The problem described is something that must be fixed label Dec 3, 2020
@jpivarski jpivarski moved this from C++ to In Progress in Imminent fixes/features Dec 4, 2020
@jpivarski jpivarski linked a pull request Dec 4, 2020 that will close this issue
@jpivarski
Member Author

PR #570 fixes it:

>>> import awkward as ak
>>> import gc
>>> builder = ak.ArrayBuilder()
>>> builder.integer(123)
CPU  malloc at 0x56286a4b4a50 (8192 bytes)
>>> builder.integer(456)
>>> del builder
CPU  free   at 0x56286a4b4a50
>>> builder = ak.ArrayBuilder()
>>> builder.integer(123)
CPU  malloc at 0x56286a4b4a50 (8192 bytes)
>>> builder.integer(456)
>>> array = builder.snapshot()
>>> del builder
>>> array
<Array [123, 456] type='2 * int64'>
>>> del array
>>> gc.collect()
CPU  free   at 0x56286a4b4a50
7

The ArrayBuilder nodes weren't freeing their memory before because each node had a circular reference. They had to return a std::shared_ptr to themselves, so they kept a "that" pointer, which was my std::shared_ptr-wrapped version of "this". What I really needed was the C++11 feature shared_from_this(). Switching to shared_from_this(), the reference counts apparently drop to zero at the appropriate times: when an unsnapshotted builder gets deleted or when all references to the data get deleted (snapshot shares the underlying buffer as a std::shared_ptr).

I also have these snazzy new malloc/free messages that I can turn on when debugging.

Imminent fixes/features automation moved this from In Progress to Done Dec 4, 2020
@sbuse

sbuse commented Dec 7, 2020

Hi @jpivarski, it is amazing to see how fast you react to problems and start fixing things. Last Friday I asked myself how long it would take until this issue was fixed, and now on Monday morning there are already two new versions! So I'm sad to report that even with the new version (awkward 1.0.0) the memory still grows when I overwrite awkward arrays in a loop. Did you test the loops that you posted in Gitter with the new version?

@jpivarski
Member Author

I wonder if you're using the version you think you're using. I just ran

% python -i -c 'import awkward as ak; import numpy as np; import gc'
>>> for i in range(1000000):
...   a = ak.Array([i])
...   del a
...   tmp = gc.collect()
... 

and your original

% python -i -c 'import awkward as ak; import numpy as np; import gc'
>>> for i in range(1000000):
...   a = ak.Array([i])
...   del a
... 

(garbage collector outside of the loop, and therefore not relevant for scaling during the loop), and I don't see any significant increases in memory in two minutes. "Significant" for me means 100 MB, since this is the level of background noise on htop.

htop isn't a targeted measurement, so I turned on my debugging statements and ran the loop without the garbage collector:

% python -i -c 'import awkward as ak; import numpy as np; import gc'
>>> for i in range(10):
...   a = ak.Array([i])
...   del a
... 
CPU  malloc at 0x55615983ce70 (8192 bytes)
CPU  free   at 0x55615983ce70
CPU  malloc at 0x55615983ce70 (8192 bytes)
CPU  free   at 0x55615983ce70
CPU  malloc at 0x55615983ce70 (8192 bytes)
CPU  free   at 0x55615983ce70
CPU  malloc at 0x55615983ce70 (8192 bytes)
CPU  free   at 0x55615983ce70
CPU  malloc at 0x55615983ce50 (8192 bytes)
CPU  free   at 0x55615983ce50
CPU  malloc at 0x55615983ce50 (8192 bytes)
CPU  free   at 0x55615983ce50
CPU  malloc at 0x55615983ce50 (8192 bytes)
CPU  free   at 0x55615983ce50
CPU  malloc at 0x55615983ce50 (8192 bytes)
CPU  free   at 0x55615983ce50
CPU  malloc at 0x55615983ce50 (8192 bytes)
CPU  free   at 0x55615983ce50
CPU  malloc at 0x55615983ce50 (8192 bytes)
CPU  free   at 0x55615983ce50

Each del is freeing memory. Doing it again without del:

% python -i -c 'import awkward as ak; import numpy as np; import gc'
>>> for i in range(10):
...   a = ak.Array([i])
... 
CPU  malloc at 0x5620d2865bc0 (8192 bytes)
CPU  malloc at 0x5620d2868440 (8192 bytes)
CPU  free   at 0x5620d2865bc0
CPU  malloc at 0x5620d2865bc0 (8192 bytes)
CPU  free   at 0x5620d2868440
CPU  malloc at 0x5620d2868440 (8192 bytes)
CPU  free   at 0x5620d2865bc0
CPU  malloc at 0x5620d2865bc0 (8192 bytes)
CPU  free   at 0x5620d2868440
CPU  malloc at 0x5620d2868440 (8192 bytes)
CPU  free   at 0x5620d2865bc0
CPU  malloc at 0x5620d2865bc0 (8192 bytes)
CPU  free   at 0x5620d2868440
CPU  malloc at 0x5620d2868440 (8192 bytes)
CPU  free   at 0x5620d2865bc0
CPU  malloc at 0x5620d2865bc0 (8192 bytes)
CPU  free   at 0x5620d2868440
CPU  malloc at 0x5620d2868440 (8192 bytes)
CPU  free   at 0x5620d2865bc0
>>> del a
CPU  free   at 0x5620d2868440

Just replacing a deletes the previous one. Before the explicit del a, there are 10 allocations and 9 deallocations.

@sbuse

sbuse commented Dec 7, 2020

If I run the following, I reach your limit of 100 MB in one and a half minutes. To be honest, I don't know whether tracemalloc also fills the memory with something, but the three largest entries in the printout are due to the awkward package.

%%time
import awkward as ak
import tracemalloc
import gc

print("Awkward version:"+str(ak.__version__))

tracemalloc.start()

for i in range(700000):
    b = ak.Array([i])
    del b
    
gc.collect()
    
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:3]:
    #print the 3 largest memory allocations 
    print(stat)

current, peak = tracemalloc.get_traced_memory()
print("currently using {} MB; peak was {} MB".format(current/10**6,peak/10**6))
tracemalloc.stop()
Awkward version:1.0.0
/.local/lib/python3.6/site-packages/awkward/highlevel.py:268: size=35.4 MiB, count=700000, average=53 B
/.local/lib/python3.6/site-packages/awkward/_util.py:151: size=35.4 MiB, count=700000, average=53 B
/.local/lib/python3.6/site-packages/awkward/_util.py:146: size=35.4 MiB, count=700000, average=53 B
currently using 148.420465 MB; peak was 148.420945 MB
CPU times: user 1min 24s, sys: 288 ms, total: 1min 24s
Wall time: 1min 24s

@jpivarski
Member Author

100 MB wasn't my limit—it's the level of noise: things jump up and down by 100 MB. Focusing in on the Python process could help, but still, memory accounting is tricky because OS-level memory can be shared among processes. That's why I was looking for a much stronger signal than 100 MB, to be sure it was a reproducible thing.

On the opposite end of the granularity scale, I do see that your example of repeatedly creating ak.Array([i]) with the same Python name does not leak memory from Awkward's C++ layer. 100% of the array allocations go through the function that produced the above print-outs—I banished free new operators in PR #570 (and found one additional memory leak in the process, in ak.combinations for ListArrays). Whatever tracemalloc is seeing is not Awkward-C++ related, though it could be Python-related and happening in the awkward library. Remember that you're not garbage-collecting in the loop.
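As a debugging variant of your script (not something I'd put in production code), collecting inside the loop before taking the snapshot would show how much of that tracemalloc total actually survives garbage collection; this is just a sketch of the idea:

import awkward as ak
import tracemalloc
import gc

tracemalloc.start()

for i in range(700000):
    b = ak.Array([i])
    del b
    if i % 10000 == 0:
        gc.collect()    # collect inside the loop, purely for diagnosis

gc.collect()
current, peak = tracemalloc.get_traced_memory()
print("currently using {} MB; peak was {} MB".format(current/10**6, peak/10**6))
tracemalloc.stop()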

@sbuse

sbuse commented Dec 7, 2020

Doing the garbage collection in the loop unfortunately increases the runtime so much that it becomes absolutely impossible to run the code. The event loop in my project is way too large, and for each event I have to compute quite a few variables.

@jpivarski
Member Author

I wasn't recommending running gc.collect() in a loop—or at all, in production code. I meant it as a debugging technique.

@sbuse

sbuse commented Dec 10, 2020

Hi @jpivarski, I'm wondering whether you still consider this memory leak an open issue or not? I'm sorry that I keep annoying you about it, but I have to know whether I should start looking for a possible workaround.

@jpivarski
Member Author

I don't see a memory leak anymore—at least, I can't reproduce it.

@sbuse

sbuse commented Dec 10, 2020

I have to agree that the memory growth is no longer as easy to observe as it was before version 1.0.0, but I think it is still there. Try running something like this inside a loop:

for i in range(500000):
    a = ak.Array([[i],[i],[],[i],[i],[i],[]])
    b = ak.Array([[i],[i],[],[i],[i],[i],[]])
    c = ak.Array([[i],[i],[],[i],[i],[i],[]])
    d = ak.Array([[i],[i],[],[i],[i],[i],[]])

    del a, b, c, d

I just put a couple of variables in the loop with a structure that is not completely trivial. Running this code requires 425 MB and takes 0:02:26. As the structures get more complicated, by adding a loop inside the loop and computing more variables, this reaches the memory limit rather quickly.

@jpivarski
Member Author

I believe you're seeing some memory issues, but if you're using the latest release, they're not in ak.Arrays. (I deployed 1.0.1rc1 last night; it has no differences with respect to the main branch right now.) The entire codebase has no new operators anymore (with the exception of some objects for JSON-handling, which you're not doing). All class instances are allocated with std::make_shared and all arrays are allocated with kernel::malloc:

https://github.com/scikit-hep/awkward-1.0/blob/030eb02d2bd7b4d9077a8bcd8ab0986cb0972121/include/awkward/kernel-dispatch.h#L159-L186

which creates non-GPU arrays as std::shared_ptrs with an array_deleter:

https://github.com/scikit-hep/awkward-1.0/blob/030eb02d2bd7b4d9077a8bcd8ab0986cb0972121/include/awkward/kernel-dispatch.h#L59-L81

Both of these awkward_malloc and awkward_free functions (for non-GPUs) resolve to:

https://github.com/scikit-hep/awkward-1.0/blob/030eb02d2bd7b4d9077a8bcd8ab0986cb0972121/src/cpu-kernels/allocators.cpp#L7-L23

Since this has 100% coverage, we can track every array allocation and deletion. (For class instances, we have to trust C++'s shared memory handling.)

In your new example, turning on the print-outs for two passes of the loop prints:

CPU  malloc at 0x55fa5839ca80 (8192 bytes)
CPU  malloc at 0x55fa58403f00 (8192 bytes)
CPU  malloc at 0x55fa58399250 (8192 bytes)
CPU  malloc at 0x55fa57cfd080 (8192 bytes)
CPU  malloc at 0x55fa58407da0 (8192 bytes)
CPU  malloc at 0x55fa58417780 (8192 bytes)
CPU  malloc at 0x55fa58460400 (8192 bytes)
CPU  malloc at 0x55fa583713c0 (8192 bytes)
CPU  free   at 0x55fa58403f00
CPU  free   at 0x55fa5839ca80
CPU  free   at 0x55fa57cfd080
CPU  free   at 0x55fa58399250
CPU  free   at 0x55fa58417780
CPU  free   at 0x55fa58407da0
CPU  free   at 0x55fa583713c0
CPU  free   at 0x55fa58460400
CPU  malloc at 0x55fa58403f00 (8192 bytes)
CPU  malloc at 0x55fa5839ca80 (8192 bytes)
CPU  malloc at 0x55fa58399250 (8192 bytes)
CPU  malloc at 0x55fa57cfd080 (8192 bytes)
CPU  malloc at 0x55fa58407da0 (8192 bytes)
CPU  malloc at 0x55fa58417780 (8192 bytes)
CPU  malloc at 0x55fa58460400 (8192 bytes)
CPU  malloc at 0x55fa583713c0 (8192 bytes)
CPU  free   at 0x55fa5839ca80
CPU  free   at 0x55fa58403f00
CPU  free   at 0x55fa57cfd080
CPU  free   at 0x55fa58399250
CPU  free   at 0x55fa58417780
CPU  free   at 0x55fa58407da0
CPU  free   at 0x55fa583713c0
CPU  free   at 0x55fa58460400

Every malloc is paired with a free for the same buffer. (Each array has two buffers for the ListOffsetArray offsets and the NumpyArray. Since your examples don't use up the 8 kB initial allocation, it's the depth that allocates more, not the length of your examples.)
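(If you want to see those two buffers for yourself, printing the layout of one of your arrays should show the node structure; this is just an illustrative one-liner:)

>>> import awkward as ak
>>> print(ak.Array([[0], [0], [], [0]]).layout)   # a ListOffsetArray (offsets buffer) wrapping a NumpyArray (data buffer)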

In your original report, you identified a real memory leak, so I made the codebase airtight for all array allocations. (The four JSON-handling nodes should be investigated more deeply at a later date, but they're not relevant here.) Before the fix, I observed linearly increasing memory use, though I had to watch it more than a minute for the trend to be clear the way I was measuring it. After the fix, not only do we have this demonstration that all the allocations are paired with frees, but also I don't observe the linearly increasing memory, even after two minutes. That original error was fixed.

You're still seeing memory leaks, but we know (from the above) that it's not any buffer in Awkward Array and I trust C++'s shared_ptr handling for class instances. Deleting Python objects sometimes frees them and sometimes you have to wait for the next garbage collection for them to be freed. In this particular case, they do get freed right away (probably because there are no cyclic references).

Maybe you have a situation in which objects are waiting for the garbage collector, but you have a memory limit on the process that the garbage collector doesn't know about, so the garbage collector doesn't kick in before the process already dies? Anyway, I don't have any evidence of a memory leak in Awkward Array (any more), so I have nothing to follow up on.

@sbuse

sbuse commented Dec 11, 2020

I see that the awkward arrays are working flawlessly at the C++ level, but at the Python level there seems to be something odd happening. Overwriting an awkward array 2 million times (with version 1.0.1rc1) takes up 424 MB, and doing the same with a numpy array or a list does not even use 1 MB...

I'm afraid that in the end I'll have to rewrite my whole analysis and abandon the awkward arrays, just because the automatic garbage collection does not work and I haven't found a way to free memory. What would you do in my position? Move on and rewrite everything?

@jpivarski
Member Author

If I were in your situation, I would clone the awkward-1.0 GitHub repo, turn on the memory print-outs, and pip install . to try to see them in my own environment. The simplest explanation for the discrepancy is that you're using a different version of the software than you think you are; that's happened to me countless times. It's a nuisance to have to get a dependency from GitHub and compile from source (be sure to have make, CMake, and a C++ compiler installed), rather than just grabbing a wheel from PyPI, but it's probably a less involved step than rearranging an entire analysis.

Also, thinking about this in a wider context, why are you creating millions of Awkward Arrays? Even without explicit leaks, that's not an efficient way to work. There's a significant time overhead in creating an array, so we want to use a small number of large arrays, using array-at-a-time operations ("vectorization").
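As a schematic sketch (with made-up data, not your analysis), the idea is to put all of the events into one array and express the work as whole-array operations:

>>> import awkward as ak
>>> events = ak.Array([[1, 2, 3, 4], [5], [6, 7, 8], [9]] * 100000)   # one array holding all events
>>> doubled = events * 2       # one vectorized operation touches every element
>>> counts = ak.num(events)    # lengths of all the nested lists in a single call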

If your real code looks like this benchmark, then it should probably be refactored anyway if running time is a concern. If what you're doing can't be reexpressed as a vectorized operation and must be in for loops, then you should consider Numba. You can't create Awkward Arrays inside of a Numba-compiled function, but you can create the equivalent of Python lists for temporary work. (That's what it looks like this function is doing.)

@sbuse

sbuse commented Dec 14, 2020

Hi @jpivarski, thanks for your advice. I thought about the structure and whether it is possible to use operations on one axis at a time, but the issue is that the structure I face is really not trivial. I have a nested structure with vectors of vectors, together with an "instruction" saying which elements belong together. I added a little example to illustrate what I'm talking about.

Event 1:

a = [[1,2,3,4],[5],[6,7,8],[9]]

Pulse 1:
instruction for first level: [0,1]
instruction for second level: [0,0]

a[[0,1],[0,0]] = [1,5] and now I do some operations on this array and save the result.

Pulse 2:
instruction for first level: [0,2]
instruction for second level: [2,1]

a[[0,2],[2,1]] = [3,7] and now I do the same operations on this array and save the result. 

So this is why I'm creating so many arrays. I have around half a million events, and per event there are around 10 pulses. I realized that accessing the elements in this fashion is really the limiting memory factor for me. Running something like the following already uses more than 200 MB and keeps growing with more accesses.

a = ak.Array([[1,2,3,4],[5],[6,7,8],[9]])

for i in range(1000000): 
    b = a[[0,1],[0,0]]

Do you also observe this behavior, and is it actually what you would expect? If you observe the same thing, I think I should switch to ROOT and do it there.

@jpivarski
Member Author

If the garbage collector stops the world and cleans up before you physically run out of memory, as it's supposed to (but maybe doesn't if some strange ulimit constraints are applied on your system?), then yes, a lot of memory will be used, but it won't error out due to running out of memory. We've established that the Python reference count goes to zero on these objects, and when I run it on my system, it does not run out of memory.
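(If you want to check whether such a limit is in effect on your batch system, a quick diagnostic like this reports the address-space limit the process sees:)

>>> import resource
>>> resource.getrlimit(resource.RLIMIT_AS)   # (soft, hard) limits in bytes; resource.RLIM_INFINITY means no limit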

The reason I was arguing for array-at-a-time functions or Numba is because of time, not memory usage. Creating a million Awkward Arrays is much slower than creating a million Python lists:

>>> import time
>>> import awkward as ak
>>> a = ak.Array([[1,2,3,4],[5],[6,7,8],[9]])   # "a" is an Awkward Array
>>> def f(n):
...     for i in range(n):
...         b = a[[0, 1], [0, 0]]    # addressed with an advanced slice
... 
>>> starttime = time.time(); f(100000); time.time() - starttime
9.222622871398926

>>> a = [[1,2,3,4],[5],[6,7,8],[9]]    # "a" is a Python list of lists
>>> def g(n):
...     for i in range(n):
...         b = [a[0][0], a[1][0]]    # addressed with explicit indexes
... 
>>> starttime = time.time(); g(100000); time.time() - starttime
0.04160714149475098

That was just 100,000, but already we see that the plain Python lists are 220× faster. Creating millions of Awkward Arrays shouldn't be a memory leak, but it's not often tested because it's such an antipattern for time usage.

I don't have a big picture of what you're trying to do—it might be possible with array-at-a-time functions, but it seems like it's easier to think about this problem in an element-at-a-time way, so I'll show you how to get what you want, at scale, without having to rethink the problem to cast it into an array-at-a-time form.

You can use Numba. Let's say you want a 2-D NumPy array of bs:

>>> import numba as nb
>>> import numpy as np
>>> a = ak.Array([[1,2,3,4],[5],[6,7,8],[9]])
>>> @nb.njit
... def h(n):
...     out = np.empty((n, 2), np.int64)
...     for i in range(n):
...         out[i, 0] = a[0][0]   # in Numba, Awkward Arrays have to be accessed as though they were lists
...         out[i, 1] = a[1][0]   # that is, no a[[0, 1], [0, 0]] syntax; do things one at a time
...     return out
... 
>>> h(1)    # the first time you call it compiles it; not representative of the eventual speed
array([[1, 5]])
>>> starttime = time.time(); h(100000); time.time() - starttime
array([[1, 5],
       [1, 5],
       [1, 5],
       ...,
       [1, 5],
       [1, 5],
       [1, 5]])
0.001569509506225586

And that gives you an additional factor of 26× over the Python lists, which is a total of 5800× faster than what you're doing with Awkward Arrays outside of Numba. The Numba implementation does not create intermediate Awkward Arrays: a[1] and a[1][0] are just accessing elements from Awkward Array's internal buffers, not creating in-memory objects representing those structures; all of the structure manipulation is compile-time logic that creates LLVM instructions to directly get the data at runtime.

So this method short-circuits any memory problems: it doesn't create any objects that would need to be deleted (whatever is going wrong in your application to not delete those objects), but it also addresses a many-thousands-of-times speedup that's probably more important than the memory issue itself.

(Sadly, that's probably how the original memory issue went unnoticed, but based on the arguments I gave previously, I don't think Awkward Array has a memory issue in creating ak.Array anymore, though I don't know what's happening on your system to make you run out of memory anyway.)

@sbuse

sbuse commented Dec 15, 2020

Man you are absolutely right! I played a bit with the lists and the numba functions and it instantly gave me a pretty good speed improvement. I decided to transform all the awkward arrays that I get from uproot into lists and only work with lists. Unfortunately numba does not allow for nested lists but even with python functions the analysis is way faster.

Unfortunately I still had the issue with the continuously growing memory, so I tried running only the uproot command, and I figured out that the memory leak I was seeing was coming from uproot.iterate and not from the awkward arrays. Luckily you actually solved that in a newer version (0.1.2). Thanks for all the help!

@jpivarski
Member Author

Yes, there was a memory leak in uproot.iterate. I remember that one! (In Python, "leak" is harder to define because if it's in memory, it's accessible by some reference, but we definitely didn't intend to keep data from previous iterations in subsequent iterations.) I'm glad you managed to track this down.

If you're writing for loop-style code, lists will be faster than Awkward Arrays. And then Numba will be faster than the pure Python, but Numba doesn't deal with nested structures in lists very well, and the transformation of Python objects into Numba objects can itself be expensive. The optimal pairing is Awkward Arrays with Numba: any Awkward data structure can be viewed in Numba without conversion (i.e. it's an O(1) view, rather than an O(n) conversion). The limitation is that if you're using Numba, Awkward Arrays have to be addressed one element at a time: the array-at-a-time style outside of Numba is mutually exclusive with the element-at-a-time style inside of Numba. This was deliberately added to provide flexibility.

But if the speed of Python lists is good enough (keep in mind how your analysis will eventually scale), then stick with it! It's probably simplest. You might also be able to save an unnecessary conversion step if you set library="np" (or the global uproot.default_library) to get NumPy arrays everywhere, including NumPy arrays of NumPy arrays, which behave like lists (and not like 2D NumPy arrays). The NumPy developers have put more effort into streamlining NumPy objects, and they might have speed comparable to Python lists. Uproot's library="np" skips Awkward Arrays entirely.
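As a sketch with hypothetical file and branch names, iterating with library="np" looks something like this:

>>> import uproot
>>> for batch in uproot.iterate("events.root:tree", ["px", "py"], library="np"):
...     px = batch["px"]           # NumPy array; for jagged branches, an object array of per-event entries
...     for event_px in px:
...         pass                   # element-at-a-time work here
... 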

@sbuse

sbuse commented Dec 15, 2020

Shit, I celebrated too early... the issue is still there when I combine uproot and awkward. I tried to bypass awkward, but unfortunately I have to deal with vectors of vectors within the ROOT file, so when I use library="np" I get back STLVectors. Is there a way to transform them into python lists?

@jpivarski
Member Author

I think STLVector objects have a tolist method. I know that __getitem__ works on them. Here's the documentation:

https://uproot.readthedocs.io/en/latest/uproot.containers.STLVector.html

If your dataset has STLVectors, then the Awkward Arrays were being produced by converting the STLVectors into Awkward Arrays, so skipping that conversion will be a speedup.
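Assuming tolist works the way I described, a sketch with hypothetical file and branch names would go straight from STLVectors to nested Python lists like this:

>>> import uproot
>>> tree = uproot.open("events.root")["tree"]
>>> pulses = tree["pulses"].array(library="np")   # object array of STLVector objects
>>> as_lists = [v.tolist() for v in pulses]       # nested Python lists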

@sbuse

sbuse commented Dec 17, 2020

Thanks a lot for your help! I finally managed to get the memory issues under control and run the jobs on the university's cluster without crashing the machines. The solution for me was to completely avoid the awkward arrays, go from the uproot STLVectors directly to nested lists, and work with them in Python. I still don't know what exactly the problem was, but if it is a real problem it will appear again, and hopefully it can then be reproduced more consistently. My takeaway is that the real power of the awkward arrays is in vectorized operations, and in loops they should be avoided.
