RF: use new cached append for multi-seq read
Use the new cached append to replace the coroutines for the read across
multiple array sequences in ``create_arraysequences_from_generator``.
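For context, the coroutine-based extension being removed worked roughly like the following minimal sketch. This is a toy illustration, not nibabel's actual code: `extend_using_coroutine` and `store` are hypothetical stand-ins for `ArraySequence._extend_using_coroutine` and its internal buffers.

```python
def extend_using_coroutine(store):
    # Consumer coroutine: receives items via send(), buffers them,
    # and flushes the buffer when close() raises GeneratorExit.
    cache = []
    try:
        while True:
            cache.append((yield))
    except GeneratorExit:
        store.extend(cache)

store = []
coro = extend_using_coroutine(store)
coro.send(None)      # prime the coroutine to the first yield
coro.send([1, 2])
coro.send([3])
coro.close()         # triggers the flush into store
print(store)         # → [[1, 2], [3]]
```

The caller must remember to prime each coroutine with `send(None)` and to `close()` it at the end, which is the bookkeeping this commit removes.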

Before this change (using coroutines):

```
Old: Loaded 5,000 streamlines in   2.85

New: Loaded 5,000 streamlines in    3.6
Speedup of 0.79
Old: Loaded 5,000 streamlines with scalars in   5.16
New: Loaded 5,000 streamlines with scalars in   7.13
Speedup of 0.723703
```

After this change (using cached append):

```
Old: Loaded 5,000 streamlines in   3.21

New: Loaded 5,000 streamlines in    3.9
Speedup of 0.82
Old: Loaded 5,000 streamlines with scalars in   5.21
New: Loaded 5,000 streamlines with scalars in   7.16
Speedup of 0.727654
```

The difference between the coroutine and cached-append timings seems to be well within run-to-run measurement error.
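The cached-append pattern adopted here can be sketched with a toy class. This is an illustration only: `MiniSeq` is hypothetical, and the real `ArraySequence.append`/`finalize_append` differ in detail. Appends with `cache_build=True` buffer blocks in a Python list, and `finalize_append` performs a single concatenation instead of one per append.

```python
import numpy as np

class MiniSeq:
    """Toy stand-in for an ArraySequence-like container."""

    def __init__(self):
        self._data = np.empty((0,), dtype=np.float64)
        self._cache = None

    def append(self, element, cache_build=False):
        element = np.asarray(element, dtype=np.float64)
        if cache_build:
            # Buffer the block; concatenation is deferred to finalize_append.
            if self._cache is None:
                self._cache = []
            self._cache.append(element)
        else:
            # Immediate append: one concatenation per call (slow in a loop).
            self._data = np.concatenate([self._data, element])

    def finalize_append(self):
        # Flush the cache with a single concatenation.
        if self._cache:
            self._data = np.concatenate([self._data] + self._cache)
        self._cache = None

# Usage mirrors the rewritten loop in create_arraysequences_from_generator:
seqs = [MiniSeq() for _ in range(2)]
gen = ([np.ones(3), np.zeros(2)], [np.ones(1), np.zeros(4)])
for data in gen:
    for i, seq in enumerate(seqs):
        if data[i].nbytes > 0:
            seq.append(data[i], cache_build=True)
for seq in seqs:
    seq.finalize_append()
print([len(s._data) for s in seqs])  # → [4, 6]
```

Compared with the coroutine version, there is no priming or closing step, only a final `finalize_append()` per sequence.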
matthew-brett committed Jun 4, 2016
1 parent e3b4db5 commit 6ff7e64
Showing 1 changed file with 4 additions and 10 deletions: nibabel/streamlines/array_sequence.py
```diff
@@ -444,17 +444,11 @@ def create_arraysequences_from_generator(gen, n):
         Number of :class:`ArraySequences` object to create.
     """
     seqs = [ArraySequence() for _ in range(n)]
-    coroutines = [seq._extend_using_coroutine() for seq in seqs]
-
-    for coroutine in coroutines:
-        coroutine.send(None)
-
     for data in gen:
-        for i, coroutine in enumerate(coroutines):
+        for i, seq in enumerate(seqs):
             if data[i].nbytes > 0:
-                coroutine.send(data[i])
-
-    for coroutine in coroutines:
-        coroutine.close()
+                seq.append(data[i], cache_build=True)

+    for seq in seqs:
+        seq.finalize_append()
     return seqs
```
