Conversation

@cmaloney (Contributor) commented Nov 19, 2025

When less than half the buffer is taken, copy just that small part out rather than doing a big alloc + memcpy + big shrink.

cc: @vstinner, @encukou

@vstinner (Member) left a comment
LGTM

@encukou (Member) commented Nov 19, 2025

Hmm, would it make sense to take this path when (self->ob_start != self->ob_bytes), too? In that case the code currently goes on to memmove the data.
OTOH, doing this would push the bytearray into using this path in future takes, until a reallocation happens...

cmaloney and others added 2 commits November 19, 2025 12:02
Co-authored-by: Victor Stinner <vstinner@python.org>
Co-authored-by: Petr Viktorin <encukou@gmail.com>
@cmaloney (Contributor, Author) commented Nov 20, 2025

> Hmm, would it make sense to take this path when (self->ob_start != self->ob_bytes), too?

I'd need to measure more / not sure. The realloc code does a malloc + memcpy if 25% of the space can be saved:

if (4 * nbytes > 3 * size) {

Doing a malloc + memcpy + free instead of the memmove + malloc + memcpy + free would be good, but I'm not sure how much to tune to the particular allocator's constants.

My theory for the usage pattern has been: read a block of bytes off a stream (network or terminal), append it to (or readinto) the bytearray, then take little chunks out of that as bytes. With that, the bytearray realloc code should recompact periodically, effectively meaning this turns into a reasonably efficient ring buffer... (for that use case a "please repack but don't reduce capacity" flag may be useful). Not sure how common that usage is going to be, though.
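That pattern can be sketched as a small C buffer that appends stream blocks at the tail, takes chunks off the head via a start offset, and compacts lazily only when it runs out of room. This is a hypothetical illustration of the pattern, not CPython's implementation; all names and thresholds here are made up.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stream buffer: append at the tail, take from the head. */
typedef struct {
    unsigned char *data;
    size_t start;   /* offset of first live byte (like ob_start) */
    size_t len;     /* live bytes after start */
    size_t cap;     /* total allocation size */
} StreamBuf;

static void sb_append(StreamBuf *sb, const unsigned char *p, size_t n)
{
    if (sb->start + sb->len + n > sb->cap) {
        /* Lazy compaction: slide live bytes to the front first. */
        memmove(sb->data, sb->data + sb->start, sb->len);
        sb->start = 0;
        if (sb->len + n > sb->cap) {
            sb->cap = (sb->len + n) * 2;
            sb->data = realloc(sb->data, sb->cap);
        }
    }
    memcpy(sb->data + sb->start + sb->len, p, n);
    sb->len += n;
}

/* Take up to n bytes off the head: a small copy plus an offset bump,
 * no reallocation of the remainder. Returns the number of bytes taken. */
static size_t sb_take(StreamBuf *sb, unsigned char *out, size_t n)
{
    if (n > sb->len) n = sb->len;
    memcpy(out, sb->data + sb->start, n);
    sb->start += n;
    sb->len -= n;
    return n;
}
```

Small takes cost only the copy of the chunk itself; the occasional memmove in `sb_append` plays the role of the periodic recompaction described above.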

@encukou (Member) commented Nov 20, 2025

That sounds like readlines().

Anyway, no need to block this PR on that. Thank you!

@encukou encukou merged commit e265ce8 into python:main Nov 20, 2025
44 checks passed
@bedevere-bot

⚠️⚠️⚠️ Buildbot failure ⚠️⚠️⚠️

Hi! The buildbot AMD64 Debian root 3.x (tier-1) has failed when building commit e265ce8.

What do you need to do:

  1. Don't panic.
  2. Check the buildbot page in the devguide if you don't know what the buildbots are or how they work.
  3. Go to the page of the buildbot that failed (https://buildbot.python.org/#/builders/345/builds/12729) and take a look at the build logs.
  4. Check if the failure is related to this commit (e265ce8) or if it is a false positive.
  5. If the failure is related to this commit, please, reflect that on the issue and make a new Pull Request with a fix.

You can take a look at the buildbot page here:

https://buildbot.python.org/#/builders/345/builds/12729

Failed tests:

  • test.test_os.test_os

Failed subtests:

  • test_timerfd_interval - test.test_os.test_os.TimerfdTests.test_timerfd_interval

Traceback logs:
Traceback (most recent call last):
  File "/root/buildarea/3.x.angelico-debian-amd64/build/Lib/test/test_os/test_os.py", line 4009, in test_timerfd_interval
    self.assertEqual(self.read_count_signaled(fd), 1)
    ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 2 != 1


Traceback (most recent call last):
  File "/root/buildarea/3.x.angelico-debian-amd64/build/Lib/test/test_os/test_os.py", line 4017, in test_timerfd_interval
    self.assertEqual(self.read_count_signaled(fd), count)
    ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 4 != 3

@cmaloney cmaloney deleted the ba_tb_keep_majority branch November 20, 2025 08:13
@cmaloney (Contributor, Author) commented
Test failure looks independent of this change: os_read_impl doesn't use bytearray, and this change doesn't touch the other part involved, int.from_bytes (in general, things don't use .take_bytes very much yet).
