Public git conversion mirror of OpenBSD's official CVS src repository. Pull requests not accepted - send diffs to the tech@ mailing list.
openbsd/src (master)
Latest commit
The basic idea is simple: one of the reasons the recent sshd bug is potentially exploitable is that an (erroneously) freed malloc chunk gets re-used in a different role. malloc has power-of-two chunk sizes, so one page of chunks holds many different types of allocations. Userland malloc has no knowledge of types; we only know about sizes. So I changed that to use finer-grained chunk sizes.

This has some performance impact, as we need to allocate chunk pages in more cases. Gain it back by allocating chunk_info pages in a bundle, and by using fewer buckets when malloc option S is not set. The chunk sizes used are 16, 32, 48, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320, 384, 448, 512, 640, 768, 896, 1024, 1280, 1536, 1792 and 2048 (plus a few more for sparc64 with its 8k pages and loongson with its 16k pages). If malloc option S (or rather cache size 0) is used, we use chunk sizes that are strict multiples of 16, to get as many buckets as possible. ssh(d) enables malloc option S; in general, security-sensitive programs should.

See the find_bucket() and bin_of() functions. Thanks to Tony Finch for pointing me to code to compute nice bucket sizes. ok tb@
d7c8d7e
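To make the bucket scheme above concrete, here is a minimal sketch in C of how a request size could be rounded up to one of the listed chunk sizes: multiples of 16 up to 128 bytes, then four evenly spaced buckets per power-of-two interval up to 2048. This is an illustration only, not the actual bin_of()/find_bucket() code in OpenBSD's malloc; the helper name bucket_size() and the constants are assumptions.

```c
#include <stddef.h>

#define MINIMUM_CHUNK	16
#define MAX_CHUNK	2048	/* half of a 4k page; larger requests get whole pages */

/*
 * Hypothetical helper: round a request up to the next bucket boundary.
 * Up to 128 bytes the buckets are multiples of 16; above that, each
 * power-of-two interval (128..256, 256..512, ...) is split into four
 * evenly spaced buckets, giving 160/192/224/256, 320/384/448/512, etc.
 */
static size_t
bucket_size(size_t sz)
{
	size_t pow2, step;

	if (sz <= MINIMUM_CHUNK)
		return MINIMUM_CHUNK;
	if (sz <= 128)
		step = MINIMUM_CHUNK;
	else {
		pow2 = 256;
		while (pow2 < sz)	/* smallest power of two >= sz */
			pow2 <<= 1;
		step = pow2 / 8;	/* four buckets per octave */
	}
	return (sz + step - 1) / step * step;
}

/*
 * With malloc option S the sketch would instead always round up to a
 * multiple of 16, yielding the maximum number of distinct buckets:
 *	return (sz + 15) / 16 * 16;
 */
```

A quick check against the list in the commit message: bucket_size(100) gives 112, bucket_size(200) gives 224, and bucket_size(1100) gives 1280, so requests of different sizes land in many more distinct buckets than with the old power-of-two scheme.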