Allocators, take III #1398
Conversation
pnkfelix added some commits Dec 1, 2015
cc @Gankro
I am quite sure that arithmetic overflow during computation of the input size is an OOM basically by definition.
Not really. If you are running with 4GiB of RAM with overcommit/swap disabled and try to malloc all of it, your malloc is going to fail and will not succeed until the system's configuration changes. Of course, allocators SHOULD NOT leak memory on dealloc.
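As a side note on the overflow point above: one way to make "overflow during size computation" surface the same way as allocation failure is to do the arithmetic with checked operations. A minimal sketch (illustrative, not the RFC's API):

```rust
// Sketch (not the RFC's API): compute an array's byte size with checked
// arithmetic, so overflow surfaces as a failure the caller can treat
// like OOM, instead of wrapping around to a tiny request.
fn array_size(elem_size: usize, len: usize) -> Option<usize> {
    elem_size.checked_mul(len)
}

fn main() {
    // 16-byte elements with a huge length: the multiplication overflows.
    assert_eq!(array_size(16, usize::MAX), None);
    // A reasonable request computes normally.
    assert_eq!(array_size(16, 4), Some(64));
}
```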
@arielb1 wrote:

True. I spent a little while trying to find weasel wording here that would cover zero-sized allocations (which are also an Error in this API). I don't remember offhand how each part of the text addressed it, but the phrasing here is not great.
Hmm, okay yes I see: the returned blocks alias the embedded array, but LLVM is allowed to assume that only the …
... But this does not seem quite right to me ... this would allow multiple clients to reference the pool, but the point of using … Hmm. I am not sure how to resolve this for the example.
rphmeier commented Dec 7, 2015

Really glad that this topic is getting some love. Ironically, I had just started rehashing my allocators crate for the first time in a month, including adding a re-implementation of a few key data structures. I am slightly doubtful of the necessity for an associated … In the case you describe with …

Consider this extremely contrived example.

```rust
fn use_alloc<A>(alloc: A) where A: Allocator {
    let my_block = alloc::Kind { size: 1024, align: 8 };
    let my_addr;
    // try the allocation until it works or hits a non-transient error
    loop {
        match alloc.alloc(&my_block) {
            Ok(addr) => { my_addr = addr; break; }
            Err(e) => {
                if !e.is_transient() {
                    // panic or something
                }
            }
        }
    }
    // use my_addr here
}
```

I know we're trying to move above and beyond the old-school mechanisms of …
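To make the contrived example above concrete, here is a self-contained sketch with a mock allocator and a hypothetical `is_transient` flag on the error type (all names here are illustrative, not part of the RFC):

```rust
// Hypothetical error type carrying a "transient" flag, plus a mock
// allocator that fails twice before succeeding, to exercise the loop.
#[derive(Debug)]
struct AllocErr { transient: bool }

impl AllocErr {
    fn is_transient(&self) -> bool { self.transient }
}

struct FlakyAllocator { failures_left: u32 }

impl FlakyAllocator {
    fn alloc(&mut self, _size: usize) -> Result<usize, AllocErr> {
        if self.failures_left > 0 {
            self.failures_left -= 1;
            Err(AllocErr { transient: true })
        } else {
            Ok(0xdead_beef) // stand-in for a real address
        }
    }
}

// Retry until the allocation succeeds or fails non-transiently.
fn alloc_with_retry(alloc: &mut FlakyAllocator, size: usize) -> Result<usize, AllocErr> {
    loop {
        match alloc.alloc(size) {
            Ok(addr) => return Ok(addr),
            Err(e) if e.is_transient() => continue,
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    let mut a = FlakyAllocator { failures_left: 2 };
    assert_eq!(alloc_with_retry(&mut a, 1024).unwrap(), 0xdead_beef);
}
```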
jnicholls commented Dec 7, 2015

Huge fan of this concept. I'm currently struggling with the fact that I can't use a specific allocator for any of the libstd data structures, which would make my life a lot easier working with shared memory pages...
Bikeshedding, but is the name "Kind" going to get confusing if we ever get higher-kinded anything?
The points you mention are important, but the raison d'être of the …

This RFC explores this tangentially for GC, but I would like to also see some examples for computing devices (like GPGPUs or Xeon Phis), for example:

[0] From Alexander Stepanov and Meng Lee, The Standard Template Library, HP Technical Report HPL-95-11(R.1), 1995 (emphasis is mine):

[1] The example in …
This function already exists in the form …
TyOverby commented Dec 7, 2015

What happens when you drop an allocator that still has memory that is being used?
@TyOverby You will use-after-free.
I thought that was prevented by implementing …
You can either have the user own the allocator (so …
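One way the "user owns the allocator" option can be enforced is with lifetimes: if each allocated block borrows the allocator, the borrow checker rejects dropping or replacing the allocator while blocks are live. A minimal sketch with illustrative `Pool`/`Block` types (not the RFC's API):

```rust
// Sketch: a handle that borrows its allocator, so the borrow checker
// rejects dropping (or replacing) the pool while blocks are live.
// `Pool` and `Block` are illustrative, not the RFC's types.
struct Pool { storage: Vec<u8> }

struct Block<'a> { pool: &'a Pool, offset: usize }

impl Pool {
    fn alloc(&self, offset: usize) -> Block<'_> {
        Block { pool: self, offset }
    }
}

fn main() {
    let pool = Pool { storage: vec![0; 1024] };
    let block = pool.alloc(0);
    // drop(pool); // ERROR: cannot move `pool` while `block` borrows it
    assert_eq!(block.offset, 0);
    assert_eq!(block.pool.storage.len(), 1024);
}
```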
Hmm, I will admit that I had not considered this drawback. I'll have to think on it.
The associated error type seems somewhat similar to when we were considering the same thing for …
Note: discussion on IRC found that …:

```rust
let alloc = RefCell::new(Pool::new());
let vec = Vec::with_cap_and_alloc(10, &alloc);
*alloc.get_mut() = Pool::new();
// vec is now using-after-free
```

Several solutions can be taken to this. Off the top of my head, the easiest would be a newtype wrapper over RefCell that doesn't expose …
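A sketch of that newtype idea: wrap the `RefCell` so interior mutability remains available for allocation bookkeeping, but `get_mut`/`replace` (which allow swapping the whole pool out from under live handles) are not re-exported. `Pool` here is illustrative:

```rust
use std::cell::RefCell;

// Sketch of the proposed fix: a newtype around RefCell that only exposes
// a controlled borrow, so the pool behind it can never be replaced
// wholesale while handles to it exist. `Pool` is illustrative.
struct Pool { capacity: usize }

struct SharedPool(RefCell<Pool>);

impl SharedPool {
    fn new(pool: Pool) -> Self { SharedPool(RefCell::new(pool)) }

    // Interior mutability is still available for bookkeeping...
    fn with_pool<R>(&self, f: impl FnOnce(&mut Pool) -> R) -> R {
        f(&mut *self.0.borrow_mut())
    }
    // ...but no `get_mut`/`replace` is re-exported, so the
    // `*alloc.get_mut() = Pool::new()` footgun above has no equivalent.
}

fn main() {
    let shared = SharedPool::new(Pool { capacity: 1024 });
    let cap = shared.with_pool(|p| { p.capacity -= 64; p.capacity });
    assert_eq!(cap, 960);
}
```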
Ability to have type-erased allocators (i.e. …)

*I'm not talking from my own experience.
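For what it's worth, type erasure falls out naturally once the allocator trait is object-safe: callers can hold a `&mut dyn ...` trait object instead of a type parameter. A sketch with illustrative names (not the RFC's trait):

```rust
// Sketch: an object-safe allocator trait can be type-erased behind a
// trait object, so code can take `&mut dyn RawAlloc` instead of being
// generic over an allocator type. All names here are illustrative.
trait RawAlloc {
    fn alloc(&mut self, size: usize, align: usize) -> Option<usize>;
    fn dealloc(&mut self, addr: usize, size: usize, align: usize);
}

struct BumpAlloc { next: usize, end: usize }

impl RawAlloc for BumpAlloc {
    fn alloc(&mut self, size: usize, align: usize) -> Option<usize> {
        let addr = (self.next + align - 1) / align * align; // round up
        if addr + size <= self.end {
            self.next = addr + size;
            Some(addr)
        } else {
            None
        }
    }
    fn dealloc(&mut self, _addr: usize, _size: usize, _align: usize) {
        // a bump allocator frees nothing individually
    }
}

// This function never mentions the concrete allocator type.
fn grab(a: &mut dyn RawAlloc, size: usize) -> Option<usize> {
    a.alloc(size, 8)
}

fn main() {
    let mut bump = BumpAlloc { next: 0x1000, end: 0x2000 };
    assert_eq!(grab(&mut bump, 16), Some(0x1000));
    assert_eq!(grab(&mut bump, 16), Some(0x1010));
}
```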
I had originally thought that there would not be much demand for …

But @petrochenkov's recent comment clearly indicates that there may well be demand for …

I'm still not entirely convinced... I would be a little disappointed if the only type of allocator error available was the zero-sized …
nrc added T-lang, T-libs labels Dec 8, 2015
nrc assigned pnkfelix Dec 8, 2015
It would be nice to know what exactly happened/is going to happen with …

It seemed that after 20 years of having the allocator in the container …

When you don't want to pay for virtual dispatch, you can always specify a …
This was referenced Jan 12, 2017
rkruppe referenced this pull request Apr 17, 2017
Merged: Prepare global allocators for stabilization #1974
What's the story here on internal locking? I'm working on an allocator implementation, and I've run across a problem: there's no way for me to express to types that use the …
That's a cool way of doing it for a particular concrete type, but is there any way that code that was generic on any … ?

Backpressure

Another unrelated idea: it'd be good if there were some way to have backpressure between allocators. For example, if I'm implementing an allocator that provides extra functionality on top of another existing allocator, and my allocator performs caching, it would be useful if the allocator I'm wrapping could inform me when memory was getting tight so I'd know to free some of the caches I was using. One option off the top of my head would be to allow registering "low memory" callbacks that an allocator can invoke to poke downstream allocators into freeing any memory they can. A good example of this is in Section 3.4 of this paper.
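That callback idea could be sketched roughly like this (purely illustrative; `Backing`, `on_low_memory`, and `signal_pressure` are invented names, not a proposed API):

```rust
// Sketch of the low-memory-callback idea: a wrapping allocator registers
// a callback with its backing allocator; when memory gets tight, the
// backing allocator invokes the callbacks so caches can be dropped.
struct Backing {
    pressure_callbacks: Vec<Box<dyn FnMut()>>,
}

impl Backing {
    fn new() -> Self { Backing { pressure_callbacks: Vec::new() } }

    fn on_low_memory(&mut self, cb: Box<dyn FnMut()>) {
        self.pressure_callbacks.push(cb);
    }

    // Called when the backing allocator notices memory is tight.
    fn signal_pressure(&mut self) {
        for cb in &mut self.pressure_callbacks {
            cb();
        }
    }
}

fn main() {
    use std::cell::Cell;
    use std::rc::Rc;

    // A wrapping allocator's cache, shared with its callback.
    let cached_bytes = Rc::new(Cell::new(4096));
    let cache = Rc::clone(&cached_bytes);

    let mut backing = Backing::new();
    backing.on_low_memory(Box::new(move || cache.set(0))); // drop the cache
    backing.signal_pressure();

    assert_eq!(cached_bytes.get(), 0);
}
```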
@joshlf …
But there's no way to specialize, right? No way to make it so that …
@joshlf I'm not too familiar with the trait specialization stuff; it might be possible. My hunch is that taking advantage of it would entail a vastly different algorithm, but try it out!
@Ericson2314 Unfortunately I think it's going to be impossible soon thanks to issue 36889. Here's a short example: https://is.gd/xgT6cG
Make a trait that just you implement?
I don't follow - how does that solve this?
rust-lang/#36889 only applies to inherent impls, not trait impls.
Hmm, interesting. Seeing as the inherent impl variant is going away, maybe the trait impl variant will go away soon too? Or is there a good reason to keep the trait impl variant around that doesn't apply to inherent impls?
Maybe I'm missing something, but it looks like that doesn't work either: https://is.gd/YdiPhl
This was referenced Jun 19, 2017
From the appendix:

@pnkfelix, is that last equation right? An allocator can return less memory than requested? Or should the equation be …
@SimonSapin no, an allocator cannot return less memory than requested. The significance of …
@pnkfelix I see, thanks. I think it would be worth expanding the doc-comment of …
pnkfelix commented Dec 6, 2015
Update: RFC has been accepted:
text on master: https://github.com/rust-lang/rfcs/blob/master/text/1398-kinds-of-allocators.md
tracking issue: rust-lang/rust#32838
Tasks before FCP
- `Kind` (`Copy`, rename `Kind`, `fn dealloc` return type, ...)
- `&mut self` (vs `self` or `&self`)
- `fn oom` API design and the associated protocol
- `Error` type
- `fn extend_in_place` / `fn realloc_in_place` method (returning a `Result<(), SeparateUnitError>`)
- `fn oom` API design given that associated `Error` is now gone
- `NonZero` / `Layout`: support zero-sized inputs (delaying all checks, if any, to the allocator itself)
- `HashMap`, `Vec` (and associated iterators)
- `BinaryHeap`
- `BTreeSet`
- `BTreeMap`
- `Vec` (signatures)
- `LinkedList`
- `VecDeque`
- `HashSet`
- `HashMap`
- `hash::RawTable` (signatures, implementation)
- `alloc::RawVec` (signatures, implementation)
- `String` (or not...)
- `Box`
- `Rc`
- `Arc`

Summary
Add a standard allocator interface and support for user-defined allocators, with the following goals:
Regarding GC: We plan to allow future allocators to integrate themselves with a standardized reflective GC interface, but leave specification of such integration for a later RFC. (The design describes a way to add such a feature in the future while ensuring that clients do not accidentally opt-in and risk unsound behavior.)
rendered