Allocator traits and std::heap #32838
Comments
nikomatsakis added the B-RFC-approved, T-lang, T-libs, B-unstable labels on Apr 8, 2016
I unfortunately wasn't paying close enough attention to mention this in the RFC discussion, but I think that
Note that these can be added backwards-compatibly next to For consistency,
Additionally, I think that the default implementations of This makes it easier to produce a high-performance implementation of
eddyb added a commit to eddyb/rust that referenced this issue on Oct 18, 2016
eddyb added a commit to eddyb/rust that referenced this issue on Oct 19, 2016
eddyb added a commit to eddyb/rust that referenced this issue on Oct 19, 2016
Another issue: The doc for To me this implies that it must check that the alignment of the given address matches any constraint implied by However, I don't think the spec for the underlying
So, should the implementation of
@gereeter you make good points; I will add them to the check list I am accumulating in the issue description.
(at this point I am waiting for
I'm new to Rust, so forgive me if this has been discussed elsewhere. Is there any thought on how to support object-specific allocators? Some allocators, such as slab allocators and magazine allocators, are bound to a particular type and do the work of constructing new objects, caching constructed objects which have been "freed" (rather than actually dropping them), returning already-constructed cached objects, and dropping objects before freeing the underlying memory to an underlying allocator when required. Currently, this proposal doesn't include anything along those lines.

Where would an object allocator type or trait fit into this proposal? Would it be left for a future RFC? Something else?
I don't think this has been discussed yet. You could write your own such trait for now (a rough sketch follows below). Future work would be modifying collections to use your trait for their nodes, instead of plain ole' (generic) allocators directly.
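For concreteness, here is a minimal sketch of what such a user-written trait might look like. Everything in it is hypothetical — the name `ObjectAllocator<T>`, the method names, and the unit error type are made up for illustration and are not part of the RFC being tracked.

```rust
use std::ptr::NonNull;

/// Hypothetical sketch of an object-specific allocator trait: an allocator
/// bound to a single type `T` that hands out constructed objects and may
/// cache "freed" objects instead of dropping them and releasing their memory.
pub unsafe trait ObjectAllocator<T> {
    /// Allocate space for one `T` (possibly reusing a cached slot) and move
    /// `value` into it.
    fn alloc_obj(&mut self, value: T) -> Result<NonNull<T>, ()>;

    /// Return an object to the allocator, which may drop it and free the
    /// memory, or keep it constructed in a cache for later reuse.
    unsafe fn dealloc_obj(&mut self, obj: NonNull<T>);
}
```

A collection could then be made generic over `A: ObjectAllocator<Node>` for its node allocations, which is the "future work" mentioned above.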
I guess this has happened? |
@Ericson2314 Yeah, writing my own is definitely an option for experimental purposes, but I think there'd be much more benefit to it being standardized in terms of interoperability (for example, I plan on also implementing a slab allocator, and it would be nice if a third-party user of my code could use somebody else's slab allocator with my magazine caching layer). My question is simply whether an object allocator trait belongs in this proposal or in a future RFC.
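As a sketch of the interoperability being described here (all names hypothetical, building on the `ObjectAllocator<T>`-style trait sketched above), a magazine caching layer could wrap any implementation of such a trait, including somebody else's slab allocator:

```rust
use std::ptr::NonNull;

// Hypothetical trait from the earlier sketch, repeated so the example is
// self-contained.
pub unsafe trait ObjectAllocator<T> {
    fn alloc_obj(&mut self, value: T) -> Result<NonNull<T>, ()>;
    unsafe fn dealloc_obj(&mut self, obj: NonNull<T>);
}

/// Hypothetical magazine caching layer: keeps a small stash of "freed"
/// objects and only falls back to the wrapped allocator when the stash is
/// empty (on alloc) or full (on dealloc).
pub struct Magazine<T, A: ObjectAllocator<T>> {
    inner: A,
    stash: Vec<NonNull<T>>,
    cap: usize,
}

impl<T, A: ObjectAllocator<T>> Magazine<T, A> {
    pub fn new(inner: A, cap: usize) -> Self {
        Magazine { inner, stash: Vec::new(), cap }
    }
}

unsafe impl<T, A: ObjectAllocator<T>> ObjectAllocator<T> for Magazine<T, A> {
    fn alloc_obj(&mut self, value: T) -> Result<NonNull<T>, ()> {
        if let Some(slot) = self.stash.pop() {
            unsafe {
                // Drop the cached object and move the new value into its
                // slot, reusing the memory without touching `inner`.
                slot.as_ptr().drop_in_place();
                slot.as_ptr().write(value);
            }
            Ok(slot)
        } else {
            self.inner.alloc_obj(value)
        }
    }

    unsafe fn dealloc_obj(&mut self, obj: NonNull<T>) {
        if self.stash.len() < self.cap {
            // Keep the object (still constructed) for later reuse.
            self.stash.push(obj);
        } else {
            self.inner.dealloc_obj(obj);
        }
    }
}
```

A real implementation would also drain the stash in `Drop`; the point of the sketch is only that a shared trait is what lets the caching layer and the underlying slab allocator come from different crates.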
Yes, it would be another RFC.
That depends on the scope of the RFC itself, which is decided by the person who writes it; feedback is then given by everyone. But really, as this is a tracking issue for this already-accepted RFC, thinking about extensions and design changes isn't really for this thread; you should open a new one over on the RFCs repo.
@joshlf Ah, I see.

@steveklabnik Yeah, now discussion would be better elsewhere. But @joshlf was also raising the issue lest it expose a hitherto unforeseen flaw in the accepted but unimplemented API design. In that sense it matches the earlier posts in this thread.
@Ericson2314 Yeah, I thought that was what you meant. I think we're on the same page :)

@steveklabnik Sounds good; I'll poke around with my own implementation and submit an RFC if it ends up seeming like a good idea.
@joshlf I don't see any reason why custom allocators would need to go into the compiler or standard library. Once this RFC lands, you could easily publish your own crate that does an arbitrary sort of allocation (even a fully-fledged allocator like jemalloc could be custom-implemented!).
@alexreg This isn't about a particular custom allocator, but rather a trait that specifies the type of all allocators which are parametric on a particular type. So just like RFC 1398 defines a trait (Allocator) that is the type of any low-level allocator, I'm asking about a trait (ObjectAllocator<T>) that is the type of any allocator which can allocate/deallocate and construct/drop objects of type T.
@alexreg See my earlier point about using standard library collections with custom object-specific allocators.
Sure, but I’m not sure that would belong in the standard library. Could easily go into another crate, with no loss of functionality or usability.
I think you’d want to use standard-library collections (any heap-allocated value) with an *arbitrary* custom allocator; i.e. not limited to object-specific ones.
> Sure, but I'm not sure that would belong in the standard library. Could easily go into another crate, with no loss of functionality or usability.

Yes, but you probably want some standard library functionality to rely on it (such as what @Ericson2314 suggested).

> I think you'd want to use standard-library collections (any heap-allocated value) with an *arbitrary* custom allocator; i.e. not limited to object-specific ones.

Ideally you'd want both - to accept either type of allocator. There are very significant benefits to using object-specific caching; for example, both slab allocation and magazine caching give very significant performance benefits - take a look at the papers I linked to above if you're curious.
But the object allocator trait could simply be a subtrait of the general allocator trait. It's as simple as that, as far as I'm concerned. Sure, certain types of allocators can be more efficient than general-purpose allocators, but neither the compiler nor the standard library really needs to (or indeed should) know about this.
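As a rough sketch of that suggestion (assumptions: `Alloc` here is a simplified stand-in for the trait tracked by this issue, and `ObjectAlloc` and its methods are hypothetical names), the object-level methods could get default implementations in terms of raw allocation, which a slab or magazine allocator would override:

```rust
use std::alloc::Layout;
use std::ptr::NonNull;

// Simplified stand-in for the general allocator trait from the RFC; the real
// trait has more methods and a richer error type.
pub unsafe trait Alloc {
    unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<u8>, ()>;
    unsafe fn dealloc(&mut self, ptr: NonNull<u8>, layout: Layout);
}

// Hypothetical object-level subtrait: the defaults fall back to raw
// alloc/dealloc, while a specialized allocator can override them to cache
// constructed objects instead of constructing and dropping on every call.
pub unsafe trait ObjectAlloc<T>: Alloc {
    /// Default: allocate raw memory for one `T` and move `value` into it.
    fn alloc_obj(&mut self, value: T) -> Result<NonNull<T>, ()> {
        unsafe {
            let raw = self.alloc(Layout::new::<T>())?.cast::<T>();
            raw.as_ptr().write(value);
            Ok(raw)
        }
    }

    /// Default: drop the object in place, then release its memory.
    unsafe fn dealloc_obj(&mut self, obj: NonNull<T>) {
        obj.as_ptr().drop_in_place();
        self.dealloc(obj.cast::<u8>(), Layout::new::<T>());
    }
}
```

Whether defaults like these are actually desirable is a separate design question; as the next reply notes, the semantics of an object-caching allocator differ from plain allocate-then-construct.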
Ah, so the problem is that the semantics are different.
This is not compatible with
Are these all the methods that you mean?
@gnzlbg Yes.
@Amanieu Sounds ok to me, but this issue is already huge. Consider filing a separate issue (or even a stabilization PR) that we could FCP separately?
Amanieu referenced this issue on Oct 25, 2018: Add tracking issue for Layout methods (and some API changes) #55366 (merged)
bors added a commit that referenced this issue on Nov 7, 2018
bors added a commit that referenced this issue on Nov 8, 2018
From Allocators and lifetimes:
Does this imply that an Allocator must be
Good catch! Since the
shanemikel commented on Feb 25, 2019
@gnzlbg Yes, I'm aware of the huge differences in the generics systems, and that not everything he details is implementable in the same way in Rust. I've been working on the library on-and-off since posting, though, and I'm making good progress.
It doesn't.
But can't
Another question regarding
For the first, we have two cases (as in the documentation, the implementer can choose between these):
The second is ensured by the following safety constraint:
This means that we must call
Edit: Regarding the last point: even if
This only holds because the trait requires an
Where did you get this information from? All
Docs of
So one of the error conditions of this method is:

```rust
/// # Safety
///
/// * the layout of `[T; n]` must *fit* that block of memory.
///
/// # Errors
///
/// Returning `Err` indicates that either `[T; n]` or the given
/// memory block does not meet allocator's size or alignment
/// constraints.
```

If `[T; n]` does not meet those constraints, the safety clause above is already violated, so that error condition seems redundant. The other error condition is "Always returns `Err` on arithmetic overflow."

Indeed. I find it weird that so many methods (e.g.
Yup, I think this error condition can be dropped as it's redundant. Either we need the safety clause or an error condition, but not both.
IMO, it is guarded by the same safety clause as above; if the capacity of
Did you mean
Note that this trait can be implemented by users for their own custom allocators, and that these users can override the default implementations of these methods. So when considering whether this should return
Yes, sorry.
I see, but implementing
Every API left pointing here for a tracking issue is the
Simple background question: What is the motivation behind the flexibility with ZSTs? It seems to me that, given that we know at compile-time that a type is a ZST, we can completely optimize out both the allocation (to return a constant value) and the deallocation. Given that, it seems to me that we should say one of the following:
Is there a reason that the flexibility we have with the current API is needed?
It's a trade-off. Arguably, the Alloc trait is used more often than it is implemented, so it might make sense to make using Alloc as easy as possible by providing built-in support for ZSTs. This would mean that implementers of the Alloc trait would need to take care of this, but, more importantly to me, that those trying to evolve the Alloc trait would need to keep ZSTs in mind on every API change. It also complicates the docs of the API by having to explain how ZSTs are (or could be, if it is "implementation defined") handled.

C++ allocators pursue this approach, where the allocator tries to solve many different problems. This not only made them harder to implement and harder to evolve, but also harder for users to actually use, because of how all these problems interact in the API.

I think that handling ZSTs and allocating/deallocating raw memory are two orthogonal problems, and therefore we should keep the Alloc trait API simple by just not handling them. Users of Alloc like libstd will need to handle ZSTs, e.g., on each collection (see the sketch below). That's definitely a problem worth solving, but I don't think the Alloc trait is the place for that. I'd expect a utility to solve this problem to pop up within libstd out of necessity, and when that happens, we can maybe try to RFC such a utility and expose it in std::heap.
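As a caller-side illustration of that last point (a minimal sketch only: `Alloc` is again a simplified stand-in for the trait being discussed, and the helper name `alloc_one` is made up), the usual approach is to short-circuit ZSTs with a dangling pointer and never call the allocator at all:

```rust
use std::alloc::Layout;
use std::ptr::NonNull;

// Simplified stand-in for the allocator trait being discussed here.
pub unsafe trait Alloc {
    unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<u8>, ()>;
    unsafe fn dealloc(&mut self, ptr: NonNull<u8>, layout: Layout);
}

// Hypothetical caller-side helper: never ask the allocator for a zero-sized
// allocation; hand back the conventional dangling-but-aligned pointer instead.
fn alloc_one<T, A: Alloc>(a: &mut A) -> Result<NonNull<T>, ()> {
    let layout = Layout::new::<T>();
    if layout.size() == 0 {
        // `size_of::<T>()` is a compile-time constant, so in practice this
        // branch is optimized away for non-zero-sized types.
        return Ok(NonNull::dangling());
    }
    unsafe { a.alloc(layout).map(|p| p.cast::<T>()) }
}
```

The open question in this sub-thread is whether logic like this belongs in one shared utility, in every collection, or inside the trait itself.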
That all sounds reasonable.
Doesn't that imply that we should have the API explicitly not handle ZSTs rather than be implementation-defined? IMO, an "unsupported" error is not very helpful at runtime since the vast majority of callers will not be able to define a fallback path, and will therefore have to assume that ZSTs are unsupported anyway. Seems cleaner to just simplify the API and declare that they're never supported.
burdges commented on Mar 11, 2019
Would specialization be used by
The latter should be sufficient; the appropriate code paths would be trivially removed at compile time.
For me, an important constraint is that if we ban zero-sized allocations, the
There are multiple ways to achieve this. One would be to add another
Alternatively, we could ban zero-sized
For example, some types like
We can't specialize on ZSTs yet. Right now all code uses
It'd be interesting to consider whether there are ways to make this a compile-time guarantee, but even a
I haven't paid any attention to discussions about allocators, so sorry about that. But I've long wished that the allocator had access to the type of the value it's allocating. There may be allocator designs that could use it.
Then we'd probably have the same issues as C++ and its allocator API.
nikomatsakis commented on Apr 8, 2016 • edited by SimonSapin
FCP proposal: #32838 (comment)
FCP checkboxes: #32838 (comment)
Tracking issue for rust-lang/rfcs#1398 and the `std::heap` module.

- `struct Layout`, `trait Allocator`, and default implementations in the `alloc` crate (#42313)
- decide where things should live (default impls in the `alloc` crate, but `Layout`/`Allocator` could be in `libcore`...) (#42313)
- audit the default implementations in `Layout` for overflow errors (potentially switching to overflowing_add and overflowing_mul as necessary)
- `realloc_in_place` should be replaced with `grow_in_place` and `shrink_in_place` (comment) (#42313)
- determine the requirements on the alignment provided to `fn dealloc`. (See discussion on the allocator RFC, the global allocator RFC, and the `trait Alloc` PR.) Must you deallocate with the exact `align` that you allocate with? Concerns have been raised that allocators like jemalloc don't require this, and it's difficult to envision an allocator that does require this. (more discussion). @ruuda and @rkruppe look like they've got the most thoughts so far on this.
- should `AllocErr` be `Error` instead? (comment)
- with the `usable_size` business we may wish to allow, for example, that if you allocate with `(size, align)` you must deallocate with a size somewhere in the range of `size...usable_size(size, align)`. It appears that jemalloc is totally ok with this (it doesn't require you to deallocate with the precise `size` you allocate with), and this would also allow `Vec` to naturally take advantage of the excess capacity jemalloc gives it when it does an allocation (although actually doing this is also somewhat orthogonal to this decision; we're just empowering `Vec`). So far @Gankro has most of the thoughts on this. (@alexcrichton believes this was settled in #42313 due to the definition of "fits".)
- `alloc_system` is buggy on huge alignments (e.g. an align of `1 << 32`) #30170 #43217
- should `Layout` provide a `fn stride(&self)` method? (See also rust-lang/rfcs#1397, #17027)
- `Allocator::owns` as a method? #44302

State of `std::heap` after #42313:
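The issue description went on to list the API surface here; as a rough, paraphrased sketch of its shape at that time (signatures approximate and simplified, and they changed further over the course of this issue — this is not the exact listing):

```rust
// Paraphrased sketch only: approximate shape of the std::heap API after
// #42313, not the exact item list or signatures.
pub struct Layout { /* size and align, with validity-checked constructors */ }

impl Layout {
    pub fn new<T>() -> Layout { unimplemented!() }
    pub fn from_size_align(size: usize, align: usize) -> Option<Layout> { unimplemented!() }
}

// Simplified; the original error type carried more information.
#[derive(Debug)]
pub struct AllocErr;

pub unsafe trait Alloc {
    unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr>;
    unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout);
    // ...plus provided helpers (realloc, alloc_zeroed, grow_in_place,
    // shrink_in_place, usable_size, ...) with overridable defaults.
}
```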