Prepare global allocators for stabilization #1974

Merged
merged 7 commits into from Jun 18, 2017

Conversation

@sfackler
Member

sfackler commented Apr 16, 2017

Rendered

@sfackler sfackler added the T-libs label Apr 16, 2017

@mark-i-m


mark-i-m Apr 17, 2017

Contributor

Thanks @sfackler! This is an exciting feature to someone who enjoys writing embedded and OS code :)

Another pain point not addressed in this RFC is that if you have a project that defines its own allocator, you still need to define something like the allocator_stub crate. This is mildly annoying. I don't know how difficult this would be, but it would be nice if you could also allow a submodule to define itself as an allocator and let the crate use the submodule. And maybe the crate root could define itself as #[allocator] so that dependent crates know.

As its name would suggest, the global allocator is a global resource - all crates in a dependency tree must agree on the selected global allocator.

This seems a bit inflexible, which makes me nervous. It would be nice if crates could indicate whether they absolutely must have allocator X or just would prefer allocator X. Perhaps an additional annotation like #[must_have_allocator]? Or alternately, should crate writers be encouraged to make allocators optional dependencies if they can?

The standard library will gain a new stable crate - alloc_system. This is the default allocator crate and corresponds to the "system" allocator (i.e. malloc etc on Unix and HeapAlloc etc on Windows).

It would be nice if liballoc did not depend on the name of the allocator crate. On no_std projects which use liballoc, the allocator crate currently has to be named alloc_system because of this. It would be nice to be able to name it whatever I want. I don't know if this falls in the scope of this RFC, though...


@ranma42


ranma42 Apr 17, 2017

Contributor

Why is reallocate_inplace the only optional function?
It looks like all of the API could be implemented (possibly losing some efficiency) on top of just allocate_zeroed (or allocate) + deallocate.
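This observation can be sketched concretely: a fallback `reallocate` built from only allocate + deallocate, at the cost of always copying. This is a minimal sketch using `std::alloc`'s free functions as stand-ins for the RFC's API; `reallocate_fallback` is an invented name, not part of the proposal.

```rust
use std::alloc::{alloc, dealloc, Layout};
use std::ptr;

// A `reallocate` built from only allocate + deallocate: allocate the new
// block, copy the surviving prefix, free the old block. Loses the chance
// of in-place growth but is otherwise equivalent.
unsafe fn reallocate_fallback(p: *mut u8, old_size: usize, new_size: usize, align: usize) -> *mut u8 {
    let new_p = alloc(Layout::from_size_align(new_size, align).unwrap());
    if !new_p.is_null() {
        ptr::copy_nonoverlapping(p, new_p, old_size.min(new_size));
        dealloc(p, Layout::from_size_align(old_size, align).unwrap());
    }
    new_p
}

fn main() {
    unsafe {
        let p = alloc(Layout::from_size_align(4, 1).unwrap());
        for i in 0..4 {
            *p.add(i) = i as u8;
        }
        let q = reallocate_fallback(p, 4, 8, 1);
        assert!(!q.is_null());
        // The old contents survive the resize.
        for i in 0..4 {
            assert_eq!(*q.add(i), i as u8);
        }
        dealloc(q, Layout::from_size_align(8, 1).unwrap());
    }
}
```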


text/0000-global-allocators.md
+/// The new size of the allocation is returned. This must be at least
+/// `old_size`. The allocation must always remain valid.
+///
+/// Behavior is undefined if the requested size is 0 or the alignment is not a


@ranma42

ranma42 Apr 17, 2017

Contributor

Should we go for "Behavior is undefined if the requested size is less than old_size or..."?
It might be worth spelling out explicitly whether old_size and size are the only legitimate return values or if the function can also return something inside that range.



@sfackler

sfackler Apr 17, 2017

Member

Yeah, I just copied these docs out of alloc::heap - they need to be cleaned up.


@comex


comex Apr 17, 2017

Why not require global allocators to implement the same Allocator trait as is used for collections?

I gather that you can't just say type HeapAllocator = JemallocAllocator or something like that, because the choice of which allocator to use should be preserved until the final link. HeapAllocator needs to be a facade backed by some linking magic. However, I don't see any reason the backend allocator interface can't reuse the same trait, rather than defining a new ad-hoc attribute-based thingy.


@sfackler


sfackler Apr 17, 2017

Member

@mark-i-m

I'm not sure I understand what the allocator_stub crate is doing. Is the issue you're thinking of a single-crate project that also wants to define its own custom allocator?

It would be nice if crates could indicate whether they absolutely must have allocator X or just would prefer allocator X.

If a crate absolutely must have allocator X it can stick #[allocator] extern crate X; in itself. At that point, the entire dependency tree is locked into allocator X, and compilation will fail if something other than X is asked for elsewhere.

Or alternately, should crate writers be encouraged to make allocators optional dependencies if they can?

I'm not sure I understand a context in which a crate would want to do this. Could you give an example?

It would be nice if liballoc did not depend on the name of allocator crate. On no_std projects which use liballoc, the allocator crate currently has to have name alloc_system because of this.

That seems like an implementation bug to me. liballoc shouldn't be doing anything other than telling the compiler that it requires a global allocator.

@comex

However, I don't see any reason the backend allocator interface can't reuse the same trait, rather than defining a new ad-hoc attribute-based thingy.

We could in theory use the Allocator trait, but there would still need to be some ad-hoc attribute weirdness going on. You'd presumably need to define some static instance of your allocator type and tag that as the allocator instance for that crate.


@rkruppe


rkruppe Apr 17, 2017

Contributor

@comex I'd rather not delay stabilizing this feature until the allocator traits are stabilized.


text/0000-global-allocators.md
+///
+/// The `ptr` parameter must not be null.
+///
+/// The `old_size` and `align` parameters are the parameters that were used to


@rkruppe

rkruppe Apr 17, 2017

Contributor

@ruuda made a good point in the discussion of the allocator traits: It can be sensible to allocate over-aligned data, but this information is not necessarily carried along until deallocation, so there's a good reason deallocate shouldn't require the same alignment that was used to allocate.

This requirement was supposed to allow optimizations in the allocator, but AFAIK nobody could name a single existing allocator design that can use alignment information for deallocation.



@mark-i-m

mark-i-m Apr 17, 2017

Contributor

I wrote an allocator for an OS kernel once that would have benefited greatly from alignment info.



@rkruppe

rkruppe Apr 17, 2017

Contributor

That would be very relevant to both this RFC and the allocators design, so could you write up some details?



@mark-i-m

mark-i-m Apr 17, 2017

Contributor

Hmmm... It seems that I was very mistaken... I have to apologize 🤕

Actually, when I went back and looked at the code, I found the exact opposite. The allocator interface actually does pass the alignment to free, and my implementation of free ignores it for exactly the reasons mentioned above (more later). That said, passing alignment into the alloc function is useful (and required for correctness), so I assume that this discussion is mostly about whether free should take align or not.

The code is here. It's a bit old and not very well-written since I was learning rust when I wrote it. Here is a simple description of what it does:

Assumptions

  • The kernel is the only entity using this allocator. (The user-mode allocator lives in user-mode).
  • The kernel is only using this allocator through Box, so the parameters size and align are trusted to be correct, since they are generated by the compiler.

Objective

Use as little metadata as possible.

Blocks

  • All blocks are a multiple of the smallest possible block size, which is based on the size of the free-block metadata (16B on a 32-bit machine).
  • All blocks have a minimum alignment which is the same as minimum block size (16B).
  • The allocator keeps a free-list which is simply a singly linked list of blocks.
  • Free blocks are used to store their own metadata.
  • Active blocks have no header/footer. This means that there is no header/footer overhead at all.

alloc

Allocating memory just grabs the first free block with required size and alignment, removes it from the free list, splits it if needed, and returns a pointer to its beginning. The size of the block allocated is a function of the alignment and size.

free

Freeing memory requires very little effort, it turns out. Since we assume that the parameters size and ptr are valid, we simply create block metadata and add it to the linked list. If possible, we can merge with free blocks after the block we are freeing.

In fact, the alignment passed into free is ignored here because the ptr should already be aligned. The takeaway seems to be the opposite from what I said above (again, sorry). When I thought about it some more, it makes sense. A ptr inherently conveys some alignment information, so passing this information in as an argument actually seems somewhat redundant.
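As a rough illustration of the remark above that "the size of the block allocated is a function of the alignment and size", here is a hypothetical rounding helper. The 16-byte `MIN_BLOCK` matches the description, but the exact rounding rule is an assumption; the real kernel code may differ.

```rust
// Minimum block size and alignment, per the free-block metadata size
// described above (16B on a 32-bit machine).
const MIN_BLOCK: usize = 16;

// Sketch: the block actually handed out is the request rounded up to the
// larger of the requested alignment and the block granularity.
fn block_size(size: usize, align: usize) -> usize {
    let granule = align.max(MIN_BLOCK);
    let needed = size.max(MIN_BLOCK);
    (needed + granule - 1) / granule * granule
}

fn main() {
    assert_eq!(block_size(1, 1), 16);   // never smaller than a block
    assert_eq!(block_size(17, 1), 32);  // rounded to 16B multiples
    assert_eq!(block_size(10, 64), 64); // over-aligned request grows the block
}
```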



@rkruppe

rkruppe Apr 17, 2017

Contributor

I'm actually quite relieved to hear that 😄 Yes, allocation and reallocation should have alignment arguments, it's just deallocation that shouldn't use alignment information. It's not quite true that "ptr inherently conveys alignment information", because the pointer might just happen to have more alignment than was requested, but it's true that it's always aligned as requested at allocation time (since it must be the exact pointer returned by allocation, not a pointer into the allocation).
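The point about incidental over-alignment can be demonstrated directly. A minimal sketch using `std::alloc` as a stand-in for the allocator API:

```rust
use std::alloc::{alloc, dealloc, Layout};

fn main() {
    unsafe {
        // Request 8 bytes with 8-byte alignment.
        let layout = Layout::from_size_align(8, 8).unwrap();
        let p = alloc(layout);
        assert!(!p.is_null());
        // The requested alignment is guaranteed...
        assert_eq!(p as usize % 8, 0);
        // ...but the pointer may happen to be more aligned than requested,
        // so the pointer value alone cannot recover the requested alignment.
        let incidental = p as usize % 64 == 0;
        println!("incidentally 64-byte aligned: {}", incidental);
        dealloc(p, layout);
    }
}
```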


text/0000-global-allocators.md
+or more distinct allocator crates are selected, compilation will fail. Note that
+multiple crates can select a global allocator as long as that allocator is the
+same across all of them. In addition, a crate can depend on an allocator crate
+without declaring it to be the global allocator by omitting the `#[allocator]`


@rkruppe

rkruppe Apr 17, 2017

Contributor

Would it make sense to restrict this choice to "root crates" (executables, staticlibs, cdylibs) analogously to how the panic strategy is chosen? [1] I can't think of a good reason for a library to require a particular allocator, and it seems like it could cause a ton of pain (and fragmentation) to mix multiple allocators within one application.

[1]: It's true that the codegen option -C panic=... can and must be set for libraries too, but this is mostly to allow separate compilation of crates – the panic runtime to be linked in is determined by the root. There are also restrictions (can't link a panic=abort library into a panic=unwind library). In addition, Cargo exposes only the "root sets panic strategy" usage.



@casey

casey Apr 19, 2017

I share this concern. Allowing libraries to require a particular global allocator could create rifts in the crate ecosystem, where different sets of libraries cannot be used together because they require different global allocators.

Allocators share the same interface, and so the optimal allocator will depend on the workload of the binary. It seems like the crate root author will be in the best position to make this choice, since they'll have insight into the workload type, as well as be able to run holistic benchmarks.

Thus it seems like a good idea to restrict global allocator selection to the crate root author.


text/0000-global-allocators.md
+usage will happen through the *global allocator* interface located in
+`std::heap`. This module exposes a set of functions identical to those described
+above, but that call into the global allocator. To select the global allocator,
+a crate declares it via an `extern crate` annotated with `#[allocator]`:


@rkruppe

rkruppe Apr 17, 2017

Contributor

Clarification request: Can all crates do this? As mentioned in another comment, I would conservatively expect this choice to be left to the root crate, as with panic runtimes.



@sfackler

sfackler Apr 17, 2017

Member

As written, any crate can do this, yeah.

I would be fine restricting allocator selection to the root crate if it simplifies the implementation - I can't think of any strong reasons for needing to select an allocator in a non-root crate.


@mark-i-m


mark-i-m Apr 17, 2017

Contributor

@sfackler

I'm not sure I understand what the allocator_stub crate is doing. Is the issue you're thinking of a single-crate project that also wants to define its own custom allocator?

It doesn't have to be a single-crate project, but yes more or less. The idea is that you might have a large crate that both defines and uses an allocator. For example, in an OS kernel, the kernel allocator might want to define this interface so you can use it with Box. But you can easily imagine such an allocator depending on, say, the paging subsystem or a bunch of initialization functions or the kernel synchronization primitives. So while the allocator may be modular enough to go into its own module, it still depends on other parts of the crate and cannot be pulled out cleanly.

I'm not sure I understand a context in which a crate would want to do this. Could you give an example?

I guess I was thinking that maybe a crate might have performance preference for some allocator without really depending on it. For example, if you know all of your allocations will be of the same size, maybe you would prefer a slab allocator, but it doesn't change correctness if someone else would like a different allocator. TBH, I don't know if anyone actually does this, but it was a thought.

That seems like an implementation bug to me. liballoc shouldn't be doing anything other than telling the compiler that it requires a global allocator.

Hmm... That's good to know... I will have to look into this sometime...

@comex

However, I don't see any reason the backend allocator interface can't reuse the same trait, rather than defining a new ad-hoc attribute-based thingy.

Hmm... I don't understand why they should use the same trait. They seem pretty disparate to me...


@rkruppe


rkruppe Apr 17, 2017

Contributor

I guess I was thinking that maybe a crate might have performance preference for some allocator without really depending on it. For example, if you know all of your allocations will be of the same size, maybe you would prefer a slab allocator, but it doesn't change correctness if someone else would like a different allocator. TBH, I don't know if anyone actually does this, but it was a thought.

In my experience, such specialized allocation behavior is usually implemented by explicitly using the allocator (and having it allocate big chunks of memory from the global allocator). And most allocators used like that aren't suitable as a general-purpose allocator anyway.


@comex


comex Apr 17, 2017

Hmm... I don't understand why they should use the same trait. They seem pretty disparate to me...

How are they disparate? The only reason the language needs to have a built-in concept of allocator is for standard library functionality that requires one. Most or all of that functionality should be parameterized by the Allocator trait to start with, so that multiple allocators can be used in the same program. The global allocator would exist mostly or entirely to serve as backend for the default HeapAllocator, so why shouldn't it use the same interface, rather than making HeapAllocator map the calls to a slightly different but isomorphic interface?

Here is the Allocator trait in question, with some type aliases removed:

pub struct Layout { size: usize, align: usize }
pub unsafe trait Allocator {
    // required
    unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr>;
    unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout);
    // optional but allocator may want to override
    fn oom(&mut self, _: AllocErr) -> !;
    unsafe fn usable_size(&self, layout: &Layout) -> (usize, usize);
    unsafe fn realloc(&mut self, ptr: *mut u8, layout: Layout, new_layout: Layout) -> Result<*mut u8, AllocErr>;
    unsafe fn alloc_excess(&mut self, layout: Layout) -> Result<Excess, AllocErr>;
    unsafe fn realloc_excess(&mut self, ptr: *mut u8, layout: Layout, new_layout: Layout) -> Result<Excess, AllocErr>;
    unsafe fn realloc_in_place(&mut self, ptr: *mut u8, layout: Layout, new_layout: Layout) -> Result<(), CannotReallocInPlace>;
    // plus some convenience methods the allocator probably wouldn't override
}
pub enum AllocErr { Exhausted { request: Layout }, Unsupported { details: &'static str } }
pub struct CannotReallocInPlace; // unit struct

Here is your set of functions:

pub fn allocate(size: usize, align: usize) -> *mut u8;

Exactly equivalent to Allocator::alloc except for not taking self and indicating failure with a null pointer rather than a more descriptive AllocErr.

pub fn allocate_zeroed(size: usize, align: usize) -> *mut u8;

Not included in Allocator. You probably included this because allocators often start with blocks of zero bytes, such as those returned by mmap, and can provide zeroed allocations without doing another useless memset. I agree this is useful: but that means it should be in the Allocator trait as well. There are plenty of cases where standard containers could want zeroed blocks, such as a hash table whose internal layout guarantees that zero means unset.
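The "useless memset" in question is what a caller must do today without allocate_zeroed: allocate, then zero the block by hand, even if the allocator could have returned fresh already-zeroed pages. A minimal sketch (allocate_zeroed_fallback is an invented name, with std::alloc standing in for the proposed functions):

```rust
use std::alloc::{alloc, dealloc, Layout};
use std::ptr;

// The naive fallback: allocate, then zero. An allocator handing out fresh
// mmap'd pages could skip the write_bytes entirely, which is the argument
// for putting allocate_zeroed on the trait.
unsafe fn allocate_zeroed_fallback(size: usize, align: usize) -> *mut u8 {
    let p = alloc(Layout::from_size_align(size, align).unwrap());
    if !p.is_null() {
        ptr::write_bytes(p, 0, size); // redundant if the memory was already zero
    }
    p
}

fn main() {
    unsafe {
        let p = allocate_zeroed_fallback(32, 8);
        assert!(!p.is_null());
        assert!((0..32).all(|i| *p.add(i) == 0));
        dealloc(p, Layout::from_size_align(32, 8).unwrap());
    }
}
```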

pub fn deallocate(ptr: *mut u8, old_size: usize, align: usize);

Exactly equivalent to Allocator::dealloc except for not taking self.

pub fn reallocate(ptr: *mut u8, old_size: usize, size: usize, align: usize) -> *mut u8;

Exactly equivalent to Allocator::realloc except for the aforementioned caveats plus not being able to change the alignment.

pub fn reallocate_inplace(ptr: *mut u8, old_size: usize, size: usize, align: usize) -> usize;

Ditto Allocator::realloc_in_place.

Overall, there is some functionality missing in yours:

  • usable_size and its cousins alloc_excess/realloc_excess, which you explicitly removed "as it is not used anywhere in the standard library". I think it's still useful in some cases, many existing allocators provide it as a primitive, and it's easy for a custom allocator to leave the default implementation if it doesn't want to bother with it.
  • Ability to change the alignment in realloc. No reason not to support this.
  • An oom method which panics and can "provide feedback about the allocator's state at the time of the OOM" (according to the RFC).
  • AllocErr, a way for the allocator to provide explicit error messages for allocations that aren't supported (like if size is zero or way too large, or align is too large).

For each of these, HeapAllocator would have to simply not provide the corresponding functionality (even though the underlying allocator may support it), or else the global allocator API would have to be changed in the future to add it.

The only big difference is that the Allocator trait takes self while global allocators must not. But there's no need for that to cause any actual overhead. Roughly, the allocator-crate side of the linking magic would do something like

#[no_mangle]
pub extern fn magic_alloc(layout: Layout) -> Result<*mut u8, AllocErr> {
    THE_ALLOC.alloc(layout)
}

and the call to alloc would be inlined.

We could in theory use the Allocator trait, but there would still need to be some ad-hoc attribute weirdness going on. You'd presumably need to define some static instance of your allocator type and tag that as the allocator instance for that crate.

True, but it would arguably be somewhat less weird to have a single attribute than simulating the trait system by enforcing different function signatures (especially if there are extensions in the future, so you have to deal with optional functions).

Even if you don't end up literally using the Allocator trait, it would make sense to at least use the same names and signatures rather than slightly and arbitrarily different ones (allocate vs. alloc, separate size and align vs. Layout, etc.).

Also, eventually it would be nice to have a proper "forward dependencies" feature rather than special-casing specific types of dependencies (i.e. allocators). This shouldn't wait for that to be stabilized, but it would be nice if in the future the magic attribute could be essentially desugared to a use of that feature, without too much custom logic.

@comex I'd rather not delay stabilizing this feature until the allocator traits are stabilized.

The Allocator RFC has already been accepted (while this one isn't even FCP), and any nitpicks raised regarding the Allocator interface are likely to also apply to global allocators. I don't think you're actually going to save any time.

comex commented Apr 17, 2017

Hmm... I don't understand why they should use the same trait. They seem pretty disparate to me...

How are they disparate? The only reason the language needs to have a built-in concept of allocator is for standard library functionality that requires one. Most or all of that functionality should be parameterized by the Allocator trait to start with, so that multiple allocators can be used in the same program. The global allocator would exist mostly or entirely to serve as backend for the default HeapAllocator, so why shouldn't it use the same interface, rather than making HeapAllocator map the calls to a slightly different but isomorphic interface?

Here is the Allocator trait in question, with some type aliases removed:

pub struct Layout { size: usize, align: usize }
pub unsafe trait Allocator {
    // required
    unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr>;
    unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout);
    // optional but allocator may want to override
    fn oom(&mut self, _: AllocErr) -> !;
    unsafe fn usable_size(&self, layout: &Layout) -> (usize, usize);
    unsafe fn realloc(&mut self, ptr: *mut u8, layout: Layout, new_layout: Layout) -> Result<*mut u8, AllocErr>;
    unsafe fn alloc_excess(&mut self, layout: Layout) -> Result<Excess, AllocErr>;
    unsafe fn realloc_excess(&mut self, ptr: *mut u8, layout: Layout, new_layout: Layout) -> Result<Excess, AllocErr>;
    unsafe fn realloc_in_place(&mut self, ptr: *mut u8, layout: Layout, new_layout: Layout) -> Result<(), CannotReallocInPlace>;
    // plus some convenience methods the allocator probably wouldn't override
}
pub enum AllocErr { Exhausted { request: Layout }, Unsupported { details: &'static str } }
pub struct CannotReallocInPlace; // unit struct

Here is your set of functions:

pub fn allocate(size: usize, align: usize) -> *mut u8;

Exactly equivalent to Allocator::alloc except for not taking self and indicating failure with a null pointer rather than a more descriptive AllocErr.

pub fn allocate_zeroed(size: usize, align: usize) -> *mut u8;

Not included in Allocator. You probably included this because allocators often start with blocks of zero bytes, such as those returned by mmap, and can provide zeroed allocations without doing another useless memset. I agree this is useful: but that means it should be in the Allocator trait as well. There are plenty of cases where standard containers could want zeroed blocks, such as a hash table whose internal layout guarantees that zero means unset.

pub fn deallocate(ptr: *mut u8, old_size: usize, align: usize);

Exactly equivalent to Allocator::dealloc except for not taking self.

pub fn reallocate(ptr: *mut u8, old_size: usize, size: usize, align: usize) -> *mut u8;

Exactly equivalent to Allocator::realloc except for the aforementioned caveats plus not being able to change the alignment.

pub fn reallocate_inplace(ptr: *mut u8, old_size: usize, size: usize, align: usize) -> usize;

Ditto Allocator::realloc_in_place.

Overall, there is some functionality missing in yours:

  • usable_size and its cousins alloc_excess/realloc_excess, which you explicitly removed "as it is not used anywhere in the standard library". I think it's still useful in some cases, many existing allocators provide it as a primitive, and it's easy for a custom allocator to leave the default implementation if it doesn't want to bother with it.
  • Ability to change the alignment in realloc. No reason not to support this.
  • An oom method which panics and can "provide feedback about the allocator's state at the time of the OOM" (according to the RFC).
  • AllocErr, a way for the allocator to provide explicit error messages for allocations that aren't supported (like if size is zero or way too large, or align is too large).

For each of these, HeapAllocator would have to simply not provide the corresponding functionality (even though the underlying allocator may support it), or else the global allocator API would have to be changed in the future to add it.

The only big difference is that the Allocator trait takes self while global allocators must not. But there's no need for that to cause any actual overhead. Roughly, the allocator-crate side of the linking magic would do something like

#[no_mangle]
pub extern fn magic_alloc(layout: Layout) -> Result<*mut u8, AllocErr> {
    THE_ALLOC.alloc(layout)
}

and the call to alloc would be inlined.

We could in theory use the Allocator trait, but there would still need to be some ad-hoc attribute weirdness going on. You'd presumably need to define some static instance of your allocator type and tag that as the allocator instance for that crate.

True, but it would arguably be somewhat less weird to have a single attribute than simulating the trait system by enforcing different function signatures (especially if there are extensions in the future, so you have to deal with optional functions).

Even if you don't end up literally using the Allocator trait, it would make sense to at least use the same names and signatures rather than slightly and arbitrarily different ones (allocate vs. alloc, separate size and align vs. Layout, etc.).

Also, eventually it would be nice to have a proper "forward dependencies" feature rather than special-casing specific types of dependencies (i.e. allocators). This shouldn't wait for that to be stabilized, but it would be nice if in the future the magic attribute could be essentially desugared to a use of that feature, without too much custom logic.

@comex I'd rather not delay stabilizing this feature until the allocator traits are stabilized.

The Allocator RFC has already been accepted (while this one isn't even FCP), and any nitpicks raised regarding the Allocator interface are likely to also apply to global allocators. I don't think you're actually going to save any time.

Contributor

mark-i-m commented Apr 17, 2017

@comex Thanks for the clarifications. I had thought you were suggesting not having a global allocator at all, rather than making allocators implement a trait. I agree that the trait is a more maintainable interface and probably better in the long run...

I would stipulate one thing though:

#[no_mangle]
pub extern fn magic_alloc(layout: Layout) -> Result<*mut u8, AllocErr> {
    THE_ALLOC.alloc(layout)
}

I (the programmer) should be able to define a static mut variable THE_ALLOC and tag it as the global allocator. The initialization of this allocator should be controllable by the programmer because on embedded/very low-level projects, the system might need to do some work before initializing the heap or it might need to pass some system parameters to constructor of the Allocator. For example,

#[global_allocator]
static mut THE_ALLOC: MyAllocator = MyAllocator::new(start_addr, end_addr);
// where MyAllocator: Allocator
Member

sfackler commented Apr 18, 2017

@ranma42

Why is reallocate_inplace the only optional function?
It looks like all of the API could be implemented (possibly losing some efficiency) on top of just allocate_zeroed (or allocate) + deallocate.

It seems reasonable to make everything but allocate and deallocate optional, yeah. I will update.

@comex

I think it's still useful in some cases

What are these cases?

Contributor

Ericson2314 commented Apr 18, 2017

Mmm, besides trying to leverage the trait as much as possible, which I fully support, there was talk in the past of using a general "needs/provides" mechanism (some dose of applicative functors, perhaps) for this, logging, panicking, and other similar tasks needing a canonical global singleton. I'd be really disappointed to retreat from that goal into a bunch of narrow mechanisms.

Member

sfackler commented Apr 18, 2017

I don't recall there being any more talk than "hey, would it be possible to make a general needs/provides mechanism". We can't use a thing that doesn't exist.

comex commented Apr 18, 2017

@sfackler Vec, for one - the buffer allocations can switch from alloc/realloc to alloc_excess/realloc_excess, and increase capacity by excess / element_size. See also rust-lang/rust#29931, rust-lang/rust#32075

Member

sfackler commented Apr 18, 2017

I was referring more specifically to usable_size.

comex commented Apr 18, 2017

In that case, nope, I can't think of a good reason to use usable_size rather than alloc_excess; in fact, I'd call it an anti-pattern, since there might be some allocator design where the amount of excess depends on the allocation. Quite possibly it should be removed from the Allocator trait.

Contributor

rkruppe commented Apr 18, 2017

The Allocator RFC has already been accepted (while this one isn't even FCP), and any nitpicks raised regarding the Allocator interface are likely to also apply to global allocators. I don't think you're actually going to save any time.

I based this assertion on the fact that the allocators RFC was accepted with a large number of unresolved questions, and there's been little progress on resolving those. But you're right that most of those questions also apply to the global allocator.

Member

sfackler commented Apr 18, 2017

I've pushed some updates - poke me if I've forgotten anything!

cc @nox with respect to what an actual_usable_size function could look like.

text/0000-global-allocators.md
+The global allocator could be an instance of the `Allocator` trait. Since that
+trait's methods take `&mut self`, things are a bit complicated however. The
+allocator would most likely need to be a `const` type implementing `Allocator`
+since it wouldn't be sound to interact with a static. This may cause confusion

@rkruppe

rkruppe Apr 18, 2017

Contributor

This is not true. Unlike static muts, plain statics are perfectly safe to access and can, in fact, maintain state. It's just that all mutation needs to happen via thread safe interior mutability.

With an eye towards the potential confusion described in the following sentence ("a new instance will be created for each use"), a static makes much more sense than a const — the latter is a value that gets copied everywhere, while the former is a unique object with an identity, which seems more appropriate for a global allocator (besides permitting allocator state, as mentioned before).

@eddyb

eddyb Apr 18, 2017

Member

Especially if you have access to unstable features, static with interior mutability is idiomatic and can be wrapped in a safe abstraction, while static mut is quite worse.

@mark-i-m

mark-i-m Apr 19, 2017

Contributor

I agree completely. Allocators are inherently stateful since they need to keep track of allocations for correctness. Static + interior mutability is needed.

However, this raises a new question: initializing the global allocator. How does this happen? Is there a special constructor called? Does the constructor have to be a const fn? The RFC doesn't specify this.

@sfackler

sfackler Apr 19, 2017

Member

I may be missing something here, but the issue is that you may not obtain a mutable reference to a static, but every method on Allocator takes &mut self:

struct MyAllocator;

impl MyAllocator {
    fn alloc(&mut self) { }
}

static ALLOCATOR: MyAllocator = MyAllocator;

fn main() {
    ALLOCATOR.alloc();
}
error: cannot borrow immutable static item as mutable
  --> <anon>:10:5
   |
10 |     ALLOCATOR.alloc();
   |     ^^^^^^^^^

error: aborting due to previous error

@mark-i-m There is no constructor.

@mark-i-m

mark-i-m Apr 19, 2017

Contributor

I may be missing something here, but the issue is that you may not obtain a mutable reference to a static, but every method on Allocator takes &mut self

Ah, I see. So then, does the Allocator trait have to change? Or do we make the allocator unsafe and use static mut? If neither is possible, then we might need to switch back to the attributes approach or write a new trait with the hope of coalescing them some time...

@mark-i-m There is no constructor.

Most allocators need some setup, though. Is the intent to just do something like lazy_static? That would annoy me, but it would work, I guess. Alternately, we could add a method to the interface to do this sort of set up...

@rkruppe

rkruppe Apr 19, 2017

Contributor

Oh, yeah, I totally overlooked the &mut self issue 😢 We could side-step this by changing the trait (I'm not a fan of that, for reasons I'll outline separately) or by changing how the allocator is accessed. By the latter I mean, for example, tagging a static X: MyAllocator as the global allocator creates an implicit const X_REF: &MyAllocator = &X; and all allocation calls get routed through that. This feels extremely hacky, though, and brings back the aforementioned identity confusion.

@sfackler

sfackler Apr 19, 2017

Member

switch back to the attributes approach

The RFC has never switched away from the attributes approach. This is an alternative.

Most allocators need some setup, though.

Neither system allocators nor jemalloc need explicit setup steps. If you're in an environment where your allocator needs setup, you can presumably call whatever functions are necessary at the start of execution.

text/0000-global-allocators.md
+internally since a new instance will be created for each use. In addition, the
+`Allocator` trait uses a `Layout` type as a higher level encapsulation of the
+requested alignment and size of the allocation. The larger API surface area
+will most likely cause this feature to have a significantly longer stabilization

@rkruppe

rkruppe Apr 18, 2017

Contributor

I'm not so sure about this any more. At least the piece of the API surface named here (Layout) doesn't seem very likely to delay anything. I don't recall any unresolved questions about it (there's questions about what alignment means for some functions, but that's independent of whether it's wrapped in a Layout type).

@sfackler

sfackler Apr 19, 2017

Member

The allocators RFC hasn't even been implemented yet. We have literally zero experience using the Allocator trait or the Layout type. In contrast, alloc::heap and the basic structure of global allocators have been implemented and used for the last couple of years.

Contributor

Ericson2314 commented Apr 19, 2017

@sfackler I think that's more for lack of time than interest. And now Haskell's "backpack" has basically written the book on how to retrofit a module system onto a language without one (and with type classes / traits), so it's not as if research is needed.

I'm fine with improving how things work on an experimental basis, but moving towards stabilization seems vastly premature---we haven't even implemented our existing allocator RFC!

Contributor

rkruppe commented Apr 19, 2017

So, using the allocator trait naturally suggests a static of a type that implements the allocator trait, but @sfackler pointed out that Allocator methods take &mut self. So a static indeed wouldn't work with the allocator trait as specified today, and both static mut and const are unacceptable substitutes IMO.

@mark-i-m brought up the possibility of changing the trait to take &self, but I don't think that is a good idea. Most uses of allocators (e.g., any time a collection owns an allocator) can offer the guarantees &mut implies, and those guarantees could greatly simplify and speed up certain allocators (no need for interior mutability or thread safety). Furthermore, with the exception of static allocators, the &mut shouldn't be a problem, since you can always introduce a handle type that implements the trait and can be duplicated for every user (e.g., a newtype around &MyAllocatorState or Arc<MyAllocatorState>).

Contrast this with the global allocator, where you can't just hand out a handle to every user, because users are everywhere. One could define an ad-hoc scheme to automatically introduce such handles (e.g., with static ALLOC: MyAlloc, the allocator trait must be implemented on &MyAlloc and allocator methods are called on a temporary &ALLOC). This is not only a very obvious rule patch, it also brings in a layer of indirection that may not be necessary for all allocators.

To me, this mismatch between "local" allocators and global ones is a strong argument to not couple the latter to the trait used for the former.

Contributor

Ericson2314 commented Apr 19, 2017

@rkruppe the handle thing is correct. By convention the global allocator handles its own synchronization, but that's the only magic. Allocator + Default + Sized<Size=0> is a fine bound for a handle to the implicit global allocator (the third part is hypothetical and not necessary, just nice to have). In general it's fine for Allocator instances to be handles.

Contributor

rkruppe commented Apr 19, 2017

@Ericson2314 I'm not sure I catch your drift. The global allocator (edit: by this I mean the type implementing Allocator, be it a handle or whatever) can't be a static as outlined before, a const seems rather inappropriate and risks the confusion outlined before, and static mut is unsafe. And what's the Default bound supposed to be for?

Member

sfackler commented Apr 19, 2017

I'm fine with improving how things work on an experimental basis, but moving towards stabilization seems vastly premature---we haven't even implemented our existing allocator RFC!

This is not really related to the other allocator RFC other than both deal with memory allocation.

I am not willing to hold up stabilization of anything involving a global resource indefinitely until someone gets around to writing an RFC and implementing some needs/provides interface.

Contributor

Ericson2314 commented Apr 20, 2017

@rkruppe sorry that was a bit rushed. Let me lay out a better plan in more detail:

// allocator crate

trait Allocator { .. }
// global allocator infra crate

// Not just useful for global singleton allocator, fwiw
trait SelfSyncronizingAllocator {
    type Handle: Allocator;
    fn get_handle(&self) -> Handle;
}

type GlobalHeapType: SelfSyncronizingAllocator =/* implemented elsewhere */;
static GLOBAL_HEAP: GlobalHeapType = /* implemented elsewhere */;

fn allocate(args...) -> _ { GLOBAL_HEAP.allocate(args...) }
// and so on
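The handle-based design sketched above can be rendered as compilable Rust. Everything below is illustrative: the `Allocator` trait is a stand-in for the real one, and `SystemHeap`/`SystemHandle` are hypothetical types invented for the example (the sketch intentionally leaks the allocation to stay short):

```rust
// A minimal, compilable sketch of the handle pattern: the global heap is a
// shared static, and each caller gets its own handle to allocate through.
trait Allocator {
    fn allocate(&mut self, size: usize) -> *mut u8;
}

// An allocator that can hand out independently usable handles.
trait SelfSynchronizingAllocator {
    type Handle: Allocator;
    fn get_handle(&self) -> Self::Handle;
}

// Toy "system" allocator whose handle is a zero-sized type.
struct SystemHeap;
struct SystemHandle;

impl Allocator for SystemHandle {
    fn allocate(&mut self, size: usize) -> *mut u8 {
        // Delegate to a Vec allocation just to make the example runnable;
        // the buffer is deliberately leaked, which is fine for a sketch.
        let mut buf = Vec::<u8>::with_capacity(size.max(1));
        let ptr = buf.as_mut_ptr();
        std::mem::forget(buf);
        ptr
    }
}

impl SelfSynchronizingAllocator for SystemHeap {
    type Handle = SystemHandle;
    fn get_handle(&self) -> SystemHandle {
        SystemHandle
    }
}

static GLOBAL_HEAP: SystemHeap = SystemHeap;

// The free function the "infra crate" would expose.
fn allocate(size: usize) -> *mut u8 {
    GLOBAL_HEAP.get_handle().allocate(size)
}

fn main() {
    assert!(!allocate(16).is_null());
}
```

The key point of the design is that `GLOBAL_HEAP` itself is only ever borrowed immutably; all mutation happens through the per-call handle.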
@Ericson2314

Ericson2314 (Contributor) commented Apr 20, 2017

@sfackler I almost forgot, but there actually had been such an RFC: #1408. @nikomatsakis's comment before closing is where I got the idea that something ML-ish was what was wanted.

I think in the meantime we can fake it with something like the above, #1133 (stdlib-aware Cargo), and [[replace]], and that would be fine for experimentation.

@sfackler

sfackler (Member) commented Apr 20, 2017

This feature has been available for experimentation since mid-2015. The entire point of this RFC is to prepare it for stabilization.

#1133 is also arbitrarily far away from existing, not to mention from being stable.

@alexcrichton

alexcrichton (Member) commented Apr 20, 2017

I personally sort of like doing this via a trait rather than via a bag of attributes as well, although perhaps for slightly different reasons. Here's what I'm thinking:

Pros of using a trait:

  • Easy documentation. Take a look at the trait and you precisely know what's a required method, what all the available methods are, and what all the semantics are. No need for documentation for each specific attribute with particular signatures.
  • Easy type checking. No need to encode information in the compiler about type signatures. Just encode in the compiler the trait that needs to be adhered to and then let typeck take care of the rest; it just verifies something implements the trait.
  • Easy addition of new methods. Adding a new allocator method would simply be adding a trait method. We'll still need to update the compiler, though.
  • Allocator "wrappers" and working with allocators generically is much easier, as it's leveraging, well, Rust's system of generics!

Cons of using a trait:

  • This isn't implemented today; it will take time and effort to implement.
  • Each method still needs to be known (with a type signature) to the compiler. Or at least I believe so. (more on this in a moment)
  • ... maybe more? I'm drawing blanks, but I feel like there should be more.

If we were to go the trait route I'm not sure if we'd use the precise allocator trait from RFC 1398, but it may be nifty to do so. I would hate to block this feature on the entire trait, though I think we can thread the needle here with various bits and pieces of stabilization. I don't think we'd need to stabilize the entire API proposed in 1398, but rather just enough to be usable. We could consider, for example, making just alloc and dealloc stable on the first pass and requiring custom allocator implementations to use the default implementations of the other functions (or something like that, just as an example).

I also do not believe that the &self vs &mut self issue will be a problem for the global allocator. I agree that &mut self is correct for the allocator, and I'd like to keep it like that in the Allocator trait. For global allocators, however, we could have a system such as:

First, crates which declare "candidate global allocators" define them with annotations such as:

#[global_allocator_candidate]
pub static MY_ALLOCATOR: MyAllocator = ...;

With such a definition you cannot take a mutable borrow, but by definition you'll basically never have a safe mutable reference to the global allocator anyway! Instead the compiler can verify that &MyAllocator: Allocator (note the leading &). This is how we deal with std::io::Read requiring &mut self while TcpStream requires only &self, not &mut self.

Then later, to actually use that allocator, you'd write down:

#[allocator]
extern crate my_allocator;

at which point the compiler would generate (sort of):

pub extern fn __rust_allocate(size: usize) -> *mut u8 {
   ::alloc::Allocator::alloc(&mut &my_allocator::MY_ALLOCATOR, size)
}

(or something like that)


So with those thoughts written down, I'm personally in favor of defining the interface through a trait, and ideally through the trait Allocator proposed in RFC 1398. If such a coupling requires unduly delaying this RFC, however, I would not be in favor of using the exact trait Allocator and instead believe we should create a separate trait GlobalAllocator or something like that. Later on we could have impl<T: GlobalAllocator> Allocator for T or the like. @sfackler what do you think of this method of defining the interface? Can you think of more downsides than I listed above for using a trait? (or do you disagree with what I listed as the pros?)

One other thing I've realized as I've been writing this comment is that I'd personally prefer to spell out the precise mechanism by which allocators will work after this RFC lands. This was done in RFC 1183 as well, just to make sure it was vetted, though I think that interface will need to be significantly different in this RFC. Especially if we use traits/statics, I think we'll want at least a high-level description of how everything will get wired up.

If we decide to use a trait to define the global allocator, I would propose a scheme such as:

  • Crates provide global allocator candidates via a static (like I did above).
  • When providing a candidate, the compiler verifies that the static implements the appropriate trait.
    • Note that this trait would... I guess live in libcore. It cannot live in liballoc because liballoc depends on the crate that provides the global allocator. (details to be worked out here?)
  • When selecting a global allocator, a crate tags an extern crate item with #[allocator]. The compiler verifies that the crate contains a candidate and that no other extern crate directive in the crate graph is tagged with #[allocator].
  • In the crate which specifies #[allocator], the compiler generates allocation shim symbols (like those we have today).
  • The std::heap API is defined by simply calling the well-known symbol names that the compiler generates in the previous step. The compiler generates a shim per function in the std::heap API (calling the appropriate trait method on the selected global allocator).

Another alternative would be that a global allocator is defined as:

#[global_allocator]
pub static ALLOCATOR: MyAllocator = my_allocator::ALLOCATOR_INIT;

That way the crate graph is required to have at most one #[global_allocator] annotation, and an allocator crate simply provides an appropriate type/implementation of the global trait.

I... may like this alternative (statics, not crates) better? Curious what others think though!
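The "&MyAllocator: Allocator" trick described above can be demonstrated with a toy trait. Everything here is illustrative (the `Alloc` trait, `MyAllocator`, and `rust_allocate` are stand-ins, not the real API): even though the trait method takes `&mut self`, an immutable static works, because the impl is on the *reference* type and a `&mut &MyAllocator` can be manufactured freely.

```rust
// Toy trait taking `&mut self`, like the proposed Allocator trait.
trait Alloc {
    // Returns the size back, just so the demo has an observable result.
    fn alloc(&mut self, size: usize) -> usize;
}

// Zero-sized candidate global allocator.
struct MyAllocator;

// The trick: implement the trait for references to the type. A `&mut self`
// method call then only needs `&mut &MyAllocator`, which we can create on
// the spot from a shared borrow of the static.
impl<'a> Alloc for &'a MyAllocator {
    fn alloc(&mut self, size: usize) -> usize {
        size
    }
}

static MY_ALLOCATOR: MyAllocator = MyAllocator;

// Roughly what the compiler-generated shim would do.
fn rust_allocate(size: usize) -> usize {
    Alloc::alloc(&mut &MY_ALLOCATOR, size)
}

fn main() {
    assert_eq!(rust_allocate(64), 64);
}
```

This is the same pattern std uses for `impl Read for &TcpStream`: the mutability requirement is satisfied by a freshly created reference-to-reference, not by mutating the static itself.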

@sfackler sfackler referenced this pull request in rust-lang/rust Apr 20, 2017

Closed

Tracking issue for alloc_system/alloc_jemalloc #33082

@sfackler

sfackler (Member) commented Apr 20, 2017

I'd be on board with a trait-based approach in principle. Impl-ing Allocator for &'static MyGlobalAllocator doesn't seem like the best approach, though. You'd presumably want to be able to use MyGlobalAllocator as a normal Allocator (in theory at least?), but &'static MyGlobalAllocator is 8 bytes compared to MyGlobalAllocator's 0 bytes. It's also a bit weird. We could have a separate trait, but if the only difference is &self vs &mut self that seems pretty unfortunate.

One possible approach here is to tag the type, not a static. Since this is a global allocator, there's never any reason that you'd need to stick some state in the type implementing Allocator - you can just stick it off in a separate static. We can then require that a global allocator is a unit struct implementing Allocator and tagged with #[allocator]:

#[allocator]
pub struct JeMallocAllocator;

impl Allocator for JeMallocAllocator {
    ...
}

The compiler would then just codegen __rust_allocate(size: usize) -> *mut u8 { JeMallocAllocator.allocate(size) }. The mutability is irrelevant since there's no data.

If we're going to go with this route, is everyone happy with Layout as a type? It does limit you in the size/align combinations you can select. We'd probably want to limit Allocator to just the methods described above for now and leave all of the other convenience impls for later.

Would Allocator and Layout live in core or some new alloc_api crate?

@alexcrichton

alexcrichton (Member) commented Apr 20, 2017

Oh, I wouldn't expect Allocator for &'static T, but rather impl<'a> Allocator for &'a T (i.e. it just wouldn't require the 'static lifetime). Could you elaborate on the size issue, though? Regardless of whether the trait takes &self or &mut self you're still passing a pointer as an argument, right? I think these pointers would also only get manufactured at runtime; the static itself would continue to have zero size.

I agree that a good alternative is to tag a type. I would personally prefer to tag an instance, however, because allocators like jemalloc do indeed have global state, and in general it seems like a global allocator would have state associated with it. In terms of wrapping it all up in a package, it'd be neat if you could do that through the type rather than have extra statics on the side that manage state.

I'm not personally 100% sold on Layout. I would hope that we could provide a raw constructor which takes size/align just as values and would be the only stable constructor today (with more convenient constructors coming later).

I would also be fine having the trait/types and such live in core::heap. The only reason alloc is a separate crate is the implication of a global allocator, which core doesn't have, but if everything is parameterized over an instance that seems totally fine to me! I think, though, we would of course leave core::heap unstable for perhaps longer than std::heap, just to make sure the placement is right.

@comex

comex commented Apr 20, 2017

> If we're going to go with this route, is everyone happy with Layout as a type? It does limit you in the size/align combinations you can select.

How so? Layout::from_size_align checks only that align is a power of 2, which seems reasonable; or if there's some desire for non-power-of-2 alignments, that restriction can be lifted from Layout.
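For reference, the constructor comex describes did eventually become public: in today's std, `Layout::from_size_align` returns a `Result` and rejects both non-power-of-two alignments and sizes that would overflow when rounded up to the alignment. This reflects the post-stabilization API, not the API as it stood during this thread:

```rust
use std::alloc::Layout;

fn main() {
    // Power-of-two alignments are accepted...
    assert!(Layout::from_size_align(64, 8).is_ok());
    // ...non-powers-of-two are rejected...
    assert!(Layout::from_size_align(64, 3).is_err());
    // ...as is a size that overflows when rounded up to the alignment.
    assert!(Layout::from_size_align(usize::MAX, 16).is_err());
}
```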

@sfackler

sfackler (Member) commented Apr 20, 2017

Say I want to use jemalloc as my global allocator, but use the system allocator for a specific Vec, maybe for C interop or something. I'd then ideally have a Vec<u8, SystemAllocator> where SystemAllocator is a 0-sized type, but instead I'd need a Vec<u8, &'static SystemAllocator>, which would add 8 bytes to the Vec, be a bit more verbose, and look kind of weird.

@comex Layout::from_size_align is private.
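The size overhead sfackler describes is easy to see with `size_of`. The two structs below are hypothetical stand-ins for a `Vec<T, A>` storing its allocator inline, under the zero-sized-type design versus the reference design:

```rust
use std::mem::size_of;

// Zero-sized marker type standing in for a system-allocator handle.
struct SystemAllocator;

// Vec-like header with a ZST allocator field: no extra space.
struct VecWithZst {
    ptr: *mut u8,
    cap: usize,
    len: usize,
    alloc: SystemAllocator,
}

// Vec-like header holding a reference instead: one pointer of overhead.
struct VecWithRef {
    ptr: *mut u8,
    cap: usize,
    len: usize,
    alloc: &'static SystemAllocator,
}

fn main() {
    // The ZST contributes nothing; the reference adds a full pointer
    // (8 bytes on 64-bit targets).
    assert_eq!(size_of::<VecWithZst>(), 3 * size_of::<usize>());
    assert_eq!(size_of::<VecWithRef>(), 4 * size_of::<usize>());
}
```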

@rfcbot

rfcbot commented Jun 6, 2017

🔔 This is now entering its final comment period, as per the review above. 🔔

@matthieu-m

matthieu-m commented Jun 7, 2017

I see that alloc_array is placed in the Unresolved Questions section.

I would be in favor of providing an easy way to allocate memory for arrays based on element size and number of elements, simply because doing so manually (in the absence of checked multiplication) is a recipe for failure: when the result of the multiplication wraps around, you may end up with an actual allocated size that is lower than the intended size, which in turn may lead to writes outside the allocation.

I understand that the trait is unsafe; however, I see no reason not to make a best effort at providing a "safish" API, avoiding having clients of the API perform the multiplication on their own (and forgetting to check).

Note: I could even get behind an API that always requires a number of elements, even for scalar allocations. I am unsure of the performance implications, though hopefully, if the method is inlined, a constant 1 would clue the optimizer into eliding the multiplication and resulting checks.
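The wrap-around hazard described above can be sketched in a few lines. The `array_layout` helper here is hypothetical, but std later gained exactly this check as `Layout::array::<T>(n)` (post-stabilization API):

```rust
use std::alloc::Layout;

// Hand-rolled version of the check: multiply with overflow detection
// before building the layout, instead of letting the product wrap.
fn array_layout(elem_size: usize, align: usize, n: usize) -> Option<Layout> {
    let size = elem_size.checked_mul(n)?; // None on wrap-around
    Layout::from_size_align(size, align).ok()
}

fn main() {
    // A sane request succeeds...
    assert!(array_layout(4, 4, 1024).is_some());
    // ...but a request whose total size wraps around is rejected rather
    // than silently under-allocating.
    assert!(array_layout(8, 8, usize::MAX).is_none());
    // std's eventual equivalent performs the same overflow check.
    assert!(Layout::array::<u64>(usize::MAX).is_err());
}
```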

More updates
In particular, switch the proposal and alternative
@sfackler

sfackler (Member) commented Jun 8, 2017

The libs team discussed this RFC and came to the conclusion that we should avoid defining a second allocator trait and instead use the reference trick with the existing Allocator trait. They're both somewhat unfortunate options, but avoiding the code duplication tipped the scales towards &T: Allocator.

Since we're using the existing Allocator trait, discussions nailing down the exact API contracts around valid layouts etc. can move to that feature's tracking issue.

text/0000-global-allocators.md
- }
+pub struct Jemalloc;
+
+impl<'a> Allocate for &'a Jemalloc {

@joshlf

joshlf commented Jun 8, 2017

Allocator?

@sfackler

sfackler (Member) commented Jun 8, 2017

Fixed, thanks

@sfackler

sfackler (Member) commented Jun 8, 2017

Oh right, one other note: we expect very few global allocator implementations to exist in comparison to consumers of global allocators, so the set of people who will have to deal with the impl-for-reference weirdness is pretty small.

@rfcbot

rfcbot commented Jun 16, 2017

The final comment period is now complete.

@alexcrichton alexcrichton merged commit 22fe7cb into rust-lang:master Jun 18, 2017

@alexcrichton

alexcrichton (Member) commented Jun 18, 2017

Alright, it looks like not a whole lot of new discussion came up during FCP, so I've merged!

I've also linked this to the existing tracking issue: rust-lang/rust#27389
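For reference, the shape that eventually stabilized follows the "statics, not crates" alternative discussed above: a static tagged `#[global_allocator]` whose type implements `std::alloc::GlobalAlloc`. The example below reflects the post-stabilization API rather than the text of this RFC; the `Counting` allocator is illustrative, delegating to the system allocator while counting bytes requested:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// A global allocator that counts bytes requested, delegating the actual
// work to the system allocator.
struct Counting;

static ALLOCATED: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for Counting {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATED.fetch_add(layout.size(), Ordering::Relaxed);
        unsafe { System.alloc(layout) }
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static GLOBAL: Counting = Counting;

fn main() {
    // Any heap allocation in the program now routes through `Counting`.
    let v = vec![0u8; 4096];
    assert!(ALLOCATED.load(Ordering::Relaxed) >= v.len());
}
```

Note that the stabilized trait settled on `&self` methods rather than the `&T: Allocator` reference trick debated in this thread, since a global allocator must be callable through a shared static anyway.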

bors added a commit to rust-lang/rust that referenced this pull request Jun 27, 2017

Auto merge of #42727 - alexcrichton:allocators-new, r=sfackler,eddyb
rustc: Implement the #[global_allocator] attribute

This PR is an implementation of [RFC 1974] which specifies a new method of
defining a global allocator for a program. This obsoletes the old
`#![allocator]` attribute and also removes support for it.

[RFC 1974]: rust-lang/rfcs#1974

The new `#[global_allocator]` attribute solves many issues encountered with the
`#![allocator]` attribute such as composition and restrictions on the crate
graph itself. The compiler now has much more control over the ABI of the
allocator and how it's implemented, allowing much more freedom in terms of how
this feature is implemented.

cc #27389

bors added a commit to rust-lang/rust that referenced this pull request Jun 27, 2017

Auto merge of #42727 - alexcrichton:allocators-new, r=eddyb
rustc: Implement the #[global_allocator] attribute

This PR is an implementation of [RFC 1974] which specifies a new method of
defining a global allocator for a program. This obsoletes the old
`#![allocator]` attribute and also removes support for it.

[RFC 1974]: rust-lang/rfcs#1974

The new `#[global_allocator]` attribute solves many issues encountered with the
`#![allocator]` attribute such as composition and restrictions on the crate
graph itself. The compiler now has much more control over the ABI of the
allocator and how it's implemented, allowing much more freedom in terms of how
this feature is implemented.

cc #27389

bors added a commit to rust-lang/rust that referenced this pull request Jun 28, 2017

Auto merge of #42727 - alexcrichton:allocators-new, r=eddyb
rustc: Implement the #[global_allocator] attribute

This PR is an implementation of [RFC 1974] which specifies a new method of
defining a global allocator for a program. This obsoletes the old
`#![allocator]` attribute and also removes support for it.

[RFC 1974]: rust-lang/rfcs#1974

The new `#[global_allocator]` attribute solves many issues encountered with the
`#![allocator]` attribute such as composition and restrictions on the crate
graph itself. The compiler now has much more control over the ABI of the
allocator and how it's implemented, allowing much more freedom in terms of how
this feature is implemented.

cc #27389

bors added a commit to rust-lang/rust that referenced this pull request Jul 1, 2017

Auto merge of #42727 - alexcrichton:allocators-new, r=eddyb
rustc: Implement the #[global_allocator] attribute

This PR is an implementation of [RFC 1974] which specifies a new method of
defining a global allocator for a program. This obsoletes the old
`#![allocator]` attribute and also removes support for it.

[RFC 1974]: rust-lang/rfcs#1974

The new `#[global_allocator]` attribute solves many issues encountered with the
`#![allocator]` attribute such as composition and restrictions on the crate
graph itself. The compiler now has much more control over the ABI of the
allocator and how it's implemented, allowing much more freedom in terms of how
this feature is implemented.

cc #27389

bors added a commit to rust-lang/rust that referenced this pull request Jul 1, 2017

Auto merge of #42727 - alexcrichton:allocators-new, r=eddyb
rustc: Implement the #[global_allocator] attribute

This PR is an implementation of [RFC 1974] which specifies a new method of
defining a global allocator for a program. This obsoletes the old
`#![allocator]` attribute and also removes support for it.

[RFC 1974]: rust-lang/rfcs#1974

The new `#[global_allocator]` attribute solves many issues encountered with the
`#![allocator]` attribute such as composition and restrictions on the crate
graph itself. The compiler now has much more control over the ABI of the
allocator and how it's implemented, allowing much more freedom in terms of how
this feature is implemented.

cc #27389

bors added a commit to rust-lang/rust that referenced this pull request Jul 1, 2017

bors added a commit to rust-lang/rust that referenced this pull request Jul 2, 2017

bors added a commit to rust-lang/rust that referenced this pull request Jul 3, 2017

bors added a commit to rust-lang/rust that referenced this pull request Jul 3, 2017

bors added a commit to rust-lang/rust that referenced this pull request Jul 4, 2017

bors added a commit to rust-lang/rust that referenced this pull request Jul 4, 2017

bors added a commit to rust-lang/rust that referenced this pull request Jul 4, 2017

bors added a commit to rust-lang/rust that referenced this pull request Jul 5, 2017

bors added a commit to rust-lang/rust that referenced this pull request Jul 5, 2017

bors added a commit to rust-lang/rust that referenced this pull request Jul 6, 2017

@phil-opp phil-opp referenced this pull request in phil-opp/blog_os Jul 7, 2017

Closed

Global allocator API changed #341

3 of 3 tasks complete

@gil0mendes gil0mendes referenced this pull request in Infinity-OS/infinity Jul 7, 2017

Open

Global allocator API changed #37

@4e554c4c 4e554c4c referenced this pull request in ESALP/ESALP-1 Jul 8, 2017

Closed

Global Allocator API Changed #17

@DavidDeSimone DavidDeSimone referenced this pull request in Wilfred/remacs Jul 9, 2017

Merged

Fix OSX build on 1.20 nightly. #221

4 of 4 tasks complete
@tarcieri

tarcieri commented Jul 21, 2017

How does one use a global_allocator as the default_lib_allocator in no_std environments? This used to be fairly ergonomic (just define #![allocator] and you're done), but now I'm wondering if I need to do something like copy and paste all of this code:

https://github.com/rust-lang/rust/blob/master/src/libstd/heap.rs

cc @alexcrichton

@sfackler sfackler deleted the sfackler:allocators-2 branch Jul 21, 2017

@alexcrichton

Member

alexcrichton commented Jul 22, 2017

@tarcieri it's not intended currently to be able to implement Heap ergonomically outside of libstd right now, you'd have to mirror libstd.

@tarcieri

tarcieri commented Jul 22, 2017

@alexcrichton yeah got that working today, was just curious if there was something better I could do.

pravic added a commit to pravic/winapi-kmd-rs that referenced this pull request Sep 13, 2017

@juleskers juleskers referenced this pull request in rust-lang/rust Jan 3, 2018

Closed

No warning for obsolete allocator syntax #47143
