Unsized Rvalues #1909
Conversation
jonas-schievink
reviewed
Feb 19, 2017
Overall, seems like a step in the right direction. The details still need some fleshing out, of course.

> a3) cast expressions
>   - this seems like an implementation simplicity thing. These can only be trivial casts.
> b) The RHS of assignment expressions must always have a Sized type.
>   - Assigning an unsized type is impossible because we don't know how much memory is available at the destination. This applies to ExprAssign assignments and not to StmtLet let-statements.
jonas-schievink
Feb 19, 2017
Member
This got all munched together by Markdown, might want to put it in a list
```rust
let x = [0u8; n];                    // x: [u8]
let x = [0u8; n + (random() % 100)]; // x: [u8]
let x = [0u8; 42];                   // x: [u8; 42], like today
let x = [0u8; random() % 100];       //~ ERROR constant evaluation error
```
jonas-schievink
Feb 19, 2017
Member
This error definitely wants a note explaining what's going on, it looks very confusing otherwise
petrochenkov
Feb 19, 2017
Contributor
Looks like this limitation can be easily worked around.
```rust
let n = 0;
let x = [0u8; n + (random() % 100)]; // OK
```
Ixrec
Feb 19, 2017
Contributor
Agreed, this rule sounds good to me as long as the error message includes something like this:
```
note: if you want this array's size to be computed at runtime, move the expression into a temporary variable:
      let n = random() % 100;
      let x = [0u8; n];
```
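For reference, the suggested workaround can be sketched as runnable stable Rust by falling back to a heap allocation, since stack VLAs don't exist yet; the `random` function here is a hypothetical stand-in for a real RNG call:

```rust
// `random()` is a hypothetical stand-in for a real RNG call.
fn random() -> usize { 41 }

// On stable Rust today, the usual substitute for a runtime-sized buffer is a
// heap-allocated `Vec` rather than a stack VLA.
fn make_buf(n: usize) -> Vec<u8> {
    vec![0u8; n]
}

fn main() {
    let n = random() % 100;
    let x = make_buf(n);
    assert_eq!(x.len(), n);

    // A fixed-size array still requires a compile-time-constant length:
    let y = [0u8; 42];
    assert_eq!(y.len(), 42);
}
```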
briansmith
Feb 20, 2017
I think it's better to just get rid of the restriction and allow `random() % 100` to work. Programmers have the intuition that one can replace `let n = random() % 100; f(n)` with `f(random() % 100)`, and IMO it isn't worth fighting that intuition.
In the future, Rust should create some equivalent to `constexpr` for expressions that guarantees that such-marked expressions are evaluated at compile time. That would be a more general mitigation for the concern here.
petrochenkov
Feb 20, 2017
•
Contributor
@briansmith
As I understand, this is mostly an implementation concern.
If a captured local value is the indicator, then the type of the array is immediately known, but detecting "constexprness" for an arbitrary expression may be unreasonably hard to do in time.
nikomatsakis
Feb 21, 2017
•
Contributor
Given how hard it was for us to find the `[T; n]` syntax, I despair of finding another syntax, though. =) I guess one question is how often people will want to create VLAs on the stack, and particularly VLAs where the length is not a compile-time constant but also doesn't use any local variables or other inputs!
It seems like access to a static (as opposed to const) could also trigger the "runtime-dependent" rule. For example, the following does not compile today (and rightly so, I think):
```rust
static X: usize = 1;
fn main() {
    let v = [0; X];
}
```
eddyb
Feb 21, 2017
Member
That's tricky, although I'd argue that in a type, e.g. [i32; X], reading the static is perfectly fine and observes the value just-after-initialization (which also makes accessing a static before it's fully initialized a regular on-demand cycle error).
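The const/static distinction eddyb and nikomatsakis are discussing can be demonstrated on stable Rust; this sketch shows the `const` form, which already works, with the rejected `static` form noted in a comment:

```rust
// A `const` is a compile-time constant, so it is legal in an array length.
const X: usize = 1;

fn make() -> [u8; X] {
    [0u8; X]
}

fn main() {
    let v = make();
    assert_eq!(v.len(), 1);
    // Swapping the `const` for `static X: usize = 1;` makes `[0u8; X]` an
    // error today: reading a static is a runtime operation, which is why
    // the thread suggests it could trigger the "runtime-dependent" rule.
}
```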
engstad
Feb 23, 2017
•
For backward compatibility, I would assume that syntax has to be different. I feel that the current proposal is way too similar to the old syntax. As a programmer, I would very much like to be able to search for places where the stack could grow unboundedly; and I think the syntax should help in that effort.
The type syntax is fine. [T] is sufficiently different from [T; N]. The constructor for VLA should reflect that initialization happens at runtime. Perhaps, instead, something along the lines of:
```rust
fn f(n: usize) {
    let v = stack_alloc::init::<[u32]>(n, 0);
    let u: [i32] = stack_alloc::init_with(n, |i: usize| 2 * i);
}
```
whitequark
Feb 23, 2017
Member
> As a programmer, I would very much like to be able to search for places where the stack could grow unboundedly; and I think the syntax should help in that effort.
Uh, this also happens everywhere you have recursion. How about having a lint / editor command for this instead?
engstad
Feb 23, 2017
Sure, unbounded stack-use due to recursion would also be nice to detect, but that's a different issue.
As far as using lint goes, that would do its job, but not everyone lints their code. I'm talking about "understanding what the code does by looking at it". With the current proposal, it is hard to figure out if it is an unbounded allocation or a statically bound allocation on the stack.
> However, without some way to guarantee that this can be done without allocas, that might be a large footgun.

> One somewhat-orthogonal proposal that came up was to make `Clone` (and therefore `Copy`) not depend on `Sized`, and to make `[u8]` be `Copy`, by moving the `Self: Sized` bound from the trait to the methods, i.e. using the following declaration:
jonas-schievink
Feb 19, 2017
Member
Just wanted to note that removing a supertrait is a breaking change
llogiq
Feb 19, 2017
Contributor
True, but Sized is...special, in this case.
Also getting rid of the arbitrary limit of 32 array elements would make up for a lot...let's see a crater run before arguing further.
briansmith
Feb 20, 2017
The reference to the concrete type [u8] here is confusing. Did you mean [T] or something different?
> In unsafe code, it is very easy to create unintended temporaries, such as in:
> ```rust
> unsafe fn poke(ptr: *mut [u8]) { /* .. */ }
> ```
I like the idea of […]. (edit: Oops, I thought […].)
```rust
// implementation can assume `*self` and `*source` have the same size.
unsafe fn move_from_unsized(&out self, source: &Self);
```
This is similar to things that would be useful for emplacement.
withoutboats added the T-lang label on Feb 20, 2017
briansmith
reviewed
Feb 20, 2017
```rust
    length: len_,
    data: *s
});
```
briansmith
Feb 20, 2017
I think this really needs to be supported:
```rust
#[derive(Copy, Clone)]
struct Elem {
    // ...
    value: [T],
}
```
I understand the concern expressed above about uses of Box adding surprising allocas. However, I think this feature is actually more likely to be used by code, like mine, that is trying to avoid Box completely. In particular, if we're willing/able to use the heap instead of the stack, then we don't miss stack-allocated VLAs nearly as much.
arielb1
Feb 20, 2017
•
Author
Contributor
Clone::clone has Self: Sized, so this should work depending on the intelligence of derive(Clone) (and more to the point, implementing Clone/Copy yourself would work).
nikomatsakis
Feb 21, 2017
Contributor
> Clone::clone has Self: Sized, so this should work depending on the intelligence of derive(Clone) (and more to the point, implementing Clone/Copy yourself would work).

I'm confused by this. How could this work unless we modify the Clone trait?
> However, without some way to guarantee that this can be done without allocas, that might be a large footgun.
briansmith
Feb 20, 2017
•
The problem with large allocas is already a problem today without this feature. In fact this feature helps resolve the existing problem.
Consider:
```rust
#[allow(non_snake_case)] // Use the standard names.
pub struct RSAKeyPair {
    n: bigint::Modulus<N>,
    e: bigint::PublicExponent,
    p: bigint::Modulus<P>,
    q: bigint::Modulus<Q>,
    dP: bigint::OddPositive,
    dQ: bigint::OddPositive,
    qInv: bigint::Elem<P, R>,
    qq: bigint::Modulus<QQ>,
    q_mod_n: bigint::Elem<N, R>,
    r_mod_p: bigint::Elem<P, R>,   // 1 (mod p), Montgomery encoded.
    r_mod_q: bigint::Elem<Q, R>,   // 1 (mod q), Montgomery encoded.
    rrr_mod_p: bigint::Elem<P, R>, // RR (mod p), Montgomery encoded.
    rrr_mod_q: bigint::Elem<Q, R>, // RR (mod q), Montgomery encoded.
}
```
Oversimplifying a bit, each one of those values is ideally a `[u8]` with a length of between 128 and 512 bytes. But instead, I am currently in the process of making them `[u8; 1024]` because Rust doesn't have VLAs (other parts of my code need 1024-byte values for these types, but `RSAKeyPair` operates on shorter values). So I already need about 13KB or 26KB of stack space when constructing one of these today. VLAs would actually reduce the expected stack space usage, even without any sort of solution to this problem.
eternaleye
Feb 20, 2017
•
@briansmith: So, I don't think the RSAKeyPair use case is actually viable even with the most generous expansion of this, because it not only needs VLAs, but requires a struct with more than one unsized member.
This is something completely unsupported in Rust at present, poses major layout difficulties, would require multiple different runtime sizes (which is a big thing to extend fat-pointers to do), and I'm hugely unsure if LLVM will even permit this without a great deal of effort, as Clang doesn't even support VLAIS, a notable hole in its GCC compatibility.
I suspect that use case is considerably better handled by const-dependent types, and possibly a later extension of DST to support unsizing a type with a single const-dependent parameter, to a type with none (and a fat-pointer, fattened by that value)
briansmith
Feb 20, 2017
> @briansmith: So, I don't think the RSAKeyPair use case is actually viable even with the most generous expansion of this, because it not only needs VLAs, but requires a struct with more than one unsized member.

Good point that VLAs won't directly solve this problem. However, my main point here is that the problem with Box potentially allocating large numbers of giant values on the stack before they are put on the heap already exists, for any large structure with large elements.
I think the compiler just needs to put in a little effort to ensure that it properly optimizes (minimizes) stack usage for code using this pattern:
```rust
let large_vla_1 = ...;
let boxed_vla_1 = Box::new(large_vla_1);
let large_vla_2 = ...;
let boxed_vla_2 = Box::new(large_vla_2);
...
let large_vla_n = ...;
let boxed_vla_n = Box::new(large_vla_n);
```
In particular, it should be optimized into this:
```rust
let boxed_vla_1;
let boxed_vla_2;
...
let boxed_vla_n;
{
    let large_vla_1 = ...; // alloca
    boxed_vla_1 = Box::new(large_vla_1);
} // pop `large_vla_1` off the stack
{
    let large_vla_2 = ...; // alloca
    boxed_vla_2 = Box::new(large_vla_2);
} // deallocate `large_vla_2`
...
{
    let large_vla_n = ...; // alloca
    boxed_vla_n = Box::new(large_vla_n);
} // deallocate `large_vla_n`
```
kennytm
Feb 20, 2017
Member
@briansmith That's the point of the box expression (#809 / rust-lang/rust#22181).
arielb1
Feb 20, 2017
Author
Contributor
Unfortunately, placement of allocas is entirely controlled by LLVM in both of your code cases. I believe that it should generate the "optimal" code in both cases, as long as you don't take references to large_vla_N before boxing them.
DemiMarie
Feb 21, 2017
@briansmith Would it be possible to solve the problem with raw byte arrays and autogenerated unsafe code?
briansmith
reviewed
Feb 20, 2017
> The way this is implemented in MIR is that operands, rvalues, and temporaries are allowed to be unsized. An unsized operand is always "by-ref". Unsized rvalues are either a `Use` or a `Repeat` and both can be translated easily.

> Unsized locals can never be reassigned within a scope. When first assigning to an unsized local, a stack allocation is made with the correct size.
briansmith
Feb 20, 2017
Why not allow reassignment with the requirement that the size must be the same, with a dynamic check, similar to how `copy_from_slice()` works?
In my case, I'd have local values of `struct Elem { value: [usize] }`, and all the local values of type `Elem` in a function would use the same length for `value`, so (hopefully) the compiler could easily optimize the equal-length checks away.
arielb1
Feb 20, 2017
Author
Contributor
Because `a = b;` is not supposed to be doing magic dynamic checks behind your back. You can use `copy_from_slice` if you want.
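For reference, the explicit alternative pointed to here already exists on slices and performs exactly the dynamic length check under discussion, panicking on mismatch:

```rust
fn main() {
    let src = [1u8, 2, 3];
    let mut dst = [0u8; 3];
    // `copy_from_slice` checks at runtime that both slices have the same
    // length and panics on a mismatch - the explicit, visible form of the
    // check that a bare `a = b;` is not supposed to perform implicitly.
    dst.copy_from_slice(&src);
    assert_eq!(dst, [1, 2, 3]);
}
```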
briansmith
Feb 20, 2017
> Because `a = b;` is not supposed to be doing magic dynamic checks behind your back. You can use copy_from_slice if you want.

`a = b[0]` does a "magic dynamic check behind your back" and it's already considered acceptable. This isn't any different, except it probably will be easier to optimize away than the array-index check.
eternaleye
Feb 20, 2017
•
Yes, it is different, and no, a = b[0] is neither "magic" nor "behind your back".
It assigns the result of an indexing operation, which has a syntactic representation. All such cases, OpAssign included, have some extra syntax on one side of the assignment or another.
Assignment of VLAs would not, which is what makes it both "magic" and "behind your back".
(And furthermore, it's worth noting that all of the dynamic checks in such cases come from the Op, not the Assign.)
arielb1
Feb 20, 2017
Author
Contributor
That's the `b[0]`, you mean? The problem with the check on assignment is that assignment expressions normally can't panic, so a very innocuous-looking assignment statement could cause runtime panics, especially because variants of `vec[0..0] = *w;` would compile and panic, confusing Python programmers.
briansmith
Feb 20, 2017
> The problem with the check on assignment is that assignment expressions normally can't panic, so a very innocuous-looking assignment statement could cause runtime panics, especially because variants of `vec[0..0] = *w;` would compile and panic, confusing Python programmers.
I think this is also good to note in the RFC.
Would at least the following work?
```rust
#[derive(Clone)]
struct Elem([usize]);
```
I believe an implementation of Clone for such a type could be written manually:
```rust
impl Clone for Elem {
    fn clone(&self) -> Self {
        let value_len = self.0.len();
        let mut r = Elem([0; value_len]);
        r.0.copy_from_slice(&self.0);
        r
    }
}
```
nikomatsakis
Feb 21, 2017
Contributor
This is a tricky thing. In general, unsized return values can't quite work without some additional machinery because the caller has no idea how much space to allocate. In other words, here, how is the caller to know that you are returning an Elem with the same length as the input had?
To handle that properly would I think require making some kind of (erased) parameter that lets you say that the length of the return value is the same as the input. (Similar to how Java wildcard capture can allow you to do some things kind of like that, or full on existentials.) I would be more precise but I'm kind of at a loss for how that would look in Rust syntax at the moment.
A related thought I had once is that we could potentially make a trait with -> Self methods be object safe: this works precisely because the underlying type must be sized, and because we control the impls enough to make sure (statically) that the returned value will always be of the same type that the receiver has (so we can allocate the right amount of space by examining how much space the receiver wants).
I could imagine trying to do something similar but with array lengths.
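The object-safety workaround alluded to above can be sketched on stable Rust: instead of `-> Self`, return an owning pointer, so the caller only ever needs space for a pointer. The trait and type names here are hypothetical illustrations, not anything from the RFC:

```rust
trait Shape {
    fn area(&self) -> f64;
    // Returning `Box<dyn Shape>` instead of `Self` keeps the trait
    // object-safe: the caller never has to know the concrete size.
    fn clone_box(&self) -> Box<dyn Shape>;
}

#[derive(Clone)]
struct Square(f64);

impl Shape for Square {
    fn area(&self) -> f64 {
        self.0 * self.0
    }
    fn clone_box(&self) -> Box<dyn Shape> {
        Box::new(self.clone())
    }
}

fn main() {
    let s: Box<dyn Shape> = Box::new(Square(3.0));
    // Cloning through the trait object works without knowing the size.
    let t = s.clone_box();
    assert_eq!(t.area(), 9.0);
}
```

The machinery nikomatsakis describes would, in effect, let `-> Self` play this role directly, with the receiver's size used to allocate the return slot.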
kennytm
suggested changes
Feb 20, 2017
> "captures a variable" - as in RFC #1558 - is used as the condition for making the return be `[T]` because it is simple, easy to understand, and introduces no type-checking complications.
kennytm
Feb 20, 2017
Member
I disagree that it is simple to understand; I needed to read the RFC + comments 3 times to see why `[0u8; random() % 100]` is an error while `[0u8; n + random() % 100]` is fine.
I don't see why #1558's rule should apply to VLAs; there is no difference between `random() % 100` and `n + random() % 100` that makes the former unsuitable for a VLA.
I'd rather have a `core::intrinsic::make_vla::<u8>(0u8, random() % 100)` than such a strange rule.
glaebhoerl
Feb 20, 2017
•
Contributor
FWIW the "previous" rule suggested by @eddyb was that only array-repeat-literals with their expected type explicitly specified as being `[T]` would be allowed to be VLAs, that is:
```rust
fn foo(n: usize) {
    let x = [0u8; n]; // error
    let x: [u8] = [0u8; n]; // OK
    let x = [0u8; random() % 100]; // error
    let x: [u8] = [0u8; random() % 100]; // OK

    fn expects_slice(arg: &[u8]) { ... }
    expects_slice(&[0u8; n + random()]); // also OK

    fn expects_ref<T: ?Sized>(arg: &T) { ... }
    expects_ref(&[0u8; random()]); // error!
    expects_ref::<[T]>(&[0u8; random()]); // presumably OK!
}
```
This might be less ergonomic (presumably why the new rule was chosen instead), but I do like the explicitness.
Ericson2314
Feb 20, 2017
Contributor
I like the explicitness, but I worry this doesn't gel well with the principle that explicit type annotations are "just another" way to constrain type inference, rather than required à la C or Go.
arielb1
Feb 20, 2017
Author
Contributor
The "expected type" rule is also fine. Not sure which of these is easier to explain.
kennytm
Feb 20, 2017
Member
I don't understand why we need to make VLAs easy to construct by overloading the `[x; n]` syntax; it is an advanced feature which normally no one should use.
briansmith
Feb 20, 2017
> it is an advanced feature which normally no one should use.
I would be fine with a totally different syntax (in general I concede syntax to others).
I disagree that we should conclude that VLAs are an advanced feature, though. I think that it's better to start off trying to make VLAs as natural and ergonomic as possible, and see if they actually get abused in such a way that people write bad or dangerous (stack-overflowing) code with them. If, based on initial experience, the feature turns out to actually be dangerous enough to justify making it harder to use, then the syntax can be changed.
@whitequark Something like […]
@eddyb Fair.
burdges
commented
Feb 20, 2017
•
I'm worried about ambiguity from overloading the […] syntax. There is plenty of new syntax that avoids any ambiguity in the types: anything like […]. In fact, you could easily build a […]

Edit: You could even make a […]
eternaleye
commented
Feb 20, 2017
•
@burdges: One option I think would be viable is […]

This has several advantages: […]
camlorn
commented
Feb 20, 2017
@kennytm If the concern is that people will accidentally overflow the stack, yes, they will. But I can already do that in so, so many ways. I don't know what the size of the struct from the library over there is; I don't know what the size of the stack-allocated array is either. How are these cases significantly different? What I see here is "Here is a thing that can avoid heap allocation for small arrays", and that's brilliant, and I want it.
@camlorn […]
arielb1 force-pushed the arielb1:unsized-rvalues branch from ea14a8b to 8ea1d96 on Feb 20, 2017
burdges
commented
Feb 20, 2017
If one wants the function or macro route, then […]

I think that's equivalent to any sort of comprehension notation, without the new syntax being quite so new.
Ordinary functions can't really return DSTs - you can't alloca into your caller's stack frame, and there's a chicken-and-egg problem where the caller can't figure out how much space to allocate before it calls the function.
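This is why, on stable Rust, runtime-sized return values go behind an owning pointer: the allocation happens on the heap, so the caller never has to size its stack frame up front. A minimal sketch of that stable pattern:

```rust
// `fn zeros(n: usize) -> [u8]` is impossible: the caller cannot reserve
// stack space for a value whose size it doesn't know yet. Returning an
// owned, heap-allocated slice sidesteps the chicken-and-egg problem.
fn zeros(n: usize) -> Box<[u8]> {
    vec![0u8; n].into_boxed_slice()
}

fn main() {
    let buf = zeros(100);
    assert_eq!(buf.len(), 100);
    assert!(buf.iter().all(|&b| b == 0));
}
```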
I'd think tail calls with dynamically sized args and this are somewhat comparable. Both should be possible in at least some cases. Certainly lots of work and out of scope for the initial version, however.
briansmith
commented
Feb 20, 2017
This just means that functions that return DSTs need a different calling convention than other functions, right? The callee pops its stack frame except for its result, which is left on the stack. The caller can calculate the size of the result by remembering the old value of SP and subtracting the new value of SP.
That would require implementing that calling convention in LLVM. Plus, locals in the callee function would get buried beneath the stack frame. Also, this would mean that the return value is not by-reference, so you would not get the benefits of RVO.
briansmith
commented
Feb 20, 2017
Of course. To me it seems very natural to expect that Rust would drive many changes to LLVM, including major ones. I think it would be helpful for the Rust language team to explain their aversion to changing LLVM somewhere (not here), so people can understand the limits of what is reasonable to ask for. My own interactions with LLVM people gave me the impression that LLVM is very open to accepting changes, especially for non-Clang languages.
For the record, #1808 mandated type ascription for unsized types to steer clear of ambiguity, e.g. […]

I also think requiring the size expression to somehow be special is a bad idea, both design- and implementation-wise.
I... I don't think this is true at all in Rust (and honestly I'm kind of surprised so many people apparently think it is). The issue is that […]

I've always disfavored our use of the […] (...someone at this point is itching to bring up […].)

Anyway: even if […]
eternaleye
commented
Feb 20, 2017
@glaebhoerl: Mm, that's fair. However, on the one hand I do think there's a meaningful relationship there (in a sense, that value is mutable across invocations of the function), and on the other hand, there really isn't a better keyword for it. The closest is probably […]
So are function arguments and […]. For that matter, if we have an "opposite of […]" […]

(I'm not convinced there's a need for any kind of special syntax, though. The meaning of […])
burdges
commented
Feb 20, 2017
I suppose you can always define a macro to populate your VLA from an iterator or closure or whatever, like […]

What about simply […]?
eternaleye
commented
Dec 2, 2017
•
> I think there should be a hard limit on the size of unsized rvalues. Probably around PAGE_SIZE.

Please, please, please no, for multiple reasons:
1. Stack probing makes arbitrarily-large unsized rvalues work just fine.
2. Page size is arch-variable, and so far Rust has done a very good job of minimizing visible differences of that sort.
3. `PAGE_SIZE` (the compile-time macro) has been deprecated for actual eons in favor of `sysconf(PAGESIZE)` (the runtime function call).
4. Actually, when I said it's arch-variable? _I lied._ It's actually _machine-variable_ - POWER is one example, AArch64 is another. Some Linux distros use 64k pages on AArch64; some use 4k.

Page size is _very_ much the wrong thing to use for basically anything other than mmap(2), and even then it should not be treated as a compile-time constant.
To add to this, some architectures do not even have pages at all (and yet still work fine with stack probing), e.g. Cortex-M3 if you put the stack at the bottom of the RAM.
bill-myers
commented
Jan 3, 2018
•
I might have missed a mention of it, but it's important to note that a plain "alloca" is not enough to implement this feature.

In particular, plain allocas never get freed until the function returns, which means that if you have an unsized variable declared in a loop, the stack usage will now be proportional to the number of loop iterations, which is catastrophic. Instead, the stack pointer needs to be rewound on every loop iteration (and in general whenever an alloca goes out of scope), which probably requires LLVM changes, although it might be possible to get away with just altering the stack pointer via inline assembly.

Also, this is fundamentally incompatible with 'fn-scoped allocas, so if they are added, the language needs to forbid 'fn-scoped allocas when unsized rvalues are in scope.
crlf0710
commented
Jan 3, 2018
@nikomatsakis @eddyb it's been a while since the FCP completed in September; any chance to get this merged? Thanks a lot.
This is not catastrophic. In fact, for certain use cases (using the stack as a local bump-pointer allocator) it is necessary and desirable. Unsized rvalues must be used together with lifetime ascription to let the compiler free them.

LLVM has […]
nikomatsakis
added
the
I-nominated
label
Jan 25, 2018
aturon
referenced this pull request
Feb 7, 2018
Open
Tracking issue for RFC #1909: Unsized Rvalues #48055
This RFC has been (very belatedly!) merged!
aturon
merged commit 6c3c48d
into
rust-lang:master
Feb 8, 2018
petrochenkov
removed
the
I-nominated
label
Feb 23, 2018
scottlamb
added a commit
to scottlamb/moonfire-nvr
that referenced
this pull request
Feb 23, 2018
kennytm
referenced this pull request
May 3, 2018
Closed
RFC: add futures and task system to libcore #2418
TheDan64
commented
Jun 5, 2018
As I understand it, a goal of this RFC is to make unsized trait objects possible. So, would this make […]?
@TheDan64 no: trait objects are always unsized, and types like […]

This RFC allows you to pass unsized values (including trait objects) to functions by value (i.e. move them into the function) and to store unsized values directly on the stack.
@TheDan64 Note that even if we allowed […]
TheDan64
commented
Jun 5, 2018
It'd be interesting if somehow […]
That's not possible because you don't know how big the type will be. Consider this, in crate A we have:
```rust
pub trait Trait {
    fn do_something(&self);
}
impl Trait for u8 { ... }
impl Trait for u64 { ... }

let mut TRAITS: Vec<dyn Trait> = vec![1u8, 2u64, ...];
```
And then in crate B, we write:
```rust
extern crate crate_a;

struct MyHugeStruct([u64; 8192]);
impl crate_a::Trait for MyHugeStruct {
    fn do_something(&self) {}
}
...
crate_a::TRAITS.push_back(MyHugeStruct([0u64; 8192]));
```
so how do […]
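The stable way around this is to store each element behind a fixed-size owning pointer, so differently-sized implementors can share one `Vec`; a minimal runnable sketch (trait and impls simplified from the example above):

```rust
trait Trait {
    fn do_something(&self) -> u64;
}

impl Trait for u8 {
    fn do_something(&self) -> u64 { *self as u64 }
}
impl Trait for u64 {
    fn do_something(&self) -> u64 { *self }
}

fn main() {
    // Each element is a pointer-sized `Box`, so a `u8`, a `u64`, or a huge
    // struct from another crate can all live in the same Vec.
    let traits: Vec<Box<dyn Trait>> = vec![Box::new(1u8), Box::new(2u64)];
    let sum: u64 = traits.iter().map(|t| t.do_something()).sum();
    assert_eq!(sum, 3);
}
```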
TheDan64
commented
Jun 5, 2018
Good point!
burdges
commented
Jun 5, 2018
•
Is there any facility for […]? I'd think if […]
Where do lifetimes come into it? My view on it is that […]

Working our way from a more explicit dynamic existential notation, […]. This makes […]
burdges
commented
Jun 5, 2018
•
I see, so […]

As for lifetimes, I'd expect a […]. Now why would […]
You don't need reflection for modifications, at least not all of them (specifically, adding new elements is a problem, but looking at/removing existing ones is fine), and there's nothing special about […]

Note that […]
burdges
commented
Jun 6, 2018
Yes, I suppose the vtable "handles the reflection" for removing elements. I suppose […]

Anyways, […]
arielb1 commented Feb 19, 2017 (edited by mbrubeck)
This is the replacement to RFC #1808. I will write the "guaranteed optimizations" section tomorrow.
Rendered
Summary comment from Sept 6, 2017
cc @nikomatsakis @eddyb @llogiq @whitequark @briansmith