
struct field reordering and optimization #37429

Merged: 28 commits, Dec 18, 2016

Conversation

camlorn (Contributor) commented Oct 27, 2016

This is a work in progress. The goal is to divorce the order of fields in source code from the order of fields in the LLVM IR, then optimize structs (and tuples/enum variants) by always ordering fields from least to most aligned. It does not work yet. I intend to check compiler memory usage as a benchmark, and a crater run will probably be required.
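As a rough illustration of the intended effect (a hand-written sketch, not code from this PR; both structs use #[repr(C)] so the sizes hold regardless of whether the optimization is enabled, assuming a target where u32 is 4-byte aligned):

use std::mem::size_of;

// Declaration order needs padding after `a` and after `c`: 1 + 3 + 4 + 1 + 3 = 12 bytes.
#[repr(C)]
struct DeclarationOrder {
    a: u8,
    b: u32,
    c: u8,
}

// The same fields sorted from least to most aligned pack into 1 + 1 + 2 + 4 = 8 bytes.
#[repr(C)]
struct SortedByAlignment {
    a: u8,
    c: u8,
    b: u32,
}

fn main() {
    assert_eq!(size_of::<DeclarationOrder>(), 12);
    assert_eq!(size_of::<SortedByAlignment>(), 8);
}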

I don't know enough of the compiler to complete this work unaided. If you see places that still need updating, please mention them. The only one I know of currently is debuginfo, which I'm putting off intentionally until a bit later.

r? @eddyb

}

/// Extend the Struct with more fields.
pub fn extend<I>(&mut self, dl: &TargetDataLayout,
fn fill_in_fields<I>(&mut self, dl: &TargetDataLayout,

eddyb (Member) commented Oct 27, 2016

You can probably move the whole body of fill_in_fields into fn new. Also maybe rename it to from_fields.

EDIT: Oh is it because of if fields.len() == 0 {return Ok(())}; below?

camlorn (Author, Contributor) commented Oct 27, 2016

I didn't think of it that way, but you're right: that's a good reason. I just preferred to separate the overview from the steps, if you will.

eddyb (Member) commented Nov 16, 2016

The problem is that this function shouldn't be possible to call at all; this is really just outlining, and there's no usefulness gained by having self - that is, I'd expect the Struct to be created at the very end.
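A toy sketch of that shape, where the value is only constructed once everything has been computed (illustrative names and a simplified declaration-order layout; not the real layout.rs code):

struct FieldLayout { size: u64, align: u64 }

struct StructLayout { offsets: Vec<u64>, size: u64, align: u64 }

impl StructLayout {
    fn new(fields: &[FieldLayout]) -> StructLayout {
        let mut offset = 0u64;
        let mut align = 1u64;
        let mut offsets = Vec::with_capacity(fields.len());
        for f in fields {
            offset = (offset + f.align - 1) / f.align * f.align; // round up to the field's alignment
            offsets.push(offset);
            offset += f.size;
            align = align.max(f.align);
        }
        let size = (offset + align - 1) / align * align; // round the total size up to the struct alignment
        // The value is materialized only here, at the very end, so there is no
        // half-built `self` for a stray extend/fill_in_fields call to mutate.
        StructLayout { offsets, size, align }
    }
}

fn main() {
    // (u8, u32, u8) in declaration order: offsets 0, 4, 8 and size 12.
    let st = StructLayout::new(&[
        FieldLayout { size: 1, align: 1 },
        FieldLayout { size: 4, align: 4 },
        FieldLayout { size: 1, align: 1 },
    ]);
    assert_eq!(st.offsets, [0, 4, 8]);
    assert_eq!((st.size, st.align), (12, 4));
}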

// We have to fix the last element of path here as only we know the right value.
let mut i = *path.last().unwrap();
i = st.field_index[i as usize];
*path.last_mut().unwrap() = i;

eddyb (Member) commented Oct 27, 2016

Can use let i = path.last_mut().unwrap(); and then read and write *i.

camlorn (Author, Contributor) commented Oct 27, 2016

That wouldn't be an improvement because the borrow lasts too long. It would have to be wrapped in braces and indented further or I'd have to use mem::forget. We can't push the 0 onto the Vec while i exists.

eddyb (Member) commented Oct 27, 2016

Oh whoops. Only braces would work, btw, borrows can't even be terminated manually.
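For reference, a minimal standalone sketch of the constraint being discussed (hypothetical names; the 2016-era lexical borrow rules are what force the braces):

// The mutable borrow from last_mut() has to end before `path` is used again,
// which at the time meant scoping it with braces.
fn remap_last_and_push(path: &mut Vec<u32>, field_index: &[u32]) {
    {
        let last = path.last_mut().unwrap(); // `path` is mutably borrowed here
        *last = field_index[*last as usize];
    } // the borrow ends at this closing brace
    path.push(0); // fine now; rejected while `last` was still in scope
}

fn main() {
    let mut path = vec![2u32];
    remap_last_and_push(&mut path, &[10, 11, 12]);
    assert_eq!(path, [12, 0]);
}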

let mut fields = fields.into_iter().map(|field| {
field.layout(infcx)
}).collect::<Vec<_>>();
fields.insert(0, Ok(&discr));

eddyb (Member) commented Oct 27, 2016

None of these changes seem to be really needed - or are you planning to move the ZSTs here?

camlorn (Author, Contributor) commented Oct 27, 2016

This is because we can no longer iterate over the fields in order until after we have the struct.

@@ -511,41 +512,70 @@ pub struct Struct {
/// If true, the size is exact, otherwise it's only a lower bound.
pub sized: bool,

/// Offsets for the first byte of each field.
/// Offsets for the first byte of each field, ordered in increasing order.
/// The ith element of this vector is not always the ith field of the struct.
/// FIXME(eddyb) use small vector optimization for the common case.
pub offsets: Vec<Size>,

eddyb (Member) commented Oct 27, 2016

To keep the offsets in increasing order (instead of declaration order - is that really necessary?) you need to rename the field to permuted_offsets or ordered_offsets and (re-)add a field_offset method.

camlorn (Author, Contributor) commented Oct 27, 2016

Should I just make it private and have field_offset and num_fields functions?

eddyb (Member) commented Oct 27, 2016

That sounds okay - but is it really necessary to have the offsets permuted?
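A toy sketch of the accessor approach camlorn describes above (field and type names are illustrative, not the actual layout.rs definitions):

#[derive(Clone, Copy, PartialEq, Debug)]
pub struct Size(pub u64);

pub struct StructLayout {
    // Offsets stored in memory (increasing) order.
    offsets: Vec<Size>,
    // Maps a field's source (declaration) index to its slot in `offsets`.
    field_index: Vec<u32>,
}

impl StructLayout {
    pub fn field_offset(&self, source_index: usize) -> Size {
        self.offsets[self.field_index[source_index] as usize]
    }

    pub fn num_fields(&self) -> usize {
        self.offsets.len()
    }
}

fn main() {
    let st = StructLayout {
        offsets: vec![Size(0), Size(4)], // memory order: a u32 first, then a u8
        field_index: vec![1, 0],         // source field 0 (the u8) lives in memory slot 1
    };
    assert_eq!(st.field_offset(0), Size(4));
    assert_eq!(st.num_fields(), 2);
}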

@camlorn camlorn force-pushed the camlorn:univariant_layout_optimization branch from 472a158 to 9553ddd Oct 27, 2016
arthurprs (Contributor) commented Oct 28, 2016

Subscribing. I'm sure we're going to see some interesting stuff in the crater run, hopefully all fixable with a #[repr(C)].

camlorn (Author, Contributor) commented Oct 29, 2016

@arthurprs
Yes, probably. But since the compiler isn't, well, compiling at the moment...

I'm waiting on #36622 for now. Then we shall see. That can probably be worked around, but I don't have enough experience yet to know what I should or shouldn't be logging, and there's also another project I can work on that's not Rust.

If you see places that still assume fields are in order, tell me. I'm more than willing to accept help finding them.

@camlorn camlorn force-pushed the camlorn:univariant_layout_optimization branch from 9553ddd to af058d6 Nov 7, 2016
alexcrichton (Member) commented Nov 10, 2016

ping @eddyb

Looks like some updates have happened since you last looked, want to take another peek?

alexcrichton (Member) commented Nov 10, 2016

also tagging with relnotes for when this lands, seems like it could lead to some awesome benchmarks!

camlorn (Author, Contributor) commented Nov 10, 2016

@alexcrichton
He knows; I'm still leaning on him incredibly heavily. There are bugs blocking this from moving forward that I don't trivially know how to fix. Sadly, this does not appear to be my ignorance: it is simply that complicated.

alexcrichton (Member) commented Nov 10, 2016

Ah ok, no worries! Just checking up

camlorn (Author, Contributor) commented Nov 11, 2016

All but 18 of the stage1-rpass tests now pass.

Debuginfo still needs conversion.

petrochenkov (Contributor) commented Nov 11, 2016

Some thoughts (long term, no immediate action required):

It would be good to have two things related to field reordering:

  • #[repr(rand)] - reorders struct fields pseudo-randomly, based on e.g. the struct's hash with some environment variable used as a salt. Can be used for fuzz testing to ensure code doesn't rely on field ordering. EDIT: I mean the ability to set repr(rand) as the default layout for all structs, rather than a manually written attribute.
  • #[repr(linear)] - Rust ABI but no field reordering. Fixed field order may be useful in code not related to C/FFI at all.

Also, reordering accidentally uses a stable sorting algorithm now, just because sorting in libstd is stable.
I think it should be explicitly mentioned in comments (and eventually documented) that the field sorting algorithm is stable. There are absolutely no reasons to reorder the fields in (u8, u8, u8), and there certainly are reasons for (u8, u8, u8) to be layout-compatible with [u8; 3], which is guaranteed not to be reordered.
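A small sketch of the stability point (the sort key is assumed to be field alignment, ascending, matching this PR; the helper name is made up):

// Vec::sort_by_key is stable, so fields with equal alignment keep their
// declaration order and (u8, u8, u8) keeps the same layout as [u8; 3].
fn memory_order(field_aligns: &[u64]) -> Vec<usize> {
    let mut order: Vec<usize> = (0..field_aligns.len()).collect();
    order.sort_by_key(|&i| field_aligns[i]);
    order
}

fn main() {
    // Three 1-aligned fields: the stable sort leaves them untouched.
    assert_eq!(memory_order(&[1, 1, 1]), [0, 1, 2]);
    // Mixed alignments: only the 4-aligned field moves relative to the u8s.
    assert_eq!(memory_order(&[1, 4, 1]), [0, 2, 1]);
}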

camlorn (Author, Contributor) commented Nov 12, 2016

@petrochenkov
Per rust-lang/rfcs#1582, there may be reasons to reorder (u8, u8, u8). No one has said that there are good reasons as of yet, but that RFC does get into whether and how tuple layout should be changed to support it.

I like the idea of #[repr(rand)], and there may even be a case for making it the default in debug builds; I think #[repr(linear)] deserves an RFC. The only use cases I can think of for the latter are internal stdlib usage and some cases related to cache performance that may be handled better by other language extensions, such as the ability to add empty padding to the end of structs. I don't know enough about the cache performance use cases, however.

Both of these should be almost absurdly trivial to actually add. I could work out how to do it without leaning on @eddyb, I think.

@camlorn camlorn force-pushed the camlorn:univariant_layout_optimization branch from 376ac0c to 408bed9 Nov 12, 2016
bors (Contributor) commented Nov 13, 2016

The latest upstream changes (presumably #37675) made this pull request unmergeable. Please resolve the merge conflicts.

@camlorn camlorn force-pushed the camlorn:univariant_layout_optimization branch from 408bed9 to a565851 Nov 16, 2016
}

/// Extend the Struct with more fields.
pub fn extend<I>(&mut self, dl: &TargetDataLayout,
fn fill_in_fields<I>(&mut self, dl: &TargetDataLayout,


-> Result<(), LayoutError<'gcx>>
where I: Iterator<Item=Result<&'a Layout, LayoutError<'gcx>>> {
self.offsets.reserve(fields.size_hint().0);
let fields = fields.collect::<Result<Vec<_>, LayoutError<'gcx>>>()?;
if fields.len() == 0 {return Ok(())};

eddyb (Member) commented Nov 16, 2016

This seems unnecessary, as a zero-length Vec is free.


self.offsets = vec![Size::from_bytes(0); fields.len()];
let mut inverse_gep_index: Vec<u32> = Vec::with_capacity(fields.len());
inverse_gep_index.extend(0..fields.len() as u32);

eddyb (Member) commented Nov 16, 2016

This can be written as let mut inverse_gep_index: Vec<u32> = (0..fields.len() as u32).collect();

eddyb (Member) commented Nov 16, 2016

Also the same naming problem. laid_to_def_order might work? Still ugly.

inverse_gep_index.extend(0..fields.len() as u32);

if repr == attr::ReprAny {
let start: usize = if is_enum_variant {1} else {0};

eddyb (Member) commented Nov 16, 2016

Is the usize type here necessary?

@@ -511,41 +512,76 @@ pub struct Struct {
/// If true, the size is exact, otherwise it's only a lower bound.
pub sized: bool,

/// Offsets for the first byte of each field.
/// Offsets for the first byte of each field, ordered to match the tys.

eddyb (Member) commented Nov 16, 2016

Could you refer to "the definition" or "definition order" instead of "types"? That is, the order the fields come in is the source definition order; the types just happen to be associated with each field, so the most straightforward way to get them is in source definition order, but they're secondary to the order itself.

/// FIXME(eddyb) use small vector optimization for the common case.
pub offsets: Vec<Size>,

/// Maps field indices to GEP indices, depending how fields were permuted.

eddyb (Member) commented Nov 16, 2016

"Maps from definition order to the order the fields were laid out in", maybe? Still not perfect though.
GEP should really not be mentioned if avoided. def_to_laid_order might work, although jarring.

camlorn (Author, Contributor) commented Nov 20, 2016

I ended up calling these source order and memory order; I don't think there's anything better. The gep_index vec is now memory_index.

If there's better nomenclature, I don't know what it is.

}).collect::<Vec<_>>();
fields.insert(0, Ok(&discr));
let st = Struct::new(dl,
fields.iter().cloned(),

eddyb (Member) commented Nov 16, 2016

What if you make Struct::new take the Vec<Layout>? Maybe that way the double allocation can be avoided.

@@ -26,8 +26,9 @@ fn main() {
assert_eq!(size_of::<E>(), 1);
assert_eq!(size_of::<Option<E>>(), 1);
assert_eq!(size_of::<Result<E, ()>>(), 1);
assert_eq!(size_of::<S>(), 4);
assert_eq!(size_of::<Option<S>>(), 4);
// The next asserts are correct given the currently dumb field reordering algorithm, which actually makes this struct larger.

eddyb (Member) commented Nov 16, 2016

Would sorting by descending alignment work in more cases than ascending?

camlorn (Author, Contributor) commented Nov 20, 2016

To anyone watching, all the run-pass tests now pass.

@camlorn camlorn force-pushed the camlorn:univariant_layout_optimization branch from 1fbefa0 to 825ec31 Nov 20, 2016
@@ -1305,6 +1313,8 @@ impl<'tcx> EnumMemberDescriptionFactory<'tcx> {

// Creates MemberDescriptions for the fields of a single enum variant.
struct VariantMemberDescriptionFactory<'tcx> {
// Cloned from the layout::Struct describing the variant.
offsets: Vec<layout::Size>,

eddyb (Member) commented Nov 21, 2016

The layouts are interned so you can just use &'tcx [layout::Size].

eddyb (Member) commented Dec 16, 2016

Still need to address this.

@camlorn camlorn force-pushed the camlorn:univariant_layout_optimization branch 3 times, most recently from 9bf17ce to b0c6c53 Nov 21, 2016
eddyb (Member) commented Nov 23, 2016

The crater report has 3 legitimate regressions, one of them an LLVM assertion; the others are a bug with packed:

pub const SIZEOF_QUERY:      usize = 21;

#[repr(C,packed)]
pub struct p0f_api_query {
	pub magic: u32,
	pub addr_type: u8,
	pub addr: [u8; 16],
}

The correct size is indeed 21, but this PR changes it to 24, presumably some logic went missing.
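For reference, a standalone check of the expected size (struct copied from the report above; the arithmetic is just the packed, padding-free sum):

use std::mem::size_of;

#[repr(C, packed)]
pub struct p0f_api_query {
    pub magic: u32,
    pub addr_type: u8,
    pub addr: [u8; 16],
}

fn main() {
    // repr(packed) removes all padding, so the size is 4 + 1 + 16 = 21 bytes,
    // and repr(C) means the fields must stay in declaration order.
    assert_eq!(size_of::<p0f_api_query>(), 21);
}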

bluss (Member) commented Nov 23, 2016

Will there be another repr attribute? Something like repr(C) but only to fix the field order and not care about the C part. Edit: Stupid me didn't see #[repr(linear)] was already mentioned.

}
}

let at_least = if let Some(i) = min_from_extern {

eddyb (Member) commented Nov 23, 2016

This can be written min_from_extern.unwrap_or(min_default).

can_optimize = !reprs.iter().any(|r| {
match *r {
attr::ReprAny => false,
attr::ReprInt(_) => false,

eddyb (Member) commented Nov 23, 2016

Should use | between these patterns to avoid repeating false (and true below).

retep998 (Member) commented Dec 18, 2016

Fixing your code for this change is a fairly trivial matter so I'm of the opinion that the current timetable is fine. Just make the PSA prominent enough (reddit, rust blog, urlo, and irlo) and everything should be fine.

camlorn (Author, Contributor) commented Dec 18, 2016

I do not think this should be in the next beta, if the next beta is soon. I do not have the release schedule memorized yet.

This code breaks other people's broken unsafe code in fun ways, but it may also break the compiler. I am not confident enough that this code doesn't create compiler bugs to wish to shove it into a beta as quickly as possible. If @eddyb or @nikomatsakis thinks we're fine, then go for it. But I'd hold off, just in case.

As for breaking unsafe code, there were at least 2 places in the compiler with a missing repr(C). While one of these was in the tests, it does illustrate that people aren't immune to making this mistake. I think the test coverage is probably good enough that we don't have more of these in the compiler, however.

alexcrichton (Member) commented Dec 18, 2016

To clarify, the 1.15 beta branch has not been forked off yet. That was scheduled to happen some time during the next week.

@camlorn is there perhaps a "boolean flag" style switch somewhere we could turn this optimization off on beta easily? If so we could leave this on master, easily backport a "turn off this optimization" to beta, and let this ride the trains to 1.16 stable.

eddyb (Member) commented Dec 18, 2016

@alexcrichton There's a check for #[repr(C)] in librustc/ty/layout.rs, that can be hardcoded.

alexcrichton (Member) commented Dec 18, 2016

Ok, so we have a few options now. I'm assuming that this PR itself is going into the 1.15 beta no matter what.

  1. Do nothing and just send out a PSA
  2. Leave this PR in tree, backport a change to librustc/ty/layout.rs to assume everything is #[repr(C)].
  3. Revert this PR on beta

camlorn (Author, Contributor) commented Dec 18, 2016

I can open another PR which just turns it off. All you have to do is short-circuit one condition in layout.rs.

@camlorn camlorn deleted the camlorn:univariant_layout_optimization branch Dec 18, 2016
camlorn (Author, Contributor) commented Dec 18, 2016

I was wrong. Turning this off is a little complicated. If we do, some tests are going to fail, so we'll have to restore the old versions.

nikomatsakis (Contributor) commented Dec 19, 2016

@camlorn can't we add a compiler flag (-Z reorder-layouts) and then pass this flag to the affected tests?

In retrospect I wish I had raised this with the core team to discuss proper handling.

alexcrichton (Member) commented Dec 19, 2016

@camlorn it looks like this may have regressed a test on Windows AppVeyor, specifically:

[01:02:09] failures:
[01:02:09] 
[01:02:09] ---- [debuginfo-gdb] debuginfo-gdb\var-captured-in-sendable-closure.rs stdout ----
[01:02:09] 	NOTE: compiletest thinks it is using GDB without native rust support
[01:02:09] NOTE: compiletest thinks it is using GDB version 7009001
[01:02:09] 
[01:02:09] error: line not found in debugger output: $1 = 1
[01:02:09] status: exit code: 0
[01:02:09] command: PATH="C:\projects\rust\i686-pc-windows-gnu/stage2/lib/rustlib/i686-pc-windows-gnu/lib;C:\projects\rust\i686-pc-windows-gnu\stage2\bin;C:\projects\rust\i686-pc-windows-gnu\llvm\lib;C:\Python27;C:\projects\rust\mingw32\bin;C:\msys64\usr\bin;C:\Perl\site\bin;C:\Perl\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\Program Files\7-Zip;C:\Program Files\Microsoft\Web Platform Installer;C:\Tools\GitVersion;C:\Tools\PsTools;C:\Program Files\Git LFS;C:\Program Files\Mercurial;C:\Program Files (x86)\Subversion\bin;C:\Program Files\Microsoft SQL Server\120\Tools\Binn;C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn;C:\Program Files (x86)\Microsoft SQL Server\120\Tools\Binn;C:\Program Files\Microsoft SQL Server\120\DTS\Binn;C:\Program Files (x86)\Microsoft SQL Server\120\Tools\Binn\ManagementStudio;C:\Program Files\Microsoft Windows Performance Toolkit;C:\Program Files (x86)\Windows Kits\8.1\Windows Performance Toolkit;C:\Tools\WebDriver;C:\Program Files (x86)\Microsoft SDKs\TypeScript\1.4;C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\PrivateAssemblies;C:\Program Files (x86)\Microsoft SDKs\Azure\CLI\wbin;C:\Ruby193\bin;C:\Tools\NUnit\bin;C:\Tools\xUnit;C:\Tools\MSpec;C:\Tools\Coverity\bin;C:\Program Files (x86)\CMake\bin;C:\go\bin;C:\Program Files\Java\jdk1.8.0\bin;C:\Python27;C:\Program Files\erl7.3\bin;C:\Program Files\nodejs;C:\Program Files (x86)\iojs;C:\Program Files\iojs;C:\Users\appveyor\AppData\Roaming\npm;C:\Program Files\Microsoft SQL Server\130\Tools\Binn;C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit;C:\Program Files (x86)\MSBuild\14.0\Bin;C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\120;C:\Tools\NuGet;C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow;C:\Program Files\Amazon\AWSCLI;C:\Windows\SysWOW64\WindowsPowerShell\v1.0\Modules\TShell\TShell;C:\Program Files\Microsoft DNX\Dnvm;C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\130\Tools\Binn;C:\Program Files\Git\cmd;C:\Program Files\Git\usr\bin;C:\Program Files (x86)\Microsoft SQL Server\130\Tools\Binn;C:\Program Files (x86)\Microsoft SQL Server\130\DTS\Binn;C:\Program Files\Microsoft SQL Server\130\DTS\Binn;C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn;C:\Program Files (x86)\Microsoft SQL Server\120\DTS\Binn;C:\Program Files (x86)\Apache\Maven\bin;C:\Program Files\LLVM\bin;C:\ProgramData\chocolatey\bin;C:\Program Files\Microsoft Service Fabric\bin\Fabric\Fabric.Code;C:\Program Files\Microsoft SDKs\Service Fabric\Tools\ServiceFabricLocalClusterManager;C:\Python27\Scripts;C:\Program Files (x86)\Yarn\bin;C:\Tools\NUnit3;C:\Program Files\dotnet;C:\Program Files (x86)\nodejs;C:\Users\appveyor\.dnx\runtimes\dnx-clr-win-x86.1.0.0-rc1-update2\bin;C:\Users\appveyor\AppData\Local\Yarn\.bin;C:\Users\appveyor\AppData\Roaming\npm;C:\Program Files\AppVeyor\BuildAgent;C:\projects\rust\sccache2" C:/projects/rust/mingw32/bin/gdb -quiet -batch -nx -command=i686-pc-windows-gnu/test/debuginfo-gdb\var-captured-in-sendable-closure.debugger.script
[01:02:09] stdout:
[01:02:09] ------------------------------------------
[01:02:09] GNU gdb (GDB) 7.9.1
[01:02:09] Copyright (C) 2015 Free Software Foundation, Inc.
[01:02:09] License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
[01:02:09] This is free software: you are free to change and redistribute it.
[01:02:09] There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
[01:02:09] and "show warranty" for details.
[01:02:09] This GDB was configured as "i686-w64-mingw32".
[01:02:09] Type "show configuration" for configuration details.
[01:02:09] For bug reporting instructions, please see:
[01:02:09] <http://www.gnu.org/software/gdb/bugs/>.
[01:02:09] Find the GDB manual and other documentation resources online at:
[01:02:09] <http://www.gnu.org/software/gdb/documentation/>.
[01:02:09] For help, type "help".
[01:02:09] Type "apropos word" to search for commands related to "word".
[01:02:09] Breakpoint 1 at 0x401bec: file C:/projects/rust/src/test/debuginfo/var-captured-in-sendable-closure.rs, line 66.
[01:02:09] Breakpoint 2 at 0x401c2a: file C:/projects/rust/src/test/debuginfo/var-captured-in-sendable-closure.rs, line 78.
[01:02:09] [New Thread 2696.0xb60]
[01:02:09] 
[01:02:09] Breakpoint 1, var_captured_in_sendable_closure::main::{{closure}} () at C:/projects/rust/src/test/debuginfo/var-captured-in-sendable-closure.rs:66
[01:02:09] 66	        zzz(); // #break
[01:02:09] $1 = 0
[01:02:09] $2 = {a = 0, b = 4.420840408746224e-307, c = 5496392}
[01:02:09] $3 = 5
[01:02:09] 
[01:02:09] Breakpoint 2, var_captured_in_sendable_closure::main::{{closure}} () at C:/projects/rust/src/test/debuginfo/var-captured-in-sendable-closure.rs:78
[01:02:09] 78	        zzz(); // #break
[01:02:09] $4 = 6
[01:02:09] [Inferior 1 (process 2696) exited normally]
[01:02:09] 
[01:02:09] ------------------------------------------
[01:02:09] stderr:
[01:02:09] ------------------------------------------
[01:02:09] warning: SHIMVIEW: ShimInfo(Complete)

[01:02:09] 
[01:02:09] 
[01:02:09] ------------------------------------------
[01:02:09] 
[01:02:09] thread '[debuginfo-gdb] debuginfo-gdb\var-captured-in-sendable-closure.rs' panicked at 'explicit panic', C:/projects/rust/src/tools/compiletest/src\runtest.rs:2407
[01:02:09] note: Run with `RUST_BACKTRACE=1` for a backtrace.
[01:02:09] 
[01:02:09] 
[01:02:09] failures:
[01:02:09]     [debuginfo-gdb] debuginfo-gdb\var-captured-in-sendable-closure.rs

That wouldn't happen to look familiar off-hand, would it?

camlorn (Author, Contributor) commented Dec 19, 2016

@nikomatsakis
Only if they don't use stdlib. You could put repr(C) on them, though.
@alexcrichton
No, it wouldn't. Did anything merge after me which might potentially not be accounting for this change?

I'm going to see if I can figure out how to reproduce this locally.

alexcrichton (Member) commented Dec 19, 2016

Ok, it turns out we're accidentally not running the debuginfo tests on the bots. That may mean this is trivially reproducible, though.

camlorn (Author, Contributor) commented Dec 19, 2016

Upon following the readme's directions verbatim, I get this. Consequently, I can't reproduce. If I don't use rustbuild, it also happens, but later in the process.

I checked against WSL: the broken test passes there. Taking a cursory look at the test, I don't see anything that could be different based on platform. These are also both using gdb.

gnzlbg (Contributor) commented Dec 21, 2016

@camlorn the WillOptimize1/2 tests cover structs and tuple structs; would it be possible to add tests that assert that the optimization triggers on enums and tuples as well? (Or did I miss these tests somewhere?)

camlorn (Author, Contributor) commented Dec 21, 2016

@gnzlbg
Yeah, but I'll have to open a second PR for it at this point.

gnzlbg (Contributor) commented Dec 21, 2016

@camlorn go for it - you already did the work; those are just the 3-5 easy lines of code to make sure nobody else screws it up. Totally worth it.

camlorn (Author, Contributor) commented Dec 22, 2016

@gnzlbg
Yeah, but we're turning this off temporarily, so it's pointless for the moment. I'll do it in the probably inevitable -z enable-field-reordering PR, though.

bstrie (Contributor) commented Dec 22, 2016

What a pleasant surprise!

@camlorn Can you read over the comments in #28951 and confirm that this PR addresses any concerns that were raised there? If it does, can you give me the go-ahead to close it?

bors added a commit that referenced this pull request Dec 23, 2016
Disable field reordering

This was decided via IRC and needs a backport to beta.  Basically, #37429 broke servo, and probably needs an announcement and opt-in flag.  I didn't run all tests locally but think I've already reverted all the ones that need to be reverted.

r? @nikomatsakis