
Remove layout::size_align. #72189

Closed

Conversation

nnethercote
Contributor

No description provided.

@rust-highfive
Collaborator

r? @kennytm

(rust_highfive has picked a reviewer for you, use r? to override)

@rust-highfive rust-highfive added the S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. label May 14, 2020
@nnethercote
Contributor Author

r? @ghost

@nnethercote
Contributor Author

I was getting weird perf results with this trivial change locally, so I'm going to do a perf CI run.

@bors try @rust-timer queue

@rust-timer
Collaborator

Awaiting bors try build completion

@bors
Contributor

bors commented May 14, 2020

⌛ Trying commit bf8ab30732fe02fa27ef69db654830e0802320e6 with merge e394819560f57a7fd5628b99e3e6d5b5d85dfeab...

@bors
Contributor

bors commented May 14, 2020

☀️ Try build successful - checks-actions, checks-azure
Build commit: e394819560f57a7fd5628b99e3e6d5b5d85dfeab (e394819560f57a7fd5628b99e3e6d5b5d85dfeab)

@rust-timer
Collaborator

Queued e394819560f57a7fd5628b99e3e6d5b5d85dfeab with parent 23ffeea, future comparison URL.

@rust-timer
Collaborator

Finished benchmarking try commit e394819560f57a7fd5628b99e3e6d5b5d85dfeab, comparison URL.

@nnethercote
Contributor Author

These results are so weird for such a trivial change.

  • Lots of regressions in the 1-5% range.
  • Wins of up to 8.4% for ctfe-stress-4.
  • A win of 38.6% for one script-servo patched run!

I will investigate further.

@nnethercote
Contributor Author

First, let's look at the improvement in ctfe-stress-4. A cachegrind diff starts with this:

--------------------------------------------------------------------------------
Ir                  
--------------------------------------------------------------------------------
 -410,752,077 (100.0%)  PROGRAM TOTALS

--------------------------------------------------------------------------------
Ir                      file:function 
--------------------------------------------------------------------------------
 -452,985,324 (110.3%)  /home/njn/moz/rustN/src/liballoc/vec.rs:serialize::serialize::Decoder::read_seq
 -150,995,736 (36.76%)  /home/njn/moz/rustN/src/liballoc/raw_vec.rs:serialize::serialize::Decoder::read_seq
  150,994,836 (-36.8%)  /home/njn/moz/rustN/src/libserialize/serialize.rs:serialize::serialize::Decoder::read_seq
  -61,867,183 (15.06%)  /home/njn/moz/rustN/src/libserialize/leb128.rs:serialize::serialize::Encoder::emit_seq
   61,341,950 (-14.9%)  /home/njn/moz/rustN/src/libserialize/leb128.rs:<rustc_middle::ty::query::on_disk_cache::CacheEncoder<E> as serialize::serialize::Encoder>::emit_u64
   61,080,152 (-14.9%)  /home/njn/moz/rustN/src/liballoc/vec.rs:<rustc_middle::ty::query::on_disk_cache::CacheEncoder<E> as serialize::serialize::Encoder>::emit_u64
  -59,641,903 (14.52%)  /home/njn/moz/rustN/src/liballoc/vec.rs:serialize::serialize::Encoder::emit_seq
  -42,469,750 (10.34%)  /home/njn/moz/rustN/src/libcore/num/mod.rs:<rustc_middle::mir::interpret::allocation::Allocation<Tag,Extra> as core::hash::Hash>::hash
   37,883,124 (-9.22%)  /home/njn/moz/rustN/src/libcore/num/mod.rs:core::hash::impls::<impl core::hash::Hash for [T]>::hash
   35,395,806 (-8.62%)  /home/njn/moz/rustN/src/librustc_middle/ty/query/on_disk_cache.rs:<rustc_middle::ty::query::on_disk_cache::CacheEncoder<E> as serialize::serialize::Encoder>::emit_u64
  -21,234,875 ( 5.17%)  /home/njn/moz/rustN/src/libcore/ops/bit.rs:<rustc_middle::mir::interpret::allocation::Allocation<Tag,Extra> as core::hash::Hash>::hash
   18,874,530 (-4.60%)  /home/njn/moz/rustN/src/libcore/ops/bit.rs:core::hash::impls::<impl core::hash::Hash for [T]>::hash

The -452M for vec.rs:serialize::serialize::Decoder::read_seq is the main change. The rest of the changes are a mixture of pluses and minuses that suggests differences in inlining.
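For reference, a diff like the one above can be produced along these lines (a sketch only; the compiler binaries, benchmark arguments, and file names are placeholders):

```shell
# Run the compiler before and after the patch under Cachegrind
# (placeholder commands; real perf runs use the rustc-perf harness).
valgrind --tool=cachegrind --cachegrind-out-file=cg.old ./rustc-old <benchmark args>
valgrind --tool=cachegrind --cachegrind-out-file=cg.new ./rustc-new <benchmark args>

# Subtract the old counts from the new ones, then annotate: negative
# Ir deltas are instructions saved by the patch, positive ones are added.
cg_diff cg.old cg.new > cg.diff
cg_annotate cg.diff
```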

Looking at vec.rs, the change is due to this:

          .               pub fn push(&mut self, value: T) {
          .                   // This will panic or abort if we would allocate > isize::MAX bytes
          .                   // or if the length increment would overflow for zero-sized types.
790,701,018 (16.32%)          if self.len == self.buf.capacity() {
          .                       self.reserve(1);
          .                   }
          .                   unsafe {
     30,093 ( 0.00%)              let end = self.as_mut_ptr().add(self.len);
        107 ( 0.00%)              ptr::write(end, value);
643,975,968 (13.29%)              self.len += 1;
          .                   }
          .               }

becoming this:

          .               pub fn push(&mut self, value: T) {
          .                   // This will panic or abort if we would allocate > isize::MAX bytes
          .                   // or if the length increment would overflow for zero-sized types.
793,020,603 (17.88%)          if self.len == self.buf.capacity() {
          .                       self.reserve(1);
          .                   }
          .                   unsafe {
     29,578 ( 0.00%)              let end = self.as_mut_ptr().add(self.len);
        107 ( 0.00%)              ptr::write(end, value);
190,817,116 ( 4.30%)              self.len += 1;
          .                   }
          .               }

Notice the -452M for the self.len += 1 line. The reserve call uses layout::size_align via RawVec::grow_amortized. I looked at the Vec::push changes for some of the other benchmarks, and there's no real pattern to whether it gets better or worse. Perhaps it depends on exactly which push calls get inlined and what that does to code generation, and ctfe-stress-4 gets lucky in that one especially hot push call in deserialization ends up with better code?
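The shape of the code matters here. A minimal sketch of the structure involved (`TinyVec` and its methods are hypothetical, not the real liballoc code): `push` is small and eligible for inlining, but every inlined copy carries a call into the growth path, which is where the `Layout` computation (and hence `layout::size_align`) lived.

```rust
use std::alloc::Layout;

// Hypothetical stand-in for Vec, to show the hot/cold split in `push`.
struct TinyVec {
    buf: Vec<u64>, // stand-in for the raw buffer + len + capacity
}

impl TinyVec {
    #[inline]
    fn push(&mut self, value: u64) {
        // Hot path: a compare and a store, cheap when inlined.
        if self.buf.len() == self.buf.capacity() {
            self.grow(); // cold path, taken rarely
        }
        self.buf.push(value);
    }

    #[cold]
    #[inline(never)]
    fn grow(&mut self) {
        // The real RawVec::grow_amortized computes a Layout roughly like
        // this; `layout::size_align` was a helper on this path, so changes
        // to it can perturb how the whole `push` body gets inlined.
        let new_cap = self.buf.capacity().max(1) * 2;
        let _layout = Layout::array::<u64>(new_cap).expect("capacity overflow");
        self.buf.reserve(new_cap - self.buf.len());
    }
}
```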

Encoding/decoding certainly dominates that benchmark run. Here are Cachegrind results without this PR's patch applied:

--------------------------------------------------------------------------------
Ir                      file:function
--------------------------------------------------------------------------------
1,215,543,965 (25.09%)  /home/njn/moz/rust0/src/libserialize/opaque.rs:serialize::serialize::Decoder::read_seq
  767,967,874 (15.85%)  /home/njn/moz/rust0/src/liballoc/vec.rs:serialize::serialize::Decoder::read_seq
  663,669,098 (13.70%)  /home/njn/moz/rust0/src/liballoc/vec.rs:serialize::serialize::Encoder::emit_seq
  458,002,174 ( 9.45%)  /home/njn/moz/rust0/src/libcore/slice/mod.rs:serialize::serialize::Encoder::emit_seq
  162,921,907 ( 3.36%)  /home/njn/moz/rust0/src/liballoc/raw_vec.rs:serialize::serialize::Encoder::emit_seq
  162,921,845 ( 3.36%)  /home/njn/moz/rust0/src/libcore/ptr/mod.rs:serialize::serialize::Encoder::emit_seq
  156,148,897 ( 3.22%)  /home/njn/moz/rust0/src/libserialize/serialize.rs:serialize::serialize::Encoder::emit_seq
  155,997,547 ( 3.22%)  /home/njn/moz/rust0/src/libcore/iter/range.rs:serialize::serialize::Decoder::read_seq
  154,091,158 ( 3.18%)  /home/njn/moz/rust0/src/libcore/ptr/mod.rs:serialize::serialize::Decoder::read_seq
  153,645,301 ( 3.17%)  /home/njn/moz/rust0/src/liballoc/raw_vec.rs:serialize::serialize::Decoder::read_seq
  153,638,254 ( 3.17%)  /home/njn/moz/rust0/src/libcore/cmp.rs:serialize::serialize::Decoder::read_seq
  153,355,084 ( 3.17%)  /home/njn/moz/rust0/src/librustc_middle/ty/query/on_disk_cache.rs:serialize::serialize::Encoder::emit_seq
  150,387,780 ( 3.10%)  /home/njn/moz/rust0/src/libserialize/leb128.rs:serialize::serialize::Decoder::read_seq
   61,883,481 ( 1.28%)  /home/njn/moz/rust0/src/libserialize/leb128.rs:serialize::serialize::Encoder::emit_seq

Perhaps the lesson here is this: because encoding/decoding code dominates the execution time of these runs, minor changes to the inlining and code generation of that code occasionally have outsized effects.

Also, this is a really weird benchmark: we don't normally see encoding/decoding be anywhere near that dominant.

@nnethercote
Contributor Author

As for the big script-servo improvement, the self-profile results are interesting. Before:

| Query/Function | Time (s) | Time (%) | Executions | Incremental loading (s) | Incremental loading delta |
|---|---|---|---|---|---|
| Totals | 434.013 | 124.94%* | 2579459 | 1.404 | |
| LLVM_lto_optimize | 162.719 | 37.49% | 166 | 0.000 | |
| LLVM_thin_lto_import | 84.306 | 19.42% | 166 | 0.000 | |
| LLVM_module_codegen_emit_obj | 72.605 | 16.73% | 166 | 0.000 | |
| LLVM_passes | 44.137 | 10.17% | 1 | 0.000 | |
| finish_ongoing_codegen | 41.815 | 9.63% | 1 | 0.000 | |
| codegen_module_perform_lto | 2.776 | 0.64% | 166 | 0.000 | |
| LLVM_module_codegen | 1.110 | 0.26% | 166 | 0.000 | |

After:

| Query/Function | Time (s) | Time (%) | Executions | Incremental loading (s) | Incremental loading delta |
|---|---|---|---|---|---|
| Totals | 288.424 | 137.90%* | 2576162 | 1.404 | |
| LLVM_lto_optimize | 108.173 | 37.50% | 65 | 0.000 | |
| LLVM_module_codegen_emit_obj | 41.186 | 14.28% | 65 | 0.000 | |
| LLVM_passes | 39.898 | 13.83% | 1 | 0.000 | |
| finish_ongoing_codegen | 37.348 | 12.95% | 1 | 0.000 | |
| LLVM_thin_lto_import | 34.610 | 12.00% | 65 | 0.000 | |
| codegen_module_perform_lto | 2.048 | 0.71% | 65 | 0.000 | |
| LLVM_module_codegen | 0.535 | 0.19% | 65 | 0.000 | |

The number of executions of some of the queries drops from 166 to 65. All other query counts are either identical or almost identical between the runs. I don't know why the number of queries would change so drastically. Any suggestions?

@Mark-Simulacrum
Member

The LTO query changes are probably different CGU partitioning, cc @rust-lang/wg-mir-opt

@nnethercote
Contributor Author

Now for ucd. The Cachegrind diff:

--------------------------------------------------------------------------------
Ir                   
--------------------------------------------------------------------------------
405,463,889 (100.0%)  PROGRAM TOTALS

--------------------------------------------------------------------------------
Ir                    file:function
--------------------------------------------------------------------------------
119,174,913 (29.39%)  /home/njn/moz/rustN/src/libserialize/serialize.rs:serialize::serialize::Encoder::emit_enum_variant
101,718,375 (25.09%)  /home/njn/moz/rustN/src/librustc_middle/ty/query/on_disk_cache.rs:<rustc_middle::ty::query::on_disk_cache::CacheEncoder<E> as serialize::serialize::Encoder>::emit_u32
 72,955,950 (17.99%)  /home/njn/moz/rustN/src/librustc_metadata/rmeta/encoder.rs:<rustc_metadata::rmeta::encoder::EncodeContext as serialize::serialize::Encoder>::emit_u32
 49,260,540 (12.15%)  /home/njn/moz/rustN/src/libserialize/serialize.rs:serialize::serialize::Encoder::emit_seq
 47,003,165 (11.59%)  /home/njn/moz/rustN/src/liballoc/vec.rs:<rustc_middle::ty::query::on_disk_cache::CacheEncoder<E> as serialize::serialize::Encoder>::emit_u32
 46,699,490 (11.52%)  /home/njn/moz/rustN/src/liballoc/vec.rs:<rustc_metadata::rmeta::encoder::EncodeContext as serialize::serialize::Encoder>::emit_u32
-33,793,369 (-8.33%)  /home/njn/moz/rustN/src/liballoc/vec.rs:<rustc_middle::ty::query::on_disk_cache::CacheEncoder<E> as serialize::serialize::SpecializedEncoder<rustc_span::span_encoding::Span>>::specialized_encode
 31,437,956 ( 7.75%)  /home/njn/moz/rustN/src/libserialize/leb128.rs:<rustc_metadata::rmeta::encoder::EncodeContext as serialize::serialize::Encoder>::emit_u32
-27,748,168 (-6.84%)  /home/njn/moz/rustN/src/liballoc/vec.rs:<rustc_metadata::rmeta::encoder::EncodeContext as serialize::serialize::SpecializedEncoder<rustc_span::span_encoding::Span>>::specialized_encode
 27,237,498 ( 6.72%)  /home/njn/moz/rustN/src/liballoc/vec.rs:serialize::serialize::Encoder::emit_enum_variant
 26,374,177 ( 6.50%)  /home/njn/moz/rustN/src/librustc_data_structures/sip128.rs:rustc_ast::ast::_DERIVE_rustc_data_structures_stable_hasher_HashStable_CTX_FOR_LitKind::<impl rustc_data_structures::stable_hasher::HashStable<__CTX> for rustc_ast::ast::LitKind>::hash_stable
-25,753,232 (-6.35%)  /home/njn/moz/rustN/src/liballoc/vec.rs:<(T10,T11) as serialize::serialize::Encodable>::encode
 25,592,894 ( 6.31%)  /home/njn/moz/rustN/src/libserialize/leb128.rs:<rustc_middle::ty::query::on_disk_cache::CacheEncoder<E> as serialize::serialize::Encoder>::emit_u32
-25,042,960 (-6.18%)  /home/njn/moz/rustN/src/librustc_middle/mir/mod.rs:<rustc_middle::mir::Operand as serialize::serialize::Encodable>::encode
 25,035,332 ( 6.17%)  /home/njn/moz/rustN/src/librustc_middle/mir/mod.rs:serialize::serialize::Encoder::emit_enum_variant
-21,562,592 (-5.32%)  /home/njn/moz/rustN/src/librustc_data_structures/sip128.rs:core::hash::impls::<impl core::hash::Hash for u64>::hash
-21,446,205 (-5.29%)  /home/njn/moz/rustN/src/libserialize/leb128.rs:<rustc_metadata::rmeta::encoder::EncodeContext as serialize::serialize::SpecializedEncoder<rustc_span::span_encoding::Span>>::specialized_encode

The regression is clearly all to do with metadata encoding.

I see these results for the old code:

// librustc_middle/ty/query/on_disk_cache.rs
         .           macro_rules! encoder_methods {
         .               ($($name:ident($ty:ty);)*) => {
         .                   #[inline]
 2,350,558 ( 0.03%)          $(fn $name(&mut self, value: $ty) -> Result<(), Self::Error> {
13,246,909 ( 0.15%)              self.encoder.$name(value)
 2,350,558 ( 0.03%)          })*
         .               }
         .           }

// librustc_metadata/rmeta/encoder.rs
         .           macro_rules! encoder_methods {
         .               ($($name:ident($ty:ty);)*) => {
 3,146,711 ( 0.03%)          $(fn $name(&mut self, value: $ty) -> Result<(), Self::Error> {
         .                       self.opaque.$name(value)
 2,097,912 ( 0.02%)          })*
         .               }
         .           }

// libserialize/serialize.rs
17,134,437 ( 0.19%)      fn emit_enum_variant<F>(
         .                   &mut self,
         .                   _v_name: &str,
         .                   v_id: usize,
         .                   _len: usize,
         .                   f: F,
         .               ) -> Result<(), Self::Error>
         .               where
         .                   F: FnOnce(&mut Self) -> Result<(), Self::Error>,
         .               {
 3,642,239 ( 0.04%)          self.emit_usize(v_id)?;
   139,606 ( 0.00%)          f(self)
   205,023 ( 0.00%)      }

 1,451,419 ( 0.02%)      fn emit_seq<F>(&mut self, len: usize, f: F) -> Result<(), Self::Error>
         .               where
         .                   F: FnOnce(&mut Self) -> Result<(), Self::Error>,
         .               {
 1,034,878 ( 0.01%)          self.emit_usize(len)?;
         .                   f(self)
 1,124,627 ( 0.01%)      }

and new:

// librustc_middle/ty/query/on_disk_cache.rs
         .           macro_rules! encoder_methods {
         .               ($($name:ident($ty:ty);)*) => {
         .                   #[inline]
51,983,394 ( 0.55%)          $(fn $name(&mut self, value: $ty) -> Result<(), Self::Error> {
11,988,316 ( 0.13%)              self.encoder.$name(value)
52,284,125 ( 0.55%)          })*
         .               }
         .           }

// librustc_metadata/rmeta/encoder.rs
         .           macro_rules! encoder_methods {
         .               ($($name:ident($ty:ty);)*) => {
47,736,282 ( 0.50%)          $(fn $name(&mut self, value: $ty) -> Result<(), Self::Error> {
         .                       self.opaque.$name(value)
41,334,541 ( 0.43%)          })*
         .               }
         .           }

// libserialize/serialize.rs
76,649,911 ( 0.80%)      fn emit_enum_variant<F>(
         .                   &mut self,
         .                   _v_name: &str,
         .                   v_id: usize,
         .                   _len: usize,
         .                   f: F,
         .               ) -> Result<(), Self::Error>
         .               where
         .                   F: FnOnce(&mut Self) -> Result<(), Self::Error>,
         .               {
 3,668,046 ( 0.04%)          self.emit_usize(v_id)?;
   277,579 ( 0.00%)          f(self)
30,539,486 ( 0.32%)      }

20,015,830 ( 0.21%)      fn emit_seq<F>(&mut self, len: usize, f: F) -> Result<(), Self::Error>
         .               where
         .                   F: FnOnce(&mut Self) -> Result<(), Self::Error>,
         .               {
 1,034,878 ( 0.01%)          self.emit_usize(len)?;
         .                   f(self)
16,012,664 ( 0.17%)      }

There are much higher counts for all the function entries/exits in the new code, so this definitely looks like inlining effects.

I barely understand the encoding code, but presumably it's writing stuff into vectors, which would explain how layout::size_align gets involved. Even so, it's surprising to me that the trivial change in this PR would affect the inlining of functions that use Vec::push.
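For context, those `emit_u32`/`emit_u64` methods bottom out in LEB128 writes into a `Vec<u8>`, so `Vec::push` sits directly on the encoding hot path. A sketch in the spirit of libserialize's leb128.rs (not its exact code):

```rust
// Unsigned LEB128: emit 7 bits per byte, with the high bit set on every
// byte except the last. Each integer encoded is one or more Vec::push
// calls, so push's codegen quality dominates encoding-heavy workloads.
fn write_u64_leb128(out: &mut Vec<u8>, mut value: u64) {
    loop {
        let mut byte = (value & 0x7f) as u8;
        value >>= 7;
        if value != 0 {
            byte |= 0x80; // continuation bit: more bytes follow
        }
        out.push(byte);
        if value == 0 {
            break;
        }
    }
}
```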

@nnethercote
Contributor Author

Now for issue-58319. Cachegrind diff:

--------------------------------------------------------------------------------
Ir
--------------------------------------------------------------------------------
33,823,942 (100.0%)  PROGRAM TOTALS

--------------------------------------------------------------------------------
Ir                    file:function
--------------------------------------------------------------------------------
 42,811,457 (126.6%)  /home/njn/moz/rustN/src/librustc_mir/borrow_check/region_infer/values.rs:rustc_mir::borrow_check::region_infer::values::RegionValueElements::push_predecessors
 11,884,282 (35.14%)  /home/njn/moz/rustN/src/liballoc/vec.rs:rustc_mir::borrow_check::region_infer::values::RegionValueElements::push_predecessors
-11,876,291 (-35.1%)  /home/njn/moz/rustN/src/liballoc/vec.rs:rustc_mir::borrow_check::type_check::liveness::trace::trace
-11,125,372 (-32.9%)  /home/njn/moz/rustN/src/librustc_mir/borrow_check/region_infer/values.rs:rustc_mir::borrow_check::type_check::liveness::trace::trace
  9,062,046 (26.79%)  /home/njn/moz/rustN/src/libcore/slice/mod.rs:rustc_mir::borrow_check::region_infer::values::RegionValueElements::push_predecessors
 -7,731,181 (-22.9%)  /home/njn/moz/rustN/src/libcore/slice/mod.rs:rustc_mir::borrow_check::type_check::liveness::trace::trace
 -4,905,860 (-14.5%)  /home/njn/moz/rustN/src/liballoc/raw_vec.rs:rustc_mir::borrow_check::type_check::liveness::trace::trace

Old code:

        .               crate fn push_predecessors(
        .                   &self, 
        .                   body: &Body<'_>,
        .                   index: PointIndex,
        .                   stack: &mut Vec<PointIndex>,
        .               ) {
        .                   let Location { block, statement_index } = self.to_location(index);
3,394,806 ( 0.54%)          if statement_index == 0 {
        .                       // If this is a basic block head, then the predecessors are
        .                       // the terminators of other basic blocks
        .                       stack.extend(
  564,573 ( 0.09%)                  body.predecessors()[block]
        .                               .iter()
  188,191 ( 0.03%)                      .map(|&pred_bb| body.terminator_loc(pred_bb))
  188,191 ( 0.03%)                      .map(|pred_loc| self.point_from_location(pred_loc)),
        .                       ); 
        .                   } else {
        .                       // Otherwise, the pred is just the previous statement
1,509,212 ( 0.24%)              stack.push(PointIndex::new(index.index() - 1));
        .                   }
        .               } 

New code:

15,276,627 ( 2.32%)      crate fn push_predecessors(
         .                   &self,
         .                   body: &Body<'_>,
         .                   index: PointIndex,
         .                   stack: &mut Vec<PointIndex>,
         .               ) {
         .                   let Location { block, statement_index } = self.to_location(index);
 3,394,806 ( 0.52%)          if statement_index == 0 {
         .                       // If this is a basic block head, then the predecessors are
         .                       // the terminators of other basic blocks
         .                       stack.extend(
   188,191 ( 0.03%)                  body.predecessors()[block]
         .                               .iter()
   188,191 ( 0.03%)                      .map(|&pred_bb| body.terminator_loc(pred_bb))
   188,191 ( 0.03%)                      .map(|pred_loc| self.point_from_location(pred_loc)),
         .                       );
         .                   } else {
         .                       // Otherwise, the pred is just the previous statement
 3,018,424 ( 0.46%)              stack.push(PointIndex::new(index.index() - 1));
         .                   }
13,579,224 ( 2.07%)      } 

push_predecessors is no longer getting inlined, and the function entry/exit is costing a lot. (That accounts for only ~29M of the ~43M difference; I'm not sure where the other ~14M is going, but you get the idea.)

@nnethercote
Contributor Author

In general, it seems like the growth path for Vec::push should not be inlined, so I'm going to investigate that.
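One way to express that is the usual outlining pattern (a general sketch, not the eventual patch): keep the common no-grow path inlinable and force the allocating path out of line, so every inlined caller stays small.

```rust
// Hot wrapper: inlined into callers; the rare reallocating case is
// delegated to a function the inliner is told to leave alone.
#[inline]
fn push_byte(v: &mut Vec<u8>, byte: u8) {
    if v.len() == v.capacity() {
        push_byte_slow(v, byte); // rare: grows the buffer out of line
    } else {
        v.push(byte); // common: capacity available, no reallocation
    }
}

#[cold]
#[inline(never)]
fn push_byte_slow(v: &mut Vec<u8>, byte: u8) {
    v.reserve(1); // amortized growth happens here, out of line
    v.push(byte);
}
```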

@nnethercote
Contributor Author

The LTO query changes are probably different CGU partitioning

I was able to replicate the script-servo-opt result locally, which indicates it's not just a one-off fluke.

@nnethercote
Contributor Author

I was able to replicate the script-servo-opt result locally, which indicates it's not just a one-off fluke.

Also, this is a case where instruction counts are misleading. The instruction count improvement was 38%, but on CI the wall-time improvement was only 7.8%, and on my machine it was only 2.6%. Presumably this relates to the fact that LLVM codegen is multi-threaded.

@Mark-Simulacrum
Member

The LTO query changes are probably different CGU partitioning

I was able to replicate the script-servo-opt result locally, which indicates it's not just a one-off fluke.

Yes -- CGU partitioning is intended to be deterministic on the set of inputs (and even under some changes, for incremental) but we've historically seen that it can be quite prone to shifts when altering underlying functions. See rust-lang/compiler-team#281 for a recent meeting proposal.

@wesleywiser
Member

The script-servo-opt case does look very similar to the other instances we've seen of performance differences due to CGU partitioning. In this case, we're getting lucky and fewer CGUs are getting recompiled.

@joelpalmer joelpalmer added S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels May 26, 2020
@nnethercote
Contributor Author

Let's try this again, to see if anything has changed.

@bors try @rust-timer queue

@rust-timer
Collaborator

Awaiting bors try build completion

@bors
Contributor

bors commented May 29, 2020

⌛ Trying commit bf8ab30732fe02fa27ef69db654830e0802320e6 with merge f2dd1363e1fd146df330d259407f731150074f05...

@bors
Contributor

bors commented May 29, 2020

☀️ Try build successful - checks-azure
Build commit: f2dd1363e1fd146df330d259407f731150074f05 (f2dd1363e1fd146df330d259407f731150074f05)

@rust-timer
Collaborator

Queued f2dd1363e1fd146df330d259407f731150074f05 with parent 4512721, future comparison URL.

@Elinvynia Elinvynia added S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels Jun 10, 2020
@Elinvynia Elinvynia added S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. and removed S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. labels Jun 25, 2020
@crlf0710 crlf0710 added S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. T-libs Relevant to the library team, which will review and decide on the PR/issue. and removed S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. labels Jul 17, 2020
@nnethercote
Contributor Author

This isn't going anywhere.

@nnethercote nnethercote reopened this Jul 28, 2020
@nnethercote nnethercote force-pushed the rm-layout-size_align branch 3 times, most recently from 5d95e14 to 423615a Compare July 28, 2020 09:21
@nnethercote
Contributor Author

Let's give it one more try.

@bors try @rust-timer queue

@rust-timer
Collaborator

Awaiting bors try build completion

@bors
Contributor

bors commented Jul 28, 2020

⌛ Trying commit 086cecb with merge 57ad04fc500e914cfa0d54b97a7e879f751b1a17...

@bors
Contributor

bors commented Jul 28, 2020

☀️ Try build successful - checks-actions, checks-azure
Build commit: 57ad04fc500e914cfa0d54b97a7e879f751b1a17 (57ad04fc500e914cfa0d54b97a7e879f751b1a17)

@rust-timer
Collaborator

Queued 57ad04fc500e914cfa0d54b97a7e879f751b1a17 with parent 1f5d69d, future comparison URL.

@rust-timer
Collaborator

Finished benchmarking try commit (57ad04fc500e914cfa0d54b97a7e879f751b1a17): comparison url.

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. Please note that if the perf results are neutral, you should likely undo the rollup=never given below by specifying rollup- to bors.

Importantly, though, if the results of this run are non-neutral do not roll this PR up -- it will mask other regressions or improvements in the roll up.

@bors rollup=never

@nnethercote
Contributor Author

Still terrible results.
