
[API review] Add MethodImplOptions.AggressiveOptimization #27370

Closed
kouvel opened this issue Sep 11, 2018 · 34 comments
Labels: api-approved (API was approved in API review; it can be implemented), area-System.Runtime.CompilerServices

Comments

@kouvel (Member) commented Sep 11, 2018

Add MethodImplOptions.AggressiveOptimization as a counterpart to MethodImplOptions.NoOptimization. NoOptimization would override AggressiveOptimization.

    [Flags]
    public enum MethodImplOptions
    {
        // Unmanaged = 0x0004,
        // NoInlining = 0x0008,
        // ForwardRef = 0x0010,
        // Synchronized = 0x0020,
        // NoOptimization = 0x0040,
        // PreserveSig = 0x0080,
        // AggressiveInlining = 0x0100,
        AggressiveOptimization = 0x0200,
        // InternalCall = 0x1000
    }
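As a quick illustration of the precedence rule described above (a sketch only; the class and method names are hypothetical): since MethodImplOptions is a [Flags] enum the two values can be combined, and NoOptimization would win.

    using System.Runtime.CompilerServices;

    static class Example
    {
        // Both bits set: per the proposal, NoOptimization overrides
        // AggressiveOptimization, so this method would remain unoptimized.
        [MethodImpl(MethodImplOptions.NoOptimization | MethodImplOptions.AggressiveOptimization)]
        static void DebuggableHelper() { }
    }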

This would also add a new keyword aggressiveoptimization to IL ASM:

  .method private hidebysig static void  Main() cil managed aggressiveoptimization

The flag could be used in a MethodImplAttribute to indicate that the method contains hot code (a usage sketch follows the list below):

  • For tiering, it could cause the tier 1 JIT to be used right away for the method, as a workaround for https://github.com/dotnet/coreclr/issues/19751
    • For this reason, we would also like to add this API to 2.2, since tiering is currently planned to be enabled by default in the 2.2 release
  • It could allow the JIT to spend more time generating better code, such as inlining more aggressively into the method
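A minimal usage sketch of the attribute on a hot method, along the lines of the list above (the type, method, and body are hypothetical):

    using System.Runtime.CompilerServices;

    static class Checksum
    {
        // Hot loop: with the proposed flag, tiering would go straight to the
        // tier 1 JIT for this method instead of starting at tier 0.
        [MethodImpl(MethodImplOptions.AggressiveOptimization)]
        public static uint Compute(byte[] data)
        {
            uint sum = 0;
            for (int i = 0; i < data.Length; i++)
                sum = (sum << 1) ^ data[i];
            return sum;
        }
    }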

CC @noahfalk @vancem @dotnet/jit-contrib

@kouvel kouvel self-assigned this Sep 11, 2018
@MichalStrehovsky (Member) commented:

Note that changing this enum is a file format change, not just an API change. ECMA-335 specifies the valid values for this enumeration in section II.22.26 and also says that values outside the allowed range should be treated as an error.

For example, this new value won't roundtrip through ILDASM/ILASM without revving IL syntax.

@kouvel (Member, Author) commented Sep 11, 2018

I see. I have local changes to update ildasm/ilasm. What is the process to propose the change to the spec?

@kouvel (Member, Author) commented Sep 11, 2018

Also, is there an IL version (corresponding to the spec version) that should be revved and taken into account when reporting errors?

@vancem (Contributor) commented Sep 11, 2018

I think the language in the spec that says unspecified values are errors should be modified. As a general principle the spec should not prohibit extensions without good reason. If the spec is still accepting updates, we can add that to it, but we should not block on it.

Ideally, ILASM should have syntax for values it does not know about (e.g. hex numbers) and should encode them so that they can be round-tripped. We should fix ILASM/ILDASM to do this.

I am interested in knowing what the ILASM/ILDASM tools do (do they strip it? That is not so bad in this case). This will not be the last time we will want another bit to 'hang' on a method. If MethodImplOptions is going to be problematic because tools do fragile things, we may need to use a 'real' custom attribute instead. First we have to find out how bad it is in actual practice (do any reader tools blow up?).

@kouvel (Member, Author) commented Sep 11, 2018

I modified a binary to set the new enum value on Main. The baseline ILDASM without changes is stripping the unknown value and not throwing:

  .method private hidebysig static void  Main() cil managed

The modified ILDASM is recognizing the new value:

  .method private hidebysig static void  Main() cil managed aggressiveoptimization

The baseline CoreCLR is not throwing an exception. It looks like there is a flag check when using System.Reflection.Emit.TypeBuilder.DefineCustomAttribute() that throws for invalid unknown flags, which is probably fine.

@kouvel (Member, Author) commented Sep 11, 2018

ILSpy extension in VS is showing the value like this:

[MethodImpl((MethodImplOptions)512)]
private static void Main()
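For reference, a minimal sketch (the helper name is hypothetical) of checking the raw bit via reflection, which works even on a framework whose MethodImplAttributes enum does not yet define the member:

    using System.Reflection;

    static class ImplFlagCheck
    {
        // Tests the 0x0200 bit directly rather than relying on the enum member
        // being defined in the running framework.
        public static bool HasAggressiveOptimization(MethodBase method)
            => ((int)method.GetMethodImplementationFlags() & 0x0200) != 0;
    }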

@vancem (Contributor) commented Sep 11, 2018

@kouvel, how did you fix ILASM? Did you invent a new keyword? That is fine, but in addition it would be nice if we could make the ILASM syntax like ILSpy's (so round-tripping works for anything in the future without modification).

I don't have a lot of experience knowing what kind of things break when IL is updated. I think we should still proceed, but if anyone knows of things that might break (e.g. CCI and tools based on it), a heads up would be appropriate.

@jkotas FYI.

@kouvel (Member, Author) commented Sep 11, 2018

Yeah, I added the new keyword. Maybe something like methodimploptions(512) would be appropriate for generic handling. Are there any tools other than ilasm that parse ildasm's output or IL source?

kouvel referenced this issue in kouvel/coreclr Sep 11, 2018
Related to https://github.com/dotnet/corefx/issues/32235
Workaround (and probably a long-term fix) for https://github.com/dotnet/coreclr/issues/19751
- For a method flagged with AggressiveOptimization, tiering would use a foreground tier 1 JIT on first call to the method, skipping the tier 0 JIT and call counting
@vancem (Contributor) commented Sep 11, 2018

I am sure there are a bunch of ad-hoc tools that parse ILASM, but most are probably pretty robust.

The current ILDASM syntax is the following (where 'noinlining' is a MethodImplOption):

  .method public hidebysig static void  Main(string[] args) cil managed noinlining

Thus the syntax is simply an identifier that follows the parameters. While your suggestion of allowing one of these to be methodimploption(number) is reasonable, I worry that it would 'trip up' simplified parsers (e.g. regular expressions). Thus I think it would be slightly better to have code that makes up 'generic' value identifiers, for example methodImplOption_200 (where 200 is the hex value).
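Under that proposal, the hypothetical disassembly for a method carrying the (otherwise unknown) 0x200 bit might look like this (illustrative syntax only, not something ILDASM emits today):

  .method public hidebysig static void  Main(string[] args) cil managed methodImplOption_200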

Ideally we would do this for other bits that have special syntax in ILDASM, but I am OK with just doing MethodImplOptions for now (although if people can think of ones that are likely to be augmented, we should do those proactively).

@kouvel (Member, Author) commented Sep 11, 2018

I guess the new method of specification should be in addition to the keywords for back-compat and should be preferred over the keywords if we add new bits in the future without new keywords.

@jkotas (Member) commented Sep 11, 2018

This is not the first time we are allocating a new bit here. I do not recall adding a new keyword for a new bit here ever being a problem.

should be preferred over the keywords

The keywords are there to make the text human readable. I do not think we want to be encouraging numbers that you need to look up somewhere else.

@AndyAyersMS (Member) commented:

Any thoughts on what should happen if a method marked for AggressiveOptimization is also an inline candidate?

My inclination would be to not inline such methods unless the root method was also so marked.

@vancem (Contributor) commented Sep 11, 2018

I agree with Andy's inclination.

What I like about AggressiveOptimization is that the guidance is straightforward.

I believe the guidance for AggressiveOptimization is: apply it to a method M if M is hot (e.g. uses a few % or more of the CPU) in at least one important scenario.

Since a method can be called from many places (some of which are not hot), you should not assume that EVERY place the method is used deserves to be optimized aggressively. Certainly if its caller is marked for AggressiveOptimization then it should be optimized, but the JIT is allowed to use additional heuristics.

It is another question whether a method is so small and so likely to expose more optimization (e.g. a field fetch) that it should be optimized in almost all contexts (the only exception being when you are REALLY trying to save JIT time).

Can someone from the JIT team comment on what 'AggressiveInlining' means? Do you mark the callee or the caller (I am assuming it is the callee)? Note that AggressiveOptimization does not work like that: if you apply AggressiveOptimization to M, it is not likely that M will be inlined; it is likely that M will inline the methods it calls.

For what it is worth...

@CarolEidt (Contributor) commented:

The spec is not a static thing - even though we are not currently working on updates to the spec, we should plan on having that opportunity in the future.

I like the idea of extending ildasm to flexibly handle as-yet-undefined values, and however we decide to expose them in ildasm we should modify ilasm to interpret them accordingly.

It is another question whether a method is so small and so likely to expose more optimization (e.g. a field fetch) that it should be optimized in almost all contexts (the only exception being when you are REALLY trying to save JIT time).

I would think that such a method would be a better candidate for AggressiveInlining than for AggressiveOptimization, but if the latter were specified I think that the approach that @AndyAyersMS suggests would be appropriate - only inline if the caller has AggressiveOptimization.

Can someone from the JIT team comment on what 'AggressiveInlining' means? Do you mark the callee or the caller (I am assuming it is the callee)?

You're correct, it is the callee. But I would mildly disagree with this statement:

Note that AggressiveOptimization does not work like that: if you apply AggressiveOptimization to M, it is not likely that M will be inlined; it is likely that M will inline the methods it calls.

In some sense, if you apply AggressiveOptimization to M, it probably makes it more likely that M will be inlined into methods that are also marked AggressiveOptimization, which would let you tweak where you really want it inlined.

Of course, this has the downside of presenting more knobs that can potentially be abused, but I think it's all to the general good.

@noahfalk (Member) commented:

My inclination would be to not inline such methods unless the root method was also so marked.

If I follow correctly, the prevailing suggestion is that when unmarked method A calls [AggressiveOptimization] B, the JIT would never inline B. I think this has some bad consequences...

Consider the scenario where company A ships assembly A with method A, and company B ships assembly B with method B. Further, let's assume A calling B is a hot path and the JIT correctly identifies B as a profitable inlining choice. Developers at company B learn about the new AggressiveOptimization attribute; they know B is on a hot path, and so, following our guidance, they ship version 2 of assembly B using the attribute. The JIT now sees unmarked method A calling [AggressiveOptimization] B, avoids inlining it, and performance gets worse - despite correctly following Microsoft's guidance.

I was imagining that unmarked A calling [AggressiveOptimization] B gets inlined if unmarked B would have gotten inlined.
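A minimal sketch of that scenario (the names are hypothetical, and the two classes are assumed to live in separate assemblies):

    using System.Runtime.CompilerServices;

    // Assembly B (company B): the author knows Parse is hot in some scenarios.
    public static class LibB
    {
        [MethodImpl(MethodImplOptions.AggressiveOptimization)]
        public static int Parse(string s) => int.Parse(s);
    }

    // Assembly A (company A): an unmarked caller on a hot path. The concern is that
    // marking Parse should not make the JIT *less* willing to inline it here.
    public static class AppA
    {
        public static int ReadValue(string s) => LibB.Parse(s);
    }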

@vancem (Contributor) commented Sep 11, 2018

Note that in Noah's case above, because B was marked with AggressiveOptimization it will be optimized, just not NECESSARILY inlined (assuming it is small enough). Thus I think it serves its purpose.

If B felt it was really important to all its callers (that is, it believes not just some but MOST callers would want it to be inlined), then it could add AggressiveInlining.

But let's assume B does not do that.

There may be a misunderstanding. It is my belief that marking a method with AggressiveOptimization will never make it LESS likely to be inlined. Thus if A inlined it before, it should still inline it.

Indeed, as Noah indicates, if method B is marked with AggressiveOptimization, it should make it at least SOMEWHAT more likely to be inlined into any other method (but only somewhat). This is because before, the JIT had only static information in A to decide whether things are hot (e.g. is it in a loop), but now it also knows that the implementer of B thinks it is hot in at least SOME important scenarios. Contrast this with some other unmarked method C where you don't know that (so, being a good Bayesian, B is a better choice than C on average).

But this is a second-order effect. It really is up to A to mark that it is hot to get the best result in the absence of any other profile data.

@kouvel (Member, Author) commented Sep 12, 2018

I don't see a good generic way of round-tripping new flags without using numbers in the asm, or without some way of automatically adding keywords based on the enum (which would only save a little bit of time per new flag). Any other ideas? For now I'll plan on just adding the keyword.

@AndyAyersMS (Member) commented:

Thinking about it some more -- it may not matter much, at least not immediately.

Methods that benefit from AggressiveOptimization are not likely to be considered good inline candidates by the current jit.

The thinking being: methods that benefit likely have at least one hot block where the relative execution frequency is considerably higher than the method entry block's. Such methods inevitably contain some kind of loop and so generally end up being too large for the jit to inline with its current conservative heuristics. Now sometimes such methods can be optimized better when they're inlined -- say a caller passes a key constant -- but more commonly they benefit from being the root method for some tree of inlines.

So I think for the most part we will get reasonable behavior by not giving these methods special consideration when they are inlinees.

So for 2.2 the impact of the attribute could simply be that the method always jits at Tier1. If it happens to get inlined it will likely be optimized (and inlined into) just as well as if it hadn't been inlined.

When a method is marked with both AggressiveOptimization and AggressiveInlining I think we'd let inlining prevail. And if the method still ends up getting jitted that version will be aggressively optimized. That behavior also just falls out from the way things are implemented in the jit now.

We can look at making the inliner (or other parts of the jit) more aggressive when we're jitting these methods for 3.0. And at that point there might be some divergence as inlining an AggressiveOptimization method may cause a perf loss over not inlining it. We'll have to see what makes sense there. There may be other knock-on effects too, as some of my half-baked thoughts for the future of Tier1 rely on the Tier0 code producing data that the Tier1 version uses to optimize better. So a method that always bypasses Tier0 will need special handling.

@jkotas (Member) commented Sep 12, 2018

I don't see a good generic way of round-tripping new flags without using numbers in the asm

I do not think we need to worry about roundtripping new features using old tools. If you want to roundtrip IL that uses new runtime features, you have to have matching new ilasm/ildasm.

For example, we are in the process of adding default interfaces. The IL that uses default interfaces will not roundtrip with the old ilasm/ildasm. It is by design. And we are thinking about more features like this.

@kouvel (Member, Author) commented Sep 12, 2018

There may be other knock-on effects too, as some of my half-baked thoughts for the future of Tier1 rely on the Tier0 code producing data that the Tier1 version uses to optimize better. So a method that always bypasses Tier0 will need special handling.

My thinking is that, depending on how expensive it is to produce the additional data for tier 1, that work could go into a separate tier (a profiling tier) if needed, which would run for a few iterations before switching to the optimized tier. The attribute could cause the method to start at the profiling tier (or something like that), but at that point https://github.com/dotnet/coreclr/issues/19751 will need a proper fix or some other workaround.

kouvel referenced this issue in kouvel/coreclr Sep 12, 2018
Part of fix for https://github.com/dotnet/corefx/issues/32235
Workaround for https://github.com/dotnet/coreclr/issues/19751
- For a method flagged with AggressiveOptimization, tiering would use a foreground tier 1 JIT on first call to the method, skipping the tier 0 JIT and call counting
@kouvel (Member, Author) commented Sep 14, 2018

When tiering is disabled, I'm thinking that AggressiveOptimization should still cause the runtime to skip r2r-pregenerated code and JIT the method instead. Thoughts?

Should we also not pregenerate code for such methods when using r2r?

@jkotas (Member) commented Sep 14, 2018

Should we also not pregenerate code for such methods when using r2r?

I think so.

@vancem (Contributor) commented Sep 14, 2018

AggressiveOptimization is a mark that says 'code quality is important'. JIT code quality is better than R2R, so we should prefer the JIT.

If tiering is disabled, I agree this argues for going straight to the JIT.

With tiering enabled, it should not bother counting, and should JIT as soon as the app exits 'startup mode'. This gives us good startup time (since we avoid the JIT on the critical path of startup).

Note that at crossgen time you don't know whether tiering is disabled (in fact, you expect by default that it is enabled), so you need to generate R2R code for AggressiveOptimization methods.

@jkotas (Member) commented Sep 14, 2018

With tiering enabled, it should not bother counting, and should JIT as soon as the app exits 'startup mode'.

I thought that we want to use this to fix the cold-methods-with-hot-loops problem. That means we need to always go to the Tier1 JIT right away. We cannot ever start executing the inefficient version of the AggressiveOptimization code, because we may not get an opportunity to replace it.

@kouvel (Member, Author) commented Sep 14, 2018

With tiering enabled, it should not bother counting, and should JIT as soon as the app exits 'startup mode'. This gives us good startup time (since we avoid the JIT on the critical path of startup).

When tiering is enabled, at the moment AggressiveOptimization is also being used as a workaround for https://github.com/dotnet/coreclr/issues/19751, so the very first call to the method has to run tier 1 jitted code. Once that issue has a proper fix, we can start at a different point.

Note that at crossgen time you don't know whether tiering is disabled (in fact, you expect by default that it is enabled), so you need to generate R2R code for AggressiveOptimization methods.

If we'll be using an optimized JIT regardless of whether tiering is enabled, then the pregenerated code would never be used. Once https://github.com/dotnet/coreclr/issues/19751 has a proper fix, we'll need to consider @AndyAyersMS's point about an earlier-tier version of the code that would inform the optimized tier, and whether we'd want to pregenerate such code.

kouvel referenced this issue in kouvel/coreclr Sep 17, 2018
Part of fix for https://github.com/dotnet/corefx/issues/32235
Workaround for https://github.com/dotnet/coreclr/issues/19751
- Added and set CORJIT_FLAG_AGGRESSIVE_OPT to indicate that a method is flagged with AggressiveOptimization
- For a method flagged with AggressiveOptimization, tiering uses a foreground tier 1 JIT on first call to the method, skipping the tier 0 JIT and call counting
- When tiering is disabled, a method flagged with AggressiveOptimization does not use r2r-pregenerated code
- R2r crossgen does not generate code for a method flagged with AggressiveOptimization
@CarolEidt (Contributor) commented:

If we go straight to Tier 1 we fix the hot loop problem, but we do penalize startup (more JIT time, and presumably we try harder while JIT compiling).

I would assert that the specification of AggressiveOptimization implies that runtime is more important than compile time.

Ideally we would have something that told us whether startup is important or not. If startup is unimportant, I would have no problem with going straight to tier 1.

I think that would only be useful if it were on a per-method basis. I may care about startup, but still know that certain methods are worth spending more time on even so.

If we find people start using the attribute a lot, then we could put a budget on it (or have some config that says that startup is important).

How would we determine this? I am on board with the notion that developers don't always know what they're doing with things like this, but the solution is not to make the feature work in a way that is complex and unpredictable.

@vancem (Contributor) commented Sep 17, 2018

I would assert that the specification of AggressiveOptimization implies that runtime is more important than compile time.

I would agree with you if one developer owned all the code and understood their scenarios well. But in the more likely case where libraries are involved and there are many competing scenarios to weigh, setting 'AggressiveOptimization' only says that SOME uses of the library will consider the runtime of that method important.

If, however, this library is used on the startup path of a GUI app (e.g. imagine it was a serialization library), and as with most GUI apps startup time is much more important than most steady-state performance, you can see that we actually get sub-optimal behavior (we JIT to get code quality when in fact it would be better not to, to preserve startup time).

As you correctly state, however, there is value in simplicity and predictability. For that reason, assuming people don't go bonkers with the AggressiveOptimization flag, the 'bad' effect is likely to be minimal, and we can just live with it (for the sake of simplicity/predictability).

But I could definitely imagine having an app-level flag that effectively overrides this one and says 'startup is my most important metric'. You could counter that something more nuanced is needed, because some methods may still need to be optimized for later steady-state performance. Maybe, but this simple 'I really care about startup' flag seems to fit pretty well for many (probably most) GUI apps.

Anyway, this is mostly moot. Kouvel can use this flag to force methods to tier 1 aggressively, and the rest we can save for later.

@adamsitnik (Member) commented:

Would it also be possible to "unseal" the MethodImplAttribute class?

Reason: in BenchmarkDotNet, to prevent the benchmark from being inlined, we wrap the method call in a delegate (delegate calls are not inlined yet).

So all the user needs to do is apply the [Benchmark] attribute to the method:

[Benchmark]
public void Method () { }

and there is no need to do:

[Benchmark]
[MethodImpl(MethodImplOptions.NoInlining)]
public void Method () { }

If we would like to enforce AggressiveOptimization by default for all of the [Benchmark] methods, we would have to ask users to always write:

[Benchmark]
[MethodImpl(MethodImplOptions.AggressiveOptimization)]
public void Method () { }

If we could unseal the MethodImplAttribute class, then the BenchmarkAttribute could just derive from it and set both NoInlining and AggressiveOptimization by default. The possibility of inlining delegates in the future would then not be an issue. (@AndyAyersMS @AndreyAkinshin @jorive)
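A minimal sketch of that idea (hypothetical; this does not compile today precisely because MethodImplAttribute is sealed, and as noted in the next comment compilers would also need to be changed to recognize a derived pseudo-custom attribute):

    using System.Runtime.CompilerServices;

    // Hypothetical: derives from MethodImplAttribute (requires it to be unsealed)
    // and applies NoInlining + AggressiveOptimization to every [Benchmark] method.
    public class BenchmarkAttribute : MethodImplAttribute
    {
        public BenchmarkAttribute()
            : base(MethodImplOptions.NoInlining | MethodImplOptions.AggressiveOptimization)
        {
        }
    }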

@jkotas (Member) commented Sep 21, 2018

would it also be possible to "unseal" the MethodImplAttribute class?

The .NET Framework design guidelines say "DO seal custom attribute classes". Dealing with unsealed custom attributes is a lot more expensive. Moreover, MethodImplAttribute is one of the special pseudo-custom attributes: it is recognized at compile time, not at runtime. If we were to unseal it, compilers and other metadata producers would need to be changed to account for this too. Your scenario should be discussed in a separate issue; feel free to open one. Unsealing the attribute is not the only way to achieve what you would like to do.

@kouvel (Member, Author) commented Sep 24, 2018

If we would like to enforce AggressiveOptimization by default for all of the [Benchmark] methods

In the future, the perf of methods flagged with AggressiveOptimization may diverge from that of unflagged methods. Whether a benchmark should be flagged could depend on whether real usage is expected to be flagged with the attribute. So my suggestion is not to imply AggressiveOptimization for benchmarks and instead leave the choice to the benchmark author, at least for now.

@terrajobst (Member) commented:

Looks good as proposed.

The following commit references all point at the same change, "Add MethodImplOptions.AggressiveOptimization and use it for tiering" (dotnet/coreclr#20009), with the same commit message as above:

  • kouvel referenced this issue in kouvel/coreclr Oct 1, 2018
  • kouvel referenced this issue in dotnet/coreclr Oct 3, 2018
  • dotnet-maestro-bot referenced this issue in dotnet-maestro-bot/corefx Oct 4, 2018
  • dotnet-maestro-bot referenced this issue in dotnet-maestro-bot/corert Oct 4, 2018
  • jkotas referenced this issue in dotnet/corefx Oct 4, 2018
  • kouvel referenced this issue in kouvel/coreclr Oct 4, 2018 (twice)
  • jkotas referenced this issue in dotnet/corert Oct 4, 2018
  • kouvel referenced this issue in kouvel/coreclr Oct 22, 2018
stephentoub referenced this issue in dotnet/corefx Oct 25, 2018
…tes.AggressiveOptimization (#32322)

* Expose MethodImplOptions.AggressiveOptimization

Part of fix for https://github.com/dotnet/corefx/issues/32235
Depends on dotnet/coreclr#20009
API review: https://github.com/dotnet/corefx/issues/32235

* Include new test

* Expose MethodImplAttributes.AggressiveOptimization and fix test

API review: https://github.com/dotnet/corefx/issues/32628
@kouvel (Member, Author) commented Oct 27, 2018

Exposed in dotnet/corefx#32322

@kouvel kouvel closed this as completed Oct 27, 2018
@msftgits msftgits transferred this issue from dotnet/corefx Jan 31, 2020
@msftgits msftgits added this to the 3.0 milestone Jan 31, 2020
@ghost ghost locked as resolved and limited conversation to collaborators Dec 15, 2020