
Support single-file distribution #11201

Closed
morganbr opened this issue Oct 5, 2018 · 225 comments

@morganbr
Contributor

morganbr commented Oct 5, 2018

This issue tracks progress on the .NET Core 3.0 single-file distribution feature.
Here's the design doc and staging plan for the feature.

@morganbr morganbr self-assigned this Oct 5, 2018
@mattwarren
Contributor

Out of interest, how does this initiative compare to CoreRT? They seem like similar efforts?

Is it related to 'possibly native user code', i.e. will this still allow code to be JIT-compiled, not just AOT?

Also, I assume that the runtime components ('Native code (runtime, host, native portions of the framework...)') will be the ones from the CoreCLR repo?

@morganbr
Contributor Author

morganbr commented Oct 8, 2018

You're asking great questions, but since this is still early in design, I don't have great answers yet.

Out of interest, how does this initiative compare to CoreRT? They seem like similar efforts?

There would likely be somewhat similar outcomes (a single file), but the design may have different performance characteristics or features that do/don't work. For example, a possible design could be to essentially concatenate all of the files in a .NET Core self-contained application into a single file. That's 10s of MB and might start more slowly, but on the other hand, it would allow the full capabilities of CoreCLR, including loading plugins, reflection emit and advanced diagnostics. CoreRT could be considered the other end of the spectrum -- it's single-digit MB and has a very fast startup time, but by not having a JIT, it can't load plugins or use reflection emit and build time is slower than most .NET devs are used to. It currently has a few other limitations that could get better over time, but might not be better by .NET Core 3.0 (possibly requiring annotations for reflection, missing some interop scenarios, limited diagnostics on Linux). There are also ideas somewhere between the two. If folks have tradeoffs they'd like to make/avoid, we'd be curious to hear about them.

Is it related to 'possibly native user code', i.e. will this still allow code to be JIT-compiled, not just AOT?

By "native user code," I meant that your app might have some C++ native code (either written by you or a 3rd-party component). There might be limits on what we can do with that code -- if it's compiled into a .dll, the only way to run it is off of disk; if it's a .lib, it might be possible to link it in, but that brings in other complications.

Also, I assume that the runtime components ('Native code (runtime, host, native portions of the framework...)') will be the ones from the CoreCLR repo?

Based on everything above, we'll figure out which repos are involved. "Native portions of the framework" would include CoreFX native files like ClrCompression and the Unix PAL.

@ayende
Contributor

ayende commented Oct 9, 2018

A single file distribution in this manner, even if it has slightly slower startup time, can be invaluable for ease of deployment. I would much rather have the ability to have the full power than be forced to give up some of that.

Some scenarios that are of interest to us. How would this work in terms of cross platform?
I assume we'll have a separate "file" per platform?

With regards to native code, how would I be able to choose different native components based on the platform?

@TheBlueSky
Contributor

Some scenarios that are of interest to us. How would this work in terms of cross platform?
I assume we'll have a separate "file" per platform?
With regards to native code, how would I be able to choose different native components based on the platform?

@ayende, I'm quoting from @morganbr's comment:

a possible design could be to essentially concatenate all of the files in a .NET Core self-contained application into a single file.

The current cross-platform story for self-contained applications is creating a deployment package per platform that you'd like to target, because you ship the application with the runtime, which is platform-specific.

@mattwarren
Contributor

@morganbr I appreciate you taking the time to provide such a detailed answer.

I'll be interested to see where the design goes; this is a really interesting initiative.

@morganbr
Contributor Author

I have a few questions for folks who'd like to use single-file. Your answers will help us narrow our options:

  1. What kind of app would you be likely to use it with? (e.g. WPF on Windows? ASP.NET in a Linux Docker container? Something else?)
  2. Does your app include (non-.NET) C++/native code?
  3. Would your app load plugins or other external dlls that you didn't originally include in your app build?
  4. Are you willing to rebuild and redistribute your app to incorporate security fixes?
  5. Would you use it if your app started 200-500 ms more slowly? What about 5 seconds?
  6. What's the largest size you'd consider acceptable for your app? 5 MB? 10? 20? 50? 75? 100?
  7. Would you accept a longer release build time to optimize size and/or startup time? What's the longest you'd accept? 15 seconds? 30 seconds? 1 minute? 5 minutes?
  8. Would you be willing to do extra work if it would cut the size of your app in half?

@tpetrina

  1. Console/UI app on all platforms.
  2. Maybe as a third party component.
  3. Possibly yes.
  4. Yes, especially if there is a simple ClickOnce-like system.
  5. Some initial slowdown can be tolerated. Can point 3 help with that?
  6. Depends on assets. Hello world should have size on the order of MB.
  7. Doesn't matter if it is just production.
  8. Like whitelisting reflection stuff? Yes.

@TheBlueSky
Contributor

@morganbr, do you think that these questions are better asked to a broader audience; i.e., broader than the people who know about this GitHub issue?

@benaadams
Member

For example, a possible design could be to essentially concatenate all of the files in a .NET Core self-contained application into a single file.

Are you looking at compressing it, or using a compressed file system inside the file?

@morganbr
Contributor Author

@tpetrina, thanks! Point 3 covers a couple of design angles:

  1. Tree shaking doesn't go well with loading plugins the tree shaker hasn't seen, since it could eliminate code the plugin relies on.
  2. CoreRT doesn't currently have a way to load plugins.

Point 5 is more about whether we'd optimize for size or startup time (and how much).
Point 8: yes, I was mostly thinking about reflection stuff.

@TheBlueSky, we've contacted other folks as well, but it helps to get input from the passionate folks in the GitHub community.

@benaadams, compression is on the table, but I'm currently thinking of it as orthogonal to the overall design. Light experimentation suggests zipping may get about 50% size reduction at the cost of several seconds of startup time (and build time). To me, that's a radical enough trade-off that if we do it, it should be optional.

@Suchiman
Contributor

@morganbr several seconds of startup time when using compression? I find that hard to believe when considering that UPX claims decompression speeds of

~10 MB/sec on an ancient Pentium 133, ~200 MB/sec on an Athlon XP 2000+.

@ayende
Contributor

ayende commented Oct 14, 2018

@morganbr, for me the answers are:

  1. Service (console app running Kestrel, basically). Running as Windows Service / Linux Daemon or in docker.
  2. Yes
  3. Yes, typically managed assemblies using AssemblyContext.LoadFrom. These are provided by the end user.
  4. Yes, that is expected. In fact, we already bundle the entire framework anyway, so no change from that perspective.
  5. As a service, we don't care that much for the startup time. 5 seconds would be reasonable.
  6. 75MB is probably the limit. A lot depends on the actual compressed size, since all packages are delivered compressed.
  7. For release builds, longer (even much longer) build times are acceptable.
  8. Yes, absolutely. Size doesn't matter that much, but smaller is better.

Something that I didn't see mentioned and is very important is the debuggability of this.
I hope that this isn't going to mangle stack traces, and we would want to be able to include pdb files or some sort of debugging symbols.
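(For context, loading user-provided managed assemblies as described in point 3 is typically done through `Assembly.LoadFrom` or an `AssemblyLoadContext`. A minimal sketch, in which the `plugins` folder name is a hypothetical example, not anything from this thread:)

```csharp
using System;
using System.IO;
using System.Reflection;
using System.Runtime.Loader;

// Minimal plugin-loading sketch: load every DLL from a user-supplied
// folder into an isolated, collectible AssemblyLoadContext so the
// plugins stay separate from the app's own assemblies.
var alc = new AssemblyLoadContext("Plugins", isCollectible: true);
foreach (var dll in Directory.GetFiles("plugins", "*.dll"))
{
    Assembly asm = alc.LoadFromAssemblyPath(Path.GetFullPath(dll));
    Console.WriteLine($"Loaded plugin: {asm.GetName().Name}");
}
```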

@ayende
Contributor

ayende commented Oct 14, 2018

About compression, take into account the fact that in nearly all cases, the actual delivery mechanism is already compressed.
For example, nuget packages.
Users are also pretty well versed in unzipping things, so that isn't much of an issue.
I think you can do compression on the side.

@morganbr
Contributor Author

Thanks, @ayende! You're right that I should have called out debuggability. I think there are only a few minor ways debugging could be affected:

  1. It might not be possible to use Edit and Continue on a single-file (due to needing a way to rebuild and reload the original assembly)
  2. The single-file build might produce a PDB or some other files that are required for debugging beyond those that came with your assemblies.
  3. If CoreRT is used, it may have some debugging features that get filled in over time (especially on Linux/Mac).

When you say "include pdb files", do you want those inside the single file or just the ability to generate them and hang onto them in case you need to debug the single-file build?

@ayende
Contributor

ayende commented Oct 16, 2018

  1. Not an issue for us. E&C is not relevant here since this is likely to be only used for actual deployment, not day to day.
  2. Ideally, we have a single file for everything, including the PDBs, not one file and a set of pdbs on the side. There is already the embedded PDB option, if that would work, it would be great.
  3. When talking about debug, I'm talking more about production time rather than attaching a debugger live. More specifically, stack trace information including file & line numbers, being able to resolve symbols when reading dump, etc.
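(The embedded-PDB option referred to here is an existing MSBuild setting; a project can opt in with something like the following csproj fragment:)

```xml
<!-- Embed the PDB into the assembly itself, so no separate .pdb file
     needs to ship alongside the single-file build. -->
<PropertyGroup>
  <DebugType>embedded</DebugType>
</PropertyGroup>
```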

@bencyoung

  1. Mainly services but some UI
  2. Some do, but this wouldn't be urgent
  3. Yes
  4. Yes
  5. A few seconds is ok
  6. Doesn't matter to us. Sum of dll size is fine
  7. Ideally not
  8. Size is not of primary importance for us

Another question for us is whether you'd be able to do this for individual components too (perhaps even staged)? E.g. we have library dlls that use lots of dependencies. If we could package those it would save a lot of pain of version management etc. If these in turn could be packaged into an exe that would be even nicer?

@Kosyne

Kosyne commented Nov 18, 2018

  1. Services and some UI.
  2. Not at the moment.
  3. Yes. Ideally plugins that could be loaded from a folder and reloaded at runtime.
  4. Yes
  5. Not a problem so long as we aren't pushing 10-15+.
  6. Sum of DLL size, or similar.
  7. Yes. For a production build time isn't really a problem so long as debug/testing builds build reasonably quick.
  8. Depends, but the option would be handy.

@expcat

expcat commented Nov 18, 2018

  1. Service and UI.
  2. Sometimes.
  3. Yes, usually.
  4. Yes.
  5. It is best to be less than 5 seconds.
  6. The UI is less than 5 seconds, Service doesn't matter.
  7. The build time is not important, and the optimization effect is the most important.
  8. Yes.

@MichalStrehovsky
Member

@tpetrina @ayende @bencyoung @Kosyne @expcat you responded yes to question 3 ("Would your app load plugins or other external dlls that you didn't originally include in your app build?") - can you tell us more about your use case?

The main selling point of a single file distribution is that there is only one file to distribute. If your app has plugins in separate files, what value would you be getting from a single file distribution that has multiple files anyway? Why is "app.exe+plugin1.dll+plugin2.dll" better than "app.exe+coreclr.dll+clrjit.dll+...+plugin1.dll+plugin2.dll"?

@ayende
Contributor

ayende commented Nov 19, 2018

app.exe + 300+ dlls - which is the current state today is really awkward.
app.exe + 1-5 dlls which are usually defined by the user themselves is much easier.

Our scenario is that we allow certain extensions by the user, so we would typically only deploy a single exe and the user may add additional functionality as needed.

It isn't so much that we plan to do that, but we want to be able to do that if the need arises.

@bencyoung

@ayende Agreed, same with us.

Also, if we could do this at the dll level then we could package dependencies inside our assemblies so they didn't conflict with client assemblies. I.e., by choosing a version of Newtonsoft.Json you are currently defining it for all programs, plugins and third-party assemblies in the same folder, but if you could embed it then third parties have more flexibility and version compatibility increases.

@expcat

expcat commented Nov 19, 2018

Agree with @ayende .

@morganbr
Contributor Author

Thanks, everyone for your answers! Based on the number of folks who will either use native code or need to load plugins, we think the most compatible approach we can manage is the right place to start. To do that, we'll go with a "pack and extract" approach.

This will be tooling that essentially embeds all of the application and .NET's files as resources into an extractor executable. When the executable runs, it will extract all of those files into a temporary directory and then run as though the app were published as a non-single file application. It won't start out with compression, but we could potentially add it in the future if warranted.

The trickiest detail of this plan is where to extract files to. We need to account for several scenarios:

  • First launch -- the app just needs to extract to somewhere on disk
  • Subsequent launches -- to avoid paying the cost of extraction (likely several seconds) on every launch, it would be preferable to have the extraction location be deterministic and allow the second launch to use files extracted by the first launch.
  • Upgrade -- If a new version of the application is launched, it shouldn't use the files extracted by an old version. (The reverse is also true; people may want to run multiple version side-by-side). That suggests that the deterministic path should be based on the contents of the application.
  • Uninstall -- Users should be able to find the extracted directories to delete them if desired.
  • Fault-tolerance -- If a first launch fails after partially extracting its contents, a second launch should redo the extraction
  • Running elevated -- Processes run as admin should only run from admin-writable locations to prevent low-integrity processes from tampering with them.
  • Running non-elevated -- Processes run without admin privileges should run from user-writable locations

I think we can account for all of those by constructing a path that incorporates:

  1. A well-known base directory (e.g. %LOCALAPPDATA%\dotnetApps on Windows and user profile locations on other OSes)
  2. A separate subdirectory for elevated
  3. Application identity (maybe just the exe name)
  4. A version identifier. The number version is probably useful, but insufficient since it also needs to incorporate exact dependency versions. A per-build guid or hash might be appropriate.

Together, that might look something like c:\users\username\AppData\Local\dotnetApps\elevated\MyCoolApp\1.0.0.0_abc123\MyCoolApp.dll
(Where the app is named MyCoolApp, its version number is 1.0.0.0 and its hash/guid is abc123 and it was launched elevated).
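The path scheme described above could be sketched roughly like this. This is purely illustrative: the base directory name, the choice of SHA-256, and the 8-character hash truncation are assumptions for the example, not the final design.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

// Illustrative sketch of a deterministic extraction path:
// base dir + elevation subdirectory + app name + version + content hash.
// Hashing the bundle contents means a rebuilt app extracts to a fresh
// directory (upgrade/side-by-side), while identical builds reuse one
// (fast subsequent launches).
static string GetExtractionDir(string appName, string version,
                               byte[] bundleBytes, bool elevated)
{
    string baseDir = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
        "dotnetApps");
    string hash = Convert.ToHexString(SHA256.HashData(bundleBytes))[..8];
    return Path.Combine(baseDir,
                        elevated ? "elevated" : "user",
                        appName,
                        $"{version}_{hash}");
}
```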

There will also be work required to embed files into the extractor. On Windows, we can simply use native resources, but Linux and Mac may need custom work.

Finally, this may also need adjustments in the host (to find extracted files) and diagnostics (to find the DAC or other files).

CC @swaroop-sridhar @jeffschwMSFT @vitek-karas

@Kosyne

Kosyne commented Nov 29, 2018

I feel like this cure is worse than the disease. If we have to deal with external directories (different across OS's), updating, uninstalling and the like, that flies in the face of my reason for desiring this feature in the first place (keeping everything simple, portable, self contained and clean).

If it absolutely has to be this way, for my project, I'd much prefer a single main executable and the unpacked files to live in a directory alongside that executable, or possibly the ability to decide where that directory goes.

That's just me though, I'm curious to hear from others as well.

@ChristianSauer

I have to agree here; using a different directory can have many exciting problems - e.g. you place a config file alongside the exe and this file is not picked up because the "real" directory is somewhere else.
Disk space could be a problem too, as could random file locks due to access policies, etc.
I would like to use this feature, but not if it adds a host of failure modes which are impossible to detect beforehand.

@strich

strich commented Nov 29, 2018

Agreed with @Kosyne - The proposed initial solution seems to simply automate an "installer" of sorts. If that was the limit of the problem we're trying to solve with a single exec then I think we'd have all simply performed that automation ourselves.

The key goal of the single exec proposal should be to be able to run an executable on an unmanaged system. Who knows if it even has write access to any chosen destination "install" directory? It should certainly not leave artefacts of itself after launch either (not by default).

As a small modification to the existing proposal to satisfy the above: Could we not unpack into memory and run from there?

@ayende
Contributor

ayende commented Nov 29, 2018

Agree with the rest of the comments. Unzipping to another location is something that is already available.
We can have a self-extracting zip which will run the extracted files fairly easily. That doesn't answer a lot of the concerns that this is meant to answer and is just another name for installation.

The location of the file is important. For example, in our case, that would mean:

  • Finding the config file (which we generate on the fly if not there and let the user customize)
  • Finding / creating data files, which are usually relative to the source exe.
  • The PID / name of the process should match, to ensure proper monitoring / support.

One of our users needs to run our software from a DVD; how does that work on a system that may not actually have a hard disk to run on?

I agree that it would be better to do everything in memory. And the concern about the startup time isn't that big, I would be fine paying this for every restart, or manually doing a step to alleviate that if needed.

Another issue here is the actual size. If this is just (effectively) an installer, that means that we are talking about file sizes for a reasonable app in the 100s of MB, no?

@GSPP

GSPP commented Nov 30, 2018

It seems that building the proposed solution does not require many (if any) CLR changes. Users can already build a solution like that. There is no point in adding this to CoreCLR, especially since the use case for this is fairly narrow and specific.

@ayende
Contributor

ayende commented Nov 30, 2018

@GSPP This seems like basically something that I can do today with 7z-Extra, I agree that if this is the case, it would be better to not have it at all.

@replaysMike

replaysMike commented Nov 11, 2019

@chris3713 The best way for accessing the exe location currently is by PInvoke-ing to native APIs. For example: GetModuleFileNameW(NULL, <buffer>, <len>)

It seems Environment.CurrentDirectory is a better solution, though I haven't tried both approaches on something other than Windows yet.

EDIT: Nope. That path is subject to change at different entry points in the application. No good.
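A P/Invoke along the lines replaysMike describes might look like this. This is a hedged, Windows-only sketch; the `ExeLocator` class name is invented for the example:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;

static class ExeLocator
{
    // GetModuleFileNameW with a null module handle returns the path of
    // the running host executable itself - useful because in a
    // pack-and-extract single-file app, the managed assemblies live in
    // a temporary extraction directory, not next to the .exe.
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
    private static extern uint GetModuleFileNameW(IntPtr hModule,
                                                  StringBuilder lpFilename,
                                                  uint nSize);

    public static string GetExePath()
    {
        var buffer = new StringBuilder(1024);
        GetModuleFileNameW(IntPtr.Zero, buffer, (uint)buffer.Capacity);
        return buffer.ToString();
    }
}
```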

@Webreaper

On a slightly related note, I found this regression in the single-file publishing of Blazor apps in the latest preview of VS for Mac: dotnet/aspnetcore#17079 - I've reported it under AspNetCore/Blazor, but it may be that this is more relevant for the coreclr group - not sure. Will leave it for you guys to move around!

@ghost

ghost commented Nov 17, 2019

@Suchiman careful, that compiler has problems:

dotnet/roslyn#39856

@Suchiman
Contributor

@cup except that using the file path I've named, you're using the old C# 5 compiler written in C++; that is not Roslyn, and they'll probably close that issue for that reason. But Roslyn can do the same thing, just at a different path...

@swaroop-sridhar
Contributor

On a slightly related note, I found this regression in the single-file publishing of Blazor apps in the latest preview of VS for Mac: aspnet/AspNetCore#17079 - I've reported it under AspNetCore/Blazor, but it may be that this is more relevant for the coreclr group - not sure. Will leave it for you guys to move around!

@Webreaper Thanks for reporting the issue; that looks like an ASP.NET issue regarding static assets, so that's the right place to file it.

@devedse

devedse commented Nov 18, 2019

** Moving post from other issue to here. Original post: https://github.com/dotnet/coreclr/issues/27528 **

@swaroop-sridhar ,

The startup time of .NET Core single-file WPF apps is a lot slower than the original ILMerged WPF application built on .NET 4.7. Is this to be expected or will this improve in the future?

Builds come from my ImageOptimizer: https://github.com/devedse/DeveImageOptimizerWPF/releases

| Type | Estimated first startup time | Estimated second startup time | Size | Download link |
| --- | --- | --- | --- | --- |
| .NET 4.7.0 + ILMerge | ~3 sec | ~1 sec | 39.3 MB | LINK |
| `dotnet publish -r win-x64 -c Release --self-contained=false /p:PublishSingleFile=true` | ~10 sec | ~3 sec | 49 MB | |
| `dotnet publish -r win-x64 -c Release /p:PublishSingleFile=true` | ~19 sec | ~2 sec | 201 MB | |
| `dotnet publish -r win-x64 -c Release /p:PublishSingleFile=true /p:PublishTrimmed=true` | ~15 sec | ~3 sec | 136 MB | LINK |
| `dotnet publish -r win-x64 -c Release` | ~2.5 sec | ~1.5 sec | 223 KB for exe (+400 MB in dlls) | |

@devedse, to make sure, is the "second startup" the average of several runs (other than the first)?
I'm curious, but lacking any explanation for why the `/p:PublishSingleFile=true /p:PublishTrimmed=true` run should be slower than the `/p:PublishSingleFile=true` run.

So, before investigating, I want to make sure the numbers in the "second startup" column are stable and that the difference in startup is reproducible.

Also, this issue is about single-file plugins; can you please move the perf discussion to a new issue, or to dotnet/coreclr#20287? Thanks.

@swaroop-sridhar , in response to your question about it being the average:
It's a bit hard for me to time this very accurately, so the timing was done by counting while the application was starting and then trying it a few times to see if there's a significant difference in startup time. If you're aware of a better method, you can easily reproduce it by building my solution: https://github.com/devedse/DeveImageOptimizerWPF

My main question relates to why it takes longer for a bundled (single file) application to start in comparison to an unbundled .exe file.

@igloo15

igloo15 commented Nov 19, 2019

I may be wrong here, but it makes sense to me since there is overhead with a single file: essentially you have an app that is starting another app, while the ILMerged app starts directly. ILMerge only merged referenced dlls into the exe; it did not wrap the whole thing in another layer, which is what is currently being done with PublishSingleFile.

@RajeshAKumar

@devedse The single file is essentially extracting, checking checksums, etc. before starting the dotnet run.
I guess that is why it takes that time.
The extraction is "cached", so on the next run there is no IO overhead.

@devedse

devedse commented Nov 19, 2019

@RajeshAKumar, hmm, is extracting really the way to go in this scenario? Wouldn't it be better to go the ILMerge way and actually merge the DLLs into one single bundle?

Especially for bigger .exe files you're also incurring the disk-space cost of storing all files twice.

@Safirion

Safirion commented Nov 19, 2019

@devedse We are all waiting for the next stages of this feature (run from bundle), but for now, it's the only solution. 😉

https://github.com/dotnet/designs/blob/master/accepted/single-file/staging.md

@vitek-karas
Member

(Mostly repeating what was already stated):
First start is expected to be much slower - it extracts the app onto the disk, so there's a lot of IO. Second and subsequent starts should be almost identical to the non-single-file version of the app. In our internal measurements we didn't see a difference.

How to measure: We used tracing (ETW on Windows) - there are events when the process starts and then there are runtime events which can be used for this - it's not exactly easy though.

As mentioned by @Safirion we are working on the next improvement for single-file which should run most of the managed code from the .exe directly (no extract to disk). Can't promise a release train yet though.

JIT: All of the framework should be precompiled with ReadyToRun (CoreFX, WPF), so at startup only the application code should be JITed. It's not perfect, but it should make a big difference. Given the ~1-2 second startup times, I think it is already being used in all of the tests.
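(For readers following along: ReadyToRun precompilation of the application's own code can also be enabled at publish time with a standard publish flag, e.g.:)

```shell
# Publish a self-contained single file with the app's own assemblies
# precompiled via ReadyToRun, reducing JIT work at startup.
dotnet publish -r win-x64 -c Release \
    /p:PublishSingleFile=true /p:PublishReadyToRun=true
```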

@devedse

devedse commented Nov 19, 2019

Thanks all, I wasn't aware of the next steps that are planned. This clarifies it.

@Webreaper

@RUSshy Why so much hate? If you don't want the startup delay when you first launch then don't use single-file deployment.

I find the startup is significantly less than 10s, and since it's only the first time you run, it's no problem at all. I'm deploying a server-side webapp, which means in most cases it's going to start up once and then run for days or weeks, so the initial extraction is negligible in the scheme of things. I'd much prefer this as a stop-gap until there's a single compiled image, because it just makes deployment far easier than copying hundreds of DLLs around the place.

@am11
Member

am11 commented Nov 21, 2019

+1. In my case, we have a build docker generating a single exe, and a separate docker to run the app (using a regular Alpine docker image without dotnet). After the build step, we hot-load the runtime container once and docker-commit the layer. Subsequently, we have not observed any performance regression compared to a framework-dependent deployment. Once the load-from-bundle mechanism is implemented and shipped, we will remove the intermediate hot-loading step.

@vitek-karas, is there an issue tracking "load runtime assets from bundle" feature? interested in understanding what kind of impediments are there. :)

@vitek-karas
Member

@am11 We're currently putting together the detailed plan. You can look at the prototype which has been done in https://github.com/dotnet/coreclr/tree/single-exe. The real implementation will probably not be too different (obviously better factoring and so on, but the core idea seems to be sound).

@Safirion

Safirion commented Nov 21, 2019

@Webreaper For web apps it isn't a problem at all, but .NET Core 3 is now recommended for WPF/WinForms development, and shipping a desktop application .exe lost among hundreds of .dlls is not an option, so I totally understand the frustration with the first stage of this feature. ;)

And no user today waits 10 sec (or even more than 3 sec) before re-clicking on an exe. The fact that there is no loading indicator is the second big issue of this feature. Unfortunately, it seems that the loading indicator will not be part of .NET Core 3.1, so users will have to be patient...

Desktop developers are really waiting for stage 2, and I hope it will be part of .NET 5, because right now desktop development in .NET Core is a really bad experience for end users.

@GSPP

GSPP commented Nov 27, 2019

.NET is objectively the best platform for most applications these days. I wish more people realized that.

The war stories I hear from other platforms such as Java, Go, Rust, Node, ... are frankly disturbing. These platforms are productivity killers.

@charlesroddie

in my opinion this is a waste of time and resources, focus on AOT compilation and tree-shaking, put all your resources there, stop with hacks

I agree. .NET has a great type system. Too many times it gets circumvented using reflection. Tooling needs to focus on AOT but also on minimizing reflection. dotnet/corert#7835 (comment) would be a very good start. The billion-dollar mistake of nulls is being mitigated now; the same should be done with reflection, with a setting or marker for reflection-free code (or otherwise linker- or corert-compatible code).

@msftgits msftgits transferred this issue from dotnet/coreclr Jan 31, 2020
@msftgits msftgits added this to the Future milestone Jan 31, 2020
@JohnNilsson

JohnNilsson commented Mar 23, 2020

Eliminating reflection would be awesome. So much suffering could be avoided if reflection was banned. My most recent horror story was discovering I couldn't even move code around, because the framework used (Service Fabric SDK) found it prudent to tie the serialized bytes to the assembly name of the serializer implementation, with no override possible.

Any progress towards discouraging reflection would be progress.

Btw was looking for way to merge assemblies to reduce bundle size and load times, allow whole program optimization. I gather this issue isn’t really targeted at that.

Edit: Since this post gathered some reactions just to clarify. I believe meta programming should happen at design-time, where the code is the data, and it’s under my control.

Types I like to use to enforce invariants that I can trust. Runtime reflections breaks that trust.

Hence I'd like runtime reflection to be replaced with design-time meta programming, where it could also be more powerful, overlapping with use cases such as analyzers, quick fixes and refactoring.

@ghost

ghost commented May 13, 2020

Tagging subscribers to this area: @swaroop-sridhar
Notify danmosemsft if you want to be subscribed.

@swaroop-sridhar
Contributor

Single-file feature design is in this document: https://github.com/dotnet/designs/blob/master/accepted/2020/single-file/design.md
Tracking progress of single-file apps in this issue: #36590.
Thanks to everyone for your feedback and suggestions in this issue.

@ghost ghost locked as resolved and limited conversation to collaborators Dec 15, 2020