
Migration story for running untrusted code in CoreCLR without AppDomains #4108

Closed
eatdrinksleepcode opened this issue Apr 5, 2015 · 39 comments

@eatdrinksleepcode

Searching the internet returns various references to AppDomains not being supported in CoreCLR, and various explanations as to why that is. Many of these explanations make sense: AppDomains are "heavy", they attempt to provide many levels of isolation with a single mechanism, they are not "pay for play", etc.

However, even if their implementation was less than ideal, AppDomains served a number of important functions. If they are not going to be included in .NET Core, it is important to provide a migration path for these scenarios. I am having trouble finding information on these migration paths.

In particular, the scenario I am interested in is executing untrusted code. I am working on a website that would allow users to provide their own code which will be compiled and executed on the server. Obviously this must be done securely. Under the original .NET Framework, this could be accomplished using security policy. After security policy was deprecated, sandboxing provided this capability. With the removal of AppDomains (and the labeling of sandboxing as not being required for Core scenarios), it is no longer clear how I could implement my desired functionality.

May I request that more information about migration paths from AppDomains to Core be posted, in blog posts if nothing else? And in particular, can someone comment about how securely executing untrusted code can be accomplished under .NET Core?

@ellismg
Contributor

ellismg commented Apr 5, 2015

Are you unable to run the third party code in another process (which would run with a restricted set of permissions)?

I believe the general guidance for folks that are running untrusted code is to use the OS provided primitives like processes for sandboxing purposes.
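
For illustration, a minimal sketch of that out-of-process approach, assuming the untrusted code is compiled into its own executable; "UntrustedHost.exe" and "plugin.dll" are hypothetical names, not anything from the thread:

```csharp
using System;
using System.Diagnostics;

// Hedged sketch: run the untrusted code in its own process so the OS,
// rather than the runtime, provides the isolation boundary.
// "UntrustedHost.exe" and "plugin.dll" are placeholders.
var psi = new ProcessStartInfo
{
    FileName = "UntrustedHost.exe",
    Arguments = "plugin.dll",
    UseShellExecute = false,        // required for stream redirection
    RedirectStandardOutput = true
};

using var child = Process.Start(psi);
string output = child.StandardOutput.ReadToEnd();   // read results from the child
child.WaitForExit();
Console.WriteLine($"Exit code {child.ExitCode}: {output}");
```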

@MattWhilden
Contributor

I think this brings up a great point about our current lack of guidance for the giant ball of scenarios (see #3986 for a similar issue). @ellismg is correct in that we'd greatly prefer to be out of the security business, but providing guidance or new mechanisms is certainly in our wheelhouse.

@eatdrinksleepcode It's most helpful for us to bite these off in smaller bits. It helps us figure out which scenarios people most often use without having to re-implement everything. In this case, it seems like the request is pretty straightforward: guidance for running untrusted code via CoreCLR. Would you mind either opening a new issue for that or re-purposing this one?

@jakesays-old

There are other features of appdomains that are not related to security. I have a scenario where I need to customize the *.exe.config file at runtime before it is referenced. The only way to accomplish this is to start up a new appdomain that uses the customized configuration file.

The biggest non-security use, though, is being able to unload assemblies. I understand this can be done with child processes, but using processes adds a whole new level of complexity (communication between the two processes, etc.).

We are losing a ton of functionality by dropping appdomains, and I for one am concerned.

@davidfowl
Member

@JakeSays What scenario is that exactly? Assuming you never had app domains to begin with, can you list out the things you need to do? I agree that unloadable assemblies are important, and AssemblyLoadContext will be the way to accomplish this in the future.
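
For reference, a minimal sketch of what collectible AssemblyLoadContext usage looks like in .NET Core 3.0 and later; the plugin path and comments are placeholders:

```csharp
using System;
using System.Reflection;
using System.Runtime.Loader;

// Hedged sketch of a collectible AssemblyLoadContext (.NET Core 3.0+).
// The plugin path and member names are placeholders.
var alc = new AssemblyLoadContext("PluginContext", isCollectible: true);
Assembly asm = alc.LoadFromAssemblyPath(@"C:\plugins\plugin.dll");

// ... invoke the plugin via reflection or a shared interface ...

alc.Unload();
// Unloading is cooperative: it only completes once nothing from the
// context (types, instances, delegates) is still referenced and the GC
// has collected them.
```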

@jakesays-old

@davidfowl Ah, I wasn't aware of AssemblyLoadContext. Excellent. The other scenario is the one I described regarding customizing .exe.config files at runtime. I 'bootstrap' my application by generating a custom .config file, then spool up a new appdomain that runs the app. There's a property that lets me point to the config file to use.
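
For context, roughly what that .NET Framework bootstrap pattern looks like; the paths are placeholders, and AppDomainSetup does not exist in .NET Core:

```csharp
using System;

// Hedged sketch of the classic .NET Framework pattern: generate a config
// file, then run the app in a new AppDomain that points at it.
// Paths are placeholders; this API is not available in .NET Core.
var setup = new AppDomainSetup
{
    ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
    ConfigurationFile = @"C:\app\generated\MyApp.exe.config"
};

AppDomain domain = AppDomain.CreateDomain("BootstrappedApp", null, setup);
domain.ExecuteAssembly(@"C:\app\MyApp.exe");
AppDomain.Unload(domain);
```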

@eatdrinksleepcode
Author

@ellismg

I believe the general guidance for folks that are running untrusted code is to use the OS provided primitives like processes for sandboxing purposes.

Where would I go to find this guidance? Where can I find a thorough analysis of the various related scenarios, example implementations, etc?

Are you unable to run the third party code in another process (which would run with a restricted set of permissions)?

I probably can, but there are a number of reasons why that would be a less desirable solution:

  • Getting error information from the external process if something goes wrong will be more difficult than from an AppDomain.
  • Starting up an external process to run the code will have more overhead than running the code in-process.
  • I don't currently know whether my solution will need to work on multiple operating systems, but if it does, executing external processes securely will likely require more OS-specific code than the AppDomain implementation.

@eatdrinksleepcode eatdrinksleepcode changed the title Migration story for current AppDomain scenarios Migration story for running untrusted code in CoreCLR without AppDomains Apr 6, 2015
@eatdrinksleepcode
Author

@MattWhilden I have updated the title of the issue

@davidfowl
Member

The other scenario is the one I described regarding customizing .exe.config files at runtime. I 'bootstrap' my application by generating a custom .config file, then spool up a new appdomain that runs the app. There's a property that lets me point to the config file to use.

That's not a scenario, though. The APIs for hosting the CLR don't even talk about a configuration file (there is no System.Configuration). It's all code-based in CoreCLR AFAIK (somebody will correct me if I'm wrong). So the question is, what is the scenario?

@MattWhilden
Contributor

@JakeSays Let's take "customizing .exe.config files at runtime" into a separate issue. As @davidfowl suggests, we'd love to know what your goal was when jumping through all these hoops. That'll help us generate a solution that fits within the vision for CoreCLR going forward.

@jakesays-old

@davidfowl Well, in the context of a story to migrate away from appdomains, one of the features appdomains offer is the ability to name a custom configuration file. See AppDomainSetup.ConfigurationFile. So yeah, it is a scenario that needs to be addressed.

@MattWhilden the goal was simple: customize the configuration file at runtime, based on parameters pulled from a central configuration service. Trust me, it was not my preferred solution, but I could find no other way to solve the problem.

The feature of specifying a configuration file may not be part of the appdomain whatever, but appdomains are the vehicle for making it happen. If they go away, I'd at least like an alternative.

@davidfowl
Member

Well, in the context of a story to migrate away from appdomains, one of the features appdomains offer is the ability to name a custom configuration file. See AppDomainSetup.ConfigurationFile. So yeah, it is a scenario that needs to be addressed.

Given that there's no System.Configuration, is the scenario here that you would provide configuration files per isolate (whatever an isolate/unit is in your application)? What were you setting in these files? App settings? Binding redirects (which don't exist anymore)?

The reason the scenario matters is because CoreCLR may not even have what you're trying to set up, or it might be accomplished in a totally different way. It's hard to determine what you need if you describe the problem in terms of the solution.

@MattWhilden
Contributor

@JakeSays Again, let's move this to another Issue. This Issue is currently purposed for tracking information about untrusted code and I'd hate for all this to get lost along the way.

@jakesays-old

Ok. I'll open a new issue. Should it be associated with CoreCLR or CoreFX?

@MattWhilden
Contributor

CoreCLR is fine. Thanks!

@eatdrinksleepcode
Author

Coming back to this: I don't think my original concern has been addressed. There seem to be some very serious changes coming in vNext that are not sufficiently documented. It's not enough to say that more will come later: many of us are making decisions about our code now in preparation for being compatible with .NET Core, and we need to understand what that requires.

@epsitec
Contributor

epsitec commented Jun 25, 2015

I agree with @eatdrinksleepcode that we need to know what is coming. Knowing that assembly unloading will be supported (#295) is a first step, but understanding how far this will go is another matter altogether.

What will happen if I need to hot-replace an assembly? I don't see how you can possibly unload all of its types in a clean way if there is no longer any AppDomain isolation in place. Does this effectively mean that we will have to move to separate processes? And then, my concern is: how efficiently will I be able to remote into the other process? Won't this be an order of magnitude slower than cross-AppDomain remoting?

@kangaroo
Contributor

Tag @richlander -- last 2 comments vis-a-vis roadmap.

@jakesays-old

It is my understanding that process isolation will be the recommended alternative, unfortunately. However, given that inter-appdomain marshaling involves serialization, I wouldn't expect IPC via named pipes or shared memory to be much slower. The biggest perf hit will probably be the process start overhead.

The downside, though, is that we'll lose remoting as well, so we'll need to come up with an alternative.

@pootow

pootow commented Jan 15, 2016

There is another problem if we do not have AppDomains: how do I globally handle unhandled exceptions? I know that this is generally bad practice, but I think the app needs to at least try its best to log the unhandled exception.

@jkotas
Member

jkotas commented Dec 2, 2017

We have no plans to support secure sandboxing in .NET Core that would allow running arbitrary untrusted code.

globally handle unhandled exceptions

The AppDomain.UnhandledException event was added back in .NET Core 2.0.
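
A minimal sketch of that handler on .NET Core 2.0+; the logging call is a placeholder:

```csharp
using System;

// Hedged sketch: last-chance logging via AppDomain.UnhandledException,
// available again in .NET Core 2.0+. The process still terminates after
// the handler runs.
AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
{
    var ex = e.ExceptionObject as Exception;
    Console.Error.WriteLine($"Unhandled exception: {ex}");   // replace with real logging
};
```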

@jkotas jkotas closed this as completed Dec 2, 2017
@eatdrinksleepcode
Author

@jkotas My original request - coming up on 3 years ago now - was for

more information about migration paths from AppDomains to Core

Or are you saying that not only will .NET Core not provide this functionality, but that the team feels no need to provide any guidance for developers who were using AppDomains for this purpose who now want to migrate to .NET Core?

@jkotas
Member

jkotas commented Jan 17, 2018

Our guidance is to use process, container, or virtual machine isolation, depending on the required security guarantees.

Sandboxing has not been supported as a security boundary even in .NET Framework for a number of years. From https://docs.microsoft.com/en-us/dotnet/framework/misc/how-to-run-partially-trusted-code-in-a-sandbox :

Code Access Security in .NET Framework should not be used as a mechanism for enforcing security boundaries based on code origination or other identity aspects. We are updating our guidance to reflect that Code Access Security and Security-Transparent Code will not be supported as a security boundary with partially trusted code, especially code of unknown origin.

@CRodriguez25

"Our guidance is to use process, container or virtual machine isolation; depending on the required security guarantees."

Everywhere I go, this is the only sentence that pops up. No specifics, no other guidance, just that sentence. I really want to switch to .NET Core for my project, but I'm currently loading untrusted dll plugins that need to interact with the main application. I'm doing this by loading them into a separate App Domain with restrictive permission sets. Is there any guidance on how to migrate this scenario, outside of that sentence?

@danmoseley
Member

@CRodriguez25 I would consider creating a sub process to host the plug-ins, and communicate by a means you choose such as a pipe. That would likely be more secure and reliable than what you are doing today anyway.
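
A minimal sketch of the host side of that arrangement, assuming a hypothetical PluginHost.exe that loads the plugin and connects back over a named pipe with the same name:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.IO.Pipes;

// Hedged sketch: host a plugin in a child process and talk to it over a
// named pipe. "PluginHost.exe", the pipe name, and the message format
// are placeholders; the child would open a NamedPipeClientStream with
// the same pipe name.
using var server = new NamedPipeServerStream("plugin-host-pipe", PipeDirection.InOut);
using var child = Process.Start(new ProcessStartInfo
{
    FileName = "PluginHost.exe",
    Arguments = "plugin-host-pipe plugin.dll",
    UseShellExecute = false
});

server.WaitForConnection();
using var writer = new StreamWriter(server) { AutoFlush = true };
using var reader = new StreamReader(server);

writer.WriteLine("DoWork:42");        // request to the plugin
string response = reader.ReadLine();  // response from the plugin
Console.WriteLine($"Plugin replied: {response}");
```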

@CRodriguez25

@danmosemsft Perhaps I'm not familiar enough with this to understand, but how would I apply the restrictions I want to that other process? With AppDomains, I can set FileIOPermissions so that the untrusted plugin can only read from a specific folder, for example. I'm not understanding how I'd be able to accomplish this with a separate process.

Thanks for the reply
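
For reference, roughly what the .NET Framework pattern described above looks like; the paths and the IPlugin contract are placeholders, and none of these APIs exist in .NET Core:

```csharp
using System;
using System.Security;
using System.Security.Permissions;

// Hedged sketch of the classic .NET Framework sandboxed AppDomain:
// execute-only permissions plus read access to one approved folder.
// Paths and the IPlugin contract are placeholders; CAS is deprecated as
// a security boundary, and this does not exist in .NET Core.
var permissions = new PermissionSet(PermissionState.None);
permissions.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
permissions.AddPermission(new FileIOPermission(FileIOPermissionAccess.Read, @"C:\ApprovedFolder"));

var setup = new AppDomainSetup { ApplicationBase = @"C:\Plugins" };
AppDomain sandbox = AppDomain.CreateDomain("PluginSandbox", null, setup, permissions);

// The plugin type must derive from MarshalByRefObject and implement a
// shared, fully trusted contract (IPlugin here) so calls cross the domain.
var plugin = (IPlugin)sandbox.CreateInstanceAndUnwrap("MyPlugin", "MyPlugin.PluginEntryPoint");
plugin.Run();
AppDomain.Unload(sandbox);
```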

@danmoseley
Member

Some of the restrictions you could apply with CAS you could potentially achieve with ACLs, i.e. run the child process with lower privileges. It would depend on what you wanted to restrict it to. There isn't an equivalent mechanism for everything you could do with CAS, e.g. restricting access to environment variables. You may decide that you want to continue running this code on .NET Framework.

Could you say a little more about these plug-ins? What is the scenario and what are the restrictions you apply?
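
One possible shape of the lower-privilege option mentioned above, assuming a dedicated low-privilege Windows account has been created for the plugin host; the account name, executable, and password handling are placeholders, and this approach is Windows-only:

```csharp
using System;
using System.Diagnostics;
using System.Security;

// Hedged sketch (Windows-only): start the plugin host under a dedicated
// low-privilege account so the OS limits what the plugin can touch.
// "plugin-runner", "PluginHost.exe", and the hard-coded password are
// placeholders; real code would obtain credentials securely.
var password = new SecureString();
foreach (char c in "placeholder-password")
    password.AppendChar(c);

var psi = new ProcessStartInfo
{
    FileName = "PluginHost.exe",
    UseShellExecute = false,     // required when supplying credentials
    UserName = "plugin-runner",
    Password = password,
    LoadUserProfile = false
};
Process.Start(psi);
```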

@CRodriguez25

This is a client desktop application where the end user is able to download 3rd party plugins. These plugins request (via an interface implementation) access to certain folders on the computer to run. If the user agrees, then the desktop application creates an AppDomain with FileIOPermissions that enable the plugin to read from (and only from) the folders that the user permitted.

Notepad++ has a dll plugin model as well (as far as I can tell) so perhaps a better question is "how do they handle security"? How do they prevent malicious plugins from harming a user's computer or stealing information? Or do they just rely on messaging like "Be careful to only download plugins from sources you trust"?

@CRodriguez25

Oh, just for clarity, the "request for folder access" happens in an AppDomain that has only "Execute" permissions, so that initial request can't read or write to any folders until the user approves the request and the second, less restrictive AppDomain is created.

@danmoseley
Member

I see. Most app plugin models that I am aware of rely on trust to some extent since ultimately they usually process user data and want access to the network as well. Even with CAS today it would likely be possible for a genuinely malicious plugin to escape. That is why we recommend using the OS as the only way to truly limit privileges.

@jkotas do you know of examples of managed code running in a restricted plugin model without using CAS?

@jkotas
Member

jkotas commented Mar 19, 2018

CAS does not provide any strong security guarantees. It was deprecated as a security boundary many years ago. Nobody serious is using CAS to enforce security today.

I think there are two main cases:

If the plugins are assumed to be not actively malicious (you are already assuming that with CAS) and the file access restrictions are there just to enforce hygiene, you can ask the plugins to open all files via an API that you provide (see the sketch below).

If the plugins can be actively malicious, your only secure option is to host the managed plugins in a virtual machine. This is regardless of whether it is .NET Framework or .NET Core. For example, using docker run --isolation=hyperv. Neither process isolation nor regular container isolation is secure enough to sandbox actively malicious code these days.
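
A minimal sketch of the first option, a brokered file API; the interface, class, and folder-checking logic here are illustrative only:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Hedged sketch of a brokered file API: the plugin never touches
// System.IO directly; the host validates every path against the folders
// the user approved. Names are illustrative; this enforces hygiene
// rather than defending against actively malicious code.
public interface IPluginFileAccess
{
    Stream OpenRead(string path);
}

public sealed class BrokeredFileAccess : IPluginFileAccess
{
    private readonly List<string> _allowedRoots;

    public BrokeredFileAccess(IEnumerable<string> allowedRoots) =>
        _allowedRoots = allowedRoots.Select(Path.GetFullPath).ToList();

    public Stream OpenRead(string path)
    {
        string full = Path.GetFullPath(path);
        bool allowed = _allowedRoots.Any(root =>
            full.StartsWith(root, StringComparison.OrdinalIgnoreCase));
        if (!allowed)
            throw new UnauthorizedAccessException($"Access to '{path}' is not permitted.");
        return File.OpenRead(full);
    }
}
```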

@CRodriguez25

Wow well good to know. At least I can stop wasting my time then hah.

@TeddyAlbina

My scenario is really simple: I need an equivalent of System.AddIn in .NET Core. Does anyone know how to do that?

@koteisaev

About possible usage scenarios for AppDomain:
Imagine a CMS that uses plugins, or generates page classes based on content from a database or even user content. AppDomains were a way to represent an isolated CMS plugin environment, and they made it possible to unload the plugin's main assembly and the assemblies containing those classes when a plugin was disabled, uninstalled, or upgraded, without a full CMS server restart.

@davidfowl
Member

What does “isolated” mean?

@koteisaev

@davidfowl Sorry for not clarifying.
Well, I do not want to overcomplicate the isolation level for plugins. Maybe these restrictions would be enough:

  1. prevent the plugin from direct access to the file system
  2. prevent the plugin from direct access to the network
  3. allow the plugin to load assemblies only from a specific list (CMS assemblies designed to be interface libraries, trusted to be safe)

@BrunoZell

IMHO, especially with the introduction of the collectible assembly load context in .NET Core in conjunction with compiled expressions, I think a solution for untrusted code is required. Just saying to use OS capabilities or containerization will not cut it for most scenarios where user-defined code will be running.

Essentially one is required to run all untrusted user code out of process. I find it hard to imagine a plugin system doing that, where the plugin's code is ideally called by the main application, passing various object instances.

It gets even harder when performance is a major constraint and the throughput (calls into plugin per sec) has to be high.

The main concerns are file access, network access, stdin/stdout. In my specific case, I use big memory-mapped files and provide access to them via ReadOnlySpan<T>. From those spans, I create objects with convenience methods to hand over to the consumers. The consuming code is mainly not written by us and can really be anything. However, it shouldn't be able to read or write those MMFs, nor have access to the network or standard input/output streams as it is running on our servers. It should only compute something, use a little memory and CPU, and interact with the passed objects.

How would one do such an undertaking in .NET Core?

@koteisaev

I guess there are two ways to restrict a plugin from doing dangerous things in .NET Core:

  1. pre-scan the plugin IL for forbidden calls
  2. add a hook to redirect/decline type resolution, e.g. prevent a specific assembly (loaded via an assembly load context) from resolving a type like System.IO.File (see the sketch below)
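
A rough sketch of what the second idea could look like at assembly (rather than type) granularity, using a custom AssemblyLoadContext; note that, as the following replies stress, this is hygiene for well-behaved plugins rather than a security boundary, since framework assemblies still resolve through the default context. The names on the allow-list are placeholders:

```csharp
using System;
using System.Reflection;
using System.Runtime.Loader;

// Hedged sketch: an AssemblyLoadContext that only resolves plugin-local
// assemblies that appear on an allow-list; anything else returns null
// and falls back to the default context. Framework assemblies still
// resolve, so this is not a security boundary.
public sealed class AllowListLoadContext : AssemblyLoadContext
{
    private static readonly string[] Allowed = { "Cms.PluginContracts", "MyPlugin" }; // placeholders
    private readonly AssemblyDependencyResolver _resolver;

    public AllowListLoadContext(string pluginPath)
        : base("CmsPlugin", isCollectible: true)
        => _resolver = new AssemblyDependencyResolver(pluginPath);

    protected override Assembly Load(AssemblyName name)
    {
        if (Array.IndexOf(Allowed, name.Name) < 0)
            return null; // not on the allow-list: defer to the default context

        string path = _resolver.ResolveAssemblyToPath(name);
        return path != null ? LoadFromAssemblyPath(path) : null;
    }
}
```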

@davidfowl
Member

How would one do such an undertaking in .NET Core?

Don’t? It was attempted in the past and the result was still insecure (partial trust). The most robust plugin systems run plugins out of process, not only for security reasons but also so that poorly written plugins can't take down the host.

Maybe you could do something crazy like host a WebAssembly runtime in process, and load plugins into that 😬

cc @GrabYourPitchforks @blowdart

@danmoseley
Member

Attempts to create a trust boundary within a process have repeatedly been defeated. It's not just .NET; browsers try to avoid it too.

@msftgits msftgits transferred this issue from dotnet/coreclr Jan 30, 2020
@msftgits msftgits added this to the Future milestone Jan 30, 2020
@ghost ghost locked as resolved and limited conversation to collaborators Jan 6, 2021