Isolation Contexts

In Synergy applications of any significant size and complexity, developers generally find it necessary to manage the lifetimes of non-memory resources. For maximum performance with Synergy data files, this means minimizing how often those files are opened and closed, and for SQL Connection it means reusing cursors rather than closing and reopening them.

Channel Basics

In the most basic case you can simply use a file channel pool for Synergy data files. The pool takes care of opening a file channel in a given mode for a given filename: if a channel that matches the given parameters is available it is reused; otherwise the pool makes a call to OPEN. When you are finished with a channel, you call ReturnChannel to put it back into the pool for the next request.
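As a rough sketch, usage from one of your own classes might look like the following. This assumes the pool is exposed as the IFileChannelManager mentioned later on this page and obtained however your application normally gets its dependencies; the GetChannel name and the FileOpenMode value are assumptions, so check the Harmony Core interface definitions for the exact signatures. ReturnChannel is the call described above:

	; Hypothetical sketch: borrow a channel from the pool, use it, return it.
	public method LookupOrder, void
		required in channelPool, @IFileChannelManager
		required in orderId, a10
		record
			chan, i4
			orderRec, a200
		endrecord
	proc
		; Assumed call: ask the pool for a channel to the orders file
		chan = channelPool.GetChannel(FileOpenMode.UpdateIndexed, "DAT:orders.ism")
		try
		begin
			; Use the channel exactly as if we had opened it ourselves
			; (error handling omitted for brevity)
			read(chan, orderRec, orderId)
		end
		finally
		begin
			; Return the channel to the pool instead of closing it
			channelPool.ReturnChannel(chan)
		end
		endtry
	endmethod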

Basic State

In addition to managing channels for Synergy data files, you might have other non-trivial setup that needs to be carried out before processing requests, such as reading configuration data or opening SQL Connection channels. If the resulting state can be placed into an object that implements IContextBase, the library can pool it using a free-threaded context pool. This allows the state object to be created and initialized as needed and then reused by later requests, rather than being thrown away.
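As a loose sketch, a pooled state object might look like the class below. This assumes IContextBase does not require members beyond what is shown (check the interface definition in Harmony Core), and the setup performed in the constructor is a placeholder for whatever expensive initialization your application actually does:

	namespace MyService.Context

		; Hypothetical pooled state object: the expensive setup happens once in
		; the constructor, and the free-threaded context pool hands the same
		; instance to later requests instead of rebuilding it every time.
		public class MyAppContext implements IContextBase

			; Example state produced by the one-time setup
			public readwrite property ConnectionString, string
			public readwrite property SqlChannel, i4

			public method MyAppContext
			proc
				; Placeholder setup: read configuration data, open an
				; SQL Connection channel, and so on
				ConnectionString = "placeholder"
				SqlChannel = 0
			endmethod

		endclass

	endnamespace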

If you need these state objects to be created per user session, you should also implement ISessionStickyContext and ITimeoutContext, which allow your state object to stick to requests made by the same user, as identified by the same cookie.

ITimeoutContext is needed because HTTP is a stateless protocol and a user can disappear at any time; it lets you control how long the session stays alive between requests.

xfServer

If there isn't a significant load on your server, it's possible to use xfServer with a simple file channel pool (described above). However, if you need to scale beyond a single connection to xfServer, you will need to call s_server_thread_init.

Once you have called s_server_thread_init, channel management becomes significantly more complex: channels opened on a given thread must only be used from that same thread. This leads to the next evolution of the file channel pool: keep a separate set of channels for each thread.

Unfortunately, in most environments we aren't really in control of which thread the library is called on when asked to process a request. The library needs to be able to call s_server_thread_shutdown when a thread is being shut down, but it doesn't control the lifetime of those threads. This is where a threaded context pool is useful.

Rather than relying on ASP.NET Core to choose which thread our request is processed on, we can force requests to be processed on a thread controlled by our context pool. Calling IApplicationBuilder.UseEagerContext<MyContextTypeName> with an instance of a ThreadedContextPool takes advantage of ASP.NET Core middleware to ensure that if a controller has a dependency on a class that extends ThreadedContextBase, the request is handled on a thread owned by the ThreadedContextPool.

If configured, s_server_thread_init and s_server_thread_shutdown are called on threads created by ThreadedContextPool when necessary. This allows you to make use of a per-thread IFileChannelManager without having to worry about getting threads mixed up.
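In a Synergy .NET Startup class the wiring might look roughly like this. UseEagerContext and ThreadedContextPool are named above, but the way the pool instance is created and supplied here is an assumption, so treat this as a sketch rather than the definitive registration:

	; Sketch of middleware wiring inside your Startup class. MyThreadedContext
	; is a hypothetical class extending ThreadedContextBase; the pool parameter
	; is assumed to be resolvable by ASP.NET Core dependency injection.
	public method Configure, void
		required in app, @IApplicationBuilder
		required in pool, @ThreadedContextPool
	proc
		; Any controller that depends on MyThreadedContext will now have its
		; request handled on a thread owned by the ThreadedContextPool, which
		; can call s_server_thread_init / s_server_thread_shutdown as needed.
		app.UseEagerContext<MyThreadedContext>(pool)
		app.UseMvc()
	endmethod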

Isolation

Multi-threading is an important part of achieving reasonable scalability in ASP.NET Core, so handling non-thread-safe code is just as important. The use of global data sections, commons, static records, static class fields, or hard-coded channel numbers results in code that cannot run on multiple threads at the same time.

In the full .NET Framework, we can use an AppDomain to isolate running code from almost all side effects that might be caused by other running AppDomains. However, this will not isolate environment variable changes made within a process. In order to make AppDomain isolation simple and performant, you can use AppDomainContextPool to manage your context class that extends AppDomainContextBase.

Taking this approach allows you to call your existing non-thread-safe code from within your context class, as though it were the only code running in the process. Consumers of your context class will get an instance from the AppDomainContextPool and interact with it as though the underlying code were actually thread safe.
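For example, a context class along these lines could wrap a legacy routine that relies on global data. The class and routine names are hypothetical, and AppDomainContextBase may require additional members not shown here:

	namespace MyService.Context

		; Hypothetical wrapper: legacy_price uses commons/global data and is not
		; thread safe, but each instance of this class lives in its own AppDomain,
		; so pooled instances can safely be used from multiple threads.
		public class LegacyPricingContext extends AppDomainContextBase

			public method CalculatePrice, i4
				required in orderId, i4
				record
					price, i4
				endrecord
			proc
				; Runs inside this instance's AppDomain, isolated from other
				; instances calling the same non-thread-safe code
				xcall legacy_price(orderId, price)
				mreturn price
			endmethod

		endclass

	endnamespace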

An AppDomain can be created or unloaded on demand. Using an AppDomain within a non-UI application will result in taking an additional Synergy Runtime license.

.NET Core adds some additional complexity to isolation. AppDomain exists in .NET Core, but for the purposes of code isolation it is non-functional. The recommended path is to use AssemblyLoadContext instead. Unfortunately, as of version 4.7.2, AssemblyLoadContext doesn't exist in the full .NET Framework, so this solution is specific to .NET Core.

There is currently one additional limitation: it is not possible to unload an AssemblyLoadContext, so they must be pooled and reused. There are also some upsides to AssemblyLoadContext: it is significantly more lightweight, there is no performance cost when code running in one context calls into another, arguments are not marshaled, and a running AssemblyLoadContext takes up much less memory.

Traditional Synergy

If you have logic written in traditional Synergy and one of the following conditions exists, you can make use of IDynamicCallProvider to invoke that logic, passing arguments and returning data:

  • It cannot be compiled with Synergy .NET
  • It must run on OpenVMS
  • It must run on a remote system
  • It must run in a separate process

As with most of the other concepts in this library, we can pool these using either RemoteExternalContextPool or ProcessExternalContextPool, depending on whether we are communicating with a remote machine over SSH or creating a process locally. Because this scenario potentially involves multiple machines, and at the very least separate processes, things can be more difficult to follow.

For a local process the flow is relatively straightforward. The TraditionalBridge project offers all the generic support functionality that is needed. This includes logging, JSON parsing, JSON writing, and basic routine dispatching. The code you want to run can be exposed in one of two ways:

  • If your code was designed for use with xfServerPlus and has xfMethod and xfParameter attributes, or if your routines are described in a method catalog, then you can use CodeGen to generate strongly typed dispatch stubs for the code you wish to call. This is the most performant, feature-rich, and ultimately reliable method of dispatching routines.

  • If you want to expose non-xfServerPlus code, then the generic routine dispatcher will do its best to map the arguments you pass to a function, subroutine, or method, with the following caveats (see the sketch after this list):

    • Structures and class arguments must be wrapped by or derived from DataObjectBase.
    • Arrays can only be passed as an ArrayList that can contain only classes derived from DataObjectBase or primitives like string, a, d, and i.
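For example, a traditional Synergy subroutine such as the following uses only primitive arguments and so could be reached through the generic dispatcher without generated stubs. The routine itself is purely illustrative:

	subroutine get_order_total
		required in  orderId,  n
		required in  customer, a
		required out total,    n
	proc
		; Existing business logic goes here. Alpha, decimal, and integer
		; arguments like these can be mapped directly, while structure or
		; class arguments must be wrapped by or derived from DataObjectBase.
		total = 0
		xreturn
	endsubroutine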

You'll need to compile your code into a DBR and add a reference to the TraditionalBridge library. The mainline of your DBR needs to look something like the following, where MyDispatcher is a class that you've code-generated from a combination of repository structures and potentially xfServerPlus method catalog data:

main
	record
		dispatcher, @MyDispatcher
		ttChan, i4, 0
		jsonReader, @Json.Json
		jsonVal, @Json.JsonValue
	endrecord
proc
	; Set runtime flags and open the terminal channel (channel 0 = auto-assign)
	xcall flags(0101010010)
	open(ttChan, O, "TT:")
	; Tell the host process that this DBR is ready to accept requests
	puts(ttChan, "READY" + %char(13) + %char(10))
	; Hand control to the generated dispatcher, which reads requests from the
	; channel and routes them to your routines
	dispatcher = new MyDispatcher()
	dispatcher.Dispatch(ttChan)
endmain

ProcessExternalContextPool takes care of creating, initializing, recycling, and destroying the spawned DBRs that execute your code. Each spawned process uses a Synergy Runtime license and runs under the same user account as the web server.

For remote processes the flow is identical once the remote process is started, but first we must make a connection and perform any required authentication. There are several possible security flows:

  • A preconfigured username/password can be stored in configuration data. (NOTE: This is insecure and is recommended only for development scenarios.)

  • A preconfigured username/password can be stored in a secure configuration store such as app-secrets or Microsoft Azure Key Vault.

  • SSH private keys are stored in a secure configuration store, as described above.

  • In a web service that is authenticated against an Azure Active Directory or Active Directory server, it is possible to pass the authentication token through the web service using JSON Web Tokens (JWT). When the request is made to create a process, the JWT from the request can be used to acquire AltSecurityIdentities from the AD account. These security credentials are then used to log into SSH and create the target process. This security flow generally precludes the use of a pool of pre-created processes/connections. Processes can be left running (and reused by the same user) for a predetermined amount of time. But because they cannot be shared or pre-created, this may negatively affect performance.

In any case, your interaction point with the library will be a PasswordAuthenticationMethod or PrivateKeyAuthenticationMethod passed to the constructor of your RemoteExternalContextPool. Once you're connected, the library runs the supplied command line remotely. This command line can do whatever environment setup is needed but should result in running dbr or dbs against the DBR you created earlier.
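As an illustration of that interaction point, the snippet below builds both kinds of authentication objects using the SSH.NET (Renci.SshNet) types of the same names. The credential values are placeholders, and exactly how the resulting object and the remote command line are passed to the RemoteExternalContextPool constructor should be taken from the library itself:

	record
		userName,     string
		password,     string
		keyPath,      string
		passwordAuth, @PasswordAuthenticationMethod
		keyAuth,      @PrivateKeyAuthenticationMethod
	endrecord
proc
	; Placeholder credentials; in practice these come from a secure
	; configuration store (user secrets, Azure Key Vault, etc.)
	userName = "svcbridge"
	password = "development-only"
	keyPath = "keys/bridge_rsa"

	; Username/password flow (development scenarios only)
	passwordAuth = new PasswordAuthenticationMethod(userName, password)

	; SSH private key flow
	keyAuth = new PrivateKeyAuthenticationMethod(userName, new PrivateKeyFile(keyPath))

	; Either object is then passed to the constructor of your
	; RemoteExternalContextPool, along with the command line that the library
	; will run remotely to start the DBR containing your dispatcher mainline.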
