Process flow
In an old-school console application, you had a main()-like function where you wrote the algorithm of your application. Later, you (or your runtime environment) started an event processing loop, and you had to react to events dispatched to your processing code. This applies both to GUI-based applications and to application server components (HTTP servlets or J2EE objects in Java, and the like in other languages and environments). In all these cases, you had a code-level connection to the running environment: the message handler and processing API depend on the environment, so if you have to provide the same function in different environments, you have to rewrite at least the connecting part of your code. This is also true for the first, "console" application model: the main function is the handler of the initialize message - and if you have to communicate further with the user (process startup parameters, support some kind of console or remote network socket interaction), you have to implement message handling manually, much like in the later environments.
In dust, you have no "main" function, no "application" in the common sense, and no dependency on the environment you run in. Your code is a service, a plugin that you add to the "dust feature set"; it can either connect the "dust world" to something outside in the real world, or support the communication between a user and those entities. So your code is invoked "sometime", integrated by a proper configuration into a Context, and should respond to messages and commands coming through different channels, one by one or as parts of a longer conversation. Furthermore, a dust "application" is not locked to a single machine or runtime environment; on the contrary, you never know what remote systems you connect to when you access entities, read their data, or send them messages. Your code is connected to the network of the whole dust world most of the time.
So, what makes your code run? Events. Events make the dust runtime load the entities containing the aspects you provide, connect and initialize your code to them, and call your functions to process messages targeted at your aspects. When there is no event, the dust runtime does nothing; when there are many, the runtime may even process some of them at the same time, depending on the capabilities of the actual hardware.
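This event-driven flow can be sketched in plain Java (all class and method names here are hypothetical illustrations, not the actual dust API): handlers are registered per message type, events are queued, and the runtime only works while the queue is non-empty.

```java
import java.util.*;
import java.util.function.Function;

public class DispatchSketch {
    private final Map<String, Function<String, String>> handlers = new HashMap<>();
    private final Deque<String[]> events = new ArrayDeque<>();

    // An aspect registers a handler for a message type.
    public void register(String msgType, Function<String, String> handler) {
        handlers.put(msgType, handler);
    }

    // An event targeted at a message type is queued.
    public void post(String msgType, String payload) {
        events.add(new String[] { msgType, payload });
    }

    // The runtime works only while events are pending; with an empty
    // queue this returns immediately (the runtime "does nothing").
    public List<String> drain() {
        List<String> results = new ArrayList<>();
        while (!events.isEmpty()) {
            String[] ev = events.poll();
            Function<String, String> handler = handlers.get(ev[0]);
            if (handler != null) {
                results.add(handler.apply(ev[1]));
            }
        }
        return results;
    }
}
```

The point of the sketch is the shape, not the detail: code is passive until a message targeted at it arrives.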
The configuration may also indicate that some of your code should run in parallel. For example, suppose you have to process the content of an incoming queue, so you receive the new data in your code as normal messages. However, you configure this channel as "parallel with max 5 items", thus allowing the runtime to create multiple instances of your processor entity (and code), which will be called with the incoming data. Of course, this scenario assumes that the incoming items can be processed independently, but you can also direct them to a single processor instance for an initial dependency check and then forward them to the forked processors - or have a single "collector" entity on the other end, to which the processors send their results. This is a design and deployment question, not an implementation detail inside your actual code.
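A minimal sketch of the "parallel with max 5 items" idea, using a plain Java thread pool in place of the dust runtime (the class and method names are made up for illustration): at most 5 workers process queue items independently, and a single collection point gathers the results in submission order.

```java
import java.util.*;
import java.util.concurrent.*;

public class ParallelChannelSketch {
    // "parallel with max 5 items": at most 5 processor instances at once.
    private static final int MAX_PARALLEL = 5;

    // Independent per-item work (made up for the example: double the value).
    private static int processItem(int item) {
        return item * 2;
    }

    public static List<Integer> process(List<Integer> incoming) {
        ExecutorService pool = Executors.newFixedThreadPool(MAX_PARALLEL);
        List<Future<Integer>> futures = new ArrayList<>();
        for (Integer item : incoming) {
            futures.add(pool.submit(() -> processItem(item)));
        }
        // The single "collector" end of the channel: gather results in order.
        List<Integer> collected = new ArrayList<>();
        for (Future<Integer> f : futures) {
            try {
                collected.add(f.get());
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException(e);
            }
        }
        pool.shutdown();
        return collected;
    }
}
```

In dust, the pool size would come from configuration rather than a constant in the code, which is exactly the design-versus-implementation separation the text describes.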
Threads, calls, contexts
The Island context must contain the runtime itself as an entity with some required aspects, such as the Context manager, Type manager, Thread manager, and so on.
The Thread manager is a low-level component connected directly to the actual runtime environment (depending on the current development, that can be the OS: Windows, GNU/Linux, macOS, etc., or the JVM for Java). This component "knows" how many real parallel threads are available in the runtime (configured in the OS or offered by the CPU) and is responsible for maintaining a quickly accessible "thread-local storage" through which each thread refers to the call it is currently executing.
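The thread-to-call mapping can be sketched with Java's built-in ThreadLocal (names hypothetical; the real Thread manager would sit directly on the OS or JVM threading API):

```java
public class ThreadManagerSketch {
    // Hypothetical stand-in for a call (a message processing stack).
    public static class Call {
        public final String id;
        public Call(String id) { this.id = id; }
    }

    // The quickly accessible thread-local storage: each thread refers to
    // the call it is currently executing.
    private static final ThreadLocal<Call> CURRENT = new ThreadLocal<>();

    // How many real parallel threads the runtime offers.
    public static int availableThreads() {
        return Runtime.getRuntime().availableProcessors();
    }

    // Execute a body on the current thread with the given call installed.
    public static void run(Call call, Runnable body) {
        CURRENT.set(call);
        try {
            body.run();
        } finally {
            CURRENT.remove(); // never leak the call past its execution
        }
    }

    public static Call currentCall() {
        return CURRENT.get();
    }
}
```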
The "call", on the other hand, is an actual message processing stack, started by an event or a parallel message. There can be many of them, with an optional upper limit: if the limit is exceeded, the call request is not accepted until some of the previous calls finish. Calls have a local stack and a priority that is used when distributing them among the actual threads. The management of calls and thread assignment is similar to that of a normal time-sharing OS kernel - or is actually done by it, when the thread manager is just a wrapper over a threading library.
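The admission limit on calls can be sketched with a counting semaphore (hypothetical names; priority handling and the actual thread assignment are left to the thread manager or the underlying kernel, as the text describes):

```java
import java.util.concurrent.Semaphore;

public class CallLimitSketch {
    private final Semaphore slots;

    public CallLimitSketch(int maxCalls) {
        this.slots = new Semaphore(maxCalls);
    }

    // A new call request is accepted only while the limit is not exceeded.
    public boolean tryStart() {
        return slots.tryAcquire();
    }

    // When a call finishes, its slot becomes available again.
    public void finish() {
        slots.release();
    }
}
```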
Core events mostly come from "the external world", from different hardware components, as interrupts: timer, keyboard, mouse, data from the hard drive or the network. There are other, secondary events: for example, a resource is polled regularly on the timer, and an event is generated when that resource has something to say. Sometimes the hardware has no timer, so the main processor enters an infinite loop and leaves it regularly to call the timer component.
The events must be dispatched, which is quite straightforward. Suppose an entity requested constant listening on a network port - this means it opened a channel (see channels) to it. The network component created the entity connected to the required network port and remembered the response channel. When something arrives through that port, it is put onto the response channel, and the component gets called. Of course, the network listener component should spend the least possible time in the interrupt code, so it just puts the data into a buffer and registers the call in the thread manager, which in turn (perhaps later) assigns it to a thread and resumes its execution.
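The "do the minimum in the interrupt, hand off the rest" pattern could look roughly like this in Java (hypothetical names; a single-thread executor stands in for the thread manager):

```java
import java.util.concurrent.*;

public class NetworkListenerSketch {
    private final BlockingQueue<byte[]> buffer = new LinkedBlockingQueue<>();
    private final ExecutorService threadManager = Executors.newSingleThreadExecutor();
    private final StringBuilder received = new StringBuilder();

    // Runs in "interrupt" context: just stash the data and register the call.
    public void onInterrupt(byte[] data) {
        buffer.add(data);
        threadManager.submit(this::processOne);
    }

    // Runs later, on a thread assigned by the "thread manager".
    private void processOne() {
        byte[] data = buffer.poll();
        if (data != null) {
            received.append(new String(data));
        }
    }

    // Drain the executor and return everything processed so far.
    public String awaitAll() {
        threadManager.shutdown();
        try {
            threadManager.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return received.toString();
    }
}
```

The interrupt path does only two cheap operations (buffer the data, register the call); all real processing happens later on a managed thread.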
Separating ourselves from the "computer" theory embedded in all operating systems, we suddenly find very interesting options, like connecting two keyboards and two mice to the same computer and assigning each pair to a different user, so that both can play a game on the same computer, but each can only activate and use the controls of their own part of the user interface. Or we can use any "location" source, like 3D hand tracking or a GPS location service, as a virtual mouse, a TTS engine as a virtual keyboard, and so on.