Relay/Orchestrator Roadmap: Content addressing of containers and environments #2
Example of passing addresses:

```javascript
const http = require('http');
// This is not the way we should be doing it:
// const matrix = require('matrix');
const env = process.env;

const server = http.createServer((req, res) => {
  if (req.url === '/blah') {
    // Hard-coding an address is only acceptable if it can be done as easily
    // and as efficiently...
    // const result = http.get('http://dogs.com');
    // ...as resolving it from the environment:
    const result = http.get(env.BADDRESS);    // an HTTP content address
    const result2 = ftp.get(env.CADDRESS);    // an FTP address (illustrative)
    const result3 = unixdomain(env.DADDRESS); // a Unix domain socket (illustrative)
    console.log(env.BADDRESS);
    res.write(result);
  }
  res.end();
});

server.on('clientError', (err, socket) => {
  socket.end('HTTP/1.1 400 Bad Request\r\n\r\n');
});

server.listen(env.APP_PORT);
```

This is the architect expression that somebody might write.
So we have enumerated three ways of passing "content addresses" into the automaton. Using a custom matrix library is not a good way. Environment variables are a good way, IMO. The content address is the address of the interface; the best way to think about this is like OOP.
You need to look at the source code of HTTP load balancers such as HAProxy, and at IPv6 Anycast and mesh networking.
So I can look at implementing the function that takes a content address and converts it to the necessary HTTP, IP, or FTP address in the source. The actual conversion depends on the other functionalities, i.e. the QoS (as it determines which machine the get actually comes from). If we think of the function as only producing temporary addresses, with the communication in our network happening over some arbitrary type of data flow, then how to implement temporary addresses becomes relevant (like IPv6 Anycast). So my area of research could be in doing this for multiple different types of protocols?
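As a rough illustration of the function described above, here is a minimal sketch. The registry contents, address names, and the static lookup table are all made up for illustration; a real implementation would consult the orchestrator and QoS data rather than a fixed map.

```javascript
// Hypothetical static registry standing in for the orchestrator's
// QoS-aware lookup of where a content address currently lives.
const registry = new Map([
  ['addr/dogs', { protocol: 'http', host: '10.0.0.7', port: 8080 }],
  ['addr/logs', { protocol: 'unix', path: '/run/automaton/logs.sock' }],
]);

// resolve() maps an opaque content address to a concrete, temporary,
// protocol-specific address. Repeated calls could return different
// machines, the way IPv6 Anycast picks a nearby instance.
function resolve(contentAddress) {
  const entry = registry.get(contentAddress);
  if (!entry) throw new Error(`unknown content address: ${contentAddress}`);
  switch (entry.protocol) {
    case 'http':
      return `http://${entry.host}:${entry.port}`;
    case 'unix':
      return entry.path;
    default:
      throw new Error(`unsupported protocol: ${entry.protocol}`);
  }
}
```

The interesting research question is what sits behind `resolve()`: the same content address should be resolvable per protocol, per QoS decision, and per point in time.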
@CMCDragonkai From our discussion, if we use a centralized orchestrator, we don't really need to do discovery (a central node knows the location of every service). Mostly it remains to redirect connections to the correct location. The low-level API to use is nftables (packet routing using BPF via bpfilter is not yet ready).

Since the minimal API stuff is sort of covered by the Haskell bindings that Vivian is writing, and the service abstraction layer is pretty minimal (really just mapping into nftables), I think I'll start work on part 3, the mini orchestrator. To start off with, I see this as a daemon that runs on the machine, similar to running an ipfs daemon.

The API should look something like this: suppose we have some abstraction for a machine on the network; then the mini orchestrator knows how to set up a .nix configuration on the machine. (If the node is running NixOS, then we can represent an automaton environment using per-user profiles? Alternatively, one virtual machine per automaton.) I think just having an orchestrator that is able to deploy Nix configurations to multiple machines is a step forward from the current state (there are orchestrators like Puppet that already do this for their own configuration files, but not for NixOS, unless I'm mistaken.)

Thoughts?
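To make the proposed API concrete, here is one possible shape for the machine abstraction and the mini orchestrator. The class names, the idea of tracking one Nix configuration string per machine, and the `pending` status are all assumptions for illustration, not a settled design.

```javascript
// Hypothetical abstraction for a machine on the network.
class Machine {
  constructor(name, address) {
    this.name = name;       // logical node name
    this.address = address; // how the orchestrator daemon reaches it
  }
}

// Hypothetical mini orchestrator: records which .nix configuration should
// be active on each machine. A real implementation would copy the file
// over and trigger a nixos-rebuild on the target.
class MiniOrchestrator {
  constructor() {
    this.deployments = new Map(); // machine name -> nix config string
  }

  deploy(machine, nixConfig) {
    this.deployments.set(machine.name, nixConfig);
    return { machine: machine.name, status: 'pending' };
  }

  configFor(machine) {
    return this.deployments.get(machine.name);
  }
}
```

Usage would look like `orchestrator.deploy(new Machine('node-1', '10.0.0.2'), someNixConfig)`, with the daemon on each node pulling its assigned configuration.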
Each node maintains synchronisation and knowledge about automatons. So "centralised" here doesn't mean not-distributed.
Deployment tools are irrelevant. Node orchestrators (which are already an orchestration tool) can receive architect expressions and create the relevant Nix expressions or alternative side-effectful commands.
Yes, a daemon runs per node, but I imagine that daemon gets all its information from the orchestrator (and may additionally have information about automatons that are running on the same node).
Yeah okay. I'll forget about part 3, the orchestrator, and focus exclusively on ensuring that whatever the substrate is (Docker container, unikernel), we can support migrating connections to the new location without the need for a load balancer. But do you still mean to tightly integrate with NixOS?
The daemon is an orchestrator node. All daemons are part of the same orchestrator.
No need to worry about NixOS atm. It's only a potential substrate mediated through the Nix language.
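For the connection-migration work, the earlier point about mapping into nftables suggests one concrete mechanism: a DNAT rule that rewrites traffic aimed at an automaton's old address to its new location, with no load balancer in the path. The following ruleset is a hypothetical sketch; all addresses, ports, and names are made up.

```
# Hypothetical nftables ruleset: after an automaton migrates, connections
# still aimed at its old address (10.0.0.7) are rewritten to its new
# location (10.0.1.9). Load with: nft -f <this file>
table ip automaton_nat {
    chain prerouting {
        type nat hook prerouting priority dstnat; policy accept;
        ip daddr 10.0.0.7 tcp dport 8080 dnat to 10.0.1.9:8080
    }
}
```

The orchestrator daemon on each node would rewrite rules like this whenever an automaton moves, which is the "really just mapping into nftables" part of the service abstraction layer.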
I'd like to suggest an alternate roadmap, because I think this one focuses on development before the necessary structural considerations are figured out. I've taken some points from the proposed roadmap quoted above as a base for what's below. Just as a heads-up, I'd expect progress to be made on multiple points at once; for example, points 1 and 2 will probably be worked on at the same time and be influenced by point 4, but at least this list provides some insight into what I think should occur, in a loose order.
I'm adding an overarching aim to resolve: to find out the mapping from the high-level Architect composition operators (functional, object, union, etc.) to the low-level network implementation. There are many solutions on the market right now that address bits and pieces, but nothing that perfectly matches what we are looking for; we need to examine them to derive some insights and build on top of their experiments. We need to answer that question ASAP, and then proceed to build the bindings into them for integration into the language.
Update to the roadmap after close to two months.
After having done an initial experiment based on point 2 (service addressing and migration possibilities), the importance of repeatedly building and testing designs and ideas is rather clear. The points above are still relevant, although the roadmap may not be as linear and the ordering may not be as first proposed. The roadmap directory now contains documents relating to things to do and directions to take. The notable file is the orchestrator design doc, which jots down thoughts on functionalities, layouts, data structures, etc.
@ramwan This issue should be split up into issues and organised under the project boards now. |
The key concerns of Relay and Orchestrator are in providing a network and deployment service at the level of automatons (where an automaton can be thought of as a container + associated container environment if any).
This involves implementing some of the mechanisms in the literature on P2P networks and service-based networks, such as service and peer discovery, establishing separate control and data planes between nodes in the network, and ensuring that network communication between two automatons is transparent to each service running inside the automatons.
We also introduce a content addressing system for containers and container environments, processes, network environments, and hardware that our automatons may run on. The idea here is that we can use content-based addresses to precisely indicate to the orchestrator what container should be deployed on what machine, as well as the required environment for the container to run.
Optimistically, existing p2p frameworks such as libp2p will allow us to avoid reimplementing the necessary discovery mechanisms and transports for our automaton network.
So the (draft) roadmap looks like this:
1.1 Start off by either integrating with libp2p or writing a small peer-discovery module over the regular transport.
Most of the work here seems to be in implementing the discovery module, and in ensuring that we have a way to bring up the necessary environmental configuration for the container automatically using NixOS. Most of the features we'll need will probably come out of how this interacts with the other subsystems. I'll probably flesh out this roadmap a bit more as I work on these areas.