Split the agent code and the orchestration code
1. Now the orchestration layer only maintains data in the db;
2. The agent layer code does the real work on the platform;
3. Init the structure to support k8s, and also fabric 1.0;
4. Related documentation is also updated.

This patchset helps fix CE-20.

https://jira.hyperledger.org/browse/CE-20

Change-Id: Idf87d4041ebb1b5273089e457b2d3da92dfe5fd2
Signed-off-by: Baohua Yang <yangbaohua@gmail.com>
yeasy committed Mar 28, 2017
1 parent a447c81 commit b7c72e3
Showing 26 changed files with 405 additions and 75 deletions.
1 change: 1 addition & 0 deletions Makefile
@@ -23,6 +23,7 @@ all: check

check: ##@Code Check code format
	tox
	make start && sleep 10 && make stop

clean: ##@Code Clean tox result
rm -rf .tox
14 changes: 14 additions & 0 deletions docs/arch.md
@@ -11,6 +11,20 @@ The architecture will follow the following principles:
* Fault-resilience: The service should be tolerant of faults, such as a database crash.
* Scalability: Distribute the services as much as possible, to mitigate centralized bottlenecks.

## Functional Layers

Following the decoupled design, there are 3 layers in Cello.

* Access layer: includes the Web UI dashboards operated by users.
* Orchestration layer: receives requests from the Access layer, and calls the correct agents to operate the blockchain resources.
* Agent layer: real workers that interact with the underlying infrastructures like Docker, Swarm, K8s.

Each layer should expose stable APIs to the layer above it, so that implementations stay pluggable without changing upper-layer code.

### Agent layer APIs

* Host management: create, query/list, update, delete, fillup, clean, reset
* Cluster management: create, query/list, start/stop/restart, delete, reset
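As an illustration of the API surface above, an agent's host-management side could be sketched as an abstract driver interface. The class and method names here are hypothetical, not Cello's actual code:

```python
import abc


class HostAgentBase(abc.ABC):
    """Hypothetical sketch of the host-management API listed above."""

    @abc.abstractmethod
    def create(self, host_id, **kwargs):
        """Create a host entry for running clusters."""

    @abc.abstractmethod
    def query(self, host_id):
        """Return a host's metadata, or None if absent."""

    @abc.abstractmethod
    def delete(self, host_id):
        """Delete a host; return True on success."""


class InMemoryHostAgent(HostAgentBase):
    """Toy driver keeping hosts in a dict, for illustration only."""

    def __init__(self):
        self.hosts = {}

    def create(self, host_id, **kwargs):
        self.hosts[host_id] = dict(kwargs, status="active")
        return host_id

    def query(self, host_id):
        return self.hosts.get(host_id)

    def delete(self, host_id):
        return self.hosts.pop(host_id, None) is not None
```

Because each platform driver implements the same interface, the Orchestration layer can stay agnostic of whether the host runs Docker, Swarm, or K8s.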

## Components

22 changes: 11 additions & 11 deletions docs/index.md
@@ -32,19 +32,19 @@ You can also find more [scenarios](docs/scenario.md).
## Documentation

### Operational Docs
* [Installation & Deployment](install.md)
* [Terminologies](terminology.md)
* [Tutorial](tutorial.md)
* [Scenarios](scenario.md)
* [Production Configuration](production_config.md)
* [Installation & Deployment](docs/install.md)
* [Terminologies](docs/terminology.md)
* [Tutorial](docs/tutorial.md)
* [Scenarios](docs/scenario.md)
* [Production Configuration](docs/production_config.md)

### Development Docs
* [How to contribute](CONTRIBUTING.md)
* We're following [pep8 style guide](https://www.python.org/dev/peps/pep-0008/), [Coding Style](code_style.md)
* [Architecture Design](arch.md)
* [Database Model](db.md)
* [API](../api/restserver_v2.md)
* [Develop react js](reactjs.md)
* [How to contribute](docs/CONTRIBUTING.md)
* We're following [pep8 style guide](https://www.python.org/dev/peps/pep-0008/), [Coding Style](docs/code_style.md)
* [Architecture Design](docs/arch.md)
* [Database Model](docs/db.md)
* [API](api/restserver_v2.md)
* [Develop react js](docs/reactjs.md)

## Why named Cello?
Can you find anyone better at playing chains? :)
3 changes: 3 additions & 0 deletions scripts/worker_node_setup/setup_docker_worker_node.sh
@@ -0,0 +1,3 @@
# This script will help set up Docker on a server, so the server can be used as a worker node.

# TODO:
3 changes: 3 additions & 0 deletions scripts/worker_node_setup/setup_k8s_worker_node.sh
@@ -0,0 +1,3 @@
# This script will help set up a Kubernetes cluster on servers, so the cluster can be used as a worker node.

# TODO:
3 changes: 3 additions & 0 deletions scripts/worker_node_setup/setup_swarm_worker_node.sh
@@ -0,0 +1,3 @@
# This script will help set up a Swarm cluster on servers, so the cluster can be used as a worker node.

# TODO:
9 changes: 8 additions & 1 deletion src/agent/__init__.py
@@ -1,5 +1,12 @@
from .docker_swarm import get_project, \
# Agent pkg provides drivers for the underlying platforms, e.g., swarm/k8s.

from .docker.docker_swarm import get_project, \
check_daemon, detect_daemon_type, \
get_swarm_node_ip, \
compose_up, compose_clean, compose_start, compose_stop, compose_restart, \
setup_container_host, cleanup_host, reset_container_host

from .docker.host import DockerHost
from .docker.cluster import ClusterOnDocker

from .k8s.host import KubernetesHost
30 changes: 30 additions & 0 deletions src/agent/cluster_base.py
@@ -0,0 +1,30 @@
import abc


class ClusterBase(object):
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def create(self, *args, **kwargs):
        """ Create a new cluster

        Args:
            *args: args
            **kwargs: keyword args

        Returns:
        """
        return

    @abc.abstractmethod
    def delete(self, *args, **kwargs):
        return

    @abc.abstractmethod
    def start(self, *args, **kwargs):
        return

    @abc.abstractmethod
    def stop(self, *args, **kwargs):
        return
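To see how this contract is meant to be used, here is a minimal concrete driver against the abstract base above. The toy dict-backed state is illustrative only, and `abc.ABC` is the Python 3 spelling of the `__metaclass__` idiom used in the source:

```python
import abc


class ClusterBase(abc.ABC):
    """The same contract as above, in Python 3 form."""

    @abc.abstractmethod
    def create(self, *args, **kwargs): ...

    @abc.abstractmethod
    def delete(self, *args, **kwargs): ...

    @abc.abstractmethod
    def start(self, *args, **kwargs): ...

    @abc.abstractmethod
    def stop(self, *args, **kwargs): ...


class ClusterOnToyBackend(ClusterBase):
    """Hypothetical driver keeping cluster state in a dict."""

    def __init__(self):
        self.state = {}

    def create(self, cid, size=4):
        self.state[cid] = {"size": size, "running": True}
        return cid

    def delete(self, cid):
        return self.state.pop(cid, None) is not None

    def start(self, cid):
        if cid not in self.state:
            return False
        self.state[cid]["running"] = True
        return True

    def stop(self, cid):
        if cid not in self.state:
            return False
        self.state[cid]["running"] = False
        return True
```

Any subclass that omits one of the four abstract methods cannot be instantiated, which keeps every platform driver's surface uniform.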
Empty file added src/agent/docker/__init__.py
Empty file.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
83 changes: 83 additions & 0 deletions src/agent/docker/cluster.py
@@ -0,0 +1,83 @@
import logging
import os
import sys

sys.path.append(os.path.join(os.path.dirname(__file__), '..', '..'))
from common import log_handler, LOG_LEVEL

from agent import compose_up, compose_clean, compose_start, compose_stop, \
compose_restart

from common import CONSENSUS_PLUGINS, \
CONSENSUS_MODES, CLUSTER_SIZES

from ..cluster_base import ClusterBase


logger = logging.getLogger(__name__)
logger.setLevel(LOG_LEVEL)
logger.addHandler(log_handler)


class ClusterOnDocker(ClusterBase):
    """ Main handler to operate the clusters in the pool
    """
    def __init__(self):
        pass

    def create(self, cid, mapped_ports, host, user_id="",
               consensus_plugin=CONSENSUS_PLUGINS[0],
               consensus_mode=CONSENSUS_MODES[0], size=CLUSTER_SIZES[0]):
        """ Create a cluster based on given data
        TODO: maybe need other id generation mechanism
        :param cid: id of the cluster
        :param mapped_ports: the mapped service ports of the cluster
        :param host: the host to run the cluster on
        :param user_id: user_id of the cluster if start to be applied
        :param consensus_plugin: type of the consensus plugin
        :param consensus_mode: mode of the consensus
        :param size: size of the cluster, int type
        :return: list of created containers, or [] if failed
        """

        # from now on, we should be safe

        # start compose project, failed then clean and return
        logger.debug("Start compose project with name={}".format(cid))
        containers = compose_up(
            name=cid, mapped_ports=mapped_ports, host=host,
            consensus_plugin=consensus_plugin, consensus_mode=consensus_mode,
            cluster_size=size)
        if not containers or len(containers) != size:
            logger.warning("Failed to create cluster, containers={}"
                           .format(containers))
            return []
        else:
            return containers

    def delete(self, id, daemon_url, consensus_plugin):
        return compose_clean(id, daemon_url, consensus_plugin)

    def start(self, name, daemon_url, mapped_ports, consensus_plugin,
              consensus_mode, log_type, log_level, log_server, cluster_size):
        return compose_start(name, daemon_url, mapped_ports, consensus_plugin,
                             consensus_mode, log_type, log_level, log_server,
                             cluster_size)

    def restart(self, name, daemon_url, mapped_ports, consensus_plugin,
                consensus_mode, log_type, log_level, log_server, cluster_size):
        return compose_restart(name, daemon_url, mapped_ports,
                               consensus_plugin, consensus_mode, log_type,
                               log_level, log_server, cluster_size)

    def stop(self, name, daemon_url, mapped_ports, consensus_plugin,
             consensus_mode, log_type, log_level, log_server, cluster_size):
        return compose_stop(name, daemon_url, mapped_ports, consensus_plugin,
                            consensus_mode, log_type, log_level, log_server,
                            cluster_size)


cluster_on_docker = ClusterOnDocker()
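The split this commit introduces means the orchestration layer can pick an agent driver by host type instead of embedding Docker-specific logic. A hedged sketch of such dispatch; the registry and the dummy drivers are hypothetical stand-ins, not Cello's actual code:

```python
class DummyDockerCluster:
    """Stand-in for ClusterOnDocker, for illustration."""

    def create(self, cid, **kwargs):
        return "docker:" + cid


class DummyK8sCluster:
    """Stand-in for a future Kubernetes driver."""

    def create(self, cid, **kwargs):
        return "k8s:" + cid


# Orchestration keeps metadata in the db and routes the real work
# to whichever agent matches the host type.
AGENT_REGISTRY = {
    "docker": DummyDockerCluster(),
    "swarm": DummyDockerCluster(),  # swarm reuses the docker driver here
    "k8s": DummyK8sCluster(),
}


def create_cluster(cid, host_type):
    """Look up the agent for host_type and delegate the creation."""
    agent = AGENT_REGISTRY.get(host_type)
    if agent is None:
        raise ValueError("unsupported host type: {}".format(host_type))
    return agent.create(cid)
```

Adding a new platform then means registering one more driver, with no change to the orchestration code path.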
36 changes: 25 additions & 11 deletions src/agent/docker_swarm.py → src/agent/docker/docker_swarm.py
@@ -13,12 +13,15 @@
from common import \
HOST_TYPES, \
CLUSTER_NETWORK, \
COMPOSE_FILE_PATH, \
CONSENSUS_PLUGINS, CONSENSUS_MODES, \
CLUSTER_LOG_TYPES, CLUSTER_LOG_LEVEL, \
CLUSTER_SIZES, \
SERVICE_PORTS

COMPOSE_FILE_PATH = os.getenv("COMPOSE_FILE_PATH",
"./agent/docker/_compose_files")


logger = logging.getLogger(__name__)
logger.setLevel(LOG_LEVEL)
logger.addHandler(log_handler)
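With this change the compose project directory is resolved per Fabric version, as `<COMPOSE_FILE_PATH>/<cluster_version>/<log_type>`. A small sketch of that resolution; the `"local"` log-type value is an assumption for illustration, and `os.path.join` is used here where the source concatenates strings:

```python
import os

# Same default as the module above; override via the env variable.
COMPOSE_FILE_PATH = os.getenv("COMPOSE_FILE_PATH",
                              "./agent/docker/_compose_files")


def compose_project_dir(cluster_version="fabric-0.6", log_type="local"):
    """Build the directory holding the compose files for one variant."""
    return os.path.join(COMPOSE_FILE_PATH, cluster_version, log_type)
```

Keeping the version segment in the path is what lets fabric-0.6 and fabric-1.0 compose files coexist side by side.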
@@ -336,7 +339,7 @@ def compose_up(name, host, mapped_ports,
consensus_plugin=CONSENSUS_PLUGINS[0],
consensus_mode=CONSENSUS_MODES[0],
cluster_size=CLUSTER_SIZES[0],
timeout=5):
timeout=5, cluster_version='fabric-0.6'):
""" Compose up a cluster
:param name: The name of the cluster
@@ -363,7 +366,8 @@ def compose_up(name, host, mapped_ports,
consensus_mode, cluster_size, log_level, log_type,
log_server)
try:
project = get_project(COMPOSE_FILE_PATH + "/" + log_type)
project = get_project(COMPOSE_FILE_PATH +
"/{}/".format(cluster_version) + log_type)
containers = project.up(detached=True, timeout=timeout)
except Exception as e:
logger.warning("Exception when compose start={}".format(e))
@@ -417,7 +421,8 @@ def compose_start(name, daemon_url, mapped_ports=SERVICE_PORTS,
consensus_mode=CONSENSUS_MODES[0],
log_type=CLUSTER_LOG_TYPES[0], log_server="",
log_level=CLUSTER_LOG_LEVEL[0],
cluster_size=CLUSTER_SIZES[0]):
cluster_size=CLUSTER_SIZES[0],
cluster_version='fabric-0.6'):
""" Start the cluster
:param name: The name of the cluster
@@ -438,7 +443,8 @@ def compose_start(name, daemon_url, mapped_ports=SERVICE_PORTS,
consensus_mode, cluster_size, log_level, log_type,
log_server)
# project = get_project(COMPOSE_FILE_PATH+"/"+consensus_plugin)
project = get_project(COMPOSE_FILE_PATH + "/" + log_type)
project = get_project(COMPOSE_FILE_PATH +
"/{}/".format(cluster_version) + log_type)
try:
project.start()
start_containers(daemon_url, name + '-')
@@ -453,7 +459,8 @@ def compose_restart(name, daemon_url, mapped_ports=SERVICE_PORTS,
consensus_mode=CONSENSUS_MODES[0],
log_type=CLUSTER_LOG_TYPES[0], log_server="",
log_level=CLUSTER_LOG_LEVEL[0],
cluster_size=CLUSTER_SIZES[0]):
cluster_size=CLUSTER_SIZES[0],
cluster_version='fabric-0.6'):
""" Restart the cluster
:param name: The name of the cluster
@@ -474,7 +481,8 @@ def compose_restart(name, daemon_url, mapped_ports=SERVICE_PORTS,
consensus_mode, cluster_size, log_level, log_type,
log_server)
# project = get_project(COMPOSE_FILE_PATH+"/"+consensus_plugin)
project = get_project(COMPOSE_FILE_PATH + "/" + log_type)
project = get_project(COMPOSE_FILE_PATH +
"/{}/".format(cluster_version) + log_type)
try:
project.restart()
start_containers(daemon_url, name + '-')
@@ -489,7 +497,8 @@ def compose_stop(name, daemon_url, mapped_ports=SERVICE_PORTS,
consensus_mode=CONSENSUS_MODES[0],
log_type=CLUSTER_LOG_TYPES[0], log_server="",
log_level=CLUSTER_LOG_LEVEL[0],
cluster_size=CLUSTER_SIZES[0], timeout=5):
cluster_size=CLUSTER_SIZES[0], timeout=5,
cluster_version='fabric-0.6'):
""" Stop the cluster
:param name: The name of the cluster
@@ -512,7 +521,8 @@ def compose_stop(name, daemon_url, mapped_ports=SERVICE_PORTS,
_compose_set_env(name, daemon_url, mapped_ports, consensus_plugin,
consensus_mode, cluster_size, log_level, log_type,
log_server)
project = get_project(COMPOSE_FILE_PATH + "/" + log_type)
project = get_project(COMPOSE_FILE_PATH +
"/{}/".format(cluster_version) + log_type)
try:
project.stop(timeout=timeout)
except Exception as e:
@@ -526,7 +536,8 @@ def compose_down(name, daemon_url, mapped_ports=SERVICE_PORTS,
consensus_mode=CONSENSUS_MODES[0],
log_type=CLUSTER_LOG_TYPES[0], log_server="",
log_level=CLUSTER_LOG_LEVEL[0],
cluster_size=CLUSTER_SIZES[0], timeout=5):
cluster_size=CLUSTER_SIZES[0], timeout=5,
cluster_version='fabric-0.6'):
""" Stop the cluster and remove the service containers
:param name: The name of the cluster
@@ -542,13 +553,16 @@ def compose_down(name, daemon_url, mapped_ports=SERVICE_PORTS,
"""
logger.debug("Compose remove {} with daemon_url={}, "
"consensus={}".format(name, daemon_url, consensus_plugin))
# import os, sys
# compose uses these env variables
_compose_set_env(name, daemon_url, mapped_ports, consensus_plugin,
consensus_mode, cluster_size, log_level, log_type,
log_server)

# project = get_project(COMPOSE_FILE_PATH+"/"+consensus_plugin)
project = get_project(COMPOSE_FILE_PATH + "/" + log_type)
project = get_project(COMPOSE_FILE_PATH +
"/{}/".format(cluster_version) + log_type)

# project.down(remove_orphans=True)
project.stop(timeout=timeout)
project.remove_stopped(one_off=OneOffFilter.include, force=True)
