This document explains each component of the CCM system and the relationships between them. The CCM system has three parts:

  • Application/Component/Instance Repository,
  • Initial Deployment,
  • Auto Scaling.

Underlying Infrastructure

The current OpenStack deployment spans three nodes, following the standard 3-node OpenStack setup: one controller node and two compute nodes. One compute node is called the deploying node and the other the scaling node.

We are using the Havana release of OpenStack. A good warm-up task is to spawn VMs and containers using the nova driver. Log in to the machines and source the credentials file for the nova user. If spawning an instance fails, the logs can be found under /var/log/nova/; compute.log and network.log usually give adequate information. The configuration information is stored in the ZooKeeper repository, which is present on the VM. To start the service, log in to the machine, cd ../zookeeper/zk-server-1/bin/, and run the server start script. The same script's status command reports whether the zk-server is running, and the ZooKeeper CLI gives access to the znodes for simple operations.
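Besides the start script, a running ZooKeeper server can be probed over its client port with the standard four-letter-word protocol: a healthy server answers "ruok" with "imok". The sketch below is a minimal Python probe; the host name and client port (ZooKeeper's default is 2181) are assumptions, and recent ZooKeeper versions require the command to be whitelisted via 4lw.commands.whitelist.

```python
import socket

def zk_four_letter(host: str, port: int, cmd: bytes = b"ruok") -> bytes:
    """Send a ZooKeeper four-letter command and return the raw reply.
    A healthy server answers b"ruok" with b"imok"."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(cmd)
        s.shutdown(socket.SHUT_WR)  # signal end of request
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:            # server closed: reply is complete
                break
            chunks.append(data)
    return b"".join(chunks)
```

For example, `zk_four_letter("controller", 2181)` would check a server on a host named `controller` (hypothetical name).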

Key Components of CCM System

  • Application/Component/Instance Repository

The CCM repository code is present in repository/ccm_repo. Launching it brings up the menu below:

| MENU                                                        |
| Enter 1              Register a Component                   |
| Enter 2              Remove a Component                     |
| Enter 3              View all Components                    |
| Enter 4              Add an Application Profile             |
| Enter 5              Delete an Application Profile          |
| Enter 6              View all Applications                  |
| Enter 7              View all Instances                     |
| Enter 8              View the App/Compo/Instance Profile    |
| Enter 0              Exit                                   |
Enter your choice:

These 8 options manage and view the Applications, Components, and Instances. For Applications and Components, the metadata and profiles must be added and deleted manually. An Application's profile is in JSON format. Here is an example: the Application profile for the RUBiS application:

  "components": [
    "name" : "mysql",
    "version": "version number",
    "legalCompliance" : "N/A",
    "working Directory": "/path to run the command",
    "commands": "./ mysql.dockerid",
    "uses": "null",
    "ip-required": "yes"
    "name" : "haproxySql",
    "version": "version number",
    "legalCompliance" : "N/A",
    "working Directory": "/path to run the command",
    "commands": "./ mysql.ip haproxy_mysql.dockerid",
    "uses": "mysql",
    "ip-required": "yes"
    "name" : "apache",
    "version": "version number",
    "legalCompliance" : "N/A",
    "working Directory": "/path to run the command",
    "commands": "./ haproxy_mysql.ip apache.dockerid",
    "uses": "haproxy_mysql",
    "ip-required": "yes"
    "name" : "haproxyServer",
    "version": "version number",
    "legalCompliance" : "N/A",
    "working Directory": "/path to run the command",
    "commands": "./ apache.ip haproxy_server.dockerid",
    "uses": "apache",
    "ip-required": "yes"

The profiles for Components and Instances, by contrast, have no special format. Here is an example: the Component profile for haproxyServer.

| ID                   = 40f616de-4ef0-493e-9683-826c236e05c0 |
| Name                 = haproxyServer                        |
| Type                 = Docker                               |
| Image ID             = 636e4f6d-199c-4515-a7e6-0ed524bbbd7b |
| Legal Compliance     = N/A                                  |
| Hardware Attribute   = null                                 |
| Script Location      = null                                 |
| Other                = null                                 |

As for Instances, their profiles are added and deleted automatically by the Controller, which calls two scripts to store or delete them.

The major interface provided includes:

  1. contains the repository menu and the functions behind each menu option.
  2. getComponentProfile: builds a Component's profile from the user's input, field by field.
  3. addComponent: adds a Component to the repository in either of two ways: element by element, which calls the getComponentProfile function, or from a JSON file, which calls the ReadComponentJson function.
  4. removeComponent: deletes a Component from the repository.
  5. listComponents: lists all Components in the repository.
  6. listInstances: lists all Instances in the repository.
  7. listApplications: lists all Applications in the repository.
  8. addApplicationProfile: takes the name of the Application, reads the JSON file given as an argument (the path to the file), and stores it in the repository.
  9. removeApplicationProfile: deletes an Application from the repository.
  10. viewACIProfile: lists the profile of an Application, Component, or Instance based on the input.
  11. printMenu: prints the repository menu.
  12. defines the fields of a Component.
  13. takes one argument, an Instance ID, searches all znodes, and deletes the one matching that ID.
  14. takes one argument, a Component name, fetches that Component's znode, and returns its Docker image ID.
  15. defines the fields of an Instance.
  16. defines the fields of an Application profile.
  17. parses a Component's JSON file and returns a String combining all the metadata read from it.
  18. takes four arguments (Instance ID, Instance name, IP address, and hostname), creates a new Instance, and stores it in the repository.
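Items 13 and 18 amount to keyed insert and delete on the Instance store. A minimal in-memory stand-in (the real repository persists these records as ZooKeeper znodes; all names here are hypothetical):

```python
class InstanceStore:
    """In-memory stand-in for the znode-backed Instance store (hypothetical)."""

    def __init__(self):
        self._by_id = {}

    def create_instance(self, instance_id, name, ip, hostname):
        """Analogue of item 18: create an Instance record and store it."""
        self._by_id[instance_id] = {"name": name, "ip": ip, "hostname": hostname}

    def delete_instance(self, instance_id):
        """Analogue of item 13: delete the record matching the Instance ID."""
        return self._by_id.pop(instance_id, None)

    def list_instances(self):
        """Analogue of item 6: list all stored Instance IDs."""
        return sorted(self._by_id)
```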

Initial Deployment

The CCM Controller works in a client-server architecture. I used SocketChannel and Selector to implement it, so that it can listen on multiple ports and respond to multiple requests.

Currently, the CCM Controller listens on two ports: 4444 (the deploying port) and 4445 (the scaling port).

Port 4444 receives requests from users who want to deploy an application. The user provides the name of the application and the locations where it should be deployed.

Port 4445 receives the “Abnormal/Anomaly Alert” messages from the prediction code.
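The two-port listen loop can be sketched with Python's selectors module (the real Controller uses Java NIO on ports 4444 and 4445; this sketch binds ephemeral ports and routes by a name instead):

```python
import selectors
import socket

class ControllerListener:
    """Sketch of a multi-port listen loop: one listening socket per named
    port, with incoming messages dispatched by the port they arrive on."""

    def __init__(self, handlers):
        self.sel = selectors.DefaultSelector()
        self.handlers = handlers            # port name -> callable(bytes)
        self.ports = {}                     # port name -> bound port number
        for name in handlers:
            srv = socket.socket()
            srv.bind(("127.0.0.1", 0))      # ephemeral port for the sketch
            srv.listen()
            srv.setblocking(False)
            self.sel.register(srv, selectors.EVENT_READ, name)
            self.ports[name] = srv.getsockname()[1]

    def handle(self, n_connections):
        """Accept and dispatch n connections (one message each), then return."""
        handled = 0
        while handled < n_connections:
            for key, _ in self.sel.select(timeout=5):
                conn, _ = key.fileobj.accept()
                conn.setblocking(True)
                msg = conn.recv(1024)
                self.handlers[key.data](msg)  # route by the port it came in on
                conn.close()
                handled += 1
```

In the real Controller, the handler for the deploying port starts deployment and the handler for the scaling port triggers scaling.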

After receiving a user's request, the CCM Controller starts to deploy the application. There are nine specific steps to finish the initial deployment.

  • The first step is Request Mapping. The Controller uses ZooKeeper's Java API to connect to the repository and check whether the application is stored there. If yes, it goes to the second step.
  • The second step is Component Discovery. The Controller connects to the repository, fetches the application's profile, checks how many components the application needs, and checks whether those components are stored in the repository. If yes, it goes to the third step.
  • The third step is Component Selection. If there are multiple candidates for a component, the Controller can choose specific ones, for example those of the “docker” type or of a specific version. After selecting the components, it goes to the fourth step.
  • The fourth step is Component Deployment. The Controller boots the component Docker images in the order given in the application's profile. Normally the order is back to front, i.e. from the database server to the database server load balancer, to the web server, and then to the web server load balancer.
  • The fifth step is Component Configuration. The Controller configures each component in the same order as in the fourth step. The names of the configuration scripts are stored in the application's profile.
  • The sixth step is to start cAdvisor, which is used to collect the containers' (Docker instances') resource metrics.
  • The seventh step is to assign floating IP addresses to the instances that require them.
  • The eighth step is to store the instances in the repository.
  • The ninth step is to start the CCM Daemon, which is used for scaling. After these nine steps, the Controller returns the floating IP address to the user, who can then access the application service by typing the address into a web browser.

The major interface provided includes:

  1. the CCM Controller program. It listens on two ports, 4444 and 4445. When it receives a message on 4444, it calls the deployApplication function and startDaemonMetricMonitor to start the Daemon. When it receives an “ABNORMAL” message on 4445, it triggers scaling: it first calls startDaemonMetricMonitor to start the Daemon on another host, then calls the ccmdeamonclient function twice to send different commands to the different Daemons. It contains:
  2. deployApplication: starts the application deployment process; it calls the RequestMapper function.
  3. ccmdeamonclient: sends commands to a CCM Daemon.
  4. startDaemonMetricMonitor: starts the CCM Daemon and the InsightFinder agent code.
  5. the user types the Application name and location and sends them to the Controller.
  6. finds the Application in the repository and calls the selectComponents function.
  7. maps over all components in the repository, selects the ones needed by the to-be-deployed Application, and then calls the deployComponents function.
  9. deployComponents: includes the following steps: deploy the individual components; configure each component; start cAdvisor; add floating IPs; store the Instances in the repository.
  11. configureComponent: uses a session and channel to run commands on remote hosts.
  12. deploy: boots the component Docker images and injects the networks.
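The control flow in items 5 through 9 is essentially a pipeline: map the request to an Application, select its Components, then deploy them. A stubbed sketch, where a plain dict stands in for the ZooKeeper repository and all names and data are hypothetical:

```python
# Stubbed pipeline mirroring RequestMapper -> selectComponents -> deployComponents.
# The `repo` dict stands in for the ZooKeeper repository; names are hypothetical.
def request_mapper(repo, app_name):
    """Check the application exists in the repository and return its profile."""
    if app_name not in repo["applications"]:
        raise KeyError(f"unknown application: {app_name}")
    return repo["applications"][app_name]

def select_components(repo, profile, preferred_type="Docker"):
    """Pick one stored component of the preferred type for each required name."""
    chosen = []
    for wanted in profile["components"]:
        candidates = [c for c in repo["components"]
                      if c["name"] == wanted and c["type"] == preferred_type]
        if not candidates:
            raise LookupError(f"component not in repository: {wanted}")
        chosen.append(candidates[0])
    return chosen

def deploy_components(chosen):
    """Placeholder for booting, configuring, and registering the components."""
    return [c["name"] for c in chosen]   # pretend to boot them in order
```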

Auto Scaling

On each worker node there is an InsightFinder agent, which obtains the instance metrics from cAdvisor, proc, cgroup, or docker_remote_api, and sends the data to the InsightFinder Server.

The InsightFinder Server receives and aggregates the data from the agents, and builds models from the received metrics. The models predict potential anomalies and send anomaly alerts to the CCM Controller.

When the Controller receives an anomaly message, it triggers scaling: it sends scaling commands to the CCM Daemon(s), and the Daemon(s) run those commands on their hosts.
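The alert handling reduces to a small decision: on an “ABNORMAL” message, start a Daemon on the scaling node, then send one command to each Daemon. A sketch of that mapping, where the command strings and node names are hypothetical:

```python
def scaling_plan(message, deploying_node, scaling_node):
    """Map an anomaly alert to the actions the Controller takes: start a
    Daemon on the scaling node, then command each Daemon separately.
    Command strings and node names are hypothetical."""
    if message.strip() != "ABNORMAL":
        return []                                   # ignore non-alert traffic
    return [
        ("start_daemon", scaling_node),             # bring up the new Daemon
        ("send_command", deploying_node, "scale-out"),
        ("send_command", scaling_node, "scale-in-new"),
    ]
```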