This is an experimental project to observe how a docker service scales out.


dmuiruri/lbalancer


Overview

This project is an experimental test of the scaling characteristics of Docker services. The setup includes a client, a proxy server, and an HTTP service backed by a Redis database, where the proxy server, the HTTP service, and the database run as Docker containers.

Design Overview

The proxy server is configured to listen for traffic on port 80. In this case, the only resource available through the proxy is an API that calculates the factorial of a number, a CPU-bound operation that generates a significant processing workload for large inputs.

server {
	listen 80;

	location / {
		proxy_pass http://frontend:5000/factorial;
	}
}

The HTTP server is a simple server that takes a value from the database (a key-value store) and computes the factorial of that number.
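The server code itself is not shown in this README. As a rough sketch of the behaviour described, assuming a plain-stdlib Python server and standing in for the Redis lookup with a dict (the key name, port, and handler names are all illustrative, not taken from the repo):

```python
# Hypothetical sketch of the HTTP service; a dict stands in for the
# Redis key-value store so the example runs without a database.
import math
from http.server import BaseHTTPRequestHandler, HTTPServer

FAKE_REDIS = {"number": "12"}  # stand-in for the Redis DB


def compute_factorial(key="number"):
    """Fetch a value from the store and compute its factorial (CPU bound)."""
    n = int(FAKE_REDIS[key])
    return math.factorial(n)


class FactorialHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/factorial":
            body = str(compute_factorial()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)


def serve():
    # Port 5000 matches the proxy_pass target in the nginx config above.
    HTTPServer(("0.0.0.0", 5000), FactorialHandler).serve_forever()
```

Because the factorial is computed on every request, each additional concurrent client adds real CPU load, which is what makes the service a useful subject for a scaling experiment.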

To run a single instance of the service: sudo docker-compose up

To scale the service, run the yml file in the stage2 folder with a scaling option; the proxy server's load-balancing feature will then distribute requests across the instances in the default round-robin fashion: sudo docker-compose up --scale frontend=3
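The stage2 compose file is not reproduced here; a minimal sketch of what such a file might look like, assuming service names (proxy, frontend, redis) that match the nginx config above, would be:

```yaml
# Hypothetical docker-compose sketch; service names and images are
# assumptions, not copied from the stage2 folder.
version: "3"
services:
  proxy:
    image: nginx          # serves the config shown above on port 80
    ports:
      - "80:80"
    depends_on:
      - frontend
  frontend:               # the factorial HTTP service; scaled with --scale
    build: ./frontend
    depends_on:
      - redis
  redis:
    image: redis          # key-value store holding the input number
```

The frontend service exposes no host port, so all traffic flows through the proxy; Docker's internal DNS resolves the name frontend to the scaled-out replicas, which nginx then balances round-robin.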

Results

To test the setup, a client (not yet included in this repo) that issues multiple requests to the HTTP server is started, generating a large volume of requests to port 80, where the proxy distributes the calls across the HTTP server instances.
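Since the client is not yet in the repo, a minimal stdlib-only sketch of such a load generator might look like this (the URL, request counts, and function names are illustrative):

```python
# Hypothetical load-generating client: fires concurrent GET requests
# at the proxy and reports throughput, using only the stdlib.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor


def fetch(url):
    """Issue one GET request and return the HTTP status code."""
    with urllib.request.urlopen(url) as resp:
        return resp.status


def run_load(url, total_requests=100, concurrency=10):
    """Fire total_requests GETs with a pool of workers; return all statuses."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(fetch, [url] * total_requests))
    elapsed = time.perf_counter() - start
    print(f"{total_requests} requests in {elapsed:.2f}s "
          f"({total_requests / elapsed:.1f} req/s)")
    return statuses


# Example against the running stack:
# run_load("http://localhost:80/", total_requests=1000, concurrency=50)
```

Comparing the reported requests-per-second figure at frontend=1 versus frontend=3 gives a direct measure of how well the service scales out.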
