A load balancer for your Heroku pipeline applications that is cheaper than the built-in one, and a way to host CPU-bound applications.
What is a load balancer
A load balancer is a device that distributes network or application traffic across a cluster of servers. A load balancer sits between the client and the server farm accepting incoming network and application traffic and distributing the traffic across multiple backend servers. By balancing application requests across multiple servers, a load balancer reduces individual server load and prevents any one application server from becoming a single point of failure, thus improving overall application availability and responsiveness.
Heroku has a built-in load balancer:
Heroku's HTTP routers distribute incoming requests for your application across your running web dynos, so scaling an app's capacity to handle web traffic means scaling the number of web dynos. A random selection algorithm is used to balance HTTP requests across web dynos, and this routing handles both HTTP and HTTPS traffic.
However, the built-in load balancer can't help if your application is CPU-bound. If individual requests are processed slowly due to CPU or other shared resource constraints (such as a database), then optimizing concurrency on the dyno may not help your application's throughput at all.
So, the pros of this solution:
- costs: the solution is cheaper. If you use the built-in approach and buy the Performance plan for your application (meaning 4 dynos, i.e. 4 server instances), it will cost you $100 per month. With this solution, you can instead buy 4 independent applications on the Hobby plan ($7 per month per instance), set up identical software on each, and put the load balancer in front of them (also $7 per month). That totals $35 ($7 × 5), roughly 3 times cheaper,
- CPU-bound applications: as mentioned above, Heroku cannot fully cover this case. You can buy the Performance plan, but it will not increase your CPU performance enough to justify paying a few hundred dollars for it. However, if you create tens of instances with identical software and put a load balancer in front of them, it may solve your problem.
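The cost arithmetic above can be double-checked in a couple of shell lines. The prices are the ones quoted above; Heroku's actual pricing may differ:

```shell
# Prices as quoted above, in USD per month; Heroku's actual pricing may differ.
BUILT_IN=100                # one application on the Performance plan
HOBBY=7                     # one Hobby-plan instance
TOTAL=$(( HOBBY * 5 ))      # four back-end apps plus the load balancer itself
echo "built-in: \$$BUILT_IN/month, this solution: \$$TOTAL/month"
```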
And the cons. Keep in mind that this solution consists of multiple, technically independent applications that do not behave as a single application:
- any add-ons must be attached manually to each app, which makes operations more complex,
- all logging is spread across the apps, which makes debugging harder,
- performance metrics are spread across the apps, which makes it harder to understand application behavior,
- the Heroku platform does not operate them as a single app, which can cause downtime during deployments or daily dyno cycling: when the single load balancer's Hobby dyno cycles (restarts) each day or on a deployment, the entire app temporarily goes offline,
- added request latency (another two HTTP hops in front of the back-end applications).
How to use
- Press the Deploy to Heroku button below.
- Enter a name for the application that will host the load balancer. Choose the region and add the app to a pipeline if needed.
- Visit the Heroku account settings page, find the API Key section, reveal the key, and paste it into the `HEROKU_API_KEY` field.
- Open the desired pipeline and copy its identifier from the URL. On the screenshot it is `f64cf79b-79ba-4c45-8039-57c9af5d4508`, marked by the red arrow at the top.
- Return to the deployment page and paste the identifier into the `PIPELINE_IDENTIFIER` field.
- Press the Deploy app button. Deployment starts immediately, as illustrated below.
- When the build is finished, you can manage your application (rename it, etc.) and view it (open its URL in the browser).
- To check that the load balancer works properly, open the logs of each production back-end server (`heroku logs --tail -a application-name` in a terminal) and send requests to the load balancer application. The load balancer will proxy each request to the next back-end server in round-robin fashion (one by one, in order).
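For example, assuming a hypothetical load balancer app named `your-load-balancer` (substitute your own app's URL), a few requests in a row can be sent like this; while they run, each back end's `heroku logs --tail` should show the requests arriving one by one:

```shell
# Hypothetical URL of the deployed load balancer app (substitute your own).
LB_URL="https://your-load-balancer.herokuapp.com"

# Send five requests in a row; with round-robin proxying, consecutive
# requests should show up in the logs of successive back-end apps.
for i in 1 2 3 4 5; do
  curl -s -o /dev/null --max-time 5 \
    -w "request $i -> HTTP %{http_code}\n" "$LB_URL" || true
done
```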
How it works
- You specify the identifier of the pipeline (`PIPELINE_IDENTIFIER`) whose applications the load balancer should serve.
- The URLs of those applications are fetched through the Heroku API using your `HEROKU_API_KEY`.
- A load-balancing configuration file is then created from the fetched URLs.
- Traffic is served by Nginx in round-robin fashion (one by one, in order).
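The generated configuration might look roughly like the following sketch, assuming the pipeline contains three hypothetical apps; the upstream host names are placeholders for illustration, not the project's actual output:

```nginx
# Sketch only: the host names below are hypothetical placeholders; the
# real file is generated from the URLs fetched via the Heroku API.
upstream heroku_apps {
    # With no explicit balancing directive, Nginx defaults to round-robin:
    # each incoming request is passed to the next server in order.
    server app-1.herokuapp.com;
    server app-2.herokuapp.com;
    server app-3.herokuapp.com;
}

server {
    listen 7979;  # on Heroku, the port to listen on comes from PORT

    location / {
        proxy_pass http://heroku_apps;
    }
}
```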
Clone the project with the following command:
$ git clone https://github.com/dmytrostriletskyi/heroku-load-balancer.git
$ cd heroku-load-balancer
To build the project, use the following command:
$ docker build -t heroku-load-balancer . -f Dockerfile
To run the project, use the following command. It will start the server and occupy the current terminal session:
$ docker run -p 7979:7979 -v $PWD:/heroku-load-balancer \
      -e PORT=7979 \
      -e HEROKU_API_KEY='8af7dbb9-e6b8-45bd-8c0a-87787b5ae881' \
      -e PIPELINE_IDENTIFIER='f64cf79b-79ba-4c45-8039-57c9af5d4508' \
      --name heroku-load-balancer heroku-load-balancer
If you need to open a bash session inside the container, use the following command:
$ docker exec -it heroku-load-balancer bash
Clean all containers with the following command:
$ docker rm $(docker ps -a -q) -f
Clean all images with the following command:
$ docker rmi $(docker images -q) -f