
Simulated Scaling Documentation #1383

Merged: 20 commits merged into master from simulated-scaling on Dec 21, 2016

Conversation

@TylerJewell (Contributor) commented Dec 20, 2016

Adds a section to the managing doc of the admin guide to discuss how to use Docker Machine to simulate running a 3-node Codenvy cluster, where two of the nodes are workspace nodes and the third is the Codenvy node.

Need to get more feedback from @garagatyi on my last open comments, which cover why we are having admins modify `codenvy.env` to set up the cluster instead of using the `codenvy add-node` command, which is the prescribed way.

@TylerJewell TylerJewell added this to the 5.0.0-M9 milestone Dec 20, 2016
@TylerJewell TylerJewell self-assigned this Dec 20, 2016

You can remove nodes with `codenvy remove-node <ip>`.
Overlay networks require a distributed key-value store to be running on a node. We embed Consul, a key-value storage implementation, as part of the Codenvy master node. We currently only support adding Linux nodes into an overlay network.
Contributor
I think the key-value storage cannot be on a Docker workspace node, but I read these docs as saying it should be there. Can you rephrase that sentence?
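For context on what the overlay requirement means in practice, here is a minimal sketch, assuming the Swarm-era external cluster store described in this doc (Consul on the Codenvy master) has already been configured on each workspace node's Docker daemon; the network name and subnet are illustrative:

```bash
# Create a multi-host overlay network; this only works once every daemon
# points at the same cluster store (the Consul instance on the Codenvy master).
docker network create --driver overlay --subnet 10.0.9.0/24 codenvy-overlay

# Verify the overlay network is visible from any node in the cluster
docker network ls --filter driver=overlay
```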


The default network in Docker is a "bridge" network. If you know that your users will only ever have single container workspaces (this would be unusual and rare), then you can continue using the bridge network for scaling. Bridge networks can be scaled with Linux, Windows, or Mac.
Contributor
We don't support more than one node with bridge networking, so it is not scaling if the user can't start a new node on that network.


2: On the Codenvy master node, start Consul: `docker -H <CODENVY-IP>:2376 run -d -p 8500:8500 -h consul progrium/consul -server -bootstrap`

3: On each workspace node, [configure and restart Docker](https://docs.docker.com/engine/admin/) with new options: `--cluster-store=consul://<CODENVY-IP>:8500`, `--cluster-advertise=<WS-IF>:2376`, and `--engine-insecure-registry=<CODENVY-IP>:5000`.
Contributor
They also have to add the TCP listening option to the Docker daemon config.
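A minimal sketch of what the combined daemon options might look like, assuming an Ubuntu-style `/etc/default/docker` file with a `DOCKER_OPTS` line; the file location and restart command vary by distro and Docker version, and note that the daemon spelling of the registry flag is `--insecure-registry` (the `--engine-insecure-registry` form is the docker-machine spelling):

```bash
# /etc/default/docker on each workspace node (hypothetical layout)
# The tcp host entry adds the TCP listener mentioned above; the remaining
# flags are the cluster options from step 3.
DOCKER_OPTS="-H unix:///var/run/docker.sock \
  -H tcp://0.0.0.0:2376 \
  --cluster-store=consul://<CODENVY-IP>:8500 \
  --cluster-advertise=<WS-IF>:2376 \
  --insecure-registry=<CODENVY-IP>:5000"

# Then restart the daemon, e.g.:
sudo service docker restart
```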

5: On the Codenvy master node, modify `codenvy.env` to uncomment or add:
```json
# Comma-separated list of IP addresses for each workspace node
CODENVY_SWARM_NODES=<WS-IP>:2376,<WS2-IP>:2376,<WSn-IP>:2376
```

Contributor

Port depends on what is set in the docker daemon conf. If we declare only 2376 here, then we should require that it is open on the node. Or don't require 2376, and instead put here whichever port is actually used on the node.
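To illustrate that point, a quick sanity check: the 2376 in `CODENVY_SWARM_NODES` is only an example and must mirror whatever port each node's daemon actually listens on (its `-H tcp://...` setting). This assumes the daemons are reachable without TLS client certs, as in the simulated setup later in this doc:

```bash
# From the Codenvy master, confirm each workspace daemon answers on the
# port you put into CODENVY_SWARM_NODES before restarting Codenvy.
docker -H <WS-IP>:2376 info
docker -H <WS2-IP>:2376 info
```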

6: Restart Codenvy with `codenvy/cli restart`.
Contributor
Roman said that we can reload just the swarm container, so you can test it with him. It is a better way if it works.

Contributor Author
We do not have that syntax in the CLI. Until the CLI supports such a command, we only have restart.

Contributor
But then workspaces will be stopped. It is not good for scaling to have to stop the whole application just to add one node.
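If only the Swarm container needed to be reloaded, as suggested above, the operation might look like the sketch below. The container name `codenvy_swarm_1` is an assumption about the compose-generated name and would need to be verified on a running install; the thread above leaves open whether this avoids a full restart.

```bash
# Find the Swarm container's actual name first, then restart only it,
# leaving the rest of Codenvy (and running workspaces) untouched.
docker ps --format '{{.Names}}' | grep -i swarm
docker restart codenvy_swarm_1   # hypothetical name; substitute what docker ps shows
```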


This simulated scaling can be used for production, but it is generally discouraged because you would be running Docker in VMs on a host and taking on extra I/O overhead that is not generally necessary. However, this simulation-based approach gives good pointers on configuring a distributed, cluster-based system if you were to use VMs only.

As an example, the following sequence launches a 3-node cluster of Codenvy using Docker Machine with a VirtualBox hypervisor. In this example, we launch 4 VMs: a Codenvy node, 2 additional workspace nodes, and a node to handle key-value storage. The key-value storage node is typically not part of the scaling configuration. However, Codenvy requires an "overlay" network, which is powered by a key-value storage provider such as Consul, etcd, or ZooKeeper. When running Codenvy on the host, we are able to set up an etcd key-value storage system automatically and associate the nodes with it. However, in a VM scale-out scenario, a dedicated key-value storage provider is needed. This particular example uses Consul key-value storage to set up the overlay network.
Contributor
When running Codenvy on the host, we are able to setup an etcd key-value storage system automatically and associate the nodes with it

Is this about dockerized Codenvy or not? In dockerized Codenvy we don't install KV storage yet, so maybe we should not put that in the docs for now.
If it is about non-dockerized Codenvy, I would recommend specifying that.

Contributor Author
It is only for dockerized Codenvy. The simulated scaling example runs Consul KV storage, so we are covered.
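Before the Codenvy master machine below can reference `<KV-IP>`, the key-value VM has to exist. A minimal sketch of that step, reusing the `progrium/consul` image from the earlier instructions; the machine name `kv` and the 512 MB size are illustrative assumptions:

```bash
# Create a small VM dedicated to the key-value store
docker-machine create -d virtualbox --virtualbox-memory "512" kv

# Note its IP; this is the <KV-IP> used in the cluster-store options below
docker-machine ip kv

# Run Consul on that VM (same image and flags used elsewhere in this guide)
docker $(docker-machine config kv) run -d -p 8500:8500 -h consul \
  progrium/consul -server -bootstrap
```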

# Grab the IP address of this VM and use it in other commands where we have <CODENVY-IP>
docker-machine create -d virtualbox --engine-env DOCKER_TLS=no --virtualbox-memory "2048" \
--engine-opt host=tcp://0.0.0.0:2375 \
--engine-opt="cluster-store=consul://<KV-IP>:8500" \
Contributor
I think that for the Codenvy master node, which doesn't take part in the workspace infrastructure, we don't need the cluster config.

Contributor Author
You added the cluster configuration to your example. Can you update your example to indicate which of the four options are needed in the Docker daemon (on the Codenvy host) if the Codenvy master does not need to host workspaces?

There are four; which ones do we drop?

`--cluster-store=consul://<CODENVY-IP>:8500`, `--cluster-advertise=<WS-IF>:2376`, `--host=tcp://0.0.0.0:2375`, and `--engine-insecure-registry=<CODENVY-IP>:5000`

--engine-opt="cluster-advertise=eth1:2376" codenvy

# Workspace Node 1
docker-machine create -d virtualbox --engine-env DOCKER_TLS=no --virtualbox-memory "3000" \
Contributor
Maybe we should mention that 2048 MB is the default workspace size, so this node will be able to run a single workspace only. And we could use less than 3000 MB in that case.
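To make the sizing arithmetic concrete: with a 2048 MB default workspace, a 3000 MB VM fits one workspace, so a node meant for two default-size workspaces would need roughly 2 x 2048 MB plus daemon overhead. The sketch below mirrors the doc's own create command; the 4500 MB figure and the machine name `ws1` are illustrative assumptions, not documented requirements:

```bash
# Workspace node sized for roughly two default (2048 MB) workspaces
docker-machine create -d virtualbox --engine-env DOCKER_TLS=no \
  --virtualbox-memory "4500" \
  --engine-opt host=tcp://0.0.0.0:2375 \
  --engine-opt="cluster-store=consul://<KV-IP>:8500" \
  --engine-opt="cluster-advertise=eth1:2376" ws1
```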

-v /home/docker/.codenvy:/data codenvy/cli:nightly init

# Setup Codenvy's configuration file to have the IP addresses of each workspace node
sed -i "s/^#CODENVY_WORKSPACE_AUTO_SNAPSHOT=true.*/CODENVY_WORKSPACE_AUTO_SNAPSHOT=true/g" ~/.codenvy/codenvy.env
Contributor
Add sudo before seds.

Contributor Author
Your example did not have sudo in the other docs. Why is it needed now?

Contributor
I updated the example yesterday with sudo.
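Since the comment above the sed says the goal is to put the workspace node IPs into Codenvy's configuration, a companion sed for `CODENVY_SWARM_NODES` might look like the sketch below. It assumes the variable exists commented out in `codenvy.env` in the same style as the line above, and the `ws1`/`ws2` machine names are illustrative; the port must match each node's `-H tcp://...` setting (2375 in this docker-machine example):

```bash
# Collect the workspace node IPs from docker-machine (machine names are assumptions)
WS1_IP=$(docker-machine ip ws1)
WS2_IP=$(docker-machine ip ws2)

# Uncomment and fill in the Swarm node list, matching each node's daemon port
sudo sed -i "s|^#CODENVY_SWARM_NODES=.*|CODENVY_SWARM_NODES=${WS1_IP}:2375,${WS2_IP}:2375|g" \
  ~/.codenvy/codenvy.env
```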


# Start Codenvy with this configuration
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock \
-v /home/docker/.codenvy:/data codenvy/cli:nightly init
Contributor
Use `start` instead of `init`.
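Applying that suggestion to the snippet above, the corrected invocation would presumably be:

```bash
# Start Codenvy with this configuration (start rather than init, per the review)
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock \
  -v /home/docker/.codenvy:/data codenvy/cli:nightly start
```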

Contributor

@garagatyi garagatyi left a comment

Please address my comments

@TylerJewell
Contributor Author

I've incorporated most comments; I just need instructions about which parameters are not required in the master daemon configuration if the master is not going to host workspaces.

@TylerJewell
Contributor Author

TylerJewell commented Dec 21, 2016 via email

@TylerJewell TylerJewell merged commit 1ce100d into master Dec 21, 2016
@TylerJewell TylerJewell deleted the simulated-scaling branch December 21, 2016 19:21
Contributor

@bmicklea bmicklea left a comment


+1
