
ZENKO-887 change to Installation/Install_guide.rst #240

Merged · 3 commits · Aug 2, 2018

Conversation

wabernat (Contributor):

This makes a one-line change to Install_guide.rst. Note, however, that I had to git add this file and directory and paste in a copy of the file, because the previous two versions are hung up in builds, so the history may look odd.


Helm can now install applications on the Kubernetes cluster.

3. Declare the ZooKeeper repository:
Contributor:

This section is no longer needed

$ helm repo add zenko-zookeeper https://scality.github.io/zenko-zookeeper/charts
"zenko-zookeeper" has been added to your repositories

$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
Contributor:

While these are no longer needed, we do need the following step: helm repo add scality https://scality.github.io/charts/
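Rendered in the guide's own transcript style, the suggested replacement step would read as follows (the confirmation line is typical helm output, shown here as an assumption):

```shell
$ helm repo add scality https://scality.github.io/charts/
"scality" has been added to your repositories
```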

@giacomoguiulfo (Contributor) left a comment:

Not all the comments are requests.

Fixed a couple of formatting errors, one typo.

Removed ZENKO-633

Fixed more minor formatting issues.

updated issues per Salim's comments

removed:
autoscaling
lines 206-208
added note re NGINX ingress controller

Implementing Giacomo's changes.

Also added info for non-MetalK8s Helm installation.

picked a couple of nits,

added nodes 4 and 5 to cluster build instructions.
@wabernat force-pushed the bugfix/ZENKO-887_minimum_rootFS_req branch from b5952c7 to 55a20a2 on August 2, 2018 at 00:27
@giacomoguiulfo changed the title from "committing ZENKO-887 change to Installation/Install_guide.rst" to "ZENKO-887 change to Installation/Install_guide.rst" on Aug 2, 2018
@ssalaues merged commit bfe0889 into development/0.9 on Aug 2, 2018
@ssalaues deleted the bugfix/ZENKO-887_minimum_rootFS_req branch on August 2, 2018 at 17:40
Cluster," below.

The following section describes general cluster requirements, which are tested
on Metal K8s. Because MetalK8s is designed to operate without support from
Contributor:

MetalK8s, not Metal K8s.

Contributor Author:

done.

VMs) running CentOS_ 7.4 (The recommended minimum for Zenko production service
is five server nodes with three masters/etcds, but for testing and
familiarization, three masters and three nodes is fine). You must have SSH
access to these machines and they must have SSH access to each other. (You
Contributor:

No need for SSH access between nodes?

Contributor Author:

I don't understand the distinction you're making.

Contributor:

> they must have SSH access to each other

That's not a requirement, and if it were it shouldn't be one.

Contributor Author:

OK, I talked through this. Fixed.

is five server nodes with three masters/etcds, but for testing and
familiarization, three masters and three nodes is fine). You must have SSH
access to these machines and they must have SSH access to each other. (You
can copy SSH credentials from one machine to the next and log in once to
Contributor:

Never copy SSH private keys to other machines... There's no need to do it when deploying MetalK8s, and it's bad practice. Worst case you use agent forwarding.
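For reference, a minimal sketch of the agent-forwarding alternative mentioned above, as an ssh_config fragment (the host pattern and user are placeholders, not project recommendations):

```
# ~/.ssh/config on the deployment workstation
Host node-*
    User centos
    # Forward the local ssh-agent so onward hops can authenticate
    # without any private key ever being copied to the nodes.
    ForwardAgent yes
```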

Contributor Author:

fixed.

Sizing
======

Sizing for Metal K8s
Contributor:

MetalK8s

Contributor Author:

done.

- Storage (for the system)

- 20 GB for the root filesystem
- 16 GB for etcd
Contributor:

If we size etcd storage, we should document how to set up the system(s) to have etcd storage not on / (which is good practice in any case).

Contributor Author:

To whose expertise shall I appeal for this information?

Contributor Author:

Is this a show-stopper? I need to know either: a) that I can omit the etcd sizing requirement or b) how to configure the system to put etcd somewhere other than / .

If I don't hear back, I'm filing this under "nice to have" and moving the other changes along.

Contributor:

I think this ought to be discussed with our TS folks. Right now, we don't have any recommendations for putting various types of data on different volumes, but maybe that should be the case.
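For what it's worth, one common way to keep etcd data off / is to mount a dedicated volume at etcd's default data directory before installing the cluster. A sketch, assuming a spare disk at /dev/sdb (the device name and filesystem choice are assumptions, not project recommendations):

```shell
# Format the spare disk and mount it at etcd's default data directory,
# so etcd writes never compete with the root filesystem.
mkfs.xfs /dev/sdb
mkdir -p /var/lib/etcd
echo '/dev/sdb  /var/lib/etcd  xfs  defaults  0 0' >> /etc/fstab
mount /var/lib/etcd
```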

To install MetalK8s, you must issue commands from within a virtual shell.
The following steps ensure you can access the virtual environment.

1. Install python-virtualenv:
Contributor:

No need to install python-virtualenv?

Contributor Author:

removed.


1. Make sure kubectl is installed on your local machine::

$ yum install kubectl
Contributor:

You get this in a make shell environment, which also ensures you get a version that works with the deployed cluster, which is definitely not guaranteed when installing from the CentOS repositories.
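As a sketch of that workflow (the checkout path is a placeholder; make shell is the target the comment refers to):

```shell
# From a MetalK8s checkout, enter the provided shell environment,
# which puts a kubectl matching the deployed cluster on PATH.
cd metalk8s/
make shell
kubectl version
```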

Contributor Author:

deleted.

+-------------------------+---------------------------------------------------------+-------------------------------------------------------------------------------------------------+---------------------------------------+
| `Grafana`_ | Monitoring dashboards for cluster services | http://localhost:8001/api/v1/namespaces/kube-ops/services/kube-prometheus-grafana:http/proxy/ | |
+-------------------------+---------------------------------------------------------+-------------------------------------------------------------------------------------------------+---------------------------------------+
| `Cerebro`_ | An administration and monitoring console for | http://localhost:8001/api/v1/namespaces/kube-ops/services/cerebro:http/proxy/ | When accessing Cerebro, connect it to |
Contributor:

We pre-provision a config.

Contributor Author:

How would you have this reflected in the documentation? Is Cerebro preconfigured to elasticsearch:9200?

Contributor Author:

Following precedent in ZENKO-921 (Kibana, next comment) and deleting setup note.

Contributor:

Yeah, there's an entry in a pre-populated list of 'known clusters' when you open Cerebro.

Contributor Author:

Done.

| | Elasticsearch clusters | | http://elasticsearch:9200 to operate |
| | | | the MetalK8s Elasticsearch cluster. |
+-------------------------+---------------------------------------------------------+-------------------------------------------------------------------------------------------------+---------------------------------------+
| `Kibana`_ | A search console for logs indexed in Elasticsearch | http://localhost:8001/api/v1/namespaces/kube-ops/services/http:kibana:/proxy/ | When accessing Kibana for the first |
Contributor:

We pre-provision a config.

Contributor Author:

Fixed in ZENKO-921

| | | | field name*. |
+-------------------------+---------------------------------------------------------+-------------------------------------------------------------------------------------------------+---------------------------------------+

See :doc:`../architecture/cluster-services` for more about these services
Contributor:

Do these docs build?

Contributor Author:

Removed bad docs link.

Can't comment about build other than to note that the merge request went through, and the linter for RST is quite permissive/nonexistent.

Contributor:

This would only be found by actually building the docs. For MetalK8s, we're working on a step in CI which performs this, and as such makes sure any proposed change to the docs doesn't break its build.
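As an illustration only (the builder and paths are assumptions, since the CI step described here was still in progress), a local Sphinx build that would catch a broken :doc: reference looks like:

```shell
# Build the HTML docs locally; -W turns Sphinx warnings, including
# unresolved :doc: cross-references, into hard errors.
pip install sphinx
sphinx-build -W -b html docs/ docs/_build/html
```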

@smaffulli (Contributor):

Folks, this commit (although merged) contains information that is not pertinent to Zenko, IMO. How to install a MetalK8s cluster should be a link to the MetalK8s docs. The instructions to install Zenko are also too MetalK8s-centric, which is limiting.

I think we should revert this change.

And before going any further, I would suggest focusing on agreeing on a common architecture for the whole Zenko documentation, setting up the infrastructure to build it properly (take the MetalK8s docs as an example to follow), and putting the content we have in various formats/sources into one coherent information architecture. Such a conversation has already started in #149.

@ssalaues (Contributor) commented Aug 7, 2018:

@smaffulli I think the reason we were okay with having this MetalK8s info in the Zenko docs was that so many devs were having issues specifically with installing Zenko on MetalK8s. The docs for MetalK8s are great for just installing MetalK8s, but we need a bit more configuration for Zenko on MetalK8s (mainly volume provisioning).

And if our devs were having pretty much the same issues with installs, others will likely run into the same issues as well.

5 participants