Documentation updates Round 2! #1630

Merged
merged 54 commits on Feb 12, 2020
54 commits
7436eb6
Cleanup of subfolder docs.
Jan 21, 2020
1b8f6f0
Adding css to force tables to overflow.
Jan 22, 2020
62cc7c3
Cleaning up Backups documentation.
Jan 22, 2020
12f532b
Cleaning up GraphQL documentation.
Jan 22, 2020
2dc7cfd
Cleaning up Build Deploy Process docs.
Jan 22, 2020
d6af486
Cleanup of subfolder docs.
Jan 22, 2020
c9510bf
Fixing typo in table of contents.
Jan 30, 2020
6b8d787
Updating Build and Deploy process docs.
Jan 30, 2020
46fed7d
Updating subfolders doc.
Jan 30, 2020
d6d9b25
Updating deprecated pages config.
Jan 30, 2020
5e431eb
Adding graphiql images.
Jan 30, 2020
c78f60d
Updating GraphQL page.
Jan 30, 2020
d9227b0
Updating Install lagoon page.
Jan 30, 2020
eb1b556
Updating OpenShift requirements.
Jan 30, 2020
2d15f7f
Updating css for RBAC wide table.
Jan 30, 2020
6495ae2
Updating API Debugging page.
Jan 30, 2020
a586d31
Updating Code of Conduct page.
Jan 30, 2020
07fedd5
Updating Developing Lagoon page.
Jan 30, 2020
9bcaaed
Updating Test page.
Jan 30, 2020
6a4de3d
Updating contrib page.
Jan 30, 2020
43b34ba
Updating webhooks.
Jan 30, 2020
5e1c8b3
Updating docker compose page.
Jan 30, 2020
044958f
Updating first deployment page.
Jan 30, 2020
4eeaa70
Updating Go Live page.
Jan 30, 2020
d06a9db
Updating Using Lagoon.
Jan 30, 2020
8f555b8
Merge branch 'master' into documentation-updates
Feb 4, 2020
5d7c7ca
Merge branch 'master' of github.com:amazeeio/lagoon into documentatio…
Feb 4, 2020
6d192ab
Fixing merge artifact and github ref.
Feb 5, 2020
f687a7e
Cleanup of Install and OS reqs.
Feb 5, 2020
4628b88
Cleaning up and adding ciphers to ToC.
Feb 5, 2020
a3a0933
Changing wording in Contributing.
Feb 5, 2020
5b9d4e7
Fixing test links.
Feb 5, 2020
0bafb0c
Merge branch 'master' of github.com:amazeeio/lagoon into documentatio…
Feb 5, 2020
af79b59
Updating image pages.
Feb 5, 2020
f7d2139
Updating Using Lagoon section.
Feb 5, 2020
a55e379
Updating Using Lagoon.
Feb 5, 2020
83ae78c
Updating Administering and developing.
Feb 5, 2020
449aef5
Update docs/using_lagoon/index.md
AlannaBurke Feb 5, 2020
ff00753
Update docs/using_lagoon/index.md
AlannaBurke Feb 5, 2020
d7599a0
Update docs/using_lagoon/index.md
AlannaBurke Feb 5, 2020
f3ee316
Update docs/using_lagoon/index.md
AlannaBurke Feb 5, 2020
c274d97
Update docs/using_lagoon/index.md
AlannaBurke Feb 5, 2020
4c280d7
Update docs/using_lagoon/index.md
AlannaBurke Feb 6, 2020
530adc8
Update docs/using_lagoon/index.md
AlannaBurke Feb 6, 2020
76f48eb
Update docs/using_lagoon/index.md
AlannaBurke Feb 6, 2020
de5d187
Update docs/using_lagoon/index.md
AlannaBurke Feb 6, 2020
17408c2
Update docs/using_lagoon/index.md
AlannaBurke Feb 6, 2020
62370bb
Update docs/using_lagoon/index.md
AlannaBurke Feb 6, 2020
94accfd
Update docs/using_lagoon/index.md
AlannaBurke Feb 6, 2020
666a9ca
Changing deprecated mkdocs info back for readthedocs.
Feb 6, 2020
41c9693
Renaming Redis Permanent to Persistent.
Feb 6, 2020
4b8ddf2
Fixing links.
Feb 6, 2020
5edaabd
Fixing links.
Feb 6, 2020
73da45a
Merge branch 'documentation-updates' of github.com:amazeeio/lagoon in…
Feb 6, 2020
17 changes: 17 additions & 0 deletions docs/_static/theme_overrides.css
@@ -0,0 +1,17 @@
/* override table width restrictions */
@media screen and (min-width: 767px) {

.docutils table td {
/* !important prevents the common CSS stylesheets from overriding
this as on RTD they are loaded after this stylesheet */
white-space: nowrap !important;
}

.docutils {
overflow: visible !important;
}
}

.wy-nav-content-wrap {
overflow: scroll;
}
167 changes: 89 additions & 78 deletions docs/administering_lagoon/graphql_api.md

Large diffs are not rendered by default.

98 changes: 51 additions & 47 deletions docs/administering_lagoon/install.md
@@ -1,80 +1,84 @@
# Install Lagoon on OpenShift

Lagoon is not only capable to deploy into OpenShift, it actually runs in OpenShift. This creates the just tiny chicken-egg problem of how to install Lagoon on an OpenShift when there is no Lagoon yet.
Lagoon is not only capable of _deploying_ into OpenShift, it actually _runs_ in OpenShift. This creates just a tiny chicken-and-egg problem of how to install Lagoon on an OpenShift when there is no Lagoon yet. 🐣

Luckily we can use the local development environment to kickstart another Lagoon in any OpenShift, running somewhere in the world.
Luckily, we can use the local development environment to kickstart another Lagoon in any OpenShift, running somewhere in the world.

Check the [Requirements for OpenShift by Lagoon](/administering_lagoon/openshift_requirements.md) before continuing.
Check the [OpenShift Requirements](openshift_requirements.md) before continuing.

This process consists of 3 main stages, which are in short:
This process consists of 4 main stages:

1. Configure existing OpenShift
2. Configure and connect local Lagoon with OpenShift
1. Configure existing OpenShift.
2. Configure and connect local Lagoon with OpenShift.
3. Deploy!
4. Configure Installed Lagoon
4. Configure Installed Lagoon.

### Configure existing OpenShift
## Configure existing OpenShift

!!! hint
    This also works with the OpenShift provided via MiniShift that can be started via `make minishift`.

Hint: This also works with the OpenShift provided via MiniShift that can be started via `make minishift`.

In order to create resources inside OpenShift and push into the OpenShift Registry, Lagoon needs a Service Account within OpenShift \([read more about Service Accounts](https://docs.openshift.org/latest/dev_guide/service_accounts.html)\).

Technically Lagoon can use any Service Account and also needs no admin permissions, the only requirement is that the `self-provisioner` role is given to the Service Account.
Technically, Lagoon can use any Service Account and also needs no admin permissions. The only requirement is that the `self-provisioner` role is given to the Service Account.

In this example we create the Service Account `lagoon` in the OpenShift Project `default`.
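For reference, creating such a service account and granting it the role by hand would look roughly like the sketch below; the `make openshift-lagoon-setup` script in the steps that follow is the supported way to do this.

```text
# Sketch only – the setup script below handles this for you.
# Create the service account in the `default` project:
oc create serviceaccount lagoon -n default
# Grant it the required `self-provisioner` cluster role:
oc adm policy add-cluster-role-to-user self-provisioner -z lagoon -n default
```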

1. Make sure you have the oc cli tools already installed. If not, please see [here](https://docs.openshift.org/latest/cli_reference/get_started_cli.html#cli-reference-get-started-cli).
1. Make sure you have the `oc` CLI tools already installed. If not, please see [here](https://docs.openshift.org/latest/cli_reference/get_started_cli.html#cli-reference-get-started-cli).
2. Log into OpenShift as an admin:

oc login <openshift console>

3. Run the openshift-lagoon-setup script

make openshift-lagoon-setup

4. At the end of this script it will give you a serviceaccount token, keep that somewhere safe.

### Configure and connect local Lagoon with OpenShift
```text
oc login <openshift console>
```

In order to use a local Lagoon to deploy itself on an OpenShift, we need a subset of Lagoon running locally. We need to tech this local Lagoon how to connect to the OpenShift:
3. Run the `openshift-lagoon-setup` script

1. Edit `lagoon` inside `local-dev/api-data/01-populate-api-data.gql`, in the `Lagoon Kickstart Objects` section:
```text
make openshift-lagoon-setup
```

1. `[REPLACE ME WITH OPENSHIFT URL]` - The URL to the OpenShift Console, without `console` at the end.
2. `[REPLACE ME WITH OPENSHIFT LAGOON SERVICEACCOUNT TOKEN]` - The token of the lagoon service account that was shown to you during `make openshift-lagoon-setup`
4. At the end of this script it will give you a `serviceaccount` token. Keep that somewhere safe.
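If you misplace the token, you should be able to read it again from the service account itself \(assuming it is called `lagoon` in the `default` project, as in this example\):

```text
oc serviceaccounts get-token lagoon -n default
```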

2. Build required Images and start services:
## Configure and connect local Lagoon with OpenShift

make lagoon-kickstart
In order to use a local Lagoon to deploy itself on an OpenShift, we need a subset of Lagoon running locally. We need to teach this local Lagoon how to connect to the OpenShift:

This will do the following:
1. Edit `lagoon` inside `local-dev/api-data/01-populate-api-data.gql`, in the `Lagoon Kickstart Objects` section:
1. `[REPLACE ME WITH OPENSHIFT URL]` - The URL to the OpenShift Console, without `console` at the end.
2. `[REPLACE ME WITH OPENSHIFT LAGOON SERVICEACCOUNT TOKEN]` - The token of the lagoon service account that was shown to you during `make openshift-lagoon-setup`.
2. Build required images and start services:

1. Build all required Lagoon service Images (this can take a while)
2. Start all required Lagoon services
3. Wait 30 secs for all services to fully start
4. Trigger a deployment of the `lagoon` sitegroup that you edited further, which will cause your local lagoon to connect to the defined OpenShift and trigger a new deployment
5. Show the logs of all Local Lagoon Services
```text
make lagoon-kickstart
```

3. As soon as you see messages like `Build lagoon-1 running` in the logs it's time to connect to your OpenShift and check the build. The URL you will use for that depends on your system, but it's most probably the same as in `openshift.console`.
4. Then you should see a new OpenShift Project called `[lagoon] develop` and in there a `Build` that is running. On a local OpenShift you can find that under <https://192.168.42.100:8443/console/project/lagoon-develop/browse/builds/lagoon?tab=history>.
5. If you see the Build running check the logs and see how the deployment system does it's magic! This is your very first Lagoon deployment running! 🎉 Congrats!
This will do the following:

1. Short background on what is actually happening here:
1. Build all required Lagoon service images \(this can take a while\).
2. Start all required Lagoon services.
3. Wait 30 secs for all services to fully start.
4. Trigger a deployment of the `lagoon` sitegroup that you edited, which will cause your local lagoon to connect to the defined OpenShift and trigger a new deployment.
5. Show the logs of all local Lagoon services.

2. Your local running Lagoon (inside docker-compose) received a deploy command for a project called `lagoon` that you configured.
3. In this project it is defined to which OpenShift that should be deployed (one single Lagoon can deploy into multiple OpenShifts all around the world).
4. So the local running Lagoon service `openshiftBuildDeploy` connects to this OpenShift and creates a new project, some needed configurations (ServiceAccounts, BuildConfigs, etc.) and triggers a new Build.
5. This Build will run and deploy another Lagoon within the OpenShift it runs.
3. As soon as you see messages like `Build lagoon-1 running` in the logs, it's time to connect to your OpenShift and check the build \(a command-line alternative is sketched after this list\). The URL you will use for that depends on your system, but it's probably the same as in `openshift.console`.
4. Then you should see a new OpenShift Project called `[lagoon] develop`, and in there a `Build` that is running. On a local OpenShift you can find that under [https://192.168.42.100:8443/console/project/lagoon-develop/browse/builds/lagoon?tab=history](https://192.168.42.100:8443/console/project/lagoon-develop/browse/builds/lagoon?tab=history).
5. If you see the build running, check the logs and see how the deployment system does its magic! This is your very first Lagoon deployment running! 🎉 Congrats!
1. Short background on what is actually happening here:
2. Your local running Lagoon \(inside `docker-compose`\) received a deploy command for a project called `lagoon` that you configured.
3. This project has defined to which OpenShift it should be deployed \(one single Lagoon can deploy into multiple OpenShifts all around the world\).
4. The local running Lagoon service `openshiftBuildDeploy` connects to this OpenShift and creates a new project, some needed configurations \(ServiceAccounts, BuildConfigs, etc.\) and triggers a new build.
5. This build will run and deploy another Lagoon within the OpenShift it runs.
6. As soon as the build is done, go to the `Application > Deployments` section of the OpenShift Project, and you should see all the Lagoon DeploymentConfigs deployed and running. Also go to `Application > Routes` and click on the generated route for `rest2tasks` \(for a local OpenShift this will be [http://rest2tasks-lagoon-develop.192.168.42.100.xip.io/](http://rest2tasks-lagoon-develop.192.168.42.100.xip.io/)\), if you get `welcome to rest2tasks` as the result, you did everything correctly, bravo! 🏆

6. As soon as the build is done, go to the `Application > Deployments` section of the OpenShift Project and you should see all the Lagoon Deployment Configs deployed and running. Also go to `Application > Routes` and click on the generated route for `rest2tasks` (for a local OpenShift this will be <http://rest2tasks-lagoon-develop.192.168.42.100.xip.io/>), if you get `welcome to rest2tasks` as result, you did everything correct, bravo!
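If you prefer the command line to the web console for checking the build in step 3, something like the following should work \(the project and build names here are examples from a local setup and will differ in your cluster\):

```text
# List builds in the project Lagoon created for itself:
oc get builds -n lagoon-develop
# Follow the logs of the first build:
oc logs -f build/lagoon-1 -n lagoon-develop
```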
## OpendistroSecurity

### OpendistroSecurity
Once Lagoon is installed and operational, you need to initialize OpendistroSecurity to allow Kibana multitenancy. This only needs to be run once in a new setup of Lagoon.

Once Lagoon is install operational you need to initialise OpendistroSecurity to allow Kibana multitenancy. This only needs to be run once in a new setup of lagoon.
1. Open a shell of an Elasticsearch pod in `logs-db`.
2. Run `./securityadmin_demo.sh` \(see the sketch below\).

1. Open a shell of an elasticsearch pod in logs-db.
2. run ./securityadmin_demo.sh
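As a rough sketch, those two steps might look like this \(substitute your own project and pod names, which you can look up with `oc get pods`\):

```text
# Open a remote shell in one of the Elasticsearch pods of logs-db:
oc rsh -n <logs-db project> <elasticsearch pod>
# Then, inside the pod, initialize OpendistroSecurity:
./securityadmin_demo.sh
```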
## Configure Installed Lagoon

### Configure Installed Lagoon
We have a fully running Lagoon. Now it's time to configure the first project inside of it. Follow the examples in [GraphQL API](graphql_api.md).

We have now a fully running Lagoon. Now it's time to configure the first project inside of it. Follow the examples in [GraphQL API](/administering_lagoon/graphql_api.md)
35 changes: 19 additions & 16 deletions docs/administering_lagoon/openshift_requirements.md
@@ -1,30 +1,33 @@
# OpenShift Requirements by Lagoon
---
description: >-
  Lagoon tries to run on as standard an installation of OpenShift as possible,
  but it expects some things:
---

Lagoon tries to run on a standard installation of OpenShift as possible, but it expects some things:
# OpenShift Requirements


### OpenShift Version
## OpenShift Version

Currently Lagoon is tested and supported with OpenShift 3.11.

### Permissions
## Permissions

In order to setup Lagoon in an OpenShift you need a cluster-admin account to run the initial setup via `make lagoon-kickstart`. With this Lagoon will create it's own Roles and Permissions and the cluster-admin is not needed anymore.
In order to set up Lagoon in an OpenShift, you need a cluster-admin account to run the initial setup via `make lagoon-kickstart`. With this, Lagoon will create its own roles and permissions and the cluster-admin is not needed anymore.
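As a quick sanity check before running the kickstart, you can confirm which account you are logged in with and, on clients that ship the `auth` subcommand, whether it has cluster-wide rights \(this is just a convenience check, not part of the official setup\):

```text
# Which user is oc currently logged in as?
oc whoami
# Can that user do everything, everywhere? (requires an oc/kubectl client with `auth can-i`)
oc auth can-i '*' '*' --all-namespaces
```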

### PV StorageClasses
## PV StorageClasses

For projects deployed by Lagoon, the following StorageClasses are needed:

| Name | Used for | Description |
| -----| ------ |------|
| default | Single Pod mounts (mariadb, solr) | The default StorageClass will be used for any single pod mounts like mariadb, solr, etc. Suggested to use SSD based Storage |
| `bulk` | multi pod mounts (drupal files) | `bulk` StorageClass will be used whenever a project requests storage that needs to be mounted into multiple pods at the same time. Like nginx-php-persistent which will mount the same PVC in all nginx-php pods. Suggested to be on SSD but not required. |
| Name |Used for |Description |
| :--- |:--- |:--- |
| default | Single pod mounts \(MariaDB, Solr\) | The default StorageClass will be used for any single pod mounts like MariaDB, Solr, etc. We suggest using SSD-based storage. |
| `bulk` | Multi-pod mounts \(Drupal files\) | `bulk` StorageClass will be used whenever a project requests storage that needs to be mounted into multiple pods at the same time. Like `nginx-php-persistent`, which will mount the same PVC in all `nginx-php` pods. We suggest putting these on SSD, but it's not required. |

Lagoon itself will create PVCs with the following StorageClasses:

| Name | Used for | Description |
| -----| ------ |------|
| `lagoon-elasticsearch` | `logs-db` | `logs-db` will create PVCs with the storageClass name `lagoon-elasticsearch` for persistent storage of the elasticsearch. Standard deployments of `logs-db` create an Elasticsearch Cluster with 3 `live` nodes. Strongly suggested to be on SSD. |
| `lagoon-logs-db-archive` | `logs-db` | Beside the `live` nodes, `logs-db` also creates 3 `archive` nodes. These are used for elasticsearch data which is older than 1 month. Therefore it should be much bigger than `lagoon-elasticsearch` but can run on regular HDD. |
| `lagoon-logs-forwarder` | `logs-forwarder` | Used by `logs-forwarder` fluentd to provide a persistent buffer. Default configurations of Lagoon create 3 `logs-forwarder` pods. Preferred to be on SSD, but not needed. |
| Name | Used for | Description |
| :--- | :--- | :--- |
| `lagoon-elasticsearch` | `logs-db` | `logs-db` will create PVCs with the StorageClass name `lagoon-elasticsearch` for persistent storage of Elasticsearch. Standard deployments of `logs-db` create an Elasticsearch cluster with 3 `live` nodes. We strongly recommend putting these on SSD. |
| `lagoon-logs-db-archive` | `logs-db` | Besides the `live` nodes, `logs-db` also creates 3 `archive` nodes. These are used for Elasticsearch data which is older than 1 month. Therefore it should be much bigger than `lagoon-elasticsearch`. Can run on regular HDD. |
| `lagoon-logs-forwarder` | `logs-forwarder` | Used by `logs-forwarder` fluentd to provide a persistent buffer. Default configurations of Lagoon create 3 `logs-forwarder` pods. We prefer to put these on SSD, but it's not needed. |
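To verify that a cluster provides the StorageClasses from the two tables above, a quick check might be \(the class names are the ones listed here; adjust if your cluster names them differently\):

```text
# List all StorageClasses in the cluster:
oc get storageclass
# Check for the specific classes Lagoon expects:
oc get storageclass bulk lagoon-elasticsearch lagoon-logs-db-archive lagoon-logs-forwarder
```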