This repository has been archived by the owner on Aug 27, 2021. It is now read-only.

Commit: Section 3 (#7)

chicco785 committed Nov 15, 2018
1 parent 3f163d8 commit 13f2715
Showing 16 changed files with 1,502 additions and 2 deletions.
2 changes: 1 addition & 1 deletion .markdownlintrc
@@ -6,7 +6,7 @@
"code_blocks": false,
"tables": false
},
- "MD033": { "allowed_elements": ["img"] },
+ "MD033": { "allowed_elements": ["img","a"] },
"no-hard-tabs": false,
"whitespace": false,
"fenced-code-language": false,
2 changes: 2 additions & 0 deletions docs/1.essentials/5.steps_status.md
@@ -33,6 +33,8 @@ following operation:
[*https://wiki.openstack.org/wiki/Release_Naming*](https://wiki.openstack.org/wiki/Release_Naming)
to see the corresponding OpenStack release (in this example Kilo).

Alternatively, it is possible to use the semi-automatic scripts in this [repository](https://github.com/SmartInfrastructures/fiware-lab-refenv).

- HOW TO CHECK THE MONITORING VERSION

41 changes: 41 additions & 0 deletions docs/3.process/1.services.md
@@ -0,0 +1,41 @@
## OpenStack Services Required

FIWARE Lab Nodes are based on the OpenStack distribution. Please take a
look at the section "OpenStack upgrade version policy in FIWARE Lab"
(section 3.2) to understand which version of OpenStack should be running
on the nodes. As such, at the time of writing this document, nodes are
required to install the following OpenStack services based on the
**OpenStack Newton release**:

- Mandatory:

- OpenStack Nova (using KVM as hypervisor since image catalogue
stores KVM compatible images).

- OpenStack Glance (Swift as default backend type, other solutions
may be adopted depending on hardware owned by the specific
FIWARE Lab node).

- OpenStack Cinder (as default solution we suggest LVM, other
solutions may be adopted depending on hardware owned by the
specific FIWARE Lab node).

- OpenStack Neutron with OVS and GRE or VxLAN tunnels (floating
IPs must be made available to users).

- OpenStack Ceilometer, with MongoDB as the default backend.

- OpenStack Keystone only for initial setup and testing, then
FIWARE Lab keystone should be used.

- OpenStack Horizon only for initial setup and testing, then
FIWARE Lab Cloud Portal should be used.

- Optional:

- OpenStack Swift with a replication factor of 3. Alternatively,
Ceph exposing the OpenStack Swift APIs could be installed.

- OpenStack Murano with OpenStack Heat for PaaS capabilities.

- OpenStack Magnum with Swarm for managed Docker.
22 changes: 22 additions & 0 deletions docs/3.process/10.log.md
@@ -0,0 +1,22 @@
# Registering your Node in Deep Log Inspection

FIWARE Lab offers a centralised Log Inspection solution to help node
admins detect anomalies in the behaviour of their services or of their
users. The solution is based on the Elasticsearch ecosystem. Adopting
the deep log inspection is optional, but it is highly recommended to
help you manage your node.

Node admins only need to set up a syslog server and configure the
OpenStack services to forward their logs to it.
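
As a minimal sketch of that forwarding (the facility value is an example, and the exact option names should be checked against your release), OpenStack services of this generation support syslog output through two options in their configuration files:

```
# e.g. /etc/nova/nova.conf -- similar [DEFAULT] options exist for the
# other OpenStack services (glance, cinder, neutron, keystone, ...)
[DEFAULT]
use_syslog = True
syslog_log_facility = LOG_LOCAL0
```

After changing the configuration, the corresponding service must be restarted for the logging options to take effect.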

The syslog server used as default solution is the Monasca Log Agent
[https://github.com/logstash-plugins/logstash-output-monasca_log_api](https://github.com/logstash-plugins/logstash-output-monasca_log_api).
The server can be installed on an existing host (such as an OpenStack
controller node) or on a Virtual Machine provided inside the node.

Detailed instructions are provided in this guide:
[http://deep-log-inspection.readthedocs.io/en/latest/install/monasca-log-agent/](http://deep-log-inspection.readthedocs.io/en/latest/install/monasca-log-agent/)

The username and password needed to connect your log agent will be
provided by the FIWARE Lab team; you will need to open an issue in Jira
requesting them.
22 changes: 22 additions & 0 deletions docs/3.process/2.upgrade.md
@@ -0,0 +1,22 @@
## OpenStack upgrade version policy in FIWARE Lab

FIWARE Lab nodes are based on OpenStack, which is developed and
released on a roughly six-month cycle.

According to the FIWARE Lab management rules, the upgrade policy for a
FIWARE Lab node is to stay at most two versions behind the official
version currently under development. This avoids unsupported,
end-of-life OpenStack releases and the security and performance issues
that come with them, and it keeps nodes almost in line with the
community. It is important to notice that upgrading the OpenStack
version can also involve upgrading the Operating System; FIWARE Lab
recommends the use of an Ubuntu-like Operating System. The following
image details the FIWARE Lab policy in use:

![FIWARE Lab OpenStack support model](image3.png)

More information about the release series of OpenStack can be found at
the following link: [https://releases.openstack.org](https://releases.openstack.org)

* **IMPORTANT**: A FIWARE Lab Node that is not updated will stop working
properly because it will no longer be compatible with the FIWARE Lab services!
119 changes: 119 additions & 0 deletions docs/3.process/3.installing.md
@@ -0,0 +1,119 @@
## Installing FIWARE Lab Node

### Introduction

In order to install your FIWARE Lab Node you can choose among different
options that allow you to deploy an up-to-date version of vanilla
OpenStack compatible with the requirements listed in section [Essential things](../1.essentials).

### <a name="how"></a> How to install a FIWARE Lab Node

Currently, the OpenStack community offers many ways to install a
complete environment, either manually or automatically. Thanks to
modern DevOps techniques, the current trend is to leverage the
Infrastructure as Code concept and IT Automation tools such as Ansible,
Puppet or Chef in order to provision and maintain such complex systems.

Moreover, Operating System virtualization (containers) eases the
management and the upgrade of all the services running in an
OpenStack-based FIWARE Lab Node, and it also guarantees the
compatibility and portability of those services across different
Operating Systems[^2].

For the above reasons, FIWARE suggests the use of IT Automation tools
and container-based virtualization to set up and maintain FIWARE Lab
Nodes. Listed below are reference projects currently supporting the
OpenStack installation:

1. **OpenStack-Ansible:** OpenStack services are automatically
installed by Ansible and run inside LXC containers.

1. [*https://docs.openstack.org/project-deploy-guide/openstack-ansible/*](https://docs.openstack.org/project-deploy-guide/openstack-ansible/)

1. [*https://docs.openstack.org/openstack-ansible/latest/*](https://docs.openstack.org/openstack-ansible/latest/)

1. [*https://github.com/openstack/openstack-ansible*](https://github.com/openstack/openstack-ansible)

1. **Kolla & Kolla-Ansible:** OpenStack services run inside pre-built
Docker containers offered as Docker images from the Docker Hub and
installed on nodes by Ansible.

1. [*https://wiki.openstack.org/wiki/Kolla*](https://wiki.openstack.org/wiki/Kolla)

1. [*https://docs.openstack.org/kolla/latest/*](https://docs.openstack.org/kolla/latest/)

1. [*https://github.com/openstack/kolla-ansible*](https://github.com/openstack/kolla-ansible)
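
For illustration, a typical Kolla-Ansible flow looks like the following. This is a sketch, not the authoritative procedure: the `./multinode` inventory path is an example, and the project's deployment guide should be followed for a real installation.

```
$ pip install kolla-ansible
$ kolla-ansible -i ./multinode bootstrap-servers
$ kolla-ansible -i ./multinode prechecks
$ kolla-ansible -i ./multinode deploy
```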

Of course, manual installation is still possible, although discouraged,
as it results in more difficult management, primarily due to package
dependencies:

1. **Manual Installation & Configuration:**

1. [*https://docs.openstack.org/install-guide/*](https://docs.openstack.org/install-guide/)

1. [*https://docs.openstack.org/pike/install/*](https://docs.openstack.org/pike/install/)

### Suggested deployment architecture

To join FIWARE Lab no minimum requirements are enforced, but the
infrastructure must be adequate to support the needs of the users who
will be hosted on the new node. During the first node setup it may not
be clear how many users will be active, nor what their resource needs
will be. For these reasons it is strongly recommended, for a production
environment, to follow the suggested deployment architecture:

- 3 Controllers in HA (including the Neutron L3 HA solution) with the
following services:

- The nova-scheduler service, that allocates VMs on the
compute nodes.

- The cinder-scheduler service, that allocates block storage on
the compute nodes.

- The glance-registry service, that manages the images and
VM templates. The backend for the registry may be the controller
node, or the Object Storage.

- The neutron-server service, that manages the VM networks.

- The heat-api and heat-engine services, that provide orchestration.

- The swift-proxy service, that manages requests to the object
storage nodes.

- The nova-api service, that exposes the APIs to interact with
the nova-scheduler.

- The cinder-api service, that exposes the APIs to interact with
the cinder-scheduler.

- The glance-api service, that exposes the APIs to interact with
the glance-registry.

- The keystone service, that provides identity and authentication
for the OpenStack services of the node.

- (Optional) 3+ Object storage nodes with the following services:

- The swift-account-server service, that handles listing
of containers.

- The swift-container-server service, that handles listing of
stored objects.

- The swift-object-server service, that provides actual object
storage capability.

- 6+ Compute nodes (also including Cinder LVM) with the following
services:

- The nova-compute service, that manages VMs on the local node.

- The cinder-volume service, that manages block storage on the
local node.

- The neutron-agent service, that manages VM networks on the
local node.

- 3 Ceilometer nodes
120 changes: 120 additions & 0 deletions docs/3.process/4.configuring.md
@@ -0,0 +1,120 @@
## Configuring FIWARE Lab Node

This section provides details about how to make the proper changes in
the configuration of your node in order to join FIWARE Lab. Those
changes basically concern the proper configuration of flavors and
quotas and, more importantly, the common way of defining the available
networks in a FIWARE Lab node.

### Configure Flavors and Quotas

The default flavors should be:

| **ID** | **Name** | **Memory (MB)** | **Disk (GB)** | **Ephemeral** | **Swap** | **vCPUs** | **RXTX Factor** | **Public** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium| 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
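
The table above can be applied with a short loop over the flavor definitions. The sketch below only prints the `openstack flavor create` commands for review; drop the `echo` to actually run them against a cloud with admin credentials sourced:

```shell
# emit_flavor_cmds prints one "openstack flavor create" command per
# default flavor; id/name/ram/disk/vcpus values come from the table above.
emit_flavor_cmds() {
  while read -r id name ram disk vcpus; do
    echo openstack flavor create --id "$id" --ram "$ram" \
      --disk "$disk" --vcpus "$vcpus" --public "$name"
  done <<'EOF'
1 m1.tiny 512 1 1
2 m1.small 2048 20 1
3 m1.medium 4096 40 2
4 m1.large 8192 80 4
EOF
}

emit_flavor_cmds
```

The Swap column is empty and the RXTX factor is 1.0, which are the CLI defaults, so those flags are not passed explicitly.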

For the nova service, the default quotas values that should be defined
are the following:

Default defined quotas

| **Quota** | **Limit** |
| --- | --- |
| Instances | 2 |
| Cores | 4 |
| RAM | 4096 |
| Floating IPs | 1 |
| Fixed IPs | -1 |
| Metadata Items | 1024 |
| Injected files | 5 |
| Injected file content (bytes) | 20240 |
| Injected file path (bytes) | 255 |
| Key pairs | 10 |
| Security Groups | 10 |
| Security Group Rules | 20 |
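
Assuming admin credentials are sourced, these defaults can be applied to the `default` quota class with the unified CLI. A sketch using values from the table above (check `openstack quota set --help` on your release for the exact flag set):

```
$ openstack quota set --class default \
    --instances 2 --cores 4 --ram 4096 --floating-ips 1 \
    --key-pairs 10 --injected-files 5 \
    --secgroups 10 --secgroup-rules 20
```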

The neutron default quotas should be:

Default defined neutron quotas

| **Field** | **Value** |
| --- | --- |
| Floating IP | 1 |
| Network | 5 |
| Port | 20 |
| Router | 1 |
| Security Group | -1 |
| Security Group Rule | -1 |
| Subnet | 5 |
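
Similarly, the neutron defaults can be applied per project with the unified CLI. A sketch where `<project>` is a placeholder for the tenant name or ID:

```
$ openstack quota set <project> \
    --networks 5 --subnets 5 --ports 20 \
    --routers 1 --floating-ips 1
```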

### Configure OpenStack Networks

FIWARE Lab defines a set of predefined network names to be used by all
nodes. This helps the different services deployed on top of OpenStack
work with the correct network without any special configuration.

- **public-ext-net-01**. The Public External network: a non-shared
network providing a floating IP pool (i.e. a subnet) of public,
routable IPv4 addresses. Additionally, nodes can configure IPv6
dual-stack on this network in order to provide IPv6 addresses.
OpenStack instances cannot be attached directly to this network; it
is only visible to allocate the public IPs used by tenants.

- **node-int-net-01**. A shared tenant network providing DHCP IPv4
(and, in the future, IPv6) addresses. This network is visible to all
tenants, so anyone can attach OpenStack instances to it. Each node
can choose its own network range, since it should not collide with
other nodes' networks.
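
As an illustrative sketch of how these two networks could be created (the subnet name `public-ext-subnet-01` is hypothetical, the CIDRs are examples consistent with the listings later in this section, and each node picks its own internal range):

```
$ openstack network create --external public-ext-net-01
$ openstack subnet create --network public-ext-net-01 \
    --subnet-range 130.206.82.0/22 --no-dhcp public-ext-subnet-01
$ openstack network create --share node-int-net-01
$ openstack subnet create --network node-int-net-01 \
    --subnet-range 172.16.0.0/20 --dhcp node-int-subnet-01
```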

There is no limitation on the use of networks, and every node can
configure additional networks in its OpenStack configuration. We can
verify this network setup from the CLI by executing the following
command:

```
$ neutron net-list
```

Or using the more recent version of the CLI, the following command:

```
$ openstack network list
```

The output of networks and subnets should be:

Example returned values of: openstack network list


| Id | Name | subnets |
| --- | --- | --- |
| 3dccc622-7200-40be-b523-0f73674db0e7 | public-ext-net-01 | 44c356e1-53ad-43ce-b3b7-816bbd1d9529 130.206.82.0/22 |
| b99da016-cb02-4556-8d5f-2ce27a9a861d | node-int-net-01 | a250c7a4-4d23-4c9a-85be-3e9b367a00a1 172.16.0.0/20 |

And if we check the subnet associated with this network, through the
following commands

```
$ neutron subnet-list
```

or

```
$ openstack subnet list
```

we will see something like this for the second network:

Example returned values of: openstack subnet list


| Id | Name | CIDR | Allocation pools |
| --- | --- | --- | --- |
| a250c7a4-4d23-4c9a-85be-3e9b367a00a1 | node-int-subnet-01 | 172.16.0.0/20 | {"start": "172.16.0.2", "end": "172.16.15.254"} |
