Mass merge of master into stable/juno
This merges all of master into stable/juno for use in the 10.1.0
release cycle. This merge was taken directly out of master and
applied as a patch. Due to permission issues and general merge
conflicts, this was the easiest and most reliable way to
accomplish the mass merge.

Change-Id: I33068df8b85962e35cfde2038e60af2b3ca6ca12
cloudnull committed Dec 5, 2014
1 parent af6fdb8 commit 30d08fe
Showing 189 changed files with 5,309 additions and 1,334 deletions.
5 changes: 5 additions & 0 deletions .gitreview
@@ -0,0 +1,5 @@
[gerrit]
host=review.openstack.org
port=29418
project=stackforge/os-ansible-deployment.git

30 changes: 30 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,30 @@
### Contributor guidelines
**Filing Bugs**

Bugs should be filed on Launchpad, not GitHub:

https://bugs.launchpad.net/openstack-ansible

When submitting or working on a bug, please ensure the following criteria are met:

* The description clearly states the original problem or its root cause.
* Historical information on how the problem was identified is included.
* Any relevant logs are included.
* The provided information is fully self-contained; external access to web services or sites should not be needed.
* Steps to reproduce the problem are included where possible.

**Submitting Code**

Changes to the project should be submitted for review via the Gerrit tool, following
the workflow documented at:

https://wiki.openstack.org/GerritWorkflow

Pull requests submitted through GitHub will be ignored.


**Extra**

***Tags***: If it's a bug that needs fixing in a branch in addition to master, add a '\<release\>-backport-potential' tag (e.g. ```juno-backport-potential```). There are predefined tags that will autocomplete.
***Status***: Please leave this alone; it should remain New until someone triages the issue.
***Importance***: Should only be changed if it is a blocker/gating issue. If it is, set it to High, and use Critical only for a bug that can take down whole infrastructures.
126 changes: 126 additions & 0 deletions etc/rpc_deploy/conf.d/swift.yml
@@ -0,0 +1,126 @@
---
## Swift group variables are required only when using swift.
## Below is a sample configuration.
##
## The part_power value is required at the swift level and cannot be changed once the ring has been built without manually removing the rings and rerunning the ring builder.
##
## The weight value is not required, and will default to 100 if not specified. This value will apply to all drives that are set up, but can be overridden on a per-drive or per-node basis by setting it in the node or drive config.
##
## The min_part_hours and repl_number values are not required, and will default to "1" and "3" respectively. Setting these at the swift level applies them as defaults for all rings (including account/container). They can be overridden on a per-ring basis by adjusting the value for account/container or for a specific storage_policy.
##
## If you are using a storage network, specify the interface that the storage_network is set up on. If this value isn't specified, the swift services will listen on the default management IP. NB: if the storage_network isn't set but per-host storage_ips are set (or the storage_ip is not on the storage_network interface), the proxy server will not be able to connect to the storage services, because this directly changes the IP address the storage hosts listen on.
##
## If you are using a dedicated replication network, specify the interface that the replication_network is set up on. If this value isn't specified, no dedicated replication_network will be set. As with the storage_network, this affects the IP that the replication service listens on; if the repl_ip isn't set on that interface, replication will not work properly.
##
## Set the default drives per host. This is useful when all hosts have exactly the same drives. It can be overridden on a per-host basis.
##
## Set the default mount_point, which is the location where your swift drives are mounted. For example, with a mount point of /mnt and a drive of sdc, there should be a drive mounted at /mnt/sdc on the swift host. This can be overridden on a per-host basis if required.
##
## For account and container rings, min_part_hours and repl_number are the only values that can be set. Setting them here will override the defaults for the specific ring (see the per-ring illustration after the sample below).
##
## Specify your storage_policies. There must be at least one storage policy, and at least one with an index of 0 for legacy containers created before storage policies were instituted. At least one storage policy must have "default: True" set. The options that can be set for a storage policy are name (str), index (int), default (bool), deprecated (bool), repl_number (int) and min_part_hours (int), with the last two overriding the defaults if specified.
##

# global_overrides:
#   swift:
#     part_power: 8
#     weight: 100
#     min_part_hours: 1
#     repl_number: 3
#     storage_network: 'br-storage'
#     replication_network: 'br-repl'
#     drives:
#       - name: sdc
#       - name: sdd
#       - name: sde
#       - name: sdf
#     mount_point: /mnt
#     account:
#     container:
#     storage_policies:
#       - policy:
#           name: gold
#           index: 0
#           default: True
#       - policy:
#           name: silver
#           index: 1
#           repl_number: 3
#           deprecated: True
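## The account and container keys above are left empty in this sample. As noted
## earlier, min_part_hours and repl_number may be set under them to override the
## swift-level defaults for those specific rings. A hypothetical illustration
## (values chosen for demonstration only, not part of the shipped sample):
#     account:
#       min_part_hours: 2
#     container:
#       repl_number: 4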

## Specify the swift-proxy_hosts - these will typically be your infra nodes and are where your swift_proxy containers will be created.
## All that is required is the IP address of the host that ansible will connect to.

# swift-proxy_hosts:
#   infra-node1:
#     ip: 192.0.2.1
#   infra-node2:
#     ip: 192.0.2.2
#   infra-node3:
#     ip: 192.0.2.3

## Specify the swift_hosts which will be the swift storage nodes.
##
## The ip is the address of the host that ansible will connect to.
##
## All swift settings are set under swift_vars.
##
## The storage_ip and repl_ip represent the IPs that will go into the ring for storage and replication.
## These will be pulled from the server's interface (specified by storage_network & replication_network), but can be overridden by specifying them at the node or drive level.
## If only the storage_ip is specified then the repl_ip will default to the storage_ip.
## If only the repl_ip is specified then the storage_ip will default to the host ip above.
## If neither are specified both will default to the host ip above.
##
## zone and region can be specified for swift when building the ring.
##
## Groups can be set to list which rings a host's drives should belong to. This can be set on a per-drive basis, which will override the host setting.
##
## swift-node5 is an example of overriding the values: the groups are set at the host level and overridden on drive sdb, the weight is overridden for the host and specifically adjusted on drive sdb, and the storage/repl IPs are different for sdb.
##

# swift_hosts:
#   swift-node1:
#     ip: 192.0.2.4
#     container_vars:
#       swift_vars:
#         zone: 0
#   swift-node2:
#     ip: 192.0.2.5
#     container_vars:
#       swift_vars:
#         zone: 1
#   swift-node3:
#     ip: 192.0.2.6
#     container_vars:
#       swift_vars:
#         zone: 2
#   swift-node4:
#     ip: 192.0.2.7
#     container_vars:
#       swift_vars:
#         zone: 3
#   swift-node5:
#     ip: 192.0.2.8
#     container_vars:
#       swift_vars:
#         storage_ip: 198.51.100.8
#         repl_ip: 203.0.113.8
#         zone: 4
#         region: 3
#         weight: 200
#         groups:
#           - account
#           - container
#           - silver
#         drives:
#           - name: sdb
#             storage_ip: 198.51.100.9
#             repl_ip: 203.0.113.9
#             weight: 75
#             groups:
#               - gold
#           - name: sdc
#           - name: sdd
#           - name: sde
#           - name: sdf
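## A hypothetical additional node (swift-node6, not part of the sample above)
## illustrating the defaulting rules described earlier: only repl_ip is set, so
## the storage_ip for its drives falls back to the host ip (192.0.2.9), while
## replication traffic uses the dedicated address.
#   swift-node6:
#     ip: 192.0.2.9
#     container_vars:
#       swift_vars:
#         repl_ip: 203.0.113.10
#         zone: 5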

47 changes: 47 additions & 0 deletions etc/rpc_deploy/rpc_environment.yml
@@ -116,6 +116,18 @@ component_skel:
  utility:
    belongs_to:
      - utility_all
  swift_proxy:
    belongs_to:
      - swift_all
  swift_acc:
    belongs_to:
      - swift_all
  swift_obj:
    belongs_to:
      - swift_all
  swift_cont:
    belongs_to:
      - swift_all
container_skel:
  cinder_api_container:
    belongs_to:
@@ -257,6 +269,29 @@ container_skel:
      - infra_containers
    contains:
      - utility
  swift_proxy_container:
    belongs_to:
      - swift-proxy_containers
    contains:
      - swift_proxy
  swift_acc_container:
    is_metal: true
    belongs_to:
      - swift_containers
    contains:
      - swift_acc
  swift_obj_container:
    is_metal: true
    belongs_to:
      - swift_containers
    contains:
      - swift_obj
  swift_cont_container:
    is_metal: true
    belongs_to:
      - swift_containers
    contains:
      - swift_cont
physical_skel:
  network_containers:
    belongs_to:
@@ -288,3 +323,15 @@ physical_skel:
  storage_hosts:
    belongs_to:
      - hosts
  swift_containers:
    belongs_to:
      - all_containers
  swift_hosts:
    belongs_to:
      - hosts
  swift-proxy_containers:
    belongs_to:
      - all_containers
  swift-proxy_hosts:
    belongs_to:
      - hosts
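## Illustration only (not part of the shipped file): the swift-proxy_hosts and
## swift_hosts groups declared above are populated by the deployer in
## etc/rpc_deploy/conf.d/swift.yml, along the lines of:
##   swift-proxy_hosts:
##     infra-node1:
##       ip: 192.0.2.1
##   swift_hosts:
##     swift-node1:
##       ip: 192.0.2.4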
22 changes: 13 additions & 9 deletions etc/rpc_deploy/rpc_user_config.yml
@@ -15,18 +15,20 @@

# This is the md5 of the environment file
# this will ensure consistency when deploying.
environment_version: e0955a92a761d5845520a82dcca596af
environment_version: 3511a43b8e4cc39af4beaaa852b5f917

# User defined CIDR used for containers
# Global cidr/s used for everything.
# User defined container networks in CIDR notation. The inventory generator
# assigns IP addresses to network interfaces inside containers from these
# ranges.
cidr_networks:
# Cidr used in the Management network
# Management (same range as br-mgmt on the target hosts)
container: 172.29.236.0/22
# Cidr used in the Service network
# Service (optional, same range as br-snet on the target hosts)
snet: 172.29.248.0/22
# Cidr used in the VM network
# Tunnel endpoints for VXLAN tenant networks
# (same range as br-vxlan on the target hosts)
tunnel: 172.29.240.0/22
# Cidr used in the Storage network
# Storage (same range as br-storage on the target hosts)
storage: 172.29.244.0/22

# User defined list of consumed IP addresses that may intersect
@@ -43,6 +45,8 @@ global_overrides:
internal_lb_vip_address: 172.29.236.1
# External DMZ VIP address
external_lb_vip_address: 192.168.1.1
# Name of load balancer
lb_name: lb_name_in_core
# Bridged interface to use with tunnel type networks
tunnel_bridge: "br-vxlan"
# Bridged interface to build containers with
@@ -72,6 +76,8 @@ global_overrides:
- cinder_api
- cinder_volume
- nova_compute
# If you are using the storage network for swift_proxy, add it to the group_binds
# - swift_proxy
type: "raw"
container_bridge: "br-storage"
container_interface: "eth2"
@@ -109,8 +115,6 @@ global_overrides:
type: "vlan"
range: "1:1"
net_name: "vlan"
# Name of load balancer
lb_name: lb_name_in_core

# User defined Infrastructure Hosts, this should be a required group
infra_hosts:
26 changes: 17 additions & 9 deletions etc/rpc_deploy/rpc_user_config.yml.example
@@ -17,16 +17,18 @@
# this will ensure consistency when deploying.
environment_version: 5e7155d022462c5a82384c1b2ed8b946

# User defined CIDR used for containers
# Global cidr/s used for everything.
# User defined container networks in CIDR notation. The inventory generator
# assigns IP addresses to network interfaces inside containers from these
# ranges.
cidr_networks:
# Cidr used in the Management network
# Management (same range as br-mgmt on the target hosts)
container: 172.29.236.0/22
# Cidr used in the Service network
# Service (optional, same range as br-snet on the target hosts)
snet: 172.29.248.0/22
# Cidr used in the VM network
# Tunnel endpoints for VXLAN tenant networks
# (same range as br-vxlan on the target hosts)
tunnel: 172.29.240.0/22
# Cidr used in the Storage network
# Storage (same range as br-storage on the target hosts)
storage: 172.29.244.0/22

# User defined list of consumed IP addresses that may intersect
@@ -43,6 +45,8 @@ global_overrides:
internal_lb_vip_address: 172.29.236.10
# External DMZ VIP address
external_lb_vip_address: 192.168.1.1
# Name of load balancer
lb_name: lb_name_in_core
# Bridged interface to use with tunnel type networks
tunnel_bridge: "br-vxlan"
# Bridged interface to build containers with
@@ -72,6 +76,8 @@ global_overrides:
- cinder_api
- cinder_volume
- nova_compute
# If you are using the storage network for swift_proxy, add it to the group_binds
# - swift_proxy
type: "raw"
container_bridge: "br-storage"
container_interface: "eth2"
@@ -109,8 +115,6 @@ global_overrides:
type: "vlan"
range: "1:1"
net_name: "vlan"
# Name of load balancer
lb_name: lb_name_in_core
# Other options you may want
debug: True
### Cinder default volume type option
@@ -176,7 +180,7 @@ storage_hosts:
cinder_storage_availability_zone: cinderAZ_3
cinder_default_availability_zone: cinderAZ_1
cinder_backends:
limit_container_type: cinder_volume
limit_container_types: cinder_volume
netapp:
netapp_storage_family: ontap_7mode
netapp_storage_protocol: iscsi
@@ -186,6 +190,10 @@ storage_hosts:
netapp_password: "{{ cinder_netapp_password }}"
volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name: NETAPP_iSCSI
nfs_client:
nfs_shares_config: /etc/cinder/nfs_shares
shares:
- { ip: "{{ cinder_netapp_hostname }}", share: "/vol/cinder" }

# User defined Logging Hosts, this should be a required group
log_hosts:
