From dca1f14a130a46d5805aa1d267a2116a589b7db2 Mon Sep 17 00:00:00 2001
From: Madhukar Nayakbomman
Date: Wed, 18 Apr 2018 00:15:59 -0700
Subject: [PATCH] Rebasing openstack-helm repo

Changes from the commits below have been added as part of this rebase.

Cinder: allow Ceph RBD pool params to be tuned

This PS exposes the Ceph RBD pool params to the cinder chart, allowing
them to be tuned.

Change-Id: I615e999928948193b24cc4978efb31bd1b36f8f7
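For illustration, such tuning might be expressed as a values override of
the following shape -- a minimal sketch only: the key names under conf are
assumptions, although the matching RBD_POOL_REPLICATION and
RBD_POOL_CRUSH_RULE variables do appear in the storage-init script changes
further down in this patch.

    # Hypothetical cinder values override; key names are illustrative.
    conf:
      ceph:
        pools:
          volume:
            replication: 3              # pool "size" (number of replicas)
            crush_rule: replicated_rule
            chunk_size: 8               # used when the pool is created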
Armada check: Enable storage for OSH-infra services

This enables storage for the osh-infra services running in the armada
job.

Change-Id: Ic0f11a9d161529c6fb58474e856032745b07a032

remove trailing ws

Change-Id: Ida8e4a5d072f8dff635dfffd4336d697ab1d4753

Add ldap support

This patch set adds python ldap support to keystone.

Change-Id: I420612555d92f6fb932f2f210cc36f3f7f5afc97
Signed-off-by: Tin Lam

Reduce the number of workers spawned by services

This PS reduces the number of processes spawned by services, as with
Kubernetes load distribution can be better managed by a larger number of
single-threaded pods (up to a certain point). Doing so also provides
increased availability and leads to smoother rolling updates. In
addition, when running single replicas, resource consumption is reduced.

Change-Id: Ifb7494a0804913d843a072e10d26c6ec53c3bd16

DB-Drop-Jobs: consolidate to helm-toolkit

This PS consolidates the DB-Drop Job to helm-toolkit.

Change-Id: Ia2b035d730bf612086a9fd9b5d14aba494f56dc7

Add trustee domain

This patch set allows for searching the trustee user in a specified
domain rather than just the "default" domain.

Change-Id: I53ee6816e02c25e577244015fe5aea0870e0fd32
Signed-off-by: Tin Lam

Add Makefile

This patchset adds a Makefile for each component under tools/images.

Change-Id: I84d8bda0313e921f0921dfef10d14469ed26ff5c

Ingress controller service: consolidate to helm-toolkit

This PS consolidates the Ingress controller service, which is used to
resolve internal requests to public endpoints correctly, to
helm-toolkit.

Change-Id: If7c7deca1b8289a32709f7dc7c936883469aadfe

Cinder: Fix sudoers reference in configmap

The cinder_sudoers entry in the cinder configmap-etc was consuming the
neutron_sudoers entry in the values.yaml. This corrects it to point at
cinder_sudoers instead.

Change-Id: I214912b3ed4185a201f4f94e82eaa50d6d321018

Cinder: Fix default uid for cinder user with loci images

This PS corrects the UID for the cinder user used with loci images in
the cinder chart.

Change-Id: I1001711928fb47e77f01c8e83f88ec317a46498e

glance-api: add dependency on message bus

Without this, the api starts up in a non-working state; the bootstrap
job then runs and gives us images which are stuck queued.

Change-Id: Ie3e03620618b1c46882c05b3a5ef8745c78af6a3

Neutron: SR-IOV support

This PS adds SR-IOV support to OSH.

Change-Id: Ia744c6d7c4a45be7728bba3213b50f1246b897db

Cinder: add qemu profile to cinder images

This PS adds the qemu profile to cinder images.

Change-Id: I91f457471b0b9ae7d83a29ff6521ee319eea44f7

Add LDAP-backed domain gate

This patch set adds a non-voting (nv) gate with an OpenLDAP server,
loading some sample data via a bootstrap job for development or testing
use. This patch set also confirms that authentication works using
domain-specific configuration for keystone.

Consolidated change from: https://review.openstack.org/#/c/552976/

Co-Authored-By: Gage Hugo
Change-Id: I1aeccffc018d0fcefc8e2b15a4ac6b83cb2be8b6
Signed-off-by: Tin Lam

Nova: Fix sudoers location in nova-etc configmap

The nova_sudoers entry in the nova configmap-etc was consuming the
neutron_sudoers entry in the values.yaml. This corrects it to point at
nova_sudoers instead.

Change-Id: I621c817c579cc1c31fa51b1a0f49a43a652784a2

neutron: allow creation of ovs bridges with no ports

It's valid to create a bridge and not add ports; this restores that
ability.

Change-Id: I46881fe3ee48a56a796abe8cf2036eba9e4064e1

Use pod dependencies in nova chart

Changes the nova chart to depend on neutron pod labels instead of
daemonsets, in order to prepare for utilizing daemonset overrides in the
neutron chart. Utilizes a new feature of kubernetes-entrypoint, pod
dependencies, added to kubernetes-entrypoint in v0.3.0.

Change-Id: Ic79ddc1b7f477195c5b3dfd630df4d78d7589030
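As a sketch, a pod dependency of this kind could be expressed in a
chart's values roughly as follows. The exact key names here are
assumptions rather than the chart's confirmed schema; the idea is that
kubernetes-entrypoint v0.3.0 consumes such dependencies and blocks
container start until a pod matching the labels is ready.

    # Hypothetical values fragment; key names are illustrative.
    dependencies:
      static:
        compute:
          pod:
            - labels:
                application: neutron
                component: neutron-ovs-agent
              requireSameNode: true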
Use pod dependencies in neutron chart

Changes the neutron chart to depend on pod labels instead of daemonsets.

Change-Id: Ieaa2f2863864229a4f6587c3e66fa661b9b7ef81

Add tls support for ldap

This patch set adds TLS support for keystone LDAP.

External-tracking: OSH#555
Change-Id: Ice32a31a712b8534a5d1a8f90a8a203710bdb9a9
Signed-off-by: Tin Lam

Barbican: Include missing image build

This PS adds the missing image build commands for the barbican image.

Change-Id: I72085d20a098005bf79074f0f3297658de69f54c

Libvirt: Update ubuntu package version

This PS updates the libvirt package version.

Change-Id: I3d5f0cfc25412c1dcc4c70d5f060bc9a1541e68a

Polish TLS patch set

This patch set performs non-critical polish fixes to [0].

[0] https://review.openstack.org/#/c/552171/

Change-Id: I5bbb64d5af65782665fd659886e55e25bac61452
Signed-off-by: Tin Lam

Document usage of pod dependencies

Replace references to daemonset dependencies with pod dependencies in
the docs.

Change-Id: I252089006929d7e218ebfc4f98d49c4650143a7e

Use v0.3.0 of kubernetes-entrypoint

This version is already being used by some charts, so this brings the
rest of the charts in line and allows them to use a new feature, pod
dependencies, that this version provides.

Change-Id: Ie8289eb09b31cd8f98c2c5b4dd5bbe469078e6d8

Document node and label specific configurations

This PS adds documentation for node and label specific configurations.

Change-Id: I2bb02bfa028a61b2d8a9206eaff305590664946f

Ingress: support arbitrary hostnames

This PS allows arbitrary hostnames to be used for public endpoints,
provided they resolve externally to the ingress controllers.

Change-Id: I44411687f756968d00178d487af66c2393e6bde0

Revert "Changed MariaDB version to 10.2.13"

This reverts commit 81bf5f3656f12b6f8279329edcf91ef63e7a6b5f back to
MariaDB 10.1.23, which we know works with clustering enabled
(pod.replicas.server > 1).

Change-Id: Ibf70dbab78f03d32e1ec96e99ac8db59d23cb96e

Horizon: enable v3 keystone support

This PS enables v3 keystone support in horizon.

Change-Id: If176617d37efc19925c2dc5a65d992086442fd70

Neutron: agent host targeting

This PS adds the ability to target different configs to different hosts
for the neutron agents, in the same manner as nova-compute.

Change-Id: Iebd87e30014d6cac2127c7e1a14259b10d74fbf8

Detect and enable hugepage support for QEMU

Change-Id: I3284c0f8f8946a36a63871dc57e287fbe7260490

MariaDB: Update to 10.2.13; patching wsrep_sst_xtrabackup-v2

Recent versions of MariaDB (10.1.31, 10.2.13) have a regression that
breaks clustering. See https://github.com/MariaDB/server/pull/457 and
https://github.com/MariaDB/server/commit/4e6dab94d0931eafba502f5a91da29a54e75bb33
for an in-depth explanation. We need 10.2.13+ for Barbican to function
correctly (see bug #1734329), but we also need the fix above to support
MariaDB clustering. This work-around can be removed later on, when
MariaDB 10.2.x releases contain the needed script fix.

Thanks to Sam Yaple for helping track this down.

Change-Id: Ifd09d7effe7d382074ca9e6678df36bdd4bce0af

gate: fix ceph on centos

Change-Id: Id006bc4c81cfb4b3d72168f1da4ff1220c758e34

Neutron: SR-IOV agent template fix

This PS fixes the template rendered in the neutron SR-IOV agent
manifest.

Change-Id: Ib221213c8df94613a2dcf12e2615442db0684794

Nova: Update endpoint path to 2.1

This PS updates the Nova endpoint to use v2.1, which makes tempest
happy.

Change-Id: I1fbda225820cdc3b40be27198cc44caa15fac156

MariaDB: use multiple replicas in multinode gates

Change-Id: Ibea3f0270bed830c8b13eafc5f196f30601c13c3

fix typos in documentation

Change-Id: Idb156b0141e177041de5c79b2118d682808d45aa

Neutron: Move all config to be directly values driven

This PS moves all the config files to be directly values driven, both
simplifying override and allowing configs to be targeted to pods in
future work.

Change-Id: Ifcbc19b17aa1d145f12ed1aed8b15a69ca045bb7

Ceph: Increase period between livenessProbe checks

This PS updates the frequency and initial delay of the mons'
livenessProbe, to allow time for the cluster to restart if the mons get
into a crashloop backoff following a power outage.

Change-Id: Iea74c4d52882a157a84f4f12bc411f2014869f99
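The resulting probe settings, taken from the daemonset-mon.yaml hunk
further down in this patch, are:

    livenessProbe:
      exec:
        command:
          - /tmp/mon-check.sh
          - liveness
      initialDelaySeconds: 360
      periodSeconds: 180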
Gate: disable ubuntu multinode voting gate

This PS disables the multinode ubuntu gate from voting, which has been
failing due to -infra issues - and severely hampering development work
as a result.

Change-Id: I411ebe20ba19c52475b559952712faf742343673

Cinder: Move all config to be directly values driven

This PS moves all the config files to be directly values driven, both
simplifying override and allowing configs to be targeted to pods in
future work.

Change-Id: I286af7434aab6de941f9700a7fbf70c6dd0ee4cb

Horizon: Move all config to be directly values driven

This PS moves all the config files to be directly values driven, both
simplifying override and allowing configs to be targeted to pods in
future work.

Change-Id: I7e16585c9ef49275327d19a48f00bad192dc4923

Update heat bootstrap scripts

This patch set adds in two roles for heat, heat_stack_owner and
heat_stack_user, as outlined in the Newton [0] and Ocata [1] install
guides, as well as assigning the roles.

[0] https://docs.openstack.org/project-install-guide/orchestration/newton/install-ubuntu.html
[1] https://docs.openstack.org/project-install-guide/orchestration/ocata/install-ubuntu.html

Change-Id: I8510ae114448cc1985c11e9b337b9697a379a920
Signed-off-by: Tin Lam
Co-Authored-By: Pete Birley

Ingress: Give arbitrary fqdns a different name from namespaced rules

This PS gives ingress rules attached to the cluster-wide ingress
controller the suffix -fqdn to allow them to be used.

Change-Id: I7de85e349fb609b8380070030579b9b4767e72d1

Fix indent on Postgres pod resources

- Properly align the `resources` key in the Postgres server pod spec.

Change-Id: Ia17cdabd38291c1365aab7aca71dd59ee9a32b4f

fix the vms turn transient after libvirt pod restarts

After the libvirt pod restarts, virtual machines created before the
restart become transient; when these vms are then operated on,
nova-compute throws an exception. This is because the directory
/etc/libvirt/qemu in the pod, which contains the virtual machines' xml
files, is temporary: the xml files disappear after the pod restarts. We
therefore mount it to the hostpath /etc/libvirt/qemu.

Change-Id: I48fd712c2b0565cb2cfe850482e8501f4e5022a4
Closed-bug: 1760003
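In manifest terms the fix amounts to a hostPath mount of the following
shape -- a sketch with illustrative names; the actual change lands in
libvirt/templates/daemonset-libvirt.yaml below.

    # Illustrative only: persist libvirt domain XML across pod restarts.
    volumes:
      - name: etc-libvirt-qemu
        hostPath:
          path: /etc/libvirt/qemu
    containers:
      - name: libvirt
        volumeMounts:
          - name: etc-libvirt-qemu
            mountPath: /etc/libvirt/qemu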
Gate: Update heat templates and enable cinder in ceph dev pipeline

This PS updates the heat templates, reducing the size of the launched
vm. In addition, cinder is enabled in the ceph dev pipeline; this is
possible due to the resources no longer consumed by the test vm.

Change-Id: I9efe6fe643c636b660dd54b60dfe7c8785d7fec2

Add gate for rbd storage backend

This PS allows the rbd storage backend to be tested when applying
glance. Currently, only radosgw is verified after ceph is deployed.

Change-Id: Ia3c2c915a2e9a65b09123b8e1c47892069c9ae1b
Blueprint: add-rbd-gate

Ceph: Update images and references

This PS updates the Ceph chart's images and references.

Change-Id: I52b6577cdad58a21848f7eb31abb66ebdc47d81e

Ceph: Move all config to be directly values driven

This PS moves all the keyring templates to be directly values driven,
both simplifying override and allowing configs to be targeted to pods
in future work.

Change-Id: I7752cbfdeef85f71a1a084437556de062cbb5680

Helm-Toolkit: Reduce delta between OSH and OSH-Infra

This PS reduces the delta between the OSH and OSH-Infra helm toolkits.

Change-Id: I5026b0238555513f8415a864adf4e91e81e3fbd8

Helm-Toolkit: Reduce delta between OSH and OSH-Infra to image repo

This PS reduces the delta between the OSH and OSH-Infra helm toolkits
to simplify the image repo management functions.

Change-Id: I62a169cff39a96f98ec2b5664d483db26c771e4c

Rally: remove unused config template

This PS removes an unused config template in the rally chart, and also
cleans up some whitespace issues.

Change-Id: Iaf6168e377aaf9a2b895af8c8a76b5cb420bb5e8

Rally: Move all heat templates to be directly values driven

This PS moves all the heat templates to be directly values driven, both
simplifying override and allowing configs to be targeted to pods in
future work.

Change-Id: Iebe382bd7945abe9bfbb30c4cf48c53f17fcb1f4

Glance: Move all config to be directly values driven

This PS moves all the config files to be directly values driven, both
simplifying override and allowing configs to be targeted to pods in
future work.

Change-Id: Ida5d9e312cc18cb50f5805a59f9fc4fef1a98658

Gnocchi: move to use templater for apache config

This PS moves gnocchi to use the templater function for its apache
config.

Change-Id: I9b179db066867f00b8cd8cdbf92d37ea2dc8836d

Ironic: Move all config to be directly values driven

This PS moves all the config files to be directly values driven, both
simplifying override and allowing configs to be targeted to pods in
future work.

Change-Id: I177ddfe8c932733aeacb0fdb9b3e60ef75881c6a

Fix document ref link

A link referencing the software version is broken (404). This patch set
updates the link to point to the correct ansible vars yaml file.

Change-Id: I9383ad2bee1fa4671606a9ce19fa3965adcc2c52
Signed-off-by: Tin Lam

Ceph: Make mon deployment compatible with k8s >= 1.10

This PS updates the ceph chart to work with newer versions of K8s,
which always mount configmaps as read-only.

Change-Id: If96dec4af385ed1ce210f2d4f63e09c89ec82c76

Ceph: Mgr: force key creation on each restart

This PS forces keyring creation on each start of the mgr container. It
resolves an issue found following a k8s outage, where sometimes the key
is not created correctly the first time the container starts.

Change-Id: I7e642ca49883ac823196730362b796cd52cd841c

RabbitMQ: only request 256Mi for PVC by default

Change-Id: I94a30b16390a035fe6df3fd0f4a95b6ea000d8fe

Move openstack-helm-multinode-(centos|fedora) to experimental pipeline

To help conserve resources, move the centos / fedora multinode jobs
into the experimental pipeline. This will mean we are no longer using
10 nodes on every patchset. These jobs have been non-voting for 3+
months, and moving them will help reduce the number of nodes needed by
the helm project. The jobs can still be run on demand using 'check
experimental', and once they have been properly fixed they can be moved
back into the check / gate pipelines.

Change-Id: I6f5c6362749b7beb3e9f0ccff2b75d6b99d809d8
Signed-off-by: Paul Belanger

Storage: increase robustness of storage clean jobs

This PS increases the robustness of the storage cleaning jobs by
precreating the service accounts and roles for the pod to consume, and
removes the potential for race conditions by removing the delete hook.

Change-Id: I1f3c35fe2bd2a4325430e8025951349526f683af

Add robust ldap domain-specific config

This patch set provides PATCH capability for ldap-backed domain config,
and prevents silent failure if the configuration contains an erroneous
setting. This also moves from loading .conf files into the DB directly
to using the API endpoints.

Change-Id: I17a19046fa96e0f3e8fb029c156ba79c924a0097
Signed-off-by: Tin Lam

osh-gate: Move to use roles from openstack-helm-infra

This moves to consume the roles from openstack-helm-infra in the
openstack-helm gates.

Depends-On: https://review.openstack.org/559836
Change-Id: I3ed721333b899f8dde812f1843a9fcb074c63121

ldap: merge yaml for dependencies

Change-Id: I539a8dfa6903a60ccc013ee82dd4d3be4e3ff0df

senlin: yaml indentation fixes

Change-Id: I79c97747fa8494813ff27a471fac2be2b4b6ad5f

mistral: yaml indentation fixes

Change-Id: I93d1701cfc629dabc07550c0fbe0a754b77e7bcc

Add validation to domain logic

This patch set addresses the comments left in [0] by fixing the header
information in the python template file and adding logic to query the
domain-specific configuration.

[0] https://review.openstack.org/#/c/559191/

Change-Id: I656d7ac8158f9b40246ac739e4dc4fc88e1e43da
Signed-off-by: Tin Lam

openvswitch: use pidfile option

Make appctl search the pidfile for the exit command, as pid 1 is not
always the target process in some cases. For example, pid 1 is "pause"
when pid namespace sharing is enabled in your k8s cluster.

Change-Id: I90e202245a9522fe53bea7e1f047061a0a280834

memcached: yaml indentation fixes

Change-Id: Ib10c7f03d24cb39feb7f3eb7e35a21b0257b478c

keystone: yaml indentation fixes

Change-Id: Ic402d57f2b0a0a625164a294760476725faea3aa

nova: yaml indentation fixes

Change-Id: I45b6c691ce9ea4bb1cd4607efcf71a2dc068be3c

glance: yaml indentation fixes

Change-Id: Icf7366d44dbe8b6cba96a5ba781cd76a55278b18

cinder: yaml indentation fixes

Change-Id: Ia59b2822dbe40ab7431987b2dc55e00067275f86

heat: yaml indentation fixes

Change-Id: Ia514170edf2498abaedcf07872ea7e383e847f89

neutron: yaml indentation fixes

Change-Id: I579091fa21fcd0429bdc13df6cb2dfbeb8ae4a8e

Nova: NoVNCProxy Ingress

This PS adds ingress rules and config for nova's novncproxy.

Change-Id: Ibc89e67c8ee6c93d8ee3e798dec10e976c002cab

magnum: yaml indentation fixes

Change-Id: Ia504ee55f3b44250725043b240b9465e22491ded

RabbitMQ: recover from full cluster restart

This PS updates the RabbitMQ chart to name nodes via their hostnames
rather than IPs, allowing the cluster (and single nodes) to be
restarted without impact. Additionally, the rabbitmq management
interface is exposed, and basic helm tests have been added.

Change-Id: I84857d9f3697eaa8491aafaf6ee3b9d47dbf2191
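Hostname-based node naming can be sketched with RabbitMQ's standard
environment variables -- an illustration of the idea only, not
necessarily the exact mechanism the chart uses; the service and
namespace references here are assumptions.

    # Illustrative statefulset container env; names are assumptions.
    env:
      - name: MY_POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: MY_POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: RABBITMQ_USE_LONGNAME
        value: "true"
      - name: RABBITMQ_NODENAME
        value: "rabbit@$(MY_POD_NAME).rabbitmq.$(MY_POD_NAMESPACE).svc.cluster.local"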
zuul: yaml indent/cleanup

Change-Id: I915c40eb0d62949eaa7041ff1fe62e3a681763df

Fixes/Updates OSH Developer and Multinode install guide

This PS fixes a few typos and adds a DNS entry update section to notify
the user.

Closes-Bug: #1765459
Change-Id: I59f5c90aaa06a5996c3ddb7a7b1bd3c4adfe0eb7
---
 barbican/templates/job-db-drop.yaml | 73 +-
 barbican/templates/service-ingress-api.yaml | 18 +-
 barbican/values.yaml | 16 +-
 .../templates/job-db-drop.yaml | 8 +-
 ceilometer/templates/service-ingress-api.yaml | 18 +-
 ceilometer/values.yaml | 9 +-
 ceph/templates/bin/mgr/_start.sh.tpl | 3 +-
 ceph/templates/bin/mon/_start.sh.tpl | 6 +-
 ceph/templates/configmap-templates.yaml | 14 +-
 ceph/templates/daemonset-mon.yaml | 18 +-
 ceph/templates/daemonset-osd.yaml | 6 +-
 ceph/templates/deployment-mds.yaml | 4 +-
 ceph/templates/deployment-mgr.yaml | 4 +-
 ceph/templates/deployment-moncheck.yaml | 18 +-
 ceph/templates/deployment-rgw.yaml | 6 +-
 ceph/templates/job-rbd-pool.yaml | 2 +-
 ceph/templates/templates/_admin.keyring.tpl | 7 -
 .../templates/_bootstrap.keyring.mds.tpl | 3 -
 .../templates/_bootstrap.keyring.mgr.tpl | 3 -
 .../templates/_bootstrap.keyring.osd.tpl | 3 -
 .../templates/_bootstrap.keyring.rgw.tpl | 3 -
 ceph/templates/templates/_mon.keyring.tpl | 3 -
 ceph/values.yaml | 57 +-
 .../templates/bin/_backup-storage-init.sh.tpl | 2 +
 cinder/templates/bin/_storage-init.sh.tpl | 2 +
 cinder/templates/configmap-etc.yaml | 11 +-
 cinder/templates/deployment-volume.yaml | 10 +-
 cinder/templates/etc/_rootwrap.conf.tpl | 27 -
 .../etc/rootwrap.d/_volume.filters.tpl | 224 --
 cinder/templates/job-backup-storage-init.yaml | 6 +-
 cinder/templates/job-clean.yaml | 6 -
 cinder/templates/job-db-drop.yaml | 70 +-
 cinder/templates/job-storage-init.yaml | 6 +-
 cinder/templates/service-ingress-api.yaml | 18 +-
 cinder/values.yaml | 308 +-
 .../templates/job-db-drop.yaml | 8 +-
 congress/templates/service-ingress-api.yaml | 18 +-
 congress/values.yaml | 7 +-
 doc/source/devref/images.rst | 2 +-
 doc/source/devref/index.rst | 1 +
 doc/source/devref/networking.rst | 26 +-
 ...node-and-label-specific-configurations.rst | 106 +
 .../install/developer/deploy-with-ceph.rst | 4 +
 .../install/developer/deploy-with-nfs.rst | 4 +
 .../developer/kubernetes-and-common-setup.rst | 6 +
 doc/source/install/multinode.rst | 22 +-
 .../specs/support-linux-bridge-on-neutron.rst | 18 +-
 etcd/values.yaml | 2 +-
 glance/templates/configmap-etc.yaml | 3 +-
 glance/templates/etc/_swift-store.conf.tpl | 30 -
 glance/templates/job-clean.yaml | 6 -
 glance/templates/job-db-drop.yaml | 72 +-
 glance/templates/service-ingress-api.yaml | 18 +-
 .../templates/service-ingress-registry.yaml | 18 +-
 glance/values.yaml | 60 +-
 gnocchi/templates/configmap-etc.yaml | 3 +-
 gnocchi/templates/etc/_wsgi-gnocchi.conf.tpl | 37 -
 gnocchi/templates/job-clean.yaml | 6 -
 gnocchi/templates/job-db-drop.yaml | 20 +
 gnocchi/templates/service-ingress-api.yaml | 18 +-
 gnocchi/values.yaml | 31 +-
 heat/templates/bin/_trusts.sh.tpl | 3 +-
 heat/templates/job-db-drop.yaml | 70 +-
 heat/templates/job-trusts.yaml | 2 +
 heat/templates/service-ingress-api.yaml | 18 +-
 heat/templates/service-ingress-cfn.yaml | 18 +-
 .../templates/service-ingress-cloudwatch.yaml | 18 +-
 heat/values.yaml | 55 +-
 helm-toolkit/.gitignore | 2 +-
 .../_hostname_short_endpoint_lookup.tpl | 4 +
 .../_keystone_endpoint_scheme_lookup.tpl | 34 +
 .../_keystone_endpoint_uri_lookup.tpl | 8 +-
 ...ce_name_endpoint_with_namespace_lookup.tpl | 14 +
 .../templates/manifests/_ingress.yaml.tpl | 52 +-
 .../templates/manifests/_job-bootstrap.yaml | 3 +
 .../manifests/_job-db-drop-mysql.yaml.tpl | 123 +
 .../templates/manifests/_job-ks-user.yaml.tpl | 9 +-
 .../templates/manifests/_service-ingress.tpl | 43 +
 .../templates/scripts/_ks-user.sh.tpl | 11 +-
 ..._kubernetes_entrypoint_init_container.tpl | 4 +
 .../snippets/_kubernetes_pod_rbac_roles.tpl | 2 +-
 .../_kubernetes_pod_rbac_serviceaccount.tpl | 2 +
 .../templates/utils/_dependency_resolver.tpl | 8 +
 .../utils/_values_template_renderer.tpl | 81 +
 horizon/templates/configmap-etc.yaml | 24 +-
 horizon/templates/deployment.yaml | 31 +-
 horizon/templates/etc/_horizon.conf.tpl | 51 -
 horizon/templates/etc/_local_settings.tpl | 688 ----
 horizon/templates/job-db-drop.yaml | 64 +-
 horizon/templates/service-ingress.yaml | 18 +-
 horizon/values.yaml | 2901 ++++++++++-------
 ingress/values.yaml | 4 +-
 ironic/templates/configmap-etc.yaml | 22 +-
 ironic/templates/etc/_nginx.conf.tpl | 41 -
 ironic/templates/etc/_tftp-map-file.tpl | 4 -
 ironic/templates/job-db-drop.yaml | 20 +
 ironic/templates/service-ingress-api.yaml | 14 +-
 ironic/values.yaml | 50 +-
 keystone/templates/bin/_domain-manage.py.tpl | 55 +
 keystone/templates/bin/_domain-manage.sh.tpl | 14 +-
 keystone/templates/configmap-bin.yaml | 2 +
 keystone/templates/configmap-etc.yaml | 4 +-
 keystone/templates/deployment-api.yaml | 11 +
 .../templates/etc/_wsgi-keystone.conf.tpl | 4 +-
 keystone/templates/job-db-drop.yaml | 75 +-
 keystone/templates/job-domain-manage.yaml | 8 +-
 keystone/templates/secret-ldap-tls.yaml | 26 +
 keystone/templates/service-ingress-api.yaml | 18 +-
 keystone/values.yaml | 359 +-
 ldap/templates/_helpers.tpl | 6 +
 ldap/templates/bin/_bootstrap.sh.tpl | 8 +
 ldap/templates/configmap-bin.yaml | 27 +
 ldap/templates/configmap-etc.yaml | 27 +
 ldap/templates/job-bootstrap.yaml | 18 +
 ldap/values.yaml | 121 +-
 libvirt/templates/bin/_libvirt.sh.tpl | 8 +
 libvirt/templates/daemonset-libvirt.yaml | 5 +
 libvirt/values.yaml | 2 +-
 magnum/templates/job-db-drop.yaml | 70 +-
 magnum/templates/service-ingress-api.yaml | 18 +-
 magnum/values.yaml | 16 +-
 mariadb/templates/bin/_start.sh.tpl | 6 +
 mariadb/values.yaml | 7 +-
 memcached/values.yaml | 4 +-
 mistral/requirements.yaml | 1 -
 mistral/templates/job-db-drop.yaml | 70 +-
 mistral/templates/service-ingress-api.yaml | 18 +-
 mistral/values.yaml | 14 +-
 mongodb/values.yaml | 2 +-
 neutron/templates/bin/_db-sync.sh.tpl | 2 +-
 .../templates/bin/_neutron-dhcp-agent.sh.tpl | 2 +-
 .../templates/bin/_neutron-l3-agent.sh.tpl | 2 +-
 .../_neutron-linuxbridge-agent-init.sh.tpl | 1 -
 .../bin/_neutron-metadata-agent.sh.tpl | 6 +-
 .../_neutron-openvswitch-agent-init.sh.tpl | 9 +
 neutron/templates/bin/_neutron-server.sh.tpl | 5 +-
 .../bin/_neutron-sriov-agent-init.sh.tpl | 39 +
 .../templates/bin/_neutron-sriov-agent.sh.tpl | 24 +
 neutron/templates/configmap-bin.yaml | 4 +
 neutron/templates/configmap-etc.yaml | 206 +-
 neutron/templates/daemonset-dhcp-agent.yaml | 80 +-
 neutron/templates/daemonset-l3-agent.yaml | 80 +-
 neutron/templates/daemonset-lb-agent.yaml | 122 +-
 .../templates/daemonset-metadata-agent.yaml | 87 +-
 neutron/templates/daemonset-ovs-agent.yaml | 122 +-
 neutron/templates/daemonset-sriov-agent.yaml | 187 ++
 neutron/templates/deployment-server.yaml | 6 +-
 neutron/templates/etc/_rootwrap.conf.tpl | 34 -
 .../etc/rootwrap.d/_debug.filters.tpl | 18 -
 .../etc/rootwrap.d/_dhcp.filters.tpl | 34 -
 .../etc/rootwrap.d/_dibbler.filters.tpl | 16 -
 .../etc/rootwrap.d/_ebtables.filters.tpl | 11 -
 .../rootwrap.d/_ipset-firewall.filters.tpl | 12 -
 .../rootwrap.d/_iptables-firewall.filters.tpl | 27 -
 .../templates/etc/rootwrap.d/_l3.filters.tpl | 52 -
 .../_linuxbridge-plugin.filters.tpl | 28 -
 .../etc/rootwrap.d/_netns-cleanup.filters.tpl | 12 -
 .../_openvswitch-plugin.filters.tpl | 24 -
 neutron/templates/job-db-drop.yaml | 70 +-
 neutron/templates/job-db-sync.yaml | 2 +-
 .../templates/service-ingress-neutron.yaml | 17 +-
 neutron/values.yaml | 510 ++-
 .../bin/_nova-api-metadata-init.sh.tpl | 3 +-
 nova/templates/configmap-etc.yaml | 8 +-
 nova/templates/daemonset-compute.yaml | 10 +-
 nova/templates/etc/_rootwrap.conf.tpl | 2 +-
 nova/templates/ingress-novncproxy.yaml | 20 +
 nova/templates/job-db-drop.yaml | 104 +-
 nova/templates/service-ingress-metadata.yaml | 21 +-
 .../templates/service-ingress-novncproxy.yaml | 20 +
 nova/templates/service-ingress-osapi.yaml | 18 +-
 nova/templates/service-ingress-placement.yaml | 18 +-
 nova/values.yaml | 136 +-
 .../bin/_openvswitch-db-server.sh.tpl | 5 +-
 .../bin/_openvswitch-vswitchd.sh.tpl | 5 +-
 openvswitch/values.yaml | 2 +-
 postgresql/templates/statefulset.yaml | 2 +-
 postgresql/values.yaml | 2 +-
 .../templates/bin/_rabbitmq-liveness.sh.tpl | 2 +
 .../templates/bin/_rabbitmq-readiness.sh.tpl | 2 +
 rabbitmq/templates/bin/_rabbitmq-test.sh.tpl | 77 +
 rabbitmq/templates/configmap-bin.yaml | 2 +
 rabbitmq/templates/configmap-etc.yaml | 19 +-
 rabbitmq/templates/ingress-management.yaml | 25 +
 .../prometheus/exporter-deployment.yaml | 28 +-
 .../prometheus/exporter-service.yaml | 4 +-
 rabbitmq/templates/pod-test.yaml | 55 +
 rabbitmq/templates/service-discovery.yaml | 39 +
 .../templates/service-ingress-management.yaml | 25 +
 rabbitmq/templates/service.yaml | 2 +
 rabbitmq/templates/statefulset.yaml | 46 +-
 .../_to_rabbit_config.tpl} | 2 +-
 rabbitmq/values.yaml | 47 +-
 rally/templates/configmap-tasks.yaml | 22 +-
 rally/templates/configmap-test-templates.yaml | 34 +-
 rally/templates/etc/_rally.conf.tpl | 1010 ------
 .../_autoscaling-group.yaml.template.tpl | 46 -
 .../_autoscaling-policy.yaml.template.tpl | 17 -
 .../test-templates/_default.yaml.template.tpl | 1 -
 .../_random-strings.yaml.template.tpl | 13 -
 ...group-server-with-volume.yaml.template.tpl | 44 -
 ...ce-group-with-constraint.yaml.template.tpl | 21 -
 ...ource-group-with-outputs.yaml.template.tpl | 37 -
 .../_resource-group.yaml.template.tpl | 13 -
 .../_server-with-ports.yaml.template.tpl | 64 -
 .../_server-with-volume.yaml.template.tpl | 39 -
 ...toscaling-policy-inplace.yaml.template.tpl | 23 -
 ...dated-random-strings-add.yaml.template.tpl | 19 -
 ...ed-random-strings-delete.yaml.template.tpl | 11 -
 ...d-random-strings-replace.yaml.template.tpl | 19 -
 ...-resource-group-increase.yaml.template.tpl | 16 -
 ...ed-resource-group-reduce.yaml.template.tpl | 16 -
 rally/values.yaml | 486 ++-
 senlin/templates/job-db-drop.yaml | 70 +-
 senlin/templates/service-ingress-api.yaml | 18 +-
 senlin/values.yaml | 24 +-
 .../armada/multinode/armada-lma.yaml | 5 -
 tools/deployment/developer/ceph/120-glance.sh | 3 +-
 tools/deployment/developer/ceph/130-cinder.sh | 17 +
 .../deployment/developer/common/900-use-it.sh | 2 -
 .../deployment/developer/ldap/080-keystone.sh | 76 +
 tools/deployment/multinode/030-ceph.sh | 3 +
 tools/deployment/multinode/050-mariadb.sh | 2 +-
 .../multinode/131-libvirt-opencontrail.sh | 2 +-
 .../gate/files/heat-basic-bm-deployment.yaml | 29 +-
 .../gate/files/heat-basic-vm-deployment.yaml | 78 +-
 .../files/heat-public-net-deployment.yaml | 18 +-
 .../files/heat-subnet-pool-deployment.yaml | 12 +-
 tools/gate/playbooks/dev-deploy-ceph.yaml | 16 +-
 tools/gate/playbooks/dev-deploy-nfs.yaml | 11 +
 tools/gate/playbooks/osh-infra-build.yaml | 36 +
 .../playbooks/osh-infra-collect-logs.yaml | 58 +
 .../playbooks/osh-infra-deploy-docker.yaml | 43 +
 .../gate/playbooks/osh-infra-deploy-k8s.yaml | 44 +
 .../playbooks/osh-infra-upgrade-host.yaml | 39 +
 tools/gate/playbooks/vars.yaml | 64 +
 tools/images/ceph-config-helper/Dockerfile | 32 +-
 tools/images/ceph-config-helper/Makefile | 39 +
 tools/images/ceph-config-helper/README.rst | 3 +-
 tools/images/gate-utils/Makefile | 36 +
 tools/images/libvirt/Dockerfile.ubuntu.xenial | 2 +-
 tools/images/libvirt/Makefile | 47 +
 tools/images/libvirt/README.rst | 2 +-
 tools/images/openstack/newton/loci.sh | 26 +-
 tools/images/openstack/ocata/loci.sh | 26 +-
 tools/images/openvswitch/Makefile | 39 +
 tools/images/vbmc/Makefile | 36 +
 .../backends/networking/compute-kit-sr-iov.sh | 151 +
 .../backends/networking/linuxbridge.yaml | 3 +-
 .../backends/opencontrail/neutron.yaml | 3 +-
 .../overrides/backends/opencontrail/nova.yaml | 3 +-
 .../example/keystone_domain_config.yaml | 49 -
 .../keystone/ldap_domain_config.yaml | 46 +
 tools/overrides/releases/ocata/loci.yaml | 2 +
 254 files changed, 6524 insertions(+), 6322 deletions(-)
 rename cinder/templates/etc/_cinder_sudoers.tpl => ceilometer/templates/job-db-drop.yaml (59%)
 delete mode 100644 ceph/templates/templates/_admin.keyring.tpl
 delete mode 100644 ceph/templates/templates/_bootstrap.keyring.mds.tpl
 delete mode 100644 ceph/templates/templates/_bootstrap.keyring.mgr.tpl
 delete mode 100644 ceph/templates/templates/_bootstrap.keyring.osd.tpl
 delete mode 100644 ceph/templates/templates/_bootstrap.keyring.rgw.tpl
 delete mode 100644 ceph/templates/templates/_mon.keyring.tpl
 delete mode 100644 cinder/templates/etc/_rootwrap.conf.tpl
 delete mode 100644 cinder/templates/etc/rootwrap.d/_volume.filters.tpl
 rename neutron/templates/etc/_neutron_sudoers.tpl => congress/templates/job-db-drop.yaml (58%)
 create mode 100644 doc/source/devref/node-and-label-specific-configurations.rst
 delete mode 100644 glance/templates/etc/_swift-store.conf.tpl
 delete mode 100644 gnocchi/templates/etc/_wsgi-gnocchi.conf.tpl
 create mode 100644 gnocchi/templates/job-db-drop.yaml
 create mode 100644 helm-toolkit/templates/endpoints/_keystone_endpoint_scheme_lookup.tpl
 create mode 100644 helm-toolkit/templates/manifests/_job-db-drop-mysql.yaml.tpl
 create mode 100644 helm-toolkit/templates/manifests/_service-ingress.tpl
 create mode 100644 helm-toolkit/templates/utils/_values_template_renderer.tpl
 delete mode 100644 horizon/templates/etc/_horizon.conf.tpl
 delete mode 100644 horizon/templates/etc/_local_settings.tpl
 delete mode 100644 ironic/templates/etc/_nginx.conf.tpl
 delete mode 100644 ironic/templates/etc/_tftp-map-file.tpl
 create mode 100644 ironic/templates/job-db-drop.yaml
 create mode 100644 keystone/templates/bin/_domain-manage.py.tpl
 create mode 100644 keystone/templates/secret-ldap-tls.yaml
 create mode 100644 ldap/templates/bin/_bootstrap.sh.tpl
 create mode 100644 ldap/templates/configmap-bin.yaml
 create mode 100644 ldap/templates/configmap-etc.yaml
 create mode 100644 ldap/templates/job-bootstrap.yaml
 create mode 100644 neutron/templates/bin/_neutron-sriov-agent-init.sh.tpl
 create mode 100644 neutron/templates/bin/_neutron-sriov-agent.sh.tpl
 create mode 100644 neutron/templates/daemonset-sriov-agent.yaml
 delete mode 100644 neutron/templates/etc/_rootwrap.conf.tpl
 delete mode 100644 neutron/templates/etc/rootwrap.d/_debug.filters.tpl
 delete mode 100644 neutron/templates/etc/rootwrap.d/_dhcp.filters.tpl
 delete mode 100644 neutron/templates/etc/rootwrap.d/_dibbler.filters.tpl
 delete mode 100644 neutron/templates/etc/rootwrap.d/_ebtables.filters.tpl
 delete mode 100644 neutron/templates/etc/rootwrap.d/_ipset-firewall.filters.tpl
 delete mode 100644 neutron/templates/etc/rootwrap.d/_iptables-firewall.filters.tpl
 delete mode 100644 neutron/templates/etc/rootwrap.d/_l3.filters.tpl
 delete mode 100644 neutron/templates/etc/rootwrap.d/_linuxbridge-plugin.filters.tpl
 delete mode 100644 neutron/templates/etc/rootwrap.d/_netns-cleanup.filters.tpl
 delete mode 100644 neutron/templates/etc/rootwrap.d/_openvswitch-plugin.filters.tpl
 create mode 100644 nova/templates/ingress-novncproxy.yaml
 create mode 100644 nova/templates/service-ingress-novncproxy.yaml
 create mode 100644 rabbitmq/templates/bin/_rabbitmq-test.sh.tpl
 create mode 100644 rabbitmq/templates/ingress-management.yaml
 create mode 100644 rabbitmq/templates/pod-test.yaml
 create mode 100644 rabbitmq/templates/service-discovery.yaml
 create mode 100644 rabbitmq/templates/service-ingress-management.yaml
 rename rabbitmq/templates/{_helpers.tpl => utils/_to_rabbit_config.tpl} (95%)
 delete mode 100644 rally/templates/etc/_rally.conf.tpl
 delete mode 100644 rally/templates/tasks/test-templates/_autoscaling-group.yaml.template.tpl
 delete mode 100644 rally/templates/tasks/test-templates/_autoscaling-policy.yaml.template.tpl
 delete mode 100644 rally/templates/tasks/test-templates/_default.yaml.template.tpl
 delete mode 100644 rally/templates/tasks/test-templates/_random-strings.yaml.template.tpl
 delete mode 100644 rally/templates/tasks/test-templates/_resource-group-server-with-volume.yaml.template.tpl
 delete mode 100644 rally/templates/tasks/test-templates/_resource-group-with-constraint.yaml.template.tpl
 delete mode 100644 rally/templates/tasks/test-templates/_resource-group-with-outputs.yaml.template.tpl
 delete mode 100644 rally/templates/tasks/test-templates/_resource-group.yaml.template.tpl
 delete mode 100644 rally/templates/tasks/test-templates/_server-with-ports.yaml.template.tpl
 delete mode 100644 rally/templates/tasks/test-templates/_server-with-volume.yaml.template.tpl
 delete mode 100644 rally/templates/tasks/test-templates/_updated-autoscaling-policy-inplace.yaml.template.tpl
 delete mode 100644 rally/templates/tasks/test-templates/_updated-random-strings-add.yaml.template.tpl
 delete mode 100644 rally/templates/tasks/test-templates/_updated-random-strings-delete.yaml.template.tpl
 delete mode 100644 rally/templates/tasks/test-templates/_updated-random-strings-replace.yaml.template.tpl
 delete mode 100644 rally/templates/tasks/test-templates/_updated-resource-group-increase.yaml.template.tpl
 delete mode 100644 rally/templates/tasks/test-templates/_updated-resource-group-reduce.yaml.template.tpl
 create mode 100755 tools/deployment/developer/ldap/080-keystone.sh
 create mode 100644 tools/gate/playbooks/osh-infra-build.yaml
 create mode 100644 tools/gate/playbooks/osh-infra-collect-logs.yaml
 create mode 100644 tools/gate/playbooks/osh-infra-deploy-docker.yaml
 create mode 100644 tools/gate/playbooks/osh-infra-deploy-k8s.yaml
 create mode 100644 tools/gate/playbooks/osh-infra-upgrade-host.yaml
 create mode 100644 tools/gate/playbooks/vars.yaml
 create mode 100644 tools/images/ceph-config-helper/Makefile
 create mode 100644 tools/images/gate-utils/Makefile
 create mode 100644 tools/images/libvirt/Makefile
 create mode 100644 tools/images/openvswitch/Makefile
 create mode 100644 tools/images/vbmc/Makefile
 create mode 100755 tools/overrides/backends/networking/compute-kit-sr-iov.sh
 delete mode 100644 tools/overrides/example/keystone_domain_config.yaml
 create mode 100644 tools/overrides/keystone/ldap_domain_config.yaml

diff --git a/barbican/templates/job-db-drop.yaml b/barbican/templates/job-db-drop.yaml
index 41365ccbc6..3c6ec5d451 100644
--- a/barbican/templates/job-db-drop.yaml
+++ b/barbican/templates/job-db-drop.yaml
@@ -15,73 +15,8 @@ limitations under the License.
 */}}
 
 {{- if .Values.manifests.job_db_drop }}
-{{- $envAll := . }}
-{{- $dependencies := .Values.dependencies.static.db_drop }}
-
-{{- $randStringSuffix := randAlphaNum 5 | lower }}
-
-{{- $serviceAccountName := print "barbican-db-drop-" $randStringSuffix }}
-{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }}
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: {{ print "barbican-db-drop-" $randStringSuffix }}
-  annotations:
-    "helm.sh/hook": pre-delete
-    "helm.sh/hook-delete-policy": hook-succeeded
-spec:
-  template:
-    metadata:
-      labels:
-{{ tuple $envAll "barbican" "db-drop" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }}
-    spec:
-      serviceAccountName: {{ $serviceAccountName }}
-      restartPolicy: OnFailure
-      nodeSelector:
-        {{ .Values.labels.job.node_selector_key }}: {{ .Values.labels.job.node_selector_value }}
-      initContainers:
-{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }}
-      containers:
-        - name: barbican-db-drop
-          image: {{ .Values.images.tags.db_drop }}
-          imagePullPolicy: {{ .Values.images.pull_policy }}
-{{ tuple $envAll $envAll.Values.pod.resources.jobs.db_drop | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }}
-          env:
-            - name: ROOT_DB_CONNECTION
-              valueFrom:
-                secretKeyRef:
-                  name: {{ .Values.secrets.oslo_db.admin }}
-                  key: DB_CONNECTION
-            - name: OPENSTACK_CONFIG_FILE
-              value: /etc/barbican/barbican.conf
-            - name: OPENSTACK_CONFIG_DB_SECTION
-              value: DEFAULT
-            - name: OPENSTACK_CONFIG_DB_KEY
-              value: sql_connection
-          command:
-            - /tmp/db-drop.py
-          volumeMounts:
-            - name: barbican-etc
-              mountPath: /etc/barbican
-            - name: barbican-bin
-              mountPath: /tmp/db-drop.py
-              subPath: db-drop.py
-              readOnly: true
-            - name: barbican-conf
-              mountPath: /etc/barbican/barbican.conf
-              subPath: barbican.conf
-              readOnly: true
-      volumes:
-        - name: barbican-etc
-          emptyDir: {}
-        - name: barbican-conf
-          configMap:
-            name: barbican-etc
-            defaultMode: 0444
-        - name: barbican-bin
-          configMap:
-            name: barbican-bin
-            defaultMode: 0555
-
+{{- $serviceName := "barbican" -}}
+{{- $dbToDrop := dict "adminSecret" .Values.secrets.oslo_db.admin "configFile" (printf "/etc/%s/%s.conf" $serviceName $serviceName ) "configDbSection" "DEFAULT" "configDbKey" "sql_connection" -}}
+{{- $dbDropJob := dict "envAll" . "serviceName" $serviceName "dbToDrop" $dbToDrop -}}
+{{ $dbDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" }}
 {{- end }}
diff --git a/barbican/templates/service-ingress-api.yaml b/barbican/templates/service-ingress-api.yaml
index c7001cd42f..76c406619c 100644
--- a/barbican/templates/service-ingress-api.yaml
+++ b/barbican/templates/service-ingress-api.yaml
@@ -14,19 +14,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 */}}
 
-{{- if .Values.manifests.service_ingress_api }}
-{{- $envAll := . }}
-{{- if .Values.network.api.ingress.public }}
----
-apiVersion: v1
-kind: Service
-metadata:
-  name: {{ tuple "key-manager" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }}
-spec:
-  ports:
-  - name: http
-    port: 80
-  selector:
-    app: ingress-api
-{{- end }}
+{{- if and .Values.manifests.service_ingress_api .Values.network.api.ingress.public }}
+{{- $serviceIngressOpts := dict "envAll" . "backendServiceType" "key-manager" -}}
+{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }}
 {{- end }}
diff --git a/barbican/values.yaml b/barbican/values.yaml
index 5afedb3b42..ca8b0c43bc 100644
--- a/barbican/values.yaml
+++ b/barbican/values.yaml
@@ -26,7 +26,7 @@ release_group: null
 images:
   tags:
     bootstrap: docker.io/openstackhelm/heat:newton
-    dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1
+    dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0
     scripted_test: docker.io/openstackhelm/heat:newton
     db_init: docker.io/openstackhelm/heat:newton
     barbican_db_sync: docker.io/openstackhelm/barbican:newton
@@ -43,11 +43,11 @@ pod:
     barbican:
       uid: 42424
   affinity:
-      anti:
-        type:
-          default: preferredDuringSchedulingIgnoredDuringExecution
-        topologyKey:
-          default: kubernetes.io/hostname
+    anti:
+      type:
+        default: preferredDuringSchedulingIgnoredDuringExecution
+      topologyKey:
+        default: kubernetes.io/hostname
   mounts:
     barbican_api:
       init_container: null
@@ -149,8 +149,10 @@ network:
   api:
     ingress:
       public: true
+      classes:
+        namespace: "nginx"
+        cluster: "nginx-cluster"
       annotations:
-        kubernetes.io/ingress.class: "nginx"
         nginx.ingress.kubernetes.io/rewrite-target: /
     external_policy_local: false
     node_port:
diff --git a/cinder/templates/etc/_cinder_sudoers.tpl b/ceilometer/templates/job-db-drop.yaml
similarity index 59%
rename from cinder/templates/etc/_cinder_sudoers.tpl
rename to ceilometer/templates/job-db-drop.yaml
index 2b822ab2d2..865a2ab0fd 100644
--- a/cinder/templates/etc/_cinder_sudoers.tpl
+++ b/ceilometer/templates/job-db-drop.yaml
@@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 */}}
 
-# This sudoers file supports rootwrap for both Kolla and LOCI Images.
-Defaults !requiretty
-Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/var/lib/openstack/bin:/var/lib/kolla/venv/bin"
-cinder ALL = (root) NOPASSWD: /var/lib/kolla/venv/bin/cinder-rootwrap /etc/cinder/rootwrap.conf *, /var/lib/openstack/bin/cinder-rootwrap /etc/cinder/rootwrap.conf *
+{{- if .Values.manifests.job_db_drop }}
+{{- $dbDropJob := dict "envAll" . "serviceName" "ceilometer" -}}
+{{ $dbDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" }}
+{{- end }}
diff --git a/ceilometer/templates/service-ingress-api.yaml b/ceilometer/templates/service-ingress-api.yaml
index 03f99f18a0..559e97d5be 100644
--- a/ceilometer/templates/service-ingress-api.yaml
+++ b/ceilometer/templates/service-ingress-api.yaml
@@ -14,19 +14,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 */}}
 
-{{- if .Values.manifests.service_ingress_api }}
-{{- $envAll := . }}
-{{- if .Values.network.api.ingress.public }}
----
-apiVersion: v1
-kind: Service
-metadata:
-  name: {{ tuple "metering" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }}
| include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - name: http - port: 80 - selector: - app: ingress-api -{{- end }} +{{- if and .Values.manifests.service_ingress_api .Values.network.api.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . "backendServiceType" "metering" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/ceilometer/values.yaml b/ceilometer/values.yaml index e14819974a..945c4cf10d 100644 --- a/ceilometer/values.yaml +++ b/ceilometer/values.yaml @@ -55,15 +55,17 @@ images: ceilometer_collector: quay.io/larryrensing/ubuntu-source-ceilometer-collector:3.0.3 ceilometer_compute: quay.io/larryrensing/ubuntu-source-ceilometer-compute:3.0.3 ceilometer_notification: quay.io/larryrensing/ubuntu-source-ceilometer-notification:3.0.3 - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: "IfNotPresent" network: api: ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / port: 8777 node_port: @@ -1627,7 +1629,7 @@ bootstrap: script: | openstack token issue -# typically overriden by environmental +# typically overridden by environmental # values, but should include all endpoints # required by this chart endpoints: @@ -1937,6 +1939,7 @@ manifests: deployment_notification: true ingress_api: true job_bootstrap: true + job_db_drop: false job_db_init: true job_db_init_mongodb: true job_db_sync: true diff --git a/ceph/templates/bin/mgr/_start.sh.tpl b/ceph/templates/bin/mgr/_start.sh.tpl index 6fc7686ca0..be622ac317 100644 --- a/ceph/templates/bin/mgr/_start.sh.tpl +++ b/ceph/templates/bin/mgr/_start.sh.tpl @@ -17,7 +17,8 @@ if [ ${CEPH_GET_ADMIN_KEY} -eq 1 ]; then fi fi -# Check to see if our MGR has been initialized +# Create a MGR keyring +rm -rf $MGR_KEYRING if [ ! -e "$MGR_KEYRING" ]; then # Create ceph-mgr key timeout 10 ceph --cluster "${CLUSTER}" auth get-or-create mgr."${MGR_NAME}" mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o "$MGR_KEYRING" diff --git a/ceph/templates/bin/mon/_start.sh.tpl b/ceph/templates/bin/mon/_start.sh.tpl index 451eeb26e9..24b6ed8e21 100644 --- a/ceph/templates/bin/mon/_start.sh.tpl +++ b/ceph/templates/bin/mon/_start.sh.tpl @@ -68,9 +68,11 @@ get_mon_config # If we don't have a monitor keyring, this is a new monitor if [ ! -e "${MON_DATA_DIR}/keyring" ]; then - if [ ! -e ${MON_KEYRING} ]; then - echo "ERROR- ${MON_KEYRING} must exist. You can extract it from your current monitor by running 'ceph auth get mon. -o ${MON_KEYRING}' or use a KV Store" + if [ ! -e ${MON_KEYRING}.seed ]; then + echo "ERROR- ${MON_KEYRING}.seed must exist. You can extract it from your current monitor by running 'ceph auth get mon. -o ${MON_KEYRING}' or use a KV Store" exit 1 + else + cp -vf ${MON_KEYRING}.seed ${MON_KEYRING} fi if [ ! -e ${MONMAP} ]; then diff --git a/ceph/templates/configmap-templates.yaml b/ceph/templates/configmap-templates.yaml index c4bc509fee..aa96d8002d 100644 --- a/ceph/templates/configmap-templates.yaml +++ b/ceph/templates/configmap-templates.yaml @@ -23,15 +23,15 @@ metadata: name: ceph-templates data: admin.keyring: | -{{ tuple "templates/_admin.keyring.tpl" . 
| include "helm-toolkit.utils.template" | indent 4 }} +{{ .Values.conf.templates.keyring.admin | indent 4 }} + mon.keyring: | +{{ .Values.conf.templates.keyring.mon | indent 4 }} bootstrap.keyring.mds: | -{{ tuple "templates/_bootstrap.keyring.mds.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} +{{ .Values.conf.templates.keyring.bootstrap.mds | indent 4 }} bootstrap.keyring.mgr: | -{{ tuple "templates/_bootstrap.keyring.mgr.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} +{{ .Values.conf.templates.keyring.bootstrap.mgr | indent 4 }} bootstrap.keyring.osd: | -{{ tuple "templates/_bootstrap.keyring.osd.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} +{{ .Values.conf.templates.keyring.bootstrap.osd | indent 4 }} bootstrap.keyring.rgw: | -{{ tuple "templates/_bootstrap.keyring.rgw.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} - mon.keyring: | -{{ tuple "templates/_mon.keyring.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} +{{ .Values.conf.templates.keyring.bootstrap.rgw | indent 4 }} {{- end }} diff --git a/ceph/templates/daemonset-mon.yaml b/ceph/templates/daemonset-mon.yaml index 1c31632dc5..a0354cd4e2 100644 --- a/ceph/templates/daemonset-mon.yaml +++ b/ceph/templates/daemonset-mon.yaml @@ -65,7 +65,7 @@ spec: initContainers: {{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }} - name: ceph-init-dirs - image: {{ .Values.images.tags.ceph_daemon }} + image: {{ .Values.images.tags.ceph_mon }} imagePullPolicy: {{ .Values.images.pull_policy }} command: - /tmp/init-dirs.sh @@ -85,7 +85,7 @@ spec: readOnly: false containers: - name: ceph-mon - image: {{ .Values.images.tags.ceph_daemon }} + image: {{ .Values.images.tags.ceph_mon }} imagePullPolicy: {{ .Values.images.pull_policy }} {{ tuple $envAll $envAll.Values.pod.resources.mon | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} env: @@ -130,8 +130,8 @@ spec: command: - /tmp/mon-check.sh - liveness - initialDelaySeconds: 60 - periodSeconds: 60 + initialDelaySeconds: 360 + periodSeconds: 180 readinessProbe: exec: command: @@ -161,21 +161,21 @@ spec: subPath: ceph.client.admin.keyring readOnly: true - name: ceph-mon-keyring - mountPath: /etc/ceph/ceph.mon.keyring + mountPath: /etc/ceph/ceph.mon.keyring.seed subPath: ceph.mon.keyring - readOnly: false + readOnly: true - name: ceph-bootstrap-osd-keyring mountPath: /var/lib/ceph/bootstrap-osd/ceph.keyring subPath: ceph.keyring - readOnly: false + readOnly: true - name: ceph-bootstrap-mds-keyring mountPath: /var/lib/ceph/bootstrap-mds/ceph.keyring subPath: ceph.keyring - readOnly: false + readOnly: true - name: ceph-bootstrap-rgw-keyring mountPath: /var/lib/ceph/bootstrap-rgw/ceph.keyring subPath: ceph.keyring - readOnly: false + readOnly: true - name: pod-var-lib-ceph mountPath: /var/lib/ceph readOnly: false diff --git a/ceph/templates/daemonset-osd.yaml b/ceph/templates/daemonset-osd.yaml index 94832f48ed..33e174adc5 100644 --- a/ceph/templates/daemonset-osd.yaml +++ b/ceph/templates/daemonset-osd.yaml @@ -42,7 +42,7 @@ spec: initContainers: {{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }} - name: ceph-init-dirs - image: {{ .Values.images.tags.ceph_daemon }} + image: {{ .Values.images.tags.ceph_osd }} imagePullPolicy: {{ .Values.images.pull_policy }} command: - /tmp/init-dirs.sh @@ -71,7 +71,7 @@ spec: mountPath: /run readOnly: false - name: osd-init - image: {{ 
-          image: {{ .Values.images.tags.ceph_daemon }}
+          image: {{ .Values.images.tags.ceph_osd }}
           imagePullPolicy: {{ .Values.images.pull_policy }}
{{ tuple $envAll $envAll.Values.pod.resources.osd | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }}
           securityContext:
@@ -126,7 +126,7 @@ spec:
           readOnly: false
       containers:
         - name: osd-pod
-          image: {{ .Values.images.tags.ceph_daemon }}
+          image: {{ .Values.images.tags.ceph_osd }}
           imagePullPolicy: {{ .Values.images.pull_policy }}
{{ tuple $envAll $envAll.Values.pod.resources.osd | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }}
           securityContext:
diff --git a/ceph/templates/deployment-mds.yaml b/ceph/templates/deployment-mds.yaml
index 832d9a05f1..5553f1262a 100644
--- a/ceph/templates/deployment-mds.yaml
+++ b/ceph/templates/deployment-mds.yaml
@@ -41,7 +41,7 @@ spec:
       initContainers:
{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }}
         - name: ceph-init-dirs
-          image: {{ .Values.images.tags.ceph_daemon }}
+          image: {{ .Values.images.tags.ceph_mds }}
           imagePullPolicy: {{ .Values.images.pull_policy }}
           command:
             - /tmp/init-dirs.sh
@@ -61,7 +61,7 @@ spec:
           readOnly: false
       containers:
         - name: ceph-mds
-          image: {{ .Values.images.tags.ceph_daemon }}
+          image: {{ .Values.images.tags.ceph_mds }}
           imagePullPolicy: {{ .Values.images.pull_policy }}
{{ tuple $envAll $envAll.Values.pod.resources.mds | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }}
           command:
diff --git a/ceph/templates/deployment-mgr.yaml b/ceph/templates/deployment-mgr.yaml
index 8f52a7aa6f..d329ce8b49 100644
--- a/ceph/templates/deployment-mgr.yaml
+++ b/ceph/templates/deployment-mgr.yaml
@@ -44,7 +44,7 @@ spec:
       initContainers:
{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }}
         - name: ceph-init-dirs
-          image: {{ .Values.images.tags.ceph_daemon }}
+          image: {{ .Values.images.tags.ceph_mgr }}
           imagePullPolicy: {{ .Values.images.pull_policy }}
           command:
             - /tmp/init-dirs.sh
@@ -66,7 +66,7 @@ spec:
           mountPath: /etc/ceph
       containers:
         - name: ceph-mgr
-          image: {{ .Values.images.tags.ceph_daemon }}
+          image: {{ .Values.images.tags.ceph_mgr }}
           imagePullPolicy: {{ .Values.images.pull_policy }}
{{ tuple $envAll $envAll.Values.pod.resources.mgr | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }}
           env:
diff --git a/ceph/templates/deployment-moncheck.yaml b/ceph/templates/deployment-moncheck.yaml
index b27d601810..8c739608ff 100644
--- a/ceph/templates/deployment-moncheck.yaml
+++ b/ceph/templates/deployment-moncheck.yaml
@@ -39,25 +39,9 @@ spec:
         {{ .Values.labels.mon.node_selector_key }}: {{ .Values.labels.mon.node_selector_value }}
       initContainers:
{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }}
-        - name: ceph-init-dirs
-          image: {{ .Values.images.tags.ceph_daemon }}
-          imagePullPolicy: {{ .Values.images.pull_policy }}
-          command:
-            - /tmp/init-dirs.sh
-          env:
-            - name: CLUSTER
-              value: "ceph"
-          volumeMounts:
-            - name: ceph-bin
-              mountPath: /tmp/init-dirs.sh
-              subPath: init-dirs.sh
-              readOnly: true
-            - name: pod-var-lib-ceph
-              mountPath: /var/lib/ceph
-              readOnly: false
       containers:
         - name: ceph-mon
-          image: {{ .Values.images.tags.ceph_daemon }}
+          image: {{ .Values.images.tags.ceph_mon_check }}
           imagePullPolicy: {{ .Values.images.pull_policy }}
{{ tuple $envAll $envAll.Values.pod.resources.moncheck | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }}
           env:
diff --git a/ceph/templates/deployment-rgw.yaml b/ceph/templates/deployment-rgw.yaml
index 63de3475e6..db0f9926f3 100644
--- a/ceph/templates/deployment-rgw.yaml
+++ b/ceph/templates/deployment-rgw.yaml
@@ -40,7 +40,7 @@ spec:
       initContainers:
{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }}
         - name: ceph-init-dirs
-          image: {{ .Values.images.tags.ceph_daemon }}
+          image: {{ .Values.images.tags.ceph_rgw }}
           imagePullPolicy: {{ .Values.images.pull_policy }}
           command:
             - /tmp/init-dirs.sh
@@ -60,7 +60,7 @@ spec:
           readOnly: false
 {{ if .Values.conf.rgw_ks.enabled }}
         - name: ceph-rgw-ks-init
-          image: {{ .Values.images.tags.ceph_daemon }}
+          image: {{ .Values.images.tags.ceph_rgw }}
           imagePullPolicy: {{ .Values.images.pull_policy }}
{{ tuple $envAll $envAll.Values.pod.resources.rgw | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }}
           env:
@@ -94,7 +94,7 @@ spec:
 {{ end }}
       containers:
         - name: ceph-rgw
-          image: {{ .Values.images.tags.ceph_daemon }}
+          image: {{ .Values.images.tags.ceph_rgw }}
           imagePullPolicy: {{ .Values.images.pull_policy }}
{{ tuple $envAll $envAll.Values.pod.resources.rgw | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }}
           env:
diff --git a/ceph/templates/job-rbd-pool.yaml b/ceph/templates/job-rbd-pool.yaml
index 342a17f188..c8205220ae 100644
--- a/ceph/templates/job-rbd-pool.yaml
+++ b/ceph/templates/job-rbd-pool.yaml
@@ -42,7 +42,7 @@ spec:
{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }}
       containers:
         - name: ceph-rbd-pool
-          image: {{ .Values.images.tags.ceph_daemon }}
+          image: {{ .Values.images.tags.ceph_rbd_pool }}
           imagePullPolicy: {{ .Values.images.pull_policy }}
{{ tuple $envAll $envAll.Values.pod.resources.mgr | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }}
           env:
diff --git a/ceph/templates/templates/_admin.keyring.tpl b/ceph/templates/templates/_admin.keyring.tpl
deleted file mode 100644
index e012ebe858..0000000000
--- a/ceph/templates/templates/_admin.keyring.tpl
+++ /dev/null
@@ -1,7 +0,0 @@
-[client.admin]
-  key = {{"{{"}} key {{"}}"}}
-  auid = 0
-  caps mds = "allow"
-  caps mon = "allow *"
-  caps osd = "allow *"
-  caps mgr = "allow *"
diff --git a/ceph/templates/templates/_bootstrap.keyring.mds.tpl b/ceph/templates/templates/_bootstrap.keyring.mds.tpl
deleted file mode 100644
index c52fd6397a..0000000000
--- a/ceph/templates/templates/_bootstrap.keyring.mds.tpl
+++ /dev/null
@@ -1,3 +0,0 @@
-[client.bootstrap-mds]
-  key = {{"{{"}} key {{"}}"}}
-  caps mon = "allow profile bootstrap-mds"
diff --git a/ceph/templates/templates/_bootstrap.keyring.mgr.tpl b/ceph/templates/templates/_bootstrap.keyring.mgr.tpl
deleted file mode 100644
index b48ffcc462..0000000000
--- a/ceph/templates/templates/_bootstrap.keyring.mgr.tpl
+++ /dev/null
@@ -1,3 +0,0 @@
-[client.bootstrap-mgr]
-  key = {{"{{"}} key {{"}}"}}
-  caps mgr = "allow profile bootstrap-mgr"
diff --git a/ceph/templates/templates/_bootstrap.keyring.osd.tpl b/ceph/templates/templates/_bootstrap.keyring.osd.tpl
deleted file mode 100644
index c5fe618d99..0000000000
--- a/ceph/templates/templates/_bootstrap.keyring.osd.tpl
+++ /dev/null
@@ -1,3 +0,0 @@
-[client.bootstrap-osd]
-  key = {{"{{"}} key {{"}}"}}
-  caps mon = "allow profile bootstrap-osd"
diff --git a/ceph/templates/templates/_bootstrap.keyring.rgw.tpl b/ceph/templates/templates/_bootstrap.keyring.rgw.tpl
deleted file mode 100644
index 1f2a58d6ab..0000000000
--- a/ceph/templates/templates/_bootstrap.keyring.rgw.tpl
+++ /dev/null
@@ -1,3 +0,0 @@
-[client.bootstrap-rgw]
-  key = {{"{{"}} key {{"}}"}}
-  caps mon = "allow profile bootstrap-rgw"
diff --git a/ceph/templates/templates/_mon.keyring.tpl b/ceph/templates/templates/_mon.keyring.tpl
deleted file mode 100644
index f9681f2d90..0000000000
--- a/ceph/templates/templates/_mon.keyring.tpl
+++ /dev/null
@@ -1,3 +0,0 @@
-[mon.]
-  key = {{"{{"}} key {{"}}"}}
-  caps mon = "allow *"
diff --git a/ceph/values.yaml b/ceph/values.yaml
index 68cb6daf65..22177813af 100644
--- a/ceph/values.yaml
+++ b/ceph/values.yaml
@@ -21,17 +21,23 @@ deployment:
   rgw_keystone_user_and_endpoints: false
 
 images:
+  pull_policy: IfNotPresent
   tags:
-    ks_user: docker.io/openstackhelm/heat:newton
-    ks_service: docker.io/openstackhelm/heat:newton
-    ks_endpoints: docker.io/openstackhelm/heat:newton
-    ceph_bootstrap: docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04
-    dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1
-    ceph_daemon: docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04
-    ceph_config_helper: docker.io/port/ceph-config-helper:v1.7.5
-    ceph_rbd_provisioner: quay.io/external_storage/rbd-provisioner:v0.1.1
-    ceph_cephfs_provisioner: quay.io/external_storage/cephfs-provisioner:v0.1.1
-  pull_policy: "IfNotPresent"
+    ceph_bootstrap: 'docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04'
+    ceph_cephfs_provisioner: 'quay.io/external_storage/cephfs-provisioner:v0.1.1'
+    ceph_config_helper: 'docker.io/port/ceph-config-helper:v1.9.6'
+    ceph_mds: 'docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04'
+    ceph_mgr: 'docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04'
+    ceph_mon: 'docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04'
+    ceph_mon_check: 'docker.io/port/ceph-config-helper:v1.9.6'
+    ceph_osd: 'docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04'
+    ceph_rbd_pool: 'docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04'
+    ceph_rbd_provisioner: 'quay.io/external_storage/rbd-provisioner:v0.1.1'
+    ceph_rgw: 'docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04'
+    dep_check: 'quay.io/stackanetes/kubernetes-entrypoint:v0.3.0'
+    ks_endpoints: 'docker.io/openstackhelm/heat:newton'
+    ks_service: 'docker.io/openstackhelm/heat:newton'
+    ks_user: 'docker.io/openstackhelm/heat:newton'
 
 labels:
   job:
@@ -187,6 +193,37 @@ network:
   port:
     mgr: 7000
 
 conf:
+  templates:
+    keyring:
+      admin: |
+        [client.admin]
+          key = {{ key }}
+          auid = 0
+          caps mds = "allow"
+          caps mon = "allow *"
+          caps osd = "allow *"
+          caps mgr = "allow *"
+      mon: |
+        [mon.]
+ key = {{ key }} + caps mon = "allow *" + bootstrap: + mds: | + [client.bootstrap-mds] + key = {{ key }} + caps mon = "allow profile bootstrap-mds" + mgr: | + [client.bootstrap-mgr] + key = {{ key }} + caps mgr = "allow profile bootstrap-mgr" + osd: | + [client.bootstrap-osd] + key = {{ key }} + caps mon = "allow profile bootstrap-osd" + rgw: | + [client.bootstrap-rgw] + key = {{ key }} + caps mon = "allow profile bootstrap-rgw" features: mds: true rgw: true diff --git a/cinder/templates/bin/_backup-storage-init.sh.tpl b/cinder/templates/bin/_backup-storage-init.sh.tpl index 3d8214b542..2c619d060f 100644 --- a/cinder/templates/bin/_backup-storage-init.sh.tpl +++ b/cinder/templates/bin/_backup-storage-init.sh.tpl @@ -38,6 +38,8 @@ elif [ "x$STORAGE_BACKEND" == "xcinder.backup.drivers.ceph" ]; then if [[ ${test_luminous} -gt 0 ]]; then ceph osd pool application enable $1 $3 fi + ceph osd pool set $1 size ${RBD_POOL_REPLICATION} + ceph osd pool set $1 crush_rule "${RBD_POOL_CRUSH_RULE}" } ensure_pool ${RBD_POOL_NAME} ${RBD_POOL_CHUNK_SIZE} "cinder-backup" diff --git a/cinder/templates/bin/_storage-init.sh.tpl b/cinder/templates/bin/_storage-init.sh.tpl index 76a4de46bb..ed3ec0a8d1 100644 --- a/cinder/templates/bin/_storage-init.sh.tpl +++ b/cinder/templates/bin/_storage-init.sh.tpl @@ -35,6 +35,8 @@ if [ "x$STORAGE_BACKEND" == "xcinder.volume.drivers.rbd.RBDDriver" ]; then if [[ ${test_luminous} -gt 0 ]]; then ceph osd pool application enable $1 $3 fi + ceph osd pool set $1 size ${RBD_POOL_REPLICATION} + ceph osd pool set $1 crush_rule "${RBD_POOL_CRUSH_RULE}" } ensure_pool ${RBD_POOL_NAME} ${RBD_POOL_CHUNK_SIZE} "cinder-volume" diff --git a/cinder/templates/configmap-etc.yaml b/cinder/templates/configmap-etc.yaml index c889bbbaca..b3f8a3e2b3 100644 --- a/cinder/templates/configmap-etc.yaml +++ b/cinder/templates/configmap-etc.yaml @@ -110,9 +110,12 @@ data: policy.json: | {{ toJson .Values.conf.policy | indent 4 }} cinder_sudoers: | -{{- tuple .Values.conf.neutron_sudoers "etc/_cinder_sudoers.tpl" . | include "helm-toolkit.utils.configmap_templater" }} +{{ $envAll.Values.conf.cinder_sudoers | indent 4 }} rootwrap.conf: | -{{- tuple .Values.conf.rootwrap "etc/_rootwrap.conf.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - volume.filters: | -{{- tuple .Values.conf.rootwrap_filters.volume "etc/rootwrap.d/_volume.filters.tpl" . 
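With the ``templates/templates/_*.keyring.tpl`` files deleted above, the keyring templates now live under ``conf.templates.keyring`` in ``ceph/values.yaml`` and can be overridden like any other value. A hedged sketch following the structure this hunk adds (the trimmed capability set is purely illustrative; ``{{ key }}`` is substituted by the chart at deploy time):

.. code-block:: yaml

    # ceph chart values override: supply a custom admin keyring template.
    conf:
      templates:
        keyring:
          admin: |
            [client.admin]
              key = {{ key }}
              caps mon = "allow *"
              caps osd = "allow *"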
| include "helm-toolkit.utils.configmap_templater" }} +{{ $envAll.Values.conf.rootwrap | indent 4 }} +{{- range $key, $value := $envAll.Values.conf.rootwrap_filters }} +{{- $filePrefix := replace "_" "-" $key }} + {{ printf "%s.filters" $filePrefix }}: | +{{ $value.content | indent 4 }} +{{- end }} {{- end }} diff --git a/cinder/templates/deployment-volume.yaml b/cinder/templates/deployment-volume.yaml index 65bd2a2b46..d721be97c3 100644 --- a/cinder/templates/deployment-volume.yaml +++ b/cinder/templates/deployment-volume.yaml @@ -137,10 +137,16 @@ spec: mountPath: /etc/cinder/rootwrap.conf subPath: rootwrap.conf readOnly: true + {{- range $key, $value := $envAll.Values.conf.rootwrap_filters }} + {{- if ( has "volume" $value.pods ) }} + {{- $filePrefix := replace "_" "-" $key }} + {{- $rootwrapFile := printf "/etc/cinder/rootwrap.d/%s.filters" $filePrefix }} - name: cinder-etc - mountPath: /etc/cinder/rootwrap.d/volume.filters - subPath: volume.filters + mountPath: {{ $rootwrapFile }} + subPath: {{ base $rootwrapFile }} readOnly: true + {{- end }} + {{- end }} {{ if $mounts_cinder_volume.volumeMounts }}{{ toYaml $mounts_cinder_volume.volumeMounts | indent 12 }}{{ end }} volumes: - name: cinder-bin diff --git a/cinder/templates/etc/_rootwrap.conf.tpl b/cinder/templates/etc/_rootwrap.conf.tpl deleted file mode 100644 index 2d88d689e4..0000000000 --- a/cinder/templates/etc/_rootwrap.conf.tpl +++ /dev/null @@ -1,27 +0,0 @@ -# Configuration for cinder-rootwrap -# This file should be owned by (and only-writeable by) the root user - -[DEFAULT] -# List of directories to load filter definitions from (separated by ','). -# These directories MUST all be only writeable by root ! -filters_path=/etc/cinder/rootwrap.d - -# List of directories to search executables in, in case filters do not -# explicitely specify a full path (separated by ',') -# If not specified, defaults to system PATH environment variable. -# These directories MUST all be only writeable by root ! -exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin,/usr/local/sbin,/var/lib/openstack/bin,/var/lib/kolla/venv/bin - -# Enable logging to syslog -# Default value is False -use_syslog=False - -# Which syslog facility to use. -# Valid values include auth, authpriv, syslog, local0, local1... -# Default value is 'syslog' -syslog_log_facility=syslog - -# Which messages to log. -# INFO means log all usage -# ERROR means only log unsuccessful attempts -syslog_log_level=ERROR diff --git a/cinder/templates/etc/rootwrap.d/_volume.filters.tpl b/cinder/templates/etc/rootwrap.d/_volume.filters.tpl deleted file mode 100644 index f7810c46f7..0000000000 --- a/cinder/templates/etc/rootwrap.d/_volume.filters.tpl +++ /dev/null @@ -1,224 +0,0 @@ -# cinder-rootwrap command filters for volume nodes -# This file should be owned by (and only-writeable by) the root user - -[Filters] -# cinder/volume/iscsi.py: iscsi_helper '--op' ... 
-ietadm: CommandFilter, ietadm, root -tgtadm: CommandFilter, tgtadm, root -iscsictl: CommandFilter, iscsictl, root -tgt-admin: CommandFilter, tgt-admin, root -cinder-rtstool: CommandFilter, cinder-rtstool, root -scstadmin: CommandFilter, scstadmin, root - -# LVM related show commands -pvs: EnvFilter, env, root, LC_ALL=C, pvs -vgs: EnvFilter, env, root, LC_ALL=C, vgs -lvs: EnvFilter, env, root, LC_ALL=C, lvs -lvdisplay: EnvFilter, env, root, LC_ALL=C, lvdisplay - -# -LVM related show commands with suppress fd warnings -pvs_fdwarn: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, pvs -vgs_fdwarn: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, vgs -lvs_fdwarn: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvs -lvdisplay_fdwarn: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvdisplay - - -# -LVM related show commands conf var -pvs_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, pvs -vgs_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, vgs -lvs_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, lvs -lvdisplay_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, lvdisplay - -# -LVM conf var with suppress fd_warnings -pvs_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, pvs -vgs_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, vgs -lvs_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvs -lvdisplay_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvdisplay - -# os-brick library commands -# os_brick.privileged.run_as_root oslo.privsep context -# This line ties the superuser privs with the config files, context name, -# and (implicitly) the actual python code invoked. -privsep-rootwrap: RegExpFilter, privsep-helper, root, privsep-helper, --config-file, /etc/(?!\.\.).*, --privsep_context, os_brick.privileged.default, --privsep_sock_path, /tmp/.* -# The following and any cinder/brick/* entries should all be obsoleted -# by privsep, and may be removed once the os-brick version requirement -# is updated appropriately. -scsi_id: CommandFilter, /lib/udev/scsi_id, root -drbdadm: CommandFilter, drbdadm, root - -# cinder/brick/local_dev/lvm.py: 'vgcreate', vg_name, pv_list -vgcreate: CommandFilter, vgcreate, root - -# cinder/brick/local_dev/lvm.py: 'lvcreate', '-L', sizestr, '-n', volume_name,.. -# cinder/brick/local_dev/lvm.py: 'lvcreate', '-L', ... -lvcreate: EnvFilter, env, root, LC_ALL=C, lvcreate -lvcreate_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, lvcreate -lvcreate_fdwarn: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvcreate -lvcreate_lvmconf_fdwarn: EnvFilter, env, root, LVM_SYSTEM_DIR=, LVM_SUPPRESS_FD_WARNINGS=, LC_ALL=C, lvcreate - -# cinder/volume/driver.py: 'dd', 'if=%s' % srcstr, 'of=%s' % deststr,... -dd: CommandFilter, dd, root - -# cinder/volume/driver.py: 'lvremove', '-f', %s/%s % ... -lvremove: CommandFilter, lvremove, root - -# cinder/volume/driver.py: 'lvrename', '%(vg)s', '%(orig)s' '(new)s'... -lvrename: CommandFilter, lvrename, root - -# cinder/brick/local_dev/lvm.py: 'lvextend', '-L' '%(new_size)s', '%(lv_name)s' ... -# cinder/brick/local_dev/lvm.py: 'lvextend', '-L' '%(new_size)s', '%(thin_pool)s' ... 
-lvextend: EnvFilter, env, root, LC_ALL=C, lvextend -lvextend_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, lvextend -lvextend_fdwarn: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvextend -lvextend_lvmconf_fdwarn: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvextend - -# cinder/brick/local_dev/lvm.py: 'lvchange -a y -K ' -lvchange: CommandFilter, lvchange, root - -# cinder/brick/local_dev/lvm.py: 'lvconvert', '--merge', snapshot_name -lvconvert: CommandFilter, lvconvert, root - -# cinder/volume/driver.py: 'iscsiadm', '-m', 'discovery', '-t',... -# cinder/volume/driver.py: 'iscsiadm', '-m', 'node', '-T', ... -iscsiadm: CommandFilter, iscsiadm, root - -# cinder/volume/utils.py: utils.temporary_chown(path, 0) -chown: CommandFilter, chown, root - -# cinder/volume/utils.py: copy_volume(..., ionice='...') -ionice_1: ChainingRegExpFilter, ionice, root, ionice, -c[0-3], -n[0-7] -ionice_2: ChainingRegExpFilter, ionice, root, ionice, -c[0-3] - -# cinder/volume/utils.py: setup_blkio_cgroup() -cgcreate: CommandFilter, cgcreate, root -cgset: CommandFilter, cgset, root -cgexec: ChainingRegExpFilter, cgexec, root, cgexec, -g, blkio:\S+ - -# cinder/volume/driver.py -dmsetup: CommandFilter, dmsetup, root -ln: CommandFilter, ln, root - -# cinder/image/image_utils.py -qemu-img: EnvFilter, env, root, LC_ALL=C, qemu-img -qemu-img_convert: CommandFilter, qemu-img, root - -udevadm: CommandFilter, udevadm, root - -# cinder/volume/driver.py: utils.read_file_as_root() -cat: CommandFilter, cat, root - -# cinder/volume/nfs.py -stat: CommandFilter, stat, root -mount: CommandFilter, mount, root -df: CommandFilter, df, root -du: CommandFilter, du, root -truncate: CommandFilter, truncate, root -chmod: CommandFilter, chmod, root -rm: CommandFilter, rm, root - -# cinder/volume/drivers/remotefs.py -mkdir: CommandFilter, mkdir, root - -# cinder/volume/drivers/netapp/nfs.py: -netapp_nfs_find: RegExpFilter, find, root, find, ^[/]*([^/\0]+(/+)?)*$, -maxdepth, \d+, -name, img-cache.*, -amin, \+\d+ - -# cinder/volume/drivers/glusterfs.py -chgrp: CommandFilter, chgrp, root -umount: CommandFilter, umount, root -fallocate: CommandFilter, fallocate, root - -# cinder/volumes/drivers/hds/hds.py: -hus-cmd: CommandFilter, hus-cmd, root -hus-cmd_local: CommandFilter, /usr/local/bin/hus-cmd, root - -# cinder/volumes/drivers/hds/hnas_backend.py -ssc: CommandFilter, ssc, root - -# cinder/brick/initiator/connector.py: -ls: CommandFilter, ls, root -tee: CommandFilter, tee, root -multipath: CommandFilter, multipath, root -multipathd: CommandFilter, multipathd, root -systool: CommandFilter, systool, root - -# cinder/volume/drivers/block_device.py -blockdev: CommandFilter, blockdev, root - -# cinder/volume/drivers/ibm/gpfs.py -# cinder/volume/drivers/tintri.py -mv: CommandFilter, mv, root - -# cinder/volume/drivers/ibm/gpfs.py -cp: CommandFilter, cp, root -mmgetstate: CommandFilter, /usr/lpp/mmfs/bin/mmgetstate, root -mmclone: CommandFilter, /usr/lpp/mmfs/bin/mmclone, root -mmlsattr: CommandFilter, /usr/lpp/mmfs/bin/mmlsattr, root -mmchattr: CommandFilter, /usr/lpp/mmfs/bin/mmchattr, root -mmlsconfig: CommandFilter, /usr/lpp/mmfs/bin/mmlsconfig, root -mmlsfs: CommandFilter, /usr/lpp/mmfs/bin/mmlsfs, root -mmlspool: CommandFilter, /usr/lpp/mmfs/bin/mmlspool, root -mkfs: CommandFilter, mkfs, root -mmcrfileset: CommandFilter, /usr/lpp/mmfs/bin/mmcrfileset, root -mmlinkfileset: CommandFilter, /usr/lpp/mmfs/bin/mmlinkfileset, root -mmunlinkfileset: CommandFilter, 
/usr/lpp/mmfs/bin/mmunlinkfileset, root -mmdelfileset: CommandFilter, /usr/lpp/mmfs/bin/mmdelfileset, root -mmcrsnapshot: CommandFilter, /usr/lpp/mmfs/bin/mmcrsnapshot, root -mmdelsnapshot: CommandFilter, /usr/lpp/mmfs/bin/mmdelsnapshot, root - -# cinder/volume/drivers/ibm/gpfs.py -# cinder/volume/drivers/ibm/ibmnas.py -find_maxdepth_inum: RegExpFilter, find, root, find, ^[/]*([^/\0]+(/+)?)*$, -maxdepth, \d+, -ignore_readdir_race, -inum, \d+, -print0, -quit - -# cinder/brick/initiator/connector.py: -aoe-revalidate: CommandFilter, aoe-revalidate, root -aoe-discover: CommandFilter, aoe-discover, root -aoe-flush: CommandFilter, aoe-flush, root - -# cinder/brick/initiator/linuxscsi.py: -sg_scan: CommandFilter, sg_scan, root - -#cinder/backup/services/tsm.py -dsmc:CommandFilter,/usr/bin/dsmc,root - -# cinder/volume/drivers/hitachi/hbsd_horcm.py -raidqry: CommandFilter, raidqry, root -raidcom: CommandFilter, raidcom, root -pairsplit: CommandFilter, pairsplit, root -paircreate: CommandFilter, paircreate, root -pairdisplay: CommandFilter, pairdisplay, root -pairevtwait: CommandFilter, pairevtwait, root -horcmstart.sh: CommandFilter, horcmstart.sh, root -horcmshutdown.sh: CommandFilter, horcmshutdown.sh, root -horcmgr: EnvFilter, env, root, HORCMINST=, /etc/horcmgr - -# cinder/volume/drivers/hitachi/hbsd_snm2.py -auman: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auman -auluref: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auluref -auhgdef: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auhgdef -aufibre1: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/aufibre1 -auhgwwn: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auhgwwn -auhgmap: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auhgmap -autargetmap: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/autargetmap -aureplicationvvol: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/aureplicationvvol -auluadd: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auluadd -auludel: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auludel -auluchgsize: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auluchgsize -auchapuser: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auchapuser -autargetdef: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/autargetdef -autargetopt: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/autargetopt -autargetini: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/autargetini -auiscsi: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auiscsi -audppool: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/audppool -aureplicationlocal: 
EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/aureplicationlocal -aureplicationmon: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/aureplicationmon - -# cinder/volume/drivers/hgst.py -vgc-cluster: CommandFilter, vgc-cluster, root - -# cinder/volume/drivers/vzstorage.py -pstorage-mount: CommandFilter, pstorage-mount, root -pstorage: CommandFilter, pstorage, root -ploop: CommandFilter, ploop, root - -# initiator/connector.py: -drv_cfg: CommandFilter, /opt/emc/scaleio/sdc/bin/drv_cfg, root, /opt/emc/scaleio/sdc/bin/drv_cfg, --query_guid diff --git a/cinder/templates/job-backup-storage-init.yaml b/cinder/templates/job-backup-storage-init.yaml index d6d23e334e..9f23112050 100644 --- a/cinder/templates/job-backup-storage-init.yaml +++ b/cinder/templates/job-backup-storage-init.yaml @@ -102,8 +102,12 @@ spec: value: {{ .Values.conf.cinder.DEFAULT.backup_ceph_pool | quote }} - name: RBD_POOL_USER value: {{ .Values.conf.cinder.DEFAULT.backup_ceph_user | quote }} + - name: RBD_POOL_CRUSH_RULE + value: {{ .Values.conf.ceph.pools.backup.crush_rule | quote }} + - name: RBD_POOL_REPLICATION + value: {{ .Values.conf.ceph.pools.backup.replication | quote }} - name: RBD_POOL_CHUNK_SIZE - value: "8" + value: {{ .Values.conf.ceph.pools.backup.chunk_size | quote }} - name: RBD_POOL_SECRET value: {{ .Values.secrets.rbd.backup | quote }} {{ end }} diff --git a/cinder/templates/job-clean.yaml b/cinder/templates/job-clean.yaml index 0098e34171..005d353c1f 100644 --- a/cinder/templates/job-clean.yaml +++ b/cinder/templates/job-clean.yaml @@ -28,9 +28,6 @@ apiVersion: rbac.authorization.k8s.io/v1beta1 kind: Role metadata: name: {{ $serviceAccountName }} - annotations: - "helm.sh/hook": pre-delete - "helm.sh/hook-delete-policy": hook-succeeded rules: - apiGroups: - "" @@ -44,9 +41,6 @@ apiVersion: rbac.authorization.k8s.io/v1beta1 kind: RoleBinding metadata: name: {{ $serviceAccountName }} - annotations: - "helm.sh/hook": pre-delete - "helm.sh/hook-delete-policy": hook-succeeded roleRef: apiGroup: rbac.authorization.k8s.io kind: Role diff --git a/cinder/templates/job-db-drop.yaml b/cinder/templates/job-db-drop.yaml index 9ad3f36189..9e4f5e2c2a 100644 --- a/cinder/templates/job-db-drop.yaml +++ b/cinder/templates/job-db-drop.yaml @@ -15,72 +15,6 @@ limitations under the License. */}} {{- if .Values.manifests.job_db_drop }} -{{- $envAll := . 
}} -{{- $dependencies := .Values.dependencies.static.db_drop }} - -{{- $randStringSuffix := randAlphaNum 5 | lower }} - -{{- $serviceAccountName := print "cinder-db-drop-" $randStringSuffix }} -{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: {{ print "cinder-db-drop-" $randStringSuffix }} - annotations: - "helm.sh/hook": pre-delete - "helm.sh/hook-delete-policy": hook-succeeded -spec: - template: - metadata: - labels: -{{ tuple $envAll "cinder" "db-drop" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }} - spec: - serviceAccountName: {{ $serviceAccountName }} - restartPolicy: OnFailure - nodeSelector: - {{ .Values.labels.job.node_selector_key }}: {{ .Values.labels.job.node_selector_value }} - initContainers: -{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }} - containers: - - name: cinder-db-drop - image: {{ .Values.images.tags.db_drop }} - imagePullPolicy: {{ .Values.images.pull_policy }} -{{ tuple $envAll $envAll.Values.pod.resources.jobs.db_drop | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} - env: - - name: ROOT_DB_CONNECTION - valueFrom: - secretKeyRef: - name: {{ .Values.secrets.oslo_db.admin }} - key: DB_CONNECTION - - name: OPENSTACK_CONFIG_FILE - value: /etc/cinder/cinder.conf - - name: OPENSTACK_CONFIG_DB_SECTION - value: database - - name: OPENSTACK_CONFIG_DB_KEY - value: connection - command: - - /tmp/db-drop.py - volumeMounts: - - name: cinder-bin - mountPath: /tmp/db-drop.py - subPath: db-drop.py - readOnly: true - - name: etccinder - mountPath: /etc/cinder - - name: cinder-etc - mountPath: /etc/cinder/cinder.conf - subPath: cinder.conf - readOnly: true - volumes: - - name: etccinder - emptyDir: {} - - name: cinder-etc - configMap: - name: cinder-etc - defaultMode: 0444 - - name: cinder-bin - configMap: - name: cinder-bin - defaultMode: 0555 +{{- $dbDropJob := dict "envAll" . "serviceName" "cinder" -}} +{{ $dbDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" }} {{- end }} diff --git a/cinder/templates/job-storage-init.yaml b/cinder/templates/job-storage-init.yaml index 80b2b95bd7..dbc0f56eca 100644 --- a/cinder/templates/job-storage-init.yaml +++ b/cinder/templates/job-storage-init.yaml @@ -102,8 +102,12 @@ spec: value: {{ index (index .Values.conf.backends (include "cinder.ceph_volume_section_name" $envAll)) "rbd_pool" | quote }} - name: RBD_POOL_USER value: {{ index (index .Values.conf.backends (include "cinder.ceph_volume_section_name" $envAll)) "rbd_user" | quote }} + - name: RBD_POOL_CRUSH_RULE + value: {{ .Values.conf.ceph.pools.volume.crush_rule | quote }} + - name: RBD_POOL_REPLICATION + value: {{ .Values.conf.ceph.pools.volume.replication | quote }} - name: RBD_POOL_CHUNK_SIZE - value: "8" + value: {{ .Values.conf.ceph.pools.volume.chunk_size | quote }} - name: RBD_POOL_SECRET value: {{ .Values.secrets.rbd.volume | quote }} {{- end }} diff --git a/cinder/templates/service-ingress-api.yaml b/cinder/templates/service-ingress-api.yaml index 6486758f2b..b21c0db0f8 100644 --- a/cinder/templates/service-ingress-api.yaml +++ b/cinder/templates/service-ingress-api.yaml @@ -14,19 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.service_ingress_api }} -{{- $envAll := . 
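The two replacement lines above are the whole of the consolidated DB-drop job: build a context dict and pipe it through ``helm-toolkit.manifests.job_db_drop_mysql``. Where a chart's config file does not follow the ``/etc/<service>/<service>.conf`` convention, the same helper accepts an explicit ``dbToDrop`` dict, as the glance chart does later in this patch; a sketch combining the two forms seen here:

.. code-block:: yaml

    # templates/job-db-drop.yaml for a chart with a non-default config file.
    # serviceName drives the defaults; dbToDrop overrides the config lookup.
    {{- if .Values.manifests.job_db_drop }}
    {{- $dbToDrop := dict "adminSecret" .Values.secrets.oslo_db.admin "configFile" "/etc/glance/glance-api.conf" "configDbSection" "database" "configDbKey" "connection" -}}
    {{- $dbDropJob := dict "envAll" . "serviceName" "glance" "dbToDrop" $dbToDrop -}}
    {{ $dbDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" }}
    {{- end }}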
}} -{{- if .Values.network.api.ingress.public }} ---- -apiVersion: v1 -kind: Service -metadata: - name: {{ tuple "volume" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - name: http - port: 80 - selector: - app: ingress-api -{{- end }} +{{- if and .Values.manifests.service_ingress_api .Values.network.api.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . "backendServiceType" "volume" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/cinder/values.yaml b/cinder/values.yaml index 65b613bf3b..6207e9f41e 100644 --- a/cinder/values.yaml +++ b/cinder/values.yaml @@ -59,7 +59,7 @@ images: cinder_storage_init: docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04 cinder_backup: docker.io/openstackhelm/cinder:newton cinder_backup_storage_init: docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04 - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: "IfNotPresent" jobs: @@ -72,13 +72,14 @@ jobs: pod: user: cinder: - uid: 1000 + uid: 42424 affinity: - anti: - type: - default: preferredDuringSchedulingIgnoredDuringExecution - topologyKey: - default: kubernetes.io/hostname + anti: + type: + default: preferredDuringSchedulingIgnoredDuringExecution + topologyKey: + default: kubernetes.io/hostname + mounts: cinder_api: init_container: null @@ -246,8 +247,10 @@ network: api: ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / external_policy_local: false node_port: @@ -422,32 +425,293 @@ conf: clusters:get: rule:admin_api clusters:get_all: rule:admin_api clusters:update: rule:admin_api - cinder_sudoers: - override: - append: - rootwrap: - override: - append: + cinder_sudoers: | + # This sudoers file supports rootwrap for both Kolla and LOCI Images. + Defaults !requiretty + Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/var/lib/openstack/bin:/var/lib/kolla/venv/bin" + cinder ALL = (root) NOPASSWD: /var/lib/kolla/venv/bin/cinder-rootwrap /etc/cinder/rootwrap.conf *, /var/lib/openstack/bin/cinder-rootwrap /etc/cinder/rootwrap.conf * + rootwrap: | + # Configuration for cinder-rootwrap + # This file should be owned by (and only-writeable by) the root user + + [DEFAULT] + # List of directories to load filter definitions from (separated by ','). + # These directories MUST all be only writeable by root ! + filters_path=/etc/cinder/rootwrap.d + + # List of directories to search executables in, in case filters do not + # explicitely specify a full path (separated by ',') + # If not specified, defaults to system PATH environment variable. + # These directories MUST all be only writeable by root ! + exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin,/usr/local/sbin,/var/lib/openstack/bin,/var/lib/kolla/venv/bin + + # Enable logging to syslog + # Default value is False + use_syslog=False + + # Which syslog facility to use. + # Valid values include auth, authpriv, syslog, local0, local1... + # Default value is 'syslog' + syslog_log_facility=syslog + + # Which messages to log. 
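The cinder values above drop the per-chart ``kubernetes.io/ingress.class`` annotation in favour of a ``classes`` block, paired with the consolidated ``helm-toolkit.manifests.service_ingress`` template earlier in the hunk. A sketch of the resulting operator-facing knobs (the class names are site-tunable assumptions, not requirements):

.. code-block:: yaml

    # cinder values override: public ingress stays enabled, but is
    # claimed by differently named namespace/cluster ingress classes.
    network:
      api:
        ingress:
          public: true
          classes:
            namespace: "custom-nginx"
            cluster: "custom-nginx-cluster"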
+ # INFO means log all usage + # ERROR means only log unsuccessful attempts + syslog_log_level=ERROR rootwrap_filters: volume: - override: - append: + pods: + - volume + content: | + # cinder-rootwrap command filters for volume nodes + # This file should be owned by (and only-writeable by) the root user + + [Filters] + # cinder/volume/iscsi.py: iscsi_helper '--op' ... + ietadm: CommandFilter, ietadm, root + tgtadm: CommandFilter, tgtadm, root + iscsictl: CommandFilter, iscsictl, root + tgt-admin: CommandFilter, tgt-admin, root + cinder-rtstool: CommandFilter, cinder-rtstool, root + scstadmin: CommandFilter, scstadmin, root + + # LVM related show commands + pvs: EnvFilter, env, root, LC_ALL=C, pvs + vgs: EnvFilter, env, root, LC_ALL=C, vgs + lvs: EnvFilter, env, root, LC_ALL=C, lvs + lvdisplay: EnvFilter, env, root, LC_ALL=C, lvdisplay + + # -LVM related show commands with suppress fd warnings + pvs_fdwarn: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, pvs + vgs_fdwarn: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, vgs + lvs_fdwarn: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvs + lvdisplay_fdwarn: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvdisplay + + + # -LVM related show commands conf var + pvs_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, pvs + vgs_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, vgs + lvs_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, lvs + lvdisplay_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, lvdisplay + + # -LVM conf var with suppress fd_warnings + pvs_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, pvs + vgs_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, vgs + lvs_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvs + lvdisplay_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvdisplay + + # os-brick library commands + # os_brick.privileged.run_as_root oslo.privsep context + # This line ties the superuser privs with the config files, context name, + # and (implicitly) the actual python code invoked. + privsep-rootwrap: RegExpFilter, privsep-helper, root, privsep-helper, --config-file, /etc/(?!\.\.).*, --privsep_context, os_brick.privileged.default, --privsep_sock_path, /tmp/.* + # The following and any cinder/brick/* entries should all be obsoleted + # by privsep, and may be removed once the os-brick version requirement + # is updated appropriately. + scsi_id: CommandFilter, /lib/udev/scsi_id, root + drbdadm: CommandFilter, drbdadm, root + + # cinder/brick/local_dev/lvm.py: 'vgcreate', vg_name, pv_list + vgcreate: CommandFilter, vgcreate, root + + # cinder/brick/local_dev/lvm.py: 'lvcreate', '-L', sizestr, '-n', volume_name,.. + # cinder/brick/local_dev/lvm.py: 'lvcreate', '-L', ... + lvcreate: EnvFilter, env, root, LC_ALL=C, lvcreate + lvcreate_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, lvcreate + lvcreate_fdwarn: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvcreate + lvcreate_lvmconf_fdwarn: EnvFilter, env, root, LVM_SYSTEM_DIR=, LVM_SUPPRESS_FD_WARNINGS=, LC_ALL=C, lvcreate + + # cinder/volume/driver.py: 'dd', 'if=%s' % srcstr, 'of=%s' % deststr,... + dd: CommandFilter, dd, root + + # cinder/volume/driver.py: 'lvremove', '-f', %s/%s % ... + lvremove: CommandFilter, lvremove, root + + # cinder/volume/driver.py: 'lvrename', '%(vg)s', '%(orig)s' '(new)s'... 
+ lvrename: CommandFilter, lvrename, root + + # cinder/brick/local_dev/lvm.py: 'lvextend', '-L' '%(new_size)s', '%(lv_name)s' ... + # cinder/brick/local_dev/lvm.py: 'lvextend', '-L' '%(new_size)s', '%(thin_pool)s' ... + lvextend: EnvFilter, env, root, LC_ALL=C, lvextend + lvextend_lvmconf: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, lvextend + lvextend_fdwarn: EnvFilter, env, root, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvextend + lvextend_lvmconf_fdwarn: EnvFilter, env, root, LVM_SYSTEM_DIR=, LC_ALL=C, LVM_SUPPRESS_FD_WARNINGS=, lvextend + + # cinder/brick/local_dev/lvm.py: 'lvchange -a y -K ' + lvchange: CommandFilter, lvchange, root + + # cinder/brick/local_dev/lvm.py: 'lvconvert', '--merge', snapshot_name + lvconvert: CommandFilter, lvconvert, root + + # cinder/volume/driver.py: 'iscsiadm', '-m', 'discovery', '-t',... + # cinder/volume/driver.py: 'iscsiadm', '-m', 'node', '-T', ... + iscsiadm: CommandFilter, iscsiadm, root + + # cinder/volume/utils.py: utils.temporary_chown(path, 0) + chown: CommandFilter, chown, root + + # cinder/volume/utils.py: copy_volume(..., ionice='...') + ionice_1: ChainingRegExpFilter, ionice, root, ionice, -c[0-3], -n[0-7] + ionice_2: ChainingRegExpFilter, ionice, root, ionice, -c[0-3] + + # cinder/volume/utils.py: setup_blkio_cgroup() + cgcreate: CommandFilter, cgcreate, root + cgset: CommandFilter, cgset, root + cgexec: ChainingRegExpFilter, cgexec, root, cgexec, -g, blkio:\S+ + + # cinder/volume/driver.py + dmsetup: CommandFilter, dmsetup, root + ln: CommandFilter, ln, root + + # cinder/image/image_utils.py + qemu-img: EnvFilter, env, root, LC_ALL=C, qemu-img + qemu-img_convert: CommandFilter, qemu-img, root + + udevadm: CommandFilter, udevadm, root + + # cinder/volume/driver.py: utils.read_file_as_root() + cat: CommandFilter, cat, root + + # cinder/volume/nfs.py + stat: CommandFilter, stat, root + mount: CommandFilter, mount, root + df: CommandFilter, df, root + du: CommandFilter, du, root + truncate: CommandFilter, truncate, root + chmod: CommandFilter, chmod, root + rm: CommandFilter, rm, root + + # cinder/volume/drivers/remotefs.py + mkdir: CommandFilter, mkdir, root + + # cinder/volume/drivers/netapp/nfs.py: + netapp_nfs_find: RegExpFilter, find, root, find, ^[/]*([^/\0]+(/+)?)*$, -maxdepth, \d+, -name, img-cache.*, -amin, \+\d+ + + # cinder/volume/drivers/glusterfs.py + chgrp: CommandFilter, chgrp, root + umount: CommandFilter, umount, root + fallocate: CommandFilter, fallocate, root + + # cinder/volumes/drivers/hds/hds.py: + hus-cmd: CommandFilter, hus-cmd, root + hus-cmd_local: CommandFilter, /usr/local/bin/hus-cmd, root + + # cinder/volumes/drivers/hds/hnas_backend.py + ssc: CommandFilter, ssc, root + + # cinder/brick/initiator/connector.py: + ls: CommandFilter, ls, root + tee: CommandFilter, tee, root + multipath: CommandFilter, multipath, root + multipathd: CommandFilter, multipathd, root + systool: CommandFilter, systool, root + + # cinder/volume/drivers/block_device.py + blockdev: CommandFilter, blockdev, root + + # cinder/volume/drivers/ibm/gpfs.py + # cinder/volume/drivers/tintri.py + mv: CommandFilter, mv, root + + # cinder/volume/drivers/ibm/gpfs.py + cp: CommandFilter, cp, root + mmgetstate: CommandFilter, /usr/lpp/mmfs/bin/mmgetstate, root + mmclone: CommandFilter, /usr/lpp/mmfs/bin/mmclone, root + mmlsattr: CommandFilter, /usr/lpp/mmfs/bin/mmlsattr, root + mmchattr: CommandFilter, /usr/lpp/mmfs/bin/mmchattr, root + mmlsconfig: CommandFilter, /usr/lpp/mmfs/bin/mmlsconfig, root + mmlsfs: CommandFilter, /usr/lpp/mmfs/bin/mmlsfs, root + 
mmlspool: CommandFilter, /usr/lpp/mmfs/bin/mmlspool, root + mkfs: CommandFilter, mkfs, root + mmcrfileset: CommandFilter, /usr/lpp/mmfs/bin/mmcrfileset, root + mmlinkfileset: CommandFilter, /usr/lpp/mmfs/bin/mmlinkfileset, root + mmunlinkfileset: CommandFilter, /usr/lpp/mmfs/bin/mmunlinkfileset, root + mmdelfileset: CommandFilter, /usr/lpp/mmfs/bin/mmdelfileset, root + mmcrsnapshot: CommandFilter, /usr/lpp/mmfs/bin/mmcrsnapshot, root + mmdelsnapshot: CommandFilter, /usr/lpp/mmfs/bin/mmdelsnapshot, root + + # cinder/volume/drivers/ibm/gpfs.py + # cinder/volume/drivers/ibm/ibmnas.py + find_maxdepth_inum: RegExpFilter, find, root, find, ^[/]*([^/\0]+(/+)?)*$, -maxdepth, \d+, -ignore_readdir_race, -inum, \d+, -print0, -quit + + # cinder/brick/initiator/connector.py: + aoe-revalidate: CommandFilter, aoe-revalidate, root + aoe-discover: CommandFilter, aoe-discover, root + aoe-flush: CommandFilter, aoe-flush, root + + # cinder/brick/initiator/linuxscsi.py: + sg_scan: CommandFilter, sg_scan, root + + #cinder/backup/services/tsm.py + dsmc:CommandFilter,/usr/bin/dsmc,root + + # cinder/volume/drivers/hitachi/hbsd_horcm.py + raidqry: CommandFilter, raidqry, root + raidcom: CommandFilter, raidcom, root + pairsplit: CommandFilter, pairsplit, root + paircreate: CommandFilter, paircreate, root + pairdisplay: CommandFilter, pairdisplay, root + pairevtwait: CommandFilter, pairevtwait, root + horcmstart.sh: CommandFilter, horcmstart.sh, root + horcmshutdown.sh: CommandFilter, horcmshutdown.sh, root + horcmgr: EnvFilter, env, root, HORCMINST=, /etc/horcmgr + + # cinder/volume/drivers/hitachi/hbsd_snm2.py + auman: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auman + auluref: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auluref + auhgdef: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auhgdef + aufibre1: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/aufibre1 + auhgwwn: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auhgwwn + auhgmap: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auhgmap + autargetmap: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/autargetmap + aureplicationvvol: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/aureplicationvvol + auluadd: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auluadd + auludel: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auludel + auluchgsize: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auluchgsize + auchapuser: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auchapuser + autargetdef: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/autargetdef + autargetopt: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/autargetopt + autargetini: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, 
STONAVM_ACT=, /usr/stonavm/autargetini + auiscsi: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/auiscsi + audppool: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/audppool + aureplicationlocal: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/aureplicationlocal + aureplicationmon: EnvFilter, env, root, LANG=, STONAVM_HOME=, LD_LIBRARY_PATH=, STONAVM_RSP_PASS=, STONAVM_ACT=, /usr/stonavm/aureplicationmon + + # cinder/volume/drivers/hgst.py + vgc-cluster: CommandFilter, vgc-cluster, root + + # cinder/volume/drivers/vzstorage.py + pstorage-mount: CommandFilter, pstorage-mount, root + pstorage: CommandFilter, pstorage, root + ploop: CommandFilter, ploop, root + + # initiator/connector.py: + drv_cfg: CommandFilter, /opt/emc/scaleio/sdc/bin/drv_cfg, root, /opt/emc/scaleio/sdc/bin/drv_cfg, --query_guid ceph: override: append: monitors: [] admin_keyring: null + pools: + backup: + replication: 3 + crush_rule: replicated_rule + chunk_size: 8 + volume: + replication: 3 + crush_rule: replicated_rule + chunk_size: 8 cinder: DEFAULT: use_syslog: false use_stderr: true enable_v1_api: false volume_name_template: "%s" - osapi_volume_workers: 8 + osapi_volume_workers: 1 glance_api_version: 2 os_region_name: RegionOne host: cinder-volume-worker - #NOTE(portdirect): the bind port should not be defined, and is manipulated + # NOTE(portdirect): the bind port should not be defined, and is manipulated # via the endpoints section. osapi_volume_listen_port: null enabled_backends: "rbd1" @@ -580,8 +844,8 @@ dependencies: service: identity rabbit_init: services: - - service: oslo_messaging - endpoint: internal + - service: oslo_messaging + endpoint: internal scheduler: jobs: - cinder-db-sync @@ -725,7 +989,7 @@ endpoints: path: default: '/v1/%(tenant_id)s' scheme: - default: 'http' + default: 'http' port: api: default: 8776 @@ -740,7 +1004,7 @@ endpoints: path: default: '/v2/%(tenant_id)s' scheme: - default: 'http' + default: 'http' port: api: default: 8776 diff --git a/neutron/templates/etc/_neutron_sudoers.tpl b/congress/templates/job-db-drop.yaml similarity index 58% rename from neutron/templates/etc/_neutron_sudoers.tpl rename to congress/templates/job-db-drop.yaml index cf1f12aca6..3e790d16af 100644 --- a/neutron/templates/etc/_neutron_sudoers.tpl +++ b/congress/templates/job-db-drop.yaml @@ -14,7 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. */}} -# This sudoers file supports rootwrap for both Kolla and LOCI Images. -Defaults !requiretty -Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/var/lib/openstack/bin:/var/lib/kolla/venv/bin" -neutron ALL = (root) NOPASSWD: /var/lib/kolla/venv/bin/neutron-rootwrap /etc/neutron/rootwrap.conf *, /var/lib/openstack/bin/neutron-rootwrap /etc/neutron/rootwrap.conf * +{{- if .Values.manifests.job_db_drop }} +{{- $dbDropJob := dict "envAll" . 
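The ``conf.ceph.pools`` tree added above feeds the new ``RBD_POOL_REPLICATION``, ``RBD_POOL_CRUSH_RULE`` and ``RBD_POOL_CHUNK_SIZE`` environment variables in the storage-init jobs, which the init scripts apply with ``ceph osd pool set``. Tuning a pool is therefore a values-only change; a sketch using the keys this patch defines (the numbers and rule name are illustrative):

.. code-block:: yaml

    # cinder values override: run the backup pool at 2 replicas on a
    # site-specific CRUSH rule; the volume pool keeps the defaults.
    conf:
      ceph:
        pools:
          backup:
            replication: 2
            crush_rule: site_hdd_rule
            chunk_size: 8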
"serviceName" "congress" -}} +{{ $dbDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" }} +{{- end }} diff --git a/congress/templates/service-ingress-api.yaml b/congress/templates/service-ingress-api.yaml index d1a23f3283..6fe2abcfbb 100644 --- a/congress/templates/service-ingress-api.yaml +++ b/congress/templates/service-ingress-api.yaml @@ -14,19 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.service_ingress_api }} -{{- $envAll := . }} -{{- if .Values.network.api.ingress.public }} ---- -apiVersion: v1 -kind: Service -metadata: - name: {{ tuple "policy" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - name: http - port: 80 - selector: - app: ingress-api -{{- end }} +{{- if and .Values.manifests.service_ingress_api .Values.network.api.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . "backendServiceType" "policy" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/congress/values.yaml b/congress/values.yaml index c3457d79e0..20c635a181 100644 --- a/congress/values.yaml +++ b/congress/values.yaml @@ -47,15 +47,17 @@ images: ks_endpoints: docker.io/openstackhelm/heat:newton congress_ds_create: docker.io/openstackhelm/congress:newton congress_scripted_test: docker.io/openstackhelm/congress:newton - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: "IfNotPresent" network: api: ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / node_port: enabled: false @@ -476,6 +478,7 @@ manifests: deployment_policy_engine: true ingress_api: true job_bootstrap: true + job_db_drop: false job_db_init: true job_db_sync: true job_ds_create: true diff --git a/doc/source/devref/images.rst b/doc/source/devref/images.rst index f9e8739901..cc589b9e46 100644 --- a/doc/source/devref/images.rst +++ b/doc/source/devref/images.rst @@ -68,7 +68,7 @@ chart: cfn: docker.io/kolla/ubuntu-source-heat-api:3.0.3 cloudwatch: docker.io/kolla/ubuntu-source-heat-api:3.0.3 engine: docker.io/openstackhelm/heat:newton - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: "IfNotPresent" The OpenStack-Helm project today uses a mix of Docker images from diff --git a/doc/source/devref/index.rst b/doc/source/devref/index.rst index 663d88a15f..a6d4e00d72 100644 --- a/doc/source/devref/index.rst +++ b/doc/source/devref/index.rst @@ -13,3 +13,4 @@ Contents: pod-disruption-budgets upgrades fluent-logging + node-and-label-specific-configurations diff --git a/doc/source/devref/networking.rst b/doc/source/devref/networking.rst index 8275c8531c..fc5d7f129f 100644 --- a/doc/source/devref/networking.rst +++ b/doc/source/devref/networking.rst @@ -137,9 +137,11 @@ for the L2 agent daemonset: endpoint: internal - service: compute endpoint: internal - daemonset: - # this should be set to corresponding neutron L2 agent - - neutron-ovs-agent + pod: + # this should be set to corresponding neutron L2 agent + - labels: + application: neutron + component: neutron-ovs-agent There is also a need for DHCP agent to pass ovs agent config file (in :code:`neutron/templates/bin/_neutron-dhcp-agent.sh.tpl`): @@ -317,14 +319,20 @@ and use this `neutron/values.yaml` 
override: backend: linuxbridge dependencies: dhcp: - daemonset: - - neutron-lb-agent + pod: + - labels: + application: neutron + component: neutron-lb-agent metadata: - daemonset: - - neutron-lb-agent + pod: + - labels: + application: neutron + component: neutron-lb-agent l3: - daemonset: - - neutron-lb-agent + pod: + - labels: + application: neutron + component: neutron-lb-agent conf: neutron: DEFAULT diff --git a/doc/source/devref/node-and-label-specific-configurations.rst b/doc/source/devref/node-and-label-specific-configurations.rst new file mode 100644 index 0000000000..41ed9c901e --- /dev/null +++ b/doc/source/devref/node-and-label-specific-configurations.rst @@ -0,0 +1,106 @@ +Node and label specific configurations +-------------------------------------- + +There are situations where we need to define configuration differently for +different nodes in the environment. For example, we may require that some nodes +have a different vcpu_pin_set or other hardware-specific deltas in nova.conf. + +To do this, we can specify overrides in the values fed to the chart. For example: + +.. code-block:: yaml + + conf: + nova: + DEFAULT: + vcpu_pin_set: "0-31" + cpu_allocation_ratio: 3.0 + overrides: + nova_compute: + labels: + - label: + key: compute-type + values: + - "dpdk" + - "sriov" + conf: + nova: + DEFAULT: + vcpu_pin_set: "0-15" + - label: + key: another-label + values: + - "another-value" + conf: + nova: + DEFAULT: + vcpu_pin_set: "16-31" + hosts: + - name: host1.fqdn + conf: + nova: + DEFAULT: + vcpu_pin_set: "8-15" + - name: host2.fqdn + conf: + nova: + DEFAULT: + vcpu_pin_set: "16-23" + +Note that only one set of overrides is applied per node, such that: + +1. Host overrides supersede label overrides +2. The farther down the list the label appears, the greater precedence it has. + For example, "another-label" overrides will apply to a node containing both labels. + +Also note that other non-overridden values are inherited by hosts and labels with overrides. +The following shows a set of example hosts and the values fed into the configmap for each: + +1. ``host1.fqdn`` with labels ``compute-type: dpdk, sriov`` and ``another-label: another-value``: + + .. code-block:: yaml + + conf: + nova: + DEFAULT: + vcpu_pin_set: "8-15" + cpu_allocation_ratio: 3.0 + +2. ``host2.fqdn`` with labels ``compute-type: dpdk, sriov`` and ``another-label: another-value``: + + .. code-block:: yaml + + conf: + nova: + DEFAULT: + vcpu_pin_set: "16-23" + cpu_allocation_ratio: 3.0 + +3. ``host3.fqdn`` with labels ``compute-type: dpdk, sriov`` and ``another-label: another-value``: + + .. code-block:: yaml + + conf: + nova: + DEFAULT: + vcpu_pin_set: "16-31" + cpu_allocation_ratio: 3.0 + +4. ``host4.fqdn`` with labels ``compute-type: dpdk, sriov``: + + .. code-block:: yaml + + conf: + nova: + DEFAULT: + vcpu_pin_set: "0-15" + cpu_allocation_ratio: 3.0 + +5. ``host5.fqdn`` with no labels: + + .. code-block:: yaml + + conf: + nova: + DEFAULT: + vcpu_pin_set: "0-31" + cpu_allocation_ratio: 3.0 diff --git a/doc/source/install/developer/deploy-with-ceph.rst b/doc/source/install/developer/deploy-with-ceph.rst index 2e09cee37a..ec802f70f3 100644 --- a/doc/source/install/developer/deploy-with-ceph.rst +++ b/doc/source/install/developer/deploy-with-ceph.rst @@ -2,6 +2,10 @@ Deployment With Ceph ==================== +.. note:: + For other deployment options, select the appropriate ``Deployment with ...`` + option from the `Index <../developer/index.html>`__ page.
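As the documentation changes above describe, chart dependencies now target pod labels rather than daemonset names, using the pod-dependency feature of kubernetes-entrypoint v0.3.0. The general override shape, sketched for a single dependency with the label values this patch uses for neutron:

.. code-block:: yaml

    # neutron values override: the DHCP agent waits for any pod
    # carrying the L2 agent's labels, not for a named daemonset.
    dependencies:
      dhcp:
        pod:
          - labels:
              application: neutron
              component: neutron-ovs-agent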
+ Deploy Ceph ^^^^^^^^^^^ diff --git a/doc/source/install/developer/deploy-with-nfs.rst b/doc/source/install/developer/deploy-with-nfs.rst index 1da35cd6a5..0b2f5a5945 100644 --- a/doc/source/install/developer/deploy-with-nfs.rst +++ b/doc/source/install/developer/deploy-with-nfs.rst @@ -2,6 +2,10 @@ Deployment With NFS =================== +.. note:: + For other deployment options, select the appropriate ``Deployment with ...`` + option from the `Index <../developer/index.html>`__ page. + Deploy NFS Provisioner ^^^^^^^^^^^^^^^^^^^^^^ diff --git a/doc/source/install/developer/kubernetes-and-common-setup.rst b/doc/source/install/developer/kubernetes-and-common-setup.rst index 6abb986a1c..ded5ebeb27 100644 --- a/doc/source/install/developer/kubernetes-and-common-setup.rst +++ b/doc/source/install/developer/kubernetes-and-common-setup.rst @@ -26,6 +26,12 @@ should be cloned: git clone https://git.openstack.org/openstack/openstack-helm.git +.. note:: + By default, this installation will use the Google DNS servers 8.8.8.8 and 8.8.4.4, + and it updates resolv.conf. These DNS nameserver entries can be changed by + updating the file ``/opt/openstack-helm-infra/tools/images/kubeadm-aio/assets/opt/playbooks/vars.yaml`` + under the section ``external_dns_nameservers``. + Deploy Kubernetes & Helm ------------------------ diff --git a/doc/source/install/multinode.rst b/doc/source/install/multinode.rst index 79d5ee7abd..9640381427 100644 --- a/doc/source/install/multinode.rst +++ b/doc/source/install/multinode.rst @@ -19,7 +19,7 @@ comments, please create an `issue .. note:: Please see the supported application versions outlined in the - `source variable file `_. + `source variable file `_. Other versions and considerations (such as other CNI SDN providers), config map data, and value overrides will be included in other @@ -55,9 +55,9 @@ On the worker nodes #!/bin/bash set -xe - apt-get update - apt-get install --no-install-recommends -y \ - git + sudo apt-get update + sudo apt-get install --no-install-recommends -y \ + git SSH-Key preparation @@ -78,7 +78,7 @@ should be cloned onto each node in the cluster: #!/bin/bash set -xe - chown -R ubuntu: /opt + sudo chown -R ubuntu: /opt git clone https://git.openstack.org/openstack/openstack-helm-infra.git /opt/openstack-helm-infra git clone https://git.openstack.org/openstack/openstack-helm.git /opt/openstack-helm @@ -141,6 +141,15 @@ On the master node create an environment file for the cluster: domain: cluster.local EOF + +.. note:: + By default, this installation will use the Google DNS servers 8.8.8.8 and 8.8.4.4, + and it updates resolv.conf. These DNS nameserver entries can be changed by + updating the file ``/openstack-helm-infra/tools/images/kubeadm-aio/assets/opt/playbooks/vars.yaml`` + under the section ``external_dns_nameservers``. This change must be done on each + node in your cluster. + + Run the playbooks ----------------- @@ -157,6 +166,9 @@ On the master node run the playbooks: Deploy OpenStack-Helm ===================== +.. note:: + The following commands all assume that they are run from the ``openstack-helm`` directory.
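Both DNS notes above point at the same playbook variable. The exact layout of ``vars.yaml`` is not shown in this patch, so the stanza below is an assumption based on the documented section name (the resolver addresses are examples only):

.. code-block:: yaml

    # /opt/openstack-helm-infra/tools/images/kubeadm-aio/assets/opt/playbooks/vars.yaml
    # assumed shape: replace the Google resolvers with site-local ones on each node.
    external_dns_nameservers:
      - 10.0.0.2
      - 10.0.0.3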
+ Setup Clients on the host and assemble the charts ------------------------------------------------- diff --git a/doc/source/specs/support-linux-bridge-on-neutron.rst b/doc/source/specs/support-linux-bridge-on-neutron.rst index 0386e11206..713af3eac4 100644 --- a/doc/source/specs/support-linux-bridge-on-neutron.rst +++ b/doc/source/specs/support-linux-bridge-on-neutron.rst @@ -98,14 +98,20 @@ updated to reflect the new kind on L2 agent: dependencies: dhcp: - daemonset: - - lb-agent + pod: + - labels: + application: neutron + component: neutron-lb-agent metadata: - daemonset: - - lb-agent + pod: + - labels: + application: neutron + component: neutron-lb-agent l3: - daemonset: - - lb-agent + pod: + - labels: + application: neutron + component: neutron-lb-agent LinuxBridge should be also enabled in :code:`manifests` section: diff --git a/etcd/values.yaml b/etcd/values.yaml index 4c7289c554..366e129d61 100644 --- a/etcd/values.yaml +++ b/etcd/values.yaml @@ -20,7 +20,7 @@ images: tags: etcd: 'gcr.io/google_containers/etcd-amd64:2.2.5' - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: IfNotPresent labels: diff --git a/glance/templates/configmap-etc.yaml b/glance/templates/configmap-etc.yaml index 9c5591cbb3..2fcf0d48e8 100644 --- a/glance/templates/configmap-etc.yaml +++ b/glance/templates/configmap-etc.yaml @@ -162,6 +162,5 @@ data: {{ include "helm-toolkit.utils.to_ini" .Values.conf.paste_registry | indent 4 }} policy.json: | {{ toJson .Values.conf.policy | indent 4 }} - swift-store.conf: | -{{- tuple .Values.conf.swift_store "etc/_swift-store.conf.tpl" . | include "helm-toolkit.utils.configmap_templater" }} +{{- include "helm-toolkit.snippets.values_template_renderer" (dict "envAll" $envAll "template" .Values.conf.swift_store "key" "swift-store.conf") | indent 2 }} {{- end }} diff --git a/glance/templates/etc/_swift-store.conf.tpl b/glance/templates/etc/_swift-store.conf.tpl deleted file mode 100644 index a537857448..0000000000 --- a/glance/templates/etc/_swift-store.conf.tpl +++ /dev/null @@ -1,30 +0,0 @@ -{{/* -Copyright 2017 The Openstack-Helm Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/}} - -[{{ .Values.conf.glance.glance_store.default_swift_reference }}] -{{- if eq .Values.storage "radosgw" }} -auth_version = 1 -auth_address = {{ tuple "ceph_object_store" "public" "api" . | include "helm-toolkit.endpoints.keystone_endpoint_uri_lookup" }} -user = {{ .Values.endpoints.ceph_object_store.auth.glance.username }}:swift -key = {{ .Values.endpoints.ceph_object_store.auth.glance.password }} -{{- else }} -user = {{ .Values.endpoints.identity.auth.glance.project_name }}:{{ .Values.endpoints.identity.auth.glance.username }} -key = {{ .Values.endpoints.identity.auth.glance.password }} -auth_address = {{ tuple "identity" "internal" "api" . 
| include "helm-toolkit.endpoints.keystone_endpoint_uri_lookup" }} -user_domain_name = {{ .Values.endpoints.identity.auth.glance.user_domain_name }} -project_domain_name = {{ .Values.endpoints.identity.auth.glance.project_domain_name }} -auth_version = 3 -{{- end -}} diff --git a/glance/templates/job-clean.yaml b/glance/templates/job-clean.yaml index e67e567a87..9dd15912a9 100644 --- a/glance/templates/job-clean.yaml +++ b/glance/templates/job-clean.yaml @@ -28,9 +28,6 @@ apiVersion: rbac.authorization.k8s.io/v1beta1 kind: Role metadata: name: {{ $serviceAccountName }} - annotations: - "helm.sh/hook": pre-delete - "helm.sh/hook-delete-policy": hook-succeeded rules: - apiGroups: - "" @@ -44,9 +41,6 @@ apiVersion: rbac.authorization.k8s.io/v1beta1 kind: RoleBinding metadata: name: {{ $serviceAccountName }} - annotations: - "helm.sh/hook": pre-delete - "helm.sh/hook-delete-policy": hook-succeeded roleRef: apiGroup: rbac.authorization.k8s.io kind: Role diff --git a/glance/templates/job-db-drop.yaml b/glance/templates/job-db-drop.yaml index 8c65bd0c28..1622b06244 100644 --- a/glance/templates/job-db-drop.yaml +++ b/glance/templates/job-db-drop.yaml @@ -15,72 +15,8 @@ limitations under the License. */}} {{- if .Values.manifests.job_db_drop }} -{{- $envAll := . }} -{{- $dependencies := .Values.dependencies.static.db_drop }} - -{{- $randStringSuffix := randAlphaNum 5 | lower }} - -{{- $serviceAccountName := print "glance-db-drop-" $randStringSuffix }} -{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: {{ print "glance-db-drop-" $randStringSuffix }} - annotations: - "helm.sh/hook": pre-delete - "helm.sh/hook-delete-policy": hook-succeeded -spec: - template: - metadata: - labels: -{{ tuple $envAll "glance" "db-drop" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }} - spec: - serviceAccountName: {{ $serviceAccountName }} - restartPolicy: OnFailure - nodeSelector: - {{ .Values.labels.job.node_selector_key }}: {{ .Values.labels.job.node_selector_value }} - initContainers: -{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }} - containers: - - name: glance-db-drop - image: {{ .Values.images.tags.db_drop }} - imagePullPolicy: {{ .Values.images.pull_policy }} -{{ tuple $envAll $envAll.Values.pod.resources.jobs.db_drop | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} - env: - - name: ROOT_DB_CONNECTION - valueFrom: - secretKeyRef: - name: {{ .Values.secrets.oslo_db.admin }} - key: DB_CONNECTION - - name: OPENSTACK_CONFIG_FILE - value: /etc/glance/glance-api.conf - - name: OPENSTACK_CONFIG_DB_SECTION - value: database - - name: OPENSTACK_CONFIG_DB_KEY - value: connection - command: - - /tmp/db-drop.py - volumeMounts: - - name: glance-bin - mountPath: /tmp/db-drop.py - subPath: db-drop.py - readOnly: true - - name: etcglance - mountPath: /etc/glance - - name: glance-etc - mountPath: /etc/glance/glance-api.conf - subPath: glance-api.conf - readOnly: true - volumes: - - name: etcglance - emptyDir: {} - - name: glance-etc - configMap: - name: glance-etc - defaultMode: 0444 - - name: glance-bin - configMap: - name: glance-bin - defaultMode: 0555 +{{- $serviceName := "glance" -}} +{{- $dbToDrop := dict "adminSecret" .Values.secrets.oslo_db.admin "configFile" (printf "/etc/%s/%s.conf" $serviceName "glance-api" ) "configDbSection" "database" "configDbKey" "connection" -}} 
+{{- $dbDropJob := dict "envAll" . "serviceName" $serviceName "dbToDrop" $dbToDrop -}} +{{ $dbDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" }} {{- end }} diff --git a/glance/templates/service-ingress-api.yaml b/glance/templates/service-ingress-api.yaml index a36b45bca7..c865684840 100644 --- a/glance/templates/service-ingress-api.yaml +++ b/glance/templates/service-ingress-api.yaml @@ -14,19 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.service_ingress_api }} -{{- $envAll := . }} -{{- if .Values.network.api.ingress.public }} ---- -apiVersion: v1 -kind: Service -metadata: - name: {{ tuple "image" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - name: http - port: 80 - selector: - app: ingress-api -{{- end }} +{{- if and .Values.manifests.service_ingress_api .Values.network.api.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . "backendServiceType" "image" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/glance/templates/service-ingress-registry.yaml b/glance/templates/service-ingress-registry.yaml index 9f56e2ce14..d614c5cf7d 100644 --- a/glance/templates/service-ingress-registry.yaml +++ b/glance/templates/service-ingress-registry.yaml @@ -14,19 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.service_ingress_registry }} -{{- $envAll := . }} -{{- if .Values.network.registry.ingress.public }} ---- -apiVersion: v1 -kind: Service -metadata: - name: {{ tuple "image_registry" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - name: http - port: 80 - selector: - app: ingress-api -{{- end }} +{{- if and .Values.manifests.service_ingress_registry .Values.network.registry.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . "backendServiceType" "image_registry" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/glance/values.yaml b/glance/values.yaml index 75f195fb3a..e842a13bdb 100644 --- a/glance/values.yaml +++ b/glance/values.yaml @@ -51,7 +51,7 @@ images: glance_registry: docker.io/openstackhelm/glance:newton # Bootstrap image requires curl bootstrap: docker.io/openstackhelm/heat:newton - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: "IfNotPresent" bootstrap: @@ -158,8 +158,8 @@ conf: paste.filter_factory: glance.api.middleware.gzip:GzipMiddleware.factory filter:osprofiler: paste.filter_factory: osprofiler.web:WsgiMiddleware.factory - hmac_keys: SECRET_KEY #DEPRECATED - enabled: yes #DEPRECATED + hmac_keys: SECRET_KEY # DEPRECATED + enabled: yes # DEPRECATED filter:cors: paste.filter_factory: oslo_middleware.cors:filter_factory oslo_config_project: glance @@ -215,9 +215,10 @@ conf: add_metadef_tags: '' glance: DEFAULT: - #NOTE(portdirect): the bind port should not be defined, and is manipulated + # NOTE(portdirect): the bind port should not be defined, and is manipulated # via the endpoints section. 
bind_port: null + workers: 1 keystone_authtoken: auth_type: password auth_version: v3 @@ -259,13 +260,14 @@ conf: paste.filter_factory: keystonemiddleware.auth_token:filter_factory filter:osprofiler: paste.filter_factory: osprofiler.web:WsgiMiddleware.factory - hmac_keys: SECRET_KEY #DEPRECATED - enabled: yes #DEPRECATED + hmac_keys: SECRET_KEY # DEPRECATED + enabled: yes # DEPRECATED glance_registry: DEFAULT: - #NOTE(portdirect): the bind port should not be defined, and is manipulated + # NOTE(portdirect): the bind port should not be defined, and is manipulated # via the endpoints section. bind_port: null + workers: 1 keystone_authtoken: auth_type: password auth_version: v3 @@ -276,16 +278,30 @@ conf: max_retries: -1 oslo_messaging_notifications: driver: messagingv2 - swift_store: - override: - append: + swift_store: | + [{{ .Values.conf.glance.glance_store.default_swift_reference }}] + {{- if eq .Values.storage "radosgw" }} + auth_version = 1 + auth_address = {{ tuple "ceph_object_store" "public" "api" . | include "helm-toolkit.endpoints.keystone_endpoint_uri_lookup" }} + user = {{ .Values.endpoints.ceph_object_store.auth.glance.username }}:swift + key = {{ .Values.endpoints.ceph_object_store.auth.glance.password }} + {{- else }} + user = {{ .Values.endpoints.identity.auth.glance.project_name }}:{{ .Values.endpoints.identity.auth.glance.username }} + key = {{ .Values.endpoints.identity.auth.glance.password }} + auth_address = {{ tuple "identity" "internal" "api" . | include "helm-toolkit.endpoints.keystone_endpoint_uri_lookup" }} + user_domain_name = {{ .Values.endpoints.identity.auth.glance.user_domain_name }} + project_domain_name = {{ .Values.endpoints.identity.auth.glance.project_domain_name }} + auth_version = 3 + {{- end -}} network: api: ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/proxy-body-size: "1024M" external_policy_local: false @@ -295,8 +311,10 @@ network: registry: ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / external_policy_local: false node_port: @@ -320,6 +338,8 @@ dependencies: service: oslo_db - endpoint: internal service: identity + - endpoint: internal + service: oslo_messaging bootstrap: jobs: - glance-storage-init @@ -363,8 +383,8 @@ dependencies: service: identity rabbit_init: services: - - service: oslo_messaging - endpoint: internal + - service: oslo_messaging + endpoint: internal registry: jobs: - glance-storage-init @@ -404,7 +424,7 @@ secrets: glance: glance-rabbitmq-user -# typically overriden by environmental +# typically overridden by environmental # values, but should include all endpoints # required by this chart endpoints: @@ -555,11 +575,11 @@ pod: glance: uid: 42424 affinity: - anti: - type: - default: preferredDuringSchedulingIgnoredDuringExecution - topologyKey: - default: kubernetes.io/hostname + anti: + type: + default: preferredDuringSchedulingIgnoredDuringExecution + topologyKey: + default: kubernetes.io/hostname mounts: glance_api: init_container: null diff --git a/gnocchi/templates/configmap-etc.yaml b/gnocchi/templates/configmap-etc.yaml index 72c136eddd..b09975b7bf 100644 --- a/gnocchi/templates/configmap-etc.yaml +++ b/gnocchi/templates/configmap-etc.yaml @@ -95,6 +95,5 @@ data: {{ include "helm-toolkit.utils.to_ini" .Values.conf.paste | indent 4 
}}
  policy.json: |
{{ toJson .Values.conf.policy | indent 4 }}
-  wsgi-gnocchi.conf: |
-{{- tuple .Values.conf.wsgi_gnocchi "etc/_wsgi-gnocchi.conf.tpl" . | include "helm-toolkit.utils.configmap_templater" }}
+{{- include "helm-toolkit.snippets.values_template_renderer" (dict "envAll" $envAll "template" .Values.conf.apache "key" "wsgi-gnocchi.conf") | indent 2 }}
 {{- end }}
diff --git a/gnocchi/templates/etc/_wsgi-gnocchi.conf.tpl b/gnocchi/templates/etc/_wsgi-gnocchi.conf.tpl
deleted file mode 100644
index dd45c05d06..0000000000
--- a/gnocchi/templates/etc/_wsgi-gnocchi.conf.tpl
+++ /dev/null
@@ -1,37 +0,0 @@
-{{/*
-Copyright 2017 The Openstack-Helm Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/}}
-
-Listen 0.0.0.0:{{ tuple "metric" "internal" "api" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }}
-
-SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded
-CustomLog /dev/stdout combined env=!forwarded
-CustomLog /dev/stdout proxy env=forwarded
-
-<VirtualHost *:{{ tuple "metric" "internal" "api" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }}>
-    WSGIDaemonProcess gnocchi processes=1 threads=2 user=gnocchi group=gnocchi display-name=%{GROUP}
-    WSGIProcessGroup gnocchi
-    WSGIScriptAlias / "/var/lib/kolla/venv/lib/python2.7/site-packages/gnocchi/rest/app.wsgi"
-    WSGIApplicationGroup %{GLOBAL}
-
-    ErrorLog /dev/stderr
-    SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded
-    CustomLog /dev/stdout combined env=!forwarded
-    CustomLog /dev/stdout proxy env=forwarded
-
-    <Directory "/var/lib/kolla/venv/lib/python2.7/site-packages/gnocchi/rest">
-        Require all granted
-    </Directory>
-</VirtualHost>
diff --git a/gnocchi/templates/job-clean.yaml b/gnocchi/templates/job-clean.yaml
index 1003d796a1..2bfa74c68a 100644
--- a/gnocchi/templates/job-clean.yaml
+++ b/gnocchi/templates/job-clean.yaml
@@ -27,9 +27,6 @@ apiVersion: rbac.authorization.k8s.io/v1beta1
 kind: Role
 metadata:
   name: {{ $serviceAccountName }}
-  annotations:
-    "helm.sh/hook": pre-delete
-    "helm.sh/hook-delete-policy": hook-succeeded
 rules:
   - apiGroups:
       - ""
@@ -43,9 +40,6 @@ apiVersion: rbac.authorization.k8s.io/v1beta1
 kind: RoleBinding
 metadata:
   name: {{ $serviceAccountName }}
-  annotations:
-    "helm.sh/hook": pre-delete
-    "helm.sh/hook-delete-policy": hook-succeeded
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: Role
diff --git a/gnocchi/templates/job-db-drop.yaml b/gnocchi/templates/job-db-drop.yaml
new file mode 100644
index 0000000000..ac2b6562a3
--- /dev/null
+++ b/gnocchi/templates/job-db-drop.yaml
@@ -0,0 +1,20 @@
+{{/*
+Copyright 2017 The Openstack-Helm Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/}}
+
+{{- if .Values.manifests.job_db_drop }}
+{{- $dbDropJob := dict "envAll" .
"serviceName" "gnocchi" -}} +{{ $dbDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" }} +{{- end }} diff --git a/gnocchi/templates/service-ingress-api.yaml b/gnocchi/templates/service-ingress-api.yaml index 690ef97811..269a681f49 100644 --- a/gnocchi/templates/service-ingress-api.yaml +++ b/gnocchi/templates/service-ingress-api.yaml @@ -14,19 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.service_ingress_api }} -{{- $envAll := . }} -{{- if .Values.network.api.ingress.public }} ---- -apiVersion: v1 -kind: Service -metadata: - name: {{ tuple "metric" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - name: http - port: 80 - selector: - app: ingress-api -{{- end }} +{{- if and .Values.manifests.service_ingress_api .Values.network.api.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . "backendServiceType" "metric" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/gnocchi/values.yaml b/gnocchi/values.yaml index 6d6f7293ba..3eb17ae631 100644 --- a/gnocchi/values.yaml +++ b/gnocchi/values.yaml @@ -21,7 +21,7 @@ labels: images: tags: - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 gnocchi_storage_init: docker.io/ceph/daemon:tag-build-master-luminous-ubuntu-16.04 db_init_indexer: docker.io/postgres:9.5 # using non-kolla images until kolla supports postgres as @@ -40,8 +40,10 @@ network: api: ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / external_policy_local: false node_port: @@ -261,6 +263,28 @@ pod: cpu: "2000m" conf: + apache: | + Listen 0.0.0.0:{{ tuple "metric" "internal" "api" . 
| include "helm-toolkit.endpoints.endpoint_port_lookup" }} + + SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded + CustomLog /dev/stdout combined env=!forwarded + CustomLog /dev/stdout proxy env=forwarded + + + WSGIDaemonProcess gnocchi processes=1 threads=2 user=gnocchi group=gnocchi display-name=%{GROUP} + WSGIProcessGroup gnocchi + WSGIScriptAlias / "/var/lib/kolla/venv/lib/python2.7/site-packages/gnocchi/rest/app.wsgi" + WSGIApplicationGroup %{GLOBAL} + + ErrorLog /dev/stderr + SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded + CustomLog /dev/stdout combined env=!forwarded + CustomLog /dev/stdout proxy env=forwarded + + + Require all granted + + ceph: monitors: [] admin_keyring: null @@ -375,7 +399,7 @@ bootstrap: script: | openstack token issue -# typically overriden by environmental +# typically overridden by environmental # values, but should include all endpoints # required by this chart endpoints: @@ -503,6 +527,7 @@ manifests: ingress_api: true job_bootstrap: true job_clean: true + job_db_drop: false job_db_init_indexer: true job_db_init: true secret_db_indexer: true diff --git a/heat/templates/bin/_trusts.sh.tpl b/heat/templates/bin/_trusts.sh.tpl index 0e1e2e5ea1..bef874dcf2 100644 --- a/heat/templates/bin/_trusts.sh.tpl +++ b/heat/templates/bin/_trusts.sh.tpl @@ -19,7 +19,7 @@ set -ex # Get IDs for filtering OS_PROJECT_ID=$(openstack project show -f value -c id ${OS_PROJECT_NAME}) OS_USER_ID=$(openstack user show -f value -c id ${OS_USERNAME}) -SERVICE_OS_TRUSTEE_ID=$(openstack user show -f value -c id ${SERVICE_OS_TRUSTEE}) +SERVICE_OS_TRUSTEE_ID=$(openstack user show -f value -c id --domain ${SERVICE_OS_TRUSTEE_DOMAIN} ${SERVICE_OS_TRUSTEE}) # Check if trust doesn't already exist openstack trust list -f value -c "Project ID" \ @@ -42,6 +42,7 @@ fi SERVICE_OS_TRUST_ID=$(openstack trust create -f value -c id \ --project="${OS_PROJECT_NAME}" \ ${roles[@]/#/--role=} \ + --trustee-domain="${SERVICE_OS_TRUSTEE_DOMAIN}" \ "${OS_USERNAME}" \ "${SERVICE_OS_TRUSTEE}") diff --git a/heat/templates/job-db-drop.yaml b/heat/templates/job-db-drop.yaml index 20ee0df34b..36020b3337 100644 --- a/heat/templates/job-db-drop.yaml +++ b/heat/templates/job-db-drop.yaml @@ -15,72 +15,6 @@ limitations under the License. */}} {{- if .Values.manifests.job_db_drop }} -{{- $envAll := . 
}} -{{- $dependencies := .Values.dependencies.static.db_drop }} - -{{- $randStringSuffix := randAlphaNum 5 | lower }} - -{{- $serviceAccountName := print "heat-db-drop-" $randStringSuffix }} -{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: {{ print "heat-db-drop-" $randStringSuffix }} - annotations: - "helm.sh/hook": pre-delete - "helm.sh/hook-delete-policy": hook-succeeded -spec: - template: - metadata: - labels: -{{ tuple $envAll "heat" "db-drop" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }} - spec: - serviceAccountName: {{ $serviceAccountName }} - restartPolicy: OnFailure - nodeSelector: - {{ .Values.labels.job.node_selector_key }}: {{ .Values.labels.job.node_selector_value }} - initContainers: -{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }} - containers: - - name: heat-db-drop - image: {{ .Values.images.tags.db_drop }} - imagePullPolicy: {{ .Values.images.pull_policy }} -{{ tuple $envAll $envAll.Values.pod.resources.jobs.db_drop | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} - env: - - name: ROOT_DB_CONNECTION - valueFrom: - secretKeyRef: - name: {{ .Values.secrets.oslo_db.admin }} - key: DB_CONNECTION - - name: OPENSTACK_CONFIG_FILE - value: /etc/heat/heat.conf - - name: OPENSTACK_CONFIG_DB_SECTION - value: database - - name: OPENSTACK_CONFIG_DB_KEY - value: connection - command: - - /tmp/db-drop.py - volumeMounts: - - name: heat-bin - mountPath: /tmp/db-drop.py - subPath: db-drop.py - readOnly: true - - name: etcheat - mountPath: /etc/heat - - name: heat-etc - mountPath: /etc/heat/heat.conf - subPath: heat.conf - readOnly: true - volumes: - - name: etcheat - emptyDir: {} - - name: heat-etc - configMap: - name: heat-etc - defaultMode: 0444 - - name: heat-bin - configMap: - name: heat-bin - defaultMode: 0555 +{{- $dbDropJob := dict "envAll" . "serviceName" "heat" -}} +{{ $dbDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" }} {{- end }} diff --git a/heat/templates/job-trusts.yaml b/heat/templates/job-trusts.yaml index 46b09f74b0..95b627670b 100644 --- a/heat/templates/job-trusts.yaml +++ b/heat/templates/job-trusts.yaml @@ -61,6 +61,8 @@ spec: value: {{ .Values.conf.heat.DEFAULT.trusts_delegated_roles }} - name: SERVICE_OS_TRUSTEE value: {{ .Values.endpoints.identity.auth.heat_trustee.username }} + - name: SERVICE_OS_TRUSTEE_DOMAIN + value: {{ .Values.endpoints.identity.auth.heat_trustee.user_domain_name }} volumes: - name: heat-bin configMap: diff --git a/heat/templates/service-ingress-api.yaml b/heat/templates/service-ingress-api.yaml index 6aedca4657..36da627fad 100644 --- a/heat/templates/service-ingress-api.yaml +++ b/heat/templates/service-ingress-api.yaml @@ -14,19 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.service_ingress_api }} -{{- $envAll := . }} -{{- if .Values.network.api.ingress.public }} ---- -apiVersion: v1 -kind: Service -metadata: - name: {{ tuple "orchestration" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - name: http - port: 80 - selector: - app: ingress-api -{{- end }} +{{- if and .Values.manifests.service_ingress_api .Values.network.api.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . 
"backendServiceType" "orchestration" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/heat/templates/service-ingress-cfn.yaml b/heat/templates/service-ingress-cfn.yaml index d0ef649fcb..437d38ba64 100644 --- a/heat/templates/service-ingress-cfn.yaml +++ b/heat/templates/service-ingress-cfn.yaml @@ -14,19 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.service_ingress_cfn }} -{{- $envAll := . }} -{{- if .Values.network.cfn.ingress.public }} ---- -apiVersion: v1 -kind: Service -metadata: - name: {{ tuple "cloudformation" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - name: http - port: 80 - selector: - app: ingress-api -{{- end }} +{{- if and .Values.manifests.service_ingress_cfn .Values.network.cfn.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . "backendServiceType" "cloudformation" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/heat/templates/service-ingress-cloudwatch.yaml b/heat/templates/service-ingress-cloudwatch.yaml index af402ead0e..4134008a36 100644 --- a/heat/templates/service-ingress-cloudwatch.yaml +++ b/heat/templates/service-ingress-cloudwatch.yaml @@ -14,19 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.service_ingress_cloudwatch }} -{{- $envAll := . }} -{{- if .Values.network.cloudwatch.ingress.public }} ---- -apiVersion: v1 -kind: Service -metadata: - name: {{ tuple "cloudwatch" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - name: http - port: 80 - selector: - app: ingress-api -{{- end }} +{{- if and .Values.manifests.service_ingress_cloudwatch .Values.network.cloudwatch.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . "backendServiceType" "cloudwatch" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/heat/values.yaml b/heat/values.yaml index 9f24ad9b5a..f1dba5f2cc 100644 --- a/heat/values.yaml +++ b/heat/values.yaml @@ -51,7 +51,7 @@ images: heat_cloudwatch: docker.io/openstackhelm/heat:newton heat_engine: docker.io/openstackhelm/heat:newton heat_engine_cleaner: docker.io/openstackhelm/heat:newton - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 opencontrail_heat_init: pull_policy: "IfNotPresent" @@ -211,7 +211,7 @@ conf: resource_types:OS::Cinder::QoSSpecs: rule:project_admin heat: DEFAULT: - num_engine_workers: 4 + num_engine_workers: 1 trusts_delegated_roles: "" host: heat-engine keystone_authtoken: @@ -227,17 +227,17 @@ conf: #NOTE(portdirect): the bind port should not be defined, and is manipulated # via the endpoints section. bind_port: null - workers: 4 + workers: 1 heat_api_cloudwatch: #NOTE(portdirect): the bind port should not be defined, and is manipulated # via the endpoints section. bind_port: null - workers: 4 + workers: 1 heat_api_cfn: #NOTE(portdirect): the bind port should not be defined, and is manipulated # via the endpoints section. 
bind_port: null - workers: 4 + workers: 1 paste_deploy: api_paste_config: /etc/heat/api-paste.ini clients: @@ -251,8 +251,10 @@ network: api: ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / external_policy_local: false node_port: @@ -261,8 +263,10 @@ network: cfn: ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / node_port: enabled: false @@ -270,18 +274,27 @@ network: cloudwatch: ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / node_port: enabled: false port: 30003 bootstrap: - enabled: false - ks_user: heat + enabled: true + ks_user: admin script: | - openstack token issue + #NOTE(portdirect): required for all users who operate heat stacks + openstack role create --or-show heat_stack_owner + + #NOTE(portdirect): The Orchestration service automatically assigns the + # 'heat_stack_user' role to users that it creates during stack deployment. + # By default, this role restricts API operations. To avoid conflicts, do + # not add this role to users with the heat_stack_owner role. + openstack role create --or-show heat_stack_user dependencies: static: @@ -375,8 +388,8 @@ dependencies: service: identity rabbit_init: services: - - service: oslo_messaging - endpoint: internal + - service: oslo_messaging + endpoint: internal trusts: jobs: - heat-ks-user @@ -400,7 +413,7 @@ secrets: admin: heat-rabbitmq-admin heat: heat-rabbitmq-user -# typically overriden by environmental +# typically overridden by environmental # values, but should include all endpoints # required by this chart endpoints: @@ -416,7 +429,9 @@ endpoints: user_domain_name: default project_domain_name: default heat: - role: admin + role: + - admin + - heat_stack_owner region_name: RegionOne username: heat password: password @@ -555,11 +570,11 @@ pod: heat: uid: 42424 affinity: - anti: - type: - default: preferredDuringSchedulingIgnoredDuringExecution - topologyKey: - default: kubernetes.io/hostname + anti: + type: + default: preferredDuringSchedulingIgnoredDuringExecution + topologyKey: + default: kubernetes.io/hostname mounts: heat_api: init_container: null diff --git a/helm-toolkit/.gitignore b/helm-toolkit/.gitignore index e1bd7e85af..f5f3a91ab3 100644 --- a/helm-toolkit/.gitignore +++ b/helm-toolkit/.gitignore @@ -1,3 +1,3 @@ secrets/* -!secrets/.gitkeep +!secrets/.gitkeep templates/_secrets.tpl diff --git a/helm-toolkit/templates/endpoints/_hostname_short_endpoint_lookup.tpl b/helm-toolkit/templates/endpoints/_hostname_short_endpoint_lookup.tpl index cc1fe8af84..6fc17c314e 100644 --- a/helm-toolkit/templates/endpoints/_hostname_short_endpoint_lookup.tpl +++ b/helm-toolkit/templates/endpoints/_hostname_short_endpoint_lookup.tpl @@ -29,7 +29,11 @@ limitations under the License. 
 {{- with $endpointMap -}}
 {{- $endpointScheme := .scheme }}
 {{- $endpointHost := index .hosts $endpoint | default .hosts.default}}
+{{- if regexMatch "[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+" $endpointHost }}
+{{- printf "%s" $typeYamlSafe -}}
+{{- else }}
 {{- $endpointHostname := printf "%s" $endpointHost }}
 {{- printf "%s" $endpointHostname -}}
+{{- end }}
 {{- end -}}
 {{- end -}}
diff --git a/helm-toolkit/templates/endpoints/_keystone_endpoint_scheme_lookup.tpl b/helm-toolkit/templates/endpoints/_keystone_endpoint_scheme_lookup.tpl
new file mode 100644
index 0000000000..150a5446bd
--- /dev/null
+++ b/helm-toolkit/templates/endpoints/_keystone_endpoint_scheme_lookup.tpl
@@ -0,0 +1,34 @@
+{{/*
+Copyright 2017 The Openstack-Helm Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/}}
+
+# This function returns the scheme for a service. It takes a tuple
+# input in the form: service-type, endpoint-class, port-name. eg:
+# { tuple "etcd" "internal" "client" . | include "helm-toolkit.endpoints.keystone_endpoint_scheme_lookup" }
+# will return the scheme setting for this particular endpoint. In other words, for most endpoints
+# it will return either 'http' or 'https'.
+
+{{- define "helm-toolkit.endpoints.keystone_endpoint_scheme_lookup" -}}
+{{- $type := index . 0 -}}
+{{- $endpoint := index . 1 -}}
+{{- $port := index . 2 -}}
+{{- $context := index . 3 -}}
+{{- $typeYamlSafe := $type | replace "-" "_" }}
+{{- $endpointMap := index $context.Values.endpoints $typeYamlSafe }}
+{{- with $endpointMap -}}
+{{- $endpointScheme := index .scheme $endpoint | default .scheme.default | default "http" }}
+{{- printf "%s" $endpointScheme -}}
+{{- end -}}
+{{- end -}}
diff --git a/helm-toolkit/templates/endpoints/_keystone_endpoint_uri_lookup.tpl b/helm-toolkit/templates/endpoints/_keystone_endpoint_uri_lookup.tpl
index 25837d1682..8c13651ef7 100644
--- a/helm-toolkit/templates/endpoints/_keystone_endpoint_uri_lookup.tpl
+++ b/helm-toolkit/templates/endpoints/_keystone_endpoint_uri_lookup.tpl
@@ -35,7 +35,11 @@ limitations under the License.
 {{- $endpointPort := index $endpointPortMAP $endpoint | default (index $endpointPortMAP "default") }}
 {{- $endpointPath := index .path $endpoint | default .path.default | default "/" }}
 {{- $endpointClusterHostname := printf "%s.%s.%s" $endpointHost $namespace $clusterSuffix }}
-{{- $endpointHostname := index .host_fqdn_override $endpoint | default .host_fqdn_override.default | default $endpointClusterHostname }}
-{{- printf "%s://%s:%1.f%s" $endpointScheme $endpointHostname $endpointPort $endpointPath -}}
+{{- if regexMatch "[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+" $endpointHost }}
+{{- printf "%s://%s:%1.f%s" $endpointScheme $endpointHost $endpointPort $endpointPath -}}
+{{- else -}}
+{{- $endpointFqdnHostname := index .host_fqdn_override $endpoint | default .host_fqdn_override.default | default $endpointClusterHostname }}
+{{- printf "%s://%s:%1.f%s" $endpointScheme $endpointFqdnHostname $endpointPort $endpointPath -}}
+{{- end -}}
 {{- end -}}
 {{- end -}}
diff --git a/helm-toolkit/templates/endpoints/_service_name_endpoint_with_namespace_lookup.tpl b/helm-toolkit/templates/endpoints/_service_name_endpoint_with_namespace_lookup.tpl
index c4a82a60a9..a3c2f496a3 100644
--- a/helm-toolkit/templates/endpoints/_service_name_endpoint_with_namespace_lookup.tpl
+++ b/helm-toolkit/templates/endpoints/_service_name_endpoint_with_namespace_lookup.tpl
@@ -18,6 +18,12 @@ limitations under the License.
 # definition. This is used in kubernetes-entrypoint to support dependencies
 # between different services in different namespaces.
 # returns: the endpoint namespace and the service name, delimited by a colon
+#
+# Normally, the service name is constructed dynamically from the hostname;
+# however, when an IP address is used as the hostname, we default to
+# namespace:endpointCategoryName in order to construct a valid service name.
+# This can be overridden to a custom service name by defining
+# .service.name within the endpoint definition.
 
 {{- define "helm-toolkit.endpoints.service_name_endpoint_with_namespace_lookup" -}}
 {{- $type := index . 0 -}}
@@ -29,6 +35,14 @@ limitations under the License.
 {{- $endpointScheme := .scheme }}
 {{- $endpointName := index .hosts $endpoint | default .hosts.default}}
 {{- $endpointNamespace := .namespace | default $context.Release.Namespace }}
+{{- if regexMatch "[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+" $endpointName }}
+{{- if .service.name }}
+{{- printf "%s:%s" $endpointNamespace .service.name -}}
+{{- else -}}
+{{- printf "%s:%s" $endpointNamespace $typeYamlSafe -}}
+{{- end -}}
+{{- else -}}
 {{- printf "%s:%s" $endpointNamespace $endpointName -}}
 {{- end -}}
 {{- end -}}
+{{- end -}}
diff --git a/helm-toolkit/templates/manifests/_ingress.yaml.tpl b/helm-toolkit/templates/manifests/_ingress.yaml.tpl
index 430c561307..09ca8515f7 100644
--- a/helm-toolkit/templates/manifests/_ingress.yaml.tpl
+++ b/helm-toolkit/templates/manifests/_ingress.yaml.tpl
@@ -19,6 +19,19 @@ limitations under the License.
 # {- $ingressOpts := dict "envAll" . "backendServiceType" "key-manager" -}
 # { $ingressOpts | include "helm-toolkit.manifests.ingress" }
 
+{{- define "helm-toolkit.manifests.ingress._host_rules" -}}
+{{- $vHost := index . "vHost" -}}
+{{- $backendName := index . "backendName" -}}
+{{- $backendPort := index . "backendPort" -}}
+- host: {{ $vHost }}
+  http:
+    paths:
+      - path: /
+        backend:
+          serviceName: {{ $backendName }}
+          servicePort: {{ $backendPort }}
+{{- end }}
+
 {{- define "helm-toolkit.manifests.ingress" -}}
 {{- $envAll := index . "envAll" -}}
 {{- $backendService := index .
"backendService" | default "api" -}} @@ -27,7 +40,6 @@ limitations under the License. {{- $ingressName := tuple $backendServiceType "public" $envAll | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} {{- $backendName := tuple $backendServiceType "internal" $envAll | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} {{- $hostName := tuple $backendServiceType "public" $envAll | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -{{- $hostNameNamespaced := tuple $backendServiceType "public" $envAll | include "helm-toolkit.endpoints.hostname_namespaced_endpoint_lookup" }} {{- $hostNameFull := tuple $backendServiceType "public" $envAll | include "helm-toolkit.endpoints.hostname_fqdn_endpoint_lookup" }} --- apiVersion: extensions/v1beta1 @@ -35,29 +47,27 @@ kind: Ingress metadata: name: {{ $ingressName }} annotations: + kubernetes.io/ingress.class: {{ index $envAll.Values.network $backendService "ingress" "classes" "namespace" | quote }} {{ toYaml (index $envAll.Values.network $backendService "ingress" "annotations") | indent 4 }} spec: rules: -{{ if ne $hostNameNamespaced $hostNameFull }} -{{- range $key1, $vHost := tuple $hostName $hostNameNamespaced $hostNameFull }} - - host: {{ $vHost }} - http: - paths: - - path: / - backend: - serviceName: {{ $backendName }} - servicePort: {{ $backendPort }} -{{- end }} -{{- else }} -{{- range $key1, $vHost := tuple $hostName $hostNameNamespaced }} - - host: {{ $vHost }} - http: - paths: - - path: / - backend: - serviceName: {{ $backendName }} - servicePort: {{ $backendPort }} +{{- range $key1, $vHost := tuple $hostName (printf "%s.%s" $hostName $envAll.Release.Namespace) (printf "%s.%s.svc.%s" $hostName $envAll.Release.Namespace $envAll.Values.endpoints.cluster_domain_suffix)}} +{{- $hostRules := dict "vHost" $vHost "backendName" $backendName "backendPort" $backendPort }} +{{ $hostRules | include "helm-toolkit.manifests.ingress._host_rules" | indent 4}} {{- end }} +{{- if not ( hasSuffix ( printf ".%s.svc.%s" $envAll.Release.Namespace $envAll.Values.endpoints.cluster_domain_suffix) $hostNameFull) }} +{{- $hostNameFullRules := dict "vHost" $hostNameFull "backendName" $backendName "backendPort" $backendPort }} +{{ $hostNameFullRules | include "helm-toolkit.manifests.ingress._host_rules" | indent 4}} +--- +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + name: {{ printf "%s-%s" $ingressName "fqdn" }} + annotations: + kubernetes.io/ingress.class: {{ index $envAll.Values.network $backendService "ingress" "classes" "cluster" | quote }} +{{ toYaml (index $envAll.Values.network $backendService "ingress" "annotations") | indent 4 }} +spec: + rules: +{{ $hostNameFullRules | include "helm-toolkit.manifests.ingress._host_rules" | indent 4}} {{- end }} - {{- end }} diff --git a/helm-toolkit/templates/manifests/_job-bootstrap.yaml b/helm-toolkit/templates/manifests/_job-bootstrap.yaml index d820f30457..754ff217af 100644 --- a/helm-toolkit/templates/manifests/_job-bootstrap.yaml +++ b/helm-toolkit/templates/manifests/_job-bootstrap.yaml @@ -30,6 +30,7 @@ limitations under the License. {{- $configMapEtc := index . "configMapEtc" | default (printf "%s-%s" $serviceName "etc" ) -}} {{- $configFile := index . "configFile" | default (printf "/etc/%s/%s.conf" $serviceName $serviceName ) -}} {{- $keystoneUser := index . "keystoneUser" | default $serviceName -}} +{{- $openrc := index . 
"openrc" | default "true" -}} {{- $serviceNamePretty := $serviceName | replace "_" "-" -}} @@ -57,9 +58,11 @@ spec: image: {{ $envAll.Values.images.tags.bootstrap }} imagePullPolicy: {{ $envAll.Values.images.pull_policy }} {{ tuple $envAll $envAll.Values.pod.resources.jobs.bootstrap | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} +{{- if eq $openrc "true" }} env: {{- with $env := dict "ksUserSecret" ( index $envAll.Values.secrets.identity $keystoneUser ) }} {{- include "helm-toolkit.snippets.keystone_openrc_env_vars" $env | indent 12 }} +{{- end }} {{- end }} command: - /tmp/bootstrap.sh diff --git a/helm-toolkit/templates/manifests/_job-db-drop-mysql.yaml.tpl b/helm-toolkit/templates/manifests/_job-db-drop-mysql.yaml.tpl new file mode 100644 index 0000000000..753ff8bd23 --- /dev/null +++ b/helm-toolkit/templates/manifests/_job-db-drop-mysql.yaml.tpl @@ -0,0 +1,123 @@ +{{/* +Copyright 2017 The Openstack-Helm Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} + +# This function creates a manifest for db creation and user management. +# It can be used in charts dict created similar to the following: +# {- $dbToDropJob := dict "envAll" . "serviceName" "senlin" -} +# { $dbToDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" } +# +# If the service does not use olso then the db can be managed with: +# {- $dbToDrop := dict "inputType" "secret" "adminSecret" .Values.secrets.oslo_db.admin "userSecret" .Values.secrets.oslo_db.horizon -} +# {- $dbToDropJob := dict "envAll" . "serviceName" "horizon" "dbToDrop" $dbToDrop -} +# { $dbToDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" } + +{{- define "helm-toolkit.manifests.job_db_drop_mysql" -}} +{{- $envAll := index . "envAll" -}} +{{- $serviceName := index . "serviceName" -}} +{{- $nodeSelector := index . "nodeSelector" | default ( dict $envAll.Values.labels.job.node_selector_key $envAll.Values.labels.job.node_selector_value ) -}} +{{- $dependencies := index . "dependencies" | default $envAll.Values.dependencies.static.db_drop -}} +{{- $configMapBin := index . "configMapBin" | default (printf "%s-%s" $serviceName "bin" ) -}} +{{- $configMapEtc := index . "configMapEtc" | default (printf "%s-%s" $serviceName "etc" ) -}} +{{- $dbToDrop := index . "dbToDrop" | default ( dict "adminSecret" $envAll.Values.secrets.oslo_db.admin "configFile" (printf "/etc/%s/%s.conf" $serviceName $serviceName ) "configDbSection" "database" "configDbKey" "connection" ) -}} +{{- $dbsToDrop := default (list $dbToDrop) (index . 
"dbsToDrop") }} + +{{- $serviceNamePretty := $serviceName | replace "_" "-" -}} + +{{- $serviceAccountName := printf "%s-%s" $serviceNamePretty "db-drop" }} +{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ printf "%s-%s" $serviceNamePretty "db-drop" | quote }} + annotations: + "helm.sh/hook": pre-delete + "helm.sh/hook-delete-policy": hook-succeeded +spec: + template: + metadata: + labels: +{{ tuple $envAll $serviceName "db-drop" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }} + spec: + serviceAccountName: {{ $serviceAccountName }} + restartPolicy: OnFailure + nodeSelector: +{{ toYaml $nodeSelector | indent 8 }} + initContainers: +{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }} + containers: +{{- range $key1, $dbToDrop := $dbsToDrop }} +{{ $dbToDropType := default "oslo" $dbToDrop.inputType }} + - name: {{ printf "%s-%s-%d" $serviceNamePretty "db-drop" $key1 | quote }} + image: {{ $envAll.Values.images.tags.db_drop }} + imagePullPolicy: {{ $envAll.Values.images.pull_policy }} +{{ tuple $envAll $envAll.Values.pod.resources.jobs.db_drop | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} + env: + - name: ROOT_DB_CONNECTION + valueFrom: + secretKeyRef: + name: {{ $dbToDrop.adminSecret | quote }} + key: DB_CONNECTION +{{- if eq $dbToDropType "oslo" }} + - name: OPENSTACK_CONFIG_FILE + value: {{ $dbToDrop.configFile | quote }} + - name: OPENSTACK_CONFIG_DB_SECTION + value: {{ $dbToDrop.configDbSection | quote }} + - name: OPENSTACK_CONFIG_DB_KEY + value: {{ $dbToDrop.configDbKey | quote }} +{{- end }} +{{- if eq $dbToDropType "secret" }} + - name: DB_CONNECTION + valueFrom: + secretKeyRef: + name: {{ $dbToDrop.userSecret | quote }} + key: DB_CONNECTION +{{- end }} + command: + - /tmp/db-drop.py + volumeMounts: + - name: db-drop-sh + mountPath: /tmp/db-drop.py + subPath: db-drop.py + readOnly: true +{{- if eq $dbToDropType "oslo" }} + - name: etc-service + mountPath: {{ dir $dbToDrop.configFile | quote }} + - name: db-drop-conf + mountPath: {{ $dbToDrop.configFile | quote }} + subPath: {{ base $dbToDrop.configFile | quote }} + readOnly: true +{{- end }} +{{- end }} + volumes: + - name: db-drop-sh + configMap: + name: {{ $configMapBin | quote }} + defaultMode: 0555 +{{- $local := dict "configMapBinFirst" true -}} +{{- range $key1, $dbToDrop := $dbsToDrop }} +{{- $dbToDropType := default "oslo" $dbToDrop.inputType }} +{{- if and (eq $dbToDropType "oslo") $local.configMapBinFirst }} +{{- $_ := set $local "configMapBinFirst" false }} + - name: etc-service + emptyDir: {} + - name: db-drop-conf + configMap: + name: {{ $configMapEtc | quote }} + defaultMode: 0444 +{{- end -}} +{{- end -}} +{{- end -}} diff --git a/helm-toolkit/templates/manifests/_job-ks-user.yaml.tpl b/helm-toolkit/templates/manifests/_job-ks-user.yaml.tpl index dce11eb213..c4908637cd 100644 --- a/helm-toolkit/templates/manifests/_job-ks-user.yaml.tpl +++ b/helm-toolkit/templates/manifests/_job-ks-user.yaml.tpl @@ -68,8 +68,13 @@ spec: {{- with $env := dict "ksUserSecret" (index $envAll.Values.secrets.identity $serviceUser ) }} {{- include "helm-toolkit.snippets.keystone_user_create_env_vars" $env | indent 12 }} {{- end }} - - name: SERVICE_OS_ROLE - value: {{ index $envAll.Values.endpoints.identity.auth $serviceUser "role" | quote }} + - name: SERVICE_OS_ROLES + {{- $serviceOsRoles := 
index $envAll.Values.endpoints.identity.auth $serviceUser "role" }}
+          {{- if kindIs "slice" $serviceOsRoles }}
+            value: {{ include "helm-toolkit.utils.joinListWithComma" $serviceOsRoles | quote }}
+          {{- else }}
+            value: {{ $serviceOsRoles | quote }}
+          {{- end }}
       volumes:
         - name: ks-user-sh
           configMap:
diff --git a/helm-toolkit/templates/manifests/_service-ingress.tpl b/helm-toolkit/templates/manifests/_service-ingress.tpl
new file mode 100644
index 0000000000..859b4b1161
--- /dev/null
+++ b/helm-toolkit/templates/manifests/_service-ingress.tpl
@@ -0,0 +1,43 @@
+{{/*
+Copyright 2017 The Openstack-Helm Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/}}
+
+# This function creates a manifest for a service's ingress rules.
+# It can be used in charts with a dict created similar to the following:
+# {- $serviceIngressOpts := dict "envAll" . "backendServiceType" "key-manager" -}
+# { $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }
+
+{{- define "helm-toolkit.manifests.service_ingress" -}}
+{{- $envAll := index . "envAll" -}}
+{{- $backendServiceType := index . "backendServiceType" -}}
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: {{ tuple $backendServiceType "public" $envAll | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }}
+spec:
+  ports:
+    - name: http
+      port: 80
+  selector:
+    app: ingress-api
+{{- if index $envAll.Values.endpoints $backendServiceType }}
+{{- if index $envAll.Values.endpoints $backendServiceType "ip" }}
+{{- if index $envAll.Values.endpoints $backendServiceType "ip" "ingress" }}
+  clusterIP: {{ (index $envAll.Values.endpoints $backendServiceType "ip" "ingress") }}
+{{- end }}
+{{- end }}
+{{- end }}
+{{- end }}
diff --git a/helm-toolkit/templates/scripts/_ks-user.sh.tpl b/helm-toolkit/templates/scripts/_ks-user.sh.tpl
index 1b61371bd2..72b81fc716 100644
--- a/helm-toolkit/templates/scripts/_ks-user.sh.tpl
+++ b/helm-toolkit/templates/scripts/_ks-user.sh.tpl
@@ -76,6 +76,10 @@ openstack user set --password="${SERVICE_OS_PASSWORD}" "${USER_ID}"
 openstack user show "${USER_ID}"
 
 function ks_assign_user_role () {
+  # Get user role
+  USER_ROLE_ID=$(openstack role create --or-show -f value -c id \
+    "${SERVICE_OS_ROLE}");
+
   # Manage user role assignment
   openstack role add \
     --user="${USER_ID}" \
@@ -92,9 +96,10 @@ function ks_assign_user_role () {
 }
 
 # Manage user service role
-export USER_ROLE_ID=$(openstack role create --or-show -f value -c id \
-  "${SERVICE_OS_ROLE}");
-ks_assign_user_role
+IFS=','
+for SERVICE_OS_ROLE in ${SERVICE_OS_ROLES}; do
+  ks_assign_user_role
+done
 
 # Manage user member role
 : ${MEMBER_OS_ROLE:="_member_"}
diff --git a/helm-toolkit/templates/snippets/_kubernetes_entrypoint_init_container.tpl b/helm-toolkit/templates/snippets/_kubernetes_entrypoint_init_container.tpl
index ed371eb318..441e293f84 100644
--- a/helm-toolkit/templates/snippets/_kubernetes_entrypoint_init_container.tpl
+++ b/helm-toolkit/templates/snippets/_kubernetes_entrypoint_init_container.tpl
@@ -34,6 +34,8 @@ limitations under the License.
fieldPath: metadata.namespace - name: INTERFACE_NAME value: eth0 + - name: PATH + value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/ - name: DEPENDENCY_SERVICE value: "{{ tuple $deps.services $envAll | include "helm-toolkit.utils.comma_joined_service_list" }}" - name: DEPENDENCY_JOBS @@ -42,6 +44,8 @@ limitations under the License. value: "{{ include "helm-toolkit.utils.joinListWithComma" $deps.daemonset }}" - name: DEPENDENCY_CONTAINER value: "{{ include "helm-toolkit.utils.joinListWithComma" $deps.container }}" + - name: DEPENDENCY_POD + value: {{ if $deps.pod }}{{ toJson $deps.pod | quote }}{{ else }}""{{ end }} - name: COMMAND value: "echo done" command: diff --git a/helm-toolkit/templates/snippets/_kubernetes_pod_rbac_roles.tpl b/helm-toolkit/templates/snippets/_kubernetes_pod_rbac_roles.tpl index 1284b36c96..f9f48ef7b6 100644 --- a/helm-toolkit/templates/snippets/_kubernetes_pod_rbac_roles.tpl +++ b/helm-toolkit/templates/snippets/_kubernetes_pod_rbac_roles.tpl @@ -57,7 +57,7 @@ rules: {{ if eq $v "jobs" }} - jobs {{- end -}} - {{ if or (eq $v "daemonsets") (eq $v "jobs") }} + {{ if or (eq $v "pods") (eq $v "daemonsets") (eq $v "jobs") }} - pods {{- end -}} {{ if eq $v "services" }} diff --git a/helm-toolkit/templates/snippets/_kubernetes_pod_rbac_serviceaccount.tpl b/helm-toolkit/templates/snippets/_kubernetes_pod_rbac_serviceaccount.tpl index 73bc903b9a..b96f099b91 100644 --- a/helm-toolkit/templates/snippets/_kubernetes_pod_rbac_serviceaccount.tpl +++ b/helm-toolkit/templates/snippets/_kubernetes_pod_rbac_serviceaccount.tpl @@ -40,6 +40,8 @@ metadata: {{- $_ := set $allNamespace $saNamespace (printf "%s%s" "jobs," ((index $allNamespace $saNamespace) | default "")) }} {{- else if and (eq $k "daemonset") $v }} {{- $_ := set $allNamespace $saNamespace (printf "%s%s" "daemonsets," ((index $allNamespace $saNamespace) | default "")) }} +{{- else if and (eq $k "pod") $v }} +{{- $_ := set $allNamespace $saNamespace (printf "%s%s" "pods," ((index $allNamespace $saNamespace) | default "")) }} {{- end -}} {{- end -}} {{- $_ := unset $allNamespace $randomKey }} diff --git a/helm-toolkit/templates/utils/_dependency_resolver.tpl b/helm-toolkit/templates/utils/_dependency_resolver.tpl index 45a74fe836..b1b3bd4e50 100644 --- a/helm-toolkit/templates/utils/_dependency_resolver.tpl +++ b/helm-toolkit/templates/utils/_dependency_resolver.tpl @@ -20,7 +20,15 @@ limitations under the License. {{- $dependencyKey := index . 
"dependencyKey" -}} {{- if $dependencyMixinParam -}} {{- $_ := set $envAll.Values "pod_dependency" dict -}} +{{- if kindIs "string" $dependencyMixinParam }} {{- $_ := include "helm-toolkit.utils.merge" (tuple $envAll.Values.pod_dependency ( index $envAll.Values.dependencies.static $dependencyKey ) ( index $envAll.Values.dependencies.dynamic.targeted $dependencyMixinParam $dependencyKey ) ) -}} +{{- else if kindIs "slice" $dependencyMixinParam }} +{{- range $k, $v := $dependencyMixinParam -}} +{{- if not $envAll.Values.__deps }}{{- $_ := set $envAll.Values "__deps" ( index $envAll.Values.dependencies.static $dependencyKey ) }}{{- end }} +{{- $_ := include "helm-toolkit.utils.merge" (tuple $envAll.Values.pod_dependency $envAll.Values.__deps ( index $envAll.Values.dependencies.dynamic.targeted $v $dependencyKey ) ) -}} +{{- $_ := set $envAll.Values "__deps" $envAll.Values.pod_dependency -}} +{{- end }} +{{- end }} {{- else -}} {{- $_ := set $envAll.Values "pod_dependency" ( index $envAll.Values.dependencies.static $dependencyKey ) -}} {{- end -}} diff --git a/helm-toolkit/templates/utils/_values_template_renderer.tpl b/helm-toolkit/templates/utils/_values_template_renderer.tpl new file mode 100644 index 0000000000..4cc5471ed9 --- /dev/null +++ b/helm-toolkit/templates/utils/_values_template_renderer.tpl @@ -0,0 +1,81 @@ +{{/* +Copyright 2018 The Openstack-Helm Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} + +{{/* +This function renders out configuration sections into a format suitable for +incorporation into a config-map. This allows various forms of input to be +rendered out as appropriate, as illustrated in the following example: + +With the input: + + conf: + some: + config_to_render: | + #We can use all of gotpl here: eg macros, ranges etc. + Listen 0.0.0.0:{{ tuple "dashboard" "internal" "web" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }} + config_to_complete: + #here we can fill out params, but things need to be valid yaml as input + '{{ .Release.Name }}': '{{ printf "%s-%s" .Release.Namespace "namespace" }}' + static_config: + #this is just passed though as yaml to the configmap + foo: bar + +And the template: + + {{- $envAll := . }} + --- + apiVersion: v1 + kind: ConfigMap + metadata: + name: application-etc + data: + {{- include "helm-toolkit.snippets.values_template_renderer" (dict "envAll" $envAll "template" .Values.conf.some.config_to_render "key" "config_to_render.conf") | indent 2 }} + {{- include "helm-toolkit.snippets.values_template_renderer" (dict "envAll" $envAll "template" .Values.conf.some.config_to_complete "key" "config_to_complete.yaml") | indent 2 }} + {{- include "helm-toolkit.snippets.values_template_renderer" (dict "envAll" $envAll "template" .Values.conf.some.static_config "key" "static_config.yaml") | indent 2 }} + +The rendered output will match: + + apiVersion: v1 + kind: ConfigMap + metadata: + name: application-etc + data: + config_to_render.conf: | + #We can use all of gotpl here: eg macros, ranges etc. 
+ Listen 0.0.0.0:80 + + config_to_complete.yaml: | + 'RELEASE-NAME': 'default-namespace' + + static_config.yaml: | + foo: bar + +*/}} + +{{- define "helm-toolkit.snippets.values_template_renderer" -}} +{{- $envAll := index . "envAll" -}} +{{- $template := index . "template" -}} +{{- $key := index . "key" -}} +{{- with $envAll -}} +{{- $templateRendered := tpl ( $template | toYaml ) . }} +{{- if hasPrefix "|\n" $templateRendered }} +{{ $key }}: {{ $templateRendered }} +{{- else }} +{{ $key }}: | +{{ $templateRendered | indent 2 }} +{{- end -}} +{{- end -}} +{{- end -}} diff --git a/horizon/templates/configmap-etc.yaml b/horizon/templates/configmap-etc.yaml index e45bb31d03..dc695a1094 100644 --- a/horizon/templates/configmap-etc.yaml +++ b/horizon/templates/configmap-etc.yaml @@ -22,22 +22,10 @@ kind: ConfigMap metadata: name: horizon-etc data: - horizon.conf: | -{{ tuple "etc/_horizon.conf.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} - local_settings: | -{{ tuple "etc/_local_settings.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} - ceilometer_policy.json: | -{{ toJson .Values.conf.ceilometer_policy | indent 4 }} - cinder_policy.json: | -{{ toJson .Values.conf.cinder_policy | indent 4 }} - glance_policy.json: | -{{ toJson .Values.conf.glance_policy | indent 4 }} - heat_policy.json: | -{{ toJson .Values.conf.heat_policy | indent 4 }} - keystone_policy.json: | -{{ toJson .Values.conf.keystone_policy | indent 4 }} - neutron_policy.json: | -{{ toJson .Values.conf.neutron_policy | indent 4 }} - nova_policy.json: | -{{ toJson .Values.conf.nova_policy | indent 4 }} +{{- include "helm-toolkit.snippets.values_template_renderer" (dict "envAll" $envAll "template" .Values.conf.horizon.apache "key" "horizon.conf") | indent 2 }} +{{- include "helm-toolkit.snippets.values_template_renderer" (dict "envAll" $envAll "template" .Values.conf.horizon.local_settings.template "key" "local_settings") | indent 2 }} +{{- range $key, $value := .Values.conf.horizon.policy }} + {{ printf "%s_policy.json" $key }}: | +{{ $value | toPrettyJson | indent 4 }} +{{- end }} {{- end }} diff --git a/horizon/templates/deployment.yaml b/horizon/templates/deployment.yaml index 3e25ac7977..0b4adb71b3 100644 --- a/horizon/templates/deployment.yaml +++ b/horizon/templates/deployment.yaml @@ -96,34 +96,13 @@ spec: mountPath: /etc/openstack-dashboard/local_settings subPath: local_settings readOnly: true + {{- range $key, $value := $envAll.Values.conf.horizon.policy }} + {{- $policyFile := printf "/etc/openstack-dashboard/%s_policy.json" $key }} - name: horizon-etc - mountPath: /etc/openstack-dashboard/ceilometer_policy.json - subPath: ceilometer_policy.json - readOnly: true - - name: horizon-etc - mountPath: /etc/openstack-dashboard/cinder_policy.json - subPath: cinder_policy.json - readOnly: true - - name: horizon-etc - mountPath: /etc/openstack-dashboard/glance_policy.json - subPath: glance_policy.json - readOnly: true - - name: horizon-etc - mountPath: /etc/openstack-dashboard/heat_policy.json - subPath: heat_policy.json - readOnly: true - - name: horizon-etc - mountPath: /etc/openstack-dashboard/keystone_policy.json - subPath: keystone_policy.json - readOnly: true - - name: horizon-etc - mountPath: /etc/openstack-dashboard/neutron_policy.json - subPath: neutron_policy.json - readOnly: true - - name: horizon-etc - mountPath: /etc/openstack-dashboard/nova_policy.json - subPath: nova_policy.json + mountPath: {{ $policyFile }} + subPath: {{ base $policyFile }} readOnly: true + {{- end }} {{ if 
$mounts_horizon.volumeMounts }}{{ toYaml $mounts_horizon.volumeMounts | indent 12 }}{{ end }}
       volumes:
         - name: wsgi-horizon
diff --git a/horizon/templates/etc/_horizon.conf.tpl b/horizon/templates/etc/_horizon.conf.tpl
deleted file mode 100644
index 184b9235e2..0000000000
--- a/horizon/templates/etc/_horizon.conf.tpl
+++ /dev/null
@@ -1,51 +0,0 @@
-{{/*
-Copyright 2017 The Openstack-Helm Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/}}
-
-Listen 0.0.0.0:{{ tuple "dashboard" "internal" "web" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }}
-
-LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
-LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" proxy
-
-SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded
-CustomLog /dev/stdout combined env=!forwarded
-CustomLog /dev/stdout proxy env=forwarded
-
-<VirtualHost *:{{ tuple "dashboard" "internal" "web" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }}>
-    WSGIScriptReloading On
-    WSGIDaemonProcess horizon-http processes=5 threads=1 user=horizon group=horizon display-name=%{GROUP} python-path=/var/lib/kolla/venv/lib/python2.7/site-packages
-    WSGIProcessGroup horizon-http
-    WSGIScriptAlias / /var/www/cgi-bin/horizon/django.wsgi
-    WSGIPassAuthorization On
-
-    <Location "/">
-        Require all granted
-    </Location>
-
-    Alias /static /var/www/html/horizon
-    <Location "/static">
-        SetHandler None
-    </Location>
-
-    <IfVersion >= 2.4>
-      ErrorLogFormat "%{cu}t %M"
-    </IfVersion>
-    ErrorLog /dev/stdout
-    TransferLog /dev/stdout
-
-    SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded
-    CustomLog /dev/stdout combined env=!forwarded
-    CustomLog /dev/stdout proxy env=forwarded
-</VirtualHost>
diff --git a/horizon/templates/etc/_local_settings.tpl b/horizon/templates/etc/_local_settings.tpl
deleted file mode 100644
index 7efbe3e548..0000000000
--- a/horizon/templates/etc/_local_settings.tpl
+++ /dev/null
@@ -1,688 +0,0 @@
-{{/*
-Copyright 2017 The Openstack-Helm Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/}}
-
-import os
-
-from django.utils.translation import ugettext_lazy as _
-
-from openstack_dashboard import exceptions
-
-DEBUG = {{ .Values.local_settings.debug }}
-TEMPLATE_DEBUG = DEBUG
-
-COMPRESS_OFFLINE = True
-COMPRESS_CSS_HASHING_METHOD = "hash"
-
-# WEBROOT is the location relative to Webserver root
-# should end with a slash.
-WEBROOT = '/'
-# LOGIN_URL = WEBROOT + 'auth/login/'
-# LOGOUT_URL = WEBROOT + 'auth/logout/'
-#
-# LOGIN_REDIRECT_URL can be used as an alternative for
-# HORIZON_CONFIG.user_home, if user_home is not set.
-# Do not set it to '/home/', as this will cause circular redirect loop
-# LOGIN_REDIRECT_URL = WEBROOT
-
-# Required for Django 1.5.
-# If horizon is running in production (DEBUG is False), set this -# with the list of host/domain names that the application can serve. -# For more information see: -# https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts -ALLOWED_HOSTS = ['*'] - -# Set SSL proxy settings: -# For Django 1.4+ pass this header from the proxy after terminating the SSL, -# and don't forget to strip it from the client's request. -# For more information see: -# https://docs.djangoproject.com/en/1.4/ref/settings/#secure-proxy-ssl-header -#SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https') -# https://docs.djangoproject.com/en/1.5/ref/settings/#secure-proxy-ssl-header -#SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') - -# If Horizon is being served through SSL, then uncomment the following two -# settings to better secure the cookies from security exploits -#CSRF_COOKIE_SECURE = True -#SESSION_COOKIE_SECURE = True - -# Overrides for OpenStack API versions. Use this setting to force the -# OpenStack dashboard to use a specific API version for a given service API. -# Versions specified here should be integers or floats, not strings. -# NOTE: The version should be formatted as it appears in the URL for the -# service API. For example, The identity service APIs have inconsistent -# use of the decimal point, so valid options would be 2.0 or 3. -#OPENSTACK_API_VERSIONS = { -# "data-processing": 1.1, -# "identity": 3, -# "volume": 2, -#} - -OPENSTACK_API_VERSIONS = { - "identity": 3, -} - -# Set this to True if running on multi-domain model. When this is enabled, it -# will require user to enter the Domain name in addition to username for login. -#OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = False - -# Overrides the default domain used when running on single-domain model -# with Keystone V3. All entities will be created in the default domain. -#OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default' - -# Set Console type: -# valid options are "AUTO"(default), "VNC", "SPICE", "RDP", "SERIAL" or None -# Set to None explicitly if you want to deactivate the console. -#CONSOLE_TYPE = "AUTO" - -# Default OpenStack Dashboard configuration. -HORIZON_CONFIG = { - 'user_home': 'openstack_dashboard.views.get_user_home', - 'ajax_queue_limit': 10, - 'auto_fade_alerts': { - 'delay': 3000, - 'fade_duration': 1500, - 'types': ['alert-success', 'alert-info'] - }, - 'help_url': "http://docs.openstack.org", - 'exceptions': {'recoverable': exceptions.RECOVERABLE, - 'not_found': exceptions.NOT_FOUND, - 'unauthorized': exceptions.UNAUTHORIZED}, - 'modal_backdrop': 'static', - 'angular_modules': [], - 'js_files': [], - 'js_spec_files': [], -} - -# Specify a regular expression to validate user passwords. -#HORIZON_CONFIG["password_validator"] = { -# "regex": '.*', -# "help_text": _("Your password does not meet the requirements."), -#} - -# Disable simplified floating IP address management for deployments with -# multiple floating IP pools or complex network requirements. -#HORIZON_CONFIG["simple_ip_management"] = False - -# Turn off browser autocompletion for forms including the login form and -# the database creation workflow if so desired. -#HORIZON_CONFIG["password_autocomplete"] = "off" - -# Setting this to True will disable the reveal button for password fields, -# including on the login form. 
-#HORIZON_CONFIG["disable_password_reveal"] = False - -LOCAL_PATH = '/tmp' - -# Set custom secret key: -# You can either set it to a specific value or you can let horizon generate a -# default secret key that is unique on this machine, e.i. regardless of the -# amount of Python WSGI workers (if used behind Apache+mod_wsgi): However, -# there may be situations where you would want to set this explicitly, e.g. -# when multiple dashboard instances are distributed on different machines -# (usually behind a load-balancer). Either you have to make sure that a session -# gets all requests routed to the same dashboard instance or you set the same -# SECRET_KEY for all of them. -SECRET_KEY='{{ .Values.local_settings.horizon_secret_key }}' - -CACHES = { - 'default': { - 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', - 'LOCATION': '{{ tuple "oslo_cache" "internal" "memcache" . | include "helm-toolkit.endpoints.host_and_port_endpoint_uri_lookup" }}', - } -} -DATABASES = { - 'default': { - # Database configuration here - 'ENGINE': 'django.db.backends.mysql', - 'NAME': '{{ .Values.endpoints.oslo_db.path | base }}', - 'USER': '{{ .Values.endpoints.oslo_db.auth.horizon.username }}', - 'PASSWORD': '{{ .Values.endpoints.oslo_db.auth.horizon.password }}', - 'HOST': '{{ tuple "oslo_db" "internal" . | include "helm-toolkit.endpoints.hostname_fqdn_endpoint_lookup" }}', - 'default-character-set': 'utf8', - 'PORT': '{{ tuple "oslo_db" "internal" "mysql" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }}' - } -} -SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db' - -# Send email to the console by default -EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend' -# Or send them to /dev/null -#EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend' - -# Configure these for your outgoing email host -#EMAIL_HOST = 'smtp.my-company.com' -#EMAIL_PORT = 25\\ -#EMAIL_HOST_USER = 'djangomail' -#EMAIL_HOST_PASSWORD = 'top-secret!' - -# For multiple regions uncomment this configuration, and add (endpoint, title). -#AVAILABLE_REGIONS = [ -# ('http://cluster1.example.com:5000/v2.0', 'cluster1'), -# ('http://cluster2.example.com:5000/v2.0', 'cluster2'), -#] - -OPENSTACK_KEYSTONE_URL = "{{ tuple "identity" "public" "api" . | include "helm-toolkit.endpoints.keystone_endpoint_uri_lookup" }}" -OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_" - - -{{- if .Values.local_settings.auth.sso.enabled }} -# Enables keystone web single-sign-on if set to True. -WEBSSO_ENABLED = True - -# Determines which authentication choice to show as default. -WEBSSO_INITIAL_CHOICE = "{{ .Values.local_settings.auth.sso.initial_choice }}" - -# The list of authentication mechanisms -# which include keystone federation protocols. -# Current supported protocol IDs are 'saml2' and 'oidc' -# which represent SAML 2.0, OpenID Connect respectively. -# Do not remove the mandatory credentials mechanism. 
-WEBSSO_CHOICES = (
-    ("credentials", _("Keystone Credentials")),
-    {{- range $i, $sso := .Values.local_settings.auth.idp_mapping }}
-    ({{ $sso.name | quote }}, {{ $sso.label | quote }}),
-    {{- end }}
-)
-
-WEBSSO_IDP_MAPPING = {
-    {{- range $i, $sso := .Values.local_settings.auth.idp_mapping }}
-    {{ $sso.name | quote}}: ({{ $sso.idp | quote }}, {{ $sso.protocol | quote }}),
-    {{- end }}
-}
-
-{{- end }}
-
-# Disable SSL certificate checks (useful for self-signed certificates):
-#OPENSTACK_SSL_NO_VERIFY = True
-
-# The CA certificate to use to verify SSL connections
-#OPENSTACK_SSL_CACERT = '/path/to/cacert.pem'
-
-# The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
-# capabilities of the auth backend for Keystone.
-# If Keystone has been configured to use LDAP as the auth backend then set
-# can_edit_user to False and name to 'ldap'.
-#
-# TODO(tres): Remove these once Keystone has an API to identify auth backend.
-OPENSTACK_KEYSTONE_BACKEND = {
-    'name': 'native',
-    'can_edit_user': True,
-    'can_edit_group': True,
-    'can_edit_project': True,
-    'can_edit_domain': True,
-    'can_edit_role': True,
-}
-
-# Setting this to True will add a new "Retrieve Password" action on instances,
-# allowing Admin session password retrieval/decryption.
-#OPENSTACK_ENABLE_PASSWORD_RETRIEVE = False
-
-# The Launch Instance user experience has been significantly enhanced.
-# You can choose whether to enable the new launch instance experience,
-# the legacy experience, or both. The legacy experience will be removed
-# in a future release, but is available as a temporary backup setting to ensure
-# compatibility with existing deployments. Further development will not be
-# done on the legacy experience. Please report any problems with the new
-# experience via the Launchpad tracking system.
-#
-# Toggle LAUNCH_INSTANCE_LEGACY_ENABLED and LAUNCH_INSTANCE_NG_ENABLED to
-# determine the experience to enable. Set them both to true to enable
-# both.
-#LAUNCH_INSTANCE_LEGACY_ENABLED = True
-#LAUNCH_INSTANCE_NG_ENABLED = False
-
-# The Xen Hypervisor has the ability to set the mount point for volumes
-# attached to instances (other Hypervisors currently do not). Setting
-# can_set_mount_point to True will add the option to set the mount point
-# from the UI.
-OPENSTACK_HYPERVISOR_FEATURES = {
-    'can_set_mount_point': False,
-    'can_set_password': False,
-}
-
-# The OPENSTACK_CINDER_FEATURES settings can be used to enable optional
-# services provided by cinder that are not exposed by its extension API.
-OPENSTACK_CINDER_FEATURES = {
-    'enable_backup': {{ .Values.local_settings.openstack_cinder_features.enable_backup }},
-}
-
-# The OPENSTACK_NEUTRON_NETWORK settings can be used to enable optional
-# services provided by neutron. Options currently available are load
-# balancer service, security groups, quotas, and VPN service.
-OPENSTACK_NEUTRON_NETWORK = {
-    'enable_router': {{ .Values.local_settings.openstack_neutron_network.enable_router }},
-    'enable_quotas': {{ .Values.local_settings.openstack_neutron_network.enable_quotas }},
-    'enable_ipv6': {{ .Values.local_settings.openstack_neutron_network.enable_ipv6 }},
-    'enable_distributed_router': {{ .Values.local_settings.openstack_neutron_network.enable_distributed_router }},
-    'enable_ha_router': {{ .Values.local_settings.openstack_neutron_network.enable_ha_router }},
-    'enable_lb': {{ .Values.local_settings.openstack_neutron_network.enable_lb }},
-    'enable_firewall': {{ .Values.local_settings.openstack_neutron_network.enable_firewall }},
-    'enable_vpn': {{ .Values.local_settings.openstack_neutron_network.enable_vpn }},
-    'enable_fip_topology_check': {{ .Values.local_settings.openstack_neutron_network.enable_fip_topology_check }},
-
-    # The profile_support option is used to detect if an external router can be
-    # configured via the dashboard. When using specific plugins the
-    # profile_support can be turned on if needed.
-    'profile_support': None,
-    #'profile_support': 'cisco',
-
-    # Set which provider network types are supported. Only the network types
-    # in this list will be available to choose from when creating a network.
-    # Network types include local, flat, vlan, gre, and vxlan.
-    'supported_provider_types': ['*'],
-
-    # Set which VNIC types are supported for port binding. Only the VNIC
-    # types in this list will be available to choose from when creating a
-    # port.
-    # VNIC types include 'normal', 'macvtap' and 'direct'.
-    'supported_vnic_types': ['*']
-}
-
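Every flag in OPENSTACK_NEUTRON_NETWORK above is templated from .Values.local_settings.openstack_neutron_network. A sketch of hiding the load balancer and firewall panels through an override file, using the keys shown in the values.yaml hunk below (same quoted-Titlecase convention):

    local_settings:
      openstack_neutron_network:
        enable_lb: "False"
        enable_firewall: "False"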
-# The OPENSTACK_IMAGE_BACKEND settings can be used to customize features
-# in the OpenStack Dashboard related to the Image service, such as the list
-# of supported image formats.
-#OPENSTACK_IMAGE_BACKEND = {
-#    'image_formats': [
-#        ('', _('Select format')),
-#        ('aki', _('AKI - Amazon Kernel Image')),
-#        ('ami', _('AMI - Amazon Machine Image')),
-#        ('ari', _('ARI - Amazon Ramdisk Image')),
-#        ('docker', _('Docker')),
-#        ('iso', _('ISO - Optical Disk Image')),
-#        ('ova', _('OVA - Open Virtual Appliance')),
-#        ('qcow2', _('QCOW2 - QEMU Emulator')),
-#        ('raw', _('Raw')),
-#        ('vdi', _('VDI - Virtual Disk Image')),
-#        ('vhd', _('VHD - Virtual Hard Disk')),
-#        ('vmdk', _('VMDK - Virtual Machine Disk')),
-#    ]
-#}
-
-# The IMAGE_CUSTOM_PROPERTY_TITLES setting is used to customize the titles for
-# image custom property attributes that appear on image detail pages.
-IMAGE_CUSTOM_PROPERTY_TITLES = {
-    "architecture": _("Architecture"),
-    "kernel_id": _("Kernel ID"),
-    "ramdisk_id": _("Ramdisk ID"),
-    "image_state": _("Euca2ools state"),
-    "project_id": _("Project ID"),
-    "image_type": _("Image Type"),
-}
-
-# The IMAGE_RESERVED_CUSTOM_PROPERTIES setting is used to specify which image
-# custom properties should not be displayed in the Image Custom Properties
-# table.
-IMAGE_RESERVED_CUSTOM_PROPERTIES = []
-
-# OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
-# in the Keystone service catalog. Use this setting when Horizon is running
-# external to the OpenStack environment. The default is 'publicURL'.
-OPENSTACK_ENDPOINT_TYPE = "publicURL"
-
-# SECONDARY_ENDPOINT_TYPE specifies the fallback endpoint type to use in the
-# case that OPENSTACK_ENDPOINT_TYPE is not present in the endpoints
-# in the Keystone service catalog. Use this setting when Horizon is running
-# external to the OpenStack environment. The default is None. This
-# value should differ from OPENSTACK_ENDPOINT_TYPE if used.
-SECONDARY_ENDPOINT_TYPE = "publicURL"
-
-# The number of objects (Swift containers/objects or images) to display
-# on a single page before providing a paging element (a "more" link)
-# to paginate results.
-API_RESULT_LIMIT = 1000
-API_RESULT_PAGE_SIZE = 20
-
-# The chunk size, in bytes, for downloading objects from Swift
-SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
-
-# Specify a maximum number of items to display in a dropdown.
-DROPDOWN_MAX_ITEMS = 30
-
-# The timezone of the server. This should correspond with the timezone
-# of your entire OpenStack installation, and hopefully be in UTC.
-TIME_ZONE = "UTC"
-
-# When launching an instance, the menu of available flavors is
-# sorted by RAM usage, ascending. If you would like a different sort order,
-# you can provide another flavor attribute as sorting key. Alternatively, you
-# can provide a custom callback method to use for sorting. You can also provide
-# a flag for reverse sort. For more info, see
-# http://docs.python.org/2/library/functions.html#sorted
-#CREATE_INSTANCE_FLAVOR_SORT = {
-#    'key': 'name',
-#    # or
-#    'key': my_awesome_callback_method,
-#    'reverse': False,
-#}
-
-# Set this to True to display an 'Admin Password' field on the Change Password
-# form to verify that it is indeed the admin logged-in who wants to change
-# the password.
-# ENFORCE_PASSWORD_CHECK = False
-
-# Modules that provide /auth routes that can be used to handle different types
-# of user authentication. Add auth plugins that require extra route handling to
-# this list.
-#AUTHENTICATION_URLS = [
-#    'openstack_auth.urls',
-#]
-
-# The Horizon Policy Enforcement engine uses these values to load per service
-# policy rule files. The content of these files should match the files the
-# OpenStack services are using to determine role based access control in the
-# target installation.
-
-# Path to directory containing policy.json files
-POLICY_FILES_PATH = '/etc/openstack-dashboard'
-# Map of local copy of service policy files
-#POLICY_FILES = {
-#    'identity': 'keystone_policy.json',
-#    'compute': 'nova_policy.json',
-#    'volume': 'cinder_policy.json',
-#    'image': 'glance_policy.json',
-#    'orchestration': 'heat_policy.json',
-#    'network': 'neutron_policy.json',
-#    'telemetry': 'ceilometer_policy.json',
-#}
-
-# Trove user and database extension support. By default support for
-# creating users and databases on database instances is turned on.
-# To disable these extensions set the permission here to something
-# unusable such as ["!"].
-# TROVE_ADD_USER_PERMS = []
-# TROVE_ADD_DATABASE_PERMS = []
-
-# Change this path to the appropriate static directory containing
-# two files: _variables.scss and _styles.scss
-#CUSTOM_THEME_PATH = 'static/themes/default'
-
-LOGGING = {
-    'version': 1,
-    # When set to True this will disable all logging except
-    # for loggers specified in this configuration dictionary. Note that
-    # if nothing is specified here and disable_existing_loggers is True,
-    # django.db.backends will still log unless it is disabled explicitly.
-    'disable_existing_loggers': False,
-    'handlers': {
-        'null': {
-            'level': 'DEBUG',
-            'class': 'django.utils.log.NullHandler',
-        },
-        'console': {
-            # Set the level to "DEBUG" for verbose output logging.
-            'level': 'INFO',
-            'class': 'logging.StreamHandler',
-        },
-    },
-    'loggers': {
-        # Logging from django.db.backends is VERY verbose, send to null
-        # by default.
-        'django.db.backends': {
-            'handlers': ['null'],
-            'propagate': False,
-        },
-        'requests': {
-            'handlers': ['null'],
-            'propagate': False,
-        },
-        'horizon': {
-            'handlers': ['console'],
-            'level': 'DEBUG',
-            'propagate': False,
-        },
-        'openstack_dashboard': {
-            'handlers': ['console'],
-            'level': 'DEBUG',
-            'propagate': False,
-        },
-        'novaclient': {
-            'handlers': ['console'],
-            'level': 'DEBUG',
-            'propagate': False,
-        },
-        'cinderclient': {
-            'handlers': ['console'],
-            'level': 'DEBUG',
-            'propagate': False,
-        },
-        'glanceclient': {
-            'handlers': ['console'],
-            'level': 'DEBUG',
-            'propagate': False,
-        },
-        'neutronclient': {
-            'handlers': ['console'],
-            'level': 'DEBUG',
-            'propagate': False,
-        },
-        'heatclient': {
-            'handlers': ['console'],
-            'level': 'DEBUG',
-            'propagate': False,
-        },
-        'ceilometerclient': {
-            'handlers': ['console'],
-            'level': 'DEBUG',
-            'propagate': False,
-        },
-        'troveclient': {
-            'handlers': ['console'],
-            'level': 'DEBUG',
-            'propagate': False,
-        },
-        'swiftclient': {
-            'handlers': ['console'],
-            'level': 'DEBUG',
-            'propagate': False,
-        },
-        'openstack_auth': {
-            'handlers': ['console'],
-            'level': 'DEBUG',
-            'propagate': False,
-        },
-        'nose.plugins.manager': {
-            'handlers': ['console'],
-            'level': 'DEBUG',
-            'propagate': False,
-        },
-        'django': {
-            'handlers': ['console'],
-            'level': 'DEBUG',
-            'propagate': False,
-        },
-        'iso8601': {
-            'handlers': ['null'],
-            'propagate': False,
-        },
-        'scss': {
-            'handlers': ['null'],
-            'propagate': False,
-        },
-    }
-}
-
-# 'direction' should not be specified for all_tcp/udp/icmp.
-# It is specified in the form.
-SECURITY_GROUP_RULES = {
-    'all_tcp': {
-        'name': _('All TCP'),
-        'ip_protocol': 'tcp',
-        'from_port': '1',
-        'to_port': '65535',
-    },
-    'all_udp': {
-        'name': _('All UDP'),
-        'ip_protocol': 'udp',
-        'from_port': '1',
-        'to_port': '65535',
-    },
-    'all_icmp': {
-        'name': _('All ICMP'),
-        'ip_protocol': 'icmp',
-        'from_port': '-1',
-        'to_port': '-1',
-    },
-    'ssh': {
-        'name': 'SSH',
-        'ip_protocol': 'tcp',
-        'from_port': '22',
-        'to_port': '22',
-    },
-    'smtp': {
-        'name': 'SMTP',
-        'ip_protocol': 'tcp',
-        'from_port': '25',
-        'to_port': '25',
-    },
-    'dns': {
-        'name': 'DNS',
-        'ip_protocol': 'tcp',
-        'from_port': '53',
-        'to_port': '53',
-    },
-    'http': {
-        'name': 'HTTP',
-        'ip_protocol': 'tcp',
-        'from_port': '80',
-        'to_port': '80',
-    },
-    'pop3': {
-        'name': 'POP3',
-        'ip_protocol': 'tcp',
-        'from_port': '110',
-        'to_port': '110',
-    },
-    'imap': {
-        'name': 'IMAP',
-        'ip_protocol': 'tcp',
-        'from_port': '143',
-        'to_port': '143',
-    },
-    'ldap': {
-        'name': 'LDAP',
-        'ip_protocol': 'tcp',
-        'from_port': '389',
-        'to_port': '389',
-    },
-    'https': {
-        'name': 'HTTPS',
-        'ip_protocol': 'tcp',
-        'from_port': '443',
-        'to_port': '443',
-    },
-    'smtps': {
-        'name': 'SMTPS',
-        'ip_protocol': 'tcp',
-        'from_port': '465',
-        'to_port': '465',
-    },
-    'imaps': {
-        'name': 'IMAPS',
-        'ip_protocol': 'tcp',
-        'from_port': '993',
-        'to_port': '993',
-    },
-    'pop3s': {
-        'name': 'POP3S',
-        'ip_protocol': 'tcp',
-        'from_port': '995',
-        'to_port': '995',
-    },
-    'ms_sql': {
-        'name': 'MS SQL',
-        'ip_protocol': 'tcp',
-        'from_port': '1433',
-        'to_port': '1433',
-    },
-    'mysql': {
-        'name': 'MYSQL',
-        'ip_protocol': 'tcp',
-        'from_port': '3306',
-        'to_port': '3306',
-    },
-    'rdp': {
-        'name': 'RDP',
-        'ip_protocol': 'tcp',
-        'from_port': '3389',
-        'to_port': '3389',
-    },
-}
-
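SECURITY_GROUP_RULES above is one of the few tables in this template that is not driven from values. If it ever needed to be, a sketch following the same range idiom the WEBSSO block uses earlier in this file (the .Values.local_settings.security_group_rules map is hypothetical, not part of this chart):

    SECURITY_GROUP_RULES = {
    {{- range $name, $rule := .Values.local_settings.security_group_rules }}
        {{ $name | quote }}: {
            'name': {{ $rule.name | quote }},
            'ip_protocol': {{ $rule.ip_protocol | quote }},
            'from_port': {{ $rule.from_port | quote }},
            'to_port': {{ $rule.to_port | quote }},
        },
    {{- end }}
    }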
-# Deprecation Notice:
-#
-# The setting FLAVOR_EXTRA_KEYS has been deprecated.
-# Please load extra spec metadata into the Glance Metadata Definition Catalog.
-#
-# The sample quota definitions can be found in:
-# /etc/metadefs/compute-quota.json
-#
-# The metadata definition catalog supports CLI and API:
-#  $glance --os-image-api-version 2 help md-namespace-import
-#  $glance-manage db_load_metadefs
-#
-# See Metadata Definitions on: http://docs.openstack.org/developer/glance/
-
-# Indicate to the Sahara data processing service whether or not
-# automatic floating IP allocation is in effect. If it is not
-# in effect, the user will be prompted to choose a floating IP
-# pool for use in their cluster. False by default. You would want
-# to set this to True if you were running Nova Networking with
-# auto_assign_floating_ip = True.
-#SAHARA_AUTO_IP_ALLOCATION_ENABLED = False
-
-# The hash algorithm to use for authentication tokens. This must
-# match the hash algorithm that the identity server and the
-# auth_token middleware are using. Allowed values are the
-# algorithms supported by Python's hashlib library.
-#OPENSTACK_TOKEN_HASH_ALGORITHM = 'md5'
-
-# AngularJS requires some settings to be made available to
-# the client side. Some settings are required by in-tree / built-in horizon
-# features. These settings must be added to REST_API_REQUIRED_SETTINGS in the
-# form of ['SETTING_1','SETTING_2'], etc.
-#
-# You may remove settings from this list for security purposes, but do so at
-# the risk of breaking a built-in horizon feature. These settings are required
-# for horizon to function properly. Only remove them if you know what you
-# are doing. These settings may in the future be moved to be defined within
-# the enabled panel configuration.
-# You should not add settings to this list for out of tree extensions.
-# See: https://wiki.openstack.org/wiki/Horizon/RESTAPI
-REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES',
-                              'LAUNCH_INSTANCE_DEFAULTS',
-                              'OPENSTACK_IMAGE_FORMATS']
-
-# Additional settings can be made available to the client side for
-# extensibility by specifying them in REST_API_ADDITIONAL_SETTINGS
-# !! Please use extreme caution as the settings are transferred via HTTP/S
-# and are not encrypted on the browser. This is an experimental API and
-# may be deprecated in the future without notice.
-#REST_API_ADDITIONAL_SETTINGS = []
-
-# DISALLOW_IFRAME_EMBED can be used to prevent Horizon from being embedded
-# within an iframe. Legacy browsers are still vulnerable to a Cross-Frame
-# Scripting (XFS) vulnerability, so this option allows extra security hardening
-# where iframes are not used in deployment. Default setting is True.
-# For more information see:
-# http://tinyurl.com/anticlickjack
-# DISALLOW_IFRAME_EMBED = True
-
-STATIC_ROOT = '/var/www/html/horizon'
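The WEBSSO block this template renders is fed from the auth tree that the values.yaml hunk below removes from the defaults: with sso.enabled set, each idp_mapping entry becomes one WEBSSO_CHOICES tuple and one WEBSSO_IDP_MAPPING entry. A sketch against that pre-patch layout, reusing the sample identity providers shipped in the values file:

    local_settings:
      auth:
        sso:
          enabled: True
          initial_choice: "acme_oidc"
        idp_mapping:
          - name: "acme_oidc"
            label: "Acme Corporation - OpenID Connect"
            idp: "myidp1"
            protocol: "oidc"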
diff --git a/horizon/templates/job-db-drop.yaml b/horizon/templates/job-db-drop.yaml
index 304ce33e9a..bf5048bf24 100644
--- a/horizon/templates/job-db-drop.yaml
+++ b/horizon/templates/job-db-drop.yaml
@@ -15,65 +15,7 @@ limitations under the License.
 */}}
 
 {{- if .Values.manifests.job_db_drop }}
-
-{{- $envAll := . }}
-{{- $dependencies := .Values.dependencies.static.db_drop }}
-
-{{- $mounts_horizon_db_init := .Values.pod.mounts.horizon_db_init.horizon_db_init }}
-{{- $mounts_horizon_db_init_init := .Values.pod.mounts.horizon_db_init.init_container }}
-
-{{- $randStringSuffix := randAlphaNum 5 | lower }}
-
-{{- $serviceAccountName := print "horizon-db-drop-" $randStringSuffix }}
-{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }}
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: {{ print "horizon-db-drop-" $randStringSuffix }}
-  annotations:
-    "helm.sh/hook": pre-delete
-    "helm.sh/hook-delete-policy": hook-succeeded
-spec:
-  template:
-    metadata:
-      labels:
-{{ tuple $envAll "horizon" "db-drop" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }}
-    spec:
-      serviceAccountName: {{ $serviceAccountName }}
-      restartPolicy: OnFailure
-      nodeSelector:
-        {{ .Values.labels.job.node_selector_key }}: {{ .Values.labels.job.node_selector_value }}
-      initContainers:
-{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }}
-      containers:
-        - name: horizon-db-drop
-          image: {{ .Values.images.tags.db_drop }}
-          imagePullPolicy: {{ .Values.images.pull_policy }}
-{{ tuple $envAll $envAll.Values.pod.resources.jobs.db_drop | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }}
-          env:
-            - name: ROOT_DB_CONNECTION
-              valueFrom:
-                secretKeyRef:
-                  name: {{ .Values.secrets.oslo_db.admin }}
-                  key: DB_CONNECTION
-            - name: DB_CONNECTION
-              valueFrom:
-                secretKeyRef:
-                  name: {{ .Values.secrets.oslo_db.horizon }}
-                  key: DB_CONNECTION
-          command:
-            - /tmp/db-drop.py
-          volumeMounts:
-            - name: horizon-bin
-              mountPath: /tmp/db-drop.py
-              subPath: db-drop.py
-              readOnly: true
-{{ if $mounts_horizon_db_init.volumeMounts }}{{ toYaml $mounts_horizon_db_init.volumeMounts | indent 10 }}{{ end }}
-      volumes:
-        - name: horizon-bin
-          configMap:
-            name: horizon-bin
-            defaultMode: 0555
-{{ if $mounts_horizon_db_init.volumes }}{{ toYaml $mounts_horizon_db_init.volumes | indent 6 }}{{ end }}
+{{- $dbToDrop := dict "inputType" "secret" "adminSecret" .Values.secrets.oslo_db.admin "userSecret" .Values.secrets.oslo_db.horizon -}}
+{{- $dbDropJob := dict "envAll" . "serviceName" "horizon" "dbToDrop" $dbToDrop -}}
+{{ $dbDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" }}
 {{- end }}
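The replacement above collapses the hand-rolled Job into the shared helm-toolkit manifest, parameterised only by the service name and the two database secrets. Any other chart would consume it the same way; a sketch with glance substituted for horizon (the glance secret key and service name are illustrative, not taken from this patch):

    {{- $dbToDrop := dict "inputType" "secret" "adminSecret" .Values.secrets.oslo_db.admin "userSecret" .Values.secrets.oslo_db.glance -}}
    {{- $dbDropJob := dict "envAll" . "serviceName" "glance" "dbToDrop" $dbToDrop -}}
    {{ $dbDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" }}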
"backendServiceType" "dashboard" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/horizon/values.yaml b/horizon/values.yaml index d7cad6ab16..daa1579fd7 100644 --- a/horizon/values.yaml +++ b/horizon/values.yaml @@ -23,7 +23,7 @@ images: horizon_db_sync: docker.io/openstackhelm/horizon:newton db_drop: docker.io/openstackhelm/heat:newton horizon: docker.io/openstackhelm/horizon:newton - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: "IfNotPresent" release_group: null @@ -40,1105 +40,1820 @@ network: dashboard: ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / external_policy_local: false node_port: enabled: false port: 31000 -# Use "True" and "False" as Titlecase strings with quotes, boolean -# values will not work -local_settings: - horizon_secret_key: 9aee62c0-5253-4a86-b189-e0fb71fa503c - debug: "True" - openstack_cinder_features: - enable_backup: "True" - openstack_neutron_network: - enable_router: "True" - enable_quotas: "True" - enable_ipv6: "True" - enable_distributed_router: "False" - enable_ha_router: "False" - enable_lb: "True" - enable_firewall: "True" - enable_vpn: "True" - enable_fip_topology_check: "True" - auth: - sso: - enabled: False - initial_choice: "credentials" - idp_mapping: - - name: "acme_oidc" - label: "Acme Corporation - OpenID Connect" - idp: "myidp1" - protocol: "oidc" - - name: "acme_saml2" - label: "Acme Corporation - SAML2" - idp: "myidp2" - protocol: "saml2" - conf: - ceilometer_policy: - context_is_admin: role:admin - context_is_project: project_id:%(target.project_id)s - context_is_owner: user_id:%(target.user_id)s - segregation: rule:context_is_admin - cinder_policy: - context_is_admin: role:admin - admin_or_owner: is_admin:True or project_id:%(project_id)s - default: rule:admin_or_owner - admin_api: is_admin:True - volume:create: '' - volume:delete: rule:admin_or_owner - volume:get: rule:admin_or_owner - volume:get_all: rule:admin_or_owner - volume:get_volume_metadata: rule:admin_or_owner - volume:delete_volume_metadata: rule:admin_or_owner - volume:update_volume_metadata: rule:admin_or_owner - volume:get_volume_admin_metadata: rule:admin_api - volume:update_volume_admin_metadata: rule:admin_api - volume:get_snapshot: rule:admin_or_owner - volume:get_all_snapshots: rule:admin_or_owner - volume:create_snapshot: rule:admin_or_owner - volume:delete_snapshot: rule:admin_or_owner - volume:update_snapshot: rule:admin_or_owner - volume:get_snapshot_metadata: rule:admin_or_owner - volume:delete_snapshot_metadata: rule:admin_or_owner - volume:update_snapshot_metadata: rule:admin_or_owner - volume:extend: rule:admin_or_owner - volume:update_readonly_flag: rule:admin_or_owner - volume:retype: rule:admin_or_owner - volume:update: rule:admin_or_owner - volume_extension:types_manage: rule:admin_api - volume_extension:types_extra_specs: rule:admin_api - volume_extension:access_types_qos_specs_id: rule:admin_api - volume_extension:access_types_extra_specs: rule:admin_api - volume_extension:volume_type_access: rule:admin_or_owner - volume_extension:volume_type_access:addProjectAccess: rule:admin_api - volume_extension:volume_type_access:removeProjectAccess: rule:admin_api - volume_extension:volume_type_encryption: rule:admin_api - volume_extension:volume_encryption_metadata: rule:admin_or_owner - 
volume_extension:extended_snapshot_attributes: rule:admin_or_owner - volume_extension:volume_image_metadata: rule:admin_or_owner - volume_extension:quotas:show: '' - volume_extension:quotas:update: rule:admin_api - volume_extension:quotas:delete: rule:admin_api - volume_extension:quota_classes: rule:admin_api - volume_extension:quota_classes:validate_setup_for_nested_quota_use: rule:admin_api - volume_extension:volume_admin_actions:reset_status: rule:admin_api - volume_extension:snapshot_admin_actions:reset_status: rule:admin_api - volume_extension:backup_admin_actions:reset_status: rule:admin_api - volume_extension:volume_admin_actions:force_delete: rule:admin_api - volume_extension:volume_admin_actions:force_detach: rule:admin_api - volume_extension:snapshot_admin_actions:force_delete: rule:admin_api - volume_extension:backup_admin_actions:force_delete: rule:admin_api - volume_extension:volume_admin_actions:migrate_volume: rule:admin_api - volume_extension:volume_admin_actions:migrate_volume_completion: rule:admin_api - volume_extension:volume_actions:upload_public: rule:admin_api - volume_extension:volume_actions:upload_image: rule:admin_or_owner - volume_extension:volume_host_attribute: rule:admin_api - volume_extension:volume_tenant_attribute: rule:admin_or_owner - volume_extension:volume_mig_status_attribute: rule:admin_api - volume_extension:hosts: rule:admin_api - volume_extension:services:index: rule:admin_api - volume_extension:services:update: rule:admin_api - volume_extension:volume_manage: rule:admin_api - volume_extension:volume_unmanage: rule:admin_api - volume_extension:capabilities: rule:admin_api - volume:create_transfer: rule:admin_or_owner - volume:accept_transfer: '' - volume:delete_transfer: rule:admin_or_owner - volume:get_transfer: rule:admin_or_owner - volume:get_all_transfers: rule:admin_or_owner - volume_extension:replication:promote: rule:admin_api - volume_extension:replication:reenable: rule:admin_api - volume:failover_host: rule:admin_api - volume:freeze_host: rule:admin_api - volume:thaw_host: rule:admin_api - backup:create: '' - backup:delete: rule:admin_or_owner - backup:get: rule:admin_or_owner - backup:get_all: rule:admin_or_owner - backup:restore: rule:admin_or_owner - backup:backup-import: rule:admin_api - backup:backup-export: rule:admin_api - snapshot_extension:snapshot_actions:update_snapshot_status: '' - snapshot_extension:snapshot_manage: rule:admin_api - snapshot_extension:snapshot_unmanage: rule:admin_api - consistencygroup:create: group:nobody - consistencygroup:delete: group:nobody - consistencygroup:update: group:nobody - consistencygroup:get: group:nobody - consistencygroup:get_all: group:nobody - consistencygroup:create_cgsnapshot: group:nobody - consistencygroup:delete_cgsnapshot: group:nobody - consistencygroup:get_cgsnapshot: group:nobody - consistencygroup:get_all_cgsnapshots: group:nobody - scheduler_extension:scheduler_stats:get_pools: rule:admin_api - message:delete: rule:admin_or_owner - message:get: rule:admin_or_owner - message:get_all: rule:admin_or_owner - glance_policy: - context_is_admin: role:admin - admin_or_owner: is_admin:True or project_id:%(project_id)s - default: rule:admin_or_owner - add_image: '' - delete_image: rule:admin_or_owner - get_image: '' - get_images: '' - modify_image: rule:admin_or_owner - publicize_image: '' - copy_from: '' - download_image: '' - upload_image: '' - delete_image_location: '' - get_image_location: '' - set_image_location: '' - add_member: '' - delete_member: '' - get_member: '' - 
get_members: '' - modify_member: '' - manage_image_cache: role:admin - get_task: '' - get_tasks: '' - add_task: '' - modify_task: '' - get_metadef_namespace: '' - get_metadef_namespaces: '' - modify_metadef_namespace: '' - add_metadef_namespace: '' - delete_metadef_namespace: '' - get_metadef_object: '' - get_metadef_objects: '' - modify_metadef_object: '' - add_metadef_object: '' - list_metadef_resource_types: '' - add_metadef_resource_type_association: '' - get_metadef_property: '' - get_metadef_properties: '' - modify_metadef_property: '' - add_metadef_property: '' - heat_policy: - context_is_admin: role:admin - deny_stack_user: not role:heat_stack_user - deny_everybody: "!" - cloudformation:ListStacks: rule:deny_stack_user - cloudformation:CreateStack: rule:deny_stack_user - cloudformation:DescribeStacks: rule:deny_stack_user - cloudformation:DeleteStack: rule:deny_stack_user - cloudformation:UpdateStack: rule:deny_stack_user - cloudformation:CancelUpdateStack: rule:deny_stack_user - cloudformation:DescribeStackEvents: rule:deny_stack_user - cloudformation:ValidateTemplate: rule:deny_stack_user - cloudformation:GetTemplate: rule:deny_stack_user - cloudformation:EstimateTemplateCost: rule:deny_stack_user - cloudformation:DescribeStackResource: '' - cloudformation:DescribeStackResources: rule:deny_stack_user - cloudformation:ListStackResources: rule:deny_stack_user - cloudwatch:DeleteAlarms: rule:deny_stack_user - cloudwatch:DescribeAlarmHistory: rule:deny_stack_user - cloudwatch:DescribeAlarms: rule:deny_stack_user - cloudwatch:DescribeAlarmsForMetric: rule:deny_stack_user - cloudwatch:DisableAlarmActions: rule:deny_stack_user - cloudwatch:EnableAlarmActions: rule:deny_stack_user - cloudwatch:GetMetricStatistics: rule:deny_stack_user - cloudwatch:ListMetrics: rule:deny_stack_user - cloudwatch:PutMetricAlarm: rule:deny_stack_user - cloudwatch:PutMetricData: '' - cloudwatch:SetAlarmState: rule:deny_stack_user - actions:action: rule:deny_stack_user - build_info:build_info: rule:deny_stack_user - events:index: rule:deny_stack_user - events:show: rule:deny_stack_user - resource:index: rule:deny_stack_user - resource:metadata: '' - resource:signal: '' - resource:mark_unhealthy: rule:deny_stack_user - resource:show: rule:deny_stack_user - stacks:abandon: rule:deny_stack_user - stacks:create: rule:deny_stack_user - stacks:delete: rule:deny_stack_user - stacks:detail: rule:deny_stack_user - stacks:export: rule:deny_stack_user - stacks:generate_template: rule:deny_stack_user - stacks:global_index: rule:deny_everybody - stacks:index: rule:deny_stack_user - stacks:list_resource_types: rule:deny_stack_user - stacks:list_template_versions: rule:deny_stack_user - stacks:list_template_functions: rule:deny_stack_user - stacks:lookup: '' - stacks:preview: rule:deny_stack_user - stacks:resource_schema: rule:deny_stack_user - stacks:show: rule:deny_stack_user - stacks:template: rule:deny_stack_user - stacks:environment: rule:deny_stack_user - stacks:update: rule:deny_stack_user - stacks:update_patch: rule:deny_stack_user - stacks:preview_update: rule:deny_stack_user - stacks:preview_update_patch: rule:deny_stack_user - stacks:validate_template: rule:deny_stack_user - stacks:snapshot: rule:deny_stack_user - stacks:show_snapshot: rule:deny_stack_user - stacks:delete_snapshot: rule:deny_stack_user - stacks:list_snapshots: rule:deny_stack_user - stacks:restore_snapshot: rule:deny_stack_user - stacks:list_outputs: rule:deny_stack_user - stacks:show_output: rule:deny_stack_user - software_configs:global_index: 
rule:deny_everybody - software_configs:index: rule:deny_stack_user - software_configs:create: rule:deny_stack_user - software_configs:show: rule:deny_stack_user - software_configs:delete: rule:deny_stack_user - software_deployments:index: rule:deny_stack_user - software_deployments:create: rule:deny_stack_user - software_deployments:show: rule:deny_stack_user - software_deployments:update: rule:deny_stack_user - software_deployments:delete: rule:deny_stack_user - software_deployments:metadata: '' - service:index: rule:context_is_admin - resource_types:OS::Nova::Flavor: rule:context_is_admin - resource_types:OS::Cinder::EncryptedVolumeType: rule:context_is_admin - resource_types:OS::Cinder::VolumeType: rule:context_is_admin - resource_types:OS::Manila::ShareType: rule:context_is_admin - resource_types:OS::Neutron::QoSPolicy: rule:context_is_admin - resource_types:OS::Neutron::QoSBandwidthLimitRule: rule:context_is_admin - resource_types:OS::Nova::HostAggregate: rule:context_is_admin - keystone_policy: - admin_required: role:admin or is_admin:1 - service_role: role:service - service_or_admin: rule:admin_required or rule:service_role - owner: user_id:%(user_id)s - admin_or_owner: rule:admin_required or rule:owner - token_subject: user_id:%(target.token.user_id)s - admin_or_token_subject: rule:admin_required or rule:token_subject - service_admin_or_token_subject: rule:service_or_admin or rule:token_subject - default: rule:admin_required - identity:get_region: '' - identity:list_regions: '' - identity:create_region: rule:admin_required - identity:update_region: rule:admin_required - identity:delete_region: rule:admin_required - identity:get_service: rule:admin_required - identity:list_services: rule:admin_required - identity:create_service: rule:admin_required - identity:update_service: rule:admin_required - identity:delete_service: rule:admin_required - identity:get_endpoint: rule:admin_required - identity:list_endpoints: rule:admin_required - identity:create_endpoint: rule:admin_required - identity:update_endpoint: rule:admin_required - identity:delete_endpoint: rule:admin_required - identity:get_domain: rule:admin_required - identity:list_domains: rule:admin_required - identity:create_domain: rule:admin_required - identity:update_domain: rule:admin_required - identity:delete_domain: rule:admin_required - identity:get_project: rule:admin_required or project_id:%(target.project.id)s - identity:list_projects: rule:admin_required - identity:list_user_projects: rule:admin_or_owner - identity:create_project: rule:admin_required - identity:update_project: rule:admin_required - identity:delete_project: rule:admin_required - identity:get_user: rule:admin_required - identity:list_users: rule:admin_required - identity:create_user: rule:admin_required - identity:update_user: rule:admin_required - identity:delete_user: rule:admin_required - identity:change_password: rule:admin_or_owner - identity:get_group: rule:admin_required - identity:list_groups: rule:admin_required - identity:list_groups_for_user: rule:admin_or_owner - identity:create_group: rule:admin_required - identity:update_group: rule:admin_required - identity:delete_group: rule:admin_required - identity:list_users_in_group: rule:admin_required - identity:remove_user_from_group: rule:admin_required - identity:check_user_in_group: rule:admin_required - identity:add_user_to_group: rule:admin_required - identity:get_credential: rule:admin_required - identity:list_credentials: rule:admin_required - identity:create_credential: rule:admin_required - 
identity:update_credential: rule:admin_required - identity:delete_credential: rule:admin_required - identity:ec2_get_credential: rule:admin_required or (rule:owner and user_id:%(target.credential.user_id)s) - identity:ec2_list_credentials: rule:admin_or_owner - identity:ec2_create_credential: rule:admin_or_owner - identity:ec2_delete_credential: rule:admin_required or (rule:owner and user_id:%(target.credential.user_id)s) - identity:get_role: rule:admin_required - identity:list_roles: rule:admin_required - identity:create_role: rule:admin_required - identity:update_role: rule:admin_required - identity:delete_role: rule:admin_required - identity:get_domain_role: rule:admin_required - identity:list_domain_roles: rule:admin_required - identity:create_domain_role: rule:admin_required - identity:update_domain_role: rule:admin_required - identity:delete_domain_role: rule:admin_required - identity:get_implied_role: 'rule:admin_required ' - identity:list_implied_roles: rule:admin_required - identity:create_implied_role: rule:admin_required - identity:delete_implied_role: rule:admin_required - identity:list_role_inference_rules: rule:admin_required - identity:check_implied_role: rule:admin_required - identity:check_grant: rule:admin_required - identity:list_grants: rule:admin_required - identity:create_grant: rule:admin_required - identity:revoke_grant: rule:admin_required - identity:list_role_assignments: rule:admin_required - identity:list_role_assignments_for_tree: rule:admin_required - identity:get_policy: rule:admin_required - identity:list_policies: rule:admin_required - identity:create_policy: rule:admin_required - identity:update_policy: rule:admin_required - identity:delete_policy: rule:admin_required - identity:check_token: rule:admin_or_token_subject - identity:validate_token: rule:service_admin_or_token_subject - identity:validate_token_head: rule:service_or_admin - identity:revocation_list: rule:service_or_admin - identity:revoke_token: rule:admin_or_token_subject - identity:create_trust: user_id:%(trust.trustor_user_id)s - identity:list_trusts: '' - identity:list_roles_for_trust: '' - identity:get_role_for_trust: '' - identity:delete_trust: '' - identity:create_consumer: rule:admin_required - identity:get_consumer: rule:admin_required - identity:list_consumers: rule:admin_required - identity:delete_consumer: rule:admin_required - identity:update_consumer: rule:admin_required - identity:authorize_request_token: rule:admin_required - identity:list_access_token_roles: rule:admin_required - identity:get_access_token_role: rule:admin_required - identity:list_access_tokens: rule:admin_required - identity:get_access_token: rule:admin_required - identity:delete_access_token: rule:admin_required - identity:list_projects_for_endpoint: rule:admin_required - identity:add_endpoint_to_project: rule:admin_required - identity:check_endpoint_in_project: rule:admin_required - identity:list_endpoints_for_project: rule:admin_required - identity:remove_endpoint_from_project: rule:admin_required - identity:create_endpoint_group: rule:admin_required - identity:list_endpoint_groups: rule:admin_required - identity:get_endpoint_group: rule:admin_required - identity:update_endpoint_group: rule:admin_required - identity:delete_endpoint_group: rule:admin_required - identity:list_projects_associated_with_endpoint_group: rule:admin_required - identity:list_endpoints_associated_with_endpoint_group: rule:admin_required - identity:get_endpoint_group_in_project: rule:admin_required - 
identity:list_endpoint_groups_for_project: rule:admin_required - identity:add_endpoint_group_to_project: rule:admin_required - identity:remove_endpoint_group_from_project: rule:admin_required - identity:create_identity_provider: rule:admin_required - identity:list_identity_providers: rule:admin_required - identity:get_identity_providers: rule:admin_required - identity:update_identity_provider: rule:admin_required - identity:delete_identity_provider: rule:admin_required - identity:create_protocol: rule:admin_required - identity:update_protocol: rule:admin_required - identity:get_protocol: rule:admin_required - identity:list_protocols: rule:admin_required - identity:delete_protocol: rule:admin_required - identity:create_mapping: rule:admin_required - identity:get_mapping: rule:admin_required - identity:list_mappings: rule:admin_required - identity:delete_mapping: rule:admin_required - identity:update_mapping: rule:admin_required - identity:create_service_provider: rule:admin_required - identity:list_service_providers: rule:admin_required - identity:get_service_provider: rule:admin_required - identity:update_service_provider: rule:admin_required - identity:delete_service_provider: rule:admin_required - identity:get_auth_catalog: '' - identity:get_auth_projects: '' - identity:get_auth_domains: '' - identity:list_projects_for_groups: '' - identity:list_domains_for_groups: '' - identity:list_revoke_events: '' - identity:create_policy_association_for_endpoint: rule:admin_required - identity:check_policy_association_for_endpoint: rule:admin_required - identity:delete_policy_association_for_endpoint: rule:admin_required - identity:create_policy_association_for_service: rule:admin_required - identity:check_policy_association_for_service: rule:admin_required - identity:delete_policy_association_for_service: rule:admin_required - identity:create_policy_association_for_region_and_service: rule:admin_required - identity:check_policy_association_for_region_and_service: rule:admin_required - identity:delete_policy_association_for_region_and_service: rule:admin_required - identity:get_policy_for_endpoint: rule:admin_required - identity:list_endpoints_for_policy: rule:admin_required - identity:create_domain_config: rule:admin_required - identity:get_domain_config: rule:admin_required - identity:update_domain_config: rule:admin_required - identity:delete_domain_config: rule:admin_required - identity:get_domain_config_default: rule:admin_required - neutron_policy: - context_is_admin: role:admin - owner: tenant_id:%(tenant_id)s - admin_or_owner: rule:context_is_admin or rule:owner - context_is_advsvc: role:advsvc - admin_or_network_owner: rule:context_is_admin or tenant_id:%(network:tenant_id)s - admin_owner_or_network_owner: rule:owner or rule:admin_or_network_owner - admin_only: rule:context_is_admin - regular_user: '' - shared: field:networks:shared=True - shared_firewalls: field:firewalls:shared=True - shared_firewall_policies: field:firewall_policies:shared=True - shared_subnetpools: field:subnetpools:shared=True - shared_address_scopes: field:address_scopes:shared=True - external: field:networks:router:external=True - default: rule:admin_or_owner - create_subnet: rule:admin_or_network_owner - create_subnet:segment_id: rule:admin_only - get_subnet: rule:admin_or_owner or rule:shared - get_subnet:segment_id: rule:admin_only - update_subnet: rule:admin_or_network_owner - delete_subnet: rule:admin_or_network_owner - create_subnetpool: '' - create_subnetpool:shared: rule:admin_only - 
create_subnetpool:is_default: rule:admin_only - get_subnetpool: rule:admin_or_owner or rule:shared_subnetpools - update_subnetpool: rule:admin_or_owner - update_subnetpool:is_default: rule:admin_only - delete_subnetpool: rule:admin_or_owner - create_address_scope: '' - create_address_scope:shared: rule:admin_only - get_address_scope: rule:admin_or_owner or rule:shared_address_scopes - update_address_scope: rule:admin_or_owner - update_address_scope:shared: rule:admin_only - delete_address_scope: rule:admin_or_owner - create_network: '' - get_network: rule:admin_or_owner or rule:shared or rule:external or rule:context_is_advsvc - get_network:router:external: rule:regular_user - get_network:segments: rule:admin_only - get_network:provider:network_type: rule:admin_only - get_network:provider:physical_network: rule:admin_only - get_network:provider:segmentation_id: rule:admin_only - get_network:queue_id: rule:admin_only - get_network_ip_availabilities: rule:admin_only - get_network_ip_availability: rule:admin_only - create_network:shared: rule:admin_only - create_network:router:external: rule:admin_only - create_network:is_default: rule:admin_only - create_network:segments: rule:admin_only - create_network:provider:network_type: rule:admin_only - create_network:provider:physical_network: rule:admin_only - create_network:provider:segmentation_id: rule:admin_only - update_network: rule:admin_or_owner - update_network:segments: rule:admin_only - update_network:shared: rule:admin_only - update_network:provider:network_type: rule:admin_only - update_network:provider:physical_network: rule:admin_only - update_network:provider:segmentation_id: rule:admin_only - update_network:router:external: rule:admin_only - delete_network: rule:admin_or_owner - create_segment: rule:admin_only - get_segment: rule:admin_only - update_segment: rule:admin_only - delete_segment: rule:admin_only - network_device: 'field:port:device_owner=~^network:' - create_port: '' - create_port:device_owner: not rule:network_device or rule:context_is_advsvc or rule:admin_or_network_owner - create_port:mac_address: rule:context_is_advsvc or rule:admin_or_network_owner - create_port:fixed_ips: rule:context_is_advsvc or rule:admin_or_network_owner - create_port:port_security_enabled: rule:context_is_advsvc or rule:admin_or_network_owner - create_port:binding:host_id: rule:admin_only - create_port:binding:profile: rule:admin_only - create_port:mac_learning_enabled: rule:context_is_advsvc or rule:admin_or_network_owner - create_port:allowed_address_pairs: rule:admin_or_network_owner - get_port: rule:context_is_advsvc or rule:admin_owner_or_network_owner - get_port:queue_id: rule:admin_only - get_port:binding:vif_type: rule:admin_only - get_port:binding:vif_details: rule:admin_only - get_port:binding:host_id: rule:admin_only - get_port:binding:profile: rule:admin_only - update_port: rule:admin_or_owner or rule:context_is_advsvc - update_port:device_owner: not rule:network_device or rule:context_is_advsvc or rule:admin_or_network_owner - update_port:mac_address: rule:admin_only or rule:context_is_advsvc - update_port:fixed_ips: rule:context_is_advsvc or rule:admin_or_network_owner - update_port:port_security_enabled: rule:context_is_advsvc or rule:admin_or_network_owner - update_port:binding:host_id: rule:admin_only - update_port:binding:profile: rule:admin_only - update_port:mac_learning_enabled: rule:context_is_advsvc or rule:admin_or_network_owner - update_port:allowed_address_pairs: rule:admin_or_network_owner - delete_port: 
rule:context_is_advsvc or rule:admin_owner_or_network_owner - get_router:ha: rule:admin_only - create_router: rule:regular_user - create_router:external_gateway_info:enable_snat: rule:admin_only - create_router:distributed: rule:admin_only - create_router:ha: rule:admin_only - get_router: rule:admin_or_owner - get_router:distributed: rule:admin_only - update_router:external_gateway_info:enable_snat: rule:admin_only - update_router:distributed: rule:admin_only - update_router:ha: rule:admin_only - delete_router: rule:admin_or_owner - add_router_interface: rule:admin_or_owner - remove_router_interface: rule:admin_or_owner - create_router:external_gateway_info:external_fixed_ips: rule:admin_only - update_router:external_gateway_info:external_fixed_ips: rule:admin_only - create_firewall: '' - get_firewall: rule:admin_or_owner - create_firewall:shared: rule:admin_only - get_firewall:shared: rule:admin_only - update_firewall: rule:admin_or_owner - update_firewall:shared: rule:admin_only - delete_firewall: rule:admin_or_owner - create_firewall_policy: '' - get_firewall_policy: rule:admin_or_owner or rule:shared_firewall_policies - create_firewall_policy:shared: rule:admin_or_owner - update_firewall_policy: rule:admin_or_owner - delete_firewall_policy: rule:admin_or_owner - insert_rule: rule:admin_or_owner - remove_rule: rule:admin_or_owner - create_firewall_rule: '' - get_firewall_rule: rule:admin_or_owner or rule:shared_firewalls - update_firewall_rule: rule:admin_or_owner - delete_firewall_rule: rule:admin_or_owner - create_qos_queue: rule:admin_only - get_qos_queue: rule:admin_only - update_agent: rule:admin_only - delete_agent: rule:admin_only - get_agent: rule:admin_only - create_dhcp-network: rule:admin_only - delete_dhcp-network: rule:admin_only - get_dhcp-networks: rule:admin_only - create_l3-router: rule:admin_only - delete_l3-router: rule:admin_only - get_l3-routers: rule:admin_only - get_dhcp-agents: rule:admin_only - get_l3-agents: rule:admin_only - get_loadbalancer-agent: rule:admin_only - get_loadbalancer-pools: rule:admin_only - get_agent-loadbalancers: rule:admin_only - get_loadbalancer-hosting-agent: rule:admin_only - create_floatingip: rule:regular_user - create_floatingip:floating_ip_address: rule:admin_only - update_floatingip: rule:admin_or_owner - delete_floatingip: rule:admin_or_owner - get_floatingip: rule:admin_or_owner - create_network_profile: rule:admin_only - update_network_profile: rule:admin_only - delete_network_profile: rule:admin_only - get_network_profiles: '' - get_network_profile: '' - update_policy_profiles: rule:admin_only - get_policy_profiles: '' - get_policy_profile: '' - create_metering_label: rule:admin_only - delete_metering_label: rule:admin_only - get_metering_label: rule:admin_only - create_metering_label_rule: rule:admin_only - delete_metering_label_rule: rule:admin_only - get_metering_label_rule: rule:admin_only - get_service_provider: rule:regular_user - get_lsn: rule:admin_only - create_lsn: rule:admin_only - create_flavor: rule:admin_only - update_flavor: rule:admin_only - delete_flavor: rule:admin_only - get_flavors: rule:regular_user - get_flavor: rule:regular_user - create_service_profile: rule:admin_only - update_service_profile: rule:admin_only - delete_service_profile: rule:admin_only - get_service_profiles: rule:admin_only - get_service_profile: rule:admin_only - get_policy: rule:regular_user - create_policy: rule:admin_only - update_policy: rule:admin_only - delete_policy: rule:admin_only - get_policy_bandwidth_limit_rule: 
rule:regular_user
-      create_policy_bandwidth_limit_rule: rule:admin_only
-      delete_policy_bandwidth_limit_rule: rule:admin_only
-      update_policy_bandwidth_limit_rule: rule:admin_only
-      get_policy_dscp_marking_rule: rule:regular_user
-      create_policy_dscp_marking_rule: rule:admin_only
-      delete_policy_dscp_marking_rule: rule:admin_only
-      update_policy_dscp_marking_rule: rule:admin_only
-      get_rule_type: rule:regular_user
-      restrict_wildcard: "(not field:rbac_policy:target_tenant=*) or rule:admin_only"
-      create_rbac_policy: ''
-      create_rbac_policy:target_tenant: rule:restrict_wildcard
-      update_rbac_policy: rule:admin_or_owner
-      update_rbac_policy:target_tenant: rule:restrict_wildcard and rule:admin_or_owner
-      get_rbac_policy: rule:admin_or_owner
-      delete_rbac_policy: rule:admin_or_owner
-      create_flavor_service_profile: rule:admin_only
-      delete_flavor_service_profile: rule:admin_only
-      get_flavor_service_profile: rule:regular_user
-      get_auto_allocated_topology: rule:admin_or_owner
-    nova_policy:
-      context_is_admin: role:admin
-      admin_or_owner: is_admin:True or project_id:%(project_id)s
-      default: rule:admin_or_owner
-      cells_scheduler_filter:TargetCellFilter: is_admin:True
-      compute:create: rule:admin_or_owner
-      compute:create:attach_network: rule:admin_or_owner
-      compute:create:attach_volume: rule:admin_or_owner
-      compute:create:forced_host: is_admin:True
-      compute:get: rule:admin_or_owner
-      compute:get_all: rule:admin_or_owner
-      compute:get_all_tenants: is_admin:True
-      compute:update: rule:admin_or_owner
-      compute:get_instance_metadata: rule:admin_or_owner
-      compute:get_all_instance_metadata: rule:admin_or_owner
-      compute:get_all_instance_system_metadata: rule:admin_or_owner
-      compute:update_instance_metadata: rule:admin_or_owner
-      compute:delete_instance_metadata: rule:admin_or_owner
-      compute:get_diagnostics: rule:admin_or_owner
-      compute:get_instance_diagnostics: rule:admin_or_owner
-      compute:start: rule:admin_or_owner
-      compute:stop: rule:admin_or_owner
-      compute:lock: rule:admin_or_owner
-      compute:unlock: rule:admin_or_owner
-      compute:unlock_override: rule:admin_api
-      compute:get_vnc_console: rule:admin_or_owner
-      compute:get_spice_console: rule:admin_or_owner
-      compute:get_rdp_console: rule:admin_or_owner
-      compute:get_serial_console: rule:admin_or_owner
-      compute:get_mks_console: rule:admin_or_owner
-      compute:get_console_output: rule:admin_or_owner
-      compute:reset_network: rule:admin_or_owner
-      compute:inject_network_info: rule:admin_or_owner
-      compute:add_fixed_ip: rule:admin_or_owner
-      compute:remove_fixed_ip: rule:admin_or_owner
-      compute:attach_volume: rule:admin_or_owner
-      compute:detach_volume: rule:admin_or_owner
-      compute:swap_volume: rule:admin_api
-      compute:attach_interface: rule:admin_or_owner
-      compute:detach_interface: rule:admin_or_owner
-      compute:set_admin_password: rule:admin_or_owner
-      compute:rescue: rule:admin_or_owner
-      compute:unrescue: rule:admin_or_owner
-      compute:suspend: rule:admin_or_owner
-      compute:resume: rule:admin_or_owner
-      compute:pause: rule:admin_or_owner
-      compute:unpause: rule:admin_or_owner
-      compute:shelve: rule:admin_or_owner
-      compute:shelve_offload: rule:admin_or_owner
-      compute:unshelve: rule:admin_or_owner
-      compute:snapshot: rule:admin_or_owner
-      compute:snapshot_volume_backed: rule:admin_or_owner
-      compute:backup: rule:admin_or_owner
-      compute:resize: rule:admin_or_owner
-      compute:confirm_resize: rule:admin_or_owner
-      compute:revert_resize: rule:admin_or_owner
-      compute:rebuild: rule:admin_or_owner
-      compute:reboot: rule:admin_or_owner
-      compute:delete: rule:admin_or_owner
-      compute:soft_delete: rule:admin_or_owner
-      compute:force_delete: rule:admin_or_owner
-      compute:security_groups:add_to_instance: rule:admin_or_owner
-      compute:security_groups:remove_from_instance: rule:admin_or_owner
-      compute:restore: rule:admin_or_owner
-      compute:volume_snapshot_create: rule:admin_or_owner
-      compute:volume_snapshot_delete: rule:admin_or_owner
-      admin_api: is_admin:True
-      compute_extension:accounts: rule:admin_api
-      compute_extension:admin_actions: rule:admin_api
-      compute_extension:admin_actions:pause: rule:admin_or_owner
-      compute_extension:admin_actions:unpause: rule:admin_or_owner
-      compute_extension:admin_actions:suspend: rule:admin_or_owner
-      compute_extension:admin_actions:resume: rule:admin_or_owner
-      compute_extension:admin_actions:lock: rule:admin_or_owner
-      compute_extension:admin_actions:unlock: rule:admin_or_owner
-      compute_extension:admin_actions:resetNetwork: rule:admin_api
-      compute_extension:admin_actions:injectNetworkInfo: rule:admin_api
-      compute_extension:admin_actions:createBackup: rule:admin_or_owner
-      compute_extension:admin_actions:migrateLive: rule:admin_api
-      compute_extension:admin_actions:resetState: rule:admin_api
-      compute_extension:admin_actions:migrate: rule:admin_api
-      compute_extension:aggregates: rule:admin_api
-      compute_extension:agents: rule:admin_api
-      compute_extension:attach_interfaces: rule:admin_or_owner
-      compute_extension:baremetal_nodes: rule:admin_api
-      compute_extension:cells: rule:admin_api
-      compute_extension:cells:create: rule:admin_api
-      compute_extension:cells:delete: rule:admin_api
-      compute_extension:cells:update: rule:admin_api
-      compute_extension:cells:sync_instances: rule:admin_api
-      compute_extension:certificates: rule:admin_or_owner
-      compute_extension:cloudpipe: rule:admin_api
-      compute_extension:cloudpipe_update: rule:admin_api
-      compute_extension:config_drive: rule:admin_or_owner
-      compute_extension:console_output: rule:admin_or_owner
-      compute_extension:consoles: rule:admin_or_owner
-      compute_extension:createserverext: rule:admin_or_owner
-      compute_extension:deferred_delete: rule:admin_or_owner
-      compute_extension:disk_config: rule:admin_or_owner
-      compute_extension:evacuate: rule:admin_api
-      compute_extension:extended_server_attributes: rule:admin_api
-      compute_extension:extended_status: rule:admin_or_owner
-      compute_extension:extended_availability_zone: rule:admin_or_owner
-      compute_extension:extended_ips: rule:admin_or_owner
-      compute_extension:extended_ips_mac: rule:admin_or_owner
-      compute_extension:extended_vif_net: rule:admin_or_owner
-      compute_extension:extended_volumes: rule:admin_or_owner
-      compute_extension:fixed_ips: rule:admin_api
-      compute_extension:flavor_access: rule:admin_or_owner
-      compute_extension:flavor_access:addTenantAccess: rule:admin_api
-      compute_extension:flavor_access:removeTenantAccess: rule:admin_api
-      compute_extension:flavor_disabled: rule:admin_or_owner
-      compute_extension:flavor_rxtx: rule:admin_or_owner
-      compute_extension:flavor_swap: rule:admin_or_owner
-      compute_extension:flavorextradata: rule:admin_or_owner
-      compute_extension:flavorextraspecs:index: rule:admin_or_owner
-      compute_extension:flavorextraspecs:show: rule:admin_or_owner
-      compute_extension:flavorextraspecs:create: rule:admin_api
-      compute_extension:flavorextraspecs:update: rule:admin_api
-      compute_extension:flavorextraspecs:delete: rule:admin_api
-      compute_extension:flavormanage: rule:admin_api
-      compute_extension:floating_ip_dns: rule:admin_or_owner
-      compute_extension:floating_ip_pools: rule:admin_or_owner
-      compute_extension:floating_ips: rule:admin_or_owner
-      compute_extension:floating_ips_bulk: rule:admin_api
-      compute_extension:fping: rule:admin_or_owner
-      compute_extension:fping:all_tenants: rule:admin_api
-      compute_extension:hide_server_addresses: is_admin:False
-      compute_extension:hosts: rule:admin_api
-      compute_extension:hypervisors: rule:admin_api
-      compute_extension:image_size: rule:admin_or_owner
-      compute_extension:instance_actions: rule:admin_or_owner
-      compute_extension:instance_actions:events: rule:admin_api
-      compute_extension:instance_usage_audit_log: rule:admin_api
-      compute_extension:keypairs: rule:admin_or_owner
-      compute_extension:keypairs:index: rule:admin_or_owner
-      compute_extension:keypairs:show: rule:admin_or_owner
-      compute_extension:keypairs:create: rule:admin_or_owner
-      compute_extension:keypairs:delete: rule:admin_or_owner
-      compute_extension:multinic: rule:admin_or_owner
-      compute_extension:networks: rule:admin_api
-      compute_extension:networks:view: rule:admin_or_owner
-      compute_extension:networks_associate: rule:admin_api
-      compute_extension:os-tenant-networks: rule:admin_or_owner
-      compute_extension:quotas:show: rule:admin_or_owner
-      compute_extension:quotas:update: rule:admin_api
-      compute_extension:quotas:delete: rule:admin_api
-      compute_extension:quota_classes: rule:admin_or_owner
-      compute_extension:rescue: rule:admin_or_owner
-      compute_extension:security_group_default_rules: rule:admin_api
-      compute_extension:security_groups: rule:admin_or_owner
-      compute_extension:server_diagnostics: rule:admin_api
-      compute_extension:server_groups: rule:admin_or_owner
-      compute_extension:server_password: rule:admin_or_owner
-      compute_extension:server_usage: rule:admin_or_owner
-      compute_extension:services: rule:admin_api
-      compute_extension:shelve: rule:admin_or_owner
-      compute_extension:shelveOffload: rule:admin_api
-      compute_extension:simple_tenant_usage:show: rule:admin_or_owner
-      compute_extension:simple_tenant_usage:list: rule:admin_api
-      compute_extension:unshelve: rule:admin_or_owner
-      compute_extension:users: rule:admin_api
-      compute_extension:virtual_interfaces: rule:admin_or_owner
-      compute_extension:virtual_storage_arrays: rule:admin_or_owner
-      compute_extension:volumes: rule:admin_or_owner
-      compute_extension:volume_attachments:index: rule:admin_or_owner
-      compute_extension:volume_attachments:show: rule:admin_or_owner
-      compute_extension:volume_attachments:create: rule:admin_or_owner
-      compute_extension:volume_attachments:update: rule:admin_api
-      compute_extension:volume_attachments:delete: rule:admin_or_owner
-      compute_extension:volumetypes: rule:admin_or_owner
-      compute_extension:availability_zone:list: rule:admin_or_owner
-      compute_extension:availability_zone:detail: rule:admin_api
-      compute_extension:used_limits_for_admin: rule:admin_api
-      compute_extension:migrations:index: rule:admin_api
-      compute_extension:os-assisted-volume-snapshots:create: rule:admin_api
-      compute_extension:os-assisted-volume-snapshots:delete: rule:admin_api
-      compute_extension:console_auth_tokens: rule:admin_api
-      compute_extension:os-server-external-events:create: rule:admin_api
-      network:get_all: rule:admin_or_owner
-      network:get: rule:admin_or_owner
-      network:create: rule:admin_or_owner
-      network:delete: rule:admin_or_owner
-      network:associate: rule:admin_or_owner
-      network:disassociate: rule:admin_or_owner
-      network:get_vifs_by_instance: rule:admin_or_owner
-      network:allocate_for_instance: rule:admin_or_owner
-      network:deallocate_for_instance: rule:admin_or_owner
-      network:validate_networks: rule:admin_or_owner
-      network:get_instance_uuids_by_ip_filter: rule:admin_or_owner
-      network:get_instance_id_by_floating_address: rule:admin_or_owner
-      network:setup_networks_on_host: rule:admin_or_owner
-      network:get_backdoor_port: rule:admin_or_owner
-      network:get_floating_ip: rule:admin_or_owner
-      network:get_floating_ip_pools: rule:admin_or_owner
-      network:get_floating_ip_by_address: rule:admin_or_owner
-      network:get_floating_ips_by_project: rule:admin_or_owner
-      network:get_floating_ips_by_fixed_address: rule:admin_or_owner
-      network:allocate_floating_ip: rule:admin_or_owner
-      network:associate_floating_ip: rule:admin_or_owner
-      network:disassociate_floating_ip: rule:admin_or_owner
-      network:release_floating_ip: rule:admin_or_owner
-      network:migrate_instance_start: rule:admin_or_owner
-      network:migrate_instance_finish: rule:admin_or_owner
-      network:get_fixed_ip: rule:admin_or_owner
-      network:get_fixed_ip_by_address: rule:admin_or_owner
-      network:add_fixed_ip_to_instance: rule:admin_or_owner
-      network:remove_fixed_ip_from_instance: rule:admin_or_owner
-      network:add_network_to_project: rule:admin_or_owner
-      network:get_instance_nw_info: rule:admin_or_owner
-      network:get_dns_domains: rule:admin_or_owner
-      network:add_dns_entry: rule:admin_or_owner
-      network:modify_dns_entry: rule:admin_or_owner
-      network:delete_dns_entry: rule:admin_or_owner
-      network:get_dns_entries_by_address: rule:admin_or_owner
-      network:get_dns_entries_by_name: rule:admin_or_owner
-      network:create_private_dns_domain: rule:admin_or_owner
-      network:create_public_dns_domain: rule:admin_or_owner
-      network:delete_dns_domain: rule:admin_or_owner
-      network:attach_external_network: rule:admin_api
-      network:get_vif_by_mac_address: rule:admin_or_owner
-      os_compute_api:servers:detail:get_all_tenants: is_admin:True
-      os_compute_api:servers:index:get_all_tenants: is_admin:True
-      os_compute_api:servers:confirm_resize: rule:admin_or_owner
-      os_compute_api:servers:create: rule:admin_or_owner
-      os_compute_api:servers:create:attach_network: rule:admin_or_owner
-      os_compute_api:servers:create:attach_volume: rule:admin_or_owner
-      os_compute_api:servers:create:forced_host: rule:admin_api
-      os_compute_api:servers:delete: rule:admin_or_owner
-      os_compute_api:servers:update: rule:admin_or_owner
-      os_compute_api:servers:detail: rule:admin_or_owner
-      os_compute_api:servers:index: rule:admin_or_owner
-      os_compute_api:servers:reboot: rule:admin_or_owner
-      os_compute_api:servers:rebuild: rule:admin_or_owner
-      os_compute_api:servers:resize: rule:admin_or_owner
-      os_compute_api:servers:revert_resize: rule:admin_or_owner
-      os_compute_api:servers:show: rule:admin_or_owner
-      os_compute_api:servers:show:host_status: rule:admin_api
-      os_compute_api:servers:create_image: rule:admin_or_owner
-      os_compute_api:servers:create_image:allow_volume_backed: rule:admin_or_owner
-      os_compute_api:servers:start: rule:admin_or_owner
-      os_compute_api:servers:stop: rule:admin_or_owner
-      os_compute_api:servers:trigger_crash_dump: rule:admin_or_owner
-      os_compute_api:servers:migrations:force_complete: rule:admin_api
-      os_compute_api:servers:migrations:delete: rule:admin_api
-      os_compute_api:servers:discoverable: "@"
-      os_compute_api:servers:migrations:index: rule:admin_api
-      os_compute_api:servers:migrations:show: rule:admin_api
-      os_compute_api:os-access-ips:discoverable: "@"
-      os_compute_api:os-access-ips: rule:admin_or_owner
-      os_compute_api:os-admin-actions: rule:admin_api
-      os_compute_api:os-admin-actions:discoverable: "@"
-      os_compute_api:os-admin-actions:reset_network: rule:admin_api
-      os_compute_api:os-admin-actions:inject_network_info: rule:admin_api
-      os_compute_api:os-admin-actions:reset_state: rule:admin_api
-      os_compute_api:os-admin-password: rule:admin_or_owner
-      os_compute_api:os-admin-password:discoverable: "@"
-      os_compute_api:os-aggregates:discoverable: "@"
-      os_compute_api:os-aggregates:index: rule:admin_api
-      os_compute_api:os-aggregates:create: rule:admin_api
-      os_compute_api:os-aggregates:show: rule:admin_api
-      os_compute_api:os-aggregates:update: rule:admin_api
-      os_compute_api:os-aggregates:delete: rule:admin_api
-      os_compute_api:os-aggregates:add_host: rule:admin_api
-      os_compute_api:os-aggregates:remove_host: rule:admin_api
-      os_compute_api:os-aggregates:set_metadata: rule:admin_api
-      os_compute_api:os-agents: rule:admin_api
-      os_compute_api:os-agents:discoverable: "@"
-      os_compute_api:os-attach-interfaces: rule:admin_or_owner
-      os_compute_api:os-attach-interfaces:discoverable: "@"
-      os_compute_api:os-baremetal-nodes: rule:admin_api
-      os_compute_api:os-baremetal-nodes:discoverable: "@"
-      os_compute_api:os-block-device-mapping-v1:discoverable: "@"
-      os_compute_api:os-cells: rule:admin_api
-      os_compute_api:os-cells:create: rule:admin_api
-      os_compute_api:os-cells:delete: rule:admin_api
-      os_compute_api:os-cells:update: rule:admin_api
-      os_compute_api:os-cells:sync_instances: rule:admin_api
-      os_compute_api:os-cells:discoverable: "@"
-      os_compute_api:os-certificates:create: rule:admin_or_owner
-      os_compute_api:os-certificates:show: rule:admin_or_owner
-      os_compute_api:os-certificates:discoverable: "@"
-      os_compute_api:os-cloudpipe: rule:admin_api
-      os_compute_api:os-cloudpipe:discoverable: "@"
-      os_compute_api:os-config-drive: rule:admin_or_owner
-      os_compute_api:os-config-drive:discoverable: "@"
-      os_compute_api:os-consoles:discoverable: "@"
-      os_compute_api:os-consoles:create: rule:admin_or_owner
-      os_compute_api:os-consoles:delete: rule:admin_or_owner
-      os_compute_api:os-consoles:index: rule:admin_or_owner
-      os_compute_api:os-consoles:show: rule:admin_or_owner
-      os_compute_api:os-console-output:discoverable: "@"
-      os_compute_api:os-console-output: rule:admin_or_owner
-      os_compute_api:os-remote-consoles: rule:admin_or_owner
-      os_compute_api:os-remote-consoles:discoverable: "@"
-      os_compute_api:os-create-backup:discoverable: "@"
-      os_compute_api:os-create-backup: rule:admin_or_owner
-      os_compute_api:os-deferred-delete: rule:admin_or_owner
-      os_compute_api:os-deferred-delete:discoverable: "@"
-      os_compute_api:os-disk-config: rule:admin_or_owner
-      os_compute_api:os-disk-config:discoverable: "@"
-      os_compute_api:os-evacuate: rule:admin_api
-      os_compute_api:os-evacuate:discoverable: "@"
-      os_compute_api:os-extended-server-attributes: rule:admin_api
-      os_compute_api:os-extended-server-attributes:discoverable: "@"
-      os_compute_api:os-extended-status: rule:admin_or_owner
-      os_compute_api:os-extended-status:discoverable: "@"
-      os_compute_api:os-extended-availability-zone: rule:admin_or_owner
-      os_compute_api:os-extended-availability-zone:discoverable: "@"
-      os_compute_api:extensions: rule:admin_or_owner
-      os_compute_api:extensions:discoverable: "@"
-      os_compute_api:extension_info:discoverable: "@"
-      os_compute_api:os-extended-volumes: rule:admin_or_owner
-      os_compute_api:os-extended-volumes:discoverable: "@"
-      os_compute_api:os-fixed-ips: rule:admin_api
-      os_compute_api:os-fixed-ips:discoverable: "@"
-      os_compute_api:os-flavor-access: rule:admin_or_owner
-      os_compute_api:os-flavor-access:discoverable: "@"
-      os_compute_api:os-flavor-access:remove_tenant_access: rule:admin_api
-      os_compute_api:os-flavor-access:add_tenant_access: rule:admin_api
-      os_compute_api:os-flavor-rxtx: rule:admin_or_owner
-      os_compute_api:os-flavor-rxtx:discoverable: "@"
-      os_compute_api:flavors: rule:admin_or_owner
-      os_compute_api:flavors:discoverable: "@"
-      os_compute_api:os-flavor-extra-specs:discoverable: "@"
-      os_compute_api:os-flavor-extra-specs:index: rule:admin_or_owner
-      os_compute_api:os-flavor-extra-specs:show: rule:admin_or_owner
-      os_compute_api:os-flavor-extra-specs:create: rule:admin_api
-      os_compute_api:os-flavor-extra-specs:update: rule:admin_api
-      os_compute_api:os-flavor-extra-specs:delete: rule:admin_api
-      os_compute_api:os-flavor-manage:discoverable: "@"
-      os_compute_api:os-flavor-manage: rule:admin_api
-      os_compute_api:os-floating-ip-dns: rule:admin_or_owner
-      os_compute_api:os-floating-ip-dns:discoverable: "@"
-      os_compute_api:os-floating-ip-dns:domain:update: rule:admin_api
-      os_compute_api:os-floating-ip-dns:domain:delete: rule:admin_api
-      os_compute_api:os-floating-ip-pools: rule:admin_or_owner
-      os_compute_api:os-floating-ip-pools:discoverable: "@"
-      os_compute_api:os-floating-ips: rule:admin_or_owner
-      os_compute_api:os-floating-ips:discoverable: "@"
-      os_compute_api:os-floating-ips-bulk: rule:admin_api
-      os_compute_api:os-floating-ips-bulk:discoverable: "@"
-      os_compute_api:os-fping: rule:admin_or_owner
-      os_compute_api:os-fping:discoverable: "@"
-      os_compute_api:os-fping:all_tenants: rule:admin_api
-      os_compute_api:os-hide-server-addresses: is_admin:False
-      os_compute_api:os-hide-server-addresses:discoverable: "@"
-      os_compute_api:os-hosts: rule:admin_api
-      os_compute_api:os-hosts:discoverable: "@"
-      os_compute_api:os-hypervisors: rule:admin_api
-      os_compute_api:os-hypervisors:discoverable: "@"
-      os_compute_api:images:discoverable: "@"
-      os_compute_api:image-size: rule:admin_or_owner
-      os_compute_api:image-size:discoverable: "@"
-      os_compute_api:os-instance-actions: rule:admin_or_owner
-      os_compute_api:os-instance-actions:discoverable: "@"
-      os_compute_api:os-instance-actions:events: rule:admin_api
-      os_compute_api:os-instance-usage-audit-log: rule:admin_api
-      os_compute_api:os-instance-usage-audit-log:discoverable: "@"
-      os_compute_api:ips:discoverable: "@"
-      os_compute_api:ips:index: rule:admin_or_owner
-      os_compute_api:ips:show: rule:admin_or_owner
-      os_compute_api:os-keypairs:discoverable: "@"
-      os_compute_api:os-keypairs: rule:admin_or_owner
-      os_compute_api:os-keypairs:index: rule:admin_api or user_id:%(user_id)s
-      os_compute_api:os-keypairs:show: rule:admin_api or user_id:%(user_id)s
-      os_compute_api:os-keypairs:create: rule:admin_api or user_id:%(user_id)s
-      os_compute_api:os-keypairs:delete: rule:admin_api or user_id:%(user_id)s
-      os_compute_api:limits:discoverable: "@"
-      os_compute_api:limits: rule:admin_or_owner
-      os_compute_api:os-lock-server:discoverable: "@"
-      os_compute_api:os-lock-server:lock: rule:admin_or_owner
-      os_compute_api:os-lock-server:unlock: rule:admin_or_owner
-      os_compute_api:os-lock-server:unlock:unlock_override: rule:admin_api
-      os_compute_api:os-migrate-server:discoverable: "@"
-      os_compute_api:os-migrate-server:migrate: rule:admin_api
-      os_compute_api:os-migrate-server:migrate_live: rule:admin_api
-      os_compute_api:os-multinic: rule:admin_or_owner
-      os_compute_api:os-multinic:discoverable: "@"
-      os_compute_api:os-networks: rule:admin_api
-      os_compute_api:os-networks:view: rule:admin_or_owner
-      os_compute_api:os-networks:discoverable: "@"
-      os_compute_api:os-networks-associate: rule:admin_api
-      os_compute_api:os-networks-associate:discoverable: "@"
-      os_compute_api:os-pause-server:discoverable: "@"
-      os_compute_api:os-pause-server:pause: rule:admin_or_owner
-      os_compute_api:os-pause-server:unpause: rule:admin_or_owner
-      os_compute_api:os-pci:pci_servers: rule:admin_or_owner
-      os_compute_api:os-pci:discoverable: "@"
-      os_compute_api:os-pci:index: rule:admin_api
-      os_compute_api:os-pci:detail: rule:admin_api
-      os_compute_api:os-pci:show: rule:admin_api
-      os_compute_api:os-personality:discoverable: "@"
-      os_compute_api:os-preserve-ephemeral-rebuild:discoverable: "@"
-      os_compute_api:os-quota-sets:discoverable: "@"
-      os_compute_api:os-quota-sets:show: rule:admin_or_owner
-      os_compute_api:os-quota-sets:defaults: "@"
-      os_compute_api:os-quota-sets:update: rule:admin_api
-      os_compute_api:os-quota-sets:delete: rule:admin_api
-      os_compute_api:os-quota-sets:detail: rule:admin_api
-      os_compute_api:os-quota-class-sets:update: rule:admin_api
-      os_compute_api:os-quota-class-sets:show: is_admin:True or quota_class:%(quota_class)s
-      os_compute_api:os-quota-class-sets:discoverable: "@"
-      os_compute_api:os-rescue: rule:admin_or_owner
-      os_compute_api:os-rescue:discoverable: "@"
-      os_compute_api:os-scheduler-hints:discoverable: "@"
-      os_compute_api:os-security-group-default-rules:discoverable: "@"
-      os_compute_api:os-security-group-default-rules: rule:admin_api
-      os_compute_api:os-security-groups: rule:admin_or_owner
-      os_compute_api:os-security-groups:discoverable: "@"
-      os_compute_api:os-server-diagnostics: rule:admin_api
-      os_compute_api:os-server-diagnostics:discoverable: "@"
-      os_compute_api:os-server-password: rule:admin_or_owner
-      os_compute_api:os-server-password:discoverable: "@"
-      os_compute_api:os-server-usage: rule:admin_or_owner
-      os_compute_api:os-server-usage:discoverable: "@"
-      os_compute_api:os-server-groups: rule:admin_or_owner
-      os_compute_api:os-server-groups:discoverable: "@"
-      os_compute_api:os-server-tags:index: "@"
-      os_compute_api:os-server-tags:show: "@"
-      os_compute_api:os-server-tags:update: "@"
-      os_compute_api:os-server-tags:update_all: "@"
-      os_compute_api:os-server-tags:delete: "@"
-      os_compute_api:os-server-tags:delete_all: "@"
-      os_compute_api:os-services: rule:admin_api
-      os_compute_api:os-services:discoverable: "@"
-      os_compute_api:server-metadata:discoverable: "@"
-      os_compute_api:server-metadata:index: rule:admin_or_owner
-      os_compute_api:server-metadata:show: rule:admin_or_owner
-      os_compute_api:server-metadata:delete: rule:admin_or_owner
-      os_compute_api:server-metadata:create: rule:admin_or_owner
-      os_compute_api:server-metadata:update: rule:admin_or_owner
-      os_compute_api:server-metadata:update_all: rule:admin_or_owner
-      os_compute_api:os-shelve:shelve: rule:admin_or_owner
-      os_compute_api:os-shelve:shelve:discoverable: "@"
-      os_compute_api:os-shelve:shelve_offload: rule:admin_api
-      os_compute_api:os-simple-tenant-usage:discoverable: "@"
-      os_compute_api:os-simple-tenant-usage:show: rule:admin_or_owner
-      os_compute_api:os-simple-tenant-usage:list: rule:admin_api
-      os_compute_api:os-suspend-server:discoverable: "@"
-      os_compute_api:os-suspend-server:suspend: rule:admin_or_owner
-      os_compute_api:os-suspend-server:resume: rule:admin_or_owner
-      os_compute_api:os-tenant-networks: rule:admin_or_owner
-      os_compute_api:os-tenant-networks:discoverable: "@"
-      os_compute_api:os-shelve:unshelve: rule:admin_or_owner
-      os_compute_api:os-user-data:discoverable: "@"
-      os_compute_api:os-virtual-interfaces: rule:admin_or_owner
-      os_compute_api:os-virtual-interfaces:discoverable: "@"
-      os_compute_api:os-volumes: rule:admin_or_owner
-      os_compute_api:os-volumes:discoverable: "@"
-      os_compute_api:os-volumes-attachments:index: rule:admin_or_owner
-      os_compute_api:os-volumes-attachments:show: rule:admin_or_owner
-      os_compute_api:os-volumes-attachments:create: rule:admin_or_owner
-      os_compute_api:os-volumes-attachments:update: rule:admin_api
-      os_compute_api:os-volumes-attachments:delete: rule:admin_or_owner
-      os_compute_api:os-volumes-attachments:discoverable: "@"
-      os_compute_api:os-availability-zone:list: rule:admin_or_owner
-      os_compute_api:os-availability-zone:discoverable: "@"
-      os_compute_api:os-availability-zone:detail: rule:admin_api
-      os_compute_api:os-used-limits: rule:admin_api
-      os_compute_api:os-used-limits:discoverable: "@"
-      os_compute_api:os-migrations:index: rule:admin_api
-      os_compute_api:os-migrations:discoverable: "@"
-      os_compute_api:os-assisted-volume-snapshots:create: rule:admin_api
-      os_compute_api:os-assisted-volume-snapshots:delete: rule:admin_api
-      os_compute_api:os-assisted-volume-snapshots:discoverable: "@"
-      os_compute_api:os-console-auth-tokens: rule:admin_api
-      os_compute_api:os-console-auth-tokens:discoverable: "@"
-      os_compute_api:os-server-external-events:create: rule:admin_api
-      os_compute_api:os-server-external-events:discoverable: "@"
+  horizon:
+    apache: |
+      Listen 0.0.0.0:{{ tuple "dashboard" "internal" "web" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }}
+
+      LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
+      LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" proxy
+
+      SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded
+      CustomLog /dev/stdout combined env=!forwarded
+      CustomLog /dev/stdout proxy env=forwarded
+
+      <VirtualHost *:{{ tuple "dashboard" "internal" "web" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }}>
+          WSGIScriptReloading On
+          WSGIDaemonProcess horizon-http processes=5 threads=1 user=horizon group=horizon display-name=%{GROUP} python-path=/var/lib/kolla/venv/lib/python2.7/site-packages
+          WSGIProcessGroup horizon-http
+          WSGIScriptAlias / /var/www/cgi-bin/horizon/django.wsgi
+          WSGIPassAuthorization On
+
+          <Location "/">
+              Require all granted
+          </Location>
+
+          Alias /static /var/www/html/horizon
+          <Location "/static">
+              SetHandler None
+          </Location>
+
+          <IfVersion >= 2.4>
+              ErrorLogFormat "%{cu}t %M"
+          </IfVersion>
+          ErrorLog /dev/stdout
+          TransferLog /dev/stdout
+
+          SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded
+          CustomLog /dev/stdout combined env=!forwarded
+          CustomLog /dev/stdout proxy env=forwarded
+      </VirtualHost>
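+      # The SetEnvIf/CustomLog pairs above split access logging: requests that
+      # arrive through a proxy (and so carry an X-Forwarded-For header) are
+      # logged with the "proxy" LogFormat so the real client address is
+      # recorded, while direct requests fall back to the "combined" format.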
+    local_settings:
+      config:
+        # Use "True" and "False" as Titlecase strings with quotes, boolean
+        # values will not work
+        horizon_secret_key: 9aee62c0-5253-4a86-b189-e0fb71fa503c
+        debug: "False"
+        keystone_multidomain_support: "True"
+        keystone_default_domain: Default
+        openstack_cinder_features:
+          enable_backup: "True"
+        openstack_neutron_network:
+          enable_router: "True"
+          enable_quotas: "True"
+          enable_ipv6: "True"
+          enable_distributed_router: "False"
+          enable_ha_router: "False"
+          enable_lb: "True"
+          enable_firewall: "True"
+          enable_vpn: "True"
+          enable_fip_topology_check: "True"
+        auth:
+          sso:
+            enabled: False
+            initial_choice: "credentials"
+          idp_mapping:
+            - name: "acme_oidc"
+              label: "Acme Corporation - OpenID Connect"
+              idp: "myidp1"
+              protocol: "oidc"
+            - name: "acme_saml2"
+              label: "Acme Corporation - SAML2"
+              idp: "myidp2"
+              protocol: "saml2"
+      template: |
+        import os
+
+        from django.utils.translation import ugettext_lazy as _
+
+        from openstack_dashboard import exceptions
+
+        DEBUG = {{ .Values.conf.horizon.local_settings.config.debug }}
+        TEMPLATE_DEBUG = DEBUG
+
+        COMPRESS_OFFLINE = True
+        COMPRESS_CSS_HASHING_METHOD = "hash"
+
+        # WEBROOT is the location relative to Webserver root
+        # should end with a slash.
+        WEBROOT = '/'
+        # LOGIN_URL = WEBROOT + 'auth/login/'
+        # LOGOUT_URL = WEBROOT + 'auth/logout/'
+        #
+        # LOGIN_REDIRECT_URL can be used as an alternative for
+        # HORIZON_CONFIG.user_home, if user_home is not set.
+        # Do not set it to '/home/', as this will cause a circular redirect loop
+        # LOGIN_REDIRECT_URL = WEBROOT
+
+        # Required for Django 1.5.
+        # If horizon is running in production (DEBUG is False), set this
+        # with the list of host/domain names that the application can serve.
+        # For more information see:
+        # https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
+        ALLOWED_HOSTS = ['*']
+
+        # Set SSL proxy settings:
+        # For Django 1.4+ pass this header from the proxy after terminating the SSL,
+        # and don't forget to strip it from the client's request.
+        # For more information see:
+        # https://docs.djangoproject.com/en/1.4/ref/settings/#secure-proxy-ssl-header
+        #SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
+        # https://docs.djangoproject.com/en/1.5/ref/settings/#secure-proxy-ssl-header
+        #SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
+
+        # If Horizon is being served through SSL, then uncomment the following two
+        # settings to better secure the cookies from security exploits
+        #CSRF_COOKIE_SECURE = True
+        #SESSION_COOKIE_SECURE = True
+
+        # Overrides for OpenStack API versions. Use this setting to force the
+        # OpenStack dashboard to use a specific API version for a given service API.
+        # Versions specified here should be integers or floats, not strings.
+        # NOTE: The version should be formatted as it appears in the URL for the
+        # service API. For example, the identity service APIs have inconsistent
+        # use of the decimal point, so valid options would be 2.0 or 3.
+        #OPENSTACK_API_VERSIONS = {
+        #    "data-processing": 1.1,
+        #    "identity": 3,
+        #    "volume": 2,
+        #}
+
+        OPENSTACK_API_VERSIONS = {
+            "identity": 3,
+        }
+
+        # Set this to True if running on a multi-domain model. When this is
+        # enabled, it will require the user to enter the Domain name in addition
+        # to the username for login.
+        OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = '{{ .Values.conf.horizon.local_settings.config.keystone_multidomain_support }}'
+
+        # Overrides the default domain used when running on a single-domain model
+        # with Keystone V3. All entities will be created in the default domain.
+        OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = '{{ .Values.conf.horizon.local_settings.config.keystone_default_domain }}'
+
+        # Set Console type:
+        # valid options are "AUTO"(default), "VNC", "SPICE", "RDP", "SERIAL" or None
+        # Set to None explicitly if you want to deactivate the console.
+        #CONSOLE_TYPE = "AUTO"
+
+        # Default OpenStack Dashboard configuration.
+        HORIZON_CONFIG = {
+            'user_home': 'openstack_dashboard.views.get_user_home',
+            'ajax_queue_limit': 10,
+            'auto_fade_alerts': {
+                'delay': 3000,
+                'fade_duration': 1500,
+                'types': ['alert-success', 'alert-info']
+            },
+            'help_url': "http://docs.openstack.org",
+            'exceptions': {'recoverable': exceptions.RECOVERABLE,
+                           'not_found': exceptions.NOT_FOUND,
+                           'unauthorized': exceptions.UNAUTHORIZED},
+            'modal_backdrop': 'static',
+            'angular_modules': [],
+            'js_files': [],
+            'js_spec_files': [],
+        }
+
+        # Specify a regular expression to validate user passwords.
+        #HORIZON_CONFIG["password_validator"] = {
+        #    "regex": '.*',
+        #    "help_text": _("Your password does not meet the requirements."),
+        #}
+
+        # Disable simplified floating IP address management for deployments with
+        # multiple floating IP pools or complex network requirements.
+        #HORIZON_CONFIG["simple_ip_management"] = False
+
+        # Turn off browser autocompletion for forms including the login form and
+        # the database creation workflow if so desired.
+        #HORIZON_CONFIG["password_autocomplete"] = "off"
+
+        # Setting this to True will disable the reveal button for password fields,
+        # including on the login form.
+        #HORIZON_CONFIG["disable_password_reveal"] = False
+
+        LOCAL_PATH = '/tmp'
+
+        # Set custom secret key:
+        # You can either set it to a specific value or you can let horizon generate a
+        # default secret key that is unique on this machine, i.e. regardless of the
+        # amount of Python WSGI workers (if used behind Apache+mod_wsgi). However,
+        # there may be situations where you would want to set this explicitly, e.g.
+        # when multiple dashboard instances are distributed on different machines
+        # (usually behind a load-balancer). Either you have to make sure that a session
+        # gets all requests routed to the same dashboard instance or you set the same
+        # SECRET_KEY for all of them.
+        SECRET_KEY='{{ .Values.conf.horizon.local_settings.config.horizon_secret_key }}'
+
+        CACHES = {
+            'default': {
+                'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
+                'LOCATION': '{{ tuple "oslo_cache" "internal" "memcache" . | include "helm-toolkit.endpoints.host_and_port_endpoint_uri_lookup" }}',
+            }
+        }
+        DATABASES = {
+            'default': {
+                # Database configuration here
+                'ENGINE': 'django.db.backends.mysql',
+                'NAME': '{{ .Values.endpoints.oslo_db.path | base }}',
+                'USER': '{{ .Values.endpoints.oslo_db.auth.horizon.username }}',
+                'PASSWORD': '{{ .Values.endpoints.oslo_db.auth.horizon.password }}',
+                'HOST': '{{ tuple "oslo_db" "internal" . | include "helm-toolkit.endpoints.hostname_fqdn_endpoint_lookup" }}',
+                'default-character-set': 'utf8',
+                'PORT': '{{ tuple "oslo_db" "internal" "mysql" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }}'
+            }
+        }
+        SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db'
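+        # For illustration only: assuming the chart's default endpoint names
+        # ("mariadb" and "memcached" in the "openstack" namespace), the two
+        # endpoint lookups above would render roughly as:
+        #
+        #CACHES = {
+        #    'default': {
+        #        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
+        #        'LOCATION': 'memcached.openstack.svc.cluster.local:11211',
+        #    }
+        #}
+        #DATABASES['default']['HOST'] = 'mariadb.openstack.svc.cluster.local'
+        #DATABASES['default']['PORT'] = '3306'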
+
+        # Send email to the console by default
+        EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
+        # Or send them to /dev/null
+        #EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'
+
+        # Configure these for your outgoing email host
+        #EMAIL_HOST = 'smtp.my-company.com'
+        #EMAIL_PORT = 25
+        #EMAIL_HOST_USER = 'djangomail'
+        #EMAIL_HOST_PASSWORD = 'top-secret!'
+
+        # For multiple regions uncomment this configuration, and add (endpoint, title).
+        #AVAILABLE_REGIONS = [
+        #    ('http://cluster1.example.com:5000/v2.0', 'cluster1'),
+        #    ('http://cluster2.example.com:5000/v2.0', 'cluster2'),
+        #]
+
+        OPENSTACK_KEYSTONE_URL = "{{ tuple "identity" "public" "api" . | include "helm-toolkit.endpoints.keystone_endpoint_uri_lookup" }}"
+        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
+
+        {{- if .Values.conf.horizon.local_settings.config.auth.sso.enabled }}
+        # Enables keystone web single-sign-on if set to True.
+        WEBSSO_ENABLED = True
+
+        # Determines which authentication choice to show as default.
+        WEBSSO_INITIAL_CHOICE = "{{ .Values.conf.horizon.local_settings.config.auth.sso.initial_choice }}"
+
+        # The list of authentication mechanisms which include keystone
+        # federation protocols. Currently supported protocol IDs are 'saml2'
+        # and 'oidc', which represent SAML 2.0 and OpenID Connect respectively.
+        # Do not remove the mandatory credentials mechanism.
+        WEBSSO_CHOICES = (
+            ("credentials", _("Keystone Credentials")),
+            {{- range $i, $sso := .Values.conf.horizon.local_settings.config.auth.idp_mapping }}
+            ({{ $sso.name | quote }}, {{ $sso.label | quote }}),
+            {{- end }}
+        )
+
+        WEBSSO_IDP_MAPPING = {
+            {{- range $i, $sso := .Values.conf.horizon.local_settings.config.auth.idp_mapping }}
+            {{ $sso.name | quote }}: ({{ $sso.idp | quote }}, {{ $sso.protocol | quote }}),
+            {{- end }}
+        }
+        {{- end }}
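+        # For illustration only: with the sample idp_mapping shipped in the
+        # values above (acme_oidc / acme_saml2), the two settings would render as:
+        #
+        #WEBSSO_CHOICES = (
+        #    ("credentials", _("Keystone Credentials")),
+        #    ("acme_oidc", "Acme Corporation - OpenID Connect"),
+        #    ("acme_saml2", "Acme Corporation - SAML2"),
+        #)
+        #WEBSSO_IDP_MAPPING = {
+        #    "acme_oidc": ("myidp1", "oidc"),
+        #    "acme_saml2": ("myidp2", "saml2"),
+        #}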
+
+        # Disable SSL certificate checks (useful for self-signed certificates):
+        #OPENSTACK_SSL_NO_VERIFY = True
+
+        # The CA certificate to use to verify SSL connections
+        #OPENSTACK_SSL_CACERT = '/path/to/cacert.pem'
+
+        # The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
+        # capabilities of the auth backend for Keystone.
+        # If Keystone has been configured to use LDAP as the auth backend then set
+        # can_edit_user to False and name to 'ldap'.
+        #
+        # TODO(tres): Remove these once Keystone has an API to identify auth backend.
+        OPENSTACK_KEYSTONE_BACKEND = {
+            'name': 'native',
+            'can_edit_user': True,
+            'can_edit_group': True,
+            'can_edit_project': True,
+            'can_edit_domain': True,
+            'can_edit_role': True,
+        }
+
+        # Setting this to True will add a new "Retrieve Password" action on instance,
+        # allowing Admin session password retrieval/decryption.
+        #OPENSTACK_ENABLE_PASSWORD_RETRIEVE = False
+
+        # The Launch Instance user experience has been significantly enhanced.
+        # You can choose whether to enable the new launch instance experience,
+        # the legacy experience, or both. The legacy experience will be removed
+        # in a future release, but is available as a temporary backup setting to ensure
+        # compatibility with existing deployments. Further development will not be
+        # done on the legacy experience. Please report any problems with the new
+        # experience via the Launchpad tracking system.
+        #
+        # Toggle LAUNCH_INSTANCE_LEGACY_ENABLED and LAUNCH_INSTANCE_NG_ENABLED to
+        # determine the experience to enable. Set them both to true to enable
+        # both.
+        #LAUNCH_INSTANCE_LEGACY_ENABLED = True
+        #LAUNCH_INSTANCE_NG_ENABLED = False
+
+        # The Xen Hypervisor has the ability to set the mount point for volumes
+        # attached to instances (other Hypervisors currently do not). Setting
+        # can_set_mount_point to True will add the option to set the mount point
+        # from the UI.
+        OPENSTACK_HYPERVISOR_FEATURES = {
+            'can_set_mount_point': False,
+            'can_set_password': False,
+        }
+
+        # The OPENSTACK_CINDER_FEATURES settings can be used to enable optional
+        # services provided by cinder that are not exposed by its extension API.
+        OPENSTACK_CINDER_FEATURES = {
+            'enable_backup': {{ .Values.conf.horizon.local_settings.config.openstack_cinder_features.enable_backup }},
+        }
+
+        # The OPENSTACK_NEUTRON_NETWORK settings can be used to enable optional
+        # services provided by neutron. Options currently available are load
+        # balancer service, security groups, quotas, VPN service.
+        OPENSTACK_NEUTRON_NETWORK = {
+            'enable_router': {{ .Values.conf.horizon.local_settings.config.openstack_neutron_network.enable_router }},
+            'enable_quotas': {{ .Values.conf.horizon.local_settings.config.openstack_neutron_network.enable_quotas }},
+            'enable_ipv6': {{ .Values.conf.horizon.local_settings.config.openstack_neutron_network.enable_ipv6 }},
+            'enable_distributed_router': {{ .Values.conf.horizon.local_settings.config.openstack_neutron_network.enable_distributed_router }},
+            'enable_ha_router': {{ .Values.conf.horizon.local_settings.config.openstack_neutron_network.enable_ha_router }},
+            'enable_lb': {{ .Values.conf.horizon.local_settings.config.openstack_neutron_network.enable_lb }},
+            'enable_firewall': {{ .Values.conf.horizon.local_settings.config.openstack_neutron_network.enable_firewall }},
+            'enable_vpn': {{ .Values.conf.horizon.local_settings.config.openstack_neutron_network.enable_vpn }},
+            'enable_fip_topology_check': {{ .Values.conf.horizon.local_settings.config.openstack_neutron_network.enable_fip_topology_check }},
+
+            # The profile_support option is used to detect if an external router can be
+            # configured via the dashboard. When using specific plugins the
+            # profile_support can be turned on if needed.
+            'profile_support': None,
+            #'profile_support': 'cisco',
+
+            # Set which provider network types are supported. Only the network types
+            # in this list will be available to choose from when creating a network.
+            # Network types include local, flat, vlan, gre, and vxlan.
+            'supported_provider_types': ['*'],
+
+            # Set which VNIC types are supported for port binding. Only the VNIC
+            # types in this list will be available to choose from when creating a
+            # port.
+            # VNIC types include 'normal', 'macvtap' and 'direct'.
+            'supported_vnic_types': ['*']
+        }
+
+        # The OPENSTACK_IMAGE_BACKEND settings can be used to customize features
+        # in the OpenStack Dashboard related to the Image service, such as the list
+        # of supported image formats.
+        #OPENSTACK_IMAGE_BACKEND = {
+        #    'image_formats': [
+        #        ('', _('Select format')),
+        #        ('aki', _('AKI - Amazon Kernel Image')),
+        #        ('ami', _('AMI - Amazon Machine Image')),
+        #        ('ari', _('ARI - Amazon Ramdisk Image')),
+        #        ('docker', _('Docker')),
+        #        ('iso', _('ISO - Optical Disk Image')),
+        #        ('ova', _('OVA - Open Virtual Appliance')),
+        #        ('qcow2', _('QCOW2 - QEMU Emulator')),
+        #        ('raw', _('Raw')),
+        #        ('vdi', _('VDI - Virtual Disk Image')),
+        #        ('vhd', _('VHD - Virtual Hard Disk')),
+        #        ('vmdk', _('VMDK - Virtual Machine Disk')),
+        #    ]
+        #}
+
+        # The IMAGE_CUSTOM_PROPERTY_TITLES setting is used to customize the titles for
+        # image custom property attributes that appear on image detail pages.
+        IMAGE_CUSTOM_PROPERTY_TITLES = {
+            "architecture": _("Architecture"),
+            "kernel_id": _("Kernel ID"),
+            "ramdisk_id": _("Ramdisk ID"),
+            "image_state": _("Euca2ools state"),
+            "project_id": _("Project ID"),
+            "image_type": _("Image Type"),
+        }
+
+        # The IMAGE_RESERVED_CUSTOM_PROPERTIES setting is used to specify which image
+        # custom properties should not be displayed in the Image Custom Properties
+        # table.
+        IMAGE_RESERVED_CUSTOM_PROPERTIES = []
+
+        # OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
+        # in the Keystone service catalog. Use this setting when Horizon is running
+        # external to the OpenStack environment. The default is 'publicURL'.
+        OPENSTACK_ENDPOINT_TYPE = "publicURL"
+
+        # SECONDARY_ENDPOINT_TYPE specifies the fallback endpoint type to use in the
+        # case that OPENSTACK_ENDPOINT_TYPE is not present in the endpoints
+        # in the Keystone service catalog. Use this setting when Horizon is running
+        # external to the OpenStack environment. The default is None. This
+        # value should differ from OPENSTACK_ENDPOINT_TYPE if used.
+        SECONDARY_ENDPOINT_TYPE = "publicURL"
+
+        # The number of objects (Swift containers/objects or images) to display
+        # on a single page before providing a paging element (a "more" link)
+        # to paginate results.
+        API_RESULT_LIMIT = 1000
+        API_RESULT_PAGE_SIZE = 20
+
+        # The chunk size, in bytes, for downloading objects from Swift
+        SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
+
+        # Specify a maximum number of items to display in a dropdown.
+        DROPDOWN_MAX_ITEMS = 30
+
+        # The timezone of the server. This should correspond with the timezone
+        # of your entire OpenStack installation, and hopefully be in UTC.
+        TIME_ZONE = "UTC"
+
+        # When launching an instance, the menu of available flavors is
+        # sorted by RAM usage, ascending. If you would like a different sort order,
+        # you can provide another flavor attribute as sorting key. Alternatively, you
+        # can provide a custom callback method to use for sorting. You can also provide
+        # a flag for reverse sort. For more info, see
+        # http://docs.python.org/2/library/functions.html#sorted
+        #CREATE_INSTANCE_FLAVOR_SORT = {
+        #    'key': 'name',
+        #    # or
+        #    'key': my_awesome_callback_method,
+        #    'reverse': False,
+        #}
+
+        # Set this to True to display an 'Admin Password' field on the Change Password
+        # form to verify that it is indeed the admin logged-in who wants to change
+        # the password.
+        # ENFORCE_PASSWORD_CHECK = False
+
+        # Modules that provide /auth routes that can be used to handle different types
+        # of user authentication. Add auth plugins that require extra route handling to
+        # this list.
+        #AUTHENTICATION_URLS = [
+        #    'openstack_auth.urls',
+        #]
+
+        # The Horizon Policy Enforcement engine uses these values to load per service
+        # policy rule files. The content of these files should match the files the
+        # OpenStack services are using to determine role based access control in the
+        # target installation.
+
+        # Path to directory containing policy.json files
+        POLICY_FILES_PATH = '/etc/openstack-dashboard'
+        # Map of local copy of service policy files
+        #POLICY_FILES = {
+        #    'identity': 'keystone_policy.json',
+        #    'compute': 'nova_policy.json',
+        #    'volume': 'cinder_policy.json',
+        #    'image': 'glance_policy.json',
+        #    'orchestration': 'heat_policy.json',
+        #    'network': 'neutron_policy.json',
+        #    'telemetry': 'ceilometer_policy.json',
+        #}
+
+        # Trove user and database extension support. By default support for
+        # creating users and databases on database instances is turned on.
+        # To disable these extensions set the permission here to something
+        # unusable such as ["!"].
+        # TROVE_ADD_USER_PERMS = []
+        # TROVE_ADD_DATABASE_PERMS = []
+
+        # Change this path to the appropriate static directory containing
+        # two files: _variables.scss and _styles.scss
+        #CUSTOM_THEME_PATH = 'static/themes/default'
+
+        LOGGING = {
+            'version': 1,
+            # When set to True this will disable all logging except
+            # for loggers specified in this configuration dictionary. Note that
+            # if nothing is specified here and disable_existing_loggers is True,
+            # django.db.backends will still log unless it is disabled explicitly.
+            'disable_existing_loggers': False,
+            'handlers': {
+                'null': {
+                    'level': 'DEBUG',
+                    'class': 'django.utils.log.NullHandler',
+                },
+                'console': {
+                    # Set the level to "DEBUG" for verbose output logging.
+                    'level': 'INFO',
+                    'class': 'logging.StreamHandler',
+                },
+            },
+            'loggers': {
+                # Logging from django.db.backends is VERY verbose, send to null
+                # by default.
+                'django.db.backends': {
+                    'handlers': ['null'],
+                    'propagate': False,
+                },
+                'requests': {
+                    'handlers': ['null'],
+                    'propagate': False,
+                },
+                'horizon': {
+                    'handlers': ['console'],
+                    'level': 'DEBUG',
+                    'propagate': False,
+                },
+                'openstack_dashboard': {
+                    'handlers': ['console'],
+                    'level': 'DEBUG',
+                    'propagate': False,
+                },
+                'novaclient': {
+                    'handlers': ['console'],
+                    'level': 'DEBUG',
+                    'propagate': False,
+                },
+                'cinderclient': {
+                    'handlers': ['console'],
+                    'level': 'DEBUG',
+                    'propagate': False,
+                },
+                'glanceclient': {
+                    'handlers': ['console'],
+                    'level': 'DEBUG',
+                    'propagate': False,
+                },
+                'neutronclient': {
+                    'handlers': ['console'],
+                    'level': 'DEBUG',
+                    'propagate': False,
+                },
+                'heatclient': {
+                    'handlers': ['console'],
+                    'level': 'DEBUG',
+                    'propagate': False,
+                },
+                'ceilometerclient': {
+                    'handlers': ['console'],
+                    'level': 'DEBUG',
+                    'propagate': False,
+                },
+                'troveclient': {
+                    'handlers': ['console'],
+                    'level': 'DEBUG',
+                    'propagate': False,
+                },
+                'swiftclient': {
+                    'handlers': ['console'],
+                    'level': 'DEBUG',
+                    'propagate': False,
+                },
+                'openstack_auth': {
+                    'handlers': ['console'],
+                    'level': 'DEBUG',
+                    'propagate': False,
+                },
+                'nose.plugins.manager': {
+                    'handlers': ['console'],
+                    'level': 'DEBUG',
+                    'propagate': False,
+                },
+                'django': {
+                    'handlers': ['console'],
+                    'level': 'DEBUG',
+                    'propagate': False,
+                },
+                'iso8601': {
+                    'handlers': ['null'],
+                    'propagate': False,
+                },
+                'scss': {
+                    'handlers': ['null'],
+                    'propagate': False,
+                },
+            }
+        }
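+        # Every enabled handler above writes to the console stream so that
+        # container logs can be collected by the cluster's log pipeline; chatty
+        # loggers (django.db.backends, requests, iso8601, scss) are routed to
+        # the null handler instead.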
+
+        # 'direction' should not be specified for all_tcp/udp/icmp.
+        # It is specified in the form.
+        SECURITY_GROUP_RULES = {
+            'all_tcp': {
+                'name': _('All TCP'),
+                'ip_protocol': 'tcp',
+                'from_port': '1',
+                'to_port': '65535',
+            },
+            'all_udp': {
+                'name': _('All UDP'),
+                'ip_protocol': 'udp',
+                'from_port': '1',
+                'to_port': '65535',
+            },
+            'all_icmp': {
+                'name': _('All ICMP'),
+                'ip_protocol': 'icmp',
+                'from_port': '-1',
+                'to_port': '-1',
+            },
+            'ssh': {
+                'name': 'SSH',
+                'ip_protocol': 'tcp',
+                'from_port': '22',
+                'to_port': '22',
+            },
+            'smtp': {
+                'name': 'SMTP',
+                'ip_protocol': 'tcp',
+                'from_port': '25',
+                'to_port': '25',
+            },
+            'dns': {
+                'name': 'DNS',
+                'ip_protocol': 'tcp',
+                'from_port': '53',
+                'to_port': '53',
+            },
+            'http': {
+                'name': 'HTTP',
+                'ip_protocol': 'tcp',
+                'from_port': '80',
+                'to_port': '80',
+            },
+            'pop3': {
+                'name': 'POP3',
+                'ip_protocol': 'tcp',
+                'from_port': '110',
+                'to_port': '110',
+            },
+            'imap': {
+                'name': 'IMAP',
+                'ip_protocol': 'tcp',
+                'from_port': '143',
+                'to_port': '143',
+            },
+            'ldap': {
+                'name': 'LDAP',
+                'ip_protocol': 'tcp',
+                'from_port': '389',
+                'to_port': '389',
+            },
+            'https': {
+                'name': 'HTTPS',
+                'ip_protocol': 'tcp',
+                'from_port': '443',
+                'to_port': '443',
+            },
+            'smtps': {
+                'name': 'SMTPS',
+                'ip_protocol': 'tcp',
+                'from_port': '465',
+                'to_port': '465',
+            },
+            'imaps': {
+                'name': 'IMAPS',
+                'ip_protocol': 'tcp',
+                'from_port': '993',
+                'to_port': '993',
+            },
+            'pop3s': {
+                'name': 'POP3S',
+                'ip_protocol': 'tcp',
+                'from_port': '995',
+                'to_port': '995',
+            },
+            'ms_sql': {
+                'name': 'MS SQL',
+                'ip_protocol': 'tcp',
+                'from_port': '1433',
+                'to_port': '1433',
+            },
+            'mysql': {
+                'name': 'MYSQL',
+                'ip_protocol': 'tcp',
+                'from_port': '3306',
+                'to_port': '3306',
+            },
+            'rdp': {
+                'name': 'RDP',
+                'ip_protocol': 'tcp',
+                'from_port': '3389',
+                'to_port': '3389',
+            },
+        }
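+        # Additional entries follow the same shape; a hypothetical rule for
+        # PostgreSQL, for example, could be added to the dict above as:
+        #
+        #'postgresql': {
+        #    'name': 'PostgreSQL',
+        #    'ip_protocol': 'tcp',
+        #    'from_port': '5432',
+        #    'to_port': '5432',
+        #},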
+
+        # Deprecation Notice:
+        #
+        # The setting FLAVOR_EXTRA_KEYS has been deprecated.
+        # Please load extra spec metadata into the Glance Metadata Definition Catalog.
+        #
+        # The sample quota definitions can be found in:
+        # /etc/metadefs/compute-quota.json
+        #
+        # The metadata definition catalog supports CLI and API:
+        #   $glance --os-image-api-version 2 help md-namespace-import
+        #   $glance-manage db_load_metadefs
+        #
+        # See Metadata Definitions on: http://docs.openstack.org/developer/glance/
+
+        # Indicate to the Sahara data processing service whether or not
+        # automatic floating IP allocation is in effect. If it is not
+        # in effect, the user will be prompted to choose a floating IP
+        # pool for use in their cluster. False by default. You would want
+        # to set this to True if you were running Nova Networking with
+        # auto_assign_floating_ip = True.
+        #SAHARA_AUTO_IP_ALLOCATION_ENABLED = False
+
+        # The hash algorithm to use for authentication tokens. This must
+        # match the hash algorithm that the identity server and the
+        # auth_token middleware are using. Allowed values are the
+        # algorithms supported by Python's hashlib library.
+        #OPENSTACK_TOKEN_HASH_ALGORITHM = 'md5'
+
+        # AngularJS requires some settings to be made available to
+        # the client side. Some settings are required by in-tree / built-in horizon
+        # features. These settings must be added to REST_API_REQUIRED_SETTINGS in the
+        # form of ['SETTING_1','SETTING_2'], etc.
+        #
+        # You may remove settings from this list for security purposes, but do so at
+        # the risk of breaking a built-in horizon feature. These settings are required
+        # for horizon to function properly. Only remove them if you know what you
+        # are doing. These settings may in the future be moved to be defined within
+        # the enabled panel configuration.
+        # You should not add settings to this list for out of tree extensions.
+        # See: https://wiki.openstack.org/wiki/Horizon/RESTAPI
+        REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES',
+                                      'LAUNCH_INSTANCE_DEFAULTS',
+                                      'OPENSTACK_IMAGE_FORMATS']
+
+        # Additional settings can be made available to the client side for
+        # extensibility by specifying them in REST_API_ADDITIONAL_SETTINGS
+        # !! Please use extreme caution as the settings are transferred via HTTP/S
+        # and are not encrypted on the browser. This is an experimental API and
+        # may be deprecated in the future without notice.
+        #REST_API_ADDITIONAL_SETTINGS = []
+
+        # DISALLOW_IFRAME_EMBED can be used to prevent Horizon from being embedded
+        # within an iframe. Legacy browsers are still vulnerable to a Cross-Frame
+        # Scripting (XFS) vulnerability, so this option allows extra security hardening
+        # where iframes are not used in deployment. Default setting is True.
+        # For more information see:
+        # http://tinyurl.com/anticlickjack
+        # DISALLOW_IFRAME_EMBED = True
+
+        STATIC_ROOT = '/var/www/html/horizon'
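+    # The per-service mappings below are rendered to policy files (e.g.
+    # nova_policy.json) under POLICY_FILES_PATH (/etc/openstack-dashboard),
+    # where horizon's policy engine reads them; they should mirror the rules
+    # enforced by the target services themselves.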
+    policy:
+      ceilometer:
+        context_is_admin: 'role:admin'
+        context_is_owner: 'user_id:%(target.user_id)s'
+        context_is_project: 'project_id:%(target.project_id)s'
+        segregation: 'rule:context_is_admin'
+      cinder:
+        admin_api: 'is_admin:True'
+        admin_or_owner: 'is_admin:True or project_id:%(project_id)s'
+        'backup:backup-export': 'rule:admin_api'
+        'backup:backup-import': 'rule:admin_api'
+        'backup:create': ''
+        'backup:delete': 'rule:admin_or_owner'
+        'backup:get': 'rule:admin_or_owner'
+        'backup:get_all': 'rule:admin_or_owner'
+        'backup:restore': 'rule:admin_or_owner'
+        'consistencygroup:create': 'group:nobody'
+        'consistencygroup:create_cgsnapshot': 'group:nobody'
+        'consistencygroup:delete': 'group:nobody'
+        'consistencygroup:delete_cgsnapshot': 'group:nobody'
+        'consistencygroup:get': 'group:nobody'
+        'consistencygroup:get_all': 'group:nobody'
+        'consistencygroup:get_all_cgsnapshots': 'group:nobody'
+        'consistencygroup:get_cgsnapshot': 'group:nobody'
+        'consistencygroup:update': 'group:nobody'
+        context_is_admin: 'role:admin'
+        default: 'rule:admin_or_owner'
+        'message:delete': 'rule:admin_or_owner'
+        'message:get': 'rule:admin_or_owner'
+        'message:get_all': 'rule:admin_or_owner'
+        'scheduler_extension:scheduler_stats:get_pools': 'rule:admin_api'
+        'snapshot_extension:snapshot_actions:update_snapshot_status': ''
+        'snapshot_extension:snapshot_manage': 'rule:admin_api'
+        'snapshot_extension:snapshot_unmanage': 'rule:admin_api'
+        'volume:accept_transfer': ''
+        'volume:create': ''
+        'volume:create_snapshot': 'rule:admin_or_owner'
+        'volume:create_transfer': 'rule:admin_or_owner'
+        'volume:delete': 'rule:admin_or_owner'
+        'volume:delete_snapshot': 'rule:admin_or_owner'
+        'volume:delete_snapshot_metadata': 'rule:admin_or_owner'
+        'volume:delete_transfer': 'rule:admin_or_owner'
+        'volume:delete_volume_metadata': 'rule:admin_or_owner'
+        'volume:extend': 'rule:admin_or_owner'
+        'volume:failover_host': 'rule:admin_api'
+        'volume:freeze_host': 'rule:admin_api'
+        'volume:get': 'rule:admin_or_owner'
+        'volume:get_all': 'rule:admin_or_owner'
+        'volume:get_all_snapshots': 'rule:admin_or_owner'
+        'volume:get_all_transfers': 'rule:admin_or_owner'
+        'volume:get_snapshot': 'rule:admin_or_owner'
+        'volume:get_snapshot_metadata': 'rule:admin_or_owner'
+        'volume:get_transfer': 'rule:admin_or_owner'
+        'volume:get_volume_admin_metadata': 'rule:admin_api'
+        'volume:get_volume_metadata': 'rule:admin_or_owner'
+        'volume:retype': 'rule:admin_or_owner'
+        'volume:thaw_host': 'rule:admin_api'
+        'volume:update': 'rule:admin_or_owner'
+        'volume:update_readonly_flag': 'rule:admin_or_owner'
+        'volume:update_snapshot': 'rule:admin_or_owner'
+        'volume:update_snapshot_metadata': 'rule:admin_or_owner'
+        'volume:update_volume_admin_metadata': 'rule:admin_api'
+        'volume:update_volume_metadata': 'rule:admin_or_owner'
+        'volume_extension:access_types_extra_specs': 'rule:admin_api'
+        'volume_extension:access_types_qos_specs_id': 'rule:admin_api'
+        'volume_extension:backup_admin_actions:force_delete': 'rule:admin_api'
+        'volume_extension:backup_admin_actions:reset_status': 'rule:admin_api'
+        'volume_extension:capabilities': 'rule:admin_api'
+        'volume_extension:extended_snapshot_attributes': 'rule:admin_or_owner'
+        'volume_extension:hosts': 'rule:admin_api'
+        'volume_extension:quota_classes': 'rule:admin_api'
+        'volume_extension:quota_classes:validate_setup_for_nested_quota_use': 'rule:admin_api'
+        'volume_extension:quotas:delete': 'rule:admin_api'
+        'volume_extension:quotas:show': ''
+        'volume_extension:quotas:update': 'rule:admin_api'
+        'volume_extension:replication:promote': 'rule:admin_api'
+        'volume_extension:replication:reenable': 'rule:admin_api'
+        'volume_extension:services:index': 'rule:admin_api'
+        'volume_extension:services:update': 'rule:admin_api'
+        'volume_extension:snapshot_admin_actions:force_delete': 'rule:admin_api'
+        'volume_extension:snapshot_admin_actions:reset_status': 'rule:admin_api'
+        'volume_extension:types_extra_specs': 'rule:admin_api'
+        'volume_extension:types_manage': 'rule:admin_api'
+        'volume_extension:volume_actions:upload_image': 'rule:admin_or_owner'
+        'volume_extension:volume_actions:upload_public': 'rule:admin_api'
+        'volume_extension:volume_admin_actions:force_delete': 'rule:admin_api'
+        'volume_extension:volume_admin_actions:force_detach': 'rule:admin_api'
+        'volume_extension:volume_admin_actions:migrate_volume': 'rule:admin_api'
+        'volume_extension:volume_admin_actions:migrate_volume_completion': 'rule:admin_api'
+        'volume_extension:volume_admin_actions:reset_status': 'rule:admin_api'
+        'volume_extension:volume_encryption_metadata': 'rule:admin_or_owner'
+        'volume_extension:volume_host_attribute': 'rule:admin_api'
+        'volume_extension:volume_image_metadata': 'rule:admin_or_owner'
+        'volume_extension:volume_manage': 'rule:admin_api'
+        'volume_extension:volume_mig_status_attribute': 'rule:admin_api'
+        'volume_extension:volume_tenant_attribute': 'rule:admin_or_owner'
+        'volume_extension:volume_type_access': 'rule:admin_or_owner'
+        'volume_extension:volume_type_access:addProjectAccess': 'rule:admin_api'
+        'volume_extension:volume_type_access:removeProjectAccess': 'rule:admin_api'
+        'volume_extension:volume_type_encryption': 'rule:admin_api'
+        'volume_extension:volume_unmanage': 'rule:admin_api'
+      glance:
+        add_image: ''
+        add_member: ''
+        add_metadef_namespace: ''
+        add_metadef_object: ''
+        add_metadef_property: ''
+        add_metadef_resource_type_association: ''
+        add_task: ''
+        admin_or_owner: 'is_admin:True or project_id:%(project_id)s'
+        context_is_admin: 'role:admin'
+        copy_from: ''
+        default: 'rule:admin_or_owner'
+        delete_image: 'rule:admin_or_owner'
+        delete_image_location: ''
+        delete_member: ''
+        delete_metadef_namespace: ''
+        download_image: ''
+        get_image: ''
+        get_image_location: ''
+        get_images: ''
+        get_member: ''
+        get_members: ''
+        get_metadef_namespace: ''
+        get_metadef_namespaces: ''
+        get_metadef_object: ''
+        get_metadef_objects: ''
+        get_metadef_properties: ''
+        get_metadef_property: ''
+        get_task: ''
+        get_tasks: ''
+        list_metadef_resource_types: ''
+        manage_image_cache: 'role:admin'
+        modify_image: 'rule:admin_or_owner'
+        modify_member: ''
+        modify_metadef_namespace: ''
+        modify_metadef_object: ''
+        modify_metadef_property: ''
+        modify_task: ''
+        publicize_image: ''
+        set_image_location: ''
+        upload_image: ''
+      heat:
+        'actions:action': 'rule:deny_stack_user'
+        'build_info:build_info': 'rule:deny_stack_user'
+        'cloudformation:CancelUpdateStack': 'rule:deny_stack_user'
+        'cloudformation:CreateStack': 'rule:deny_stack_user'
+        'cloudformation:DeleteStack': 'rule:deny_stack_user'
+        'cloudformation:DescribeStackEvents': 'rule:deny_stack_user'
+        'cloudformation:DescribeStackResource': ''
+        'cloudformation:DescribeStackResources': 'rule:deny_stack_user'
+        'cloudformation:DescribeStacks': 'rule:deny_stack_user'
+        'cloudformation:EstimateTemplateCost': 'rule:deny_stack_user'
+        'cloudformation:GetTemplate': 'rule:deny_stack_user'
+        'cloudformation:ListStackResources': 'rule:deny_stack_user'
+        'cloudformation:ListStacks': 'rule:deny_stack_user'
+        'cloudformation:UpdateStack': 'rule:deny_stack_user'
+        'cloudformation:ValidateTemplate': 'rule:deny_stack_user'
+        'cloudwatch:DeleteAlarms': 'rule:deny_stack_user'
+        'cloudwatch:DescribeAlarmHistory': 'rule:deny_stack_user'
+        'cloudwatch:DescribeAlarms': 'rule:deny_stack_user'
+        'cloudwatch:DescribeAlarmsForMetric': 'rule:deny_stack_user'
+        'cloudwatch:DisableAlarmActions': 'rule:deny_stack_user'
+        'cloudwatch:EnableAlarmActions': 'rule:deny_stack_user'
+        'cloudwatch:GetMetricStatistics': 'rule:deny_stack_user'
+        'cloudwatch:ListMetrics': 'rule:deny_stack_user'
+        'cloudwatch:PutMetricAlarm': 'rule:deny_stack_user'
+        'cloudwatch:PutMetricData': ''
+        'cloudwatch:SetAlarmState': 'rule:deny_stack_user'
+        context_is_admin: 'role:admin'
+        deny_everybody: '!'
+        deny_stack_user: 'not role:heat_stack_user'
+        'events:index': 'rule:deny_stack_user'
+        'events:show': 'rule:deny_stack_user'
+        'resource:index': 'rule:deny_stack_user'
+        'resource:mark_unhealthy': 'rule:deny_stack_user'
+        'resource:metadata': ''
+        'resource:show': 'rule:deny_stack_user'
+        'resource:signal': ''
+        'resource_types:OS::Cinder::EncryptedVolumeType': 'rule:context_is_admin'
+        'resource_types:OS::Cinder::VolumeType': 'rule:context_is_admin'
+        'resource_types:OS::Manila::ShareType': 'rule:context_is_admin'
+        'resource_types:OS::Neutron::QoSBandwidthLimitRule': 'rule:context_is_admin'
+        'resource_types:OS::Neutron::QoSPolicy': 'rule:context_is_admin'
+        'resource_types:OS::Nova::Flavor': 'rule:context_is_admin'
+        'resource_types:OS::Nova::HostAggregate': 'rule:context_is_admin'
+        'service:index': 'rule:context_is_admin'
+        'software_configs:create': 'rule:deny_stack_user'
+        'software_configs:delete': 'rule:deny_stack_user'
+        'software_configs:global_index': 'rule:deny_everybody'
+        'software_configs:index': 'rule:deny_stack_user'
+        'software_configs:show': 'rule:deny_stack_user'
+        'software_deployments:create': 'rule:deny_stack_user'
+        'software_deployments:delete': 'rule:deny_stack_user'
+        'software_deployments:index': 'rule:deny_stack_user'
+        'software_deployments:metadata': ''
+        'software_deployments:show': 'rule:deny_stack_user'
+        'software_deployments:update': 'rule:deny_stack_user'
+        'stacks:abandon': 'rule:deny_stack_user'
+        'stacks:create': 'rule:deny_stack_user'
+        'stacks:delete': 'rule:deny_stack_user'
+        'stacks:delete_snapshot': 'rule:deny_stack_user'
+        'stacks:detail': 'rule:deny_stack_user'
+        'stacks:environment': 'rule:deny_stack_user'
+        'stacks:export': 'rule:deny_stack_user'
+        'stacks:generate_template': 'rule:deny_stack_user'
+        'stacks:global_index': 'rule:deny_everybody'
+        'stacks:index': 'rule:deny_stack_user'
+        'stacks:list_outputs': 'rule:deny_stack_user'
+        'stacks:list_resource_types': 'rule:deny_stack_user'
+        'stacks:list_snapshots': 'rule:deny_stack_user'
+        'stacks:list_template_functions': 'rule:deny_stack_user'
+        'stacks:list_template_versions': 'rule:deny_stack_user'
+        'stacks:lookup': ''
+        'stacks:preview': 'rule:deny_stack_user'
+        'stacks:preview_update': 'rule:deny_stack_user'
+        'stacks:preview_update_patch': 'rule:deny_stack_user'
+        'stacks:resource_schema': 'rule:deny_stack_user'
+        'stacks:restore_snapshot': 'rule:deny_stack_user'
+        'stacks:show': 'rule:deny_stack_user'
+        'stacks:show_output': 'rule:deny_stack_user'
+        'stacks:show_snapshot': 'rule:deny_stack_user'
+        'stacks:snapshot': 'rule:deny_stack_user'
+        'stacks:template': 'rule:deny_stack_user'
+        'stacks:update': 'rule:deny_stack_user'
+        'stacks:update_patch': 'rule:deny_stack_user'
+        'stacks:validate_template': 'rule:deny_stack_user'
+      keystone:
+        admin_or_owner: 'rule:admin_required or rule:owner'
+        admin_or_token_subject: 'rule:admin_required or rule:token_subject'
+        admin_required: 'role:admin or is_admin:1'
+        default: 'rule:admin_required'
+        'identity:add_endpoint_group_to_project': 'rule:admin_required'
+        'identity:add_endpoint_to_project': 'rule:admin_required'
+        'identity:add_user_to_group': 'rule:admin_required'
+        'identity:authorize_request_token': 'rule:admin_required'
+        'identity:change_password': 'rule:admin_or_owner'
+        'identity:check_endpoint_in_project': 'rule:admin_required'
+        'identity:check_grant': 'rule:admin_required'
+        'identity:check_implied_role': 'rule:admin_required'
+        'identity:check_policy_association_for_endpoint': 'rule:admin_required'
+        'identity:check_policy_association_for_region_and_service': 'rule:admin_required'
+        'identity:check_policy_association_for_service': 'rule:admin_required'
+        'identity:check_token': 'rule:admin_or_token_subject'
+        'identity:check_user_in_group': 'rule:admin_required'
+        'identity:create_consumer': 'rule:admin_required'
+        'identity:create_credential': 'rule:admin_required'
+        'identity:create_domain': 'rule:admin_required'
+        'identity:create_domain_config': 'rule:admin_required'
+        'identity:create_domain_role': 'rule:admin_required'
+        'identity:create_endpoint': 'rule:admin_required'
+        'identity:create_endpoint_group': 'rule:admin_required'
+        'identity:create_grant': 'rule:admin_required'
+        'identity:create_group': 'rule:admin_required'
+        'identity:create_identity_provider': 'rule:admin_required'
+        'identity:create_implied_role': 'rule:admin_required'
+        'identity:create_mapping': 'rule:admin_required'
+        'identity:create_policy': 'rule:admin_required'
+        'identity:create_policy_association_for_endpoint': 'rule:admin_required'
+        'identity:create_policy_association_for_region_and_service': 'rule:admin_required'
+        'identity:create_policy_association_for_service': 'rule:admin_required'
+        'identity:create_project': 'rule:admin_required'
+        'identity:create_protocol': 'rule:admin_required'
+        'identity:create_region': 'rule:admin_required'
+        'identity:create_role': 'rule:admin_required'
+        'identity:create_service': 'rule:admin_required'
+        'identity:create_service_provider': 'rule:admin_required'
+        'identity:create_trust': 'user_id:%(trust.trustor_user_id)s'
+        'identity:create_user': 'rule:admin_required'
+        'identity:delete_access_token': 'rule:admin_required'
+        'identity:delete_consumer': 'rule:admin_required'
+        'identity:delete_credential': 'rule:admin_required'
+        'identity:delete_domain': 'rule:admin_required'
+        'identity:delete_domain_config': 'rule:admin_required'
+        'identity:delete_domain_role': 'rule:admin_required'
+        'identity:delete_endpoint': 'rule:admin_required'
+        'identity:delete_endpoint_group': 'rule:admin_required'
+        'identity:delete_group': 'rule:admin_required'
+        'identity:delete_identity_provider': 'rule:admin_required'
+        'identity:delete_implied_role': 'rule:admin_required'
+        'identity:delete_mapping': 'rule:admin_required'
+        'identity:delete_policy': 'rule:admin_required'
+        'identity:delete_policy_association_for_endpoint': 'rule:admin_required'
+        'identity:delete_policy_association_for_region_and_service': 'rule:admin_required'
+        'identity:delete_policy_association_for_service': 'rule:admin_required'
+        'identity:delete_project': 'rule:admin_required'
+        'identity:delete_protocol': 'rule:admin_required'
+        'identity:delete_region': 'rule:admin_required'
+        'identity:delete_role': 'rule:admin_required'
+        'identity:delete_service': 'rule:admin_required'
+        'identity:delete_service_provider': 'rule:admin_required'
+        'identity:delete_trust': ''
+        'identity:delete_user': 'rule:admin_required'
+        'identity:ec2_create_credential': 'rule:admin_or_owner'
+        'identity:ec2_delete_credential': 'rule:admin_required or (rule:owner and user_id:%(target.credential.user_id)s)'
+        'identity:ec2_get_credential': 'rule:admin_required or (rule:owner and user_id:%(target.credential.user_id)s)'
+        'identity:ec2_list_credentials': 'rule:admin_or_owner'
+        'identity:get_access_token': 'rule:admin_required'
+        'identity:get_access_token_role': 'rule:admin_required'
+        'identity:get_auth_catalog': ''
+        'identity:get_auth_domains': ''
+        'identity:get_auth_projects': ''
+        'identity:get_consumer': 'rule:admin_required'
'identity:get_credential': 'rule:admin_required' + 'identity:get_domain': 'rule:admin_required' + 'identity:get_domain_config': 'rule:admin_required' + 'identity:get_domain_config_default': 'rule:admin_required' + 'identity:get_domain_role': 'rule:admin_required' + 'identity:get_endpoint': 'rule:admin_required' + 'identity:get_endpoint_group': 'rule:admin_required' + 'identity:get_endpoint_group_in_project': 'rule:admin_required' + 'identity:get_group': 'rule:admin_required' + 'identity:get_identity_providers': 'rule:admin_required' + 'identity:get_implied_role': 'rule:admin_required' + 'identity:get_mapping': 'rule:admin_required' + 'identity:get_policy': 'rule:admin_required' + 'identity:get_policy_for_endpoint': 'rule:admin_required' + 'identity:get_project': 'rule:admin_required or project_id:%(target.project.id)s' + 'identity:get_protocol': 'rule:admin_required' + 'identity:get_region': '' + 'identity:get_role': 'rule:admin_required' + 'identity:get_role_for_trust': '' + 'identity:get_service': 'rule:admin_required' + 'identity:get_service_provider': 'rule:admin_required' + 'identity:get_user': 'rule:admin_required' + 'identity:list_access_token_roles': 'rule:admin_required' + 'identity:list_access_tokens': 'rule:admin_required' + 'identity:list_consumers': 'rule:admin_required' + 'identity:list_credentials': 'rule:admin_required' + 'identity:list_domain_roles': 'rule:admin_required' + 'identity:list_domains': 'rule:admin_required' + 'identity:list_domains_for_groups': '' + 'identity:list_endpoint_groups': 'rule:admin_required' + 'identity:list_endpoint_groups_for_project': 'rule:admin_required' + 'identity:list_endpoints': 'rule:admin_required' + 'identity:list_endpoints_associated_with_endpoint_group': 'rule:admin_required' + 'identity:list_endpoints_for_policy': 'rule:admin_required' + 'identity:list_endpoints_for_project': 'rule:admin_required' + 'identity:list_grants': 'rule:admin_required' + 'identity:list_groups': 'rule:admin_required' + 'identity:list_groups_for_user': 'rule:admin_or_owner' + 'identity:list_identity_providers': 'rule:admin_required' + 'identity:list_implied_roles': 'rule:admin_required' + 'identity:list_mappings': 'rule:admin_required' + 'identity:list_policies': 'rule:admin_required' + 'identity:list_projects': 'rule:admin_required' + 'identity:list_projects_associated_with_endpoint_group': 'rule:admin_required' + 'identity:list_projects_for_endpoint': 'rule:admin_required' + 'identity:list_projects_for_groups': '' + 'identity:list_protocols': 'rule:admin_required' + 'identity:list_regions': '' + 'identity:list_revoke_events': '' + 'identity:list_role_assignments': 'rule:admin_required' + 'identity:list_role_assignments_for_tree': 'rule:admin_required' + 'identity:list_role_inference_rules': 'rule:admin_required' + 'identity:list_roles': 'rule:admin_required' + 'identity:list_roles_for_trust': '' + 'identity:list_service_providers': 'rule:admin_required' + 'identity:list_services': 'rule:admin_required' + 'identity:list_trusts': '' + 'identity:list_user_projects': 'rule:admin_or_owner' + 'identity:list_users': 'rule:admin_required' + 'identity:list_users_in_group': 'rule:admin_required' + 'identity:remove_endpoint_from_project': 'rule:admin_required' + 'identity:remove_endpoint_group_from_project': 'rule:admin_required' + 'identity:remove_user_from_group': 'rule:admin_required' + 'identity:revocation_list': 'rule:service_or_admin' + 'identity:revoke_grant': 'rule:admin_required' + 'identity:revoke_token': 'rule:admin_or_token_subject' +
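
A note on how these strings are evaluated, before the remaining identity rules: every value above is either empty (always allowed) or composed from a few named aliases (admin_required, owner, token_subject) joined with 'or'/'and', plus attribute matches such as user_id:%(target.token.user_id)s. The toy Python evaluator below is a sketch of those semantics only, not the real oslo.policy parser (which also handles parentheses and 'not'); the rule table is a subset of the values above and the credentials are invented for the example:

    # Toy evaluator for oslo.policy-style rule strings; illustrative only.
    RULES = {
        # A few of the aliases defined in the values above.
        'admin_required': 'role:admin or is_admin:1',
        'owner': 'user_id:%(user_id)s',
        'admin_or_owner': 'rule:admin_required or rule:owner',
    }

    def check(expr, target, creds):
        if expr == '':                  # empty rule: always allowed
            return True
        if ' or ' in expr:              # 'or' binds loosest, left to right
            return any(check(p, target, creds) for p in expr.split(' or '))
        if ' and ' in expr:
            return all(check(p, target, creds) for p in expr.split(' and '))
        kind, _, value = expr.partition(':')
        if kind == 'rule':              # follow a named alias
            return check(RULES[value], target, creds)
        if kind == 'role':              # role carried by the request token
            return value in creds.get('roles', [])
        if value.startswith('%(') and value.endswith(')s'):
            # attribute match against the target, e.g. user_id:%(user_id)s
            return creds.get(kind) == target.get(value[2:-2])
        return str(creds.get(kind)) == value   # literal, e.g. is_admin:1

    creds = {'roles': ['member'], 'user_id': 'u1'}
    print(check('rule:admin_or_owner', {'user_id': 'u1'}, creds))  # True
    print(check('rule:admin_required', {}, creds))                 # False

The remaining identity rules, and the neutron and nova maps that follow, all reduce to these same primitives.
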
'identity:update_consumer': 'rule:admin_required' + 'identity:update_credential': 'rule:admin_required' + 'identity:update_domain': 'rule:admin_required' + 'identity:update_domain_config': 'rule:admin_required' + 'identity:update_domain_role': 'rule:admin_required' + 'identity:update_endpoint': 'rule:admin_required' + 'identity:update_endpoint_group': 'rule:admin_required' + 'identity:update_group': 'rule:admin_required' + 'identity:update_identity_provider': 'rule:admin_required' + 'identity:update_mapping': 'rule:admin_required' + 'identity:update_policy': 'rule:admin_required' + 'identity:update_project': 'rule:admin_required' + 'identity:update_protocol': 'rule:admin_required' + 'identity:update_region': 'rule:admin_required' + 'identity:update_role': 'rule:admin_required' + 'identity:update_service': 'rule:admin_required' + 'identity:update_service_provider': 'rule:admin_required' + 'identity:update_user': 'rule:admin_required' + 'identity:validate_token': 'rule:service_admin_or_token_subject' + 'identity:validate_token_head': 'rule:service_or_admin' + owner: 'user_id:%(user_id)s' + service_admin_or_token_subject: 'rule:service_or_admin or rule:token_subject' + service_or_admin: 'rule:admin_required or rule:service_role' + service_role: 'role:service' + token_subject: 'user_id:%(target.token.user_id)s' + neutron: + add_router_interface: 'rule:admin_or_owner' + admin_only: 'rule:context_is_admin' + admin_or_network_owner: 'rule:context_is_admin or tenant_id:%(network:tenant_id)s' + admin_or_owner: 'rule:context_is_admin or rule:owner' + admin_owner_or_network_owner: 'rule:owner or rule:admin_or_network_owner' + context_is_admin: 'role:admin' + context_is_advsvc: 'role:advsvc' + create_address_scope: '' + 'create_address_scope:shared': 'rule:admin_only' + create_dhcp-network: 'rule:admin_only' + create_firewall: '' + 'create_firewall:shared': 'rule:admin_only' + create_firewall_policy: '' + 'create_firewall_policy:shared': 'rule:admin_or_owner' + create_firewall_rule: '' + create_flavor: 'rule:admin_only' + create_flavor_service_profile: 'rule:admin_only' + create_floatingip: 'rule:regular_user' + 'create_floatingip:floating_ip_address': 'rule:admin_only' + create_l3-router: 'rule:admin_only' + create_lsn: 'rule:admin_only' + create_metering_label: 'rule:admin_only' + create_metering_label_rule: 'rule:admin_only' + create_network: '' + 'create_network:is_default': 'rule:admin_only' + 'create_network:provider:network_type': 'rule:admin_only' + 'create_network:provider:physical_network': 'rule:admin_only' + 'create_network:provider:segmentation_id': 'rule:admin_only' + 'create_network:router:external': 'rule:admin_only' + 'create_network:segments': 'rule:admin_only' + 'create_network:shared': 'rule:admin_only' + create_network_profile: 'rule:admin_only' + create_policy: 'rule:admin_only' + create_policy_bandwidth_limit_rule: 'rule:admin_only' + create_policy_dscp_marking_rule: 'rule:admin_only' + create_port: '' + 'create_port:allowed_address_pairs': 'rule:admin_or_network_owner' + 'create_port:binding:host_id': 'rule:admin_only' + 'create_port:binding:profile': 'rule:admin_only' + 'create_port:device_owner': 'not rule:network_device or rule:context_is_advsvc or rule:admin_or_network_owner' + 'create_port:fixed_ips': 'rule:context_is_advsvc or rule:admin_or_network_owner' + 'create_port:mac_address': 'rule:context_is_advsvc or rule:admin_or_network_owner' + 'create_port:mac_learning_enabled': 'rule:context_is_advsvc or rule:admin_or_network_owner' + 'create_port:port_security_enabled': 
'rule:context_is_advsvc or rule:admin_or_network_owner' + create_qos_queue: 'rule:admin_only' + create_rbac_policy: '' + 'create_rbac_policy:target_tenant': 'rule:restrict_wildcard' + create_router: 'rule:regular_user' + 'create_router:distributed': 'rule:admin_only' + 'create_router:external_gateway_info:enable_snat': 'rule:admin_only' + 'create_router:external_gateway_info:external_fixed_ips': 'rule:admin_only' + 'create_router:ha': 'rule:admin_only' + create_segment: 'rule:admin_only' + create_service_profile: 'rule:admin_only' + create_subnet: 'rule:admin_or_network_owner' + 'create_subnet:segment_id': 'rule:admin_only' + create_subnetpool: '' + 'create_subnetpool:is_default': 'rule:admin_only' + 'create_subnetpool:shared': 'rule:admin_only' + default: 'rule:admin_or_owner' + delete_address_scope: 'rule:admin_or_owner' + delete_agent: 'rule:admin_only' + delete_dhcp-network: 'rule:admin_only' + delete_firewall: 'rule:admin_or_owner' + delete_firewall_policy: 'rule:admin_or_owner' + delete_firewall_rule: 'rule:admin_or_owner' + delete_flavor: 'rule:admin_only' + delete_flavor_service_profile: 'rule:admin_only' + delete_floatingip: 'rule:admin_or_owner' + delete_l3-router: 'rule:admin_only' + delete_metering_label: 'rule:admin_only' + delete_metering_label_rule: 'rule:admin_only' + delete_network: 'rule:admin_or_owner' + delete_network_profile: 'rule:admin_only' + delete_policy: 'rule:admin_only' + delete_policy_bandwidth_limit_rule: 'rule:admin_only' + delete_policy_dscp_marking_rule: 'rule:admin_only' + delete_port: 'rule:context_is_advsvc or rule:admin_owner_or_network_owner' + delete_rbac_policy: 'rule:admin_or_owner' + delete_router: 'rule:admin_or_owner' + delete_segment: 'rule:admin_only' + delete_service_profile: 'rule:admin_only' + delete_subnet: 'rule:admin_or_network_owner' + delete_subnetpool: 'rule:admin_or_owner' + external: 'field:networks:router:external=True' + get_address_scope: 'rule:admin_or_owner or rule:shared_address_scopes' + get_agent: 'rule:admin_only' + get_agent-loadbalancers: 'rule:admin_only' + get_auto_allocated_topology: 'rule:admin_or_owner' + get_dhcp-agents: 'rule:admin_only' + get_dhcp-networks: 'rule:admin_only' + get_firewall: 'rule:admin_or_owner' + 'get_firewall:shared': 'rule:admin_only' + get_firewall_policy: 'rule:admin_or_owner or rule:shared_firewall_policies' + get_firewall_rule: 'rule:admin_or_owner or rule:shared_firewalls' + get_flavor: 'rule:regular_user' + get_flavor_service_profile: 'rule:regular_user' + get_flavors: 'rule:regular_user' + get_floatingip: 'rule:admin_or_owner' + get_l3-agents: 'rule:admin_only' + get_l3-routers: 'rule:admin_only' + get_loadbalancer-agent: 'rule:admin_only' + get_loadbalancer-hosting-agent: 'rule:admin_only' + get_loadbalancer-pools: 'rule:admin_only' + get_lsn: 'rule:admin_only' + get_metering_label: 'rule:admin_only' + get_metering_label_rule: 'rule:admin_only' + get_network: 'rule:admin_or_owner or rule:shared or rule:external or rule:context_is_advsvc' + 'get_network:provider:network_type': 'rule:admin_only' + 'get_network:provider:physical_network': 'rule:admin_only' + 'get_network:provider:segmentation_id': 'rule:admin_only' + 'get_network:queue_id': 'rule:admin_only' + 'get_network:router:external': 'rule:regular_user' + 'get_network:segments': 'rule:admin_only' + get_network_ip_availabilities: 'rule:admin_only' + get_network_ip_availability: 'rule:admin_only' + get_network_profile: '' + get_network_profiles: '' + get_policy: 'rule:regular_user' + get_policy_bandwidth_limit_rule: 
'rule:regular_user' + get_policy_dscp_marking_rule: 'rule:regular_user' + get_policy_profile: '' + get_policy_profiles: '' + get_port: 'rule:context_is_advsvc or rule:admin_owner_or_network_owner' + 'get_port:binding:host_id': 'rule:admin_only' + 'get_port:binding:profile': 'rule:admin_only' + 'get_port:binding:vif_details': 'rule:admin_only' + 'get_port:binding:vif_type': 'rule:admin_only' + 'get_port:queue_id': 'rule:admin_only' + get_qos_queue: 'rule:admin_only' + get_rbac_policy: 'rule:admin_or_owner' + get_router: 'rule:admin_or_owner' + 'get_router:distributed': 'rule:admin_only' + 'get_router:ha': 'rule:admin_only' + get_rule_type: 'rule:regular_user' + get_segment: 'rule:admin_only' + get_service_profile: 'rule:admin_only' + get_service_profiles: 'rule:admin_only' + get_service_provider: 'rule:regular_user' + get_subnet: 'rule:admin_or_owner or rule:shared' + 'get_subnet:segment_id': 'rule:admin_only' + get_subnetpool: 'rule:admin_or_owner or rule:shared_subnetpools' + insert_rule: 'rule:admin_or_owner' + network_device: 'field:port:device_owner=~^network:' + owner: 'tenant_id:%(tenant_id)s' + regular_user: '' + remove_router_interface: 'rule:admin_or_owner' + remove_rule: 'rule:admin_or_owner' + restrict_wildcard: '(not field:rbac_policy:target_tenant=*) or rule:admin_only' + shared: 'field:networks:shared=True' + shared_address_scopes: 'field:address_scopes:shared=True' + shared_firewall_policies: 'field:firewall_policies:shared=True' + shared_firewalls: 'field:firewalls:shared=True' + shared_subnetpools: 'field:subnetpools:shared=True' + update_address_scope: 'rule:admin_or_owner' + 'update_address_scope:shared': 'rule:admin_only' + update_agent: 'rule:admin_only' + update_firewall: 'rule:admin_or_owner' + 'update_firewall:shared': 'rule:admin_only' + update_firewall_policy: 'rule:admin_or_owner' + update_firewall_rule: 'rule:admin_or_owner' + update_flavor: 'rule:admin_only' + update_floatingip: 'rule:admin_or_owner' + update_network: 'rule:admin_or_owner' + 'update_network:provider:network_type': 'rule:admin_only' + 'update_network:provider:physical_network': 'rule:admin_only' + 'update_network:provider:segmentation_id': 'rule:admin_only' + 'update_network:router:external': 'rule:admin_only' + 'update_network:segments': 'rule:admin_only' + 'update_network:shared': 'rule:admin_only' + update_network_profile: 'rule:admin_only' + update_policy: 'rule:admin_only' + update_policy_bandwidth_limit_rule: 'rule:admin_only' + update_policy_dscp_marking_rule: 'rule:admin_only' + update_policy_profiles: 'rule:admin_only' + update_port: 'rule:admin_or_owner or rule:context_is_advsvc' + 'update_port:allowed_address_pairs': 'rule:admin_or_network_owner' + 'update_port:binding:host_id': 'rule:admin_only' + 'update_port:binding:profile': 'rule:admin_only' + 'update_port:device_owner': 'not rule:network_device or rule:context_is_advsvc or rule:admin_or_network_owner' + 'update_port:fixed_ips': 'rule:context_is_advsvc or rule:admin_or_network_owner' + 'update_port:mac_address': 'rule:admin_only or rule:context_is_advsvc' + 'update_port:mac_learning_enabled': 'rule:context_is_advsvc or rule:admin_or_network_owner' + 'update_port:port_security_enabled': 'rule:context_is_advsvc or rule:admin_or_network_owner' + update_rbac_policy: 'rule:admin_or_owner' + 'update_rbac_policy:target_tenant': 'rule:restrict_wildcard and rule:admin_or_owner' + 'update_router:distributed': 'rule:admin_only' + 'update_router:external_gateway_info:enable_snat': 'rule:admin_only' + 
'update_router:external_gateway_info:external_fixed_ips': 'rule:admin_only' + 'update_router:ha': 'rule:admin_only' + update_segment: 'rule:admin_only' + update_service_profile: 'rule:admin_only' + update_subnet: 'rule:admin_or_network_owner' + update_subnetpool: 'rule:admin_or_owner' + 'update_subnetpool:is_default': 'rule:admin_only' + nova: + admin_api: 'is_admin:True' + admin_or_owner: 'is_admin:True or project_id:%(project_id)s' + 'cells_scheduler_filter:TargetCellFilter': 'is_admin:True' + 'compute:add_fixed_ip': 'rule:admin_or_owner' + 'compute:attach_interface': 'rule:admin_or_owner' + 'compute:attach_volume': 'rule:admin_or_owner' + 'compute:backup': 'rule:admin_or_owner' + 'compute:confirm_resize': 'rule:admin_or_owner' + 'compute:create': 'rule:admin_or_owner' + 'compute:create:attach_network': 'rule:admin_or_owner' + 'compute:create:attach_volume': 'rule:admin_or_owner' + 'compute:create:forced_host': 'is_admin:True' + 'compute:delete': 'rule:admin_or_owner' + 'compute:delete_instance_metadata': 'rule:admin_or_owner' + 'compute:detach_interface': 'rule:admin_or_owner' + 'compute:detach_volume': 'rule:admin_or_owner' + 'compute:force_delete': 'rule:admin_or_owner' + 'compute:get': 'rule:admin_or_owner' + 'compute:get_all': 'rule:admin_or_owner' + 'compute:get_all_instance_metadata': 'rule:admin_or_owner' + 'compute:get_all_instance_system_metadata': 'rule:admin_or_owner' + 'compute:get_all_tenants': 'is_admin:True' + 'compute:get_console_output': 'rule:admin_or_owner' + 'compute:get_diagnostics': 'rule:admin_or_owner' + 'compute:get_instance_diagnostics': 'rule:admin_or_owner' + 'compute:get_instance_metadata': 'rule:admin_or_owner' + 'compute:get_mks_console': 'rule:admin_or_owner' + 'compute:get_rdp_console': 'rule:admin_or_owner' + 'compute:get_serial_console': 'rule:admin_or_owner' + 'compute:get_spice_console': 'rule:admin_or_owner' + 'compute:get_vnc_console': 'rule:admin_or_owner' + 'compute:inject_network_info': 'rule:admin_or_owner' + 'compute:lock': 'rule:admin_or_owner' + 'compute:pause': 'rule:admin_or_owner' + 'compute:reboot': 'rule:admin_or_owner' + 'compute:rebuild': 'rule:admin_or_owner' + 'compute:remove_fixed_ip': 'rule:admin_or_owner' + 'compute:rescue': 'rule:admin_or_owner' + 'compute:reset_network': 'rule:admin_or_owner' + 'compute:resize': 'rule:admin_or_owner' + 'compute:restore': 'rule:admin_or_owner' + 'compute:resume': 'rule:admin_or_owner' + 'compute:revert_resize': 'rule:admin_or_owner' + 'compute:security_groups:add_to_instance': 'rule:admin_or_owner' + 'compute:security_groups:remove_from_instance': 'rule:admin_or_owner' + 'compute:set_admin_password': 'rule:admin_or_owner' + 'compute:shelve': 'rule:admin_or_owner' + 'compute:shelve_offload': 'rule:admin_or_owner' + 'compute:snapshot': 'rule:admin_or_owner' + 'compute:snapshot_volume_backed': 'rule:admin_or_owner' + 'compute:soft_delete': 'rule:admin_or_owner' + 'compute:start': 'rule:admin_or_owner' + 'compute:stop': 'rule:admin_or_owner' + 'compute:suspend': 'rule:admin_or_owner' + 'compute:swap_volume': 'rule:admin_api' + 'compute:unlock': 'rule:admin_or_owner' + 'compute:unlock_override': 'rule:admin_api' + 'compute:unpause': 'rule:admin_or_owner' + 'compute:unrescue': 'rule:admin_or_owner' + 'compute:unshelve': 'rule:admin_or_owner' + 'compute:update': 'rule:admin_or_owner' + 'compute:update_instance_metadata': 'rule:admin_or_owner' + 'compute:volume_snapshot_create': 'rule:admin_or_owner' + 'compute:volume_snapshot_delete': 'rule:admin_or_owner' + 'compute_extension:accounts': 
'rule:admin_api' + 'compute_extension:admin_actions': 'rule:admin_api' + 'compute_extension:admin_actions:createBackup': 'rule:admin_or_owner' + 'compute_extension:admin_actions:injectNetworkInfo': 'rule:admin_api' + 'compute_extension:admin_actions:lock': 'rule:admin_or_owner' + 'compute_extension:admin_actions:migrate': 'rule:admin_api' + 'compute_extension:admin_actions:migrateLive': 'rule:admin_api' + 'compute_extension:admin_actions:pause': 'rule:admin_or_owner' + 'compute_extension:admin_actions:resetNetwork': 'rule:admin_api' + 'compute_extension:admin_actions:resetState': 'rule:admin_api' + 'compute_extension:admin_actions:resume': 'rule:admin_or_owner' + 'compute_extension:admin_actions:suspend': 'rule:admin_or_owner' + 'compute_extension:admin_actions:unlock': 'rule:admin_or_owner' + 'compute_extension:admin_actions:unpause': 'rule:admin_or_owner' + 'compute_extension:agents': 'rule:admin_api' + 'compute_extension:aggregates': 'rule:admin_api' + 'compute_extension:attach_interfaces': 'rule:admin_or_owner' + 'compute_extension:availability_zone:detail': 'rule:admin_api' + 'compute_extension:availability_zone:list': 'rule:admin_or_owner' + 'compute_extension:baremetal_nodes': 'rule:admin_api' + 'compute_extension:cells': 'rule:admin_api' + 'compute_extension:cells:create': 'rule:admin_api' + 'compute_extension:cells:delete': 'rule:admin_api' + 'compute_extension:cells:sync_instances': 'rule:admin_api' + 'compute_extension:cells:update': 'rule:admin_api' + 'compute_extension:certificates': 'rule:admin_or_owner' + 'compute_extension:cloudpipe': 'rule:admin_api' + 'compute_extension:cloudpipe_update': 'rule:admin_api' + 'compute_extension:config_drive': 'rule:admin_or_owner' + 'compute_extension:console_auth_tokens': 'rule:admin_api' + 'compute_extension:console_output': 'rule:admin_or_owner' + 'compute_extension:consoles': 'rule:admin_or_owner' + 'compute_extension:createserverext': 'rule:admin_or_owner' + 'compute_extension:deferred_delete': 'rule:admin_or_owner' + 'compute_extension:disk_config': 'rule:admin_or_owner' + 'compute_extension:evacuate': 'rule:admin_api' + 'compute_extension:extended_availability_zone': 'rule:admin_or_owner' + 'compute_extension:extended_ips': 'rule:admin_or_owner' + 'compute_extension:extended_ips_mac': 'rule:admin_or_owner' + 'compute_extension:extended_server_attributes': 'rule:admin_api' + 'compute_extension:extended_status': 'rule:admin_or_owner' + 'compute_extension:extended_vif_net': 'rule:admin_or_owner' + 'compute_extension:extended_volumes': 'rule:admin_or_owner' + 'compute_extension:fixed_ips': 'rule:admin_api' + 'compute_extension:flavor_access': 'rule:admin_or_owner' + 'compute_extension:flavor_access:addTenantAccess': 'rule:admin_api' + 'compute_extension:flavor_access:removeTenantAccess': 'rule:admin_api' + 'compute_extension:flavor_disabled': 'rule:admin_or_owner' + 'compute_extension:flavor_rxtx': 'rule:admin_or_owner' + 'compute_extension:flavor_swap': 'rule:admin_or_owner' + 'compute_extension:flavorextradata': 'rule:admin_or_owner' + 'compute_extension:flavorextraspecs:create': 'rule:admin_api' + 'compute_extension:flavorextraspecs:delete': 'rule:admin_api' + 'compute_extension:flavorextraspecs:index': 'rule:admin_or_owner' + 'compute_extension:flavorextraspecs:show': 'rule:admin_or_owner' + 'compute_extension:flavorextraspecs:update': 'rule:admin_api' + 'compute_extension:flavormanage': 'rule:admin_api' + 'compute_extension:floating_ip_dns': 'rule:admin_or_owner' + 'compute_extension:floating_ip_pools': 'rule:admin_or_owner' + 
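
The nova map carries both the legacy compute_extension:* policy names and, further down, their os_compute_api:* v2.1 counterparts, with admin-only actions pointing at rule:admin_api (defined above as is_admin:True). To sanity-check one of these entries outside a running service, oslo.policy can evaluate it directly; a minimal sketch, assuming the library's classic Rules.load and Enforcer APIs (the two rules in the snippet are copied from this block, everything else is invented for the example):

    from oslo_config import cfg
    from oslo_policy import policy

    # Build an enforcer from an inline rule set instead of a policy file.
    conf = cfg.ConfigOpts()
    rules = policy.Rules.load(
        '{"admin_api": "is_admin:True",'
        ' "compute_extension:admin_actions:migrate": "rule:admin_api"}')
    enforcer = policy.Enforcer(conf, rules=rules, use_conf=False)

    # Allowed only when the request context carries is_admin=True.
    print(enforcer.enforce('compute_extension:admin_actions:migrate',
                           {}, {'is_admin': True}))     # True
    print(enforcer.enforce('compute_extension:admin_actions:migrate',
                           {}, {'roles': ['member']}))  # False

The rest of the compute_extension rules continue below.
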
'compute_extension:floating_ips': 'rule:admin_or_owner' + 'compute_extension:floating_ips_bulk': 'rule:admin_api' + 'compute_extension:fping': 'rule:admin_or_owner' + 'compute_extension:fping:all_tenants': 'rule:admin_api' + 'compute_extension:hide_server_addresses': 'is_admin:False' + 'compute_extension:hosts': 'rule:admin_api' + 'compute_extension:hypervisors': 'rule:admin_api' + 'compute_extension:image_size': 'rule:admin_or_owner' + 'compute_extension:instance_actions': 'rule:admin_or_owner' + 'compute_extension:instance_actions:events': 'rule:admin_api' + 'compute_extension:instance_usage_audit_log': 'rule:admin_api' + 'compute_extension:keypairs': 'rule:admin_or_owner' + 'compute_extension:keypairs:create': 'rule:admin_or_owner' + 'compute_extension:keypairs:delete': 'rule:admin_or_owner' + 'compute_extension:keypairs:index': 'rule:admin_or_owner' + 'compute_extension:keypairs:show': 'rule:admin_or_owner' + 'compute_extension:migrations:index': 'rule:admin_api' + 'compute_extension:multinic': 'rule:admin_or_owner' + 'compute_extension:networks': 'rule:admin_api' + 'compute_extension:networks:view': 'rule:admin_or_owner' + 'compute_extension:networks_associate': 'rule:admin_api' + 'compute_extension:os-assisted-volume-snapshots:create': 'rule:admin_api' + 'compute_extension:os-assisted-volume-snapshots:delete': 'rule:admin_api' + 'compute_extension:os-server-external-events:create': 'rule:admin_api' + 'compute_extension:os-tenant-networks': 'rule:admin_or_owner' + 'compute_extension:quota_classes': 'rule:admin_or_owner' + 'compute_extension:quotas:delete': 'rule:admin_api' + 'compute_extension:quotas:show': 'rule:admin_or_owner' + 'compute_extension:quotas:update': 'rule:admin_api' + 'compute_extension:rescue': 'rule:admin_or_owner' + 'compute_extension:security_group_default_rules': 'rule:admin_api' + 'compute_extension:security_groups': 'rule:admin_or_owner' + 'compute_extension:server_diagnostics': 'rule:admin_api' + 'compute_extension:server_groups': 'rule:admin_or_owner' + 'compute_extension:server_password': 'rule:admin_or_owner' + 'compute_extension:server_usage': 'rule:admin_or_owner' + 'compute_extension:services': 'rule:admin_api' + 'compute_extension:shelve': 'rule:admin_or_owner' + 'compute_extension:shelveOffload': 'rule:admin_api' + 'compute_extension:simple_tenant_usage:list': 'rule:admin_api' + 'compute_extension:simple_tenant_usage:show': 'rule:admin_or_owner' + 'compute_extension:unshelve': 'rule:admin_or_owner' + 'compute_extension:used_limits_for_admin': 'rule:admin_api' + 'compute_extension:users': 'rule:admin_api' + 'compute_extension:virtual_interfaces': 'rule:admin_or_owner' + 'compute_extension:virtual_storage_arrays': 'rule:admin_or_owner' + 'compute_extension:volume_attachments:create': 'rule:admin_or_owner' + 'compute_extension:volume_attachments:delete': 'rule:admin_or_owner' + 'compute_extension:volume_attachments:index': 'rule:admin_or_owner' + 'compute_extension:volume_attachments:show': 'rule:admin_or_owner' + 'compute_extension:volume_attachments:update': 'rule:admin_api' + 'compute_extension:volumes': 'rule:admin_or_owner' + 'compute_extension:volumetypes': 'rule:admin_or_owner' + context_is_admin: 'role:admin' + default: 'rule:admin_or_owner' + 'network:add_dns_entry': 'rule:admin_or_owner' + 'network:add_fixed_ip_to_instance': 'rule:admin_or_owner' + 'network:add_network_to_project': 'rule:admin_or_owner' + 'network:allocate_floating_ip': 'rule:admin_or_owner' + 'network:allocate_for_instance': 'rule:admin_or_owner' + 'network:associate': 
'rule:admin_or_owner' + 'network:associate_floating_ip': 'rule:admin_or_owner' + 'network:attach_external_network': 'rule:admin_api' + 'network:create': 'rule:admin_or_owner' + 'network:create_private_dns_domain': 'rule:admin_or_owner' + 'network:create_public_dns_domain': 'rule:admin_or_owner' + 'network:deallocate_for_instance': 'rule:admin_or_owner' + 'network:delete': 'rule:admin_or_owner' + 'network:delete_dns_domain': 'rule:admin_or_owner' + 'network:delete_dns_entry': 'rule:admin_or_owner' + 'network:disassociate': 'rule:admin_or_owner' + 'network:disassociate_floating_ip': 'rule:admin_or_owner' + 'network:get': 'rule:admin_or_owner' + 'network:get_all': 'rule:admin_or_owner' + 'network:get_backdoor_port': 'rule:admin_or_owner' + 'network:get_dns_domains': 'rule:admin_or_owner' + 'network:get_dns_entries_by_address': 'rule:admin_or_owner' + 'network:get_dns_entries_by_name': 'rule:admin_or_owner' + 'network:get_fixed_ip': 'rule:admin_or_owner' + 'network:get_fixed_ip_by_address': 'rule:admin_or_owner' + 'network:get_floating_ip': 'rule:admin_or_owner' + 'network:get_floating_ip_by_address': 'rule:admin_or_owner' + 'network:get_floating_ip_pools': 'rule:admin_or_owner' + 'network:get_floating_ips_by_fixed_address': 'rule:admin_or_owner' + 'network:get_floating_ips_by_project': 'rule:admin_or_owner' + 'network:get_instance_id_by_floating_address': 'rule:admin_or_owner' + 'network:get_instance_nw_info': 'rule:admin_or_owner' + 'network:get_instance_uuids_by_ip_filter': 'rule:admin_or_owner' + 'network:get_vif_by_mac_address': 'rule:admin_or_owner' + 'network:get_vifs_by_instance': 'rule:admin_or_owner' + 'network:migrate_instance_finish': 'rule:admin_or_owner' + 'network:migrate_instance_start': 'rule:admin_or_owner' + 'network:modify_dns_entry': 'rule:admin_or_owner' + 'network:release_floating_ip': 'rule:admin_or_owner' + 'network:remove_fixed_ip_from_instance': 'rule:admin_or_owner' + 'network:setup_networks_on_host': 'rule:admin_or_owner' + 'network:validate_networks': 'rule:admin_or_owner' + 'os_compute_api:extension_info:discoverable': '@' + 'os_compute_api:extensions': 'rule:admin_or_owner' + 'os_compute_api:extensions:discoverable': '@' + 'os_compute_api:flavors': 'rule:admin_or_owner' + 'os_compute_api:flavors:discoverable': '@' + 'os_compute_api:image-size': 'rule:admin_or_owner' + 'os_compute_api:image-size:discoverable': '@' + 'os_compute_api:images:discoverable': '@' + 'os_compute_api:ips:discoverable': '@' + 'os_compute_api:ips:index': 'rule:admin_or_owner' + 'os_compute_api:ips:show': 'rule:admin_or_owner' + 'os_compute_api:limits': 'rule:admin_or_owner' + 'os_compute_api:limits:discoverable': '@' + 'os_compute_api:os-access-ips': 'rule:admin_or_owner' + 'os_compute_api:os-access-ips:discoverable': '@' + 'os_compute_api:os-admin-actions': 'rule:admin_api' + 'os_compute_api:os-admin-actions:discoverable': '@' + 'os_compute_api:os-admin-actions:inject_network_info': 'rule:admin_api' + 'os_compute_api:os-admin-actions:reset_network': 'rule:admin_api' + 'os_compute_api:os-admin-actions:reset_state': 'rule:admin_api' + 'os_compute_api:os-admin-password': 'rule:admin_or_owner' + 'os_compute_api:os-admin-password:discoverable': '@' + 'os_compute_api:os-agents': 'rule:admin_api' + 'os_compute_api:os-agents:discoverable': '@' + 'os_compute_api:os-aggregates:add_host': 'rule:admin_api' + 'os_compute_api:os-aggregates:create': 'rule:admin_api' + 'os_compute_api:os-aggregates:delete': 'rule:admin_api' + 'os_compute_api:os-aggregates:discoverable': '@' + 
'os_compute_api:os-aggregates:index': 'rule:admin_api' + 'os_compute_api:os-aggregates:remove_host': 'rule:admin_api' + 'os_compute_api:os-aggregates:set_metadata': 'rule:admin_api' + 'os_compute_api:os-aggregates:show': 'rule:admin_api' + 'os_compute_api:os-aggregates:update': 'rule:admin_api' + 'os_compute_api:os-assisted-volume-snapshots:create': 'rule:admin_api' + 'os_compute_api:os-assisted-volume-snapshots:delete': 'rule:admin_api' + 'os_compute_api:os-assisted-volume-snapshots:discoverable': '@' + 'os_compute_api:os-attach-interfaces': 'rule:admin_or_owner' + 'os_compute_api:os-attach-interfaces:discoverable': '@' + 'os_compute_api:os-availability-zone:detail': 'rule:admin_api' + 'os_compute_api:os-availability-zone:discoverable': '@' + 'os_compute_api:os-availability-zone:list': 'rule:admin_or_owner' + 'os_compute_api:os-baremetal-nodes': 'rule:admin_api' + 'os_compute_api:os-baremetal-nodes:discoverable': '@' + 'os_compute_api:os-block-device-mapping-v1:discoverable': '@' + 'os_compute_api:os-cells': 'rule:admin_api' + 'os_compute_api:os-cells:create': 'rule:admin_api' + 'os_compute_api:os-cells:delete': 'rule:admin_api' + 'os_compute_api:os-cells:discoverable': '@' + 'os_compute_api:os-cells:sync_instances': 'rule:admin_api' + 'os_compute_api:os-cells:update': 'rule:admin_api' + 'os_compute_api:os-certificates:create': 'rule:admin_or_owner' + 'os_compute_api:os-certificates:discoverable': '@' + 'os_compute_api:os-certificates:show': 'rule:admin_or_owner' + 'os_compute_api:os-cloudpipe': 'rule:admin_api' + 'os_compute_api:os-cloudpipe:discoverable': '@' + 'os_compute_api:os-config-drive': 'rule:admin_or_owner' + 'os_compute_api:os-config-drive:discoverable': '@' + 'os_compute_api:os-console-auth-tokens': 'rule:admin_api' + 'os_compute_api:os-console-auth-tokens:discoverable': '@' + 'os_compute_api:os-console-output': 'rule:admin_or_owner' + 'os_compute_api:os-console-output:discoverable': '@' + 'os_compute_api:os-consoles:create': 'rule:admin_or_owner' + 'os_compute_api:os-consoles:delete': 'rule:admin_or_owner' + 'os_compute_api:os-consoles:discoverable': '@' + 'os_compute_api:os-consoles:index': 'rule:admin_or_owner' + 'os_compute_api:os-consoles:show': 'rule:admin_or_owner' + 'os_compute_api:os-create-backup': 'rule:admin_or_owner' + 'os_compute_api:os-create-backup:discoverable': '@' + 'os_compute_api:os-deferred-delete': 'rule:admin_or_owner' + 'os_compute_api:os-deferred-delete:discoverable': '@' + 'os_compute_api:os-disk-config': 'rule:admin_or_owner' + 'os_compute_api:os-disk-config:discoverable': '@' + 'os_compute_api:os-evacuate': 'rule:admin_api' + 'os_compute_api:os-evacuate:discoverable': '@' + 'os_compute_api:os-extended-availability-zone': 'rule:admin_or_owner' + 'os_compute_api:os-extended-availability-zone:discoverable': '@' + 'os_compute_api:os-extended-server-attributes': 'rule:admin_api' + 'os_compute_api:os-extended-server-attributes:discoverable': '@' + 'os_compute_api:os-extended-status': 'rule:admin_or_owner' + 'os_compute_api:os-extended-status:discoverable': '@' + 'os_compute_api:os-extended-volumes': 'rule:admin_or_owner' + 'os_compute_api:os-extended-volumes:discoverable': '@' + 'os_compute_api:os-fixed-ips': 'rule:admin_api' + 'os_compute_api:os-fixed-ips:discoverable': '@' + 'os_compute_api:os-flavor-access': 'rule:admin_or_owner' + 'os_compute_api:os-flavor-access:add_tenant_access': 'rule:admin_api' + 'os_compute_api:os-flavor-access:discoverable': '@' + 'os_compute_api:os-flavor-access:remove_tenant_access': 'rule:admin_api' + 
'os_compute_api:os-flavor-extra-specs:create': 'rule:admin_api' + 'os_compute_api:os-flavor-extra-specs:delete': 'rule:admin_api' + 'os_compute_api:os-flavor-extra-specs:discoverable': '@' + 'os_compute_api:os-flavor-extra-specs:index': 'rule:admin_or_owner' + 'os_compute_api:os-flavor-extra-specs:show': 'rule:admin_or_owner' + 'os_compute_api:os-flavor-extra-specs:update': 'rule:admin_api' + 'os_compute_api:os-flavor-manage': 'rule:admin_api' + 'os_compute_api:os-flavor-manage:discoverable': '@' + 'os_compute_api:os-flavor-rxtx': 'rule:admin_or_owner' + 'os_compute_api:os-flavor-rxtx:discoverable': '@' + 'os_compute_api:os-floating-ip-dns': 'rule:admin_or_owner' + 'os_compute_api:os-floating-ip-dns:discoverable': '@' + 'os_compute_api:os-floating-ip-dns:domain:delete': 'rule:admin_api' + 'os_compute_api:os-floating-ip-dns:domain:update': 'rule:admin_api' + 'os_compute_api:os-floating-ip-pools': 'rule:admin_or_owner' + 'os_compute_api:os-floating-ip-pools:discoverable': '@' + 'os_compute_api:os-floating-ips': 'rule:admin_or_owner' + 'os_compute_api:os-floating-ips-bulk': 'rule:admin_api' + 'os_compute_api:os-floating-ips-bulk:discoverable': '@' + 'os_compute_api:os-floating-ips:discoverable': '@' + 'os_compute_api:os-fping': 'rule:admin_or_owner' + 'os_compute_api:os-fping:all_tenants': 'rule:admin_api' + 'os_compute_api:os-fping:discoverable': '@' + 'os_compute_api:os-hide-server-addresses': 'is_admin:False' + 'os_compute_api:os-hide-server-addresses:discoverable': '@' + 'os_compute_api:os-hosts': 'rule:admin_api' + 'os_compute_api:os-hosts:discoverable': '@' + 'os_compute_api:os-hypervisors': 'rule:admin_api' + 'os_compute_api:os-hypervisors:discoverable': '@' + 'os_compute_api:os-instance-actions': 'rule:admin_or_owner' + 'os_compute_api:os-instance-actions:discoverable': '@' + 'os_compute_api:os-instance-actions:events': 'rule:admin_api' + 'os_compute_api:os-instance-usage-audit-log': 'rule:admin_api' + 'os_compute_api:os-instance-usage-audit-log:discoverable': '@' + 'os_compute_api:os-keypairs': 'rule:admin_or_owner' + 'os_compute_api:os-keypairs:create': 'rule:admin_api or user_id:%(user_id)s' + 'os_compute_api:os-keypairs:delete': 'rule:admin_api or user_id:%(user_id)s' + 'os_compute_api:os-keypairs:discoverable': '@' + 'os_compute_api:os-keypairs:index': 'rule:admin_api or user_id:%(user_id)s' + 'os_compute_api:os-keypairs:show': 'rule:admin_api or user_id:%(user_id)s' + 'os_compute_api:os-lock-server:discoverable': '@' + 'os_compute_api:os-lock-server:lock': 'rule:admin_or_owner' + 'os_compute_api:os-lock-server:unlock': 'rule:admin_or_owner' + 'os_compute_api:os-lock-server:unlock:unlock_override': 'rule:admin_api' + 'os_compute_api:os-migrate-server:discoverable': '@' + 'os_compute_api:os-migrate-server:migrate': 'rule:admin_api' + 'os_compute_api:os-migrate-server:migrate_live': 'rule:admin_api' + 'os_compute_api:os-migrations:discoverable': '@' + 'os_compute_api:os-migrations:index': 'rule:admin_api' + 'os_compute_api:os-multinic': 'rule:admin_or_owner' + 'os_compute_api:os-multinic:discoverable': '@' + 'os_compute_api:os-networks': 'rule:admin_api' + 'os_compute_api:os-networks-associate': 'rule:admin_api' + 'os_compute_api:os-networks-associate:discoverable': '@' + 'os_compute_api:os-networks:discoverable': '@' + 'os_compute_api:os-networks:view': 'rule:admin_or_owner' + 'os_compute_api:os-pause-server:discoverable': '@' + 'os_compute_api:os-pause-server:pause': 'rule:admin_or_owner' + 'os_compute_api:os-pause-server:unpause': 'rule:admin_or_owner' + 
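
Everything in this conf.policy tree lands on disk verbatim: the configmap templates in this patch render it with toJson (see the ironic configmap-etc hunk further down), so each key/value pair above becomes one entry in the policy.json that oslo.policy reads, and '@' means always allowed. A minimal sketch of that rendering step in Python, using a few sample keys from this block:

    import json

    # Mirror of what a chart's configmap-etc template does with
    # conf.policy: serialize the map straight into policy.json.
    conf_policy = {
        'os_compute_api:os-pause-server:pause': 'rule:admin_or_owner',
        'os_compute_api:os-migrate-server:migrate': 'rule:admin_api',
        'os_compute_api:os-lock-server:discoverable': '@',  # always allowed
    }

    print(json.dumps(conf_policy, indent=2, sort_keys=True))

The remaining nova entries continue below.
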
'os_compute_api:os-pci:detail': 'rule:admin_api' + 'os_compute_api:os-pci:discoverable': '@' + 'os_compute_api:os-pci:index': 'rule:admin_api' + 'os_compute_api:os-pci:pci_servers': 'rule:admin_or_owner' + 'os_compute_api:os-pci:show': 'rule:admin_api' + 'os_compute_api:os-personality:discoverable': '@' + 'os_compute_api:os-preserve-ephemeral-rebuild:discoverable': '@' + 'os_compute_api:os-quota-class-sets:discoverable': '@' + 'os_compute_api:os-quota-class-sets:show': 'is_admin:True or quota_class:%(quota_class)s' + 'os_compute_api:os-quota-class-sets:update': 'rule:admin_api' + 'os_compute_api:os-quota-sets:defaults': '@' + 'os_compute_api:os-quota-sets:delete': 'rule:admin_api' + 'os_compute_api:os-quota-sets:detail': 'rule:admin_api' + 'os_compute_api:os-quota-sets:discoverable': '@' + 'os_compute_api:os-quota-sets:show': 'rule:admin_or_owner' + 'os_compute_api:os-quota-sets:update': 'rule:admin_api' + 'os_compute_api:os-remote-consoles': 'rule:admin_or_owner' + 'os_compute_api:os-remote-consoles:discoverable': '@' + 'os_compute_api:os-rescue': 'rule:admin_or_owner' + 'os_compute_api:os-rescue:discoverable': '@' + 'os_compute_api:os-scheduler-hints:discoverable': '@' + 'os_compute_api:os-security-group-default-rules': 'rule:admin_api' + 'os_compute_api:os-security-group-default-rules:discoverable': '@' + 'os_compute_api:os-security-groups': 'rule:admin_or_owner' + 'os_compute_api:os-security-groups:discoverable': '@' + 'os_compute_api:os-server-diagnostics': 'rule:admin_api' + 'os_compute_api:os-server-diagnostics:discoverable': '@' + 'os_compute_api:os-server-external-events:create': 'rule:admin_api' + 'os_compute_api:os-server-external-events:discoverable': '@' + 'os_compute_api:os-server-groups': 'rule:admin_or_owner' + 'os_compute_api:os-server-groups:discoverable': '@' + 'os_compute_api:os-server-password': 'rule:admin_or_owner' + 'os_compute_api:os-server-password:discoverable': '@' + 'os_compute_api:os-server-tags:delete': '@' + 'os_compute_api:os-server-tags:delete_all': '@' + 'os_compute_api:os-server-tags:index': '@' + 'os_compute_api:os-server-tags:show': '@' + 'os_compute_api:os-server-tags:update': '@' + 'os_compute_api:os-server-tags:update_all': '@' + 'os_compute_api:os-server-usage': 'rule:admin_or_owner' + 'os_compute_api:os-server-usage:discoverable': '@' + 'os_compute_api:os-services': 'rule:admin_api' + 'os_compute_api:os-services:discoverable': '@' + 'os_compute_api:os-shelve:shelve': 'rule:admin_or_owner' + 'os_compute_api:os-shelve:shelve:discoverable': '@' + 'os_compute_api:os-shelve:shelve_offload': 'rule:admin_api' + 'os_compute_api:os-shelve:unshelve': 'rule:admin_or_owner' + 'os_compute_api:os-simple-tenant-usage:discoverable': '@' + 'os_compute_api:os-simple-tenant-usage:list': 'rule:admin_api' + 'os_compute_api:os-simple-tenant-usage:show': 'rule:admin_or_owner' + 'os_compute_api:os-suspend-server:discoverable': '@' + 'os_compute_api:os-suspend-server:resume': 'rule:admin_or_owner' + 'os_compute_api:os-suspend-server:suspend': 'rule:admin_or_owner' + 'os_compute_api:os-tenant-networks': 'rule:admin_or_owner' + 'os_compute_api:os-tenant-networks:discoverable': '@' + 'os_compute_api:os-used-limits': 'rule:admin_api' + 'os_compute_api:os-used-limits:discoverable': '@' + 'os_compute_api:os-user-data:discoverable': '@' + 'os_compute_api:os-virtual-interfaces': 'rule:admin_or_owner' + 'os_compute_api:os-virtual-interfaces:discoverable': '@' + 'os_compute_api:os-volumes': 'rule:admin_or_owner' + 'os_compute_api:os-volumes-attachments:create': 
'rule:admin_or_owner' + 'os_compute_api:os-volumes-attachments:delete': 'rule:admin_or_owner' + 'os_compute_api:os-volumes-attachments:discoverable': '@' + 'os_compute_api:os-volumes-attachments:index': 'rule:admin_or_owner' + 'os_compute_api:os-volumes-attachments:show': 'rule:admin_or_owner' + 'os_compute_api:os-volumes-attachments:update': 'rule:admin_api' + 'os_compute_api:os-volumes:discoverable': '@' + 'os_compute_api:server-metadata:create': 'rule:admin_or_owner' + 'os_compute_api:server-metadata:delete': 'rule:admin_or_owner' + 'os_compute_api:server-metadata:discoverable': '@' + 'os_compute_api:server-metadata:index': 'rule:admin_or_owner' + 'os_compute_api:server-metadata:show': 'rule:admin_or_owner' + 'os_compute_api:server-metadata:update': 'rule:admin_or_owner' + 'os_compute_api:server-metadata:update_all': 'rule:admin_or_owner' + 'os_compute_api:servers:confirm_resize': 'rule:admin_or_owner' + 'os_compute_api:servers:create': 'rule:admin_or_owner' + 'os_compute_api:servers:create:attach_network': 'rule:admin_or_owner' + 'os_compute_api:servers:create:attach_volume': 'rule:admin_or_owner' + 'os_compute_api:servers:create:forced_host': 'rule:admin_api' + 'os_compute_api:servers:create_image': 'rule:admin_or_owner' + 'os_compute_api:servers:create_image:allow_volume_backed': 'rule:admin_or_owner' + 'os_compute_api:servers:delete': 'rule:admin_or_owner' + 'os_compute_api:servers:detail': 'rule:admin_or_owner' + 'os_compute_api:servers:detail:get_all_tenants': 'is_admin:True' + 'os_compute_api:servers:discoverable': '@' + 'os_compute_api:servers:index': 'rule:admin_or_owner' + 'os_compute_api:servers:index:get_all_tenants': 'is_admin:True' + 'os_compute_api:servers:migrations:delete': 'rule:admin_api' + 'os_compute_api:servers:migrations:force_complete': 'rule:admin_api' + 'os_compute_api:servers:migrations:index': 'rule:admin_api' + 'os_compute_api:servers:migrations:show': 'rule:admin_api' + 'os_compute_api:servers:reboot': 'rule:admin_or_owner' + 'os_compute_api:servers:rebuild': 'rule:admin_or_owner' + 'os_compute_api:servers:resize': 'rule:admin_or_owner' + 'os_compute_api:servers:revert_resize': 'rule:admin_or_owner' + 'os_compute_api:servers:show': 'rule:admin_or_owner' + 'os_compute_api:servers:show:host_status': 'rule:admin_api' + 'os_compute_api:servers:start': 'rule:admin_or_owner' + 'os_compute_api:servers:stop': 'rule:admin_or_owner' + 'os_compute_api:servers:trigger_crash_dump': 'rule:admin_or_owner' + 'os_compute_api:servers:update': 'rule:admin_or_owner' dependencies: static: @@ -1236,7 +1951,7 @@ secrets: admin: horizon-db-admin horizon: horizon-db-user -# typically overriden by environmental +# typically overridden by environmental # values, but should include all endpoints # required by this chart endpoints: diff --git a/ingress/values.yaml b/ingress/values.yaml index 7511532009..625ea57a49 100644 --- a/ingress/values.yaml +++ b/ingress/values.yaml @@ -25,11 +25,11 @@ deployment: images: tags: - entrypoint: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + entrypoint: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 # https://github.com/kubernetes/ingress-nginx/blob/09524cd3363693463da5bf4a9bb3900da435ad05/Changelog.md#090 ingress: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 error_pages: gcr.io/google_containers/defaultbackend:1.0 - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: "IfNotPresent" pod: diff --git a/ironic/templates/configmap-etc.yaml 
b/ironic/templates/configmap-etc.yaml index 276289d9b6..73bd0def3e 100644 --- a/ironic/templates/configmap-etc.yaml +++ b/ironic/templates/configmap-etc.yaml @@ -194,22 +194,6 @@ data: {{ include "helm-toolkit.utils.to_oslo_conf" .Values.conf.ironic | indent 4 }} policy.json: | {{ toJson .Values.conf.policy | indent 4 }} - tftp-map-file: | -{{ if .Values.conf.tftp_map_file.override -}} -{{ .Values.conf.tftp_map_file.override | indent 4 }} -{{- else -}} -{{ tuple "etc/_tftp-map-file.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} -{{- end }} -{{- if .Values.conf.tftp_map_file.append -}} -{{ .Values.conf.tftp_map_file.append | indent 4 }} -{{- end }} - nginx.conf: | -{{ if .Values.conf.nginx.override -}} -{{ .Values.conf.nginx.override | indent 4 }} -{{- else -}} -{{ tuple "etc/_nginx.conf.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} -{{- end }} -{{- if .Values.conf.nginx.append -}} -{{ .Values.conf.nginx.append | indent 4 }} -{{- end }} -{{- end }} +{{- include "helm-toolkit.snippets.values_template_renderer" (dict "envAll" $envAll "template" .Values.conf.tftp_map_file "key" "tftp-map-file") | indent 2 }} +{{- include "helm-toolkit.snippets.values_template_renderer" (dict "envAll" $envAll "template" .Values.conf.nginx "key" "nginx.conf") | indent 2 }} +{{- end }} \ No newline at end of file diff --git a/ironic/templates/etc/_nginx.conf.tpl b/ironic/templates/etc/_nginx.conf.tpl deleted file mode 100644 index e070746b3c..0000000000 --- a/ironic/templates/etc/_nginx.conf.tpl +++ /dev/null @@ -1,41 +0,0 @@ -user nginx; -worker_processes 1; - -error_log /var/log/nginx/error.log warn; -pid /var/run/nginx.pid; - - -events { - worker_connections 1024; -} - - -http { - include /etc/nginx/mime.types; - default_type application/octet-stream; - - log_format main '$remote_addr - $remote_user [$time_local] "$request" ' - '$status $body_bytes_sent "$http_referer" ' - '"$http_user_agent" "$http_x_forwarded_for"'; - - access_log /var/log/nginx/access.log main; - - sendfile on; - #tcp_nopush on; - - keepalive_timeout 65; - - #gzip on; - - server { - listen OSH_PXE_IP:{{ tuple "baremetal" "internal" "pxe_http" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }}; - server_name localhost; - - #charset koi8-r; - #access_log /var/log/nginx/host.access.log main; - - location / { - root /var/lib/openstack-helm/httpboot; - } - } -} diff --git a/ironic/templates/etc/_tftp-map-file.tpl b/ironic/templates/etc/_tftp-map-file.tpl deleted file mode 100644 index 812abe0c5c..0000000000 --- a/ironic/templates/etc/_tftp-map-file.tpl +++ /dev/null @@ -1,4 +0,0 @@ -re ^(/tftpboot/) /tftpboot/\2 -re ^/tftpboot/ /tftpboot/ -re ^(^/) /tftpboot/\1 -re ^([^/]) /tftpboot/\1 diff --git a/ironic/templates/job-db-drop.yaml b/ironic/templates/job-db-drop.yaml new file mode 100644 index 0000000000..ced2e56e82 --- /dev/null +++ b/ironic/templates/job-db-drop.yaml @@ -0,0 +1,20 @@ +{{/* +Copyright 2017 The Openstack-Helm Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/}} + +{{- if .Values.manifests.job_db_drop }} +{{- $dbDropJob := dict "envAll" . "serviceName" "ironic" -}} +{{ $dbDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" }} +{{- end }} diff --git a/ironic/templates/service-ingress-api.yaml b/ironic/templates/service-ingress-api.yaml index 8b74b8cf1b..37ab0aa6c7 100644 --- a/ironic/templates/service-ingress-api.yaml +++ b/ironic/templates/service-ingress-api.yaml @@ -15,16 +15,6 @@ limitations under the License. */}} {{- if and .Values.manifests.service_ingress_api .Values.network.api.ingress.public }} -{{- $envAll := . }} ---- -apiVersion: v1 -kind: Service -metadata: - name: {{ tuple "baremetal" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - name: http - port: 80 - selector: - app: ingress-api +{{- $serviceIngressOpts := dict "envAll" . "backendServiceType" "baremetal" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/ironic/values.yaml b/ironic/values.yaml index e0f35d108f..9c9a966758 100644 --- a/ironic/values.yaml +++ b/ironic/values.yaml @@ -45,7 +45,7 @@ images: ironic_pxe: docker.io/openstackhelm/ironic:newton ironic_pxe_init: docker.io/openstackhelm/ironic:newton ironic_pxe_http: docker.io/nginx:1.13.3 - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: "IfNotPresent" conf: @@ -53,12 +53,40 @@ conf: override: append: policy: {} - tftp_map_file: - override: - append: - nginx: - override: - append: + tftp_map_file: | + re ^(/tftpboot/) /tftpboot/\2 + re ^/tftpboot/ /tftpboot/ + re ^(^/) /tftpboot/\1 + re ^([^/]) /tftpboot/\1 + nginx: | + user nginx; + worker_processes 1; + error_log /var/log/nginx/error.log warn; + pid /var/run/nginx.pid; + events { + worker_connections 1024; + } + http { + include /etc/nginx/mime.types; + default_type application/octet-stream; + log_format main '$remote_addr - $remote_user [$time_local] "$request" ' + '$status $body_bytes_sent "$http_referer" ' + '"$http_user_agent" "$http_x_forwarded_for"'; + access_log /var/log/nginx/access.log main; + sendfile on; + #tcp_nopush on; + keepalive_timeout 65; + #gzip on; + server { + listen OSH_PXE_IP:{{ tuple "baremetal" "internal" "pxe_http" . 
| include "helm-toolkit.endpoints.endpoint_port_lookup" }}; + server_name localhost; + #charset koi8-r; + #access_log /var/log/nginx/host.access.log main; + location / { + root /var/lib/openstack-helm/httpboot; + } + } + } ironic: DEFAULT: enabled_drivers: agent_ipmitool @@ -111,6 +139,11 @@ network: api: ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" + annotations: + nginx.ingress.kubernetes.io/rewrite-target: / node_port: enabled: false port: 30511 @@ -231,7 +264,7 @@ secrets: admin: ironic-rabbitmq-admin ironic: ironic-rabbitmq-user -# typically overriden by environmental +# typically overridden by environmental # values, but should include all endpoints # required by this chart endpoints: @@ -498,6 +531,7 @@ manifests: deployment_api: true ingress_api: true job_bootstrap: true + job_db_drop: false job_db_init: true job_db_sync: true job_ks_endpoints: true diff --git a/keystone/templates/bin/_domain-manage.py.tpl b/keystone/templates/bin/_domain-manage.py.tpl new file mode 100644 index 0000000000..c77ed20b85 --- /dev/null +++ b/keystone/templates/bin/_domain-manage.py.tpl @@ -0,0 +1,55 @@ +#!/usr/bin/python +{{/* +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} + +import json +import requests +import sys + +def main(args): + base_url, token, domainId, domainName, filename = args[1], args[2], args[3], args[4], args[5] + url = "%s/domains/%s/config" % (base_url, domainId) + print("Connecting to url: %r" % url) + + headers = { + 'Content-Type': "application/json", + 'X-Auth-Token': token, + 'Cache-Control': "no-cache" + } + + response = requests.request("GET", url, headers=headers) + + if response.status_code == 404: + print("domain config not found - put") + action = "PUT" + else: + print("domain config found - patch") + action = "PATCH" + + with open(filename, "rb") as f: + data = {"config": json.load(f)} + + response = requests.request(action, url, + data=json.dumps(data), + headers=headers) + + + print("Response code on action [%s]: %s" % (action, response.status_code)) + if (int(response.status_code) / 100) != 2: + sys.exit(1) + +if __name__ == "__main__": + if len(sys.argv) != 6: + sys.exit(1) + main(sys.argv) diff --git a/keystone/templates/bin/_domain-manage.sh.tpl b/keystone/templates/bin/_domain-manage.sh.tpl index 01a23ff792..9df3d842f7 100644 --- a/keystone/templates/bin/_domain-manage.sh.tpl +++ b/keystone/templates/bin/_domain-manage.sh.tpl @@ -16,7 +16,17 @@ See the License for the specific language governing permissions and limitations under the License. */}} -set -ex +set -e +endpt={{ tuple "identity" "internal" "api" . 
| include "helm-toolkit.endpoints.keystone_endpoint_uri_lookup" }} +path={{ .Values.conf.keystone.identity.domain_config_dir | default "/etc/keystonedomains" }} + {{- range $k, $v := .Values.conf.ks_domains }} -keystone-manage domain_config_upload --domain-name {{ $k }} || true + +filename=${path}/keystone.{{ $k }}.json +python /tmp/domain-manage.py \ + $endpt \ + $(openstack token issue -f value -c id) \ + $(openstack domain show {{ $k }} -f value -c id) \ + {{ $k }} $filename + {{- end }} diff --git a/keystone/templates/configmap-bin.yaml b/keystone/templates/configmap-bin.yaml index 99d3a6652f..206c832ed3 100644 --- a/keystone/templates/configmap-bin.yaml +++ b/keystone/templates/configmap-bin.yaml @@ -45,6 +45,8 @@ data: {{ tuple "bin/_domain-manage-init.sh.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} domain-manage.sh: | {{ tuple "bin/_domain-manage.sh.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} + domain-manage.py: | +{{ tuple "bin/_domain-manage.py.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} rabbit-init.sh: | {{- include "helm-toolkit.scripts.rabbit_init" . | indent 4 }} {{- end }} diff --git a/keystone/templates/configmap-etc.yaml b/keystone/templates/configmap-etc.yaml index bb52de8ccd..a6116d07af 100644 --- a/keystone/templates/configmap-etc.yaml +++ b/keystone/templates/configmap-etc.yaml @@ -50,7 +50,7 @@ data: sso_callback_template.html: | {{- tuple .Values.conf.sso_callback_template "etc/_sso_callback_template.html.tpl" . | include "helm-toolkit.utils.configmap_templater" }} {{- range $k, $v := .Values.conf.ks_domains }} - keystone.{{ $k }}.conf: | -{{ include "helm-toolkit.utils.to_oslo_conf" $v | indent 4 }} + keystone.{{ $k }}.json: | +{{ toJson $v | indent 4 }} {{- end }} {{- end }} diff --git a/keystone/templates/deployment-api.yaml b/keystone/templates/deployment-api.yaml index 072fd1aaa1..2828d39aa2 100644 --- a/keystone/templates/deployment-api.yaml +++ b/keystone/templates/deployment-api.yaml @@ -106,6 +106,12 @@ spec: mountPath: /tmp/keystone-api.sh subPath: keystone-api.sh readOnly: true +{{- if .Values.endpoints.ldap.auth.client.tls.ca }} + - name: keystone-ldap-tls + mountPath: /etc/keystone/ldap/tls.ca + subPath: tls.ca + readOnly: true +{{- end }} {{- if eq .Values.conf.keystone.token.provider "fernet" }} - name: keystone-fernet-keys mountPath: {{ .Values.conf.keystone.fernet_tokens.key_repository }} @@ -126,6 +132,11 @@ spec: configMap: name: keystone-bin defaultMode: 0555 +{{- if .Values.endpoints.ldap.auth.client.tls.ca }} + - name: keystone-ldap-tls + secret: + secretName: keystone-ldap-tls +{{- end }} {{- if eq .Values.conf.keystone.token.provider "fernet" }} - name: keystone-fernet-keys secret: diff --git a/keystone/templates/etc/_wsgi-keystone.conf.tpl b/keystone/templates/etc/_wsgi-keystone.conf.tpl index 6e126e1ef2..96966e512f 100644 --- a/keystone/templates/etc/_wsgi-keystone.conf.tpl +++ b/keystone/templates/etc/_wsgi-keystone.conf.tpl @@ -25,7 +25,7 @@ CustomLog /dev/stdout combined env=!forwarded CustomLog /dev/stdout proxy env=forwarded - WSGIDaemonProcess keystone-public processes=1 threads=4 user=keystone group=keystone display-name=%{GROUP} + WSGIDaemonProcess keystone-public processes=1 threads=1 user=keystone group=keystone display-name=%{GROUP} WSGIProcessGroup keystone-public WSGIScriptAlias / /var/www/cgi-bin/keystone/keystone-wsgi-public WSGIApplicationGroup %{GLOBAL} @@ -41,7 +41,7 @@ CustomLog /dev/stdout proxy env=forwarded - WSGIDaemonProcess keystone-admin processes=1 threads=4 
user=keystone group=keystone display-name=%{GROUP} + WSGIDaemonProcess keystone-admin processes=1 threads=1 user=keystone group=keystone display-name=%{GROUP} WSGIProcessGroup keystone-admin WSGIScriptAlias / /var/www/cgi-bin/keystone/keystone-wsgi-admin WSGIApplicationGroup %{GLOBAL} diff --git a/keystone/templates/job-db-drop.yaml b/keystone/templates/job-db-drop.yaml index a9039af124..d692c89fd6 100644 --- a/keystone/templates/job-db-drop.yaml +++ b/keystone/templates/job-db-drop.yaml @@ -15,77 +15,6 @@ limitations under the License. */}} {{- if .Values.manifests.job_db_drop }} -{{- $envAll := . }} -{{- $dependencies := .Values.dependencies.static.db_drop }} - -{{- $mounts_keystone_db_init := .Values.pod.mounts.keystone_db_init.keystone_db_init }} -{{- $mounts_keystone_db_init_init := .Values.pod.mounts.keystone_db_init.init_container }} - -{{- $randStringSuffix := randAlphaNum 5 | lower }} - -{{- $serviceAccountName := print "keystone-db-drop-" $randStringSuffix }} -{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: {{ print "keystone-db-drop-" $randStringSuffix }} - annotations: - "helm.sh/hook": pre-delete - "helm.sh/hook-delete-policy": hook-succeeded -spec: - template: - metadata: - labels: -{{ tuple $envAll "keystone" "db-drop" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }} - spec: - serviceAccountName: {{ $serviceAccountName }} - restartPolicy: OnFailure - nodeSelector: - {{ .Values.labels.job.node_selector_key }}: {{ .Values.labels.job.node_selector_value }} - initContainers: -{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }} - containers: - - name: keystone-db-drop - image: {{ .Values.images.tags.db_drop }} - imagePullPolicy: {{ .Values.images.pull_policy }} -{{ tuple $envAll $envAll.Values.pod.resources.jobs.db_drop | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} - env: - - name: ROOT_DB_CONNECTION - valueFrom: - secretKeyRef: - name: {{ .Values.secrets.oslo_db.admin }} - key: DB_CONNECTION - - name: OPENSTACK_CONFIG_FILE - value: /etc/keystone/keystone.conf - - name: OPENSTACK_CONFIG_DB_SECTION - value: database - - name: OPENSTACK_CONFIG_DB_KEY - value: connection - command: - - /tmp/db-drop.py - volumeMounts: - - name: etckeystone - mountPath: /etc/keystone - - name: keystone-etc - mountPath: /etc/keystone/keystone.conf - subPath: keystone.conf - readOnly: true - - name: keystone-bin - mountPath: /tmp/db-drop.py - subPath: db-drop.py - readOnly: true -{{ if $mounts_keystone_db_init.volumeMounts }}{{ toYaml $mounts_keystone_db_init.volumeMounts | indent 10 }}{{ end }} - volumes: - - name: etckeystone - emptyDir: {} - - name: keystone-etc - configMap: - name: keystone-etc - defaultMode: 0444 - - name: keystone-bin - configMap: - name: keystone-bin - defaultMode: 0555 -{{ if $mounts_keystone_db_init.volumes }}{{ toYaml $mounts_keystone_db_init.volumes | indent 6 }}{{ end }} +{{- $dbDropJob := dict "envAll" . 
"serviceName" "keystone" -}} +{{ $dbDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" }} {{- end }} diff --git a/keystone/templates/job-domain-manage.yaml b/keystone/templates/job-domain-manage.yaml index d6f51f34f7..d374c92d12 100644 --- a/keystone/templates/job-domain-manage.yaml +++ b/keystone/templates/job-domain-manage.yaml @@ -75,14 +75,18 @@ spec: mountPath: /tmp/domain-manage.sh subPath: domain-manage.sh readOnly: true + - name: keystone-bin + mountPath: /tmp/domain-manage.py + subPath: domain-manage.py + readOnly: true - name: keystone-etc mountPath: /etc/keystone/keystone.conf subPath: keystone.conf readOnly: true {{- range $k, $v := .Values.conf.ks_domains }} - name: keystone-etc - mountPath: {{ $envAll.Values.conf.keystone.identity.domain_config_dir | default "/etc/keystonedomains" }}/keystone.{{ $k }}.conf - subPath: keystone.{{ $k }}.conf + mountPath: {{ $envAll.Values.conf.keystone.identity.domain_config_dir | default "/etc/keystonedomains" }}/keystone.{{ $k }}.json + subPath: keystone.{{ $k }}.json readOnly: true {{- end }} {{- if eq .Values.conf.keystone.token.provider "fernet" }} diff --git a/keystone/templates/secret-ldap-tls.yaml b/keystone/templates/secret-ldap-tls.yaml new file mode 100644 index 0000000000..1197c37d3b --- /dev/null +++ b/keystone/templates/secret-ldap-tls.yaml @@ -0,0 +1,26 @@ +{{/* +Copyright 2017 The Openstack-Helm Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} + +{{- if .Values.endpoints.ldap.auth.client.tls.ca }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ .Values.secrets.ldap.tls }} +type: Opaque +data: + tls.ca: {{ .Values.endpoints.ldap.auth.client.tls.ca | default "" | b64enc }} +{{- end }} diff --git a/keystone/templates/service-ingress-api.yaml b/keystone/templates/service-ingress-api.yaml index ae8678df86..21d222c5ab 100644 --- a/keystone/templates/service-ingress-api.yaml +++ b/keystone/templates/service-ingress-api.yaml @@ -14,19 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.service_ingress_api }} -{{- $envAll := . }} -{{- if .Values.network.api.ingress.public }} ---- -apiVersion: v1 -kind: Service -metadata: - name: {{ tuple "identity" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - name: http - port: 80 - selector: - app: ingress-api -{{- end }} +{{- if and .Values.manifests.service_ingress_api .Values.network.api.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . 
"backendServiceType" "identity" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/keystone/values.yaml b/keystone/values.yaml index 1912ce0232..cbc94976c4 100644 --- a/keystone/values.yaml +++ b/keystone/values.yaml @@ -42,13 +42,14 @@ images: keystone_credential_rotate: docker.io/openstackhelm/keystone:newton keystone_api: docker.io/openstackhelm/keystone:newton keystone_domain_manage: docker.io/openstackhelm/keystone:newton - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: "IfNotPresent" bootstrap: enabled: true ks_user: admin script: | + openstack role create --or-show _member_ openstack role add \ --user="${OS_USERNAME}" \ --user-domain="${OS_USER_DOMAIN_NAME}" \ @@ -56,12 +57,23 @@ bootstrap: --project="${OS_PROJECT_NAME}" \ "_member_" + #NOTE(portdirect): required for all users who operate heat stacks + openstack role create --or-show heat_stack_owner + openstack role add \ + --user="${OS_USERNAME}" \ + --user-domain="${OS_USER_DOMAIN_NAME}" \ + --project-domain="${OS_PROJECT_DOMAIN_NAME}" \ + --project="${OS_PROJECT_NAME}" \ + "heat_stack_owner" + network: api: ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / external_policy_local: false node_port: @@ -112,8 +124,8 @@ dependencies: service: oslo_db rabbit_init: services: - - service: oslo_messaging - endpoint: internal + - service: oslo_messaging + endpoint: internal domain_manage: services: - endpoint: internal @@ -547,185 +559,185 @@ conf: run_tempest: false tests: KeystoneBasic.add_and_remove_user_role: - - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.authenticate_user_and_validate_token: - - args: {} - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - args: {} + runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.create_add_and_list_user_roles: - - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.create_and_delete_ec2credential: - - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.create_and_delete_role: - - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.create_and_delete_service: - - args: - description: test_description - service_type: Rally_test_type - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - args: + description: test_description + service_type: Rally_test_type + runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.create_and_get_role: - - args: {} - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - args: {} + runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.create_and_list_ec2credentials: - - runner: - concurrency: 1 - times: 1 - type: 
constant - sla: - failure_rate: - max: 0 + - runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.create_and_list_services: - - args: - description: test_description - service_type: Rally_test_type - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - args: + description: test_description + service_type: Rally_test_type + runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.create_and_list_tenants: - - args: {} - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - args: {} + runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.create_and_list_users: - - args: {} - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - args: {} + runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.create_delete_user: - - args: {} - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - args: {} + runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.create_tenant: - - args: {} - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - args: {} + runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.create_tenant_with_users: - - args: - users_per_tenant: 1 - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - args: + users_per_tenant: 1 + runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.create_update_and_delete_tenant: - - args: {} - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - args: {} + runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.create_user: - - args: {} - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - args: {} + runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.create_user_set_enabled_and_delete: - - args: - enabled: true - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 - - args: - enabled: false - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - args: + enabled: true + runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 + - args: + enabled: false + runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.create_user_update_password: - - args: {} - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - args: {} + runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 KeystoneBasic.get_entities: - - runner: - concurrency: 1 - times: 1 - type: constant - sla: - failure_rate: - max: 0 + - runner: + concurrency: 1 + times: 1 + type: constant + sla: + failure_rate: + max: 0 mpm_event: override: append: @@ -747,8 +759,10 @@ secrets: oslo_messaging: admin: keystone-rabbitmq-admin keystone: keystone-rabbitmq-user + ldap: + tls: keystone-ldap-tls -# typically overriden by environmental +# typically overridden by environmental # values, but should include all endpoints # required by this chart endpoints: @@ -833,6 +847,19 @@ endpoints: port: memcache: default: 11211 + ldap: + auth: + 
client: + tls: + # NOTE(lamt): Specifying a CA value here will place an LDAPS certificate at + # /etc/keystone/ldap/tls.ca. To ensure keystone uses LDAPS, the + # following keys will need to be overridden under the [ldap] section or the + # correct domain-specific section, else TLS will not be enabled: + # + # use_tls: true + # tls_req_cert: allow # Valid values: demand, never, allow + # tls_cacertfile: /etc/keystone/ldap/tls.ca # abs path to the CA cert + ca: null manifests: configmap_bin: true diff --git a/ldap/templates/_helpers.tpl b/ldap/templates/_helpers.tpl index f0d83d2edb..c2a40b8821 100644 --- a/ldap/templates/_helpers.tpl +++ b/ldap/templates/_helpers.tpl @@ -14,3 +14,9 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this {{- $name := default .Chart.Name .Values.nameOverride -}} {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} {{- end -}} + +{{- define "splitdomain" -}} +{{- $name := index . 0 -}} +{{- $local := dict "first" true }} +{{- range $k, $v := splitList "." $name }}{{- if not $local.first -}},{{- end -}}dc={{- $v -}}{{- $_ := set $local "first" false -}}{{- end -}} +{{- end -}} diff --git a/ldap/templates/bin/_bootstrap.sh.tpl b/ldap/templates/bin/_bootstrap.sh.tpl new file mode 100644 index 0000000000..3e65185a0e --- /dev/null +++ b/ldap/templates/bin/_bootstrap.sh.tpl @@ -0,0 +1,8 @@ +#!/bin/bash +set -xe + +{{- $url := tuple "ldap" "internal" . | include "helm-toolkit.endpoints.hostname_fqdn_endpoint_lookup" }} +{{- $port := tuple "ldap" "internal" "ldap" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }} +LDAPHOST="ldap://{{ $url }}:{{ $port }}" +ADMIN="cn={{ .Values.secrets.identity.admin }},{{ tuple .Values.openldap.domain . | include "splitdomain" }}" +ldapadd -x -D $ADMIN -H $LDAPHOST -w {{ .Values.openldap.password }} -f /etc/sample_data.ldif diff --git a/ldap/templates/configmap-bin.yaml b/ldap/templates/configmap-bin.yaml new file mode 100644 index 0000000000..e3c1b4af03 --- /dev/null +++ b/ldap/templates/configmap-bin.yaml @@ -0,0 +1,27 @@ +{{/* +Copyright 2018 The Openstack-Helm Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} +{{- if .Values.manifests.configmap_bin }} +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: ldap-bin +data: +{{- if .Values.bootstrap.enabled }} + bootstrap.sh: | +{{ tuple "bin/_bootstrap.sh.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} +{{- end }} +{{- end }} diff --git a/ldap/templates/configmap-etc.yaml b/ldap/templates/configmap-etc.yaml new file mode 100644 index 0000000000..e724e6d712 --- /dev/null +++ b/ldap/templates/configmap-etc.yaml @@ -0,0 +1,27 @@ +{{/* +Copyright 2018 The Openstack-Helm Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License.
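
Note: supplying endpoints.ldap.auth.client.tls.ca only materialises the secret and its mount at /etc/keystone/ldap/tls.ca; keystone must still be pointed at it per the comment above. A sketch of a values override wiring both halves together, with the domain name and certificate body as placeholders:

    # Sketch only: enable LDAPS for an illustrative domain "mydomain".
    tee /tmp/keystone-ldaps.yaml <<'EOF'
    endpoints:
      ldap:
        auth:
          client:
            tls:
              ca: |
                -----BEGIN CERTIFICATE-----
                <CA certificate body, placeholder>
                -----END CERTIFICATE-----
    conf:
      ks_domains:
        mydomain:
          identity:
            driver: ldap
          ldap:
            use_tls: true
            tls_req_cert: demand
            tls_cacertfile: /etc/keystone/ldap/tls.ca
    EOF
    helm upgrade keystone ./keystone --values /tmp/keystone-ldaps.yaml  # release name/chart path illustrative
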
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} +{{- if .Values.manifests.configmap_etc }} +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: ldap-etc +data: +{{- if .Values.bootstrap.enabled }} + sample_data.ldif: | +{{ .Values.data.sample | indent 4 }} +{{- end }} +{{- end }} diff --git a/ldap/templates/job-bootstrap.yaml b/ldap/templates/job-bootstrap.yaml new file mode 100644 index 0000000000..bf96682836 --- /dev/null +++ b/ldap/templates/job-bootstrap.yaml @@ -0,0 +1,18 @@ +{{/* +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} + +{{- if and .Values.manifests.job_bootstrap .Values.bootstrap.enabled }} +{{- $bootstrapJob := dict "envAll" . "serviceName" "ldap" "configFile" "/etc/sample_data.ldif" "keystoneUser" "admin" "openrc" "false" -}} +{{ $bootstrapJob | include "helm-toolkit.manifests.job_bootstrap" }} +{{- end }} diff --git a/ldap/values.yaml b/ldap/values.yaml index ce90d53e46..11cd17dc1d 100644 --- a/ldap/values.yaml +++ b/ldap/values.yaml @@ -25,6 +25,14 @@ pod: default: kubernetes.io/hostname replicas: server: 1 + lifecycle: + upgrades: + deployments: + revision_history: 3 + pod_replacement_strategy: RollingUpdate + rolling_update: + max_unavailable: 1 + max_surge: 3 resources: enabled: false server: @@ -34,16 +42,40 @@ pod: limits: memory: "1024Mi" cpu: "2000m" + jobs: + bootstrap: + requests: + memory: "128Mi" + cpu: "100m" + limits: + memory: "1024Mi" + cpu: "2000m" + mounts: + ldap_data_load: + init_container: null + ldap_data_load: + images: tags: - ldap: "docker.io/osixia/openldap:1.1.9" - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + bootstrap: "docker.io/osixia/openldap:1.2.0" + ldap: "docker.io/osixia/openldap:1.2.0" + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: IfNotPresent dependencies: static: ldap: jobs: null + bootstrap: + services: + - endpoint: internal + service: ldap + server: + jobs: + - ldap-load-data + services: + - endpoint: internal + service: ldap storage: pvc: @@ -58,6 +90,12 @@ labels: server: node_selector_key: openstack-control-plane node_selector_value: enabled + job: + node_selector_key: openstack-control-plane + node_selector_value: enabled + +bootstrap: + enabled: false endpoints: cluster_domain_suffix: cluster.local @@ -72,10 +110,89 @@ endpoints: ldap: default: 389 +data: + sample: | + dn: ou=People,dc=cluster,dc=local + objectclass: organizationalunit + ou: People + description: We the People + + # NOTE: Password is "password" without quotes + dn: uid=alice,ou=People,dc=cluster,dc=local + objectClass: inetOrgPerson + objectClass: top + objectClass: posixAccount + objectClass: shadowAccount + objectClass: person + sn: Alice + cn: alice + uid: alice + 
userPassword: {SSHA}+i3t/DLCgLDGaIOAmfeFJ2kDeJWmPUDH + description: SHA + gidNumber: 1000 + uidNumber: 1493 + homeDirectory: /home/alice + mail: alice@example.com + + # NOTE: Password is "password" without quotes + dn: uid=bob,ou=People,dc=cluster,dc=local + objectClass: inetOrgPerson + objectClass: top + objectClass: posixAccount + objectClass: shadowAccount + objectClass: person + sn: Bob + cn: bob + uid: bob + userPassword: {SSHA}fCJ5vuW1BQ4/OfOVkkx1qjwi7yHFuGNB + description: MD5 + gidNumber: 1000 + uidNumber: 5689 + homeDirectory: /home/bob + mail: bob@example.com + + dn: ou=Groups,dc=cluster,dc=local + objectclass: organizationalunit + ou: Groups + description: We the People + + dn: cn=cryptography,ou=Groups,dc=cluster,dc=local + objectclass: top + objectclass: posixGroup + gidNumber: 418 + cn: cryptography + description: Cryptography Team + memberUID: uid=alice,ou=People,dc=cluster,dc=local + memberUID: uid=bob,ou=People,dc=cluster,dc=local + + dn: cn=blue,ou=Groups,dc=cluster,dc=local + objectclass: top + objectclass: posixGroup + gidNumber: 419 + cn: blue + description: Blue Team + memberUID: uid=bob,ou=People,dc=cluster,dc=local + + dn: cn=red,ou=Groups,dc=cluster,dc=local + objectclass: top + objectclass: posixGroup + gidNumber: 420 + cn: red + description: Red Team + memberUID: uid=alice,ou=People,dc=cluster,dc=local + +secrets: + identity: + admin: admin + ldap: ldap + openldap: domain: cluster.local password: password manifests: + configmap_bin: true + configmap_etc: true + job_bootstrap: true statefulset: true service: true diff --git a/libvirt/templates/bin/_libvirt.sh.tpl b/libvirt/templates/bin/_libvirt.sh.tpl index 02ef2994ef..63271f81b2 100644 --- a/libvirt/templates/bin/_libvirt.sh.tpl +++ b/libvirt/templates/bin/_libvirt.sh.tpl @@ -30,6 +30,14 @@ if [[ -c /dev/kvm ]]; then chown root:kvm /dev/kvm fi +if [ -d /sys/kernel/mm/hugepages ]; then + if [ -n "$(grep KVM_HUGEPAGES=0 /etc/default/qemu-kvm)" ]; then + sed -i 's/.*KVM_HUGEPAGES=0.*/KVM_HUGEPAGES=1/g' /etc/default/qemu-kvm + else + echo KVM_HUGEPAGES=1 >> /etc/default/qemu-kvm + fi +fi + if [ -n "${LIBVIRT_CEPH_SECRET_UUID}" ] ; then libvirtd --listen & diff --git a/libvirt/templates/daemonset-libvirt.yaml b/libvirt/templates/daemonset-libvirt.yaml index b861400e45..2f2791479a 100644 --- a/libvirt/templates/daemonset-libvirt.yaml +++ b/libvirt/templates/daemonset-libvirt.yaml @@ -109,6 +109,8 @@ spec: mountPath: /etc/libvirt/qemu.conf subPath: qemu.conf readOnly: true + - name: etc-libvirt-qemu + mountPath: /etc/libvirt/qemu - mountPath: /lib/modules name: libmodules readOnly: true @@ -179,5 +181,8 @@ spec: - name: machine-id hostPath: path: /etc/machine-id + - name: etc-libvirt-qemu + hostPath: + path: /etc/libvirt/qemu {{ if $mounts_libvirt.volumes }}{{ toYaml $mounts_libvirt.volumes | indent 8 }}{{ end }} {{- end }} diff --git a/libvirt/values.yaml b/libvirt/values.yaml index 430dd3cc2b..2b7307d17a 100644 --- a/libvirt/values.yaml +++ b/libvirt/values.yaml @@ -28,7 +28,7 @@ labels: images: tags: libvirt: docker.io/openstackhelm/libvirt:ubuntu-xenial-1.3.1 - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: "IfNotPresent" ceph: diff --git a/magnum/templates/job-db-drop.yaml b/magnum/templates/job-db-drop.yaml index f7615b5f5f..38dbc3e617 100644 --- a/magnum/templates/job-db-drop.yaml +++ b/magnum/templates/job-db-drop.yaml @@ -15,72 +15,6 @@ limitations under the License.
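
Note: with bootstrap.enabled=true the job above loads data.sample via ldapadd, binding as the DN that the "splitdomain" helper derives from openldap.domain ("cluster.local" becomes "dc=cluster,dc=local"). A sketch to verify the sample entries landed, assuming the chart runs in the "openstack" namespace and the default credentials are unchanged:

    # Sketch only: list the sample posixAccount entries loaded by the bootstrap job.
    ldapsearch -x \
      -H ldap://ldap.openstack.svc.cluster.local:389 \
      -D "cn=admin,dc=cluster,dc=local" -w password \
      -b "ou=People,dc=cluster,dc=local" "(objectClass=posixAccount)" uid cn
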
*/}} {{- if .Values.manifests.job_db_drop }} -{{- $envAll := . }} -{{- $dependencies := .Values.dependencies.static.db_drop }} - -{{- $randStringSuffix := randAlphaNum 5 | lower }} - -{{- $serviceAccountName := print "magnum-db-drop-" $randStringSuffix }} -{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: {{ print "magnum-db-drop-" $randStringSuffix }} - annotations: - "helm.sh/hook": pre-delete - "helm.sh/hook-delete-policy": hook-succeeded -spec: - template: - metadata: - labels: -{{ tuple $envAll "magnum" "db-drop" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }} - spec: - serviceAccountName: {{ $serviceAccountName }} - restartPolicy: OnFailure - nodeSelector: - {{ .Values.labels.job.node_selector_key }}: {{ .Values.labels.job.node_selector_value }} - initContainers: -{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }} - containers: - - name: magnum-db-drop - image: {{ .Values.images.tags.db_drop }} - imagePullPolicy: {{ .Values.images.pull_policy }} -{{ tuple $envAll $envAll.Values.pod.resources.jobs.db_drop | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} - env: - - name: ROOT_DB_CONNECTION - valueFrom: - secretKeyRef: - name: {{ .Values.secrets.oslo_db.admin }} - key: DB_CONNECTION - - name: OPENSTACK_CONFIG_FILE - value: /etc/magnum/magnum.conf - - name: OPENSTACK_CONFIG_DB_SECTION - value: database - - name: OPENSTACK_CONFIG_DB_KEY - value: connection - command: - - /tmp/db-drop.py - volumeMounts: - - name: magnum-bin - mountPath: /tmp/db-drop.py - subPath: db-drop.py - readOnly: true - - name: etcmagnum - mountPath: /etc/magnum - - name: magnum-etc - mountPath: /etc/magnum/magnum.conf - subPath: magnum.conf - readOnly: true - volumes: - - name: etcmagnum - emptyDir: {} - - name: magnum-etc - configMap: - name: magnum-etc - defaultMode: 0444 - - name: magnum-bin - configMap: - name: magnum-bin - defaultMode: 0555 +{{- $dbDropJob := dict "envAll" . "serviceName" "magnum" -}} +{{ $dbDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" }} {{- end }} diff --git a/magnum/templates/service-ingress-api.yaml b/magnum/templates/service-ingress-api.yaml index 79546a644e..113f67c751 100644 --- a/magnum/templates/service-ingress-api.yaml +++ b/magnum/templates/service-ingress-api.yaml @@ -14,19 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.service_ingress_api }} -{{- $envAll := . }} -{{- if .Values.network.api.ingress.public }} ---- -apiVersion: v1 -kind: Service -metadata: - name: {{ tuple "container-infra" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - name: http - port: 80 - selector: - app: ingress-api -{{- end }} +{{- if and .Values.manifests.service_ingress_api .Values.network.api.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . 
"backendServiceType" "container-infra" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/magnum/values.yaml b/magnum/values.yaml index 51f12a2149..ce3d847319 100644 --- a/magnum/values.yaml +++ b/magnum/values.yaml @@ -42,7 +42,7 @@ images: ks_endpoints: docker.io/openstackhelm/heat:newton magnum_api: docker.io/openstackhelm/magnum:newton magnum_conductor: docker.io/openstackhelm/magnum:newton - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: "IfNotPresent" conf: @@ -57,7 +57,7 @@ conf: filter:request_id: paste.filter_factory: oslo_middleware:RequestId.factory filter:cors: - paste.filter_factory: oslo_middleware.cors:filter_factory + paste.filter_factory: oslo_middleware.cors:filter_factory oslo_config_project: magnum filter:healthcheck: paste.filter_factory: oslo_middleware:Healthcheck.factory @@ -116,7 +116,7 @@ conf: auth_version: v3 memcache_security_strategy: ENCRYPT api: - #NOTE(portdirect): the bind port should not be defined, and is manipulated + # NOTE(portdirect): the bind port should not be defined, and is manipulated # via the endpoints section. port: null host: 0.0.0.0 @@ -125,8 +125,10 @@ network: api: ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / external_policy_local: false node_port: @@ -191,8 +193,8 @@ dependencies: service: identity rabbit_init: services: - - service: oslo_messaging - endpoint: internal + - service: oslo_messaging + endpoint: internal # Names of secrets used by bootstrap and environmental checks secrets: identity: @@ -205,7 +207,7 @@ secrets: admin: magnum-rabbitmq-admin magnum: magnum-rabbitmq-user -# typically overriden by environmental +# typically overridden by environmental # values, but should include all endpoints # required by this chart endpoints: diff --git a/mariadb/templates/bin/_start.sh.tpl b/mariadb/templates/bin/_start.sh.tpl index 945e3f40b2..96e4b47f86 100644 --- a/mariadb/templates/bin/_start.sh.tpl +++ b/mariadb/templates/bin/_start.sh.tpl @@ -17,6 +17,12 @@ limitations under the License. 
set -xe +# MariaDB 10.2.13 has a regression which breaks clustering, patch +# around this for now +if /usr/sbin/mysqld --version | grep --silent 10.2.13 ; then + sed -i 's^LSOF_OUT=.*^LSOF_OUT=$(lsof -sTCP:LISTEN -i TCP:${PORT} -a -c nc -c socat -F c 2> /dev/null || :)^' /usr/bin/wsrep_sst_xtrabackup-v2 +fi + # Bootstrap database CLUSTER_INIT_ARGS="" CLUSTER_CONFIG_PATH=/etc/mysql/conf.d/10-cluster-config.cnf diff --git a/mariadb/values.yaml b/mariadb/values.yaml index 766819701f..7ff5fa8c60 100644 --- a/mariadb/values.yaml +++ b/mariadb/values.yaml @@ -14,11 +14,14 @@ images: tags: + # NOTE: if you update from 10.2.13 please look at + # https://review.openstack.org/#/q/Ifd09d7effe7d382074ca9e6678df36bdd4bce0af + # and check whether it's still needed mariadb: docker.io/mariadb:10.2.13 prometheus_create_mysql_user: docker.io/mariadb:10.2.13 prometheus_mysql_exporter: docker.io/prom/mysqld-exporter:v0.10.0 prometheus_mysql_exporter_helm_tests: docker.io/openstackhelm/heat:newton - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: IfNotPresent labels: @@ -132,7 +135,7 @@ network: prometheus_mysql_exporter: port: 9104 -# typically overriden by environmental +# typically overridden by environmental # values, but should include all endpoints # required by this chart endpoints: diff --git a/memcached/values.yaml b/memcached/values.yaml index 4065ea63cf..e3ca737383 100644 --- a/memcached/values.yaml +++ b/memcached/values.yaml @@ -20,7 +20,7 @@ conf: memcached: max_connections: 8192 - #NOTE(pordirect): this should match the value in + # NOTE(pordirect): this should match the value in # `pod.resources.memcached.memory` memory: 1024 @@ -44,7 +44,7 @@ endpoints: images: pull_policy: IfNotPresent tags: - dep_check: 'quay.io/stackanetes/kubernetes-entrypoint:v0.2.1' + dep_check: 'quay.io/stackanetes/kubernetes-entrypoint:v0.3.0' memcached: 'docker.io/memcached:1.5.5' labels: diff --git a/mistral/requirements.yaml b/mistral/requirements.yaml index 307a18eaaf..53782e69b2 100644 --- a/mistral/requirements.yaml +++ b/mistral/requirements.yaml @@ -16,4 +16,3 @@ dependencies: - name: helm-toolkit repository: http://localhost:8879/charts version: 0.1.0 - diff --git a/mistral/templates/job-db-drop.yaml b/mistral/templates/job-db-drop.yaml index 8783791a69..5ebf48a7fe 100644 --- a/mistral/templates/job-db-drop.yaml +++ b/mistral/templates/job-db-drop.yaml @@ -15,72 +15,6 @@ limitations under the License. */}} {{- if .Values.manifests.job_db_drop }} -{{- $envAll := . 
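
Note: the guard above only patches wsrep_sst_xtrabackup-v2 when the packaged server reports 10.2.13, so other image tags pass through untouched. Since the regression shows up as replicas failing to join via SST, a quick check that clustering survived, with the host and credentials as environment-specific placeholders:

    # Sketch only: every replica should be counted in the Galera cluster.
    mysql -h mariadb.openstack.svc.cluster.local -u root -p"${ROOT_DB_PASSWORD}" \
      -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
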
}} -{{- $dependencies := .Values.dependencies.static.db_drop }} - -{{- $randStringSuffix := randAlphaNum 5 | lower }} - -{{- $serviceAccountName := print "mistral-db-drop-" $randStringSuffix }} -{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: {{ print "mistral-db-drop-" $randStringSuffix }} - annotations: - "helm.sh/hook": pre-delete - "helm.sh/hook-delete-policy": hook-succeeded -spec: - template: - metadata: - labels: -{{ tuple $envAll "mistral" "db-drop" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }} - spec: - serviceAccountName: {{ $serviceAccountName }} - restartPolicy: OnFailure - nodeSelector: - {{ .Values.labels.job.node_selector_key }}: {{ .Values.labels.job.node_selector_value }} - initContainers: -{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }} - containers: - - name: mistral-db-drop - image: {{ .Values.images.tags.db_drop }} - imagePullPolicy: {{ .Values.images.pull_policy }} -{{ tuple $envAll $envAll.Values.pod.resources.jobs.db_drop | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} - env: - - name: ROOT_DB_CONNECTION - valueFrom: - secretKeyRef: - name: {{ .Values.secrets.oslo_db.admin }} - key: DB_CONNECTION - - name: OPENSTACK_CONFIG_FILE - value: /etc/mistral/mistral.conf - - name: OPENSTACK_CONFIG_DB_SECTION - value: database - - name: OPENSTACK_CONFIG_DB_KEY - value: connection - command: - - /tmp/db-drop.py - volumeMounts: - - name: mistral-bin - mountPath: /tmp/db-drop.py - subPath: db-drop.py - readOnly: true - - name: pod-etc-mistral - mountPath: /etc/mistral - - name: mistral-etc - mountPath: /etc/mistral/mistral.conf - subPath: mistral.conf - readOnly: true - volumes: - - name: mistral-bin - configMap: - name: mistral-bin - defaultMode: 0555 - - name: pod-etc-mistral - emptyDir: {} - - name: mistral-etc - configMap: - name: mistral-etc - defaultMode: 0444 +{{- $dbDropJob := dict "envAll" . "serviceName" "mistral" -}} +{{ $dbDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" }} {{- end }} diff --git a/mistral/templates/service-ingress-api.yaml b/mistral/templates/service-ingress-api.yaml index b3e473f26f..0c76f4678f 100644 --- a/mistral/templates/service-ingress-api.yaml +++ b/mistral/templates/service-ingress-api.yaml @@ -14,19 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.service_ingress_api }} -{{- $envAll := . }} -{{- if .Values.network.api.ingress.public }} ---- -apiVersion: v1 -kind: Service -metadata: - name: {{ tuple "workflow" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - name: http - port: 80 - selector: - app: ingress-api -{{- end }} +{{- if and .Values.manifests.service_ingress_api .Values.network.api.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . 
"backendServiceType" "workflow" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/mistral/values.yaml b/mistral/values.yaml index 03a9736d1c..405f38c6b7 100644 --- a/mistral/values.yaml +++ b/mistral/values.yaml @@ -39,7 +39,7 @@ release_group: null images: tags: bootstrap: docker.io/openstackhelm/heat:newton - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 db_init: docker.io/openstackhelm/heat:newton mistral_db_sync: docker.io/kolla/ubuntu-source-mistral-api:3.0.3 db_drop: docker.io/openstackhelm/heat:newton @@ -57,8 +57,10 @@ network: api: ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / node_port: enabled: false @@ -142,8 +144,8 @@ dependencies: service: identity rabbit_init: services: - - service: oslo_messaging - endpoint: internal + - service: oslo_messaging + endpoint: internal # Names of secrets used by bootstrap and environmental checks secrets: @@ -157,7 +159,7 @@ secrets: admin: mistral-rabbitmq-admin mistral: mistral-rabbitmq-user -# typically overriden by environmental +# typically overridden by environmental # values, but should include all endpoints # required by this chart endpoints: @@ -318,7 +320,7 @@ conf: DEFAULT: transport_url: null api: - #NOTE(portdirect): the bind port should not be defined, and is manipulated + # NOTE(portdirect): the bind port should not be defined, and is manipulated # via the endpoints section. port: null api_workers: 8 diff --git a/mongodb/values.yaml b/mongodb/values.yaml index 37db16dadb..bd4b997d20 100644 --- a/mongodb/values.yaml +++ b/mongodb/values.yaml @@ -40,7 +40,7 @@ pod: images: tags: mongodb: docker.io/mongo:3.4.9-jessie - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: IfNotPresent storage: diff --git a/neutron/templates/bin/_db-sync.sh.tpl b/neutron/templates/bin/_db-sync.sh.tpl index 030cf8c53e..4ccff8e0d4 100644 --- a/neutron/templates/bin/_db-sync.sh.tpl +++ b/neutron/templates/bin/_db-sync.sh.tpl @@ -20,7 +20,7 @@ set -ex neutron-db-manage \ --config-file /etc/neutron/neutron.conf \ -{{- if eq .Values.network.backend "opencontrail" }} +{{- if ( has "opencontrail" .Values.network.backend ) }} --config-file /etc/neutron/plugins/opencontrail/ContrailPlugin.ini \ {{- else }} --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \ diff --git a/neutron/templates/bin/_neutron-dhcp-agent.sh.tpl b/neutron/templates/bin/_neutron-dhcp-agent.sh.tpl index 48be1cd069..2e4c40df38 100644 --- a/neutron/templates/bin/_neutron-dhcp-agent.sh.tpl +++ b/neutron/templates/bin/_neutron-dhcp-agent.sh.tpl @@ -22,6 +22,6 @@ exec neutron-dhcp-agent \ --config-file /etc/neutron/dhcp_agent.ini \ --config-file /etc/neutron/metadata_agent.ini \ --config-file /etc/neutron/plugins/ml2/ml2_conf.ini -{{- if eq .Values.network.backend "ovs" }} \ +{{- if ( has "openvswitch" .Values.network.backend ) }} \ --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini {{- end }} diff --git a/neutron/templates/bin/_neutron-l3-agent.sh.tpl b/neutron/templates/bin/_neutron-l3-agent.sh.tpl index 94d291b7d6..6b613c011d 100644 --- a/neutron/templates/bin/_neutron-l3-agent.sh.tpl +++ b/neutron/templates/bin/_neutron-l3-agent.sh.tpl @@ -22,6 +22,6 @@ exec neutron-l3-agent \ --config-file /etc/neutron/l3_agent.ini \ --config-file 
/etc/neutron/metadata_agent.ini \ --config-file /etc/neutron/plugins/ml2/ml2_conf.ini -{{- if eq .Values.network.backend "ovs" }} \ +{{- if ( has "openvswitch" .Values.network.backend ) }} \ --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini {{- end }} diff --git a/neutron/templates/bin/_neutron-linuxbridge-agent-init.sh.tpl b/neutron/templates/bin/_neutron-linuxbridge-agent-init.sh.tpl index 52a372897a..9054c8aa28 100644 --- a/neutron/templates/bin/_neutron-linuxbridge-agent-init.sh.tpl +++ b/neutron/templates/bin/_neutron-linuxbridge-agent-init.sh.tpl @@ -32,7 +32,6 @@ if [ -n "${external_bridge}" ] ; then fi fi - # configure all bridge mappings defined in config {{- range $br, $phys := .Values.network.auto_bridge_add }} if [ -n "{{- $br -}}" ] ; then diff --git a/neutron/templates/bin/_neutron-metadata-agent.sh.tpl b/neutron/templates/bin/_neutron-metadata-agent.sh.tpl index 94fdb70287..8607791772 100644 --- a/neutron/templates/bin/_neutron-metadata-agent.sh.tpl +++ b/neutron/templates/bin/_neutron-metadata-agent.sh.tpl @@ -20,11 +20,7 @@ set -x exec neutron-metadata-agent \ --config-file /etc/neutron/neutron.conf \ --config-file /etc/neutron/metadata_agent.ini \ -{{- if eq .Values.network.backend "opencontrail" }} - --config-file /etc/neutron/plugins/opencontrail/ContrailPlugin.ini -{{- else }} --config-file /etc/neutron/plugins/ml2/ml2_conf.ini -{{- end }} -{{- if eq .Values.network.backend "ovs" }} \ +{{- if ( has "openvswitch" .Values.network.backend ) }} \ --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini {{- end }} diff --git a/neutron/templates/bin/_neutron-openvswitch-agent-init.sh.tpl b/neutron/templates/bin/_neutron-openvswitch-agent-init.sh.tpl index e084043a93..23158aabde 100644 --- a/neutron/templates/bin/_neutron-openvswitch-agent-init.sh.tpl +++ b/neutron/templates/bin/_neutron-openvswitch-agent-init.sh.tpl @@ -29,6 +29,15 @@ chown neutron: /run/openvswitch/db.sock # see https://github.com/att-comdev/openstack-helm/issues/88 timeout 3m neutron-sanity-check --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --ovsdb_native --nokeepalived_ipv6_support +# handle any bridge mappings +{{- range $bridge, $port := .Values.network.auto_bridge_add }} +ovs-vsctl --no-wait --may-exist add-br {{ $bridge }} +{{ if $port }} +ovs-vsctl --no-wait --may-exist add-port {{ $bridge }} {{ $port }} +ip link set dev {{ $port }} up +{{ end }} +{{- end }} + tunnel_interface="{{- .Values.network.interface.tunnel -}}" if [ -z "${tunnel_interface}" ] ; then # search for interface with default routing diff --git a/neutron/templates/bin/_neutron-server.sh.tpl b/neutron/templates/bin/_neutron-server.sh.tpl index e3cbbdb95f..e43b9497e6 100644 --- a/neutron/templates/bin/_neutron-server.sh.tpl +++ b/neutron/templates/bin/_neutron-server.sh.tpl @@ -22,11 +22,14 @@ COMMAND="${@:-start}" function start () { exec neutron-server \ --config-file /etc/neutron/neutron.conf \ -{{- if eq .Values.network.backend "opencontrail" }} +{{- if ( has "opencontrail" .Values.network.backend ) }} --config-file /etc/neutron/plugins/opencontrail/ContrailPlugin.ini {{- else }} --config-file /etc/neutron/plugins/ml2/ml2_conf.ini {{- end }} +{{- if ( has "sriov" .Values.network.backend ) }} \ + --config-file /etc/neutron/plugins/ml2/sriov_agent.ini +{{- end }} } function stop () { diff --git a/neutron/templates/bin/_neutron-sriov-agent-init.sh.tpl b/neutron/templates/bin/_neutron-sriov-agent-init.sh.tpl new file mode 100644 index 0000000000..2d38f58518 --- 
/dev/null +++ b/neutron/templates/bin/_neutron-sriov-agent-init.sh.tpl @@ -0,0 +1,39 @@ +#!/bin/bash + +{{/* +Copyright 2017 The Openstack-Helm Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} + +set -ex + +{{- range $k, $sriov := .Values.network.interface.sriov }} +if [ "x{{ $sriov.num_vfs }}" != "x" ]; then + echo "{{ $sriov.num_vfs }}" > /sys/class/net/{{ $sriov.device }}/device/sriov_numvfs +else + NUM_VFS=$(cat /sys/class/net/{{ $sriov.device }}/device/sriov_totalvfs) + echo "${NUM_VFS}" > /sys/class/net/{{ $sriov.device }}/device/sriov_numvfs +fi +ip link set {{ $sriov.device }} up +ip link show {{ $sriov.device }} +{{- if $sriov.promisc }} +ip link set {{ $sriov.device }} promisc on +#NOTE(portdirect): get the bus that the port is on +NIC_BUS=$(lshw -c network -businfo | awk '/{{ $sriov.device }}/ {print $1}') +#NOTE(portdirect): get first port on the nic +NIC_FIRST_PORT=$(lshw -c network -businfo | awk "/${NIC_BUS%%.*}/ { print \$2; exit }") +#NOTE(portdirect): Enable promisc mode on the nic by setting it for the 1st port +ethtool --set-priv-flags ${NIC_FIRST_PORT} vf-true-promisc-support on +{{- end }} +{{- end }} diff --git a/neutron/templates/bin/_neutron-sriov-agent.sh.tpl b/neutron/templates/bin/_neutron-sriov-agent.sh.tpl new file mode 100644 index 0000000000..7c3dce0294 --- /dev/null +++ b/neutron/templates/bin/_neutron-sriov-agent.sh.tpl @@ -0,0 +1,24 @@ +#!/bin/bash + +{{/* +Copyright 2017 The Openstack-Helm Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} + +set -ex + +exec neutron-sriov-nic-agent \ + --config-file /etc/neutron/neutron.conf \ + --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \ + --config-file /etc/neutron/plugins/ml2/sriov_agent.ini diff --git a/neutron/templates/configmap-bin.yaml b/neutron/templates/configmap-bin.yaml index 7137467f86..17afd9e32e 100644 --- a/neutron/templates/configmap-bin.yaml +++ b/neutron/templates/configmap-bin.yaml @@ -61,6 +61,10 @@ data: {{ tuple "bin/_neutron-openvswitch-agent-init.sh.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} neutron-openvswitch-agent-init-modules.sh: | {{ tuple "bin/_neutron-openvswitch-agent-init-modules.sh.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} + neutron-sriov-agent.sh: | +{{ tuple "bin/_neutron-sriov-agent.sh.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} + neutron-sriov-agent-init.sh: | +{{ tuple "bin/_neutron-sriov-agent-init.sh.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} neutron-server.sh: | {{ tuple "bin/_neutron-server.sh.tpl" . 
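
Note: the SR-IOV init script above drives VF creation entirely from values; when num_vfs is omitted it falls back to the device's sriov_totalvfs. A sketch of a minimal override, with the PF name and VF count as placeholders:

    # Sketch only: one PF entry for the sriov agent init container.
    tee /tmp/neutron-sriov.yaml <<'EOF'
    network:
      interface:
        sriov:
          - device: enp3s0f0   # placeholder PF name
            num_vfs: 8
            promisc: false
    EOF
    # Rendered, the init container then effectively runs:
    #   echo 8 > /sys/class/net/enp3s0f0/device/sriov_numvfs
    #   ip link set enp3s0f0 up
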
| include "helm-toolkit.utils.template" | indent 4 }} rabbit-init.sh: | diff --git a/neutron/templates/configmap-etc.yaml b/neutron/templates/configmap-etc.yaml index 22b8c5edcb..dd52c09a58 100644 --- a/neutron/templates/configmap-etc.yaml +++ b/neutron/templates/configmap-etc.yaml @@ -14,116 +14,134 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.configmap_etc }} -{{- $envAll := . }} +{{- define "neutron.configmap.etc" }} +{{- $configMapName := index . 0 }} +{{- $envAll := index . 1 }} +{{- with $envAll }} -{{- if empty .Values.conf.neutron.keystone_authtoken.auth_uri -}} -{{- tuple "identity" "internal" "api" . | include "helm-toolkit.endpoints.keystone_endpoint_uri_lookup"| set .Values.conf.neutron.keystone_authtoken "auth_uri" | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.keystone_authtoken.auth_uri -}} +{{- tuple "identity" "internal" "api" . | include "helm-toolkit.endpoints.keystone_endpoint_uri_lookup"| set $envAll.Values.conf.neutron.keystone_authtoken "auth_uri" | quote | trunc 0 -}} {{- end }} -{{- if empty .Values.conf.neutron.keystone_authtoken.auth_url -}} -{{- tuple "identity" "internal" "api" . | include "helm-toolkit.endpoints.keystone_endpoint_uri_lookup"| set .Values.conf.neutron.keystone_authtoken "auth_url" | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.keystone_authtoken.auth_url -}} +{{- tuple "identity" "internal" "api" . | include "helm-toolkit.endpoints.keystone_endpoint_uri_lookup"| set $envAll.Values.conf.neutron.keystone_authtoken "auth_url" | quote | trunc 0 -}} {{- end }} {{- if empty .Values.conf.neutron.keystone_authtoken.project_name -}} {{- set .Values.conf.neutron.keystone_authtoken "project_name" .Values.endpoints.identity.auth.neutron.project_name | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.neutron.keystone_authtoken.project_domain_name -}} -{{- set .Values.conf.neutron.keystone_authtoken "project_domain_name" .Values.endpoints.identity.auth.neutron.project_domain_name | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.keystone_authtoken.project_domain_name -}} +{{- set $envAll.Values.conf.neutron.keystone_authtoken "project_domain_name" $envAll.Values.endpoints.identity.auth.neutron.project_domain_name | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.neutron.keystone_authtoken.user_domain_name -}} -{{- set .Values.conf.neutron.keystone_authtoken "user_domain_name" .Values.endpoints.identity.auth.neutron.user_domain_name | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.keystone_authtoken.user_domain_name -}} +{{- set $envAll.Values.conf.neutron.keystone_authtoken "user_domain_name" $envAll.Values.endpoints.identity.auth.neutron.user_domain_name | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.neutron.keystone_authtoken.username -}} -{{- set .Values.conf.neutron.keystone_authtoken "username" .Values.endpoints.identity.auth.neutron.username | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.keystone_authtoken.username -}} +{{- set $envAll.Values.conf.neutron.keystone_authtoken "username" $envAll.Values.endpoints.identity.auth.neutron.username | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.neutron.keystone_authtoken.password -}} -{{- set .Values.conf.neutron.keystone_authtoken "password" .Values.endpoints.identity.auth.neutron.password | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.keystone_authtoken.password -}} +{{- set 
$envAll.Values.conf.neutron.keystone_authtoken "password" $envAll.Values.endpoints.identity.auth.neutron.password | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.neutron.keystone_authtoken.region_name -}} -{{- set .Values.conf.neutron.keystone_authtoken "region_name" .Values.endpoints.identity.auth.neutron.region_name | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.keystone_authtoken.region_name -}} +{{- set $envAll.Values.conf.neutron.keystone_authtoken "region_name" $envAll.Values.endpoints.identity.auth.neutron.region_name | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.neutron.keystone_authtoken.memcached_servers -}} -{{- tuple "oslo_cache" "internal" "memcache" . | include "helm-toolkit.endpoints.host_and_port_endpoint_uri_lookup" | set .Values.conf.neutron.keystone_authtoken "memcached_servers" | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.keystone_authtoken.memcached_servers -}} +{{- tuple "oslo_cache" "internal" "memcache" . | include "helm-toolkit.endpoints.host_and_port_endpoint_uri_lookup" | set $envAll.Values.conf.neutron.keystone_authtoken "memcached_servers" | quote | trunc 0 -}} {{- end }} {{- if empty .Values.conf.neutron.keystone_authtoken.memcache_secret_key -}} {{- set .Values.conf.neutron.keystone_authtoken "memcache_secret_key" ( default ( randAlphaNum 64 ) .Values.endpoints.oslo_cache.auth.memcache_secret_key ) | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.neutron.database.connection -}} -{{- tuple "oslo_db" "internal" "neutron" "mysql" . | include "helm-toolkit.endpoints.authenticated_endpoint_uri_lookup"| set .Values.conf.neutron.database "connection" | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.database.connection -}} +{{- tuple "oslo_db" "internal" "neutron" "mysql" . | include "helm-toolkit.endpoints.authenticated_endpoint_uri_lookup"| set $envAll.Values.conf.neutron.database "connection" | quote | trunc 0 -}} {{- end }} -{{- if empty .Values.conf.neutron.DEFAULT.transport_url -}} -{{- tuple "oslo_messaging" "internal" "neutron" "amqp" . | include "helm-toolkit.endpoints.authenticated_endpoint_uri_lookup" | set .Values.conf.neutron.DEFAULT "transport_url" | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.DEFAULT.transport_url -}} +{{- tuple "oslo_messaging" "internal" "neutron" "amqp" . | include "helm-toolkit.endpoints.authenticated_endpoint_uri_lookup" | set $envAll.Values.conf.neutron.DEFAULT "transport_url" | quote | trunc 0 -}} {{- end }} -{{- if empty .Values.conf.neutron.nova.auth_url -}} -{{- tuple "identity" "internal" "api" . | include "helm-toolkit.endpoints.keystone_endpoint_uri_lookup"| set .Values.conf.neutron.nova "auth_url" | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.nova.auth_url -}} +{{- tuple "identity" "internal" "api" . 
| include "helm-toolkit.endpoints.keystone_endpoint_uri_lookup"| set $envAll.Values.conf.neutron.nova "auth_url" | quote | trunc 0 -}} {{- end }} -{{- if empty .Values.conf.neutron.nova.region_name -}} -{{- set .Values.conf.neutron.nova "region_name" .Values.endpoints.identity.auth.nova.region_name | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.nova.region_name -}} +{{- set $envAll.Values.conf.neutron.nova "region_name" $envAll.Values.endpoints.identity.auth.nova.region_name | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.neutron.nova.project_name -}} -{{- set .Values.conf.neutron.nova "project_name" .Values.endpoints.identity.auth.nova.project_name | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.nova.project_name -}} +{{- set $envAll.Values.conf.neutron.nova "project_name" $envAll.Values.endpoints.identity.auth.nova.project_name | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.neutron.nova.project_domain_name -}} -{{- set .Values.conf.neutron.nova "project_domain_name" .Values.endpoints.identity.auth.nova.project_domain_name | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.nova.project_domain_name -}} +{{- set $envAll.Values.conf.neutron.nova "project_domain_name" $envAll.Values.endpoints.identity.auth.nova.project_domain_name | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.neutron.nova.user_domain_name -}} -{{- set .Values.conf.neutron.nova "user_domain_name" .Values.endpoints.identity.auth.nova.user_domain_name | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.nova.user_domain_name -}} +{{- set $envAll.Values.conf.neutron.nova "user_domain_name" $envAll.Values.endpoints.identity.auth.nova.user_domain_name | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.neutron.nova.username -}} -{{- set .Values.conf.neutron.nova "username" .Values.endpoints.identity.auth.nova.username | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.nova.username -}} +{{- set $envAll.Values.conf.neutron.nova "username" $envAll.Values.endpoints.identity.auth.nova.username | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.neutron.nova.password -}} -{{- set .Values.conf.neutron.nova "password" .Values.endpoints.identity.auth.nova.password | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.nova.password -}} +{{- set $envAll.Values.conf.neutron.nova "password" $envAll.Values.endpoints.identity.auth.nova.password | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.metadata_agent.DEFAULT.nova_metadata_ip -}} -{{- tuple "compute_metadata" "public" . | include "helm-toolkit.endpoints.hostname_fqdn_endpoint_lookup" | set .Values.conf.metadata_agent.DEFAULT "nova_metadata_ip" | quote | trunc 0 -}} -{{- set .Values.conf.metadata_agent.DEFAULT "nova_metadata_port" 80 | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.metadata_agent.DEFAULT.nova_metadata_ip -}} +{{- tuple "compute_metadata" "public" . | include "helm-toolkit.endpoints.hostname_fqdn_endpoint_lookup" | set $envAll.Values.conf.metadata_agent.DEFAULT "nova_metadata_ip" | quote | trunc 0 -}} +{{- set $envAll.Values.conf.metadata_agent.DEFAULT "nova_metadata_port" 80 | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.metadata_agent.cache.memcache_servers -}} -{{- tuple "oslo_cache" "internal" "memcache" . 
| include "helm-toolkit.endpoints.host_and_port_endpoint_uri_lookup" | set .Values.conf.metadata_agent.cache "memcache_servers" | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.metadata_agent.cache.memcache_servers -}} +{{- tuple "oslo_cache" "internal" "memcache" . | include "helm-toolkit.endpoints.host_and_port_endpoint_uri_lookup" | set $envAll.Values.conf.metadata_agent.cache "memcache_servers" | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.neutron.DEFAULT.interface_driver -}} -{{- if eq .Values.network.backend "ovs" -}} -{{- set .Values.conf.neutron.DEFAULT "interface_driver" "openvswitch" | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.neutron.DEFAULT.interface_driver -}} +{{- $_ := set $envAll.Values "__interface_driver" ( list ) }} +{{- if ( has "openvswitch" $envAll.Values.network.backend ) -}} +{{ $__interface_driver := append $envAll.Values.__interface_driver "openvswitch" }} +{{- $_ := set $envAll.Values "__interface_driver" $__interface_driver }} {{- end -}} -{{- if eq .Values.network.backend "linuxbridge" -}} -{{- set .Values.conf.neutron.DEFAULT "interface_driver" "linuxbridge" | quote | trunc 0 -}} +{{- if ( has "linuxbridge" $envAll.Values.network.backend ) -}} +{{ $__interface_driver := append $envAll.Values.__interface_driver "linuxbridge" }} +{{- $_ := set $envAll.Values "__interface_driver" $__interface_driver }} {{- end -}} +{{- set $envAll.Values.conf.neutron.DEFAULT "interface_driver" $envAll.Values.__interface_driver | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.dhcp_agent.DEFAULT.interface_driver -}} -{{- if eq .Values.network.backend "ovs" -}} -{{- set .Values.conf.dhcp_agent.DEFAULT "interface_driver" "openvswitch" | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.dhcp_agent.DEFAULT.interface_driver -}} +{{- $_ := set $envAll.Values "__interface_driver" ( list ) }} +{{- if ( has "openvswitch" $envAll.Values.network.backend ) -}} +{{ $__interface_driver := append $envAll.Values.__interface_driver "openvswitch" }} +{{- $_ := set $envAll.Values "__interface_driver" $__interface_driver }} {{- end -}} -{{- if eq .Values.network.backend "linuxbridge" -}} -{{- set .Values.conf.dhcp_agent.DEFAULT "interface_driver" "linuxbridge" | quote | trunc 0 -}} +{{- if ( has "linuxbridge" $envAll.Values.network.backend ) -}} +{{ $__interface_driver := append $envAll.Values.__interface_driver "linuxbridge" }} +{{- $_ := set $envAll.Values "__interface_driver" $__interface_driver }} {{- end -}} +{{- set $envAll.Values.conf.dhcp_agent.DEFAULT "interface_driver" $envAll.Values.__interface_driver | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.l3_agent.DEFAULT.interface_driver -}} -{{- if eq .Values.network.backend "ovs" -}} -{{- set .Values.conf.l3_agent.DEFAULT "interface_driver" "openvswitch" | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.l3_agent.DEFAULT.interface_driver -}} +{{- $_ := set $envAll.Values "__interface_driver" ( list ) }} +{{- if ( has "openvswitch" $envAll.Values.network.backend ) -}} +{{ $__interface_driver := append $envAll.Values.__interface_driver "openvswitch" }} +{{- $_ := set $envAll.Values "__interface_driver" $__interface_driver }} {{- end -}} -{{- if eq .Values.network.backend "linuxbridge" -}} -{{- set .Values.conf.l3_agent.DEFAULT "interface_driver" "linuxbridge" | quote | trunc 0 -}} +{{- if ( has "linuxbridge" $envAll.Values.network.backend ) -}} +{{ $__interface_driver := append $envAll.Values.__interface_driver "linuxbridge" }} +{{- $_ := set $envAll.Values "__interface_driver" 
$__interface_driver }} {{- end -}} +{{- set $envAll.Values.conf.l3_agent.DEFAULT "interface_driver" $envAll.Values.__interface_driver | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.plugins.ml2_conf.ml2.mechanism_drivers -}} -{{- if eq .Values.network.backend "ovs" -}} -{{- set .Values.conf.plugins.ml2_conf.ml2 "mechanism_drivers" "openvswitch,l2population" | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.plugins.ml2_conf.ml2.mechanism_drivers -}} +{{- $_ := set $envAll.Values "__mechanism_drivers" ( list "l2population" ) }} +{{- if ( has "openvswitch" $envAll.Values.network.backend ) -}} +{{ $__mechanism_drivers := append $envAll.Values.__mechanism_drivers "openvswitch" }} +{{- $_ := set $envAll.Values "__mechanism_drivers" $__mechanism_drivers }} {{- end -}} -{{- if eq .Values.network.backend "linuxbridge" -}} -{{- set .Values.conf.plugins.ml2_conf.ml2 "mechanism_drivers" "linuxbridge,l2population" | quote | trunc 0 -}} +{{- if ( has "linuxbridge" $envAll.Values.network.backend ) -}} +{{ $__mechanism_drivers := append $envAll.Values.__mechanism_drivers "linuxbridge" }} +{{- $_ := set $envAll.Values "__mechanism_drivers" $__mechanism_drivers }} {{- end -}} +{{- set $envAll.Values.conf.plugins.ml2_conf.ml2 "mechanism_drivers" $envAll.Values.__mechanism_drivers | quote | trunc 0 -}} {{- end -}} {{- if empty .Values.conf.neutron.DEFAULT.bind_port -}} @@ -147,61 +165,51 @@ limitations under the License. apiVersion: v1 kind: ConfigMap metadata: - name: neutron-etc + name: {{ $configMapName }} data: rally_tests.yaml: | -{{ toYaml .Values.conf.rally_tests.tests | indent 4 }} +{{ toYaml $envAll.Values.conf.rally_tests.tests | indent 4 }} api-paste.ini: | -{{ include "helm-toolkit.utils.to_ini" .Values.conf.paste | indent 4 }} +{{ include "helm-toolkit.utils.to_ini" $envAll.Values.conf.paste | indent 4 }} policy.json: | -{{ toJson .Values.conf.policy | indent 4 }} +{{ toJson $envAll.Values.conf.policy | indent 4 }} neutron.conf: | -{{ include "helm-toolkit.utils.to_oslo_conf" .Values.conf.neutron | indent 4 }} +{{ include "helm-toolkit.utils.to_oslo_conf" $envAll.Values.conf.neutron | indent 4 }} dhcp_agent.ini: | -{{ include "helm-toolkit.utils.to_oslo_conf" .Values.conf.dhcp_agent | indent 4 }} +{{ include "helm-toolkit.utils.to_oslo_conf" $envAll.Values.conf.dhcp_agent | indent 4 }} l3_agent.ini: | -{{ include "helm-toolkit.utils.to_oslo_conf" .Values.conf.l3_agent | indent 4 }} +{{ include "helm-toolkit.utils.to_oslo_conf" $envAll.Values.conf.l3_agent | indent 4 }} metadata_agent.ini: | -{{ include "helm-toolkit.utils.to_oslo_conf" .Values.conf.metadata_agent | indent 4 }} +{{ include "helm-toolkit.utils.to_oslo_conf" $envAll.Values.conf.metadata_agent | indent 4 }} metering_agent.ini: | -{{ include "helm-toolkit.utils.to_oslo_conf" .Values.conf.metering_agent | indent 4 }} +{{ include "helm-toolkit.utils.to_oslo_conf" $envAll.Values.conf.metering_agent | indent 4 }} ml2_conf.ini: | -{{ include "helm-toolkit.utils.to_oslo_conf" .Values.conf.plugins.ml2_conf | indent 4 }} +{{ include "helm-toolkit.utils.to_oslo_conf" $envAll.Values.conf.plugins.ml2_conf | indent 4 }} ml2_conf_sriov.ini: | -{{ include "helm-toolkit.utils.to_oslo_conf" .Values.conf.plugins.ml2_conf_sriov | indent 4 }} +{{ include "helm-toolkit.utils.to_oslo_conf" $envAll.Values.conf.plugins.ml2_conf_sriov | indent 4 }} macvtap_agent.ini: | -{{ include "helm-toolkit.utils.to_oslo_conf" .Values.conf.plugins.macvtap_agent | indent 4 }} +{{ include "helm-toolkit.utils.to_oslo_conf" 
$envAll.Values.conf.plugins.macvtap_agent | indent 4 }} linuxbridge_agent.ini: | -{{ include "helm-toolkit.utils.to_oslo_conf" .Values.conf.plugins.linuxbridge_agent | indent 4 }} +{{ include "helm-toolkit.utils.to_oslo_conf" $envAll.Values.conf.plugins.linuxbridge_agent | indent 4 }} openvswitch_agent.ini: | -{{ include "helm-toolkit.utils.to_oslo_conf" .Values.conf.plugins.openvswitch_agent | indent 4 }} +{{ include "helm-toolkit.utils.to_oslo_conf" $envAll.Values.conf.plugins.openvswitch_agent | indent 4 }} sriov_agent.ini: | -{{ include "helm-toolkit.utils.to_oslo_conf" .Values.conf.plugins.sriov_agent | indent 4 }} +{{ include "helm-toolkit.utils.to_oslo_conf" $envAll.Values.conf.plugins.sriov_agent | indent 4 }} ContrailPlugin.ini: | -{{ include "helm-toolkit.utils.to_oslo_conf" .Values.conf.plugins.opencontrail | indent 4 }} +{{ include "helm-toolkit.utils.to_oslo_conf" $envAll.Values.conf.plugins.opencontrail | indent 4 }} dnsmasq.conf: "" neutron_sudoers: | -{{- tuple .Values.conf.neutron_sudoers "etc/_neutron_sudoers.tpl" . | include "helm-toolkit.utils.configmap_templater" }} +{{ $envAll.Values.conf.neutron_sudoers | indent 4 }} rootwrap.conf: | -{{- tuple .Values.conf.rootwrap "etc/_rootwrap.conf.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - debug.filters: | -{{- tuple .Values.conf.rootwrap_filters.debug "etc/rootwrap.d/_debug.filters.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - dibbler.filters: | -{{- tuple .Values.conf.rootwrap_filters.dibbler "etc/rootwrap.d/_dibbler.filters.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - ipset-firewall.filters: | -{{- tuple .Values.conf.rootwrap_filters.ipset_firewall "etc/rootwrap.d/_ipset-firewall.filters.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - l3.filters: | -{{- tuple .Values.conf.rootwrap_filters.l3 "etc/rootwrap.d/_l3.filters.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - netns-cleanup.filters: | -{{- tuple .Values.conf.rootwrap_filters.netns_cleanup "etc/rootwrap.d/_netns-cleanup.filters.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - dhcp.filters: | -{{- tuple .Values.conf.rootwrap_filters.dhcp "etc/rootwrap.d/_dhcp.filters.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - ebtables.filters: | -{{- tuple .Values.conf.rootwrap_filters.ebtables "etc/rootwrap.d/_ebtables.filters.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - iptables-firewall.filters: | -{{- tuple .Values.conf.rootwrap_filters.iptables_firewall "etc/rootwrap.d/_iptables-firewall.filters.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - linuxbridge-plugin.filters: | -{{- tuple .Values.conf.rootwrap_filters.linuxbridge_plugin "etc/rootwrap.d/_linuxbridge-plugin.filters.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - openvswitch-plugin.filters: | -{{- tuple .Values.conf.rootwrap_filters.openvswitch_plugin "etc/rootwrap.d/_openvswitch-plugin.filters.tpl" . | include "helm-toolkit.utils.configmap_templater" }} +{{ $envAll.Values.conf.rootwrap | indent 4 }} +{{- range $key, $value := $envAll.Values.conf.rootwrap_filters }} +{{- $filePrefix := replace "_" "-" $key }} + {{ printf "%s.filters" $filePrefix }}: | +{{ $value.content | indent 4 }} +{{- end }} +{{- end }} +{{- end }} + +{{- if .Values.manifests.configmap_etc }} +{{- list "neutron-etc" . 
| include "neutron.configmap.etc" }} {{- end }} diff --git a/neutron/templates/daemonset-dhcp-agent.yaml b/neutron/templates/daemonset-dhcp-agent.yaml index 97b73d4536..ab98e341a8 100644 --- a/neutron/templates/daemonset-dhcp-agent.yaml +++ b/neutron/templates/daemonset-dhcp-agent.yaml @@ -14,17 +14,17 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.daemonset_dhcp_agent }} -{{- $envAll := . }} - -{{- $dependencyOpts := dict "envAll" $envAll "dependencyMixinParam" $envAll.Values.network.backend "dependencyKey" "dhcp" -}} -{{- $dependencies := include "helm-toolkit.utils.dependency_resolver" $dependencyOpts | toString | fromYaml }} +{{- define "neutron.dhcp_agent.daemonset" }} +{{- $daemonset := index . 0 }} +{{- $configMapName := index . 1 }} +{{- $serviceAccountName := index . 2 }} +{{- $dependencies := index . 3 }} +{{- $envAll := index . 4 }} +{{- with $envAll }} {{- $mounts_neutron_dhcp_agent := .Values.pod.mounts.neutron_dhcp_agent.neutron_dhcp_agent }} {{- $mounts_neutron_dhcp_agent_init := .Values.pod.mounts.neutron_dhcp_agent.init_container }} -{{- $serviceAccountName := "neutron-dhcp-agent" }} -{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} --- apiVersion: extensions/v1beta1 kind: DaemonSet @@ -70,7 +70,7 @@ spec: mountPath: /etc/neutron/plugins/ml2/ml2_conf.ini subPath: ml2_conf.ini readOnly: true - {{- if eq .Values.network.backend "ovs" }} + {{- if ( has "openvswitch" .Values.network.backend ) }} - name: neutron-etc mountPath: /etc/neutron/plugins/ml2/openvswitch_agent.ini subPath: openvswitch_agent.ini @@ -101,46 +101,16 @@ spec: mountPath: /etc/neutron/rootwrap.conf subPath: rootwrap.conf readOnly: true + {{- range $key, $value := $envAll.Values.conf.rootwrap_filters }} + {{- if ( has "dhcp_agent" $value.pods ) }} + {{- $filePrefix := replace "_" "-" $key }} + {{- $rootwrapFile := printf "/etc/neutron/rootwrap.d/%s.filters" $filePrefix }} - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/debug.filters - subPath: debug.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/dibbler.filters - subPath: dibbler.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/ipset-firewall.filters - subPath: ipset-firewall.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/l3.filters - subPath: l3.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/netns-cleanup.filters - subPath: netns-cleanup.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/dhcp.filters - subPath: dhcp.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/ebtables.filters - subPath: ebtables.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/iptables-firewall.filters - subPath: iptables-firewall.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/linuxbridge-plugin.filters - subPath: linuxbridge-plugin.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/openvswitch-plugin.filters - subPath: openvswitch-plugin.filters + mountPath: {{ $rootwrapFile }} + subPath: {{ base $rootwrapFile }} readOnly: true + {{- end }} + {{- end }} - name: socket mountPath: /var/lib/neutron/openstack-helm {{ if $mounts_neutron_dhcp_agent.volumeMounts }}{{ toYaml $mounts_neutron_dhcp_agent.volumeMounts 
| indent 12 }}{{ end }} @@ -151,9 +121,9 @@ spec: defaultMode: 0555 - name: neutron-etc configMap: - name: neutron-etc + name: {{ $configMapName }} defaultMode: 0444 - {{- if eq .Values.network.backend "ovs" }} + {{- if ( has "openvswitch" .Values.network.backend ) }} - name: runopenvswitch hostPath: path: /run/openvswitch @@ -163,3 +133,17 @@ spec: path: /var/lib/neutron/openstack-helm {{ if $mounts_neutron_dhcp_agent.volumes }}{{ toYaml $mounts_neutron_dhcp_agent.volumes | indent 8 }}{{ end }} {{- end }} +{{- end }} + +{{- if .Values.manifests.daemonset_dhcp_agent }} +{{- $envAll := . }} +{{- $daemonset := "dhcp-agent" }} +{{- $configMapName := "neutron-etc" }} +{{- $serviceAccountName := "neutron-dhcp-agent" }} +{{- $dependencyOpts := dict "envAll" $envAll "dependencyMixinParam" $envAll.Values.network.backend "dependencyKey" "dhcp" -}} +{{- $dependencies := include "helm-toolkit.utils.dependency_resolver" $dependencyOpts | toString | fromYaml }} +{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} +{{- $daemonset_yaml := list $daemonset $configMapName $serviceAccountName $dependencies . | include "neutron.dhcp_agent.daemonset" | toString | fromYaml }} +{{- $configmap_yaml := "neutron.configmap.etc" }} +{{- list $daemonset $daemonset_yaml $configmap_yaml $configMapName . | include "helm-toolkit.utils.daemonset_overrides" }} +{{- end }} diff --git a/neutron/templates/daemonset-l3-agent.yaml b/neutron/templates/daemonset-l3-agent.yaml index 2c6afc7ac8..bacbe04cf5 100644 --- a/neutron/templates/daemonset-l3-agent.yaml +++ b/neutron/templates/daemonset-l3-agent.yaml @@ -14,17 +14,17 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.daemonset_l3_agent }} -{{- $envAll := . }} - -{{- $dependencyOpts := dict "envAll" $envAll "dependencyMixinParam" $envAll.Values.network.backend "dependencyKey" "l3" -}} -{{- $dependencies := include "helm-toolkit.utils.dependency_resolver" $dependencyOpts | toString | fromYaml }} +{{- define "neutron.l3_agent.daemonset" }} +{{- $daemonset := index . 0 }} +{{- $configMapName := index . 1 }} +{{- $serviceAccountName := index . 2 }} +{{- $dependencies := index . 3 }} +{{- $envAll := index . 
4 }} +{{- with $envAll }} {{- $mounts_neutron_l3_agent := .Values.pod.mounts.neutron_l3_agent.neutron_l3_agent }} {{- $mounts_neutron_l3_agent_init := .Values.pod.mounts.neutron_l3_agent.init_container }} -{{- $serviceAccountName := "neutron-l3-agent" }} -{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} --- apiVersion: extensions/v1beta1 kind: DaemonSet @@ -70,7 +70,7 @@ spec: mountPath: /etc/neutron/plugins/ml2/ml2_conf.ini subPath: ml2_conf.ini readOnly: true - {{- if eq .Values.network.backend "ovs" }} + {{- if ( has "openvswitch" .Values.network.backend ) }} - name: neutron-etc mountPath: /etc/neutron/plugins/ml2/openvswitch_agent.ini subPath: openvswitch_agent.ini @@ -97,46 +97,16 @@ spec: mountPath: /etc/neutron/rootwrap.conf subPath: rootwrap.conf readOnly: true + {{- range $key, $value := $envAll.Values.conf.rootwrap_filters }} + {{- if ( has "l3_agent" $value.pods ) }} + {{- $filePrefix := replace "_" "-" $key }} + {{- $rootwrapFile := printf "/etc/neutron/rootwrap.d/%s.filters" $filePrefix }} - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/debug.filters - subPath: debug.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/dibbler.filters - subPath: dibbler.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/ipset-firewall.filters - subPath: ipset-firewall.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/l3.filters - subPath: l3.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/netns-cleanup.filters - subPath: netns-cleanup.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/dhcp.filters - subPath: dhcp.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/ebtables.filters - subPath: ebtables.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/iptables-firewall.filters - subPath: iptables-firewall.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/linuxbridge-plugin.filters - subPath: linuxbridge-plugin.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/openvswitch-plugin.filters - subPath: openvswitch-plugin.filters + mountPath: {{ $rootwrapFile }} + subPath: {{ base $rootwrapFile }} readOnly: true + {{- end }} + {{- end }} - name: libmodules mountPath: /lib/modules readOnly: true @@ -150,9 +120,9 @@ spec: defaultMode: 0555 - name: neutron-etc configMap: - name: neutron-etc + name: {{ $configMapName }} defaultMode: 0444 - {{- if eq .Values.network.backend "ovs" }} + {{- if ( has "openvswitch" .Values.network.backend ) }} - name: runopenvswitch hostPath: path: /run/openvswitch @@ -165,3 +135,17 @@ spec: path: /var/lib/neutron/openstack-helm {{ if $mounts_neutron_l3_agent.volumes }}{{ toYaml $mounts_neutron_l3_agent.volumes | indent 8 }}{{ end }} {{- end }} +{{- end }} + +{{- if .Values.manifests.daemonset_l3_agent }} +{{- $envAll := . 
}} +{{- $daemonset := "l3-agent" }} +{{- $configMapName := "neutron-etc" }} +{{- $serviceAccountName := "neutron-l3-agent" }} +{{- $dependencyOpts := dict "envAll" $envAll "dependencyMixinParam" $envAll.Values.network.backend "dependencyKey" "l3" -}} +{{- $dependencies := include "helm-toolkit.utils.dependency_resolver" $dependencyOpts | toString | fromYaml }} +{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} +{{- $daemonset_yaml := list $daemonset $configMapName $serviceAccountName $dependencies . | include "neutron.l3_agent.daemonset" | toString | fromYaml }} +{{- $configmap_yaml := "neutron.configmap.etc" }} +{{- list $daemonset $daemonset_yaml $configmap_yaml $configMapName . | include "helm-toolkit.utils.daemonset_overrides" }} +{{- end }} diff --git a/neutron/templates/daemonset-lb-agent.yaml b/neutron/templates/daemonset-lb-agent.yaml index 3461add711..821f2bb7ce 100644 --- a/neutron/templates/daemonset-lb-agent.yaml +++ b/neutron/templates/daemonset-lb-agent.yaml @@ -14,17 +14,17 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if and .Values.manifests.daemonset_lb_agent ( eq .Values.network.backend "linuxbridge" ) }} -{{- $envAll := . }} - -{{- $dependencyOpts := dict "envAll" $envAll "dependencyMixinParam" $envAll.Values.network.backend "dependencyKey" "lb_agent" -}} -{{- $dependencies := include "helm-toolkit.utils.dependency_resolver" $dependencyOpts | toString | fromYaml }} +{{- define "neutron.lb_agent.daemonset" }} +{{- $daemonset := index . 0 }} +{{- $configMapName := index . 1 }} +{{- $serviceAccountName := index . 2 }} +{{- $dependencies := index . 3 }} +{{- $envAll := index . 4 }} +{{- with $envAll }} {{- $mounts_neutron_lb_agent := .Values.pod.mounts.neutron_lb_agent.neutron_lb_agent }} {{- $mounts_neutron_lb_agent_init := .Values.pod.mounts.neutron_lb_agent.init_container }} -{{- $serviceAccountName := "neutron-lb-agent" }} -{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} --- apiVersion: extensions/v1beta1 kind: DaemonSet @@ -104,46 +104,16 @@ spec: mountPath: /etc/neutron/rootwrap.conf subPath: rootwrap.conf readOnly: true + {{- range $key, $value := $envAll.Values.conf.rootwrap_filters }} + {{- if ( has "lb_agent" $value.pods ) }} + {{- $filePrefix := replace "_" "-" $key }} + {{- $rootwrapFile := printf "/etc/neutron/rootwrap.d/%s.filters" $filePrefix }} - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/debug.filters - subPath: debug.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/dibbler.filters - subPath: dibbler.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/ipset-firewall.filters - subPath: ipset-firewall.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/l3.filters - subPath: l3.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/netns-cleanup.filters - subPath: netns-cleanup.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/dhcp.filters - subPath: dhcp.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/ebtables.filters - subPath: ebtables.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/iptables-firewall.filters - subPath: iptables-firewall.filters - readOnly: true - - name: neutron-etc - mountPath: 
/etc/neutron/rootwrap.d/linuxbridge-plugin.filters - subPath: linuxbridge-plugin.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/openvswitch-plugin.filters - subPath: openvswitch-plugin.filters + mountPath: {{ $rootwrapFile }} + subPath: {{ base $rootwrapFile }} readOnly: true + {{- end }} + {{- end }} - name: run mountPath: /run {{ if $mounts_neutron_lb_agent.volumeMounts }}{{ toYaml $mounts_neutron_lb_agent.volumeMounts | indent 12 }}{{ end }} @@ -193,46 +163,16 @@ spec: mountPath: /etc/neutron/rootwrap.conf subPath: rootwrap.conf readOnly: true + {{- range $key, $value := $envAll.Values.conf.rootwrap_filters }} + {{- if ( has "lb_agent" $value.pods ) }} + {{- $filePrefix := replace "_" "-" $key }} + {{- $rootwrapFile := printf "/etc/neutron/rootwrap.d/%s.filters" $filePrefix }} - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/debug.filters - subPath: debug.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/dibbler.filters - subPath: dibbler.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/ipset-firewall.filters - subPath: ipset-firewall.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/l3.filters - subPath: l3.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/netns-cleanup.filters - subPath: netns-cleanup.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/dhcp.filters - subPath: dhcp.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/ebtables.filters - subPath: ebtables.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/iptables-firewall.filters - subPath: iptables-firewall.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/linuxbridge-plugin.filters - subPath: linuxbridge-plugin.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/openvswitch-plugin.filters - subPath: openvswitch-plugin.filters + mountPath: {{ $rootwrapFile }} + subPath: {{ base $rootwrapFile }} readOnly: true + {{- end }} + {{- end }} - name: run mountPath: /run {{ if $mounts_neutron_lb_agent.volumeMounts }}{{ toYaml $mounts_neutron_lb_agent.volumeMounts | indent 12 }}{{ end }} @@ -245,7 +185,7 @@ spec: defaultMode: 0555 - name: neutron-etc configMap: - name: neutron-etc + name: {{ $configMapName }} defaultMode: 0444 - name: run hostPath: @@ -255,3 +195,17 @@ spec: path: / {{ if $mounts_neutron_lb_agent.volumes }}{{ toYaml $mounts_neutron_lb_agent.volumes | indent 8 }}{{ end }} {{- end }} +{{- end }} + +{{- if and .Values.manifests.daemonset_lb_agent ( has "linuxbridge" .Values.network.backend ) }} +{{- $envAll := . }} +{{- $daemonset := "lb-agent" }} +{{- $configMapName := "neutron-etc" }} +{{- $serviceAccountName := "neutron-lb-agent" }} +{{- $dependencyOpts := dict "envAll" $envAll "dependencyMixinParam" $envAll.Values.network.backend "dependencyKey" "lb_agent" -}} +{{- $dependencies := include "helm-toolkit.utils.dependency_resolver" $dependencyOpts | toString | fromYaml }} +{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} +{{- $daemonset_yaml := list $daemonset $configMapName $serviceAccountName $dependencies . | include "neutron.lb_agent.daemonset" | toString | fromYaml }} +{{- $configmap_yaml := "neutron.configmap.etc" }} +{{- list $daemonset $daemonset_yaml $configmap_yaml $configMapName . 
| include "helm-toolkit.utils.daemonset_overrides" }} +{{- end }} diff --git a/neutron/templates/daemonset-metadata-agent.yaml b/neutron/templates/daemonset-metadata-agent.yaml index 3cd660c41b..32dc87ac2b 100644 --- a/neutron/templates/daemonset-metadata-agent.yaml +++ b/neutron/templates/daemonset-metadata-agent.yaml @@ -14,17 +14,17 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.daemonset_metadata_agent }} -{{- $envAll := . }} - -{{- $dependencyOpts := dict "envAll" $envAll "dependencyMixinParam" $envAll.Values.network.backend "dependencyKey" "metadata" -}} -{{- $dependencies := include "helm-toolkit.utils.dependency_resolver" $dependencyOpts | toString | fromYaml }} +{{- define "neutron.metadata_agent.daemonset" }} +{{- $daemonset := index . 0 }} +{{- $configMapName := index . 1 }} +{{- $serviceAccountName := index . 2 }} +{{- $dependencies := index . 3 }} +{{- $envAll := index . 4 }} +{{- with $envAll }} {{- $mounts_neutron_metadata_agent := .Values.pod.mounts.neutron_metadata_agent.neutron_metadata_agent }} {{- $mounts_neutron_metadata_agent_init := .Values.pod.mounts.neutron_metadata_agent.init_container }} -{{- $serviceAccountName := "neutron-metadata-agent" }} -{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} --- apiVersion: extensions/v1beta1 kind: DaemonSet @@ -88,18 +88,11 @@ spec: mountPath: /etc/neutron/neutron.conf subPath: neutron.conf readOnly: true - {{- if eq .Values.network.backend "opencontrail" }} - - name: neutron-etc - mountPath: /etc/neutron/plugins/opencontrail/ContrailPlugin.ini - subPath: ContrailPlugin.ini - readOnly: true - {{- else }} - name: neutron-etc mountPath: /etc/neutron/plugins/ml2/ml2_conf.ini subPath: ml2_conf.ini readOnly: true - {{- end }} - {{- if eq .Values.network.backend "ovs" }} + {{- if ( has "openvswitch" .Values.network.backend ) }} - name: neutron-etc mountPath: /etc/neutron/plugins/ml2/openvswitch_agent.ini subPath: openvswitch_agent.ini @@ -122,46 +115,16 @@ spec: mountPath: /etc/neutron/rootwrap.conf subPath: rootwrap.conf readOnly: true + {{- range $key, $value := $envAll.Values.conf.rootwrap_filters }} + {{- if ( has "metadata_agent" $value.pods ) }} + {{- $filePrefix := replace "_" "-" $key }} + {{- $rootwrapFile := printf "/etc/neutron/rootwrap.d/%s.filters" $filePrefix }} - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/debug.filters - subPath: debug.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/dibbler.filters - subPath: dibbler.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/ipset-firewall.filters - subPath: ipset-firewall.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/l3.filters - subPath: l3.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/netns-cleanup.filters - subPath: netns-cleanup.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/dhcp.filters - subPath: dhcp.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/ebtables.filters - subPath: ebtables.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/iptables-firewall.filters - subPath: iptables-firewall.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/linuxbridge-plugin.filters - subPath: linuxbridge-plugin.filters - readOnly: true - - 
name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/openvswitch-plugin.filters - subPath: openvswitch-plugin.filters + mountPath: {{ $rootwrapFile }} + subPath: {{ base $rootwrapFile }} readOnly: true + {{- end }} + {{- end }} - name: socket mountPath: /var/lib/neutron/openstack-helm {{ if $mounts_neutron_metadata_agent.volumeMounts }}{{ toYaml $mounts_neutron_metadata_agent.volumeMounts | indent 12 }}{{ end }} @@ -172,9 +135,9 @@ spec: defaultMode: 0555 - name: neutron-etc configMap: - name: neutron-etc + name: {{ $configMapName }} defaultMode: 0444 - {{- if eq .Values.network.backend "ovs" }} + {{- if ( has "openvswitch" .Values.network.backend ) }} - name: runopenvswitch hostPath: path: /run/openvswitch @@ -184,3 +147,17 @@ spec: path: /var/lib/neutron/openstack-helm {{ if $mounts_neutron_metadata_agent.volumes }}{{ toYaml $mounts_neutron_metadata_agent.volumes | indent 8 }}{{ end }} {{- end }} +{{- end }} + +{{- if .Values.manifests.daemonset_metadata_agent }} +{{- $envAll := . }} +{{- $daemonset := "metadata-agent" }} +{{- $configMapName := "neutron-etc" }} +{{- $serviceAccountName := "neutron-metadata-agent" }} +{{- $dependencyOpts := dict "envAll" $envAll "dependencyMixinParam" $envAll.Values.network.backend "dependencyKey" "metadata" -}} +{{- $dependencies := include "helm-toolkit.utils.dependency_resolver" $dependencyOpts | toString | fromYaml }} +{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} +{{- $daemonset_yaml := list $daemonset $configMapName $serviceAccountName $dependencies . | include "neutron.metadata_agent.daemonset" | toString | fromYaml }} +{{- $configmap_yaml := "neutron.configmap.etc" }} +{{- list $daemonset $daemonset_yaml $configmap_yaml $configMapName . | include "helm-toolkit.utils.daemonset_overrides" }} +{{- end }} diff --git a/neutron/templates/daemonset-ovs-agent.yaml b/neutron/templates/daemonset-ovs-agent.yaml index f1f69927fe..bde5b26f5f 100644 --- a/neutron/templates/daemonset-ovs-agent.yaml +++ b/neutron/templates/daemonset-ovs-agent.yaml @@ -14,17 +14,17 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if and .Values.manifests.daemonset_ovs_agent ( eq .Values.network.backend "ovs" ) }} -{{- $envAll := . }} - -{{- $dependencyOpts := dict "envAll" $envAll "dependencyMixinParam" $envAll.Values.network.backend "dependencyKey" "ovs_agent" -}} -{{- $dependencies := include "helm-toolkit.utils.dependency_resolver" $dependencyOpts | toString | fromYaml }} +{{- define "neutron.ovs_agent.daemonset" }} +{{- $daemonset := index . 0 }} +{{- $configMapName := index . 1 }} +{{- $serviceAccountName := index . 2 }} +{{- $dependencies := index . 3 }} +{{- $envAll := index . 
4 }} +{{- with $envAll }} {{- $mounts_neutron_ovs_agent := .Values.pod.mounts.neutron_ovs_agent.neutron_ovs_agent }} {{- $mounts_neutron_ovs_agent_init := .Values.pod.mounts.neutron_ovs_agent.init_container }} -{{- $serviceAccountName := "neutron-ovs-agent" }} -{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} --- apiVersion: extensions/v1beta1 kind: DaemonSet @@ -104,46 +104,16 @@ spec: mountPath: /etc/neutron/rootwrap.conf subPath: rootwrap.conf readOnly: true + {{- range $key, $value := $envAll.Values.conf.rootwrap_filters }} + {{- if ( has "ovs_agent" $value.pods ) }} + {{- $filePrefix := replace "_" "-" $key }} + {{- $rootwrapFile := printf "/etc/neutron/rootwrap.d/%s.filters" $filePrefix }} - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/debug.filters - subPath: debug.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/dibbler.filters - subPath: dibbler.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/ipset-firewall.filters - subPath: ipset-firewall.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/l3.filters - subPath: l3.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/netns-cleanup.filters - subPath: netns-cleanup.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/dhcp.filters - subPath: dhcp.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/ebtables.filters - subPath: ebtables.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/iptables-firewall.filters - subPath: iptables-firewall.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/linuxbridge-plugin.filters - subPath: linuxbridge-plugin.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/openvswitch-plugin.filters - subPath: openvswitch-plugin.filters + mountPath: {{ $rootwrapFile }} + subPath: {{ base $rootwrapFile }} readOnly: true + {{- end }} + {{- end }} - name: run mountPath: /run {{ if $mounts_neutron_ovs_agent.volumeMounts }}{{ toYaml $mounts_neutron_ovs_agent.volumeMounts | indent 12 }}{{ end }} @@ -195,46 +165,16 @@ spec: mountPath: /etc/neutron/rootwrap.conf subPath: rootwrap.conf readOnly: true + {{- range $key, $value := $envAll.Values.conf.rootwrap_filters }} + {{- if ( has "ovs_agent" $value.pods ) }} + {{- $filePrefix := replace "_" "-" $key }} + {{- $rootwrapFile := printf "/etc/neutron/rootwrap.d/%s.filters" $filePrefix }} - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/debug.filters - subPath: debug.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/dibbler.filters - subPath: dibbler.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/ipset-firewall.filters - subPath: ipset-firewall.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/l3.filters - subPath: l3.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/netns-cleanup.filters - subPath: netns-cleanup.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/dhcp.filters - subPath: dhcp.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/ebtables.filters - subPath: ebtables.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/iptables-firewall.filters - 
subPath: iptables-firewall.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/linuxbridge-plugin.filters - subPath: linuxbridge-plugin.filters - readOnly: true - - name: neutron-etc - mountPath: /etc/neutron/rootwrap.d/openvswitch-plugin.filters - subPath: openvswitch-plugin.filters + mountPath: {{ $rootwrapFile }} + subPath: {{ base $rootwrapFile }} readOnly: true + {{- end }} + {{- end }} - name: run mountPath: /run {{ if $mounts_neutron_ovs_agent.volumeMounts }}{{ toYaml $mounts_neutron_ovs_agent.volumeMounts | indent 12 }}{{ end }} @@ -249,7 +189,7 @@ spec: defaultMode: 0555 - name: neutron-etc configMap: - name: neutron-etc + name: {{ $configMapName }} defaultMode: 0444 - name: run hostPath: @@ -259,3 +199,17 @@ spec: path: / {{ if $mounts_neutron_ovs_agent.volumes }}{{ toYaml $mounts_neutron_ovs_agent.volumes | indent 8 }}{{ end }} {{- end }} +{{- end }} + +{{- if and .Values.manifests.daemonset_ovs_agent ( has "openvswitch" .Values.network.backend ) }} +{{- $envAll := . }} +{{- $daemonset := "ovs-agent" }} +{{- $configMapName := "neutron-etc" }} +{{- $serviceAccountName := "neutron-ovs-agent" }} +{{- $dependencyOpts := dict "envAll" $envAll "dependencyMixinParam" $envAll.Values.network.backend "dependencyKey" "ovs_agent" -}} +{{- $dependencies := include "helm-toolkit.utils.dependency_resolver" $dependencyOpts | toString | fromYaml }} +{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} +{{- $daemonset_yaml := list $daemonset $configMapName $serviceAccountName $dependencies . | include "neutron.ovs_agent.daemonset" | toString | fromYaml }} +{{- $configmap_yaml := "neutron.configmap.etc" }} +{{- list $daemonset $daemonset_yaml $configmap_yaml $configMapName . | include "helm-toolkit.utils.daemonset_overrides" }} +{{- end }} diff --git a/neutron/templates/daemonset-sriov-agent.yaml b/neutron/templates/daemonset-sriov-agent.yaml new file mode 100644 index 0000000000..97459a882c --- /dev/null +++ b/neutron/templates/daemonset-sriov-agent.yaml @@ -0,0 +1,187 @@ +{{/* +Copyright 2017 The Openstack-Helm Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} + +{{- define "neutron.sriov_agent.daemonset" }} +{{- $daemonset := index . 0 }} +{{- $configMapName := index . 1 }} +{{- $serviceAccountName := index . 2 }} +{{- $dependencies := index . 3 }} +{{- $envAll := index . 
4 }} +{{- with $envAll }} + +{{- $mounts_neutron_sriov_agent := .Values.pod.mounts.neutron_sriov_agent.neutron_sriov_agent }} +{{- $mounts_neutron_sriov_agent_init := .Values.pod.mounts.neutron_sriov_agent.init_container }} + +--- +apiVersion: extensions/v1beta1 +kind: DaemonSet +metadata: + name: neutron-sriov-agent +spec: +{{ tuple $envAll "sriov_agent" | include "helm-toolkit.snippets.kubernetes_upgrades_daemonset" | indent 2 }} + template: + metadata: + labels: +{{ tuple $envAll "neutron" "neutron-sriov-agent" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }} + annotations: + configmap-bin-hash: {{ tuple "configmap-bin.yaml" . | include "helm-toolkit.utils.hash" }} + configmap-etc-hash: {{ tuple "configmap-etc.yaml" . | include "helm-toolkit.utils.hash" }} + spec: + serviceAccountName: {{ $serviceAccountName }} + nodeSelector: + {{ .Values.labels.sriov.node_selector_key }}: {{ .Values.labels.sriov.node_selector_value }} + dnsPolicy: ClusterFirstWithHostNet + hostNetwork: true + initContainers: +{{ tuple $envAll $dependencies $mounts_neutron_sriov_agent_init | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }} + - name: neutron-sriov-agent-init + image: {{ .Values.images.tags.neutron_sriov_agent_init }} + imagePullPolicy: {{ .Values.images.pull_policy }} +{{ tuple $envAll $envAll.Values.pod.resources.agent.sriov | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} + securityContext: + privileged: true + runAsUser: 0 + command: + - /tmp/neutron-sriov-agent-init.sh + volumeMounts: + - name: neutron-bin + mountPath: /tmp/neutron-sriov-agent-init.sh + subPath: neutron-sriov-agent-init.sh + readOnly: true + - name: pod-shared + mountPath: /tmp/pod-shared + - name: neutron-etc + mountPath: /etc/neutron/neutron.conf + subPath: neutron.conf + readOnly: true + - name: neutron-etc + mountPath: /etc/neutron/plugins/ml2/ml2_conf.ini + subPath: ml2_conf.ini + readOnly: true + - name: neutron-etc + mountPath: /etc/neutron/plugins/ml2/sriov_agent.ini + subPath: sriov_agent.ini + readOnly: true + - name: neutron-etc + # NOTE (Portdirect): We mount here to override Kolla's + # custom sudoers file when using Kolla images; this + # location will also work fine for other images.
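+ # NOTE: the neutron-etc ConfigMap renders the values.conf.neutron_sudoers + # string under the key "neutron_sudoers" (see configmap-etc.yaml), so this + # subPath mount shadows the image's own sudoers entry with that content.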
+ mountPath: /etc/sudoers.d/kolla_neutron_sudoers + subPath: neutron_sudoers + readOnly: true + - name: neutron-etc + mountPath: /etc/neutron/rootwrap.conf + subPath: rootwrap.conf + readOnly: true + {{- range $key, $value := $envAll.Values.conf.rootwrap_filters }} + {{- if ( has "sriov_agent" $value.pods ) }} + {{- $filePrefix := replace "_" "-" $key }} + {{- $rootwrapFile := printf "/etc/neutron/rootwrap.d/%s.filters" $filePrefix }} + - name: neutron-etc + mountPath: {{ $rootwrapFile }} + subPath: {{ base $rootwrapFile }} + readOnly: true + {{- end }} + {{- end }} + - name: run + mountPath: /run +{{ if $mounts_neutron_sriov_agent.volumeMounts }}{{ toYaml $mounts_neutron_sriov_agent.volumeMounts | indent 12 }}{{ end }} + containers: + - name: neutron-sriov-agent + image: {{ .Values.images.tags.neutron_sriov_agent }} + imagePullPolicy: {{ .Values.images.pull_policy }} +{{ tuple $envAll $envAll.Values.pod.resources.agent.sriov | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} + securityContext: + runAsUser: {{ .Values.pod.user.neutron.uid }} + privileged: true + command: + - /tmp/neutron-sriov-agent.sh + volumeMounts: + - name: neutron-bin + mountPath: /tmp/neutron-sriov-agent.sh + subPath: neutron-sriov-agent.sh + readOnly: true + - name: pod-shared + mountPath: /tmp/pod-shared + - name: neutron-etc + mountPath: /etc/neutron/neutron.conf + subPath: neutron.conf + readOnly: true + - name: neutron-etc + mountPath: /etc/neutron/plugins/ml2/ml2_conf.ini + subPath: ml2_conf.ini + readOnly: true + - name: neutron-etc + mountPath: /etc/neutron/plugins/ml2/sriov_agent.ini + subPath: sriov_agent.ini + readOnly: true + - name: neutron-etc + # NOTE (Portdirect): We mount here to override Kolla's + # custom sudoers file when using Kolla images; this + # location will also work fine for other images. + mountPath: /etc/sudoers.d/kolla_neutron_sudoers + subPath: neutron_sudoers + readOnly: true + - name: neutron-etc + mountPath: /etc/neutron/rootwrap.conf + subPath: rootwrap.conf + readOnly: true + {{- range $key, $value := $envAll.Values.conf.rootwrap_filters }} + {{- if ( has "sriov_agent" $value.pods ) }} + {{- $filePrefix := replace "_" "-" $key }} + {{- $rootwrapFile := printf "/etc/neutron/rootwrap.d/%s.filters" $filePrefix }} + - name: neutron-etc + mountPath: {{ $rootwrapFile }} + subPath: {{ base $rootwrapFile }} + readOnly: true + {{- end }} + {{- end }} + - name: run + mountPath: /run +{{ if $mounts_neutron_sriov_agent.volumeMounts }}{{ toYaml $mounts_neutron_sriov_agent.volumeMounts | indent 12 }}{{ end }} + volumes: + - name: pod-shared + emptyDir: {} + - name: neutron-bin + configMap: + name: neutron-bin + defaultMode: 0555 + - name: neutron-etc + configMap: + name: {{ $configMapName }} + defaultMode: 0444 + - name: run + hostPath: + path: /run + - name: host-rootfs + hostPath: + path: / +{{ if $mounts_neutron_sriov_agent.volumes }}{{ toYaml $mounts_neutron_sriov_agent.volumes | indent 8 }}{{ end }} +{{- end }} +{{- end }} + +{{- if and .Values.manifests.daemonset_sriov_agent ( has "sriov" .Values.network.backend ) }} +{{- $envAll := .
}} +{{- $daemonset := "sriov-agent" }} +{{- $configMapName := "neutron-etc" }} +{{- $serviceAccountName := "neutron-sriov-agent" }} +{{- $dependencyOpts := dict "envAll" $envAll "dependencyMixinParam" $envAll.Values.network.backend "dependencyKey" "sriov_agent" -}} +{{- $dependencies := include "helm-toolkit.utils.dependency_resolver" $dependencyOpts | toString | fromYaml }} +{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} +{{- $daemonset_yaml := list $daemonset $configMapName $serviceAccountName $dependencies . | include "neutron.sriov_agent.daemonset" | toString | fromYaml }} +{{- $configmap_yaml := "neutron.configmap.etc" }} +{{- list $daemonset $daemonset_yaml $configmap_yaml $configMapName . | include "helm-toolkit.utils.daemonset_overrides" }} +{{- end }} diff --git a/neutron/templates/deployment-server.yaml b/neutron/templates/deployment-server.yaml index cff7ae7e0f..fc95caf5d0 100644 --- a/neutron/templates/deployment-server.yaml +++ b/neutron/templates/deployment-server.yaml @@ -49,7 +49,7 @@ spec: terminationGracePeriodSeconds: {{ .Values.pod.lifecycle.termination_grace_period.server.timeout | default "30" }} initContainers: {{ tuple $envAll $dependencies $mounts_neutron_server_init | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }} - {{- if eq .Values.network.backend "opencontrail" }} + {{- if ( has "opencontrail" .Values.network.backend ) }} - name: opencontrail-neutron-init image: {{ .Values.images.tags.opencontrail_neutron_init }} imagePullPolicy: {{ .Values.images.pull_policy }} @@ -90,7 +90,7 @@ spec: mountPath: /etc/neutron/neutron.conf subPath: neutron.conf readOnly: true - {{- if eq .Values.network.backend "opencontrail" }} + {{- if ( has "opencontrail" .Values.network.backend ) }} - name: neutron-etc mountPath: /etc/neutron/plugins/opencontrail/ContrailPlugin.ini subPath: ContrailPlugin.ini @@ -125,7 +125,7 @@ spec: configMap: name: neutron-etc defaultMode: 0444 - {{- if eq .Values.network.backend "opencontrail" }} + {{- if ( has "opencontrail" .Values.network.backend ) }} - name: neutron-plugin-shared emptyDir: {} {{- end }} diff --git a/neutron/templates/etc/_rootwrap.conf.tpl b/neutron/templates/etc/_rootwrap.conf.tpl deleted file mode 100644 index 0e7c3c5789..0000000000 --- a/neutron/templates/etc/_rootwrap.conf.tpl +++ /dev/null @@ -1,34 +0,0 @@ -# Configuration for neutron-rootwrap -# This file should be owned by (and only-writeable by) the root user - -[DEFAULT] -# List of directories to load filter definitions from (separated by ','). -# These directories MUST all be only writeable by root ! -filters_path=/etc/neutron/rootwrap.d,/usr/share/neutron/rootwrap - -# List of directories to search executables in, in case filters do not -# explicitely specify a full path (separated by ',') -# If not specified, defaults to system PATH environment variable. -# These directories MUST all be only writeable by root ! -exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin,/usr/local/sbin,/var/lib/openstack/bin,/var/lib/kolla/venv/bin - -# Enable logging to syslog -# Default value is False -use_syslog=False - -# Which syslog facility to use. -# Valid values include auth, authpriv, syslog, local0, local1... -# Default value is 'syslog' -syslog_log_facility=syslog - -# Which messages to log. 
-# INFO means log all usage -# ERROR means only log unsuccessful attempts -syslog_log_level=ERROR - -[xenapi] -# XenAPI configuration is only required by the L2 agent if it is to -# target a XenServer/XCP compute host's dom0. -xenapi_connection_url= -xenapi_connection_username=root -xenapi_connection_password= diff --git a/neutron/templates/etc/rootwrap.d/_debug.filters.tpl b/neutron/templates/etc/rootwrap.d/_debug.filters.tpl deleted file mode 100644 index 89cb042a3a..0000000000 --- a/neutron/templates/etc/rootwrap.d/_debug.filters.tpl +++ /dev/null @@ -1,18 +0,0 @@ -# neutron-rootwrap command filters for nodes on which neutron is -# expected to control network -# -# This file should be owned by (and only-writeable by) the root user - -# format seems to be -# cmd-name: filter-name, raw-command, user, args - -[Filters] - -# This is needed because we should ping -# from inside a namespace which requires root -# _alt variants allow to match -c and -w in any order -# (used by NeutronDebugAgent.ping_all) -ping: RegExpFilter, ping, root, ping, -w, \d+, -c, \d+, [0-9\.]+ -ping_alt: RegExpFilter, ping, root, ping, -c, \d+, -w, \d+, [0-9\.]+ -ping6: RegExpFilter, ping6, root, ping6, -w, \d+, -c, \d+, [0-9A-Fa-f:]+ -ping6_alt: RegExpFilter, ping6, root, ping6, -c, \d+, -w, \d+, [0-9A-Fa-f:]+ diff --git a/neutron/templates/etc/rootwrap.d/_dhcp.filters.tpl b/neutron/templates/etc/rootwrap.d/_dhcp.filters.tpl deleted file mode 100644 index 3f06b4ae26..0000000000 --- a/neutron/templates/etc/rootwrap.d/_dhcp.filters.tpl +++ /dev/null @@ -1,34 +0,0 @@ -# neutron-rootwrap command filters for nodes on which neutron is -# expected to control network -# -# This file should be owned by (and only-writeable by) the root user - -# format seems to be -# cmd-name: filter-name, raw-command, user, args - -[Filters] - -# dhcp-agent -dnsmasq: CommandFilter, dnsmasq, root -# dhcp-agent uses kill as well, that's handled by the generic KillFilter -# it looks like these are the only signals needed, per -# neutron/agent/linux/dhcp.py -kill_dnsmasq: KillFilter, root, /sbin/dnsmasq, -9, -HUP, -15 -kill_dnsmasq_usr: KillFilter, root, /usr/sbin/dnsmasq, -9, -HUP, -15 - -ovs-vsctl: CommandFilter, ovs-vsctl, root -ivs-ctl: CommandFilter, ivs-ctl, root -mm-ctl: CommandFilter, mm-ctl, root -dhcp_release: CommandFilter, dhcp_release, root -dhcp_release6: CommandFilter, dhcp_release6, root - -# metadata proxy -metadata_proxy: CommandFilter, neutron-ns-metadata-proxy, root -# RHEL invocation of the metadata proxy will report /usr/bin/python -kill_metadata: KillFilter, root, python, -9 -kill_metadata7: KillFilter, root, python2.7, -9 - -# ip_lib -ip: IpFilter, ip, root -find: RegExpFilter, find, root, find, /sys/class/net, -maxdepth, 1, -type, l, -printf, %.* -ip_exec: IpNetnsExecFilter, ip, root diff --git a/neutron/templates/etc/rootwrap.d/_dibbler.filters.tpl b/neutron/templates/etc/rootwrap.d/_dibbler.filters.tpl deleted file mode 100644 index eea55252f3..0000000000 --- a/neutron/templates/etc/rootwrap.d/_dibbler.filters.tpl +++ /dev/null @@ -1,16 +0,0 @@ -# neutron-rootwrap command filters for nodes on which neutron is -# expected to control network -# -# This file should be owned by (and only-writeable by) the root user - -# format seems to be -# cmd-name: filter-name, raw-command, user, args - -[Filters] - -# Filters for the dibbler-based reference implementation of the pluggable -# Prefix Delegation driver. Other implementations using an alternative agent -# should include a similar filter in this folder. 
- -# prefix_delegation_agent -dibbler-client: CommandFilter, dibbler-client, root diff --git a/neutron/templates/etc/rootwrap.d/_ebtables.filters.tpl b/neutron/templates/etc/rootwrap.d/_ebtables.filters.tpl deleted file mode 100644 index 8e810e7b55..0000000000 --- a/neutron/templates/etc/rootwrap.d/_ebtables.filters.tpl +++ /dev/null @@ -1,11 +0,0 @@ -# neutron-rootwrap command filters for nodes on which neutron is -# expected to control network -# -# This file should be owned by (and only-writeable by) the root user - -# format seems to be -# cmd-name: filter-name, raw-command, user, args - -[Filters] - -ebtables: CommandFilter, ebtables, root diff --git a/neutron/templates/etc/rootwrap.d/_ipset-firewall.filters.tpl b/neutron/templates/etc/rootwrap.d/_ipset-firewall.filters.tpl deleted file mode 100644 index 52c66373b2..0000000000 --- a/neutron/templates/etc/rootwrap.d/_ipset-firewall.filters.tpl +++ /dev/null @@ -1,12 +0,0 @@ -# neutron-rootwrap command filters for nodes on which neutron is -# expected to control network -# -# This file should be owned by (and only-writeable by) the root user - -# format seems to be -# cmd-name: filter-name, raw-command, user, args - -[Filters] -# neutron/agent/linux/iptables_firewall.py -# "ipset", "-A", ... -ipset: CommandFilter, ipset, root diff --git a/neutron/templates/etc/rootwrap.d/_iptables-firewall.filters.tpl b/neutron/templates/etc/rootwrap.d/_iptables-firewall.filters.tpl deleted file mode 100644 index 0a81f9ddb4..0000000000 --- a/neutron/templates/etc/rootwrap.d/_iptables-firewall.filters.tpl +++ /dev/null @@ -1,27 +0,0 @@ -# neutron-rootwrap command filters for nodes on which neutron is -# expected to control network -# -# This file should be owned by (and only-writeable by) the root user - -# format seems to be -# cmd-name: filter-name, raw-command, user, args - -[Filters] - -# neutron/agent/linux/iptables_firewall.py -# "iptables-save", ... -iptables-save: CommandFilter, iptables-save, root -iptables-restore: CommandFilter, iptables-restore, root -ip6tables-save: CommandFilter, ip6tables-save, root -ip6tables-restore: CommandFilter, ip6tables-restore, root - -# neutron/agent/linux/iptables_firewall.py -# "iptables", "-A", ... 
-iptables: CommandFilter, iptables, root -ip6tables: CommandFilter, ip6tables, root - -# neutron/agent/linux/iptables_firewall.py -sysctl: CommandFilter, sysctl, root - -# neutron/agent/linux/ip_conntrack.py -conntrack: CommandFilter, conntrack, root diff --git a/neutron/templates/etc/rootwrap.d/_l3.filters.tpl b/neutron/templates/etc/rootwrap.d/_l3.filters.tpl deleted file mode 100644 index 789a16f80e..0000000000 --- a/neutron/templates/etc/rootwrap.d/_l3.filters.tpl +++ /dev/null @@ -1,52 +0,0 @@ -# neutron-rootwrap command filters for nodes on which neutron is -# expected to control network -# -# This file should be owned by (and only-writeable by) the root user - -# format seems to be -# cmd-name: filter-name, raw-command, user, args - -[Filters] - -# arping -arping: CommandFilter, arping, root - -# l3_agent -sysctl: CommandFilter, sysctl, root -route: CommandFilter, route, root -radvd: CommandFilter, radvd, root - -# metadata proxy -metadata_proxy: CommandFilter, neutron-ns-metadata-proxy, root -# RHEL invocation of the metadata proxy will report /usr/bin/python -kill_metadata: KillFilter, root, python, -15, -9 -kill_metadata7: KillFilter, root, python2.7, -15, -9 -kill_radvd_usr: KillFilter, root, /usr/sbin/radvd, -15, -9, -HUP -kill_radvd: KillFilter, root, /sbin/radvd, -15, -9, -HUP - -# ip_lib -ip: IpFilter, ip, root -find: RegExpFilter, find, root, find, /sys/class/net, -maxdepth, 1, -type, l, -printf, %.* -ip_exec: IpNetnsExecFilter, ip, root - -# For ip monitor -kill_ip_monitor: KillFilter, root, ip, -9 - -# ovs_lib (if OVSInterfaceDriver is used) -ovs-vsctl: CommandFilter, ovs-vsctl, root - -# iptables_manager -iptables-save: CommandFilter, iptables-save, root -iptables-restore: CommandFilter, iptables-restore, root -ip6tables-save: CommandFilter, ip6tables-save, root -ip6tables-restore: CommandFilter, ip6tables-restore, root - -# Keepalived -keepalived: CommandFilter, keepalived, root -kill_keepalived: KillFilter, root, /usr/sbin/keepalived, -HUP, -15, -9 - -# l3 agent to delete floatingip's conntrack state -conntrack: CommandFilter, conntrack, root - -# keepalived state change monitor -keepalived_state_change: CommandFilter, neutron-keepalived-state-change, root diff --git a/neutron/templates/etc/rootwrap.d/_linuxbridge-plugin.filters.tpl b/neutron/templates/etc/rootwrap.d/_linuxbridge-plugin.filters.tpl deleted file mode 100644 index f0934357ba..0000000000 --- a/neutron/templates/etc/rootwrap.d/_linuxbridge-plugin.filters.tpl +++ /dev/null @@ -1,28 +0,0 @@ -# neutron-rootwrap command filters for nodes on which neutron is -# expected to control network -# -# This file should be owned by (and only-writeable by) the root user - -# format seems to be -# cmd-name: filter-name, raw-command, user, args - -[Filters] - -# linuxbridge-agent -# unclear whether both variants are necessary, but I'm transliterating -# from the old mechanism -brctl: CommandFilter, brctl, root -bridge: CommandFilter, bridge, root - -# ip_lib -ip: IpFilter, ip, root -find: RegExpFilter, find, root, find, /sys/class/net, -maxdepth, 1, -type, l, -printf, %.* -ip_exec: IpNetnsExecFilter, ip, root - -# tc commands needed for QoS support -tc_replace_tbf: RegExpFilter, tc, root, tc, qdisc, replace, dev, .+, root, tbf, rate, .+, latency, .+, burst, .+ -tc_add_ingress: RegExpFilter, tc, root, tc, qdisc, add, dev, .+, ingress, handle, .+ -tc_delete: RegExpFilter, tc, root, tc, qdisc, del, dev, .+, .+ -tc_show_qdisc: RegExpFilter, tc, root, tc, qdisc, show, dev, .+ -tc_show_filters: RegExpFilter, tc, root, tc, 
filter, show, dev, .+, parent, .+ -tc_add_filter: RegExpFilter, tc, root, tc, filter, add, dev, .+, parent, .+, protocol, all, prio, .+, basic, police, rate, .+, burst, .+, mtu, .+, drop diff --git a/neutron/templates/etc/rootwrap.d/_netns-cleanup.filters.tpl b/neutron/templates/etc/rootwrap.d/_netns-cleanup.filters.tpl deleted file mode 100644 index 1ee142e54c..0000000000 --- a/neutron/templates/etc/rootwrap.d/_netns-cleanup.filters.tpl +++ /dev/null @@ -1,12 +0,0 @@ -# neutron-rootwrap command filters for nodes on which neutron is -# expected to control network -# -# This file should be owned by (and only-writeable by) the root user - -# format seems to be -# cmd-name: filter-name, raw-command, user, args - -[Filters] - -# netns-cleanup -netstat: CommandFilter, netstat, root diff --git a/neutron/templates/etc/rootwrap.d/_openvswitch-plugin.filters.tpl b/neutron/templates/etc/rootwrap.d/_openvswitch-plugin.filters.tpl deleted file mode 100644 index c738733bb4..0000000000 --- a/neutron/templates/etc/rootwrap.d/_openvswitch-plugin.filters.tpl +++ /dev/null @@ -1,24 +0,0 @@ -# neutron-rootwrap command filters for nodes on which neutron is -# expected to control network -# -# This file should be owned by (and only-writeable by) the root user - -# format seems to be -# cmd-name: filter-name, raw-command, user, args - -[Filters] - -# openvswitch-agent -# unclear whether both variants are necessary, but I'm transliterating -# from the old mechanism -ovs-vsctl: CommandFilter, ovs-vsctl, root -# NOTE(yamamoto): of_interface=native doesn't use ovs-ofctl -ovs-ofctl: CommandFilter, ovs-ofctl, root -kill_ovsdb_client: KillFilter, root, /usr/bin/ovsdb-client, -9 -ovsdb-client: CommandFilter, ovsdb-client, root -xe: CommandFilter, xe, root - -# ip_lib -ip: IpFilter, ip, root -find: RegExpFilter, find, root, find, /sys/class/net, -maxdepth, 1, -type, l, -printf, %.* -ip_exec: IpNetnsExecFilter, ip, root diff --git a/neutron/templates/job-db-drop.yaml b/neutron/templates/job-db-drop.yaml index 8492c849bf..74fc91e48e 100644 --- a/neutron/templates/job-db-drop.yaml +++ b/neutron/templates/job-db-drop.yaml @@ -16,72 +16,6 @@ limitations under the License. */}} {{- if .Values.manifests.job_db_drop }} -{{- $envAll := . 
}} -{{- $dependencies := .Values.dependencies.static.db_drop }} - -{{- $randStringSuffix := randAlphaNum 5 | lower }} - -{{- $serviceAccountName := print "neutron-db-drop-" $randStringSuffix }} -{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: {{ print "neutron-db-drop-" $randStringSuffix }} - annotations: - "helm.sh/hook": pre-delete - "helm.sh/hook-delete-policy": hook-succeeded -spec: - template: - metadata: - labels: -{{ tuple $envAll "neutron" "db-drop" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }} - spec: - serviceAccountName: {{ $serviceAccountName }} - restartPolicy: OnFailure - nodeSelector: - {{ .Values.labels.job.node_selector_key }}: {{ .Values.labels.job.node_selector_value }} - initContainers: -{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }} - containers: - - name: neutron-db-drop - image: {{ .Values.images.tags.db_drop }} - imagePullPolicy: {{ .Values.images.pull_policy }} -{{ tuple $envAll $envAll.Values.pod.resources.jobs.db_drop | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} - env: - - name: ROOT_DB_CONNECTION - valueFrom: - secretKeyRef: - name: {{ .Values.secrets.oslo_db.admin }} - key: DB_CONNECTION - - name: OPENSTACK_CONFIG_FILE - value: /etc/neutron/neutron.conf - - name: OPENSTACK_CONFIG_DB_SECTION - value: database - - name: OPENSTACK_CONFIG_DB_KEY - value: connection - command: - - /tmp/db-drop.py - volumeMounts: - - name: neutron-bin - mountPath: /tmp/db-drop.py - subPath: db-drop.py - readOnly: true - - name: etcneutron - mountPath: /etc/neutron - - name: neutron-etc - mountPath: /etc/neutron/neutron.conf - subPath: neutron.conf - readOnly: true - volumes: - - name: etcneutron - emptyDir: {} - - name: neutron-etc - configMap: - name: neutron-etc - defaultMode: 0444 - - name: neutron-bin - configMap: - name: neutron-bin - defaultMode: 0555 +{{- $dbDropJob := dict "envAll" . "serviceName" "neutron" -}} +{{ $dbDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" }} {{- end }} diff --git a/neutron/templates/job-db-sync.yaml b/neutron/templates/job-db-sync.yaml index 5a7d558f83..d8bcaa441f 100644 --- a/neutron/templates/job-db-sync.yaml +++ b/neutron/templates/job-db-sync.yaml @@ -16,7 +16,7 @@ limitations under the License. {{- if .Values.manifests.job_db_sync }} -{{- if eq .Values.network.backend "opencontrail" }} +{{- if ( has "opencontrail" .Values.network.backend ) }} {{- $podVolMounts := list (dict "name" "db-sync-conf" "mountPath" "/etc/neutron/plugins/opencontrail/ContrailPlugin.ini" "subPath" "ContrailPlugin.ini" "readOnly" true )}} {{- $dbSyncJob := dict "envAll" . "serviceName" "neutron" "podVolMounts" $podVolMounts -}} {{ $dbSyncJob | include "helm-toolkit.manifests.job_db_sync" }} diff --git a/neutron/templates/service-ingress-neutron.yaml b/neutron/templates/service-ingress-neutron.yaml index 9ba0b0bd8d..ab472e8c3a 100644 --- a/neutron/templates/service-ingress-neutron.yaml +++ b/neutron/templates/service-ingress-neutron.yaml @@ -14,18 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.service_ingress_server }} -{{- $envAll := . }} -{{- if .Values.network.server.ingress.public }} ---- -apiVersion: v1 -kind: Service -metadata: - name: {{ tuple "network" "public" . 
| include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - port: 80 - selector: - app: ingress-api -{{- end }} +{{- if and .Values.manifests.service_ingress_server .Values.network.server.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . "backendServiceType" "network" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/neutron/values.yaml b/neutron/values.yaml index 8a145bd5c7..6522afaefd 100644 --- a/neutron/values.yaml +++ b/neutron/values.yaml @@ -36,7 +36,9 @@ images: neutron_l3: docker.io/openstackhelm/neutron:newton neutron_openvswitch_agent: docker.io/openstackhelm/neutron:newton neutron_linuxbridge_agent: docker.io/openstackhelm/neutron:newton - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + neutron_sriov_agent: docker.io/openstackhelm/neutron:newton-sriov-1804 + neutron_sriov_agent_init: docker.io/openstackhelm/neutron:newton-sriov-1804 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 opencontrail_neutron_init: pull_policy: "IfNotPresent" @@ -57,13 +59,16 @@ labels: lb: node_selector_key: linuxbridge node_selector_value: enabled - # ovs is a special case, requiring a special + # openvswitch is a special case, requiring a special # label that can apply to both control hosts # and compute hosts, until we get more sophisticated # with our daemonset scheduling ovs: node_selector_key: openvswitch node_selector_value: enabled + sriov: + node_selector_key: sriov + node_selector_value: enabled server: node_selector_key: openstack-control-plane node_selector_value: enabled @@ -73,8 +78,9 @@ labels: network: # provide what type of network wiring will be used - # possible options: ovs, linuxbridge - backend: ovs + # possible options: openvswitch, linuxbridge, sriov + backend: + - openvswitch external_bridge: br-ex ip_address: 0.0.0.0 interface: @@ -95,11 +101,19 @@ network: # br-physnet1: eth3 # br0: if0 # br1: iface_two + sriov: + # To perform setup of network interfaces using the SR-IOV init + # container you can use a section similar to: + # sriov: + # - device: ${DEV} + # num_vfs: 8 server: ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / external_policy_local: false node_port: @@ -115,26 +129,56 @@ bootstrap: dependencies: dynamic: targeted: - ovs: + openvswitch: dhcp: - daemonset: - - neutron-ovs-agent + pod: + - labels: + application: neutron + component: neutron-ovs-agent l3: - daemonset: - - neutron-ovs-agent + pod: + - labels: + application: neutron + component: neutron-ovs-agent metadata: - daemonset: - - neutron-ovs-agent + pod: + - labels: + application: neutron + component: neutron-ovs-agent linuxbridge: dhcp: - daemonset: - - neutron-lb-agent + pod: + - labels: + application: neutron + component: neutron-lb-agent l3: - daemonset: - - neutron-lb-agent + pod: + - labels: + application: neutron + component: neutron-lb-agent metadata: - daemonset: - - neutron-lb-agent + pod: + - labels: + application: neutron + component: neutron-lb-agent + lb_agent: + pod: null + sriov: + dhcp: + pod: + - labels: + application: neutron + component: neutron-sriov-agent + l3: + pod: + - labels: + application: neutron + component: neutron-sriov-agent + metadata: + pod: + - labels: + application: neutron + component: neutron-sriov-agent static: bootstrap: services: @@ -157,7 +201,7 @@ dependencies: - endpoint: internal service: oslo_db dhcp: - 
daemonset: null + pod: null jobs: - neutron-rabbit-init services: @@ -183,10 +227,10 @@ dependencies: service: identity rabbit_init: services: - - service: oslo_messaging - endpoint: internal + - service: oslo_messaging + endpoint: internal l3: - daemonset: null + pod: null jobs: - neutron-rabbit-init services: @@ -197,6 +241,7 @@ dependencies: - endpoint: internal service: compute lb_agent: + pod: null jobs: - neutron-rabbit-init services: @@ -205,7 +250,7 @@ dependencies: - endpoint: internal service: network metadata: - daemonset: null + pod: null jobs: - neutron-rabbit-init services: @@ -220,9 +265,13 @@ dependencies: ovs_agent: jobs: - neutron-rabbit-init - daemonset: - - openvswitch-vswitchd - - openvswitch-db + pod: + - labels: + application: openvswitch + component: openvswitch-vswitchd + - labels: + application: openvswitch + component: openvswitch-vswitchd-db services: - endpoint: internal service: oslo_messaging @@ -279,6 +328,9 @@ pod: neutron_ovs_agent: init_container: null neutron_ovs_agent: + neutron_sriov_agent: + init_container: null + neutron_sriov_agent: neutron_tests: init_container: null neutron_tests: @@ -317,6 +369,10 @@ pod: enabled: true min_ready_seconds: 0 max_unavailable: 1 + sriov_agent: + enabled: true + min_ready_seconds: 0 + max_unavailable: 1 disruption_budget: server: min_available: 0 @@ -361,6 +417,13 @@ pod: limits: memory: "1024Mi" cpu: "2000m" + sriov: + requests: + memory: "128Mi" + cpu: "100m" + limits: + memory: "1024Mi" + cpu: "2000m" server: requests: memory: "128Mi" @@ -912,50 +975,379 @@ conf: get_subports: '' add_subports: rule:admin_or_owner remove_subports: rule:admin_or_owner - neutron_sudoers: - override: - append: - rootwrap: - override: - append: + neutron_sudoers: | + # This sudoers file supports rootwrap for both Kolla and LOCI Images. + Defaults !requiretty + Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/var/lib/openstack/bin:/var/lib/kolla/venv/bin" + neutron ALL = (root) NOPASSWD: /var/lib/kolla/venv/bin/neutron-rootwrap /etc/neutron/rootwrap.conf *, /var/lib/openstack/bin/neutron-rootwrap /etc/neutron/rootwrap.conf * + rootwrap: | + # Configuration for neutron-rootwrap + # This file should be owned by (and only-writeable by) the root user + + [DEFAULT] + # List of directories to load filter definitions from (separated by ','). + # These directories MUST all be only writeable by root ! + filters_path=/etc/neutron/rootwrap.d,/usr/share/neutron/rootwrap + + # List of directories to search executables in, in case filters do not + # explicitely specify a full path (separated by ',') + # If not specified, defaults to system PATH environment variable. + # These directories MUST all be only writeable by root ! + exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin,/usr/local/sbin,/var/lib/openstack/bin,/var/lib/kolla/venv/bin + + # Enable logging to syslog + # Default value is False + use_syslog=False + + # Which syslog facility to use. + # Valid values include auth, authpriv, syslog, local0, local1... + # Default value is 'syslog' + syslog_log_facility=syslog + + # Which messages to log. + # INFO means log all usage + # ERROR means only log unsuccessful attempts + syslog_log_level=ERROR + + [xenapi] + # XenAPI configuration is only required by the L2 agent if it is to + # target a XenServer/XCP compute host's dom0. 
+ xenapi_connection_url= + xenapi_connection_username=root + xenapi_connection_password= rootwrap_filters: debug: - override: - append: + pods: + - dhcp_agent + - l3_agent + - lb_agent + - metadata_agent + - ovs_agent + - sriov_agent + content: | + # neutron-rootwrap command filters for nodes on which neutron is + # expected to control network + # + # This file should be owned by (and only-writeable by) the root user + + # format seems to be + # cmd-name: filter-name, raw-command, user, args + + [Filters] + + # This is needed because we should ping + # from inside a namespace which requires root + # _alt variants allow to match -c and -w in any order + # (used by NeutronDebugAgent.ping_all) + ping: RegExpFilter, ping, root, ping, -w, \d+, -c, \d+, [0-9\.]+ + ping_alt: RegExpFilter, ping, root, ping, -c, \d+, -w, \d+, [0-9\.]+ + ping6: RegExpFilter, ping6, root, ping6, -w, \d+, -c, \d+, [0-9A-Fa-f:]+ + ping6_alt: RegExpFilter, ping6, root, ping6, -c, \d+, -w, \d+, [0-9A-Fa-f:]+ dibbler: - override: - append: + pods: + - dhcp_agent + - l3_agent + - lb_agent + - metadata_agent + - ovs_agent + - sriov_agent + content: | + # neutron-rootwrap command filters for nodes on which neutron is + # expected to control network + # + # This file should be owned by (and only-writeable by) the root user + + # format seems to be + # cmd-name: filter-name, raw-command, user, args + + [Filters] + + # Filters for the dibbler-based reference implementation of the pluggable + # Prefix Delegation driver. Other implementations using an alternative agent + # should include a similar filter in this folder. + + # prefix_delegation_agent + dibbler-client: CommandFilter, dibbler-client, root ipset_firewall: - override: - append: + pods: + - dhcp_agent + - l3_agent + - lb_agent + - metadata_agent + - ovs_agent + - sriov_agent + content: | + # neutron-rootwrap command filters for nodes on which neutron is + # expected to control network + # + # This file should be owned by (and only-writeable by) the root user + + # format seems to be + # cmd-name: filter-name, raw-command, user, args + + [Filters] + # neutron/agent/linux/iptables_firewall.py + # "ipset", "-A", ... 
+ ipset: CommandFilter, ipset, root l3: - override: - append: + pods: + - dhcp_agent + - l3_agent + - lb_agent + - metadata_agent + - ovs_agent + - sriov_agent + content: | + # neutron-rootwrap command filters for nodes on which neutron is + # expected to control network + # + # This file should be owned by (and only-writeable by) the root user + + # format seems to be + # cmd-name: filter-name, raw-command, user, args + + [Filters] + + # arping + arping: CommandFilter, arping, root + + # l3_agent + sysctl: CommandFilter, sysctl, root + route: CommandFilter, route, root + radvd: CommandFilter, radvd, root + + # metadata proxy + metadata_proxy: CommandFilter, neutron-ns-metadata-proxy, root + # RHEL invocation of the metadata proxy will report /usr/bin/python + kill_metadata: KillFilter, root, python, -15, -9 + kill_metadata7: KillFilter, root, python2.7, -15, -9 + kill_radvd_usr: KillFilter, root, /usr/sbin/radvd, -15, -9, -HUP + kill_radvd: KillFilter, root, /sbin/radvd, -15, -9, -HUP + + # ip_lib + ip: IpFilter, ip, root + find: RegExpFilter, find, root, find, /sys/class/net, -maxdepth, 1, -type, l, -printf, %.* + ip_exec: IpNetnsExecFilter, ip, root + + # For ip monitor + kill_ip_monitor: KillFilter, root, ip, -9 + + # ovs_lib (if OVSInterfaceDriver is used) + ovs-vsctl: CommandFilter, ovs-vsctl, root + + # iptables_manager + iptables-save: CommandFilter, iptables-save, root + iptables-restore: CommandFilter, iptables-restore, root + ip6tables-save: CommandFilter, ip6tables-save, root + ip6tables-restore: CommandFilter, ip6tables-restore, root + + # Keepalived + keepalived: CommandFilter, keepalived, root + kill_keepalived: KillFilter, root, /usr/sbin/keepalived, -HUP, -15, -9 + + # l3 agent to delete floatingip's conntrack state + conntrack: CommandFilter, conntrack, root + + # keepalived state change monitor + keepalived_state_change: CommandFilter, neutron-keepalived-state-change, root netns_cleanup: - override: - append: + pods: + - dhcp_agent + - l3_agent + - lb_agent + - metadata_agent + - ovs_agent + - sriov_agent + content: | + # neutron-rootwrap command filters for nodes on which neutron is + # expected to control network + # + # This file should be owned by (and only-writeable by) the root user + + # format seems to be + # cmd-name: filter-name, raw-command, user, args + + [Filters] + + # netns-cleanup + netstat: CommandFilter, netstat, root dhcp: - override: - append: + pods: + - dhcp_agent + - l3_agent + - lb_agent + - metadata_agent + - ovs_agent + - sriov_agent + content: | + # neutron-rootwrap command filters for nodes on which neutron is + # expected to control network + # + # This file should be owned by (and only-writeable by) the root user + + # format seems to be + # cmd-name: filter-name, raw-command, user, args + + [Filters] + + # dhcp-agent + dnsmasq: CommandFilter, dnsmasq, root + # dhcp-agent uses kill as well, that's handled by the generic KillFilter + # it looks like these are the only signals needed, per + # neutron/agent/linux/dhcp.py + kill_dnsmasq: KillFilter, root, /sbin/dnsmasq, -9, -HUP, -15 + kill_dnsmasq_usr: KillFilter, root, /usr/sbin/dnsmasq, -9, -HUP, -15 + + ovs-vsctl: CommandFilter, ovs-vsctl, root + ivs-ctl: CommandFilter, ivs-ctl, root + mm-ctl: CommandFilter, mm-ctl, root + dhcp_release: CommandFilter, dhcp_release, root + dhcp_release6: CommandFilter, dhcp_release6, root + + # metadata proxy + metadata_proxy: CommandFilter, neutron-ns-metadata-proxy, root + # RHEL invocation of the metadata proxy will report /usr/bin/python + kill_metadata: 
KillFilter, root, python, -9 + kill_metadata7: KillFilter, root, python2.7, -9 + + # ip_lib + ip: IpFilter, ip, root + find: RegExpFilter, find, root, find, /sys/class/net, -maxdepth, 1, -type, l, -printf, %.* + ip_exec: IpNetnsExecFilter, ip, root ebtables: - override: - append: + pods: + - dhcp_agent + - l3_agent + - lb_agent + - metadata_agent + - ovs_agent + - sriov_agent + content: | + # neutron-rootwrap command filters for nodes on which neutron is + # expected to control network + # + # This file should be owned by (and only-writeable by) the root user + + # format seems to be + # cmd-name: filter-name, raw-command, user, args + + [Filters] + + ebtables: CommandFilter, ebtables, root iptables_firewall: - override: - append: + pods: + - dhcp_agent + - l3_agent + - lb_agent + - metadata_agent + - ovs_agent + - sriov_agent + content: | + # neutron-rootwrap command filters for nodes on which neutron is + # expected to control network + # + # This file should be owned by (and only-writeable by) the root user + + # format seems to be + # cmd-name: filter-name, raw-command, user, args + + [Filters] + + # neutron/agent/linux/iptables_firewall.py + # "iptables-save", ... + iptables-save: CommandFilter, iptables-save, root + iptables-restore: CommandFilter, iptables-restore, root + ip6tables-save: CommandFilter, ip6tables-save, root + ip6tables-restore: CommandFilter, ip6tables-restore, root + + # neutron/agent/linux/iptables_firewall.py + # "iptables", "-A", ... + iptables: CommandFilter, iptables, root + ip6tables: CommandFilter, ip6tables, root + + # neutron/agent/linux/iptables_firewall.py + sysctl: CommandFilter, sysctl, root + + # neutron/agent/linux/ip_conntrack.py + conntrack: CommandFilter, conntrack, root linuxbridge_plugin: - override: - append: + pods: + - dhcp_agent + - l3_agent + - lb_agent + - metadata_agent + - ovs_agent + - sriov_agent + content: | + # neutron-rootwrap command filters for nodes on which neutron is + # expected to control network + # + # This file should be owned by (and only-writeable by) the root user + + # format seems to be + # cmd-name: filter-name, raw-command, user, args + + [Filters] + + # linuxbridge-agent + # unclear whether both variants are necessary, but I'm transliterating + # from the old mechanism + brctl: CommandFilter, brctl, root + bridge: CommandFilter, bridge, root + + # ip_lib + ip: IpFilter, ip, root + find: RegExpFilter, find, root, find, /sys/class/net, -maxdepth, 1, -type, l, -printf, %.* + ip_exec: IpNetnsExecFilter, ip, root + + # tc commands needed for QoS support + tc_replace_tbf: RegExpFilter, tc, root, tc, qdisc, replace, dev, .+, root, tbf, rate, .+, latency, .+, burst, .+ + tc_add_ingress: RegExpFilter, tc, root, tc, qdisc, add, dev, .+, ingress, handle, .+ + tc_delete: RegExpFilter, tc, root, tc, qdisc, del, dev, .+, .+ + tc_show_qdisc: RegExpFilter, tc, root, tc, qdisc, show, dev, .+ + tc_show_filters: RegExpFilter, tc, root, tc, filter, show, dev, .+, parent, .+ + tc_add_filter: RegExpFilter, tc, root, tc, filter, add, dev, .+, parent, .+, protocol, all, prio, .+, basic, police, rate, .+, burst, .+, mtu, .+, drop openvswitch_plugin: - override: - append: + pods: + - dhcp_agent + - l3_agent + - lb_agent + - metadata_agent + - ovs_agent + - sriov_agent + content: | + # neutron-rootwrap command filters for nodes on which neutron is + # expected to control network + # + # This file should be owned by (and only-writeable by) the root user + + # format seems to be + # cmd-name: filter-name, raw-command, user, args + + [Filters] + 
+ # openvswitch-agent + # unclear whether both variants are necessary, but I'm transliterating + # from the old mechanism + ovs-vsctl: CommandFilter, ovs-vsctl, root + # NOTE(yamamoto): of_interface=native doesn't use ovs-ofctl + ovs-ofctl: CommandFilter, ovs-ofctl, root + kill_ovsdb_client: KillFilter, root, /usr/bin/ovsdb-client, -9 + ovsdb-client: CommandFilter, ovsdb-client, root + xe: CommandFilter, xe, root + + # ip_lib + ip: IpFilter, ip, root + find: RegExpFilter, find, root, find, /sys/class/net, -maxdepth, 1, -type, l, -printf, %.* + ip_exec: IpNetnsExecFilter, ip, root neutron: DEFAULT: #NOTE(portdirect): the bind port should not be defined, and is manipulated # via the endpoints section. bind_port: null default_availability_zones: nova - api_workers: 4 + api_workers: 1 + rpc_workers: 1 allow_overlapping_ips: True # core_plugin can be: ml2, calico core_plugin: ml2 @@ -1043,7 +1435,12 @@ conf: l2_population: True arp_responder: True macvtap_agent: null - sriov_agent: null + sriov_agent: + securitygroup: + firewall_driver: neutron.agent.firewall.NoopFirewallDriver + sriov_nic: + physical_device_mappings: physnet2:enp3s0f1 + exclude_devices: null dhcp_agent: DEFAULT: #(NOTE)portdirect: if unset this is populated dyanmicly from the value in @@ -1081,7 +1478,7 @@ secrets: admin: neutron-rabbitmq-admin neutron: neutron-rabbitmq-user -# typically overriden by environmental +# typically overridden by environmental # values, but should include all endpoints # required by this chart endpoints: @@ -1145,7 +1542,7 @@ endpoints: host_fqdn_override: default: null path: - default: "/v2/%(tenant_id)s" + default: "/v2.1/%(tenant_id)s" scheme: default: 'http' port: @@ -1240,6 +1637,7 @@ manifests: daemonset_lb_agent: true daemonset_metadata_agent: true daemonset_ovs_agent: true + daemonset_sriov_agent: true deployment_server: true ingress_server: true job_bootstrap: true diff --git a/nova/templates/bin/_nova-api-metadata-init.sh.tpl b/nova/templates/bin/_nova-api-metadata-init.sh.tpl index 5610b87983..bcb509e5ee 100644 --- a/nova/templates/bin/_nova-api-metadata-init.sh.tpl +++ b/nova/templates/bin/_nova-api-metadata-init.sh.tpl @@ -18,7 +18,7 @@ limitations under the License. set -ex -metadata_ip="{{- .Values.network.metadata.ip -}}" +metadata_ip="{{- .Values.endpoints.compute_metadata.ip.ingress -}}" if [ -z "${metadata_ip}" ] ; then metadata_ip=$(getent hosts metadata | awk '{print $1}') fi @@ -27,4 +27,3 @@ cat </tmp/pod-shared/nova-api-metadata.ini [DEFAULT] metadata_host=$metadata_ip EOF - diff --git a/nova/templates/configmap-etc.yaml b/nova/templates/configmap-etc.yaml index e7663f066e..0d1d9f958d 100644 --- a/nova/templates/configmap-etc.yaml +++ b/nova/templates/configmap-etc.yaml @@ -103,10 +103,8 @@ limitations under the License. {{- tuple "oslo_cache" "internal" "memcache" . 
| include "helm-toolkit.endpoints.host_and_port_endpoint_uri_lookup" | set .Values.conf.nova.cache "memcache_servers" | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.nova.DEFAULT.metadata_host -}} -{{- if .Values.network.metadata.ip -}} -{{- set .Values.conf.nova.DEFAULT "metadata_host" .Values.network.metadata.ip | quote | trunc 0 -}} -{{- end -}} +{{- if and (empty .Values.conf.nova.DEFAULT.metadata_host) .Values.endpoints.compute_metadata.ip.ingress -}} +{{- set .Values.conf.nova.DEFAULT "metadata_host" .Values.endpoints.compute_metadata.ip.ingress | quote | trunc 0 -}} {{- end -}} {{- if empty .Values.conf.nova.DEFAULT.metadata_port -}} @@ -215,7 +213,7 @@ data: policy.yaml: | {{ toYaml .Values.conf.policy | indent 4 }} nova_sudoers: | -{{- tuple .Values.conf.neutron_sudoers "etc/_nova_sudoers.tpl" . | include "helm-toolkit.utils.configmap_templater" }} +{{- tuple .Values.conf.nova_sudoers "etc/_nova_sudoers.tpl" . | include "helm-toolkit.utils.configmap_templater" }} rootwrap.conf: | {{- tuple .Values.conf.rootwrap "etc/_rootwrap.conf.tpl" . | include "helm-toolkit.utils.configmap_templater" }} api-metadata.filters: | diff --git a/nova/templates/daemonset-compute.yaml b/nova/templates/daemonset-compute.yaml index 5e337c5796..e815d48d66 100644 --- a/nova/templates/daemonset-compute.yaml +++ b/nova/templates/daemonset-compute.yaml @@ -126,7 +126,7 @@ spec: - name: pod-shared mountPath: /tmp/pod-shared {{ end }} - {{- if eq .Values.network.backend "opencontrail" }} + {{- if ( has "opencontrail" .Values.network.backend ) }} - name: opencontrail-compute-init image: {{ .Values.images.tags.opencontrail_compute_init }} imagePullPolicy: {{ .Values.images.pull_policy }} @@ -234,7 +234,7 @@ spec: - name: machine-id mountPath: /etc/machine-id readOnly: true - {{- if eq .Values.network.backend "opencontrail" }} + {{- if ( has "opencontrail" .Values.network.backend ) }} - name: opencontrail-plugin-shared mountPath: /opt/plugin readOnly: true @@ -342,7 +342,7 @@ spec: - name: machine-id hostPath: path: /etc/machine-id - {{- if eq .Values.network.backend "opencontrail" }} + {{- if ( has "opencontrail" .Values.network.backend ) }} - name: opencontrail-plugin-shared emptyDir: {} # TO-DO: Fix vif-plug-vrouter driver to detect executable dirs path @@ -359,9 +359,11 @@ spec: {{- $daemonset := "compute" }} {{- $configMapName := "nova-etc" }} {{- $serviceAccountName := "nova-compute" }} + {{- $dependencyOpts := dict "envAll" $envAll "dependencyMixinParam" $envAll.Values.network.backend "dependencyKey" "compute" -}} {{- $dependencies := include "helm-toolkit.utils.dependency_resolver" $dependencyOpts | toString | fromYaml }} -{{ tuple . $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} + +{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} {{- $daemonset_yaml := list $daemonset $configMapName $serviceAccountName $dependencies . | include "nova.compute.daemonset" | toString | fromYaml }} {{- $configmap_yaml := "nova.configmap.etc" }} {{- list $daemonset $daemonset_yaml $configmap_yaml $configMapName . 
| include "helm-toolkit.utils.daemonset_overrides" }} diff --git a/nova/templates/etc/_rootwrap.conf.tpl b/nova/templates/etc/_rootwrap.conf.tpl index 5b70b52798..aebcb20db3 100644 --- a/nova/templates/etc/_rootwrap.conf.tpl +++ b/nova/templates/etc/_rootwrap.conf.tpl @@ -10,7 +10,7 @@ filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap # explicitely specify a full path (separated by ',') # If not specified, defaults to system PATH environment variable. # These directories MUST all be only writeable by root ! -{{- if eq .Values.network.backend "opencontrail" }} +{{- if ( has "opencontrail" .Values.network.backend ) }} exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin,/usr/local/sbin,/var/lib/openstack/bin,/var/lib/kolla/venv/bin,/opt/plugin/bin/ {{- else }} exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin,/usr/local/sbin,/var/lib/openstack/bin,/var/lib/kolla/venv/bin diff --git a/nova/templates/ingress-novncproxy.yaml b/nova/templates/ingress-novncproxy.yaml new file mode 100644 index 0000000000..c5a00ec57f --- /dev/null +++ b/nova/templates/ingress-novncproxy.yaml @@ -0,0 +1,20 @@ +{{/* +Copyright 2017 The Openstack-Helm Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} + +{{- if and .Values.manifests.ingress_novncproxy .Values.network.novncproxy.ingress.public }} +{{- $ingressOpts := dict "envAll" . "backendService" "novncproxy" "backendServiceType" "compute_novnc_proxy" "backendPort" "n-novnc" -}} +{{ $ingressOpts | include "helm-toolkit.manifests.ingress" }} +{{- end }} diff --git a/nova/templates/job-db-drop.yaml b/nova/templates/job-db-drop.yaml index b7be5c2d59..48df2d40fd 100644 --- a/nova/templates/job-db-drop.yaml +++ b/nova/templates/job-db-drop.yaml @@ -15,101 +15,11 @@ limitations under the License. */}} {{- if .Values.manifests.job_db_drop }} -{{- $envAll := . 
}} -{{- $dependencies := .Values.dependencies.static.db_drop }} - -{{- $randStringSuffix := randAlphaNum 5 | lower }} - -{{- $serviceAccountName := print "nova-db-drop-" $randStringSuffix }} -{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: {{ print "nova-db-drop-" $randStringSuffix }} - annotations: - "helm.sh/hook": pre-delete - "helm.sh/hook-delete-policy": hook-succeeded -spec: - template: - metadata: - labels: -{{ tuple $envAll "nova" "db-drop" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }} - spec: - serviceAccountName: {{ $serviceAccountName }} - restartPolicy: OnFailure - nodeSelector: - {{ .Values.labels.job.node_selector_key }}: {{ .Values.labels.job.node_selector_value }} - initContainers: -{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }} - containers: - - name: nova-db-drop - image: {{ .Values.images.tags.db_drop }} - imagePullPolicy: {{ .Values.images.pull_policy }} -{{ tuple $envAll $envAll.Values.pod.resources.jobs.db_drop | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} - env: - - name: ROOT_DB_CONNECTION - valueFrom: - secretKeyRef: - name: {{ .Values.secrets.oslo_db.admin }} - key: DB_CONNECTION - - name: OPENSTACK_CONFIG_FILE - value: /etc/nova/nova.conf - - name: OPENSTACK_CONFIG_DB_SECTION - value: database - - name: OPENSTACK_CONFIG_DB_KEY - value: connection - command: - - /tmp/db-drop.py - volumeMounts: - - name: nova-bin - mountPath: /tmp/db-drop.py - subPath: db-drop.py - readOnly: true - - name: etcnova - mountPath: /etc/nova - - name: nova-etc - mountPath: /etc/nova/nova.conf - subPath: nova.conf - readOnly: true - - name: nova-db-drop-api - image: {{ .Values.images.tags.db_drop }} - imagePullPolicy: {{ .Values.images.pull_policy }} -{{ tuple $envAll $envAll.Values.pod.resources.jobs.db_drop | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} - env: - - name: ROOT_DB_CONNECTION - valueFrom: - secretKeyRef: - name: {{ .Values.secrets.oslo_db.admin }} - key: DB_CONNECTION - - name: OPENSTACK_CONFIG_FILE - value: /etc/nova/nova.conf - - name: OPENSTACK_CONFIG_DB_SECTION - value: api_database - - name: OPENSTACK_CONFIG_DB_KEY - value: connection - command: - - /tmp/db-drop.py - volumeMounts: - - name: nova-bin - mountPath: /tmp/db-drop.py - subPath: db-drop.py - readOnly: true - - name: etcnova - mountPath: /etc/nova - - name: nova-etc - mountPath: /etc/nova/nova.conf - subPath: nova.conf - readOnly: true - volumes: - - name: etcnova - emptyDir: {} - - name: nova-etc - configMap: - name: nova-etc - defaultMode: 0444 - - name: nova-bin - configMap: - name: nova-bin - defaultMode: 0555 +{{- $serviceName := "nova" -}} +{{- $dbSvc := dict "adminSecret" .Values.secrets.oslo_db.admin "configFile" (printf "/etc/%s/%s.conf" $serviceName $serviceName ) "configDbSection" "database" "configDbKey" "connection" -}} +{{- $dbApi := dict "adminSecret" .Values.secrets.oslo_db.admin "configFile" (printf "/etc/%s/%s.conf" $serviceName $serviceName ) "configDbSection" "api_database" "configDbKey" "connection" -}} +{{- $dbCell := dict "adminSecret" .Values.secrets.oslo_db.admin "configFile" (printf "/etc/%s/%s.conf" $serviceName $serviceName ) "configDbSection" "cell0_database" "configDbKey" "connection" -}} +{{- $dbsToDrop := list $dbSvc $dbApi $dbCell }} +{{- $dbDropJob := dict "envAll" . 
"serviceName" $serviceName "dbsToDrop" $dbsToDrop -}} +{{ $dbDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" }} {{- end }} diff --git a/nova/templates/service-ingress-metadata.yaml b/nova/templates/service-ingress-metadata.yaml index fecc0adfca..ee4ac7ae1c 100644 --- a/nova/templates/service-ingress-metadata.yaml +++ b/nova/templates/service-ingress-metadata.yaml @@ -14,22 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.service_ingress_metadata }} -{{- $envAll := . }} -{{- if .Values.network.metadata.ingress.public }} ---- -apiVersion: v1 -kind: Service -metadata: - name: {{ tuple "compute_metadata" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - name: http - port: 80 -{{- if .Values.network.metadata.ip }} - clusterIP: {{ .Values.network.metadata.ip }} -{{- end }} - selector: - app: ingress-api -{{- end }} +{{- if and .Values.manifests.service_ingress_metadata .Values.network.metadata.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . "backendServiceType" "compute_metadata" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/nova/templates/service-ingress-novncproxy.yaml b/nova/templates/service-ingress-novncproxy.yaml new file mode 100644 index 0000000000..fce765af4f --- /dev/null +++ b/nova/templates/service-ingress-novncproxy.yaml @@ -0,0 +1,20 @@ +{{/* +Copyright 2017 The Openstack-Helm Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} + +{{- if and .Values.manifests.service_ingress_novncproxy .Values.network.novncproxy.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . "backendServiceType" "compute_novnc_proxy" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} +{{- end }} diff --git a/nova/templates/service-ingress-osapi.yaml b/nova/templates/service-ingress-osapi.yaml index aff81afda3..98bff28c1c 100644 --- a/nova/templates/service-ingress-osapi.yaml +++ b/nova/templates/service-ingress-osapi.yaml @@ -14,19 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.service_ingress_osapi }} -{{- $envAll := . }} -{{- if .Values.network.osapi.ingress.public }} ---- -apiVersion: v1 -kind: Service -metadata: - name: {{ tuple "compute" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - name: http - port: 80 - selector: - app: ingress-api -{{- end }} +{{- if and .Values.manifests.service_ingress_osapi .Values.network.osapi.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . 
"backendServiceType" "compute" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/nova/templates/service-ingress-placement.yaml b/nova/templates/service-ingress-placement.yaml index ab5a269c1f..91f559b2a7 100644 --- a/nova/templates/service-ingress-placement.yaml +++ b/nova/templates/service-ingress-placement.yaml @@ -14,19 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- if .Values.manifests.service_ingress_placement }} -{{- $envAll := . }} -{{- if .Values.network.placement.ingress.public }} ---- -apiVersion: v1 -kind: Service -metadata: - name: {{ tuple "placement" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - name: http - port: 80 - selector: - app: ingress-api -{{- end }} +{{- if and .Values.manifests.service_ingress_placement .Values.network.placement.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . "backendServiceType" "placement" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/nova/values.yaml b/nova/values.yaml index 069ba2cd6e..f42df75880 100644 --- a/nova/values.yaml +++ b/nova/values.yaml @@ -67,7 +67,7 @@ images: bootstrap: docker.io/openstackhelm/heat:newton db_drop: docker.io/openstackhelm/heat:newton db_init: docker.io/openstackhelm/heat:newton - dep_check: 'quay.io/stackanetes/kubernetes-entrypoint:v0.2.1' + dep_check: 'quay.io/stackanetes/kubernetes-entrypoint:v0.3.0' rabbit_init: docker.io/rabbitmq:3.7-management ks_user: docker.io/openstackhelm/heat:newton ks_service: docker.io/openstackhelm/heat:newton @@ -98,57 +98,62 @@ bootstrap: enabled: true options: m1_tiny: - name: "m1.tiny" - id: "auto" - ram: 512 - disk: 1 - vcpus: 1 + name: "m1.tiny" + id: "auto" + ram: 512 + disk: 1 + vcpus: 1 m1_small: - name: "m1.small" - id: "auto" - ram: 2048 - disk: 20 - vcpus: 1 + name: "m1.small" + id: "auto" + ram: 2048 + disk: 20 + vcpus: 1 m1_medium: - name: "m1.medium" - id: "auto" - ram: 4096 - disk: 40 - vcpus: 2 + name: "m1.medium" + id: "auto" + ram: 4096 + disk: 40 + vcpus: 2 m1_large: - name: "m1.large" - id: "auto" - ram: 8192 - disk: 80 - vcpus: 4 + name: "m1.large" + id: "auto" + ram: 8192 + disk: 80 + vcpus: 4 m1_xlarge: - name: "m1.xlarge" - id: "auto" - ram: 16384 - disk: 160 - vcpus: 8 + name: "m1.xlarge" + id: "auto" + ram: 16384 + disk: 160 + vcpus: 8 network: - backend: ovs + # provide what type of network wiring will be used + # possible options: openvswitch, linuxbridge, sriov + backend: + - openvswitch osapi: port: 8774 ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / external_policy_local: false node_port: enabled: false port: 30774 metadata: - # IF blank, set clusterIP and metadata_host dynamically - ip: port: 8775 ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / external_policy_local: false node_port: @@ -158,13 +163,22 @@ network: port: 8778 ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / node_port: enabled: false port: 30778 novncproxy: + ingress: + public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" + 
annotations: + nginx.ingress.kubernetes.io/rewrite-target: / node_port: enabled: false port: 30680 @@ -186,14 +200,24 @@ ceph: dependencies: dynamic: targeted: - ovs: + openvswitch: compute: - daemonset: - - neutron-ovs-agent + pod: + - labels: + application: neutron + component: neutron-ovs-agent linuxbridge: compute: - daemonset: - - neutron-lb-agent + pod: + - labels: + application: neutron + component: neutron-lb-agent + sriov: + compute: + pod: + - labels: + application: neutron + component: neutron-sriov-agent static: api: jobs: @@ -230,8 +254,10 @@ dependencies: - endpoint: internal service: compute compute: - daemonset: - - libvirt + pod: + - labels: + application: libvirt + component: libvirt jobs: - nova-db-sync - nova-rabbit-init @@ -315,8 +341,8 @@ dependencies: service: identity rabbit_init: services: - - service: oslo_messaging - endpoint: internal + - service: oslo_messaging + endpoint: internal novncproxy: jobs: - nova-db-sync @@ -969,7 +995,7 @@ conf: cpu_allocation_ratio: 3.0 state_path: /var/lib/nova osapi_compute_listen: 0.0.0.0 - #NOTE(portdirect): the bind port should not be defined, and is manipulated + # NOTE(portdirect): the bind port should not be defined, and is manipulated # via the endpoints section. osapi_compute_listen_port: null osapi_compute_workers: 1 @@ -1066,7 +1092,7 @@ secrets: admin: nova-rabbitmq-admin nova: nova-rabbitmq-user -# typically overriden by environmental +# typically overridden by environmental # values, but should include all endpoints # required by this chart endpoints: @@ -1174,7 +1200,7 @@ endpoints: project_name: service user_domain_name: default project_domain_name: default - #NOTE(portdirect): the neutron user is not managed by the nova chart + # NOTE(portdirect): the neutron user is not managed by the nova chart # these values should match those set in the neutron chart. neutron: region_name: RegionOne @@ -1183,7 +1209,7 @@ endpoints: user_domain_name: default username: neutron password: password - #NOTE(portdirect): the ironic user is not managed by the nova chart + # NOTE(portdirect): the ironic user is not managed by the nova chart # these values should match those set in the ironic chart. 
ironic: auth_type: password @@ -1247,7 +1273,7 @@ endpoints: host_fqdn_override: default: null path: - default: "/v2/%(tenant_id)s" + default: "/v2.1/%(tenant_id)s" scheme: default: 'http' port: @@ -1258,6 +1284,9 @@ endpoints: default: 6080 compute_metadata: name: nova + ip: + # IF blank, set clusterIP and metadata_host dynamically + ingress: null hosts: default: nova-metadata public: metadata @@ -1275,6 +1304,7 @@ endpoints: name: nova hosts: default: nova-novncproxy + public: novncproxy host_fqdn_override: default: null path: @@ -1284,10 +1314,12 @@ endpoints: port: novnc_proxy: default: 6080 + public: 80 compute_spice_proxy: name: nova hosts: default: nova-spiceproxy + public: placement host_fqdn_override: default: null path: @@ -1348,11 +1380,11 @@ pod: nova: uid: 42424 affinity: - anti: - type: - default: preferredDuringSchedulingIgnoredDuringExecution - topologyKey: - default: kubernetes.io/hostname + anti: + type: + default: preferredDuringSchedulingIgnoredDuringExecution + topologyKey: + default: kubernetes.io/hostname mounts: nova_compute: init_container: null @@ -1592,6 +1624,7 @@ manifests: deployment_spiceproxy: true deployment_scheduler: true ingress_metadata: true + ingress_novncproxy: true ingress_placement: true ingress_osapi: true job_bootstrap: true @@ -1617,6 +1650,7 @@ manifests: secret_keystone_placement: true secret_rabbitmq: true service_ingress_metadata: true + service_ingress_novncproxy: true service_ingress_placement: true service_ingress_osapi: true service_metadata: true diff --git a/openvswitch/templates/bin/_openvswitch-db-server.sh.tpl b/openvswitch/templates/bin/_openvswitch-db-server.sh.tpl index fc3365e6b4..cec29ec45b 100644 --- a/openvswitch/templates/bin/_openvswitch-db-server.sh.tpl +++ b/openvswitch/templates/bin/_openvswitch-db-server.sh.tpl @@ -22,6 +22,7 @@ COMMAND="${@:-start}" OVS_DB=/run/openvswitch/conf.db OVS_SOCKET=/run/openvswitch/db.sock OVS_SCHEMA=/usr/share/openvswitch/vswitch.ovsschema +OVS_PID=/run/openvswitch/ovsdb-server.pid function start () { mkdir -p "$(dirname ${OVS_DB})" @@ -38,11 +39,13 @@ function start () { -vconsole:emer \ -vconsole:err \ -vconsole:info \ + --pidfile=${OVS_PID} \ --remote=punix:${OVS_SOCKET} } function stop () { - ovs-appctl -T1 -t /run/openvswitch/ovsdb-server.1.ctl exit + PID=$(cat $OVS_PID) + ovs-appctl -T1 -t /run/openvswitch/ovsdb-server.${PID}.ctl exit } $COMMAND diff --git a/openvswitch/templates/bin/_openvswitch-vswitchd.sh.tpl b/openvswitch/templates/bin/_openvswitch-vswitchd.sh.tpl index 30a7db79b7..36fe51fdc8 100644 --- a/openvswitch/templates/bin/_openvswitch-vswitchd.sh.tpl +++ b/openvswitch/templates/bin/_openvswitch-vswitchd.sh.tpl @@ -20,6 +20,7 @@ set -ex COMMAND="${@:-start}" OVS_SOCKET=/run/openvswitch/db.sock +OVS_PID=/run/openvswitch/ovs-vswitchd.pid function start () { t=0 @@ -63,11 +64,13 @@ function start () { -vconsole:emer \ -vconsole:err \ -vconsole:info \ + --pidfile=${OVS_PID} \ --mlockall } function stop () { - ovs-appctl -T1 -t /run/openvswitch/ovs-vswitchd.1.ctl exit + PID=$(cat $OVS_PID) + ovs-appctl -T1 -t /run/openvswitch/ovs-vswitchd.${PID}.ctl exit } $COMMAND diff --git a/openvswitch/values.yaml b/openvswitch/values.yaml index ff7a387623..2b7ed949f5 100644 --- a/openvswitch/values.yaml +++ b/openvswitch/values.yaml @@ -23,7 +23,7 @@ images: tags: openvswitch_db_server: docker.io/openstackhelm/openvswitch:v2.8.1 openvswitch_vswitchd: docker.io/openstackhelm/openvswitch:v2.8.1 - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: 
quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: "IfNotPresent" labels: diff --git a/postgresql/templates/statefulset.yaml b/postgresql/templates/statefulset.yaml index 02252be577..e29f6822d1 100644 --- a/postgresql/templates/statefulset.yaml +++ b/postgresql/templates/statefulset.yaml @@ -44,7 +44,7 @@ spec: - name: postgresql image: {{ .Values.images.tags.postgresql }} imagePullPolicy: {{ .Values.images.pull_policy }} -{{ tuple $envAll $envAll.Values.pod.resources.server | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} +{{ tuple $envAll $envAll.Values.pod.resources.server | include "helm-toolkit.snippets.kubernetes_resources" | indent 8 }} ports: - containerPort: {{ tuple "postgresql" "internal" "postgresql" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }} env: diff --git a/postgresql/values.yaml b/postgresql/values.yaml index c8b6869a0b..7c170ee4a4 100644 --- a/postgresql/values.yaml +++ b/postgresql/values.yaml @@ -40,7 +40,7 @@ pod: images: tags: postgresql: "docker.io/postgres:9.5" - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: IfNotPresent storage: diff --git a/rabbitmq/templates/bin/_rabbitmq-liveness.sh.tpl b/rabbitmq/templates/bin/_rabbitmq-liveness.sh.tpl index 4943ef54ef..2f30aa4373 100644 --- a/rabbitmq/templates/bin/_rabbitmq-liveness.sh.tpl +++ b/rabbitmq/templates/bin/_rabbitmq-liveness.sh.tpl @@ -16,4 +16,6 @@ See the License for the specific language governing permissions and limitations under the License. */}} +set -e + exec rabbitmqctl status diff --git a/rabbitmq/templates/bin/_rabbitmq-readiness.sh.tpl b/rabbitmq/templates/bin/_rabbitmq-readiness.sh.tpl index 4943ef54ef..2f30aa4373 100644 --- a/rabbitmq/templates/bin/_rabbitmq-readiness.sh.tpl +++ b/rabbitmq/templates/bin/_rabbitmq-readiness.sh.tpl @@ -16,4 +16,6 @@ See the License for the specific language governing permissions and limitations under the License. */}} +set -e + exec rabbitmqctl status diff --git a/rabbitmq/templates/bin/_rabbitmq-test.sh.tpl b/rabbitmq/templates/bin/_rabbitmq-test.sh.tpl new file mode 100644 index 0000000000..04b2f0c451 --- /dev/null +++ b/rabbitmq/templates/bin/_rabbitmq-test.sh.tpl @@ -0,0 +1,77 @@ +#!/bin/bash + +{{/* +Copyright 2017 The Openstack-Helm Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/}}
+
+set -e
+
+# Extract connection details
+RABBIT_HOSTNAME=`echo $RABBITMQ_ADMIN_CONNECTION | awk -F'[@]' '{print $2}' \
+  | awk -F'[:/]' '{print $1}'`
+RABBIT_PORT=`echo $RABBITMQ_ADMIN_CONNECTION | awk -F'[@]' '{print $2}' \
+  | awk -F'[:/]' '{print $2}'`
+
+# Extract Admin User credential
+RABBITMQ_ADMIN_USERNAME=`echo $RABBITMQ_ADMIN_CONNECTION | awk -F'[@]' '{print $1}' \
+  | awk -F'[//:]' '{print $4}'`
+RABBITMQ_ADMIN_PASSWORD=`echo $RABBITMQ_ADMIN_CONNECTION | awk -F'[@]' '{print $1}' \
+  | awk -F'[//:]' '{print $5}'`
+
+function rabbit_find_partitions () {
+  PARTITIONS=$(rabbitmqadmin \
+    --host="${RABBIT_HOSTNAME}" \
+    --port="${RABBIT_PORT}" \
+    --username="${RABBITMQ_ADMIN_USERNAME}" \
+    --password="${RABBITMQ_ADMIN_PASSWORD}" \
+    list nodes -f raw_json | \
+      python -c "import json,sys;
+obj=json.load(sys.stdin);
+for num, node in enumerate(obj):
+  print node['partitions'];")
+
+  for PARTITION in ${PARTITIONS}; do
+    if [[ $PARTITION != '[]' ]]; then
+      echo "Cluster partition found"
+      exit 1
+    fi
+  done
+  echo "No cluster partitions found"
+}
+# Check no nodes report cluster partitioning
+rabbit_find_partitions
+
+function rabbit_check_users_match () {
+  # Check users match on all nodes
+  NODES=$(rabbitmqadmin \
+    --host="${RABBIT_HOSTNAME}" \
+    --port="${RABBIT_PORT}" \
+    --username="${RABBITMQ_ADMIN_USERNAME}" \
+    --password="${RABBITMQ_ADMIN_PASSWORD}" \
+    list nodes -f bash)
+  USER_LIST=$(mktemp --directory)
+  for NODE in ${NODES}; do
+    rabbitmqadmin \
+      --host=${NODE#*@} \
+      --port="${RABBIT_PORT}" \
+      --username="${RABBITMQ_ADMIN_USERNAME}" \
+      --password="${RABBITMQ_ADMIN_PASSWORD}" \
+      list users -f bash > ${USER_LIST}/${NODE#*@}
+  done
+  cd ${USER_LIST}; diff -q --from-file $(ls ${USER_LIST})
+  echo "User lists match for all nodes"
+}
+# Check users match on all nodes
+rabbit_check_users_match
diff --git a/rabbitmq/templates/configmap-bin.yaml b/rabbitmq/templates/configmap-bin.yaml
index 600f523578..743e3077ce 100644
--- a/rabbitmq/templates/configmap-bin.yaml
+++ b/rabbitmq/templates/configmap-bin.yaml
@@ -22,6 +22,8 @@ kind: ConfigMap
 metadata:
   name: {{ printf "%s-%s" $envAll.Release.Name "rabbitmq-bin" | quote }}
 data:
+  rabbitmq-test.sh: |
+{{ tuple "bin/_rabbitmq-test.sh.tpl" . | include "helm-toolkit.utils.template" | indent 4 }}
   rabbitmq-liveness.sh: |
 {{ tuple "bin/_rabbitmq-liveness.sh.tpl" . | include "helm-toolkit.utils.template" | indent 4 }}
   rabbitmq-readiness.sh: |
diff --git a/rabbitmq/templates/configmap-etc.yaml b/rabbitmq/templates/configmap-etc.yaml
index 8f329b8f94..5eef45aff3 100644
--- a/rabbitmq/templates/configmap-etc.yaml
+++ b/rabbitmq/templates/configmap-etc.yaml
@@ -17,20 +17,17 @@ limitations under the License.
 {{- if .Values.manifests.configmap_etc }}
 {{- $envAll := . }}
-{{- if empty .Values.conf.rabbitmq.cluster_formation.k8s.service_name -}}
-{{- tuple "oslo_messaging" "internal" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" | set .Values.conf.rabbitmq.cluster_formation.k8s "service_name" | quote | trunc 0 -}}
-{{- end -}}
-{{- if empty .Values.conf.rabbitmq.cluster_formation.k8s.host -}}
-{{- print "kubernetes.default.svc." .Values.endpoints.cluster_domain_suffix | set .Values.conf.rabbitmq.cluster_formation.k8s "host" | quote | trunc 0 -}}
+{{- if empty $envAll.Values.conf.rabbitmq.cluster_formation.k8s.host -}}
+{{- print "kubernetes.default.svc." 
$envAll.Values.endpoints.cluster_domain_suffix | set $envAll.Values.conf.rabbitmq.cluster_formation.k8s "host" | quote | trunc 0 -}} {{- end -}} -{{- print "0.0.0.0:" ( tuple "oslo_messaging" "internal" "amqp" . | include "helm-toolkit.endpoints.endpoint_port_lookup") | set .Values.conf.rabbitmq.listeners.tcp "1" | quote | trunc 0 -}} +{{- print "0.0.0.0:" ( tuple "oslo_messaging" "internal" "amqp" . | include "helm-toolkit.endpoints.endpoint_port_lookup") | set $envAll.Values.conf.rabbitmq.listeners.tcp "1" | quote | trunc 0 -}} -{{- if empty .Values.conf.rabbitmq.default_user -}} -{{- set .Values.conf.rabbitmq "default_user" .Values.endpoints.oslo_messaging.auth.user.username | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.rabbitmq.default_user -}} +{{- set $envAll.Values.conf.rabbitmq "default_user" $envAll.Values.endpoints.oslo_messaging.auth.user.username | quote | trunc 0 -}} {{- end -}} -{{- if empty .Values.conf.rabbitmq.default_pass -}} -{{- set .Values.conf.rabbitmq "default_pass" .Values.endpoints.oslo_messaging.auth.user.password | quote | trunc 0 -}} +{{- if empty $envAll.Values.conf.rabbitmq.default_pass -}} +{{- set $envAll.Values.conf.rabbitmq "default_pass" $envAll.Values.endpoints.oslo_messaging.auth.user.password | quote | trunc 0 -}} {{- end -}} --- @@ -42,5 +39,5 @@ data: enabled_plugins: | {{ tuple "etc/_enabled_plugins.tpl" . | include "helm-toolkit.utils.template" | indent 4 }} rabbitmq.conf: | -{{ include "rabbitmq.to_rabbit_config" .Values.conf.rabbitmq | indent 4 }} +{{ include "rabbitmq.utils.to_rabbit_config" $envAll.Values.conf.rabbitmq | indent 4 }} {{ end }} diff --git a/rabbitmq/templates/ingress-management.yaml b/rabbitmq/templates/ingress-management.yaml new file mode 100644 index 0000000000..cdd2c925d8 --- /dev/null +++ b/rabbitmq/templates/ingress-management.yaml @@ -0,0 +1,25 @@ +{{/* +Copyright 2017 The Openstack-Helm Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} + +{{- if and .Values.manifests.ingress_management .Values.network.management.ingress.public }} +{{- $envAll := . }} +{{- if empty $envAll.Values.endpoints.oslo_messaging.hosts.public }} +{{- $service_public_name := .Release.Name | trunc 12 }} +{{- $_ := set $envAll.Values.endpoints.oslo_messaging.hosts "public" ( printf "%s-%s-%s" $service_public_name "mgr" ( $service_public_name | sha256sum | trunc 6 )) }} +{{- end }} +{{- $ingressOpts := dict "envAll" . "backendService" "management" "backendServiceType" "oslo_messaging" "backendPort" "http" -}} +{{ $ingressOpts | include "helm-toolkit.manifests.ingress" }} +{{- end }} diff --git a/rabbitmq/templates/monitoring/prometheus/exporter-deployment.yaml b/rabbitmq/templates/monitoring/prometheus/exporter-deployment.yaml index 5767cd54a8..6cb2e27ba9 100644 --- a/rabbitmq/templates/monitoring/prometheus/exporter-deployment.yaml +++ b/rabbitmq/templates/monitoring/prometheus/exporter-deployment.yaml @@ -16,7 +16,7 @@ limitations under the License. 
{{- if and .Values.manifests.monitoring.prometheus.deployment_exporter .Values.monitoring.prometheus.enabled }} {{- $envAll := . }} -{{- $dependencies := .Values.dependencies.static.prometheus_rabbitmq_exporter }} +{{- $dependencies := $envAll.Values.dependencies.static.prometheus_rabbitmq_exporter }} {{- $rcControllerName := printf "%s-%s" $envAll.Release.Name "rabbitmq-exporter" }} {{ tuple $envAll $dependencies $rcControllerName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} @@ -26,41 +26,41 @@ kind: Deployment metadata: name: {{ $rcControllerName | quote }} spec: - replicas: {{ .Values.pod.replicas.prometheus_rabbitmq_exporter }} + replicas: {{ $envAll.Values.pod.replicas.prometheus_rabbitmq_exporter }} {{ tuple $envAll | include "helm-toolkit.snippets.kubernetes_upgrades_deployment" | indent 2 }} template: metadata: labels: {{ tuple $envAll "prometheus_rabbitmq_exporter" "exporter" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }} - namespace: {{ .Values.endpoints.prometheus_rabbitmq_exporter.namespace }} + namespace: {{ $envAll.Values.endpoints.prometheus_rabbitmq_exporter.namespace }} spec: serviceAccountName: {{ $rcControllerName | quote }} nodeSelector: - {{ .Values.labels.prometheus_rabbitmq_exporter.node_selector_key }}: {{ .Values.labels.prometheus_rabbitmq_exporter.node_selector_value }} - terminationGracePeriodSeconds: {{ .Values.pod.lifecycle.termination_grace_period.prometheus_rabbitmq_exporter.timeout | default "30" }} + {{ $envAll.Values.labels.prometheus_rabbitmq_exporter.node_selector_key }}: {{ $envAll.Values.labels.prometheus_rabbitmq_exporter.node_selector_value }} + terminationGracePeriodSeconds: {{ $envAll.Values.pod.lifecycle.termination_grace_period.prometheus_rabbitmq_exporter.timeout | default "30" }} initContainers: {{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }} containers: - name: rabbitmq-exporter - image: {{ .Values.images.tags.prometheus_rabbitmq_exporter }} - imagePullPolicy: {{ .Values.images.pull_policy }} + image: {{ $envAll.Values.images.tags.prometheus_rabbitmq_exporter }} + imagePullPolicy: {{ $envAll.Values.images.pull_policy }} {{ tuple $envAll $envAll.Values.pod.resources.prometheus_rabbitmq_exporter | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} ports: - name: metrics - containerPort: {{ .Values.network.prometheus_rabbitmq_exporter.port }} + containerPort: {{ $envAll.Values.network.prometheus_rabbitmq_exporter.port }} env: - name: RABBIT_URL value: http://{{ tuple "oslo_messaging" "internal" . 
| include "helm-toolkit.endpoints.hostname_fqdn_endpoint_lookup" }}:15672 - name: RABBIT_USER - value: {{ .Values.endpoints.oslo_messaging.auth.user.username | quote }} + value: {{ $envAll.Values.endpoints.oslo_messaging.auth.user.username | quote }} - name: RABBIT_PASSWORD - value: {{ .Values.endpoints.oslo_messaging.auth.user.password | quote }} + value: {{ $envAll.Values.endpoints.oslo_messaging.auth.user.password | quote }} - name: RABBIT_CAPABILITIES - value: {{ tuple .Values.conf.prometheus_exporter.capabilities $envAll | include "helm-toolkit.utils.joinListWithComma" | quote }} + value: {{ tuple $envAll.Values.conf.prometheus_exporter.capabilities $envAll | include "helm-toolkit.utils.joinListWithComma" | quote }} - name: PUBLISH_PORT - value: {{ .Values.network.prometheus_rabbitmq_exporter.port | quote }} + value: {{ $envAll.Values.network.prometheus_rabbitmq_exporter.port | quote }} - name: LOG_LEVEL - value: {{ .Values.conf.prometheus_exporter.log_level | quote }} + value: {{ $envAll.Values.conf.prometheus_exporter.log_level | quote }} - name: SKIPVERIFY - value: {{ .Values.conf.prometheus_exporter.skipverify | quote }} + value: {{ $envAll.Values.conf.prometheus_exporter.skipverify | quote }} {{- end }} diff --git a/rabbitmq/templates/monitoring/prometheus/exporter-service.yaml b/rabbitmq/templates/monitoring/prometheus/exporter-service.yaml index fbcc21f227..f49a126748 100644 --- a/rabbitmq/templates/monitoring/prometheus/exporter-service.yaml +++ b/rabbitmq/templates/monitoring/prometheus/exporter-service.yaml @@ -25,13 +25,13 @@ metadata: labels: {{ tuple $envAll "prometheus_rabbitmq_exporter" "metrics" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 4 }} annotations: -{{- if .Values.monitoring.prometheus.enabled }} +{{- if $envAll.Values.monitoring.prometheus.enabled }} {{ tuple $prometheus_annotations | include "helm-toolkit.snippets.prometheus_service_annotations" | indent 4 }} {{- end }} spec: ports: - name: metrics - port: {{ .Values.network.prometheus_rabbitmq_exporter.port }} + port: {{ $envAll.Values.network.prometheus_rabbitmq_exporter.port }} selector: {{ tuple $envAll "prometheus_rabbitmq_exporter" "exporter" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 4 }} {{- end }} diff --git a/rabbitmq/templates/pod-test.yaml b/rabbitmq/templates/pod-test.yaml new file mode 100644 index 0000000000..b47678ba85 --- /dev/null +++ b/rabbitmq/templates/pod-test.yaml @@ -0,0 +1,55 @@ +{{/* +Copyright 2017 The Openstack-Helm Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} + +{{- if .Values.manifests.pod_test }} +{{- $envAll := . 
}} +{{- $dependencies := $envAll.Values.dependencies.static.tests }} + +{{- $serviceAccountName := print .Release.Name "-test" }} +{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} +--- +apiVersion: v1 +kind: Pod +metadata: + name: "{{.Release.Name}}-test" + annotations: + "helm.sh/hook": test-success +spec: + serviceAccountName: {{ $serviceAccountName }} + nodeSelector: + {{ $envAll.Values.labels.test.node_selector_key }}: {{ $envAll.Values.labels.test.node_selector_value }} + restartPolicy: Never + initContainers: +{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }} + containers: + - name: {{.Release.Name}}-rabbitmq-test + image: {{ $envAll.Values.images.tags.scripted_test }} + env: + - name: RABBITMQ_ADMIN_CONNECTION + value: "{{ tuple "oslo_messaging" "internal" "user" "http" $envAll | include "helm-toolkit.endpoints.authenticated_endpoint_uri_lookup" }}" + command: + - /tmp/rabbitmq-test.sh + volumeMounts: + - name: rabbitmq-bin + mountPath: /tmp/rabbitmq-test.sh + subPath: rabbitmq-test.sh + readOnly: true + volumes: + - name: rabbitmq-bin + configMap: + name: {{ printf "%s-%s" $envAll.Release.Name "rabbitmq-bin" | quote }} + defaultMode: 0555 +{{- end }} diff --git a/rabbitmq/templates/service-discovery.yaml b/rabbitmq/templates/service-discovery.yaml new file mode 100644 index 0000000000..54c16f27e7 --- /dev/null +++ b/rabbitmq/templates/service-discovery.yaml @@ -0,0 +1,39 @@ +{{/* +Copyright 2017 The Openstack-Helm Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} + +{{- if .Values.manifests.service_discovery }} +{{- $envAll := . }} +{{- if empty $envAll.Values.endpoints.oslo_messaging.hosts.discovery }} +{{- $service_discovery_name := .Release.Name | trunc 12 }} +{{- $_ := set $envAll.Values.endpoints.oslo_messaging.hosts "discovery" ( printf "%s-%s-%s" $service_discovery_name "dsv" ( $service_discovery_name | sha256sum | trunc 6 )) }} +{{- end }} +--- +apiVersion: v1 +kind: Service +metadata: + name: {{ tuple "oslo_messaging" "discovery" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} +spec: + ports: + - port: {{ tuple "oslo_messaging" "internal" "amqp" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }} + name: amqp + - port: {{ add (tuple "oslo_messaging" "internal" "amqp" . | include "helm-toolkit.endpoints.endpoint_port_lookup") 20000 }} + name: clustering + - port: {{ tuple "oslo_messaging" "internal" "http" . 
| include "helm-toolkit.endpoints.endpoint_port_lookup" }} + name: http + clusterIP: None + selector: +{{ tuple $envAll "rabbitmq" "server" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 4 }} +{{ end }} diff --git a/rabbitmq/templates/service-ingress-management.yaml b/rabbitmq/templates/service-ingress-management.yaml new file mode 100644 index 0000000000..deca9b9901 --- /dev/null +++ b/rabbitmq/templates/service-ingress-management.yaml @@ -0,0 +1,25 @@ +{{/* +Copyright 2017 The Openstack-Helm Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} + +{{- if and .Values.manifests.service_ingress_management .Values.network.management.ingress.public }} +{{- $envAll := . }} +{{- if empty $envAll.Values.endpoints.oslo_messaging.hosts.public }} +{{- $service_public_name := .Release.Name | trunc 12 }} +{{- $_ := set $envAll.Values.endpoints.oslo_messaging.hosts "public" ( printf "%s-%s-%s" $service_public_name "mgr" ( $service_public_name | sha256sum | trunc 6 )) }} +{{- end }} +{{- $serviceIngressOpts := dict "envAll" . "backendService" "management" "backendServiceType" "oslo_messaging" "backendPort" "http" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} +{{- end }} diff --git a/rabbitmq/templates/service.yaml b/rabbitmq/templates/service.yaml index e9ba424ceb..262226e4bd 100644 --- a/rabbitmq/templates/service.yaml +++ b/rabbitmq/templates/service.yaml @@ -25,6 +25,8 @@ spec: ports: - port: {{ tuple "oslo_messaging" "internal" "amqp" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }} name: amqp + - port: {{ add (tuple "oslo_messaging" "internal" "amqp" . | include "helm-toolkit.endpoints.endpoint_port_lookup") 20000 }} + name: clustering - port: {{ tuple "oslo_messaging" "internal" "http" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }} name: http selector: diff --git a/rabbitmq/templates/statefulset.yaml b/rabbitmq/templates/statefulset.yaml index c18cf41b42..e2935e9a70 100644 --- a/rabbitmq/templates/statefulset.yaml +++ b/rabbitmq/templates/statefulset.yaml @@ -16,7 +16,12 @@ limitations under the License. {{- if .Values.manifests.statefulset }} {{- $envAll := . }} -{{- $dependencies := .Values.dependencies.static.rabbitmq }} +{{- if empty $envAll.Values.endpoints.oslo_messaging.hosts.discovery }} +{{- $service_discovery_name := .Release.Name | trunc 12 }} +{{- $_ := set $envAll.Values.endpoints.oslo_messaging.hosts "discovery" ( printf "%s-%s-%s" $service_discovery_name "dsv" ( $service_discovery_name | sha256sum | trunc 6 )) }} +{{- end }} + +{{- $dependencies := $envAll.Values.dependencies.static.rabbitmq }} {{- $rcControllerName := printf "%s-%s" $envAll.Release.Name "rabbitmq" }} {{ tuple $envAll $dependencies $rcControllerName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} @@ -58,8 +63,8 @@ kind: StatefulSet metadata: name: {{ $rcControllerName | quote }} spec: - serviceName: {{ tuple "oslo_messaging" "internal" . 
| include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} - replicas: {{ .Values.pod.replicas.server }} + serviceName: {{ tuple "oslo_messaging" "discovery" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} + replicas: {{ $envAll.Values.pod.replicas.server }} template: metadata: labels: @@ -72,13 +77,13 @@ spec: affinity: {{ tuple $envAll "rabbitmq" "server" | include "helm-toolkit.snippets.kubernetes_pod_anti_affinity" | indent 8 }} nodeSelector: - {{ .Values.labels.server.node_selector_key }}: {{ .Values.labels.server.node_selector_value }} + {{ $envAll.Values.labels.server.node_selector_key }}: {{ $envAll.Values.labels.server.node_selector_value }} initContainers: {{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }} -{{- if .Values.volume.chown_on_start }} +{{- if $envAll.Values.volume.chown_on_start }} - name: rabbitmq-perms - image: {{ .Values.images.tags.rabbitmq }} - imagePullPolicy: {{ .Values.images.pull_policy }} + image: {{ $envAll.Values.images.tags.rabbitmq }} + imagePullPolicy: {{ $envAll.Values.images.pull_policy }} securityContext: runAsUser: 0 {{ tuple $envAll $envAll.Values.pod.resources.server | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} @@ -93,7 +98,7 @@ spec: {{- end }} containers: - name: rabbitmq - image: {{ .Values.images.tags.rabbitmq }} + image: {{ $envAll.Values.images.tags.rabbitmq }} {{ tuple $envAll $envAll.Values.pod.resources.server | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} command: - /tmp/rabbitmq-start.sh @@ -104,19 +109,26 @@ spec: - name: amqp protocol: TCP containerPort: {{ tuple "oslo_messaging" "internal" "amqp" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }} + - name: clustering + protocol: TCP + containerPort: {{ add (tuple "oslo_messaging" "internal" "amqp" . | include "helm-toolkit.endpoints.endpoint_port_lookup") 20000 }} env: - - name: MY_POD_IP + - name: MY_POD_NAME valueFrom: fieldRef: - fieldPath: status.podIP + fieldPath: metadata.name - name: RABBITMQ_USE_LONGNAME value: "true" - name: RABBITMQ_NODENAME - value: "rabbit@$(MY_POD_IP)" + value: "rabbit@$(MY_POD_NAME).{{ tuple "oslo_messaging" "discovery" . | include "helm-toolkit.endpoints.hostname_fqdn_endpoint_lookup" }}" - name: K8S_SERVICE_NAME - value: {{ tuple "oslo_messaging" "internal" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" | quote }} + value: {{ tuple "oslo_messaging" "discovery" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} + # NOTE(portdirect): We use the discovery fqdn here, as we resolve + # nodes via their pods hostname/nodename + - name: K8S_HOSTNAME_SUFFIX + value: ".{{ tuple "oslo_messaging" "discovery" . 
| include "helm-toolkit.endpoints.hostname_fqdn_endpoint_lookup" }}" - name: RABBITMQ_ERLANG_COOKIE - value: "{{ .Values.endpoints.oslo_messaging.auth.erlang_cookie }}" + value: "{{ $envAll.Values.endpoints.oslo_messaging.auth.erlang_cookie }}" readinessProbe: initialDelaySeconds: 10 timeoutSeconds: 10 @@ -151,11 +163,11 @@ spec: configMap: name: {{ printf "%s-%s" $envAll.Release.Name "rabbitmq-etc" | quote }} defaultMode: 0444 - {{- if not .Values.volume.enabled }} + {{- if not $envAll.Values.volume.enabled }} - name: rabbitmq-data emptyDir: {} {{- end }} -{{- if .Values.volume.enabled }} +{{- if $envAll.Values.volume.enabled }} volumeClaimTemplates: - metadata: name: rabbitmq-data @@ -163,7 +175,7 @@ spec: accessModes: [ "ReadWriteOnce" ] resources: requests: - storage: {{ .Values.volume.size }} - storageClassName: {{ .Values.volume.class_name }} + storage: {{ $envAll.Values.volume.size }} + storageClassName: {{ $envAll.Values.volume.class_name }} {{- end }} {{ end }} diff --git a/rabbitmq/templates/_helpers.tpl b/rabbitmq/templates/utils/_to_rabbit_config.tpl similarity index 95% rename from rabbitmq/templates/_helpers.tpl rename to rabbitmq/templates/utils/_to_rabbit_config.tpl index c62b49950d..fb90bd1728 100644 --- a/rabbitmq/templates/_helpers.tpl +++ b/rabbitmq/templates/utils/_to_rabbit_config.tpl @@ -14,7 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. */}} -{{- define "rabbitmq.to_rabbit_config" -}} +{{- define "rabbitmq.utils.to_rabbit_config" -}} {{- range $top_key, $top_value := . }} {{- if kindIs "map" $top_value -}} {{- range $second_key, $second_value := . }} diff --git a/rabbitmq/values.yaml b/rabbitmq/values.yaml index 12a27c34d7..023c25e430 100644 --- a/rabbitmq/values.yaml +++ b/rabbitmq/values.yaml @@ -24,13 +24,17 @@ labels: prometheus_rabbitmq_exporter: node_selector_key: openstack-control-plane node_selector_value: enabled + test: + node_selector_key: openstack-control-plane + node_selector_value: enabled images: tags: prometheus_rabbitmq_exporter: docker.io/kbudde/rabbitmq-exporter:v0.21.0 prometheus_rabbitmq_exporter_helm_tests: docker.io/openstackhelm/heat:newton - rabbitmq: docker.io/rabbitmq:3.7.3 - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + rabbitmq: docker.io/rabbitmq:3.7.4 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 + scripted_test: docker.io/rabbitmq:3.7.4-management pull_policy: "IfNotPresent" pod: @@ -94,15 +98,15 @@ conf: rabbitmq: listeners: tcp: - #NOTE(portdirect): This is always defined via the endpoints section. + # NOTE(portdirect): This is always defined via the endpoints section. 
1: null cluster_formation: peer_discovery_backend: rabbit_peer_discovery_k8s k8s: - address_type: ip + address_type: hostname node_cleanup: interval: "10" - only_log_warning: "false" + only_log_warning: "true" cluster_partition_handling: autoheal queue_master_locator: min-masters loopback_users.guest: "false" @@ -121,6 +125,10 @@ dependencies: service: monitoring rabbitmq: jobs: null + tests: + services: + - endpoint: internal + service: oslo_messaging monitoring: prometheus: @@ -129,10 +137,18 @@ monitoring: scrape: true network: + management: + ingress: + public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" + annotations: + nginx.ingress.kubernetes.io/rewrite-target: / prometheus_rabbitmq_exporter: port: 9095 -# typically overriden by environmental +# typically overridden by environmental # values, but should include all endpoints # required by this chart endpoints: @@ -161,15 +177,28 @@ endpoints: password: password hosts: default: rabbitmq + # NOTE(portdirect): If left empty, the release name suffixed with "dsv" + # and a short sha will be used to produce a unique hostname for clustering + # and discovery. + discovery: null + # NOTE(portdirect): the public host is only used for the management WUI. + # If left empty, the release name suffixed with "mgr" and a short sha will + # be used to produce a unique hostname. + public: null host_fqdn_override: default: null path: / scheme: rabbit port: + clustering: + # NOTE(portdirect): the value for this port is driven by amqp+20000; + # it should not be set manually. + default: null amqp: default: 5672 http: default: 15672 + public: 80 prometheus_rabbitmq_exporter: namespace: null hosts: @@ -188,15 +217,19 @@ volume: chown_on_start: true enabled: true class_name: general - size: 1Gi + size: 256Mi manifests: configmap_bin: true configmap_etc: true + ingress_management: true + pod_test: true monitoring: prometheus: configmap_bin: true deployment_exporter: true service_exporter: true + service_discovery: true + service_ingress_management: true service: true statefulset: true diff --git a/rally/templates/configmap-tasks.yaml b/rally/templates/configmap-tasks.yaml index 53dd00976f..209d448125 100644 --- a/rally/templates/configmap-tasks.yaml +++ b/rally/templates/configmap-tasks.yaml @@ -23,25 +23,25 @@ metadata: name: rally-tasks data: authenticate.yaml: | -{{ toYaml .Values.conf.rally_tasks.authenticate_task | indent 4 }} +{{ toYaml .Values.conf.rally_tasks.authenticate_task | indent 4 }} ceilometer.yaml: | -{{ toYaml .Values.conf.rally_tasks.ceilometer_task | indent 4 }} +{{ toYaml .Values.conf.rally_tasks.ceilometer_task | indent 4 }} cinder.yaml: | -{{ toYaml .Values.conf.rally_tasks.cinder_task | indent 4 }} +{{ toYaml .Values.conf.rally_tasks.cinder_task | indent 4 }} glance.yaml: | -{{ toYaml .Values.conf.rally_tasks.glance_task | indent 4 }} +{{ toYaml .Values.conf.rally_tasks.glance_task | indent 4 }} heat.yaml: | -{{ toYaml .Values.conf.rally_tasks.heat_task | indent 4 }} +{{ toYaml .Values.conf.rally_tasks.heat_task | indent 4 }} keystone.yaml: | -{{ toYaml .Values.conf.rally_tasks.keystone_task | indent 4 }} +{{ toYaml .Values.conf.rally_tasks.keystone_task | indent 4 }} magnum.yaml: | -{{ toYaml .Values.conf.rally_tasks.magnum_task | indent 4 }} +{{ toYaml .Values.conf.rally_tasks.magnum_task | indent 4 }} neutron.yaml: | -{{ toYaml .Values.conf.rally_tasks.neutron_task | indent 4 }} +{{ toYaml .Values.conf.rally_tasks.neutron_task | indent 4 }} nova.yaml: | -{{ toYaml .Values.conf.rally_tasks.nova_task | indent 4 }} +{{ toYaml
.Values.conf.rally_tasks.nova_task | indent 4 }} senlin.yaml: | -{{ toYaml .Values.conf.rally_tasks.senlin_task | indent 4 }} +{{ toYaml .Values.conf.rally_tasks.senlin_task | indent 4 }} swift.yaml: | -{{ toYaml .Values.conf.rally_tasks.swift_task | indent 4 }} +{{ toYaml .Values.conf.rally_tasks.swift_task | indent 4 }} {{- end }} diff --git a/rally/templates/configmap-test-templates.yaml b/rally/templates/configmap-test-templates.yaml index ca4ef031fa..dd3ba30794 100644 --- a/rally/templates/configmap-test-templates.yaml +++ b/rally/templates/configmap-test-templates.yaml @@ -22,34 +22,8 @@ kind: ConfigMap metadata: name: heat-tasks-test-templates data: - random-strings.yaml: | -{{- tuple .Values.conf.rally_tasks.heat_tests.random_strings "tasks/test-templates/_random-strings.yaml.template.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - updated-random-strings-replace.yaml: | -{{- tuple .Values.conf.rally_tasks.heat_tests.updated_random_strings_replace "tasks/test-templates/_updated-random-strings-replace.yaml.template.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - updated-random-strings-add.yaml: | -{{- tuple .Values.conf.rally_tasks.heat_tests.updated_random_strings_add "tasks/test-templates/_updated-random-strings-add.yaml.template.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - updated-random-strings-delete.yaml: | -{{- tuple .Values.conf.rally_tasks.heat_tests.updated_random_strings_delete "tasks/test-templates/_updated-random-strings-delete.yaml.template.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - resource-group-with-constraint.yaml: | -{{- tuple .Values.conf.rally_tasks.heat_tests.resource_group_with_constraint "tasks/test-templates/_resource-group-with-constraint.yaml.template.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - resource-group-with-outputs.yaml: | -{{- tuple .Values.conf.rally_tasks.heat_tests.resource_group_with_outputs "tasks/test-templates/_resource-group-with-outputs.yaml.template.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - resource-group-server-with-volume.yaml: | -{{- tuple .Values.conf.rally_tasks.heat_tests.resource_group_server_with_volume "tasks/test-templates/_resource-group-server-with-volume.yaml.template.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - resource-group.yaml: | -{{- tuple .Values.conf.rally_tasks.heat_tests.resource_group "tasks/test-templates/_resource-group.yaml.template.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - default.yaml: | -{{- tuple .Values.conf.rally_tasks.heat_tests.default "tasks/test-templates/_default.yaml.template.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - autoscaling-group.yaml: | -{{- tuple .Values.conf.rally_tasks.heat_tests.autoscaling_group "tasks/test-templates/_autoscaling-group.yaml.template.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - autoscaling-policy.yaml: | -{{- tuple .Values.conf.rally_tasks.heat_tests.autoscaling_policy "tasks/test-templates/_autoscaling-policy.yaml.template.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - server-with-ports.yaml: | -{{- tuple .Values.conf.rally_tasks.heat_tests.server_with_ports "tasks/test-templates/_server-with-ports.yaml.template.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - server-with-volume.yaml: | -{{- tuple .Values.conf.rally_tasks.heat_tests.server_with_volume "tasks/test-templates/_server-with-volume.yaml.template.tpl" . 
| include "helm-toolkit.utils.configmap_templater" }} - updated-resource-group-increase.yaml: | -{{- tuple .Values.conf.rally_tasks.heat_tests.updated_resource_group_increase "tasks/test-templates/_updated-resource-group-increase.yaml.template.tpl" . | include "helm-toolkit.utils.configmap_templater" }} - updated-resource-group-reduce.yaml: | -{{- tuple .Values.conf.rally_tasks.heat_tests.updated_resource_group_reduce "tasks/test-templates/_updated-resource-group-reduce.yaml.template.tpl" . | include "helm-toolkit.utils.configmap_templater" }} +{{- range $key, $value := $envAll.Values.conf.rally_tasks.heat_tests }} +{{- $file := printf "%s.%s" (replace "_" "-" $key) "yaml" }} +{{- include "helm-toolkit.snippets.values_template_renderer" (dict "envAll" $envAll "template" (index $envAll.Values.conf.rally_tasks.heat_tests $key ) "key" $file ) | indent 2 }} +{{- end }} {{- end }} diff --git a/rally/templates/etc/_rally.conf.tpl b/rally/templates/etc/_rally.conf.tpl deleted file mode 100644 index 2ab06942c7..0000000000 --- a/rally/templates/etc/_rally.conf.tpl +++ /dev/null @@ -1,1010 +0,0 @@ -{{/* -Copyright 2017 The Openstack-Helm Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/}} - -{{ include "rally.conf.rally_values_skeleton" .Values.conf.rally | trunc 0 }} -{{ include "rally.conf.rally" .Values.conf.rally }} - - -{{- define "rally.conf.rally_values_skeleton" -}} - -{{- if not .default -}}{{- set . "default" dict -}}{{- end -}} -{{- if not .default.oslo -}}{{- set .default "oslo" dict -}}{{- end -}} -{{- if not .default.oslo.log -}}{{- set .default.oslo "log" dict -}}{{- end -}} -{{- if not .default.rally -}}{{- set .default "rally" dict -}}{{- end -}} -{{- if not .benchmark -}}{{- set . "benchmark" dict -}}{{- end -}} -{{- if not .benchmark.rally -}}{{- set .benchmark "rally" dict -}}{{- end -}} -{{- if not .cleanup -}}{{- set . "cleanup" dict -}}{{- end -}} -{{- if not .cleanup.rally -}}{{- set .cleanup "rally" dict -}}{{- end -}} -{{- if not .database -}}{{- set . "database" dict -}}{{- end -}} -{{- if not .database.oslo -}}{{- set .database "oslo" dict -}}{{- end -}} -{{- if not .database.oslo.db -}}{{- set .database.oslo "db" dict -}}{{- end -}} -{{- if not .roles_context -}}{{- set . "roles_context" dict -}}{{- end -}} -{{- if not .roles_context.rally -}}{{- set .roles_context "rally" dict -}}{{- end -}} -{{- if not .tempest -}}{{- set . "tempest" dict -}}{{- end -}} -{{- if not .tempest.rally -}}{{- set .tempest "rally" dict -}}{{- end -}} -{{- if not .users_context -}}{{- set . "users_context" dict -}}{{- end -}} -{{- if not .users_context.rally -}}{{- set .users_context "rally" dict -}}{{- end -}} - -{{- end -}} - - -{{- define "rally.conf.rally" -}} - -[DEFAULT] - -# -# From oslo.log -# - -# If set to true, the logging level will be set to DEBUG instead of -# the default INFO level. (boolean value) -# Note: This option can be changed without restarting. 
-# from .default.oslo.log.debug -{{ if not .default.oslo.log.debug }}#{{ end }}debug = {{ .default.oslo.log.debug | default "false" }} - -# The name of a logging configuration file. This file is appended to -# any existing logging configuration files. For details about logging -# configuration files, see the Python logging module documentation. -# Note that when logging configuration files are used then all logging -# configuration is set in the configuration file and other logging -# configuration options are ignored (for example, -# logging_context_format_string). (string value) -# Note: This option can be changed without restarting. -# Deprecated group/name - [DEFAULT]/log-config -# Deprecated group/name - [DEFAULT]/log_config -# from .default.oslo.log.log_config_append -{{ if not .default.oslo.log.log_config_append }}#{{ end }}log_config_append = {{ .default.oslo.log.log_config_append | default "" }} - -# Defines the format string for %%(asctime)s in log records. Default: -# %(default)s . This option is ignored if log_config_append is set. -# (string value) -# from .default.oslo.log.log_date_format -{{ if not .default.oslo.log.log_date_format }}#{{ end }}log_date_format = {{ .default.oslo.log.log_date_format | default "%Y-%m-%d %H:%M:%S" }} - -# (Optional) Name of log file to send logging output to. If no default -# is set, logging will go to stderr as defined by use_stderr. This -# option is ignored if log_config_append is set. (string value) -# Deprecated group/name - [DEFAULT]/logfile -# from .default.oslo.log.log_file -{{ if not .default.oslo.log.log_file }}#{{ end }}log_file = {{ .default.oslo.log.log_file | default "" }} - -# (Optional) The base directory used for relative log_file paths. -# This option is ignored if log_config_append is set. (string value) -# Deprecated group/name - [DEFAULT]/logdir -# from .default.oslo.log.log_dir -{{ if not .default.oslo.log.log_dir }}#{{ end }}log_dir = {{ .default.oslo.log.log_dir | default "" }} - -# Uses logging handler designed to watch file system. When log file is -# moved or removed this handler will open a new log file with -# specified path instantaneously. It makes sense only if log_file -# option is specified and Linux platform is used. This option is -# ignored if log_config_append is set. (boolean value) -# from .default.oslo.log.watch_log_file -{{ if not .default.oslo.log.watch_log_file }}#{{ end }}watch_log_file = {{ .default.oslo.log.watch_log_file | default "false" }} - -# Use syslog for logging. Existing syslog format is DEPRECATED and -# will be changed later to honor RFC5424. This option is ignored if -# log_config_append is set. (boolean value) -# from .default.oslo.log.use_syslog -{{ if not .default.oslo.log.use_syslog }}#{{ end }}use_syslog = {{ .default.oslo.log.use_syslog | default "false" }} - -# Enable journald for logging. If running in a systemd environment you -# may wish to enable journal support. Doing so will use the journal -# native protocol which includes structured metadata in addition to -# log messages.This option is ignored if log_config_append is set. -# (boolean value) -# from .default.oslo.log.use_journal -{{ if not .default.oslo.log.use_journal }}#{{ end }}use_journal = {{ .default.oslo.log.use_journal | default "false" }} - -# Syslog facility to receive log lines. This option is ignored if -# log_config_append is set. 
(string value) -# from .default.oslo.log.syslog_log_facility -{{ if not .default.oslo.log.syslog_log_facility }}#{{ end }}syslog_log_facility = {{ .default.oslo.log.syslog_log_facility | default "LOG_USER" }} - -# Log output to standard error. This option is ignored if -# log_config_append is set. (boolean value) -# from .default.oslo.log.use_stderr -{{ if not .default.oslo.log.use_stderr }}#{{ end }}use_stderr = {{ .default.oslo.log.use_stderr | default "false" }} - -# Format string to use for log messages with context. (string value) -# from .default.oslo.log.logging_context_format_string -{{ if not .default.oslo.log.logging_context_format_string }}#{{ end }}logging_context_format_string = {{ .default.oslo.log.logging_context_format_string | default "%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s" }} - -# Format string to use for log messages when context is undefined. -# (string value) -# from .default.oslo.log.logging_default_format_string -{{ if not .default.oslo.log.logging_default_format_string }}#{{ end }}logging_default_format_string = {{ .default.oslo.log.logging_default_format_string | default "%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s" }} - -# Additional data to append to log message when logging level for the -# message is DEBUG. (string value) -# from .default.oslo.log.logging_debug_format_suffix -{{ if not .default.oslo.log.logging_debug_format_suffix }}#{{ end }}logging_debug_format_suffix = {{ .default.oslo.log.logging_debug_format_suffix | default "%(funcName)s %(pathname)s:%(lineno)d" }} - -# Prefix each line of exception output with this format. (string -# value) -# from .default.oslo.log.logging_exception_prefix -{{ if not .default.oslo.log.logging_exception_prefix }}#{{ end }}logging_exception_prefix = {{ .default.oslo.log.logging_exception_prefix | default "%(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s" }} - -# Defines the format string for %(user_identity)s that is used in -# logging_context_format_string. (string value) -# from .default.oslo.log.logging_user_identity_format -{{ if not .default.oslo.log.logging_user_identity_format }}#{{ end }}logging_user_identity_format = {{ .default.oslo.log.logging_user_identity_format | default "%(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s" }} - -# List of package logging levels in logger=LEVEL pairs. This option is -# ignored if log_config_append is set. (list value) -# from .default.oslo.log.default_log_levels -{{ if not .default.oslo.log.default_log_levels }}#{{ end }}default_log_levels = {{ .default.oslo.log.default_log_levels | default "amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO" }} - -# Enables or disables publication of error events. (boolean value) -# from .default.oslo.log.publish_errors -{{ if not .default.oslo.log.publish_errors }}#{{ end }}publish_errors = {{ .default.oslo.log.publish_errors | default "false" }} - -# The format for an instance that is passed with the log message. 
-# (string value) -# from .default.oslo.log.instance_format -{{ if not .default.oslo.log.instance_format }}#{{ end }}instance_format = {{ .default.oslo.log.instance_format | default "\"[instance: %(uuid)s] \"" }} - -# The format for an instance UUID that is passed with the log message. -# (string value) -# from .default.oslo.log.instance_uuid_format -{{ if not .default.oslo.log.instance_uuid_format }}#{{ end }}instance_uuid_format = {{ .default.oslo.log.instance_uuid_format | default "\"[instance: %(uuid)s] \"" }} - -# Interval, number of seconds, of log rate limiting. (integer value) -# from .default.oslo.log.rate_limit_interval -{{ if not .default.oslo.log.rate_limit_interval }}#{{ end }}rate_limit_interval = {{ .default.oslo.log.rate_limit_interval | default "0" }} - -# Maximum number of logged messages per rate_limit_interval. (integer -# value) -# from .default.oslo.log.rate_limit_burst -{{ if not .default.oslo.log.rate_limit_burst }}#{{ end }}rate_limit_burst = {{ .default.oslo.log.rate_limit_burst | default "0" }} - -# Log level name used by rate limiting: CRITICAL, ERROR, INFO, -# WARNING, DEBUG or empty string. Logs with level greater or equal to -# rate_limit_except_level are not filtered. An empty string means that -# all levels are filtered. (string value) -# from .default.oslo.log.rate_limit_except_level -{{ if not .default.oslo.log.rate_limit_except_level }}#{{ end }}rate_limit_except_level = {{ .default.oslo.log.rate_limit_except_level | default "CRITICAL" }} - -# Enables or disables fatal status of deprecations. (boolean value) -# from .default.oslo.log.fatal_deprecations -{{ if not .default.oslo.log.fatal_deprecations }}#{{ end }}fatal_deprecations = {{ .default.oslo.log.fatal_deprecations | default "false" }} - -# -# From rally -# - -# Print debugging output only for Rally. Off-site components stay -# quiet. (boolean value) -# from .default.rally.rally_debug -{{ if not .default.rally.rally_debug }}#{{ end }}rally_debug = {{ .default.rally.rally_debug | default "false" }} - -# HTTP timeout for any of OpenStack service in seconds (floating point -# value) -# from .default.rally.openstack_client_http_timeout -{{ if not .default.rally.openstack_client_http_timeout }}#{{ end }}openstack_client_http_timeout = {{ .default.rally.openstack_client_http_timeout | default "180.0" }} - -# Size of raw result chunk in iterations (integer value) -# Minimum value: 1 -# from .default.rally.raw_result_chunk_size -{{ if not .default.rally.raw_result_chunk_size }}#{{ end }}raw_result_chunk_size = {{ .default.rally.raw_result_chunk_size | default "1000" }} - - -[benchmark] - -# -# From rally -# - -# Time to sleep after creating a resource before polling for it status -# (floating point value) -# from .benchmark.rally.cinder_volume_create_prepoll_delay -{{ if not .benchmark.rally.cinder_volume_create_prepoll_delay }}#{{ end }}cinder_volume_create_prepoll_delay = {{ .benchmark.rally.cinder_volume_create_prepoll_delay | default "2.0" }} - -# Time to wait for cinder volume to be created. (floating point value) -# from .benchmark.rally.cinder_volume_create_timeout -{{ if not .benchmark.rally.cinder_volume_create_timeout }}#{{ end }}cinder_volume_create_timeout = {{ .benchmark.rally.cinder_volume_create_timeout | default "600.0" }} - -# Interval between checks when waiting for volume creation. 
(floating -# point value) -# from .benchmark.rally.cinder_volume_create_poll_interval -{{ if not .benchmark.rally.cinder_volume_create_poll_interval }}#{{ end }}cinder_volume_create_poll_interval = {{ .benchmark.rally.cinder_volume_create_poll_interval | default "2.0" }} - -# Time to wait for cinder volume to be deleted. (floating point value) -# from .benchmark.rally.cinder_volume_delete_timeout -{{ if not .benchmark.rally.cinder_volume_delete_timeout }}#{{ end }}cinder_volume_delete_timeout = {{ .benchmark.rally.cinder_volume_delete_timeout | default "600.0" }} - -# Interval between checks when waiting for volume deletion. (floating -# point value) -# from .benchmark.rally.cinder_volume_delete_poll_interval -{{ if not .benchmark.rally.cinder_volume_delete_poll_interval }}#{{ end }}cinder_volume_delete_poll_interval = {{ .benchmark.rally.cinder_volume_delete_poll_interval | default "2.0" }} - -# Time to wait for cinder backup to be restored. (floating point -# value) -# from .benchmark.rally.cinder_backup_restore_timeout -{{ if not .benchmark.rally.cinder_backup_restore_timeout }}#{{ end }}cinder_backup_restore_timeout = {{ .benchmark.rally.cinder_backup_restore_timeout | default "600.0" }} - -# Interval between checks when waiting for backup restoring. (floating -# point value) -# from .benchmark.rally.cinder_backup_restore_poll_interval -{{ if not .benchmark.rally.cinder_backup_restore_poll_interval }}#{{ end }}cinder_backup_restore_poll_interval = {{ .benchmark.rally.cinder_backup_restore_poll_interval | default "2.0" }} - -# Time to sleep after boot before polling for status (floating point -# value) -# from .benchmark.rally.ec2_server_boot_prepoll_delay -{{ if not .benchmark.rally.ec2_server_boot_prepoll_delay }}#{{ end }}ec2_server_boot_prepoll_delay = {{ .benchmark.rally.ec2_server_boot_prepoll_delay | default "1.0" }} - -# Server boot timeout (floating point value) -# from .benchmark.rally.ec2_server_boot_timeout -{{ if not .benchmark.rally.ec2_server_boot_timeout }}#{{ end }}ec2_server_boot_timeout = {{ .benchmark.rally.ec2_server_boot_timeout | default "300.0" }} - -# Server boot poll interval (floating point value) -# from .benchmark.rally.ec2_server_boot_poll_interval -{{ if not .benchmark.rally.ec2_server_boot_poll_interval }}#{{ end }}ec2_server_boot_poll_interval = {{ .benchmark.rally.ec2_server_boot_poll_interval | default "1.0" }} - -# Time to sleep after creating a resource before polling for it status -# (floating point value) -# from .benchmark.rally.glance_image_create_prepoll_delay -{{ if not .benchmark.rally.glance_image_create_prepoll_delay }}#{{ end }}glance_image_create_prepoll_delay = {{ .benchmark.rally.glance_image_create_prepoll_delay | default "2.0" }} - -# Time to wait for glance image to be created. (floating point value) -# from .benchmark.rally.glance_image_create_timeout -{{ if not .benchmark.rally.glance_image_create_timeout }}#{{ end }}glance_image_create_timeout = {{ .benchmark.rally.glance_image_create_timeout | default "120.0" }} - -# Interval between checks when waiting for image creation. (floating -# point value) -# from .benchmark.rally.glance_image_create_poll_interval -{{ if not .benchmark.rally.glance_image_create_poll_interval }}#{{ end }}glance_image_create_poll_interval = {{ .benchmark.rally.glance_image_create_poll_interval | default "1.0" }} - -# Time(in sec) to sleep after creating a resource before polling for -# it status. 
(floating point value) -# from .benchmark.rally.heat_stack_create_prepoll_delay -{{ if not .benchmark.rally.heat_stack_create_prepoll_delay }}#{{ end }}heat_stack_create_prepoll_delay = {{ .benchmark.rally.heat_stack_create_prepoll_delay | default "2.0" }} - -# Time(in sec) to wait for heat stack to be created. (floating point -# value) -# from .benchmark.rally.heat_stack_create_timeout -{{ if not .benchmark.rally.heat_stack_create_timeout }}#{{ end }}heat_stack_create_timeout = {{ .benchmark.rally.heat_stack_create_timeout | default "3600.0" }} - -# Time interval(in sec) between checks when waiting for stack -# creation. (floating point value) -# from .benchmark.rally.heat_stack_create_poll_interval -{{ if not .benchmark.rally.heat_stack_create_poll_interval }}#{{ end }}heat_stack_create_poll_interval = {{ .benchmark.rally.heat_stack_create_poll_interval | default "1.0" }} - -# Time(in sec) to wait for heat stack to be deleted. (floating point -# value) -# from .benchmark.rally.heat_stack_delete_timeout -{{ if not .benchmark.rally.heat_stack_delete_timeout }}#{{ end }}heat_stack_delete_timeout = {{ .benchmark.rally.heat_stack_delete_timeout | default "3600.0" }} - -# Time interval(in sec) between checks when waiting for stack -# deletion. (floating point value) -# from .benchmark.rally.heat_stack_delete_poll_interval -{{ if not .benchmark.rally.heat_stack_delete_poll_interval }}#{{ end }}heat_stack_delete_poll_interval = {{ .benchmark.rally.heat_stack_delete_poll_interval | default "1.0" }} - -# Time(in sec) to wait for stack to be checked. (floating point value) -# from .benchmark.rally.heat_stack_check_timeout -{{ if not .benchmark.rally.heat_stack_check_timeout }}#{{ end }}heat_stack_check_timeout = {{ .benchmark.rally.heat_stack_check_timeout | default "3600.0" }} - -# Time interval(in sec) between checks when waiting for stack -# checking. (floating point value) -# from .benchmark.rally.heat_stack_check_poll_interval -{{ if not .benchmark.rally.heat_stack_check_poll_interval }}#{{ end }}heat_stack_check_poll_interval = {{ .benchmark.rally.heat_stack_check_poll_interval | default "1.0" }} - -# Time(in sec) to sleep after updating a resource before polling for -# it status. (floating point value) -# from .benchmark.rally.heat_stack_update_prepoll_delay -{{ if not .benchmark.rally.heat_stack_update_prepoll_delay }}#{{ end }}heat_stack_update_prepoll_delay = {{ .benchmark.rally.heat_stack_update_prepoll_delay | default "2.0" }} - -# Time(in sec) to wait for stack to be updated. (floating point value) -# from .benchmark.rally.heat_stack_update_timeout -{{ if not .benchmark.rally.heat_stack_update_timeout }}#{{ end }}heat_stack_update_timeout = {{ .benchmark.rally.heat_stack_update_timeout | default "3600.0" }} - -# Time interval(in sec) between checks when waiting for stack update. -# (floating point value) -# from .benchmark.rally.heat_stack_update_poll_interval -{{ if not .benchmark.rally.heat_stack_update_poll_interval }}#{{ end }}heat_stack_update_poll_interval = {{ .benchmark.rally.heat_stack_update_poll_interval | default "1.0" }} - -# Time(in sec) to wait for stack to be suspended. (floating point -# value) -# from .benchmark.rally.heat_stack_suspend_timeout -{{ if not .benchmark.rally.heat_stack_suspend_timeout }}#{{ end }}heat_stack_suspend_timeout = {{ .benchmark.rally.heat_stack_suspend_timeout | default "3600.0" }} - -# Time interval(in sec) between checks when waiting for stack suspend. 
-# (floating point value) -# from .benchmark.rally.heat_stack_suspend_poll_interval -{{ if not .benchmark.rally.heat_stack_suspend_poll_interval }}#{{ end }}heat_stack_suspend_poll_interval = {{ .benchmark.rally.heat_stack_suspend_poll_interval | default "1.0" }} - -# Time(in sec) to wait for stack to be resumed. (floating point value) -# from .benchmark.rally.heat_stack_resume_timeout -{{ if not .benchmark.rally.heat_stack_resume_timeout }}#{{ end }}heat_stack_resume_timeout = {{ .benchmark.rally.heat_stack_resume_timeout | default "3600.0" }} - -# Time interval(in sec) between checks when waiting for stack resume. -# (floating point value) -# from .benchmark.rally.heat_stack_resume_poll_interval -{{ if not .benchmark.rally.heat_stack_resume_poll_interval }}#{{ end }}heat_stack_resume_poll_interval = {{ .benchmark.rally.heat_stack_resume_poll_interval | default "1.0" }} - -# Time(in sec) to wait for stack snapshot to be created. (floating -# point value) -# from .benchmark.rally.heat_stack_snapshot_timeout -{{ if not .benchmark.rally.heat_stack_snapshot_timeout }}#{{ end }}heat_stack_snapshot_timeout = {{ .benchmark.rally.heat_stack_snapshot_timeout | default "3600.0" }} - -# Time interval(in sec) between checks when waiting for stack snapshot -# to be created. (floating point value) -# from .benchmark.rally.heat_stack_snapshot_poll_interval -{{ if not .benchmark.rally.heat_stack_snapshot_poll_interval }}#{{ end }}heat_stack_snapshot_poll_interval = {{ .benchmark.rally.heat_stack_snapshot_poll_interval | default "1.0" }} - -# Time(in sec) to wait for stack to be restored from snapshot. -# (floating point value) -# from .benchmark.rally.heat_stack_restore_timeout -{{ if not .benchmark.rally.heat_stack_restore_timeout }}#{{ end }}heat_stack_restore_timeout = {{ .benchmark.rally.heat_stack_restore_timeout | default "3600.0" }} - -# Time interval(in sec) between checks when waiting for stack to be -# restored. (floating point value) -# from .benchmark.rally.heat_stack_restore_poll_interval -{{ if not .benchmark.rally.heat_stack_restore_poll_interval }}#{{ end }}heat_stack_restore_poll_interval = {{ .benchmark.rally.heat_stack_restore_poll_interval | default "1.0" }} - -# Time (in sec) to wait for stack to scale up or down. (floating point -# value) -# from .benchmark.rally.heat_stack_scale_timeout -{{ if not .benchmark.rally.heat_stack_scale_timeout }}#{{ end }}heat_stack_scale_timeout = {{ .benchmark.rally.heat_stack_scale_timeout | default "3600.0" }} - -# Time interval (in sec) between checks when waiting for a stack to -# scale up or down. (floating point value) -# from .benchmark.rally.heat_stack_scale_poll_interval -{{ if not .benchmark.rally.heat_stack_scale_poll_interval }}#{{ end }}heat_stack_scale_poll_interval = {{ .benchmark.rally.heat_stack_scale_poll_interval | default "1.0" }} - -# Interval(in sec) between checks when waiting for node creation. 
-# (floating point value) -# from .benchmark.rally.ironic_node_create_poll_interval -{{ if not .benchmark.rally.ironic_node_create_poll_interval }}#{{ end }}ironic_node_create_poll_interval = {{ .benchmark.rally.ironic_node_create_poll_interval | default "1.0" }} - -# Ironic node create timeout (floating point value) -# from .benchmark.rally.ironic_node_create_timeout -{{ if not .benchmark.rally.ironic_node_create_timeout }}#{{ end }}ironic_node_create_timeout = {{ .benchmark.rally.ironic_node_create_timeout | default "300" }} - -# Ironic node poll interval (floating point value) -# from .benchmark.rally.ironic_node_poll_interval -{{ if not .benchmark.rally.ironic_node_poll_interval }}#{{ end }}ironic_node_poll_interval = {{ .benchmark.rally.ironic_node_poll_interval | default "1.0" }} - -# Ironic node create timeout (floating point value) -# from .benchmark.rally.ironic_node_delete_timeout -{{ if not .benchmark.rally.ironic_node_delete_timeout }}#{{ end }}ironic_node_delete_timeout = {{ .benchmark.rally.ironic_node_delete_timeout | default "300" }} - -# Time(in sec) to sleep after creating a resource before polling for -# the status. (floating point value) -# from .benchmark.rally.magnum_cluster_create_prepoll_delay -{{ if not .benchmark.rally.magnum_cluster_create_prepoll_delay }}#{{ end }}magnum_cluster_create_prepoll_delay = {{ .benchmark.rally.magnum_cluster_create_prepoll_delay | default "5.0" }} - -# Time(in sec) to wait for magnum cluster to be created. (floating -# point value) -# from .benchmark.rally.magnum_cluster_create_timeout -{{ if not .benchmark.rally.magnum_cluster_create_timeout }}#{{ end }}magnum_cluster_create_timeout = {{ .benchmark.rally.magnum_cluster_create_timeout | default "1200.0" }} - -# Time interval(in sec) between checks when waiting for cluster -# creation. (floating point value) -# from .benchmark.rally.magnum_cluster_create_poll_interval -{{ if not .benchmark.rally.magnum_cluster_create_poll_interval }}#{{ end }}magnum_cluster_create_poll_interval = {{ .benchmark.rally.magnum_cluster_create_poll_interval | default "1.0" }} - -# Delay between creating Manila share and polling for its status. -# (floating point value) -# from .benchmark.rally.manila_share_create_prepoll_delay -{{ if not .benchmark.rally.manila_share_create_prepoll_delay }}#{{ end }}manila_share_create_prepoll_delay = {{ .benchmark.rally.manila_share_create_prepoll_delay | default "2.0" }} - -# Timeout for Manila share creation. (floating point value) -# from .benchmark.rally.manila_share_create_timeout -{{ if not .benchmark.rally.manila_share_create_timeout }}#{{ end }}manila_share_create_timeout = {{ .benchmark.rally.manila_share_create_timeout | default "300.0" }} - -# Interval between checks when waiting for Manila share creation. -# (floating point value) -# from .benchmark.rally.manila_share_create_poll_interval -{{ if not .benchmark.rally.manila_share_create_poll_interval }}#{{ end }}manila_share_create_poll_interval = {{ .benchmark.rally.manila_share_create_poll_interval | default "3.0" }} - -# Timeout for Manila share deletion. (floating point value) -# from .benchmark.rally.manila_share_delete_timeout -{{ if not .benchmark.rally.manila_share_delete_timeout }}#{{ end }}manila_share_delete_timeout = {{ .benchmark.rally.manila_share_delete_timeout | default "180.0" }} - -# Interval between checks when waiting for Manila share deletion. 
-# (floating point value) -# from .benchmark.rally.manila_share_delete_poll_interval -{{ if not .benchmark.rally.manila_share_delete_poll_interval }}#{{ end }}manila_share_delete_poll_interval = {{ .benchmark.rally.manila_share_delete_poll_interval | default "2.0" }} - -# mistral execution timeout (integer value) -# from .benchmark.rally.mistral_execution_timeout -{{ if not .benchmark.rally.mistral_execution_timeout }}#{{ end }}mistral_execution_timeout = {{ .benchmark.rally.mistral_execution_timeout | default "200" }} - -# Delay between creating Monasca metrics and polling for its elements. -# (floating point value) -# from .benchmark.rally.monasca_metric_create_prepoll_delay -{{ if not .benchmark.rally.monasca_metric_create_prepoll_delay }}#{{ end }}monasca_metric_create_prepoll_delay = {{ .benchmark.rally.monasca_metric_create_prepoll_delay | default "15.0" }} - -# A timeout in seconds for an environment deploy (integer value) -# Deprecated group/name - [benchmark]/deploy_environment_timeout -# from .benchmark.rally.murano_deploy_environment_timeout -{{ if not .benchmark.rally.murano_deploy_environment_timeout }}#{{ end }}murano_deploy_environment_timeout = {{ .benchmark.rally.murano_deploy_environment_timeout | default "1200" }} - -# Deploy environment check interval in seconds (integer value) -# Deprecated group/name - [benchmark]/deploy_environment_check_interval -# from .benchmark.rally.murano_deploy_environment_check_interval -{{ if not .benchmark.rally.murano_deploy_environment_check_interval }}#{{ end }}murano_deploy_environment_check_interval = {{ .benchmark.rally.murano_deploy_environment_check_interval | default "5" }} - -# Time to sleep after start before polling for status (floating point -# value) -# from .benchmark.rally.nova_server_start_prepoll_delay -{{ if not .benchmark.rally.nova_server_start_prepoll_delay }}#{{ end }}nova_server_start_prepoll_delay = {{ .benchmark.rally.nova_server_start_prepoll_delay | default "0.0" }} - -# Server start timeout (floating point value) -# from .benchmark.rally.nova_server_start_timeout -{{ if not .benchmark.rally.nova_server_start_timeout }}#{{ end }}nova_server_start_timeout = {{ .benchmark.rally.nova_server_start_timeout | default "300.0" }} - -# Server start poll interval (floating point value) -# from .benchmark.rally.nova_server_start_poll_interval -{{ if not .benchmark.rally.nova_server_start_poll_interval }}#{{ end }}nova_server_start_poll_interval = {{ .benchmark.rally.nova_server_start_poll_interval | default "1.0" }} - -# Time to sleep after stop before polling for status (floating point -# value) -# from .benchmark.rally.nova_server_stop_prepoll_delay -{{ if not .benchmark.rally.nova_server_stop_prepoll_delay }}#{{ end }}nova_server_stop_prepoll_delay = {{ .benchmark.rally.nova_server_stop_prepoll_delay | default "0.0" }} - -# Server stop timeout (floating point value) -# from .benchmark.rally.nova_server_stop_timeout -{{ if not .benchmark.rally.nova_server_stop_timeout }}#{{ end }}nova_server_stop_timeout = {{ .benchmark.rally.nova_server_stop_timeout | default "300.0" }} - -# Server stop poll interval (floating point value) -# from .benchmark.rally.nova_server_stop_poll_interval -{{ if not .benchmark.rally.nova_server_stop_poll_interval }}#{{ end }}nova_server_stop_poll_interval = {{ .benchmark.rally.nova_server_stop_poll_interval | default "2.0" }} - -# Time to sleep after boot before polling for status (floating point -# value) -# from .benchmark.rally.nova_server_boot_prepoll_delay -{{ if not 
.benchmark.rally.nova_server_boot_prepoll_delay }}#{{ end }}nova_server_boot_prepoll_delay = {{ .benchmark.rally.nova_server_boot_prepoll_delay | default "1.0" }} - -# Server boot timeout (floating point value) -# from .benchmark.rally.nova_server_boot_timeout -{{ if not .benchmark.rally.nova_server_boot_timeout }}#{{ end }}nova_server_boot_timeout = {{ .benchmark.rally.nova_server_boot_timeout | default "300.0" }} - -# Server boot poll interval (floating point value) -# from .benchmark.rally.nova_server_boot_poll_interval -{{ if not .benchmark.rally.nova_server_boot_poll_interval }}#{{ end }}nova_server_boot_poll_interval = {{ .benchmark.rally.nova_server_boot_poll_interval | default "1.0" }} - -# Time to sleep after delete before polling for status (floating point -# value) -# from .benchmark.rally.nova_server_delete_prepoll_delay -{{ if not .benchmark.rally.nova_server_delete_prepoll_delay }}#{{ end }}nova_server_delete_prepoll_delay = {{ .benchmark.rally.nova_server_delete_prepoll_delay | default "2.0" }} - -# Server delete timeout (floating point value) -# from .benchmark.rally.nova_server_delete_timeout -{{ if not .benchmark.rally.nova_server_delete_timeout }}#{{ end }}nova_server_delete_timeout = {{ .benchmark.rally.nova_server_delete_timeout | default "300.0" }} - -# Server delete poll interval (floating point value) -# from .benchmark.rally.nova_server_delete_poll_interval -{{ if not .benchmark.rally.nova_server_delete_poll_interval }}#{{ end }}nova_server_delete_poll_interval = {{ .benchmark.rally.nova_server_delete_poll_interval | default "2.0" }} - -# Time to sleep after reboot before polling for status (floating point -# value) -# from .benchmark.rally.nova_server_reboot_prepoll_delay -{{ if not .benchmark.rally.nova_server_reboot_prepoll_delay }}#{{ end }}nova_server_reboot_prepoll_delay = {{ .benchmark.rally.nova_server_reboot_prepoll_delay | default "2.0" }} - -# Server reboot timeout (floating point value) -# from .benchmark.rally.nova_server_reboot_timeout -{{ if not .benchmark.rally.nova_server_reboot_timeout }}#{{ end }}nova_server_reboot_timeout = {{ .benchmark.rally.nova_server_reboot_timeout | default "300.0" }} - -# Server reboot poll interval (floating point value) -# from .benchmark.rally.nova_server_reboot_poll_interval -{{ if not .benchmark.rally.nova_server_reboot_poll_interval }}#{{ end }}nova_server_reboot_poll_interval = {{ .benchmark.rally.nova_server_reboot_poll_interval | default "2.0" }} - -# Time to sleep after rebuild before polling for status (floating -# point value) -# from .benchmark.rally.nova_server_rebuild_prepoll_delay -{{ if not .benchmark.rally.nova_server_rebuild_prepoll_delay }}#{{ end }}nova_server_rebuild_prepoll_delay = {{ .benchmark.rally.nova_server_rebuild_prepoll_delay | default "1.0" }} - -# Server rebuild timeout (floating point value) -# from .benchmark.rally.nova_server_rebuild_timeout -{{ if not .benchmark.rally.nova_server_rebuild_timeout }}#{{ end }}nova_server_rebuild_timeout = {{ .benchmark.rally.nova_server_rebuild_timeout | default "300.0" }} - -# Server rebuild poll interval (floating point value) -# from .benchmark.rally.nova_server_rebuild_poll_interval -{{ if not .benchmark.rally.nova_server_rebuild_poll_interval }}#{{ end }}nova_server_rebuild_poll_interval = {{ .benchmark.rally.nova_server_rebuild_poll_interval | default "1.0" }} - -# Time to sleep after rescue before polling for status (floating point -# value) -# from .benchmark.rally.nova_server_rescue_prepoll_delay -{{ if not 
.benchmark.rally.nova_server_rescue_prepoll_delay }}#{{ end }}nova_server_rescue_prepoll_delay = {{ .benchmark.rally.nova_server_rescue_prepoll_delay | default "2.0" }} - -# Server rescue timeout (floating point value) -# from .benchmark.rally.nova_server_rescue_timeout -{{ if not .benchmark.rally.nova_server_rescue_timeout }}#{{ end }}nova_server_rescue_timeout = {{ .benchmark.rally.nova_server_rescue_timeout | default "300.0" }} - -# Server rescue poll interval (floating point value) -# from .benchmark.rally.nova_server_rescue_poll_interval -{{ if not .benchmark.rally.nova_server_rescue_poll_interval }}#{{ end }}nova_server_rescue_poll_interval = {{ .benchmark.rally.nova_server_rescue_poll_interval | default "2.0" }} - -# Time to sleep after unrescue before polling for status (floating -# point value) -# from .benchmark.rally.nova_server_unrescue_prepoll_delay -{{ if not .benchmark.rally.nova_server_unrescue_prepoll_delay }}#{{ end }}nova_server_unrescue_prepoll_delay = {{ .benchmark.rally.nova_server_unrescue_prepoll_delay | default "2.0" }} - -# Server unrescue timeout (floating point value) -# from .benchmark.rally.nova_server_unrescue_timeout -{{ if not .benchmark.rally.nova_server_unrescue_timeout }}#{{ end }}nova_server_unrescue_timeout = {{ .benchmark.rally.nova_server_unrescue_timeout | default "300.0" }} - -# Server unrescue poll interval (floating point value) -# from .benchmark.rally.nova_server_unrescue_poll_interval -{{ if not .benchmark.rally.nova_server_unrescue_poll_interval }}#{{ end }}nova_server_unrescue_poll_interval = {{ .benchmark.rally.nova_server_unrescue_poll_interval | default "2.0" }} - -# Time to sleep after suspend before polling for status (floating -# point value) -# from .benchmark.rally.nova_server_suspend_prepoll_delay -{{ if not .benchmark.rally.nova_server_suspend_prepoll_delay }}#{{ end }}nova_server_suspend_prepoll_delay = {{ .benchmark.rally.nova_server_suspend_prepoll_delay | default "2.0" }} - -# Server suspend timeout (floating point value) -# from .benchmark.rally.nova_server_suspend_timeout -{{ if not .benchmark.rally.nova_server_suspend_timeout }}#{{ end }}nova_server_suspend_timeout = {{ .benchmark.rally.nova_server_suspend_timeout | default "300.0" }} - -# Server suspend poll interval (floating point value) -# from .benchmark.rally.nova_server_suspend_poll_interval -{{ if not .benchmark.rally.nova_server_suspend_poll_interval }}#{{ end }}nova_server_suspend_poll_interval = {{ .benchmark.rally.nova_server_suspend_poll_interval | default "2.0" }} - -# Time to sleep after resume before polling for status (floating point -# value) -# from .benchmark.rally.nova_server_resume_prepoll_delay -{{ if not .benchmark.rally.nova_server_resume_prepoll_delay }}#{{ end }}nova_server_resume_prepoll_delay = {{ .benchmark.rally.nova_server_resume_prepoll_delay | default "2.0" }} - -# Server resume timeout (floating point value) -# from .benchmark.rally.nova_server_resume_timeout -{{ if not .benchmark.rally.nova_server_resume_timeout }}#{{ end }}nova_server_resume_timeout = {{ .benchmark.rally.nova_server_resume_timeout | default "300.0" }} - -# Server resume poll interval (floating point value) -# from .benchmark.rally.nova_server_resume_poll_interval -{{ if not .benchmark.rally.nova_server_resume_poll_interval }}#{{ end }}nova_server_resume_poll_interval = {{ .benchmark.rally.nova_server_resume_poll_interval | default "2.0" }} - -# Time to sleep after pause before polling for status (floating point -# value) -# from 
.benchmark.rally.nova_server_pause_prepoll_delay -{{ if not .benchmark.rally.nova_server_pause_prepoll_delay }}#{{ end }}nova_server_pause_prepoll_delay = {{ .benchmark.rally.nova_server_pause_prepoll_delay | default "2.0" }} - -# Server pause timeout (floating point value) -# from .benchmark.rally.nova_server_pause_timeout -{{ if not .benchmark.rally.nova_server_pause_timeout }}#{{ end }}nova_server_pause_timeout = {{ .benchmark.rally.nova_server_pause_timeout | default "300.0" }} - -# Server pause poll interval (floating point value) -# from .benchmark.rally.nova_server_pause_poll_interval -{{ if not .benchmark.rally.nova_server_pause_poll_interval }}#{{ end }}nova_server_pause_poll_interval = {{ .benchmark.rally.nova_server_pause_poll_interval | default "2.0" }} - -# Time to sleep after unpause before polling for status (floating -# point value) -# from .benchmark.rally.nova_server_unpause_prepoll_delay -{{ if not .benchmark.rally.nova_server_unpause_prepoll_delay }}#{{ end }}nova_server_unpause_prepoll_delay = {{ .benchmark.rally.nova_server_unpause_prepoll_delay | default "2.0" }} - -# Server unpause timeout (floating point value) -# from .benchmark.rally.nova_server_unpause_timeout -{{ if not .benchmark.rally.nova_server_unpause_timeout }}#{{ end }}nova_server_unpause_timeout = {{ .benchmark.rally.nova_server_unpause_timeout | default "300.0" }} - -# Server unpause poll interval (floating point value) -# from .benchmark.rally.nova_server_unpause_poll_interval -{{ if not .benchmark.rally.nova_server_unpause_poll_interval }}#{{ end }}nova_server_unpause_poll_interval = {{ .benchmark.rally.nova_server_unpause_poll_interval | default "2.0" }} - -# Time to sleep after shelve before polling for status (floating point -# value) -# from .benchmark.rally.nova_server_shelve_prepoll_delay -{{ if not .benchmark.rally.nova_server_shelve_prepoll_delay }}#{{ end }}nova_server_shelve_prepoll_delay = {{ .benchmark.rally.nova_server_shelve_prepoll_delay | default "2.0" }} - -# Server shelve timeout (floating point value) -# from .benchmark.rally.nova_server_shelve_timeout -{{ if not .benchmark.rally.nova_server_shelve_timeout }}#{{ end }}nova_server_shelve_timeout = {{ .benchmark.rally.nova_server_shelve_timeout | default "300.0" }} - -# Server shelve poll interval (floating point value) -# from .benchmark.rally.nova_server_shelve_poll_interval -{{ if not .benchmark.rally.nova_server_shelve_poll_interval }}#{{ end }}nova_server_shelve_poll_interval = {{ .benchmark.rally.nova_server_shelve_poll_interval | default "2.0" }} - -# Time to sleep after unshelve before polling for status (floating -# point value) -# from .benchmark.rally.nova_server_unshelve_prepoll_delay -{{ if not .benchmark.rally.nova_server_unshelve_prepoll_delay }}#{{ end }}nova_server_unshelve_prepoll_delay = {{ .benchmark.rally.nova_server_unshelve_prepoll_delay | default "2.0" }} - -# Server unshelve timeout (floating point value) -# from .benchmark.rally.nova_server_unshelve_timeout -{{ if not .benchmark.rally.nova_server_unshelve_timeout }}#{{ end }}nova_server_unshelve_timeout = {{ .benchmark.rally.nova_server_unshelve_timeout | default "300.0" }} - -# Server unshelve poll interval (floating point value) -# from .benchmark.rally.nova_server_unshelve_poll_interval -{{ if not .benchmark.rally.nova_server_unshelve_poll_interval }}#{{ end }}nova_server_unshelve_poll_interval = {{ .benchmark.rally.nova_server_unshelve_poll_interval | default "2.0" }} - -# Time to sleep after image_create before polling for status (floating -# point 
value) -# from .benchmark.rally.nova_server_image_create_prepoll_delay -{{ if not .benchmark.rally.nova_server_image_create_prepoll_delay }}#{{ end }}nova_server_image_create_prepoll_delay = {{ .benchmark.rally.nova_server_image_create_prepoll_delay | default "0.0" }} - -# Server image_create timeout (floating point value) -# from .benchmark.rally.nova_server_image_create_timeout -{{ if not .benchmark.rally.nova_server_image_create_timeout }}#{{ end }}nova_server_image_create_timeout = {{ .benchmark.rally.nova_server_image_create_timeout | default "300.0" }} - -# Server image_create poll interval (floating point value) -# from .benchmark.rally.nova_server_image_create_poll_interval -{{ if not .benchmark.rally.nova_server_image_create_poll_interval }}#{{ end }}nova_server_image_create_poll_interval = {{ .benchmark.rally.nova_server_image_create_poll_interval | default "2.0" }} - -# Time to sleep after image_delete before polling for status (floating -# point value) -# from .benchmark.rally.nova_server_image_delete_prepoll_delay -{{ if not .benchmark.rally.nova_server_image_delete_prepoll_delay }}#{{ end }}nova_server_image_delete_prepoll_delay = {{ .benchmark.rally.nova_server_image_delete_prepoll_delay | default "0.0" }} - -# Server image_delete timeout (floating point value) -# from .benchmark.rally.nova_server_image_delete_timeout -{{ if not .benchmark.rally.nova_server_image_delete_timeout }}#{{ end }}nova_server_image_delete_timeout = {{ .benchmark.rally.nova_server_image_delete_timeout | default "300.0" }} - -# Server image_delete poll interval (floating point value) -# from .benchmark.rally.nova_server_image_delete_poll_interval -{{ if not .benchmark.rally.nova_server_image_delete_poll_interval }}#{{ end }}nova_server_image_delete_poll_interval = {{ .benchmark.rally.nova_server_image_delete_poll_interval | default "2.0" }} - -# Time to sleep after resize before polling for status (floating point -# value) -# from .benchmark.rally.nova_server_resize_prepoll_delay -{{ if not .benchmark.rally.nova_server_resize_prepoll_delay }}#{{ end }}nova_server_resize_prepoll_delay = {{ .benchmark.rally.nova_server_resize_prepoll_delay | default "2.0" }} - -# Server resize timeout (floating point value) -# from .benchmark.rally.nova_server_resize_timeout -{{ if not .benchmark.rally.nova_server_resize_timeout }}#{{ end }}nova_server_resize_timeout = {{ .benchmark.rally.nova_server_resize_timeout | default "400.0" }} - -# Server resize poll interval (floating point value) -# from .benchmark.rally.nova_server_resize_poll_interval -{{ if not .benchmark.rally.nova_server_resize_poll_interval }}#{{ end }}nova_server_resize_poll_interval = {{ .benchmark.rally.nova_server_resize_poll_interval | default "5.0" }} - -# Time to sleep after resize_confirm before polling for status -# (floating point value) -# from .benchmark.rally.nova_server_resize_confirm_prepoll_delay -{{ if not .benchmark.rally.nova_server_resize_confirm_prepoll_delay }}#{{ end }}nova_server_resize_confirm_prepoll_delay = {{ .benchmark.rally.nova_server_resize_confirm_prepoll_delay | default "0.0" }} - -# Server resize_confirm timeout (floating point value) -# from .benchmark.rally.nova_server_resize_confirm_timeout -{{ if not .benchmark.rally.nova_server_resize_confirm_timeout }}#{{ end }}nova_server_resize_confirm_timeout = {{ .benchmark.rally.nova_server_resize_confirm_timeout | default "200.0" }} - -# Server resize_confirm poll interval (floating point value) -# from .benchmark.rally.nova_server_resize_confirm_poll_interval -{{ if not 
.benchmark.rally.nova_server_resize_confirm_poll_interval }}#{{ end }}nova_server_resize_confirm_poll_interval = {{ .benchmark.rally.nova_server_resize_confirm_poll_interval | default "2.0" }} - -# Time to sleep after resize_revert before polling for status -# (floating point value) -# from .benchmark.rally.nova_server_resize_revert_prepoll_delay -{{ if not .benchmark.rally.nova_server_resize_revert_prepoll_delay }}#{{ end }}nova_server_resize_revert_prepoll_delay = {{ .benchmark.rally.nova_server_resize_revert_prepoll_delay | default "0.0" }} - -# Server resize_revert timeout (floating point value) -# from .benchmark.rally.nova_server_resize_revert_timeout -{{ if not .benchmark.rally.nova_server_resize_revert_timeout }}#{{ end }}nova_server_resize_revert_timeout = {{ .benchmark.rally.nova_server_resize_revert_timeout | default "200.0" }} - -# Server resize_revert poll interval (floating point value) -# from .benchmark.rally.nova_server_resize_revert_poll_interval -{{ if not .benchmark.rally.nova_server_resize_revert_poll_interval }}#{{ end }}nova_server_resize_revert_poll_interval = {{ .benchmark.rally.nova_server_resize_revert_poll_interval | default "2.0" }} - -# Time to sleep after live_migrate before polling for status (floating -# point value) -# from .benchmark.rally.nova_server_live_migrate_prepoll_delay -{{ if not .benchmark.rally.nova_server_live_migrate_prepoll_delay }}#{{ end }}nova_server_live_migrate_prepoll_delay = {{ .benchmark.rally.nova_server_live_migrate_prepoll_delay | default "1.0" }} - -# Server live_migrate timeout (floating point value) -# from .benchmark.rally.nova_server_live_migrate_timeout -{{ if not .benchmark.rally.nova_server_live_migrate_timeout }}#{{ end }}nova_server_live_migrate_timeout = {{ .benchmark.rally.nova_server_live_migrate_timeout | default "400.0" }} - -# Server live_migrate poll interval (floating point value) -# from .benchmark.rally.nova_server_live_migrate_poll_interval -{{ if not .benchmark.rally.nova_server_live_migrate_poll_interval }}#{{ end }}nova_server_live_migrate_poll_interval = {{ .benchmark.rally.nova_server_live_migrate_poll_interval | default "2.0" }} - -# Time to sleep after migrate before polling for status (floating -# point value) -# from .benchmark.rally.nova_server_migrate_prepoll_delay -{{ if not .benchmark.rally.nova_server_migrate_prepoll_delay }}#{{ end }}nova_server_migrate_prepoll_delay = {{ .benchmark.rally.nova_server_migrate_prepoll_delay | default "1.0" }} - -# Server migrate timeout (floating point value) -# from .benchmark.rally.nova_server_migrate_timeout -{{ if not .benchmark.rally.nova_server_migrate_timeout }}#{{ end }}nova_server_migrate_timeout = {{ .benchmark.rally.nova_server_migrate_timeout | default "400.0" }} - -# Server migrate poll interval (floating point value) -# from .benchmark.rally.nova_server_migrate_poll_interval -{{ if not .benchmark.rally.nova_server_migrate_poll_interval }}#{{ end }}nova_server_migrate_poll_interval = {{ .benchmark.rally.nova_server_migrate_poll_interval | default "2.0" }} - -# Nova volume detach timeout (floating point value) -# from .benchmark.rally.nova_detach_volume_timeout -{{ if not .benchmark.rally.nova_detach_volume_timeout }}#{{ end }}nova_detach_volume_timeout = {{ .benchmark.rally.nova_detach_volume_timeout | default "200.0" }} - -# Nova volume detach poll interval (floating point value) -# from .benchmark.rally.nova_detach_volume_poll_interval -{{ if not .benchmark.rally.nova_detach_volume_poll_interval }}#{{ end }}nova_detach_volume_poll_interval = {{ 
.benchmark.rally.nova_detach_volume_poll_interval | default "2.0" }} - -# A timeout in seconds for a cluster create operation (integer value) -# Deprecated group/name - [benchmark]/cluster_create_timeout -# from .benchmark.rally.sahara_cluster_create_timeout -{{ if not .benchmark.rally.sahara_cluster_create_timeout }}#{{ end }}sahara_cluster_create_timeout = {{ .benchmark.rally.sahara_cluster_create_timeout | default "1800" }} - -# A timeout in seconds for a cluster delete operation (integer value) -# Deprecated group/name - [benchmark]/cluster_delete_timeout -# from .benchmark.rally.sahara_cluster_delete_timeout -{{ if not .benchmark.rally.sahara_cluster_delete_timeout }}#{{ end }}sahara_cluster_delete_timeout = {{ .benchmark.rally.sahara_cluster_delete_timeout | default "900" }} - -# Cluster status polling interval in seconds (integer value) -# Deprecated group/name - [benchmark]/cluster_check_interval -# from .benchmark.rally.sahara_cluster_check_interval -{{ if not .benchmark.rally.sahara_cluster_check_interval }}#{{ end }}sahara_cluster_check_interval = {{ .benchmark.rally.sahara_cluster_check_interval | default "5" }} - -# A timeout in seconds for a Job Execution to complete (integer value) -# Deprecated group/name - [benchmark]/job_execution_timeout -# from .benchmark.rally.sahara_job_execution_timeout -{{ if not .benchmark.rally.sahara_job_execution_timeout }}#{{ end }}sahara_job_execution_timeout = {{ .benchmark.rally.sahara_job_execution_timeout | default "600" }} - -# Job Execution status polling interval in seconds (integer value) -# Deprecated group/name - [benchmark]/job_check_interval -# from .benchmark.rally.sahara_job_check_interval -{{ if not .benchmark.rally.sahara_job_check_interval }}#{{ end }}sahara_job_check_interval = {{ .benchmark.rally.sahara_job_check_interval | default "5" }} - -# Amount of workers one proxy should serve to. 
(integer value) -# from .benchmark.rally.sahara_workers_per_proxy -{{ if not .benchmark.rally.sahara_workers_per_proxy }}#{{ end }}sahara_workers_per_proxy = {{ .benchmark.rally.sahara_workers_per_proxy | default "20" }} - -# Interval between checks when waiting for a VM to become pingable -# (floating point value) -# from .benchmark.rally.vm_ping_poll_interval -{{ if not .benchmark.rally.vm_ping_poll_interval }}#{{ end }}vm_ping_poll_interval = {{ .benchmark.rally.vm_ping_poll_interval | default "1.0" }} - -# Time to wait for a VM to become pingable (floating point value) -# from .benchmark.rally.vm_ping_timeout -{{ if not .benchmark.rally.vm_ping_timeout }}#{{ end }}vm_ping_timeout = {{ .benchmark.rally.vm_ping_timeout | default "120.0" }} - -# Watcher audit launch interval (floating point value) -# from .benchmark.rally.watcher_audit_launch_poll_interval -{{ if not .benchmark.rally.watcher_audit_launch_poll_interval }}#{{ end }}watcher_audit_launch_poll_interval = {{ .benchmark.rally.watcher_audit_launch_poll_interval | default "2.0" }} - -# Watcher audit launch timeout (integer value) -# from .benchmark.rally.watcher_audit_launch_timeout -{{ if not .benchmark.rally.watcher_audit_launch_timeout }}#{{ end }}watcher_audit_launch_timeout = {{ .benchmark.rally.watcher_audit_launch_timeout | default "300" }} - - -[cleanup] - -# -# From rally -# - -# A timeout in seconds for deleting resources (integer value) -# from .cleanup.rally.resource_deletion_timeout -{{ if not .cleanup.rally.resource_deletion_timeout }}#{{ end }}resource_deletion_timeout = {{ .cleanup.rally.resource_deletion_timeout | default "600" }} - -# Number of cleanup threads to run (integer value) -# from .cleanup.rally.cleanup_threads -{{ if not .cleanup.rally.cleanup_threads }}#{{ end }}cleanup_threads = {{ .cleanup.rally.cleanup_threads | default "20" }} - - -[database] - -# -# From oslo.db -# - -# If True, SQLite uses synchronous mode. (boolean value) -# Deprecated group/name - [DEFAULT]/sqlite_synchronous -# from .database.oslo.db.sqlite_synchronous -{{ if not .database.oslo.db.sqlite_synchronous }}#{{ end }}sqlite_synchronous = {{ .database.oslo.db.sqlite_synchronous | default "true" }} - -# The back end to use for the database. (string value) -# Deprecated group/name - [DEFAULT]/db_backend -# from .database.oslo.db.backend -{{ if not .database.oslo.db.backend }}#{{ end }}backend = {{ .database.oslo.db.backend | default "sqlalchemy" }} - -# The SQLAlchemy connection string to use to connect to the database. -# (string value) -# Deprecated group/name - [DEFAULT]/sql_connection -# Deprecated group/name - [DATABASE]/sql_connection -# Deprecated group/name - [sql]/connection -# from .database.oslo.db.connection -{{ if not .database.oslo.db.connection }}#{{ end }}connection = {{ .database.oslo.db.connection | default "" }} - -# The SQLAlchemy connection string to use to connect to the slave -# database. (string value) -# from .database.oslo.db.slave_connection -{{ if not .database.oslo.db.slave_connection }}#{{ end }}slave_connection = {{ .database.oslo.db.slave_connection | default "" }} - -# The SQL mode to be used for MySQL sessions. This option, including -# the default, overrides any server-set SQL mode. To use whatever SQL -# mode is set by the server configuration, set this to no value. 
-# Example: mysql_sql_mode= (string value) -# from .database.oslo.db.mysql_sql_mode -{{ if not .database.oslo.db.mysql_sql_mode }}#{{ end }}mysql_sql_mode = {{ .database.oslo.db.mysql_sql_mode | default "TRADITIONAL" }} - -# If True, transparently enables support for handling MySQL Cluster -# (NDB). (boolean value) -# from .database.oslo.db.mysql_enable_ndb -{{ if not .database.oslo.db.mysql_enable_ndb }}#{{ end }}mysql_enable_ndb = {{ .database.oslo.db.mysql_enable_ndb | default "false" }} - -# Timeout before idle SQL connections are reaped. (integer value) -# Deprecated group/name - [DEFAULT]/sql_idle_timeout -# Deprecated group/name - [DATABASE]/sql_idle_timeout -# Deprecated group/name - [sql]/idle_timeout -# from .database.oslo.db.idle_timeout -{{ if not .database.oslo.db.idle_timeout }}#{{ end }}idle_timeout = {{ .database.oslo.db.idle_timeout | default "3600" }} - -# Minimum number of SQL connections to keep open in a pool. (integer -# value) -# Deprecated group/name - [DEFAULT]/sql_min_pool_size -# Deprecated group/name - [DATABASE]/sql_min_pool_size -# from .database.oslo.db.min_pool_size -{{ if not .database.oslo.db.min_pool_size }}#{{ end }}min_pool_size = {{ .database.oslo.db.min_pool_size | default "1" }} - -# Maximum number of SQL connections to keep open in a pool. Setting a -# value of 0 indicates no limit. (integer value) -# Deprecated group/name - [DEFAULT]/sql_max_pool_size -# Deprecated group/name - [DATABASE]/sql_max_pool_size -# from .database.oslo.db.max_pool_size -{{ if not .database.oslo.db.max_pool_size }}#{{ end }}max_pool_size = {{ .database.oslo.db.max_pool_size | default "5" }} - -# Maximum number of database connection retries during startup. Set to -# -1 to specify an infinite retry count. (integer value) -# Deprecated group/name - [DEFAULT]/sql_max_retries -# Deprecated group/name - [DATABASE]/sql_max_retries -# from .database.oslo.db.max_retries -{{ if not .database.oslo.db.max_retries }}#{{ end }}max_retries = {{ .database.oslo.db.max_retries | default "10" }} - -# Interval between retries of opening a SQL connection. (integer -# value) -# Deprecated group/name - [DEFAULT]/sql_retry_interval -# Deprecated group/name - [DATABASE]/reconnect_interval -# from .database.oslo.db.retry_interval -{{ if not .database.oslo.db.retry_interval }}#{{ end }}retry_interval = {{ .database.oslo.db.retry_interval | default "10" }} - -# If set, use this value for max_overflow with SQLAlchemy. (integer -# value) -# Deprecated group/name - [DEFAULT]/sql_max_overflow -# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow -# from .database.oslo.db.max_overflow -{{ if not .database.oslo.db.max_overflow }}#{{ end }}max_overflow = {{ .database.oslo.db.max_overflow | default "50" }} - -# Verbosity of SQL debugging information: 0=None, 100=Everything. -# (integer value) -# Minimum value: 0 -# Maximum value: 100 -# Deprecated group/name - [DEFAULT]/sql_connection_debug -# from .database.oslo.db.connection_debug -{{ if not .database.oslo.db.connection_debug }}#{{ end }}connection_debug = {{ .database.oslo.db.connection_debug | default "0" }} - -# Add Python stack traces to SQL as comment strings. (boolean value) -# Deprecated group/name - [DEFAULT]/sql_connection_trace -# from .database.oslo.db.connection_trace -{{ if not .database.oslo.db.connection_trace }}#{{ end }}connection_trace = {{ .database.oslo.db.connection_trace | default "false" }} - -# If set, use this value for pool_timeout with SQLAlchemy. 
(integer -# value) -# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout -# from .database.oslo.db.pool_timeout -{{ if not .database.oslo.db.pool_timeout }}#{{ end }}pool_timeout = {{ .database.oslo.db.pool_timeout | default "" }} - -# Enable the experimental use of database reconnect on connection -# lost. (boolean value) -# from .database.oslo.db.use_db_reconnect -{{ if not .database.oslo.db.use_db_reconnect }}#{{ end }}use_db_reconnect = {{ .database.oslo.db.use_db_reconnect | default "false" }} - -# Seconds between retries of a database transaction. (integer value) -# from .database.oslo.db.db_retry_interval -{{ if not .database.oslo.db.db_retry_interval }}#{{ end }}db_retry_interval = {{ .database.oslo.db.db_retry_interval | default "1" }} - -# If True, increases the interval between retries of a database -# operation up to db_max_retry_interval. (boolean value) -# from .database.oslo.db.db_inc_retry_interval -{{ if not .database.oslo.db.db_inc_retry_interval }}#{{ end }}db_inc_retry_interval = {{ .database.oslo.db.db_inc_retry_interval | default "true" }} - -# If db_inc_retry_interval is set, the maximum seconds between retries -# of a database operation. (integer value) -# from .database.oslo.db.db_max_retry_interval -{{ if not .database.oslo.db.db_max_retry_interval }}#{{ end }}db_max_retry_interval = {{ .database.oslo.db.db_max_retry_interval | default "10" }} - -# Maximum retries in case of connection error or deadlock error before -# error is raised. Set to -1 to specify an infinite retry count. -# (integer value) -# from .database.oslo.db.db_max_retries -{{ if not .database.oslo.db.db_max_retries }}#{{ end }}db_max_retries = {{ .database.oslo.db.db_max_retries | default "20" }} - - -[roles_context] - -# -# From rally -# - -# How many concurrent threads to use for serving roles context -# (integer value) -# from .roles_context.rally.resource_management_workers -{{ if not .roles_context.rally.resource_management_workers }}#{{ end }}resource_management_workers = {{ .roles_context.rally.resource_management_workers | default "30" }} - - -[tempest] - -# -# From rally -# - -# image URL (string value) -# from .tempest.rally.img_url -{{ if not .tempest.rally.img_url }}#{{ end }}img_url = {{ .tempest.rally.img_url | default "http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img" }} - -# Image disk format to use when creating the image (string value) -# from .tempest.rally.img_disk_format -{{ if not .tempest.rally.img_disk_format }}#{{ end }}img_disk_format = {{ .tempest.rally.img_disk_format | default "qcow2" }} - -# Image container format to use when creating the image (string value) -# from .tempest.rally.img_container_format -{{ if not .tempest.rally.img_container_format }}#{{ end }}img_container_format = {{ .tempest.rally.img_container_format | default "bare" }} - -# Regular expression for name of a public image to discover it in the -# cloud and use it for the tests. Note that when Rally is searching -# for the image, case insensitive matching is performed. Specify -# nothing ('img_name_regex =') if you want to disable discovering. 
In -# this case Rally will create needed resources by itself if the values -# for the corresponding config options are not specified in the -# Tempest config file (string value) -# from .tempest.rally.img_name_regex -{{ if not .tempest.rally.img_name_regex }}#{{ end }}img_name_regex = {{ .tempest.rally.img_name_regex | default "^.*(cirros|testvm).*$" }} - -# Role required for users to be able to create Swift containers -# (string value) -# from .tempest.rally.swift_operator_role -{{ if not .tempest.rally.swift_operator_role }}#{{ end }}swift_operator_role = {{ .tempest.rally.swift_operator_role | default "Member" }} - -# User role that has reseller admin (string value) -# from .tempest.rally.swift_reseller_admin_role -{{ if not .tempest.rally.swift_reseller_admin_role }}#{{ end }}swift_reseller_admin_role = {{ .tempest.rally.swift_reseller_admin_role | default "ResellerAdmin" }} - -# Role required for users to be able to manage Heat stacks (string -# value) -# from .tempest.rally.heat_stack_owner_role -{{ if not .tempest.rally.heat_stack_owner_role }}#{{ end }}heat_stack_owner_role = {{ .tempest.rally.heat_stack_owner_role | default "heat_stack_owner" }} - -# Role for Heat template-defined users (string value) -# from .tempest.rally.heat_stack_user_role -{{ if not .tempest.rally.heat_stack_user_role }}#{{ end }}heat_stack_user_role = {{ .tempest.rally.heat_stack_user_role | default "heat_stack_user" }} - -# Primary flavor RAM size used by most of the test cases (integer -# value) -# from .tempest.rally.flavor_ref_ram -{{ if not .tempest.rally.flavor_ref_ram }}#{{ end }}flavor_ref_ram = {{ .tempest.rally.flavor_ref_ram | default "64" }} - -# Alternate reference flavor RAM size used by tests that need two -# flavors, like those that resize an instance (integer value) -# from .tempest.rally.flavor_ref_alt_ram -{{ if not .tempest.rally.flavor_ref_alt_ram }}#{{ end }}flavor_ref_alt_ram = {{ .tempest.rally.flavor_ref_alt_ram | default "128" }} - -# RAM size flavor used for orchestration test cases (integer value) -# from .tempest.rally.heat_instance_type_ram -{{ if not .tempest.rally.heat_instance_type_ram }}#{{ end }}heat_instance_type_ram = {{ .tempest.rally.heat_instance_type_ram | default "64" }} - - -[users_context] - -# -# From rally -# - -# The number of concurrent threads to use for serving users context. -# (integer value) -# from .users_context.rally.resource_management_workers -{{ if not .users_context.rally.resource_management_workers }}#{{ end }}resource_management_workers = {{ .users_context.rally.resource_management_workers | default "20" }} - -# ID of domain in which projects will be created. (string value) -# from .users_context.rally.project_domain -{{ if not .users_context.rally.project_domain }}#{{ end }}project_domain = {{ .users_context.rally.project_domain | default "default" }} - -# ID of domain in which users will be created. (string value) -# from .users_context.rally.user_domain -{{ if not .users_context.rally.user_domain }}#{{ end }}user_domain = {{ .users_context.rally.user_domain | default "default" }} - -# The default role name of the keystone to assign to users.
(string -# value) -# from .users_context.rally.keystone_default_role -{{ if not .users_context.rally.keystone_default_role }}#{{ end }}keystone_default_role = {{ .users_context.rally.keystone_default_role | default "member" }} - -{{- end -}} diff --git a/rally/templates/tasks/test-templates/_autoscaling-group.yaml.template.tpl b/rally/templates/tasks/test-templates/_autoscaling-group.yaml.template.tpl deleted file mode 100644 index f6f9f1240d..0000000000 --- a/rally/templates/tasks/test-templates/_autoscaling-group.yaml.template.tpl +++ /dev/null @@ -1,46 +0,0 @@ -heat_template_version: 2013-05-23 - -parameters: - flavor: - type: string - default: m1.tiny - constraints: - - custom_constraint: nova.flavor - image: - type: string - default: cirros-0.3.4-x86_64-uec - constraints: - - custom_constraint: glance.image - scaling_adjustment: - type: number - default: 1 - max_size: - type: number - default: 5 - constraints: - - range: {min: 1} - - -resources: - asg: - type: OS::Heat::AutoScalingGroup - properties: - resource: - type: OS::Nova::Server - properties: - image: { get_param: image } - flavor: { get_param: flavor } - min_size: 1 - desired_capacity: 3 - max_size: { get_param: max_size } - - scaling_policy: - type: OS::Heat::ScalingPolicy - properties: - adjustment_type: change_in_capacity - auto_scaling_group_id: {get_resource: asg} - scaling_adjustment: { get_param: scaling_adjustment } - -outputs: - scaling_url: - value: {get_attr: [scaling_policy, alarm_url]} diff --git a/rally/templates/tasks/test-templates/_autoscaling-policy.yaml.template.tpl b/rally/templates/tasks/test-templates/_autoscaling-policy.yaml.template.tpl deleted file mode 100644 index a22487e339..0000000000 --- a/rally/templates/tasks/test-templates/_autoscaling-policy.yaml.template.tpl +++ /dev/null @@ -1,17 +0,0 @@ -heat_template_version: 2013-05-23 - -resources: - test_group: - type: OS::Heat::AutoScalingGroup - properties: - desired_capacity: 0 - max_size: 0 - min_size: 0 - resource: - type: OS::Heat::RandomString - test_policy: - type: OS::Heat::ScalingPolicy - properties: - adjustment_type: change_in_capacity - auto_scaling_group_id: { get_resource: test_group } - scaling_adjustment: 1 \ No newline at end of file diff --git a/rally/templates/tasks/test-templates/_default.yaml.template.tpl b/rally/templates/tasks/test-templates/_default.yaml.template.tpl deleted file mode 100644 index eb4f2f2dd8..0000000000 --- a/rally/templates/tasks/test-templates/_default.yaml.template.tpl +++ /dev/null @@ -1 +0,0 @@ -heat_template_version: 2014-10-16 \ No newline at end of file diff --git a/rally/templates/tasks/test-templates/_random-strings.yaml.template.tpl b/rally/templates/tasks/test-templates/_random-strings.yaml.template.tpl deleted file mode 100644 index 7486ddd950..0000000000 --- a/rally/templates/tasks/test-templates/_random-strings.yaml.template.tpl +++ /dev/null @@ -1,13 +0,0 @@ -heat_template_version: 2014-10-16 - -description: Test template for rally create-update-delete scenario - -resources: - test_string_one: - type: OS::Heat::RandomString - properties: - length: 20 - test_string_two: - type: OS::Heat::RandomString - properties: - length: 20 diff --git a/rally/templates/tasks/test-templates/_resource-group-server-with-volume.yaml.template.tpl b/rally/templates/tasks/test-templates/_resource-group-server-with-volume.yaml.template.tpl deleted file mode 100644 index 60905683a9..0000000000 --- a/rally/templates/tasks/test-templates/_resource-group-server-with-volume.yaml.template.tpl +++ /dev/null @@ -1,44 +0,0 @@ 
-heat_template_version: 2014-10-16 - -description: > - Test template that creates a resource group with servers and volumes. - The template allows to create a lot of nested stacks with standard - configuration: nova instance, cinder volume attached to that instance - -parameters: - - num_instances: - type: number - description: number of instances that should be created in resource group - constraints: - - range: {min: 1} - instance_image: - type: string - default: cirros-0.3.4-x86_64-uec - instance_volume_size: - type: number - description: Size of volume to attach to instance - default: 1 - constraints: - - range: {min: 1, max: 1024} - instance_flavor: - type: string - description: Type of the instance to be created. - default: m1.tiny - instance_availability_zone: - type: string - description: The Availability Zone to launch the instance. - default: nova - -resources: - group_of_volumes: - type: OS::Heat::ResourceGroup - properties: - count: {get_param: num_instances} - resource_def: - type: templates/server-with-volume.yaml.template - properties: - image: {get_param: instance_image} - volume_size: {get_param: instance_volume_size} - flavor: {get_param: instance_flavor} - availability_zone: {get_param: instance_availability_zone} diff --git a/rally/templates/tasks/test-templates/_resource-group-with-constraint.yaml.template.tpl b/rally/templates/tasks/test-templates/_resource-group-with-constraint.yaml.template.tpl deleted file mode 100644 index 234e4237ff..0000000000 --- a/rally/templates/tasks/test-templates/_resource-group-with-constraint.yaml.template.tpl +++ /dev/null @@ -1,21 +0,0 @@ -heat_template_version: 2013-05-23 - -description: Template for testing caching. - -parameters: - count: - type: number - default: 40 - delay: - type: number - default: 0.1 - -resources: - rg: - type: OS::Heat::ResourceGroup - properties: - count: {get_param: count} - resource_def: - type: OS::Heat::TestResource - properties: - constraint_prop_secs: {get_param: delay} diff --git a/rally/templates/tasks/test-templates/_resource-group-with-outputs.yaml.template.tpl b/rally/templates/tasks/test-templates/_resource-group-with-outputs.yaml.template.tpl deleted file mode 100644 index f47d03ccc1..0000000000 --- a/rally/templates/tasks/test-templates/_resource-group-with-outputs.yaml.template.tpl +++ /dev/null @@ -1,37 +0,0 @@ -heat_template_version: 2013-05-23 -parameters: - attr_wait_secs: - type: number - default: 0.5 - -resources: - rg: - type: OS::Heat::ResourceGroup - properties: - count: 10 - resource_def: - type: OS::Heat::TestResource - properties: - attr_wait_secs: {get_param: attr_wait_secs} - -outputs: - val1: - value: {get_attr: [rg, resource.0.output]} - val2: - value: {get_attr: [rg, resource.1.output]} - val3: - value: {get_attr: [rg, resource.2.output]} - val4: - value: {get_attr: [rg, resource.3.output]} - val5: - value: {get_attr: [rg, resource.4.output]} - val6: - value: {get_attr: [rg, resource.5.output]} - val7: - value: {get_attr: [rg, resource.6.output]} - val8: - value: {get_attr: [rg, resource.7.output]} - val9: - value: {get_attr: [rg, resource.8.output]} - val10: - value: {get_attr: [rg, resource.9.output]} \ No newline at end of file diff --git a/rally/templates/tasks/test-templates/_resource-group.yaml.template.tpl b/rally/templates/tasks/test-templates/_resource-group.yaml.template.tpl deleted file mode 100644 index b3f505fa67..0000000000 --- a/rally/templates/tasks/test-templates/_resource-group.yaml.template.tpl +++ /dev/null @@ -1,13 +0,0 @@ -heat_template_version: 2014-10-16 
- -description: Test template for rally create-update-delete scenario - -resources: - test_group: - type: OS::Heat::ResourceGroup - properties: - count: 2 - resource_def: - type: OS::Heat::RandomString - properties: - length: 20 \ No newline at end of file diff --git a/rally/templates/tasks/test-templates/_server-with-ports.yaml.template.tpl b/rally/templates/tasks/test-templates/_server-with-ports.yaml.template.tpl deleted file mode 100644 index 909f45d212..0000000000 --- a/rally/templates/tasks/test-templates/_server-with-ports.yaml.template.tpl +++ /dev/null @@ -1,64 +0,0 @@ -heat_template_version: 2013-05-23 - -parameters: - # set all correct defaults for parameters before launch test - public_net: - type: string - default: public - image: - type: string - default: cirros-0.3.4-x86_64-uec - flavor: - type: string - default: m1.tiny - cidr: - type: string - default: 11.11.11.0/24 - -resources: - server: - type: OS::Nova::Server - properties: - image: {get_param: image} - flavor: {get_param: flavor} - networks: - - port: { get_resource: server_port } - - router: - type: OS::Neutron::Router - properties: - external_gateway_info: - network: {get_param: public_net} - - router_interface: - type: OS::Neutron::RouterInterface - properties: - router_id: { get_resource: router } - subnet_id: { get_resource: private_subnet } - - private_net: - type: OS::Neutron::Net - - private_subnet: - type: OS::Neutron::Subnet - properties: - network: { get_resource: private_net } - cidr: {get_param: cidr} - - port_security_group: - type: OS::Neutron::SecurityGroup - properties: - name: default_port_security_group - description: > - Default security group assigned to port. The neutron default group is not - used because neutron creates several groups with the same name=default and - nova cannot chooses which one should it use. - - server_port: - type: OS::Neutron::Port - properties: - network: {get_resource: private_net} - fixed_ips: - - subnet: { get_resource: private_subnet } - security_groups: - - { get_resource: port_security_group } diff --git a/rally/templates/tasks/test-templates/_server-with-volume.yaml.template.tpl b/rally/templates/tasks/test-templates/_server-with-volume.yaml.template.tpl deleted file mode 100644 index 23c8827145..0000000000 --- a/rally/templates/tasks/test-templates/_server-with-volume.yaml.template.tpl +++ /dev/null @@ -1,39 +0,0 @@ -heat_template_version: 2013-05-23 - -parameters: - # set all correct defaults for parameters before launch test - image: - type: string - default: cirros-0.3.4-x86_64-uec - flavor: - type: string - default: m1.tiny - availability_zone: - type: string - description: The Availability Zone to launch the instance. - default: nova - volume_size: - type: number - description: Size of the volume to be created. - default: 1 - constraints: - - range: { min: 1, max: 1024 } - description: must be between 1 and 1024 Gb. 
- -resources: - server: - type: OS::Nova::Server - properties: - image: {get_param: image} - flavor: {get_param: flavor} - cinder_volume: - type: OS::Cinder::Volume - properties: - size: { get_param: volume_size } - availability_zone: { get_param: availability_zone } - volume_attachment: - type: OS::Cinder::VolumeAttachment - properties: - volume_id: { get_resource: cinder_volume } - instance_uuid: { get_resource: server} - mountpoint: /dev/vdc diff --git a/rally/templates/tasks/test-templates/_updated-autoscaling-policy-inplace.yaml.template.tpl b/rally/templates/tasks/test-templates/_updated-autoscaling-policy-inplace.yaml.template.tpl deleted file mode 100644 index cf34879ca7..0000000000 --- a/rally/templates/tasks/test-templates/_updated-autoscaling-policy-inplace.yaml.template.tpl +++ /dev/null @@ -1,23 +0,0 @@ -heat_template_version: 2013-05-23 - -description: > - Test template for create-update-delete-stack scenario in rally. - The template updates resource parameters without resource re-creation(replacement) - in the stack defined by autoscaling_policy.yaml.template. It allows to measure - performance of "pure" resource update operation only. - -resources: - test_group: - type: OS::Heat::AutoScalingGroup - properties: - desired_capacity: 0 - max_size: 0 - min_size: 0 - resource: - type: OS::Heat::RandomString - test_policy: - type: OS::Heat::ScalingPolicy - properties: - adjustment_type: change_in_capacity - auto_scaling_group_id: { get_resource: test_group } - scaling_adjustment: -1 \ No newline at end of file diff --git a/rally/templates/tasks/test-templates/_updated-random-strings-add.yaml.template.tpl b/rally/templates/tasks/test-templates/_updated-random-strings-add.yaml.template.tpl deleted file mode 100644 index 03f9a885d5..0000000000 --- a/rally/templates/tasks/test-templates/_updated-random-strings-add.yaml.template.tpl +++ /dev/null @@ -1,19 +0,0 @@ -heat_template_version: 2014-10-16 - -description: > - Test template for create-update-delete-stack scenario in rally. - The template updates the stack defined by random-strings.yaml.template with additional resource. - -resources: - test_string_one: - type: OS::Heat::RandomString - properties: - length: 20 - test_string_two: - type: OS::Heat::RandomString - properties: - length: 20 - test_string_three: - type: OS::Heat::RandomString - properties: - length: 20 diff --git a/rally/templates/tasks/test-templates/_updated-random-strings-delete.yaml.template.tpl b/rally/templates/tasks/test-templates/_updated-random-strings-delete.yaml.template.tpl deleted file mode 100644 index 414d90d583..0000000000 --- a/rally/templates/tasks/test-templates/_updated-random-strings-delete.yaml.template.tpl +++ /dev/null @@ -1,11 +0,0 @@ -heat_template_version: 2014-10-16 - -description: > - Test template for create-update-delete-stack scenario in rally. - The template deletes one resource from the stack defined by random-strings.yaml.template. - -resources: - test_string_one: - type: OS::Heat::RandomString - properties: - length: 20 diff --git a/rally/templates/tasks/test-templates/_updated-random-strings-replace.yaml.template.tpl b/rally/templates/tasks/test-templates/_updated-random-strings-replace.yaml.template.tpl deleted file mode 100644 index 780fcc168e..0000000000 --- a/rally/templates/tasks/test-templates/_updated-random-strings-replace.yaml.template.tpl +++ /dev/null @@ -1,19 +0,0 @@ -heat_template_version: 2014-10-16 - -description: > - Test template for create-update-delete-stack scenario in rally. 
- The template deletes one resource from the stack defined by - random-strings.yaml.template and re-creates it with the updated parameters - (so-called update-replace). That happens because some parameters cannot be - changed without resource re-creation. The template allows to measure performance - of update-replace operation. - -resources: - test_string_one: - type: OS::Heat::RandomString - properties: - length: 20 - test_string_two: - type: OS::Heat::RandomString - properties: - length: 40 diff --git a/rally/templates/tasks/test-templates/_updated-resource-group-increase.yaml.template.tpl b/rally/templates/tasks/test-templates/_updated-resource-group-increase.yaml.template.tpl deleted file mode 100644 index 94bc271f79..0000000000 --- a/rally/templates/tasks/test-templates/_updated-resource-group-increase.yaml.template.tpl +++ /dev/null @@ -1,16 +0,0 @@ -heat_template_version: 2014-10-16 - -description: > - Test template for create-update-delete-stack scenario in rally. - The template updates one resource from the stack defined by resource-group.yaml.template - and adds children resources to that resource. - -resources: - test_group: - type: OS::Heat::ResourceGroup - properties: - count: 3 - resource_def: - type: OS::Heat::RandomString - properties: - length: 20 diff --git a/rally/templates/tasks/test-templates/_updated-resource-group-reduce.yaml.template.tpl b/rally/templates/tasks/test-templates/_updated-resource-group-reduce.yaml.template.tpl deleted file mode 100644 index a076224a80..0000000000 --- a/rally/templates/tasks/test-templates/_updated-resource-group-reduce.yaml.template.tpl +++ /dev/null @@ -1,16 +0,0 @@ -heat_template_version: 2014-10-16 - -description: > - Test template for create-update-delete-stack scenario in rally. - The template updates one resource from the stack defined by resource-group.yaml.template - and deletes children resources from that resource. 
- -resources: - test_group: - type: OS::Heat::ResourceGroup - properties: - count: 1 - resource_def: - type: OS::Heat::RandomString - properties: - length: 20 diff --git a/rally/values.yaml b/rally/values.yaml index 6319d2a046..b54987fd34 100644 --- a/rally/values.yaml +++ b/rally/values.yaml @@ -20,7 +20,7 @@ labels: images: tags: bootstrap: docker.io/kolla/ubuntu-source-rally:3.0.3 - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 db_init: docker.io/kolla/ubuntu-source-rally:3.0.3 manage_db: docker.io/kolla/ubuntu-source-rally:3.0.3 run_task: docker.io/kolla/ubuntu-source-rally:3.0.3 @@ -269,15 +269,6 @@ pvc: storage_class: general conf: - paste: - override: - append: - policy: - override: - append: - audit_map: - override: - append: rally: keystone_authtoken: auth_type: password @@ -288,62 +279,425 @@ conf: connection: rally_tasks: heat_tests: - random_strings: - override: - prefix: - append: - updated_random_strings_replace: - override: - prefix: - append: - updated_random_strings_add: - override: - prefix: - append: - updated_random_strings_delete: - override: - prefix: - append: - resource_group_with_constraint: - override: - prefix: - append: - resource_group_with_outputs: - override: - prefix: - append: - resource_group_server_with_volume: - override: - prefix: - append: - resource_group: - override: - prefix: - append: - default: - override: - prefix: - append: - autoscaling_group: - override: - prefix: - append: - server_with_ports: - override: - prefix: - append: - server_with_volume: - override: - prefix: - append: - updated_resource_group_increase: - override: - prefix: - append: - updated_resource_group_reduce: - override: - prefix: - append: + autoscaling_group: + heat_template_version: '2013-05-23' + outputs: + scaling_url: + value: + get_attr: + - scaling_policy + - alarm_url + parameters: + flavor: + constraints: + - custom_constraint: nova.flavor + default: m1.tiny + type: string + image: + constraints: + - custom_constraint: glance.image + default: cirros-0.3.4-x86_64-uec + type: string + max_size: + constraints: + - range: + min: 1 + default: 5 + type: number + scaling_adjustment: + default: 1 + type: number + resources: + asg: + properties: + desired_capacity: 3 + max_size: + get_param: max_size + min_size: 1 + resource: + properties: + flavor: + get_param: flavor + image: + get_param: image + type: 'OS::Nova::Server' + type: 'OS::Heat::AutoScalingGroup' + scaling_policy: + properties: + adjustment_type: change_in_capacity + auto_scaling_group_id: + get_resource: asg + scaling_adjustment: + get_param: scaling_adjustment + type: 'OS::Heat::ScalingPolicy' + autoscaling_policy: + heat_template_version: '2013-05-23' + resources: + test_group: + properties: + desired_capacity: 0 + max_size: 0 + min_size: 0 + resource: + type: 'OS::Heat::RandomString' + type: 'OS::Heat::AutoScalingGroup' + test_policy: + properties: + adjustment_type: change_in_capacity + auto_scaling_group_id: + get_resource: test_group + scaling_adjustment: 1 + type: 'OS::Heat::ScalingPolicy' + default: + heat_template_version: '2014-10-16' + random_strings: + description: Test template for rally create-update-delete scenario + heat_template_version: '2014-10-16' + resources: + test_string_one: + properties: + length: 20 + type: 'OS::Heat::RandomString' + test_string_two: + properties: + length: 20 + type: 'OS::Heat::RandomString' + resource_group: + description: Test template for rally create-update-delete scenario + 
heat_template_version: '2014-10-16' + resources: + test_group: + properties: + count: 2 + resource_def: + properties: + length: 20 + type: 'OS::Heat::RandomString' + type: 'OS::Heat::ResourceGroup' + resource_group_server_with_volume: + description: | + Test template that creates a resource group with servers and volumes. + The template allows creating a lot of nested stacks with a standard configuration: + nova instance, cinder volume attached to that instance + heat_template_version: '2014-10-16' + parameters: + instance_availability_zone: + default: nova + description: The Availability Zone to launch the instance. + type: string + instance_flavor: + default: m1.tiny + description: Type of the instance to be created. + type: string + instance_image: + default: cirros-0.3.4-x86_64-uec + type: string + instance_volume_size: + constraints: + - range: + max: 1024 + min: 1 + default: 1 + description: Size of volume to attach to instance + type: number + num_instances: + constraints: + - range: + min: 1 + description: number of instances that should be created in resource group + type: number + resources: + group_of_volumes: + properties: + count: + get_param: num_instances + resource_def: + properties: + availability_zone: + get_param: instance_availability_zone + flavor: + get_param: instance_flavor + image: + get_param: instance_image + volume_size: + get_param: instance_volume_size + type: templates/server-with-volume.yaml.template + type: 'OS::Heat::ResourceGroup' + resource_group_with_constraint: + description: Template for testing caching. + heat_template_version: '2013-05-23' + parameters: + count: + default: 40 + type: number + delay: + default: 0.1 + type: number + resources: + rg: + properties: + count: + get_param: count + resource_def: + properties: + constraint_prop_secs: + get_param: delay + type: 'OS::Heat::TestResource' + type: 'OS::Heat::ResourceGroup' + resource_group_with_outputs: + heat_template_version: '2013-05-23' + outputs: + val1: + value: + get_attr: + - rg + - resource.0.output + val10: + value: + get_attr: + - rg + - resource.9.output + val2: + value: + get_attr: + - rg + - resource.1.output + val3: + value: + get_attr: + - rg + - resource.2.output + val4: + value: + get_attr: + - rg + - resource.3.output + val5: + value: + get_attr: + - rg + - resource.4.output + val6: + value: + get_attr: + - rg + - resource.5.output + val7: + value: + get_attr: + - rg + - resource.6.output + val8: + value: + get_attr: + - rg + - resource.7.output + val9: + value: + get_attr: + - rg + - resource.8.output + parameters: + attr_wait_secs: + default: 0.5 + type: number + resources: + rg: + properties: + count: 10 + resource_def: + properties: + attr_wait_secs: + get_param: attr_wait_secs + type: 'OS::Heat::TestResource' + type: 'OS::Heat::ResourceGroup' + server_with_ports: + heat_template_version: '2013-05-23' + parameters: + cidr: + default: 11.11.11.0/24 + type: string + flavor: + default: m1.tiny + type: string + image: + default: cirros-0.3.4-x86_64-uec + type: string + public_net: + default: public + type: string + resources: + port_security_group: + properties: + description: | + Default security group assigned to port. The neutron default group + is not used because neutron creates several groups with the same name=default + and nova cannot choose which one it should use.
+ name: default_port_security_group + type: 'OS::Neutron::SecurityGroup' + private_net: + type: 'OS::Neutron::Net' + private_subnet: + properties: + cidr: + get_param: cidr + network: + get_resource: private_net + type: 'OS::Neutron::Subnet' + router: + properties: + external_gateway_info: + network: + get_param: public_net + type: 'OS::Neutron::Router' + router_interface: + properties: + router_id: + get_resource: router + subnet_id: + get_resource: private_subnet + type: 'OS::Neutron::RouterInterface' + server: + properties: + flavor: + get_param: flavor + image: + get_param: image + networks: + - port: + get_resource: server_port + type: 'OS::Nova::Server' + server_port: + properties: + fixed_ips: + - subnet: + get_resource: private_subnet + network: + get_resource: private_net + security_groups: + - get_resource: port_security_group + type: 'OS::Neutron::Port' + server_with_volume: + heat_template_version: '2013-05-23' + parameters: + availability_zone: + default: nova + description: The Availability Zone to launch the instance. + type: string + flavor: + default: m1.tiny + type: string + image: + default: cirros-0.3.4-x86_64-uec + type: string + volume_size: + constraints: + - description: must be between 1 and 1024 Gb. + range: + max: 1024 + min: 1 + default: 1 + description: Size of the volume to be created. + type: number + resources: + cinder_volume: + properties: + availability_zone: + get_param: availability_zone + size: + get_param: volume_size + type: 'OS::Cinder::Volume' + server: + properties: + flavor: + get_param: flavor + image: + get_param: image + type: 'OS::Nova::Server' + volume_attachment: + properties: + instance_uuid: + get_resource: server + mountpoint: /dev/vdc + volume_id: + get_resource: cinder_volume + type: 'OS::Cinder::VolumeAttachment' + updated_random_strings_add: + description: | + Test template for create-update-delete-stack scenario in rally. The + template updates the stack defined by random-strings.yaml.template with an additional + resource. + heat_template_version: '2014-10-16' + resources: + test_string_one: + properties: + length: 20 + type: 'OS::Heat::RandomString' + test_string_three: + properties: + length: 20 + type: 'OS::Heat::RandomString' + test_string_two: + properties: + length: 20 + type: 'OS::Heat::RandomString' + updated_random_strings_delete: + description: | + Test template for create-update-delete-stack scenario in rally. The + template deletes one resource from the stack defined by random-strings.yaml.template. + heat_template_version: '2014-10-16' + resources: + test_string_one: + properties: + length: 20 + type: 'OS::Heat::RandomString' + updated_random_strings_replace: + description: | + Test template for create-update-delete-stack scenario in rally. The + template deletes one resource from the stack defined by random-strings.yaml.template + and re-creates it with the updated parameters (so-called update-replace). That happens + because some parameters cannot be changed without resource re-creation. The template + allows measuring the performance of the update-replace operation. + heat_template_version: '2014-10-16' + resources: + test_string_one: + properties: + length: 20 + type: 'OS::Heat::RandomString' + test_string_two: + properties: + length: 40 + type: 'OS::Heat::RandomString' + updated_resource_group_increase: + description: | + Test template for create-update-delete-stack scenario in rally. The + template updates one resource from the stack defined by resource-group.yaml.template + and adds children resources to that resource.
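+ # NOTE: resource_group above creates the group with count: 2; applying this
+ # template as a stack update raises count to 3, so Heat adds one more
+ # RandomString member in place rather than replacing the whole stack.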
+ heat_template_version: '2014-10-16' + resources: + test_group: + properties: + count: 3 + resource_def: + properties: + length: 20 + type: 'OS::Heat::RandomString' + type: 'OS::Heat::ResourceGroup' + updated_resource_group_reduce: + description: | + Test template for create-update-delete-stack scenario in rally. + The template updates one resource from the stack defined by resource-group.yaml.template + and deletes children resources from that resource. + heat_template_version: '2014-10-16' + resources: + test_group: + properties: + count: 1 + resource_def: + properties: + length: 20 + type: 'OS::Heat::RandomString' + type: 'OS::Heat::ResourceGroup' authenticate_task: Authenticate.keystone: - diff --git a/senlin/templates/job-db-drop.yaml b/senlin/templates/job-db-drop.yaml index fc5000e57b..1b20bf43be 100644 --- a/senlin/templates/job-db-drop.yaml +++ b/senlin/templates/job-db-drop.yaml @@ -15,72 +15,6 @@ limitations under the License. */}} {{- if .Values.manifests.job_db_drop }} -{{- $envAll := . }} -{{- $dependencies := .Values.dependencies.static.db_drop }} - -{{- $randStringSuffix := randAlphaNum 5 | lower }} - -{{- $serviceAccountName := print "senlin-db-drop-" $randStringSuffix }} -{{ tuple $envAll $dependencies $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }} ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: {{ print "senlin-db-drop-" $randStringSuffix }} - annotations: - "helm.sh/hook": pre-delete - "helm.sh/hook-delete-policy": hook-succeeded -spec: - template: - metadata: - labels: -{{ tuple $envAll "senlin" "db-drop" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }} - spec: - serviceAccountName: {{ $serviceAccountName }} - restartPolicy: OnFailure - nodeSelector: - {{ .Values.labels.job.node_selector_key }}: {{ .Values.labels.job.node_selector_value }} - initContainers: -{{ tuple $envAll $dependencies list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }} - containers: - - name: senlin-db-drop - image: {{ .Values.images.tags.db_drop }} - imagePullPolicy: {{ .Values.images.pull_policy }} -{{ tuple $envAll $envAll.Values.pod.resources.jobs.db_drop | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }} - env: - - name: ROOT_DB_CONNECTION - valueFrom: - secretKeyRef: - name: {{ .Values.secrets.oslo_db.admin }} - key: DB_CONNECTION - - name: OPENSTACK_CONFIG_FILE - value: /etc/senlin/senlin.conf - - name: OPENSTACK_CONFIG_DB_SECTION - value: database - - name: OPENSTACK_CONFIG_DB_KEY - value: connection - command: - - /tmp/db-drop.py - volumeMounts: - - name: senlin-bin - mountPath: /tmp/db-drop.py - subPath: db-drop.py - readOnly: true - - name: etcsenlin - mountPath: /etc/senlin - - name: senlin-etc - mountPath: /etc/senlin/senlin.conf - subPath: senlin.conf - readOnly: true - volumes: - - name: etcsenlin - emptyDir: {} - - name: senlin-etc - configMap: - name: senlin-etc - defaultMode: 0444 - - name: senlin-bin - configMap: - name: senlin-bin - defaultMode: 0555 +{{- $dbDropJob := dict "envAll" . "serviceName" "senlin" -}} +{{ $dbDropJob | include "helm-toolkit.manifests.job_db_drop_mysql" }} {{- end }} diff --git a/senlin/templates/service-ingress-api.yaml b/senlin/templates/service-ingress-api.yaml index f7cc6f023e..55b1570e48 100644 --- a/senlin/templates/service-ingress-api.yaml +++ b/senlin/templates/service-ingress-api.yaml @@ -14,19 +14,7 @@ See the License for the specific language governing permissions and limitations under the License. 
*/}} -{{- if .Values.manifests.service_ingress_api }} -{{- $envAll := . }} -{{- if .Values.network.api.ingress.public }} ---- -apiVersion: v1 -kind: Service -metadata: - name: {{ tuple "clustering" "public" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }} -spec: - ports: - - name: http - port: 80 - selector: - app: ingress-api -{{- end }} +{{- if and .Values.manifests.service_ingress_api .Values.network.api.ingress.public }} +{{- $serviceIngressOpts := dict "envAll" . "backendServiceType" "clustering" -}} +{{ $serviceIngressOpts | include "helm-toolkit.manifests.service_ingress" }} {{- end }} diff --git a/senlin/values.yaml b/senlin/values.yaml index 356098ec86..66414971c9 100644 --- a/senlin/values.yaml +++ b/senlin/values.yaml @@ -42,7 +42,7 @@ images: ks_endpoints: docker.io/openstackhelm/heat:newton senlin_api: docker.io/openstackhelm/senlin:newton senlin_engine: docker.io/openstackhelm/senlin:newton - dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.2.1 + dep_check: quay.io/stackanetes/kubernetes-entrypoint:v0.3.0 pull_policy: "IfNotPresent" conf: @@ -130,7 +130,7 @@ conf: auth_version: v3 memcache_security_strategy: ENCRYPT senlin_api: - #NOTE(portdirect): the bind port should not be defined, and is manipulated + # NOTE(portdirect): the bind port should not be defined, and is manipulated # via the endpoints section. bind_port: null @@ -138,8 +138,10 @@ network: api: ingress: public: true + classes: + namespace: "nginx" + cluster: "nginx-cluster" annotations: - kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / node_port: enabled: false @@ -203,8 +205,8 @@ dependencies: service: identity rabbit_init: services: - - service: oslo_messaging - endpoint: internal + - service: oslo_messaging + endpoint: internal # Names of secrets used by bootstrap and environmental checks secrets: @@ -218,7 +220,7 @@ secrets: admin: senlin-rabbitmq-admin senlin: senlin-rabbitmq-user -# typically overriden by environmental +# typically overridden by environmental # values, but should include all endpoints # required by this chart endpoints: @@ -327,11 +329,11 @@ pod: senlin: uid: 42424 affinity: - anti: - type: - default: preferredDuringSchedulingIgnoredDuringExecution - topologyKey: - default: kubernetes.io/hostname + anti: + type: + default: preferredDuringSchedulingIgnoredDuringExecution + topologyKey: + default: kubernetes.io/hostname mounts: senlin_api: init_container: null diff --git a/tools/deployment/armada/multinode/armada-lma.yaml b/tools/deployment/armada/multinode/armada-lma.yaml index d353c2cd3b..8b157fb30c 100644 --- a/tools/deployment/armada/multinode/armada-lma.yaml +++ b/tools/deployment/armada/multinode/armada-lma.yaml @@ -178,9 +178,6 @@ data: labels: node_selector_key: openstack-control-plane node_selector_value: enabled - storage: - elasticsearch: - enabled: false source: type: local location: ${OSH_INFRA_PATH} @@ -356,8 +353,6 @@ data: upgrade: no_hooks: false values: - storage: - enabled: false labels: node_selector_key: openstack-control-plane node_selector_value: enabled diff --git a/tools/deployment/developer/ceph/120-glance.sh b/tools/deployment/developer/ceph/120-glance.sh index 8670402749..bfcceec4c8 100755 --- a/tools/deployment/developer/ceph/120-glance.sh +++ b/tools/deployment/developer/ceph/120-glance.sh @@ -27,7 +27,8 @@ else values="" fi : ${OSH_EXTRA_HELM_ARGS:=""} -GLANCE_BACKEND="radosgw" # NOTE(portdirect), this could be: radosgw, rbd, swift or pvc +#NOTE(portdirect), this could be: radosgw, rbd, swift or 
pvc +: ${GLANCE_BACKEND:="radosgw"} helm upgrade --install glance ./glance \ --namespace=openstack $values\ --set storage=${GLANCE_BACKEND} \ diff --git a/tools/deployment/developer/ceph/130-cinder.sh b/tools/deployment/developer/ceph/130-cinder.sh index b9af6b8266..918abfd231 100755 --- a/tools/deployment/developer/ceph/130-cinder.sh +++ b/tools/deployment/developer/ceph/130-cinder.sh @@ -26,8 +26,25 @@ else values="" fi : ${OSH_EXTRA_HELM_ARGS:=""} +tee /tmp/cinder.yaml <>>>>>> f1e1338... Opencontrail support for ocata charts ${OSH_EXTRA_HELM_ARGS} \ ${OSH_EXTRA_HELM_ARGS_CINDER} diff --git a/tools/deployment/developer/common/900-use-it.sh b/tools/deployment/developer/common/900-use-it.sh index 091907c5ac..409aec0a44 100755 --- a/tools/deployment/developer/common/900-use-it.sh +++ b/tools/deployment/developer/common/900-use-it.sh @@ -42,7 +42,6 @@ openstack stack create --wait \ export OSH_EXT_NET_NAME="public" -export OSH_VM_FLAVOR="m1.tiny" export OSH_VM_KEY_STACK="heat-vm-key" export OSH_PRIVATE_SUBNET="10.0.0.0/24" @@ -60,7 +59,6 @@ chmod 600 ${HOME}/.ssh/osh_key openstack stack create --wait \ --parameter public_net=${OSH_EXT_NET_NAME} \ --parameter image="${IMAGE_NAME}" \ - --parameter flavor=${OSH_VM_FLAVOR} \ --parameter ssh_key=${OSH_VM_KEY_STACK} \ --parameter cidr=${OSH_PRIVATE_SUBNET} \ -t ./tools/gate/files/heat-basic-vm-deployment.yaml \ diff --git a/tools/deployment/developer/ldap/080-keystone.sh b/tools/deployment/developer/ldap/080-keystone.sh new file mode 100755 index 0000000000..744dc0ba5d --- /dev/null +++ b/tools/deployment/developer/ldap/080-keystone.sh @@ -0,0 +1,76 @@ +#!/bin/bash + +# Copyright 2017 The Openstack-Helm Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
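+
+# This script exercises the LDAP-backed identity support end to end: it
+# deploys the ldap chart with its bootstrap enabled, deploys keystone with
+# the ldap_domain_config.yaml domain overrides, grants a role to an LDAP
+# user, authenticates as that user against the keystone v3 API, and then
+# reads the domain-specific configuration back from /v3/domains/{id}/config.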
+ +set -xe + +#NOTE: Handle LDAP +make pull-images ldap + +#NOTE: Deploy command +: ${OSH_EXTRA_HELM_ARGS:=""} +helm upgrade --install ldap ./ldap \ + --namespace=openstack \ + --set pod.replicas.server=1 \ + --set bootstrap.enabled=true \ + ${OSH_EXTRA_HELM_ARGS} \ + ${OSH_EXTRA_HELM_ARGS_LDAP} + +#NOTE: Wait for deploy +./tools/deployment/common/wait-for-pods.sh openstack + +#NOTE: Validate Deployment info +helm status ldap + +#NOTE: Handle Keystone +make pull-images keystone + +#NOTE: Deploy command +: ${OSH_EXTRA_HELM_ARGS:=""} +helm upgrade --install keystone ./keystone \ + --namespace=openstack \ + --values=./tools/overrides/keystone/ldap_domain_config.yaml \ + ${OSH_EXTRA_HELM_ARGS} \ + ${OSH_EXTRA_HELM_ARGS_KEYSTONE} + +#NOTE: Wait for deploy +./tools/deployment/common/wait-for-pods.sh openstack + +#NOTE: Validate Deployment info +helm status keystone +export OS_CLOUD=openstack_helm +sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx +openstack endpoint list + +#NOTE: Do some additional queries here for LDAP +openstack domain list +openstack user list +openstack user list --domain ldapdomain + +openstack role add --user bob --project admin --user-domain ldapdomain --project-domain default admin + +domain="ldapdomain" +domainId=$(openstack domain show ${domain} -f value -c id) +token=$(openstack token issue -f value -c id) + +#NOTE: Test that we can authenticate as the LDAP user +unset OS_CLOUD +openstack --os-auth-url http://keystone.openstack.svc.cluster.local/v3 --os-username bob --os-password password --os-user-domain-name ${domain} --os-identity-api-version 3 token issue + +#NOTE: Test that the domain-specific configuration API works +curl --verbose -X GET \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $token" \ + http://keystone.openstack.svc.cluster.local/v3/domains/${domainId}/config diff --git a/tools/deployment/multinode/030-ceph.sh index 417cda55e2..7c0c3633c3 100755 --- a/tools/deployment/multinode/030-ceph.sh +++ b/tools/deployment/multinode/030-ceph.sh @@ -30,6 +30,9 @@ if [ "x${ID}" == "xubuntu" ] && \ else CRUSH_TUNABLES=null fi +if [ "x${ID}" == "xcentos" ]; then + CRUSH_TUNABLES=hammer +fi tee /tmp/ceph.yaml << EOF endpoints: identity: diff --git a/tools/deployment/multinode/050-mariadb.sh index ffa966590f..3964408b01 100755 --- a/tools/deployment/multinode/050-mariadb.sh +++ b/tools/deployment/multinode/050-mariadb.sh @@ -19,9 +19,9 @@ set -xe #NOTE: Deploy command helm upgrade --install mariadb ./mariadb \ --namespace=openstack \ + --set pod.replicas.server=3 \ ${OSH_EXTRA_HELM_ARGS} \ ${OSH_EXTRA_HELM_ARGS_MARIADB} - #NOTE: Wait for deploy ./tools/deployment/common/wait-for-pods.sh openstack diff --git a/tools/deployment/multinode/131-libvirt-opencontrail.sh index eff11ee06c..d9f22cb074 100755 --- a/tools/deployment/multinode/131-libvirt-opencontrail.sh +++ b/tools/deployment/multinode/131-libvirt-opencontrail.sh @@ -45,7 +45,7 @@ OSH_EXTRA_HELM_ARGS_LIBVIRT=$OSH_EXTRA_HELM_ARGS_LIBVIRT" $values" #NOTE: Deploy command echo "Libvirt is being deployed, with hugepages mount directory" helm upgrade --install libvirt ./libvirt \ - --namespace=openstack \ + --namespace=openstack $values \ --values=./tools/overrides/backends/opencontrail/libvirt.yaml \ ${OSH_EXTRA_HELM_ARGS} \ ${OSH_EXTRA_HELM_ARGS_LIBVIRT} diff --git a/tools/gate/files/heat-basic-bm-deployment.yaml
b/tools/gate/files/heat-basic-bm-deployment.yaml index 237a5befea..f82adf9661 100644 --- a/tools/gate/files/heat-basic-bm-deployment.yaml +++ b/tools/gate/files/heat-basic-bm-deployment.yaml @@ -4,15 +4,19 @@ parameters: baremetal_net: type: string default: baremetal + baremetal_subnet: type: string default: baremetal + image: type: string default: Cirros 0.3.5 64-bit + flavor: type: string default: baremetal + ssh_key: type: string default: heat-vm-key @@ -21,21 +25,32 @@ resources: server: type: OS::Nova::Server properties: - image: {get_param: image} - flavor: {get_param: flavor} - key_name: {get_param: ssh_key} + image: + get_param: image + flavor: + get_param: flavor + key_name: + get_param: ssh_key networks: - - port: { get_resource: server_port } + - port: + get_resource: server_port user_data_format: RAW server_port: type: OS::Neutron::Port properties: - network: {get_param: baremetal_net} + network: + get_param: baremetal_net fixed_ips: - - subnet: { get_param: baremetal_subnet } + - subnet: + get_param: baremetal_subnet port_security_enabled: false outputs: ip: - value: {get_attr: [server_port, fixed_ips, 0, ip_address]} + value: + get_attr: + - server_port + - fixed_ips + - 0 + - ip_address diff --git a/tools/gate/files/heat-basic-vm-deployment.yaml b/tools/gate/files/heat-basic-vm-deployment.yaml index 8625e772f3..21b70a8079 100644 --- a/tools/gate/files/heat-basic-vm-deployment.yaml +++ b/tools/gate/files/heat-basic-vm-deployment.yaml @@ -1,44 +1,58 @@ -heat_template_version: 2016-10-14 +heat_template_version: '2016-10-14' parameters: public_net: type: string default: public + image: type: string default: Cirros 0.3.5 64-bit - flavor: - type: string - default: m1.tiny + ssh_key: type: string default: heat-vm-key + cidr: type: string default: 10.11.11.0/24 resources: + flavor: + type: OS::Nova::Flavor + properties: + disk: 1 + ram: 64 + vcpus: 1 + server: type: OS::Nova::Server properties: - image: {get_param: image} - flavor: {get_param: flavor} - key_name: {get_param: ssh_key} + image: + get_param: image + flavor: + get_resource: flavor + key_name: + get_param: ssh_key networks: - - port: { get_resource: server_port } + - port: + get_resource: server_port user_data_format: RAW router: type: OS::Neutron::Router properties: external_gateway_info: - network: {get_param: public_net} + network: + get_param: public_net router_interface: type: OS::Neutron::RouterInterface properties: - router_id: { get_resource: router } - subnet_id: { get_resource: private_subnet } + router_id: + get_resource: router + subnet_id: + get_resource: private_subnet private_net: type: OS::Neutron::Net @@ -46,8 +60,10 @@ resources: private_subnet: type: OS::Neutron::Subnet properties: - network: { get_resource: private_net } - cidr: {get_param: cidr} + network: + get_resource: private_net + cidr: + get_param: cidr dns_nameservers: - 8.8.8.8 - 8.8.4.4 @@ -56,31 +72,37 @@ resources: type: OS::Neutron::SecurityGroup properties: name: default_port_security_group - description: > - Default security group assigned to port. - rules: [ - {remote_ip_prefix: 0.0.0.0/0, - protocol: tcp, - port_range_min: 22, - port_range_max: 22}, - {remote_ip_prefix: 0.0.0.0/0, - protocol: icmp}] + description: 'Default security group assigned to port.' 
+ rules: + - remote_ip_prefix: 0.0.0.0/0 + protocol: tcp + port_range_min: 22 + port_range_max: 22 + - remote_ip_prefix: 0.0.0.0/0 + protocol: icmp server_port: type: OS::Neutron::Port properties: - network: {get_resource: private_net} + network: + get_resource: private_net fixed_ips: - - subnet: { get_resource: private_subnet } + - subnet: + get_resource: private_subnet security_groups: - - { get_resource: port_security_group } + - get_resource: port_security_group server_floating_ip: type: OS::Neutron::FloatingIP properties: - floating_network: {get_param: public_net} - port_id: { get_resource: server_port } + floating_network: + get_param: public_net + port_id: + get_resource: server_port outputs: floating_ip: - value: {get_attr: [server_floating_ip, floating_ip_address]} + value: + get_attr: + - server_floating_ip + - floating_ip_address diff --git a/tools/gate/files/heat-public-net-deployment.yaml b/tools/gate/files/heat-public-net-deployment.yaml index 055eb49f97..9f090e0421 100644 --- a/tools/gate/files/heat-public-net-deployment.yaml +++ b/tools/gate/files/heat-public-net-deployment.yaml @@ -25,18 +25,24 @@ resources: public_net: type: OS::Neutron::ProviderNet properties: - name: {get_param: network_name} + name: + get_param: network_name router_external: true - physical_network: {get_param: physical_network_name} + physical_network: + get_param: physical_network_name network_type: flat private_subnet: type: OS::Neutron::Subnet properties: - name: {get_param: subnet_name} - network: { get_resource: public_net } - cidr: {get_param: subnet_cidr} - gateway_ip: {get_param: subnet_gateway} + name: + get_param: subnet_name + network: + get_resource: public_net + cidr: + get_param: subnet_cidr + gateway_ip: + get_param: subnet_gateway enable_dhcp: false dns_nameservers: - 10.96.0.10 diff --git a/tools/gate/files/heat-subnet-pool-deployment.yaml b/tools/gate/files/heat-subnet-pool-deployment.yaml index 69cdf729c6..dc8aac5e68 100644 --- a/tools/gate/files/heat-subnet-pool-deployment.yaml +++ b/tools/gate/files/heat-subnet-pool-deployment.yaml @@ -7,7 +7,8 @@ parameters: subnet_pool_prefixes: type: comma_delimited_list - default: ["10.0.0.0/8"] + default: + - 10.0.0.0/8 subnet_pool_default_prefix_length: type: number @@ -17,8 +18,11 @@ resources: public_net: type: OS::Neutron::SubnetPool properties: - name: {get_param: subnet_pool_name} + name: + get_param: subnet_pool_name shared: true is_default: true - default_prefixlen: {get_param: subnet_pool_default_prefix_length} - prefixes: {get_param: subnet_pool_prefixes} + default_prefixlen: + get_param: subnet_pool_default_prefix_length + prefixes: + get_param: subnet_pool_prefixes diff --git a/tools/gate/playbooks/dev-deploy-ceph.yaml b/tools/gate/playbooks/dev-deploy-ceph.yaml index 02536797a4..411f724939 100644 --- a/tools/gate/playbooks/dev-deploy-ceph.yaml +++ b/tools/gate/playbooks/dev-deploy-ceph.yaml @@ -132,17 +132,21 @@ shell: | set -xe; ./tools/deployment/developer/ceph/120-glance.sh + environment: + OSH_EXTRA_HELM_ARGS: "{{ zuul_osh_extra_helm_args_relative_path | default('') }}" + OSH_INFRA_PATH: "{{ zuul_osh_infra_relative_path | default('') }}" + GLANCE_BACKEND: "{{ zuul_glance_backend | default('') }}" + args: + chdir: "{{ zuul.project.src_dir }}" + - name: Deploy Cinder + shell: | + set -xe; + ./tools/deployment/developer/ceph/130-cinder.sh environment: OSH_EXTRA_HELM_ARGS: "{{ zuul_osh_extra_helm_args_relative_path | default('') }}" OSH_INFRA_PATH: "{{ zuul_osh_infra_relative_path | default('') }}" args: chdir: "{{ 
zuul.project.src_dir }}" - # - name: Deploy Cinder - # shell: | - # set -xe; - # ./tools/deployment/developer/ceph/130-cinder.sh - # args: - # chdir: "{{ zuul.project.src_dir }}" - name: Deploy OpenvSwitch when: osh_neutron_backend == 'openvswitch' shell: | diff --git a/tools/gate/playbooks/dev-deploy-nfs.yaml b/tools/gate/playbooks/dev-deploy-nfs.yaml index 27719a42c6..ad45c4011b 100644 --- a/tools/gate/playbooks/dev-deploy-nfs.yaml +++ b/tools/gate/playbooks/dev-deploy-nfs.yaml @@ -87,6 +87,7 @@ args: chdir: "{{ zuul.project.src_dir }}" - name: Deploy Keystone + when: idp_backend is not defined shell: | set -xe; ./tools/deployment/developer/nfs/080-keystone.sh @@ -95,6 +96,16 @@ OSH_INFRA_PATH: "{{ zuul_osh_infra_relative_path | default('') }}" args: chdir: "{{ zuul.project.src_dir }}" + - name: Deploy Keystone with LDAP + when: idp_backend is defined and idp_backend == "ldap" + shell: | + set -xe; + ./tools/deployment/developer/ldap/080-keystone.sh + environment: + OSH_EXTRA_HELM_ARGS: "{{ zuul_osh_extra_helm_args_relative_path | default('') }}" + OSH_INFRA_PATH: "{{ zuul_osh_infra_relative_path | default('') }}" + args: + chdir: "{{ zuul.project.src_dir }}" - name: Deploy Heat shell: | set -xe; diff --git a/tools/gate/playbooks/osh-infra-build.yaml b/tools/gate/playbooks/osh-infra-build.yaml new file mode 100644 index 0000000000..d06296c1a3 --- /dev/null +++ b/tools/gate/playbooks/osh-infra-build.yaml @@ -0,0 +1,36 @@ +# Copyright 2017 The Openstack-Helm Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +- hosts: primary + vars_files: + - vars.yaml + vars: + work_dir: "{{ zuul.project.src_dir }}/{{ zuul_osh_infra_relative_path | default('') }}" + gather_facts: True + roles: + - build-helm-packages + tags: + - build-helm-packages + +- hosts: all + vars_files: + - vars.yaml + vars: + work_dir: "{{ zuul.project.src_dir }}/{{ zuul_osh_infra_relative_path | default('') }}" + gather_facts: False + become: yes + roles: + - build-images + tags: + - build-images diff --git a/tools/gate/playbooks/osh-infra-collect-logs.yaml b/tools/gate/playbooks/osh-infra-collect-logs.yaml new file mode 100644 index 0000000000..bcd5c546fe --- /dev/null +++ b/tools/gate/playbooks/osh-infra-collect-logs.yaml @@ -0,0 +1,58 @@ +# Copyright 2017 The Openstack-Helm Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
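The `when: idp_backend is defined and idp_backend == "ldap"` guard added to dev-deploy-nfs.yaml above selects the identity backend at playbook run time rather than at template time. A hedged invocation sketch (the inventory path is an assumption, not part of this patch):

.. code:: bash

    # Deploy the LDAP-backed Keystone path instead of the default SQL-only one.
    ansible-playbook tools/gate/playbooks/dev-deploy-nfs.yaml \
      -i /etc/ansible/hosts \
      -e idp_backend=ldap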
+ +- hosts: all + vars: + work_dir: "{{ zuul.project.src_dir }}/{{ zuul_osh_infra_relative_path | default('') }}" + logs_dir: "/tmp/logs" + roles: + - gather-host-logs + tags: + - gather-host-logs + +- hosts: primary + vars: + work_dir: "{{ zuul.project.src_dir }}/{{ zuul_osh_infra_relative_path | default('') }}" + logs_dir: "/tmp/logs" + roles: + - helm-release-status + tags: + - helm-release-status + +- hosts: primary + vars: + work_dir: "{{ zuul.project.src_dir }}/{{ zuul_osh_infra_relative_path | default('') }}" + logs_dir: "/tmp/logs" + roles: + - describe-kubernetes-objects + tags: + - describe-kubernetes-objects + +- hosts: primary + vars: + work_dir: "{{ zuul.project.src_dir }}/{{ zuul_osh_infra_relative_path | default('') }}" + logs_dir: "/tmp/logs" + roles: + - gather-pod-logs + tags: + - gather-pod-logs + +- hosts: primary + vars: + work_dir: "{{ zuul.project.src_dir }}/{{ zuul_osh_infra_relative_path | default('') }}" + logs_dir: "/tmp/logs" + roles: + - gather-prom-metrics + tags: + - gather-prom-metrics diff --git a/tools/gate/playbooks/osh-infra-deploy-docker.yaml b/tools/gate/playbooks/osh-infra-deploy-docker.yaml new file mode 100644 index 0000000000..4c54324530 --- /dev/null +++ b/tools/gate/playbooks/osh-infra-deploy-docker.yaml @@ -0,0 +1,43 @@ +# Copyright 2017 The Openstack-Helm Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +- hosts: all + vars_files: + - vars.yaml + vars: + work_dir: "{{ zuul.project.src_dir }}/{{ zuul_osh_infra_relative_path | default('') }}" + gather_facts: False + become: yes + roles: + - deploy-python + tags: + - deploy-python + +- hosts: all + vars_files: + - vars.yaml + vars: + work_dir: "{{ zuul.project.src_dir }}/{{ zuul_osh_infra_relative_path | default('') }}" + gather_facts: True + become: yes + roles: + - setup-firewall + - deploy-python-pip + - deploy-docker + - deploy-yq + tags: + - setup-firewall + - deploy-python-pip + - deploy-docker + - deploy-yq diff --git a/tools/gate/playbooks/osh-infra-deploy-k8s.yaml b/tools/gate/playbooks/osh-infra-deploy-k8s.yaml new file mode 100644 index 0000000000..8daa337e31 --- /dev/null +++ b/tools/gate/playbooks/osh-infra-deploy-k8s.yaml @@ -0,0 +1,44 @@ +# Copyright 2017 The Openstack-Helm Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
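Each play in osh-infra-collect-logs.yaml above carries a tag named after its single role, so individual collectors can be re-run in isolation. For example, assuming a working inventory:

.. code:: bash

    # Collect only pod logs and helm release status, skipping host logs,
    # kubernetes object descriptions, and prometheus metrics.
    ansible-playbook tools/gate/playbooks/osh-infra-collect-logs.yaml \
      --tags gather-pod-logs,helm-release-status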
+ +- hosts: primary + vars_files: + - vars.yaml + vars: + work_dir: "{{ zuul.project.src_dir }}/{{ zuul_osh_infra_relative_path | default('') }}" + gather_facts: True + roles: + - build-helm-packages + tags: + - build-helm-packages + +- hosts: primary + vars_files: + - vars.yaml + vars: + work_dir: "{{ zuul.project.src_dir }}/{{ zuul_osh_infra_relative_path | default('') }}" + roles: + - deploy-kubeadm-aio-master + tags: + - deploy-kube-master + +- hosts: nodes + vars_files: + - vars.yaml + vars: + work_dir: "{{ zuul.project.src_dir }}/{{ zuul_osh_infra_relative_path | default('') }}" + roles: + - deploy-kubeadm-aio-node + tags: + - deploy-kube-nodes diff --git a/tools/gate/playbooks/osh-infra-upgrade-host.yaml b/tools/gate/playbooks/osh-infra-upgrade-host.yaml new file mode 100644 index 0000000000..0e42a8e733 --- /dev/null +++ b/tools/gate/playbooks/osh-infra-upgrade-host.yaml @@ -0,0 +1,39 @@ +# Copyright 2017 The Openstack-Helm Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +- hosts: all + vars_files: + - vars.yaml + vars: + work_dir: "{{ zuul.project.src_dir }}/{{ zuul_osh_infra_relative_path | default('') }}" + gather_facts: False + become: yes + roles: + - deploy-python + tags: + - deploy-python + +- hosts: all + vars_files: + - vars.yaml + vars: + work_dir: "{{ zuul.project.src_dir }}/{{ zuul_osh_infra_relative_path | default('') }}" + gather_facts: True + become: yes + roles: + - upgrade-host + - start-zuul-console + tags: + - upgrade-host + - start-zuul-console diff --git a/tools/gate/playbooks/vars.yaml b/tools/gate/playbooks/vars.yaml new file mode 100644 index 0000000000..31ea631dfa --- /dev/null +++ b/tools/gate/playbooks/vars.yaml @@ -0,0 +1,64 @@ +# Copyright 2017 The Openstack-Helm Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
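The play ordering in osh-infra-deploy-k8s.yaml above is significant: helm packages are built and the kubeadm-aio master is stood up on the `primary` host before the `nodes` group joins. The per-phase tags make it possible to replay a single phase, e.g.:

.. code:: bash

    # Re-run only the node-join phase once the master is already up.
    ansible-playbook tools/gate/playbooks/osh-infra-deploy-k8s.yaml \
      --tags deploy-kube-nodes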
+ +version: + kubernetes: v1.9.3 + helm: v2.7.2 + cni: v0.6.0 + +proxy: + http: null + https: null + noproxy: null + +images: + kubernetes: + kubeadm_aio: openstackhelm/kubeadm-aio:dev + + +kubernetes: + network: + default_device: null + cluster: + cni: calico + pod_subnet: 192.168.0.0/16 + domain: cluster.local + +nodes: + labels: + primary: + - name: openstack-helm-node-class + value: primary + nodes: + - name: openstack-helm-node-class + value: general + all: + - name: openstack-control-plane + value: enabled + - name: openstack-compute-node + value: enabled + - name: openvswitch + value: enabled + - name: linuxbridge + value: enabled + - name: ceph-mon + value: enabled + - name: ceph-osd + value: enabled + - name: ceph-mds + value: enabled + - name: ceph-rgw + value: enabled + - name: ceph-mgr + value: enabled diff --git a/tools/images/ceph-config-helper/Dockerfile b/tools/images/ceph-config-helper/Dockerfile index 5e6ce775ab..1f12c61ad3 100644 --- a/tools/images/ceph-config-helper/Dockerfile +++ b/tools/images/ceph-config-helper/Dockerfile @@ -1,20 +1,26 @@ -FROM ubuntu:16.04 +FROM docker.io/ubuntu:xenial MAINTAINER pete.birley@att.com -ARG KUBE_VERSION=v1.7.5 +ARG KUBE_VERSION=v1.9.6 +ARG CEPH_RELEASE=luminous -RUN set -x \ - && TMP_DIR=$(mktemp --directory) \ - && cd ${TMP_DIR} \ - && apt-get update \ - && apt-get install -y \ +ADD https://download.ceph.com/keys/release.asc /etc/apt/ceph-release.asc +RUN set -ex ;\ + export DEBIAN_FRONTEND=noninteractive ;\ + apt-key add /etc/apt/ceph-release.asc ;\ + rm -f /etc/apt/ceph-release.asc ;\ + echo deb http://download.ceph.com/debian-${CEPH_RELEASE}/ xenial main | tee /etc/apt/sources.list.d/ceph.list ;\ + TMP_DIR=$(mktemp --directory) ;\ + cd ${TMP_DIR} ;\ + apt-get update ;\ + apt-get dist-upgrade -y ;\ + apt-get install -y \ apt-transport-https \ ca-certificates \ curl \ python \ - jq \ -# Install kubectl: - && curl -sSL https://dl.k8s.io/${KUBE_VERSION}/kubernetes-client-linux-amd64.tar.gz | tar -zxv --strip-components=1 \ - && mv ${TMP_DIR}/client/bin/kubectl /usr/bin/kubectl \ - && chmod +x /usr/bin/kubectl \ - && rm -rf ${TMP_DIR} + jq ;\ + curl -sSL https://dl.k8s.io/${KUBE_VERSION}/kubernetes-client-linux-amd64.tar.gz | tar -zxv --strip-components=1 ;\ + mv ${TMP_DIR}/client/bin/kubectl /usr/bin/kubectl ;\ + chmod +x /usr/bin/kubectl ;\ + rm -rf ${TMP_DIR} diff --git a/tools/images/ceph-config-helper/Makefile b/tools/images/ceph-config-helper/Makefile new file mode 100644 index 0000000000..ac72363634 --- /dev/null +++ b/tools/images/ceph-config-helper/Makefile @@ -0,0 +1,39 @@ +# Copyright 2017 The Openstack-Helm Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# It's necessary to set this because some environments don't link sh -> bash. 
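The `nodes.labels` map in vars.yaml above is what drives chart placement: each entry becomes a Kubernetes node label that the charts' node selectors match against. Applied by hand, the `all` group would look roughly like this (`<node-name>` is a placeholder):

.. code:: bash

    kubectl label node <node-name> --overwrite=true \
      openstack-control-plane=enabled \
      openstack-compute-node=enabled \
      openvswitch=enabled \
      linuxbridge=enabled \
      ceph-mon=enabled \
      ceph-osd=enabled \
      ceph-mds=enabled \
      ceph-rgw=enabled \
      ceph-mgr=enabled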
+SHELL := /bin/bash
+
+DOCKER_REGISTRY ?= docker.io
+IMAGE_NAME ?= ceph-config-helper
+IMAGE_PREFIX ?= openstackhelm
+IMAGE_TAG ?= latest
+KUBE_VERSION ?= v1.9.6
+LABEL ?= putlabelshere
+
+IMAGE := ${DOCKER_REGISTRY}/${IMAGE_PREFIX}/${IMAGE_NAME}:${IMAGE_TAG}
+
+# Build the ceph-config-helper Docker image for this project
+.PHONY: images
+images: build_$(IMAGE_NAME)
+
+# Make targets intended for use by the primary targets above.
+.PHONY: build_$(IMAGE_NAME)
+build_$(IMAGE_NAME):
+	docker build \
+		--network host \
+		--build-arg KUBE_VERSION=$(KUBE_VERSION) \
+		-t $(IMAGE) \
+		--label $(LABEL) --label KUBE_VERSION=$(KUBE_VERSION) \
+		.
diff --git a/tools/images/ceph-config-helper/README.rst b/tools/images/ceph-config-helper/README.rst
index 41e7897a2a..a1b68ed253 100644
--- a/tools/images/ceph-config-helper/README.rst
+++ b/tools/images/ceph-config-helper/README.rst
@@ -31,8 +31,9 @@ repo run:
 
 .. code:: bash
 
-    export KUBE_VERSION=v1.7.5
+    export KUBE_VERSION=v1.9.6
     sudo docker build \
+      --network host \
       --build-arg KUBE_VERSION=${KUBE_VERSION} \
       -t docker.io/port/ceph-config-helper:${KUBE_VERSION} \
       tools/images/ceph-config-helper
diff --git a/tools/images/gate-utils/Makefile b/tools/images/gate-utils/Makefile
new file mode 100644
index 0000000000..60a2e3b0ab
--- /dev/null
+++ b/tools/images/gate-utils/Makefile
@@ -0,0 +1,36 @@
+# Copyright 2017 The Openstack-Helm Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# It's necessary to set this because some environments don't link sh -> bash.
+SHELL := /bin/bash
+
+DOCKER_REGISTRY ?= docker.io
+IMAGE_NAME ?= gate-utils
+IMAGE_PREFIX ?= openstackhelm
+IMAGE_TAG ?= v0.1.0
+LABEL ?= putlabelshere
+
+IMAGE := ${DOCKER_REGISTRY}/${IMAGE_PREFIX}/${IMAGE_NAME}:${IMAGE_TAG}
+
+# Build the gate-utils Docker image for this project
+.PHONY: images
+images: build_$(IMAGE_NAME)
+
+# Make targets intended for use by the primary targets above.
+.PHONY: build_$(IMAGE_NAME)
+build_$(IMAGE_NAME):
+	docker build \
+		--label $(LABEL) \
+		-t $(IMAGE) \
+		.
diff --git a/tools/images/libvirt/Dockerfile.ubuntu.xenial b/tools/images/libvirt/Dockerfile.ubuntu.xenial
index eeed452e8e..a696657b85 100644
--- a/tools/images/libvirt/Dockerfile.ubuntu.xenial
+++ b/tools/images/libvirt/Dockerfile.ubuntu.xenial
@@ -1,7 +1,7 @@
 FROM docker.io/ubuntu:xenial
 MAINTAINER pete.birley@att.com
 
-ARG LIBVIRT_VERSION=1.3.1-1ubuntu10.18
+ARG LIBVIRT_VERSION=1.3.1-1ubuntu10.19
 ARG CEPH_RELEASE=luminous
 ARG PROJECT=nova
 ARG UID=42424
diff --git a/tools/images/libvirt/Makefile b/tools/images/libvirt/Makefile
new file mode 100644
index 0000000000..04ff014113
--- /dev/null
+++ b/tools/images/libvirt/Makefile
@@ -0,0 +1,47 @@
+# Copyright 2017 The Openstack-Helm Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# It's necessary to set this because some environments don't link sh -> bash.
+SHELL := /bin/bash
+
+LIBVIRT_VERSION ?= 1.3.1-1ubuntu10.19
+LIBVIRT_MJR_VERSION = $(subst -, ,$(LIBVIRT_VERSION))
+DISTRO ?= ubuntu
+DISTRO_RELEASE ?= xenial
+CEPH_RELEASE ?= luminous
+
+DOCKER_REGISTRY ?= docker.io
+IMAGE_NAME ?= libvirt
+IMAGE_PREFIX ?= openstackhelm
+IMAGE_TAG ?= $(DISTRO)-$(DISTRO_RELEASE)-$(word 1, $(LIBVIRT_MJR_VERSION))
+LABEL ?= putlabelshere
+
+IMAGE := ${DOCKER_REGISTRY}/${IMAGE_PREFIX}/${IMAGE_NAME}:${IMAGE_TAG}
+
+# Build the libvirt Docker image for this project
+.PHONY: images
+images: build_$(IMAGE_NAME)
+
+# Make targets intended for use by the primary targets above.
+.PHONY: build_$(IMAGE_NAME)
+build_$(IMAGE_NAME):
+	docker build \
+		--network=host \
+		--force-rm \
+		--file=./Dockerfile.${DISTRO}.${DISTRO_RELEASE} \
+		--build-arg LIBVIRT_VERSION="${LIBVIRT_VERSION}" \
+		--build-arg CEPH_RELEASE="${CEPH_RELEASE}" \
+		--label $(LABEL) \
+		-t $(IMAGE) \
+		.
diff --git a/tools/images/libvirt/README.rst b/tools/images/libvirt/README.rst
index 89e7fda302..800b4915bd 100644
--- a/tools/images/libvirt/README.rst
+++ b/tools/images/libvirt/README.rst
@@ -30,7 +30,7 @@ repo run:
 
 .. code:: bash
 
-    LIBVIRT_VERSION=1.3.1-1ubuntu10.18
+    LIBVIRT_VERSION=1.3.1-1ubuntu10.19
     DISTRO=ubuntu
     DISTRO_RELEASE=xenial
     CEPH_RELEASE=luminous
diff --git a/tools/images/openstack/newton/loci.sh b/tools/images/openstack/newton/loci.sh
index cd577c1303..09b76ed2bc 100644
--- a/tools/images/openstack/newton/loci.sh
+++ b/tools/images/openstack/newton/loci.sh
@@ -31,7 +31,7 @@ sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \
   --build-arg PROJECT=keystone \
   --build-arg FROM=gcr.io/google_containers/ubuntu-slim:0.14 \
   --build-arg PROJECT_REF=${OPENSTACK_VERSION} \
-  --build-arg PROFILES="apache" \
+  --build-arg PROFILES="apache ldap" \
   --build-arg PIP_PACKAGES="pycrypto python-openstackclient" \
   --build-arg WHEELS=openstackhelm/requirements:${IMAGE_TAG} \
   --tag docker.io/openstackhelm/keystone:${IMAGE_TAG}
@@ -49,6 +49,16 @@ sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \
   --tag docker.io/openstackhelm/heat:${IMAGE_TAG}
 sudo docker exec docker-in-docker docker push docker.io/openstackhelm/heat:${IMAGE_TAG}
 
+sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \
+  https://git.openstack.org/openstack/loci.git \
+  --build-arg PROJECT=barbican \
+  --build-arg FROM=gcr.io/google_containers/ubuntu-slim:0.14 \
+  --build-arg PROJECT_REF=${OPENSTACK_VERSION} \
+  --build-arg PIP_PACKAGES="pycrypto" \
+  --build-arg WHEELS=openstackhelm/requirements:${IMAGE_TAG} \
+  --tag docker.io/openstackhelm/barbican:${IMAGE_TAG}
+sudo docker exec docker-in-docker docker push docker.io/openstackhelm/barbican:${IMAGE_TAG}
+
 sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \
   https://git.openstack.org/openstack/loci.git \
   --build-arg PROJECT=glance \
@@ -65,7 +75,7 @@ sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \
   --build-arg PROJECT=cinder \
   --build-arg FROM=gcr.io/google_containers/ubuntu-slim:0.14 \
--build-arg PROJECT_REF=${OPENSTACK_VERSION} \ - --build-arg PROFILES="cinder lvm ceph" \ + --build-arg PROFILES="cinder lvm ceph qemu" \ --build-arg PIP_PACKAGES="pycrypto python-swiftclient" \ --build-arg WHEELS=openstackhelm/requirements:${IMAGE_TAG} \ --tag docker.io/openstackhelm/cinder:${IMAGE_TAG} @@ -82,6 +92,18 @@ sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \ --tag docker.io/openstackhelm/neutron:${IMAGE_TAG} sudo docker exec docker-in-docker docker push docker.io/openstackhelm/neutron:${IMAGE_TAG} +sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \ + https://git.openstack.org/openstack/loci.git \ + --build-arg PROJECT=neutron \ + --build-arg FROM=docker.io/ubuntu:18.04 \ + --build-arg PROJECT_REF=${OPENSTACK_VERSION} \ + --build-arg PROFILES="neutron linuxbridge openvswitch" \ + --build-arg PIP_PACKAGES="pycrypto" \ + --build-arg DIST_PACKAGES="ethtool lshw" \ + --build-arg WHEELS=openstackhelm/requirements:${IMAGE_TAG} \ + --tag docker.io/openstackhelm/neutron:${IMAGE_TAG}-sriov-1804 +sudo docker exec docker-in-docker docker push docker.io/openstackhelm/neutron:${IMAGE_TAG}-sriov-1804 + sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \ https://git.openstack.org/openstack/loci.git \ --build-arg PROJECT=nova \ diff --git a/tools/images/openstack/ocata/loci.sh b/tools/images/openstack/ocata/loci.sh index e986ebc9a9..7f31d7dce0 100644 --- a/tools/images/openstack/ocata/loci.sh +++ b/tools/images/openstack/ocata/loci.sh @@ -31,7 +31,7 @@ sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \ --build-arg PROJECT=keystone \ --build-arg FROM=gcr.io/google_containers/ubuntu-slim:0.14 \ --build-arg PROJECT_REF=${OPENSTACK_VERSION} \ - --build-arg PROFILES="apache" \ + --build-arg PROFILES="apache ldap" \ --build-arg PIP_PACKAGES="pycrypto python-openstackclient" \ --build-arg WHEELS=openstackhelm/requirements:${IMAGE_TAG} \ --tag docker.io/openstackhelm/keystone:${IMAGE_TAG} @@ -49,6 +49,16 @@ sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \ --tag docker.io/openstackhelm/heat:${IMAGE_TAG} sudo docker exec docker-in-docker docker push docker.io/openstackhelm/heat:${IMAGE_TAG} +sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \ + https://git.openstack.org/openstack/loci.git \ + --build-arg PROJECT=barbican \ + --build-arg FROM=gcr.io/google_containers/ubuntu-slim:0.14 \ + --build-arg PROJECT_REF=${OPENSTACK_VERSION} \ + --build-arg PIP_PACKAGES="pycrypto" \ + --build-arg WHEELS=openstackhelm/requirements:${IMAGE_TAG} \ + --tag docker.io/openstackhelm/barbican:${IMAGE_TAG} +sudo docker exec docker-in-docker docker push docker.io/openstackhelm/barbican:${IMAGE_TAG} + sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \ https://git.openstack.org/openstack/loci.git \ --build-arg PROJECT=glance \ @@ -65,7 +75,7 @@ sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \ --build-arg PROJECT=cinder \ --build-arg FROM=gcr.io/google_containers/ubuntu-slim:0.14 \ --build-arg PROJECT_REF=${OPENSTACK_VERSION} \ - --build-arg PROFILES="cinder lvm ceph" \ + --build-arg PROFILES="cinder lvm ceph qemu" \ --build-arg PIP_PACKAGES="pycrypto python-swiftclient" \ --build-arg WHEELS=openstackhelm/requirements:${IMAGE_TAG} \ --tag docker.io/openstackhelm/cinder:${IMAGE_TAG} @@ -82,6 +92,18 @@ sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \ --tag 
docker.io/openstackhelm/neutron:${IMAGE_TAG}
 sudo docker exec docker-in-docker docker push docker.io/openstackhelm/neutron:${IMAGE_TAG}
 
+sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \
+  https://git.openstack.org/openstack/loci.git \
+  --build-arg PROJECT=neutron \
+  --build-arg FROM=docker.io/ubuntu:18.04 \
+  --build-arg PROJECT_REF=${OPENSTACK_VERSION} \
+  --build-arg PROFILES="neutron linuxbridge openvswitch" \
+  --build-arg PIP_PACKAGES="pycrypto" \
+  --build-arg DIST_PACKAGES="ethtool lshw" \
+  --build-arg WHEELS=openstackhelm/requirements:${IMAGE_TAG} \
+  --tag docker.io/openstackhelm/neutron:${IMAGE_TAG}-sriov-1804
+sudo docker exec docker-in-docker docker push docker.io/openstackhelm/neutron:${IMAGE_TAG}-sriov-1804
+
 sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \
   https://git.openstack.org/openstack/loci.git \
   --build-arg PROJECT=nova \
diff --git a/tools/images/openvswitch/Makefile b/tools/images/openvswitch/Makefile
new file mode 100644
index 0000000000..1157eff991
--- /dev/null
+++ b/tools/images/openvswitch/Makefile
@@ -0,0 +1,39 @@
+# Copyright 2017 The Openstack-Helm Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# It's necessary to set this because some environments don't link sh -> bash.
+SHELL := /bin/bash
+
+DOCKER_REGISTRY ?= docker.io
+IMAGE_NAME ?= openvswitch
+IMAGE_PREFIX ?= openstackhelm
+OVS_VERSION ?= 2.8.1
+IMAGE_TAG ?= v$(OVS_VERSION)
+LABEL ?= putlabelshere
+
+IMAGE := ${DOCKER_REGISTRY}/${IMAGE_PREFIX}/${IMAGE_NAME}:${IMAGE_TAG}
+
+# Build the openvswitch Docker image for this project
+.PHONY: images
+images: build_$(IMAGE_NAME)
+
+# Make targets intended for use by the primary targets above.
+.PHONY: build_$(IMAGE_NAME)
+build_$(IMAGE_NAME):
+	docker build \
+		--network=host \
+		--build-arg OVS_VERSION=$(OVS_VERSION) \
+		--label $(LABEL) \
+		-t $(IMAGE) \
+		.
diff --git a/tools/images/vbmc/Makefile b/tools/images/vbmc/Makefile
new file mode 100644
index 0000000000..c5f42ad694
--- /dev/null
+++ b/tools/images/vbmc/Makefile
@@ -0,0 +1,36 @@
+# Copyright 2017 The Openstack-Helm Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# It's necessary to set this because some environments don't link sh -> bash.
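The per-image Makefiles added in this patch all follow the same shape: variables with `?=` defaults, an `images` entry point, and a `build_<name>` target that shells out to docker build. A typical invocation of the openvswitch one (the values shown are its defaults above):

.. code:: bash

    # Produces docker.io/openstackhelm/openvswitch:v2.8.1 with the given label.
    make -C tools/images/openvswitch images \
      OVS_VERSION=2.8.1 \
      LABEL="build=example"

`DOCKER_REGISTRY`, `IMAGE_PREFIX`, and `IMAGE_TAG` can be overridden the same way to target a private registry.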
+SHELL := /bin/bash
+
+DOCKER_REGISTRY ?= docker.io
+IMAGE_NAME ?= vbmc
+IMAGE_PREFIX ?= openstackhelm
+IMAGE_TAG ?= centos
+LABEL ?= putlabelshere
+
+IMAGE := ${DOCKER_REGISTRY}/${IMAGE_PREFIX}/${IMAGE_NAME}:${IMAGE_TAG}
+
+# Build the vbmc Docker image for this project
+.PHONY: images
+images: build_$(IMAGE_NAME)
+
+# Make targets intended for use by the primary targets above.
+.PHONY: build_$(IMAGE_NAME)
+build_$(IMAGE_NAME):
+	docker build \
+		--label $(LABEL) \
+		-t $(IMAGE) \
+		.
diff --git a/tools/overrides/backends/networking/compute-kit-sr-iov.sh b/tools/overrides/backends/networking/compute-kit-sr-iov.sh
new file mode 100755
index 0000000000..0ae0165e10
--- /dev/null
+++ b/tools/overrides/backends/networking/compute-kit-sr-iov.sh
@@ -0,0 +1,151 @@
+#!/bin/bash
+
+# Copyright 2017 The Openstack-Helm Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+#NOTE(portdirect): This file is included as an example of how to deploy
+# nova and neutron with ovs and sr-iov active. It will not work without
+# modification for your environment.
+
+set -xe
+
+#NOTE: Pull images and lint chart
+make pull-images nova
+make pull-images neutron
+
+SRIOV_DEV1=enp3s0f0
+SRIOV_DEV2=enp66s0f1
+OVSBR=vlan92
+
+#NOTE: Deploy nova
+: ${OSH_EXTRA_HELM_ARGS:=""}
+tee /tmp/nova.yaml << EOF
+network:
+  backend:
+    - openvswitch
+    - sriov
+conf:
+  nova:
+    DEFAULT:
+      debug: True
+      vcpu_pin_set: 4,8,12,16,20,24,28,32,36,40,44,48,52,56,60,5,9,13,17,21,25,29,33,37,41,45,49,53,57,61
+      vif_plugging_is_fatal: False
+      vif_plugging_timeout: 30
+    pci:
+      alias: '{"name": "numa0", "capability_type": "pci", "product_id": "10fb", "vendor_id": "8086", "device_type": "type-PCI", "numa_policy": "required"}'
+      passthrough_whitelist: |
+        [{"address": "0000:03:10.0", "physical_network": "physnet1"}, {"address": "0000:03:10.2", "physical_network": "physnet1"}, {"address": "0000:03:10.4", "physical_network": "physnet1"}, {"address": "0000:03:10.6", "physical_network": "physnet1"}, {"address": "0000:03:11.0", "physical_network": "physnet1"}, {"address": "0000:03:11.2", "physical_network": "physnet1"}, {"address": "0000:03:11.4", "physical_network": "physnet1"}, {"address": "0000:03:11.6", "physical_network": "physnet1"}, {"address": "0000:03:12.0", "physical_network": "physnet1"}, {"address": "0000:03:12.2", "physical_network": "physnet1"}, {"address": "0000:03:12.4", "physical_network": "physnet1"}, {"address": "0000:03:12.6", "physical_network": "physnet1"}, {"address": "0000:03:13.0", "physical_network": "physnet1"}, {"address": "0000:03:13.2", "physical_network": "physnet1"}, {"address": "0000:03:13.4", "physical_network": "physnet1"}, {"address": "0000:03:13.6", "physical_network": "physnet1"}, {"address": "0000:03:14.0", "physical_network": "physnet1"}, {"address": "0000:03:14.2", "physical_network": "physnet1"}, {"address": "0000:03:14.4", "physical_network": "physnet1"}, {"address": "0000:03:14.6", "physical_network": "physnet1"}, {"address": "0000:03:15.0", "physical_network": "physnet1"}, {"address": "0000:03:15.2",
"physical_network": "physnet1"}, {"address": "0000:03:15.4", "physical_network": "physnet1"}, {"address": "0000:03:15.6", "physical_network": "physnet1"}, {"address": "0000:03:16.0", "physical_network": "physnet1"}, {"address": "0000:03:16.2", "physical_network": "physnet1"}, {"address": "0000:03:16.4", "physical_network": "physnet1"}, {"address": "0000:03:16.6", "physical_network": "physnet1"}, {"address": "0000:03:17.0", "physical_network": "physnet1"}, {"address": "0000:03:17.2", "physical_network": "physnet1"}, {"address": "0000:03:17.4", "physical_network": "physnet1"}, {"address": "0000:03:17.6", "physical_network": "physnet1"}, {"address": "0000:42:10.1", "physical_network": "physnet2"}, {"address": "0000:42:10.3", "physical_network": "physnet2"}, {"address": "0000:42:10.5", "physical_network": "physnet2"}, {"address": "0000:42:10.7", "physical_network": "physnet2"}, {"address": "0000:42:11.1", "physical_network": "physnet2"}, {"address": "0000:42:11.3", "physical_network": "physnet2"}, {"address": "0000:42:11.5", "physical_network": "physnet2"}, {"address": "0000:42:11.7", "physical_network": "physnet2"}, {"address": "0000:42:12.1", "physical_network": "physnet2"}, {"address": "0000:42:12.3", "physical_network": "physnet2"}, {"address": "0000:42:12.5", "physical_network": "physnet2"}, {"address": "0000:42:12.7", "physical_network": "physnet2"}, {"address": "0000:42:13.1", "physical_network": "physnet2"}, {"address": "0000:42:13.3", "physical_network": "physnet2"}, {"address": "0000:42:13.5", "physical_network": "physnet2"}, {"address": "0000:42:13.7", "physical_network": "physnet2"}, {"address": "0000:42:14.1", "physical_network": "physnet2"}, {"address": "0000:42:14.3", "physical_network": "physnet2"}, {"address": "0000:42:14.5", "physical_network": "physnet2"}, {"address": "0000:42:14.7", "physical_network": "physnet2"}, {"address": "0000:42:15.1", "physical_network": "physnet2"}, {"address": "0000:42:15.3", "physical_network": "physnet2"}, {"address": "0000:42:15.5", "physical_network": "physnet2"}, {"address": "0000:42:15.7", "physical_network": "physnet2"}, {"address": "0000:42:16.1", "physical_network": "physnet2"}, {"address": "0000:42:16.3", "physical_network": "physnet2"}, {"address": "0000:42:16.5", "physical_network": "physnet2"}, {"address": "0000:42:16.7", "physical_network": "physnet2"}, {"address": "0000:42:17.1", "physical_network": "physnet2"}, {"address": "0000:42:17.3", "physical_network": "physnet2"}, {"address": "0000:42:17.5", "physical_network": "physnet2"}, {"address": "0000:42:17.7", "physical_network": "physnet2"}] + filter_scheduler: + enabled_filters: "RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter, NUMATopologyFilter, DifferentHostFilter, SameHostFilter" +EOF + +if [ "x$(systemd-detect-virt)" == "xnone" ]; then + echo 'OSH is not being deployed in virtualized environment' + helm upgrade --install nova ./nova \ + --namespace=openstack \ + --values /tmp/nova.yaml \ + ${OSH_EXTRA_HELM_ARGS} +else + echo 'OSH is being deployed in virtualized environment, using qemu for nova' + helm upgrade --install nova ./nova \ + --namespace=openstack \ + --set conf.nova.libvirt.virt_type=qemu \ + --values /tmp/nova.yaml \ + ${OSH_EXTRA_HELM_ARGS} +fi + +#NOTE: Deploy neutron +tee /tmp/neutron.yaml << EOF +network: + backend: + - openvswitch + - sriov + interface: + tunnel: docker0 + sriov: + - device: ${SRIOV_DEV1} + num_vfs: 32 
+ promisc: false + - device: ${SRIOV_DEV2} + num_vfs: 32 + promisc: false + auto_bridge_add: + br-physnet3: ${OVSBR} +conf: + neutron: + DEFAULT: + debug: True + l3_ha: False + min_l3_agents_per_router: 1 + max_l3_agents_per_router: 1 + l3_ha_network_type: vxlan + dhcp_agents_per_network: 1 + plugins: + ml2_conf: + ml2: + mechanism_drivers: openvswitch,sriovnicswitch,l2population + ml2_type_flat: + flat_networks: public + type_drivers: vlan,flat,vxlan + mechanism_drivers: openvswitch,sriovnicswitch,l2population + tenant_network_types: vxlan + ml2_type_vlan: + network_vlan_ranges: physnet1:20:30,physnet2:20:30 + #NOTE(portdirect): for clarity we include options for all the neutron + # backends here. + openvswitch_agent: + agent: + tunnel_types: vxlan + ovs: + bridge_mappings: "public:br-ex,physnet3:br-physnet3" + linuxbridge_agent: + linux_bridge: + bridge_mappings: "public:br-ex,physnet1:br-physnet1" + sriov_agent: + sriov_nic: + physical_device_mappings: physnet1:${SRIOV_DEV1},physnet2:${SRIOV_DEV2} + exclude_devices: null +EOF +kubectl label node cab24-r820-14 --overwrite=true sriov=enabled +kubectl label node cab24-r820-15 --overwrite=true sriov=enabled + +helm upgrade --install neutron ./neutron \ + --namespace=openstack \ + --values=/tmp/neutron.yaml \ + ${OSH_EXTRA_HELM_ARGS} + +#NOTE: Wait for deploy +./tools/deployment/common/wait-for-pods.sh openstack + +#NOTE: Validate Deployment info +export OS_CLOUD=openstack_helm +openstack service list +sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx +openstack hypervisor list +openstack network agent list + +#NOTE: Exercise the deployment +openstack network create test +NET_ID=$(openstack network show test -f value -c id) +openstack subnet create --subnet-range "172.24.4.0/24" --network ${NET_ID} test +openstack port create --network ${NET_ID} --fixed-ip subnet=test,ip-address="172.24.4.10" --vnic-type direct sriov_port +PORT_ID=$(openstack port show sriov_port -f value -c id) + +# NOTE(portdirect): We do this fancy, and seemingly pointless, footwork to get +# the full image name for the cirros Image without having to be explicit. +export IMAGE_NAME=$(openstack image show -f value -c name \ + $(openstack image list -f csv | awk -F ',' '{ print $2 "," $1 }' | \ + grep "^\"Cirros" | head -1 | awk -F ',' '{ print $2 }' | tr -d '"')) + +openstack server create --flavor m1.tiny --image "${IMAGE_NAME}" --nic port-id=${PORT_ID} test-sriov diff --git a/tools/overrides/backends/networking/linuxbridge.yaml b/tools/overrides/backends/networking/linuxbridge.yaml index 31828eda00..45b3e9355a 100644 --- a/tools/overrides/backends/networking/linuxbridge.yaml +++ b/tools/overrides/backends/networking/linuxbridge.yaml @@ -18,4 +18,5 @@ # It should be kept to the bare minimum required for this purpose.
network: - backend: linuxbridge + backend: + - linuxbridge diff --git a/tools/overrides/backends/opencontrail/neutron.yaml b/tools/overrides/backends/opencontrail/neutron.yaml index 54b59e0926..00abef5a8f 100644 --- a/tools/overrides/backends/opencontrail/neutron.yaml +++ b/tools/overrides/backends/opencontrail/neutron.yaml @@ -17,7 +17,8 @@ images: opencontrail_neutron_init: docker.io/opencontrailnightly/contrail-openstack-neutron-init:ocata-master-46 network: - backend: opencontrail + backend: + - opencontrail conf: neutron: diff --git a/tools/overrides/backends/opencontrail/nova.yaml b/tools/overrides/backends/opencontrail/nova.yaml index 37cec82595..2f5cd78381 100644 --- a/tools/overrides/backends/opencontrail/nova.yaml +++ b/tools/overrides/backends/opencontrail/nova.yaml @@ -19,7 +19,8 @@ images: opencontrail_compute_init: docker.io/opencontrailnightly/contrail-openstack-compute-init:ocata-master-46 network: - backend: opencontrail + backend: + - opencontrail dependencies: dynamic: diff --git a/tools/overrides/example/keystone_domain_config.yaml b/tools/overrides/example/keystone_domain_config.yaml deleted file mode 100644 index 4d63f8e531..0000000000 --- a/tools/overrides/example/keystone_domain_config.yaml +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright 2017 The Openstack-Helm Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# This example sets the default domain to be LDAP based, and adds in a new -# dbdomain that is SQL-backed. Note that for this to work, you need to set -# an admin (env: OS_USERNAME and OS_PASSWORD) that is valid in the LDAP. -conf: - keystone: - identity: - driver: ldap - default_domain_id: default - domain_specific_drivers_enabled: True - domain_configurations_from_database: True - domain_config_dir: /etc/keystonedomains - ldap: - url: "ldap://ldap.openstack.svc.cluster.local:389" - user: "cn=admin,dc=cluster,dc=local" - password: password - suffix: "dc=cluster,dc=local" - user_attribute_ignore: enabled,email,tenants,default_project_id - query_scope: sub - user_enabled_emulation: True - user_enabled_emulation_dn: "cn=overwatch,ou=Groups,dc=cluster,dc=local" - user_tree_dn: "ou=People,dc=cluster,dc=local" - user_enabled_mask: 2 - user_enabled_default: 512 - user_name_attribute: cn - user_id_attribute: sn - user_mail_attribute: mail - user_pass_attribute: userPassword - group_tree_dn: "ou=Groups,dc=cluster,dc=local" - user_allow_create: False - user_allow_delete: False - user_allow_update: False - ks_domains: - dbdomain: - identity: - driver: sql diff --git a/tools/overrides/keystone/ldap_domain_config.yaml b/tools/overrides/keystone/ldap_domain_config.yaml new file mode 100644 index 0000000000..774938bb3f --- /dev/null +++ b/tools/overrides/keystone/ldap_domain_config.yaml @@ -0,0 +1,46 @@ +# Copyright 2017 The Openstack-Helm Authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +conf: + keystone: + identity: + driver: sql + default_domain_id: default + domain_specific_drivers_enabled: True + domain_configurations_from_database: True + domain_config_dir: /etc/keystonedomains + ks_domains: + ldapdomain: + identity: + driver: ldap + ldap: + url: "ldap://ldap.openstack.svc.cluster.local:389" + user: "cn=admin,dc=cluster,dc=local" + password: password + suffix: "dc=cluster,dc=local" + user_attribute_ignore: "enabled,email,tenants,default_project_id" + query_scope: sub + user_enabled_emulation: True + user_enabled_emulation_dn: "cn=overwatch,ou=Groups,dc=cluster,dc=local" + user_tree_dn: "ou=People,dc=cluster,dc=local" + user_enabled_mask: 2 + user_enabled_default: 512 + user_name_attribute: cn + user_id_attribute: sn + user_mail_attribute: mail + user_pass_attribute: userPassword + group_tree_dn: "ou=Groups,dc=cluster,dc=local" + user_allow_create: False + user_allow_delete: False + user_allow_update: False diff --git a/tools/overrides/releases/ocata/loci.yaml b/tools/overrides/releases/ocata/loci.yaml index 391c80dd75..aeb1802564 100644 --- a/tools/overrides/releases/ocata/loci.yaml +++ b/tools/overrides/releases/ocata/loci.yaml @@ -54,6 +54,8 @@ images: neutron_metadata: 'docker.io/openstackhelm/neutron:ocata' neutron_openvswitch_agent: 'docker.io/openstackhelm/neutron:ocata' neutron_server: 'docker.io/openstackhelm/neutron:ocata' + neutron_sriov_agent: 'docker.io/openstackhelm/neutron:ocata-sriov-1804' + neutron_sriov_agent_init: 'docker.io/openstackhelm/neutron:ocata-sriov-1804' nova_api: 'docker.io/openstackhelm/nova:ocata' nova_cell_setup: 'docker.io/openstackhelm/nova:ocata' nova_compute: 'docker.io/openstackhelm/nova:ocata'
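With the `neutron_sriov_agent` image entries above in place, the SR-IOV agent images can be consumed by pointing the neutron chart at both the release manifest and the SR-IOV values file generated earlier in this patch; a minimal sketch, assuming /tmp/neutron.yaml was written by compute-kit-sr-iov.sh:

.. code:: bash

    # Combine the ocata loci image manifest with the SR-IOV example values;
    # later --values files take precedence where keys overlap.
    helm upgrade --install neutron ./neutron \
      --namespace=openstack \
      --values=./tools/overrides/releases/ocata/loci.yaml \
      --values=/tmp/neutron.yaml \
      ${OSH_EXTRA_HELM_ARGS}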