diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/README.md b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/README.md
index 787956345..7799aef7e 100644
--- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/README.md
+++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/README.md
@@ -6,10 +6,14 @@ You must ensure that you are using the April 2023 or later release of Identity a

The scripts can be run from any host which has access to your Kubernetes cluster.

-If you wish the scripts to automatically copy files to your Oracle HTTP Servers then you must have passwordless ssh set up from the deployment host to each of your webhosts.
+If you wish the scripts to automatically copy files to your Oracle HTTP Servers, you must have passwordless SSH set up from the deployment host to each of your web hosts.

These scripts are provided as examples and can be customized as desired.

+Scripts have also been provided to enable Disaster Recovery; instructions for their use can be found in [README_DR.md](README_DR.md).
+
+Scripts have also been provided to provision an OCI Kubernetes environment prior to running these automation scripts; instructions for their use can be found in [README.md](oke_utils/README.md).
+
## Obtaining the Scripts

The automation scripts are available for download from GitHub.

@@ -39,7 +43,7 @@ This section lists the actions that the scripts perform as part of the deploymen

### What the Scripts Will do

-The scripts will deploy Oracle Unified Directory (OUD), Oracle Access Manager (OAM), and Oracle Identity Governance (OIG). They will integrate each of the products. You can choose to integrate one or more products.
+The scripts will deploy Oracle Unified Directory (OUD), Oracle Access Manager (OAM), Oracle Identity Governance (OIG), Oracle Identity Role Intelligence (OIRI), and Oracle Advanced Authentication (OAA). They will integrate each of the products. You can choose to integrate one or more products.

The scripts perform the following actions:

@@ -115,7 +119,7 @@ While the scripts perform the majority of the deployment, they do not perform th
* Configure Oracle HTTP Server to send log files and monitoring data to Elastic Search and Prometheus.
* Configure Oracle Database Server to send log files and monitoring data to Elastic Search and Prometheus.
* Send Oracle HTTP Monitoring data to Prometheus.
-* Send Oracle Database Monitioring data to Prometheus.
+* Send Oracle Database Monitoring data to Prometheus.

## Key Concepts of the Scripts

@@ -124,7 +128,7 @@ To make things simple and easy to manage the scripts are based around two concep
* A response file with details of your environment.
* Template files you can easily modify or add to as required.

-> Note: Provisioning scripts are re-enterant, if something fails it can be restarted at the point at which it failed.
+> Note: Provisioning scripts are reentrant; if something fails, the process can be restarted at the point at which it failed.

## Getting Started

@@ -133,11 +137,11 @@ If you are provisioning Oracle Identity Governance, you must also download the O

If you are provisioning the Oracle HTTP Server, you must download the Oracle HTTP installer and place it in the location `$SCRIPTDIR/templates/ohs/installer`. The installer MUST be the ZIP file, for example, fmw\_12.2.1.4.0\_ohs\_linux64\_Disk1\_1of1.zip.
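+For example, a minimal sketch of staging the installer, assuming the scripts live under `$SCRIPTDIR` and that the ZIP was saved to `~/Downloads` (an assumed download location; adjust as needed):
+
+```
+mkdir -p $SCRIPTDIR/templates/ohs/installer
+cp ~/Downloads/fmw_12.2.1.4.0_ohs_linux64_Disk1_1of1.zip $SCRIPTDIR/templates/ohs/installer/
+```
+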
-If you wish to Install the Oracle HTTP Server or copy files to it, you must setup passwordless SSH from the deployment host, during the provisioning.
+If you wish to install the Oracle HTTP Server or copy files to it, you must set up passwordless SSH from the deployment host during provisioning.

## Creating a Response File

-Sample response and password files are created for you in the `responsefile` directory. You can edit these files or create your own file in the same directory using these files as templates. The files can be editied directly or by running the shell script `start_here.sh` in the script's home directory.
+Sample response and password files are created for you in the `responsefile` directory. You can edit these files or create your own file in the same directory using these files as templates. The files can be edited directly or by running the shell script `start_here.sh` in the script's home directory.

For example

@@ -147,7 +151,9 @@ For example

You can run the above script as many times as you want on the same file. Pressing the Enter key on any response retains the existing value.

-Values are stored in the files `idm.rsp` and `.idmpwds` files unless the command is started with the -r and -p options in which case the files updated will be those specified..
+Values are stored in the `idm.rsp` and `.idmpwds` files unless the command is started with the `-r` and `-p` options, in which case the files specified by those options are updated.
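+
+For example, a sketch of maintaining an alternative response/password file pair, assuming the `-r` and `-p` options take file names as described above (the file names here are hypothetical):
+
+```
+./start_here.sh -r test.rsp -p .testpwds
+```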
+
+> Note: The reference sections below detail all parameters. Parameters associated with passwords are stored in a hidden file in the same directory. This is an added security measure.

> Note:
> * The file consists of key/value pairs. There should be no spaces between the name of the key and its value. For example:
@@ -222,7 +228,7 @@ You should also keep any override files that are generated.

## After Installation/Configuration
As part of running the scripts, a number of working files are created in the `WORKDIR` directory prior to copying to the persistent volume in `/u01/user_projects/workdir`. Many of these files contain passwords required for the setup. You should archive these files after completing the deployment.
-The responsfile uses a hidden file in the responsefile directory to store passwords.
+The responsefile uses a hidden file in the responsefile directory to store passwords.

## Oracle HTTP Server Configuration Files

@@ -263,17 +269,24 @@ These parameters are used to specify the type of Kubernetes deployment and the n

| **Parameter** | **Sample Value** | **Comments** |
| --- | --- | --- |
-|**USE\_REGISTRY** | `false` | Set to `true` to configure OAA.|
+|**USE\_REGISTRY** | `false` | Set to `true` to obtain images from a container registry.|
+| **USE\_INGESS** | `true` | Set to `true` if you are using an Ingress controller.|
|**IMAGE\_TYPE** | `crio` | Set to `crio` or `docker` depending on your container engine.|
+
+
+### Generic Parameters
+These parameters are used to specify generic properties.
+
+| **Parameter** | **Sample Value** | **Comments** |
+| --- | --- | --- |
|**IMAGE\_DIR** | `/container/images` | The location where you have downloaded the container images. Used by the `load_images.sh` script.|
| **LOCAL\_WORKDIR** | `/workdir` | The location where you want to create the working directory.|
| **K8\_WORKDIR** | `/u01/oracle/user_projects/workdir` | The location inside the Kubernetes containers to which working files are copied.|
| **K8\_WORKER\_HOST1** | `k8worker1.example.com` | The name of a Kubernetes worker node used in generating the OHS sample files.|
| **K8\_WORKER\_HOST2** | `k8worker2.example.com` | The name of a Kubernetes worker node used in generating the OHS sample files.|
-
-### Registry Parameters
-These parameters are used to determine whether or not you are using a container registry. If you are, then it allows you to store the login credentials to the repository so that you are able to store the credentials as registry secrets in the individual product namespaces.
+### Container Registry Parameters
+These parameters are used to determine whether or not you are using a container registry. If you are, they allow you to store the login credentials as registry secrets in the individual product namespaces.

If you are pulling images from GitHub or Docker hub, then you can also specify the login parameters here so that you can create the appropriate Kubernetes secrets.

@@ -281,13 +294,14 @@ If you are pulling images from GitHub or Docker hub, then you can also specify t
| --- | --- | --- |
|**REGISTRY** | `iad.ocir.io/mytenancy` | Set to the location of your container registry.|
|**REG\_USER** | `mytenancy/oracleidentitycloudservice/email@example.com` | Set to your registry user name.|
-|**REG\_PWD** | *``* | Set to your registry password.|
+|**REG\_PWD** | *``* | Set to your registry password. Stored in the password file.|
|**CREATE\_REGSECRET** | `false` | Set to `true` to create a registry secret for automatically pulling images.|
|**CREATE\_GITSECRET** | `true` | Specify whether to create a secret for GitHub. This parameter ensures that you do not see errors relating to GitHub not allowing anonymous downloads.|
|**GIT\_USER** | `gituser` | The GitHub user's name.|
-|**GIT\_TOKEN** | `ghp_aO8fqRNVdfsfshOxsWk40uNMS` | The GitHub token.|
+|**GIT\_TOKEN** | `ghp_aO8fqRNVdfsfshOxsWk40uNMS` | The GitHub token. Stored in the password file.|
|**DH\_USER** | *`username`* | The Docker user name for `hub.docker.com`. Used for CronJob images.|
-|**DH\_PWD** | *`mypassword`* | The Docker password for `hub.docker.com`. Used for CronJob images.|
+|**DH\_PWD** | *`mypassword`* | The Docker password for `hub.docker.com`. Used for CronJob images. Stored in the password file.|
+

### Image Parameters

@@ -306,7 +320,7 @@ These can include registry prefixes if you use a registry. Use the `local/` pref
|**OIRI\_IMAGE** | `$REGISTRY/oiri` | The OIRI image name.|
|**OIRI\_UI\_IMAGE** | `$REGISTRY/oiri-ui` | The OIRI UI image name.|
|**OIRI\_DING\_IMAGE** | `$REGISTRY/oiri-ding` | The OIRI DING image name.|
-|**OAA\_MGT\_IMAGE** | `$REGISTRY/oracle/shared/oaa-mgmt` | The OAA Management container image.|
+|**OAA\_MGT\_IMAGE** | `$REGISTRY/oracle/oaa-mgmt` | The OAA Management container image.|
|**KUBECTL\_REPO** | `bitnami/kubectl` | The kubectl image used by OUD.|
|**BUSYBOX\_REPO** | `docker.io/busybox` | The busybox image used by OUD.|
|**OPER\_VER** | `4.0.4` | The version of the WebLogic Kubernetes Operator.|
@@ -322,8 +336,8 @@ These can include registry prefixes if you use a registry. Use the `local/` pref
|**OAA\_VER** | `oaa_122140-20210721` | The OAA version.|

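+A hedged sketch of the corresponding response-file entries, using the key/value format and the sample values from the tables above:
+
+```
+USE_REGISTRY=true
+REGISTRY=iad.ocir.io/mytenancy
+REG_USER=mytenancy/oracleidentitycloudservice/email@example.com
+CREATE_REGSECRET=true
+OIRI_IMAGE=$REGISTRY/oiri
+OPER_VER=4.0.4
+```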
-
-### Generic Parameters
-These generic parameters apply to all deployments.
+### NFS Parameters
+These parameters specify the NFS filesystem locations.

| **Parameter** | **Sample Value** | **Comments** |
| --- | --- | --- |
@@ -331,18 +345,6 @@ These generic parameters apply to all deployments.
|**IAM\_PVS** | `/export/IAMPVS` | The export path on the NFS where persistent volumes are located.|
|**PV\_MOUNT** | `/u01/oracle/user_projects` | The path to mount the PV inside the Kubernetes container. Oracle recommends that you do not change this value.|
-### Ingress Parameters
-These parameters determine how the Ingress controller is deployed.
-
-| **Parameter** | **Sample Value** | **Comments** |
-| --- | --- | --- |
-|**INGRESSNS** |`ingressns`| The Kubernetes namespace used to hold the Ingress objects.|
-|**INGRESS\_TYPE** |`nginx`| The type of Ingress controller you wan to deploy. At this time, the script supports only `nginx`.|
-|**INGRESS\_ENABLE\_TCP** |`true`| Set to `true` if you want the controller to forward LDAP requests.|
-|**INGRESS\_NAME** |`idmedg`| The name of the Ingress controller used to create an Nginx Class.|
-|**INGRESS\_SSL** |`false`| Set to `true` if you want to configure the Ingress controller for SSL.|
-|**INGRESS\_DOMAIN** |`example.com`| Used when creating self-signed certificates for the Ingress controller.|
-|**INGRESS\_REPLICAS** |`2`| The number of Ingress controller replicas to start with. This value should be a minimum of two for high availability.|

### Elastic Search Parameters
These parameters determine how to send log files to Elastic Search.
@@ -362,8 +364,21 @@ These parameters determine how to send monitoring information to Prometheus.
| **Parameter** | **Sample Value** | **Comments** |
| --- | --- | --- |
|**USE\_PROM** |`false`| Set to `true` if you send monitoring data to Prometheus.|
-|**PROMNS** |`monitoring`| The Kubernetes namespace used to hold the Prometheus Deployement.|
+|**PROMNS** |`monitoring`| The Kubernetes namespace used to hold the Prometheus Deployment.|
+
+### Ingress Parameters
+These parameters determine how the Ingress controller is deployed.
+
+| **Parameter** | **Sample Value** | **Comments** |
+| --- | --- | --- |
+|**INGRESSNS** |`ingressns`| The Kubernetes namespace used to hold the Ingress objects.|
+|**INGRESS\_TYPE** |`nginx`| The type of Ingress controller you want to deploy. At this time, the script supports only `nginx`.|
+|**INGRESS\_ENABLE\_TCP** |`true`| Set to `true` if you want the controller to forward LDAP requests.|
+|**INGRESS\_NAME** |`idmedg`| The name of the Ingress controller used to create an Nginx Class.|
+|**INGRESS\_SSL** |`false`| Set to `true` if you want to configure the Ingress controller for SSL.|
+|**INGRESS\_DOMAIN** |`example.com`| Used when creating self-signed certificates for the Ingress controller.|
+|**INGRESS\_REPLICAS** |`2`| The number of Ingress controller replicas to start with. This value should be a minimum of two for high availability.|

### Oracle HTTP Server Parameters
These parameters are specific to OHS. They are used to construct the Oracle HTTP Server configuration files and install the Oracle HTTP Server if requested.
@@ -377,10 +392,12 @@ These parameters are specific to OHS. These parameters are used to construct th
|**DEPLOY\_WG** |`true`| Deploy WebGate in the `OHS_ORACLE_HOME`.|
|**COPY\_WG\_FILES** |`true`| Set this to `true` if you wish the scripts to automatically copy the generated WebGate artifacts to your OHS server. Note: You must first have deployed your WebGate.|
|**OHS\_BASE** |`/u02/private`| The location of your OHS base directory.
Binaries and Configuration files are below this location. The OracleInventory is also placed into this location when installing the Oracle HTTP Server| -|**OHS\_ORACLE\_HOME** |`$OHS_BASE/oracle/products/ohs`| The location of your OHS binaries| +|**OHS\_ORACLE\_HOME** |`$OHS_BASE/oracle/products/ohs`| The location of your OHS binaries.| +|**OHS\_USER** |`opc`| The Oracle HTTP Server account user.| +|**OHS\_GRP** |`opc`| The Oracle HTTP Server account group.| |**OHS\_DOMAIN** |`$OHS_BASE/oracle/config/domains/ohsDomain`| The location of your OHS domain| |**OHS1\_NAME** |`ohs1`| The component name of your first OHS instance| -|**OHS2\_NAME** |`ohs1`| The component name of your second OHS instance| +|**OHS2\_NAME** |`ohs2`| The component name of your second OHS instance| |**NM\_ADMIN\_USER** |`admin`| The name of the admin user you wish to assign to Node Manager if Installing the Oracle HTTP Server.| |**NM\_ADMIN\_PWD** |`password`| The password of the admin user you wish to assign to Node Manager if Installing the Oracle HTTP Server.| |**OHS\_PORT** |`7777`| The port your Oracle HTTP Servers listen on.| @@ -397,7 +414,7 @@ These parameters are specific to OUD. When deploying OUD, you also require the g |**OUD\_LOCAL\_SHARE** | `/nfs_volumes/oudconfigpv` | The local directory where **OUD\_CONFIG\_SHARE** is mounted. Used to hold seed files.| |**OUD\_LOCAL\_PVSHARE** | `/nfs_volumes/oudpv`| The local directory where **OUD_SHARE** is mounted. Used for deletion.| |**OUD\_POD\_PREFIX** | `edg`| The prefix used for the OUD pods.| -|**OUD\_REPLICAS** | `1`| The number of OUD replicas to create. If you require two OUD instances, set this to 1. This value is in addition to the primary instance.| +|**OUD\_REPLICAS** | `2`| The number of OUD replicas to create. | |**OUD\_REGION** | `us`| The OUD region to use should be the first part of the searchbase without the `dc=`.| |**LDAP\_USER\_PWD** | *``* | The password to assign to all users being created in LDAP. **Note**: This value should have at least one capital letter, one number, and should be at least eight characters long. |**OUD\_PWD\_EXPIRY** | `2024-01-02`| The date when the user passwords you are creating expires.| @@ -464,7 +481,6 @@ These parameters determine how OAM is deployed and configured. | --- | --- | --- | |**OAMNS** | `oamns` | The Kubernetes namespace used to hold the OAM objects.| |**OAM\_SHARE** | `$IAM_PVS/oampv` | The mount point on NFS where OAM persistent volume is exported.| -|**OAMNS** | `oamns` | The Kubernetes namespace used to hold the OAM objects.| |**OAM\_LOCAL\_SHARE** | `/nfs_volumes/oampv` | The local directory where **OAM_SHARE** is mounted. It is used by the deletion procedure.| |**OAM\_SERVER\_COUNT** | `5` | The number of OAM servers to configure. This value should be more than you expect to use.| |**OAM\_SERVER\_INITIAL** | `2` | The number of OAM Managed Servers you want to start for normal running. You will need at least two servers for high availability.| @@ -487,6 +503,7 @@ These parameters determine how OAM is deployed and configured. 
|**OAM\_OAP\_HOST** | `k8worker1.example.com` | The name of one of the Kubernetes worker nodes used for OAP calls.| |**OAM\_OAP\_PORT** | `5575` | The internal Kubernetes port used for OAM requests.| |**OAMSERVER\_JAVA\_PARAMS** | "`-Xms2048m -Xmx8192m`" | The internal Kubernetes port used for OAM requests.| +|**COPY\_WG\_FILES** | "`true`" | Set to true if you wish the deployment to copy the Webate Artifacts to your Oracle HTTP Server(s)| ### OIG Parameters These parameters determine how OIG is provisioned and configured. @@ -577,14 +594,13 @@ These parameters determine how OAA is provisioned and configured. | **Parameter** | **Sample Value** | **Comments** | | --- | --- | --- | |**OAANS** |`oaans`| The Kubernetes namespace used to hold the OAA objects.| -|**OAACONS** |`cons`| The Kubernetes namespace used to hold the Coherence objects.| |**OAA\_DEPLOYMENT** |`edg`| A name for your OAA deployment. Do not use the name `oaa` because this is reserved for internal use.| |**OAA\_DOMAIN** |`OAADomain`| The name of the OAM OAuth domain you want to create.| |**OAA\_VAULT\_TYPE** |`file|oci`| The type of vault to use: file system or OCI.| |**OAA\_CREATE\_OHS** |`true`| Set to `false` if you are installing OAA standalone front ended by Ingress. | -|**OAA\_CONFIG\_SHARE** |`$IAM_PVS/oaaconfigpv`| The mount point on NFS where OAA config persistent volume is exported..| -|**OAA\_CRED\_SHARE** |`$IAM_PVS/oaacredpv`| The mount point on NFS where OAA credentials persistent volume is exported..| -|**OAA\_LOG\_SHARE** |`$IAM_PVS/oaalogpv`| The mount point on NFS where OAA logfiles persistent volume is exported..| +|**OAA\_CONFIG\_SHARE** |`$IAM_PVS/oaaconfigpv`| The mount point on NFS where OAA config persistent volume is exported.| +|**OAA\_CRED\_SHARE** |`$IAM_PVS/oaacredpv`| The mount point on NFS where OAA credentials persistent volume is exported.| +|**OAA\_LOG\_SHARE** |`$IAM_PVS/oaalogpv`| The mount point on NFS where OAA logfiles persistent volume is exported.| |**OAA\_LOCAL\_CONFIG\_SHARE** |`/nfs_volumes/oaaconfigpv`| The local directory where **OAA\_CONFIG\_SHARE** is mounted. It is used by the deletion procedure. | |**OAA\_LOCAL\_CRED\_SHARE** |`/nfs_volumes/oaacredpv`| The local directory where **OAA\_CRED\_SHARE** is mounted. It is used by the deletion procedure.| |**OAA\_LOCAL\_LOG_SHARE** |`/nfs_volumes/oaalogpv`| The local directory where **OAA\_LOG\_SHARE** is mounted. It is used by the deletion procedure. | @@ -789,7 +805,7 @@ For reference purposes this section includes the name and function of all the ob | **oamoig.sedfile** | templates/oig | The Sedfile to create OIGOAMIntegration property files. | | **autn.sedfile** | templates/oig | The supplementary Sedfile to create OIGOAMIntegration property files. | | **create\_oigoam\_files.sh** | templates/oig | The template script to generate OIGOAMIntegration property files. | -| **fix\_gridlink.sh** | templates/oig | The template to enable gridlink on data sources. | +| **fix\_gridlink.sh** | templates/oig | The template to enable grid link on data sources. | | **update\_match\_attr.sh** | templates/oig | The template script to update Match Attribute. | | **oigDomain.sedfile** | templates/oig | The template script to update domain\_soa\_oim.yaml. | | **update\_mds.py** | templates/oig | The template file to update MDS datasource. | @@ -843,3 +859,6 @@ For reference purposes this section includes the name and function of all the ob | **delete\_oaa.sh** | utils | Deletes the OAA deployment. 
|
| **delete\_ingress.sh** | utils | Deletes the Ingress controller. |
| **load\_images.sh** | utils | Loads the container image onto each Kubernetes worker host. |
+| **enable\_dr.sh** | utils | Enables Disaster Recovery - see [Disaster Recovery](README_DR.md). |
+| **idmdrctl.sh** | utils | Disaster Recovery lifecycle operations - see [Disaster Recovery](README_DR.md). |
+
diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/README_DR.md b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/README_DR.md
new file mode 100644
index 000000000..83ee5b3d5
--- /dev/null
+++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/README_DR.md
@@ -0,0 +1,366 @@
+# Automating the Identity and Access Management Enterprise Disaster Recovery
+
+A number of sample scripts have been developed which allow you to deploy Oracle Identity and Access Management Disaster Recovery. These scripts are provided as samples for you to use when developing your own automation.
+
+You must ensure that you are using the October 2023 or later release of Identity and Access Management for this utility to work.
+
+The main script, `enable_dr.sh`, is designed to be run on each site: run it on the primary first, then run it on the standby.
+
+The scripts can be run from any host which has access to the local Kubernetes cluster.
+
+The scripts work by taking a backup of objects on the primary site and restoring them on the standby.
+
+If you wish the scripts to automatically copy backup files between your primary and standby deployment hosts, you must have passwordless SSH set up between your two deployment hosts.
+
+These scripts are provided as examples and can be customized as desired.
+
+## Obtaining the Scripts
+
+The automation scripts are available for download from GitHub.
+
+To obtain the scripts, use the following command:
+
+```
+git clone https://github.com/oracle/fmw-kubernetes.git
+```
+
+The scripts appear in the following directory:
+
+```
+fmw-kubernetes/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement
+```
+
+Move these template scripts to your working directory. For example:
+
+```
+cp -R fmw-kubernetes/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/* /workdir/scripts
+```
+
+
+## Scope
+This section lists the actions that the scripts perform as part of the deployment process. It also lists the tasks the scripts do not perform.
+
+### What the Scripts Will do
+
+The scripts will enable disaster recovery for Oracle Unified Directory (OUD), Oracle Access Manager (OAM), Oracle Identity Governance (OIG), Oracle Identity Role Intelligence (OIRI), Oracle Advanced Authentication (OAA), and Oracle HTTP Servers.
+
+The scripts perform the following actions:
+
+* Create a backup job to periodically take a backup of the persistent volumes associated with an application and transfer these files to a staging area on your standby system.
+* Create a restore job to periodically restore the PV backup to the standby site, modifying database connection information and Kubernetes access files as needed.
+* Use the MAA Kubernetes Snapshot tool to take a backup of the Kubernetes objects in the application namespace.
+* Use the MAA Kubernetes Snapshot tool to restore the backup of the Kubernetes objects to the same namespace on the standby system.
+* If the MAA Kubernetes Snapshot tool is not being used, sanitize the deployment created on the standby system prior to syncing with the primary.
+* Back up the Oracle HTTP Server configuration on the primary, and restore it on the standby, making routing updates as needed.
+* Provide management operations on the primary and standby sites, including:
+    * Manually running an initialization job.
+    * Starting and stopping a deployment.
+    * Suspending and resuming the cron jobs responsible for PV synchronization.
+    * Changing the role of a site to reverse the direction of the PV synchronization.
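+
+Once DR is enabled, the backup and restore jobs described above can be inspected with standard kubectl commands; a minimal sketch, assuming the sample `drns` namespace shown in the reference section below:
+
+```
+kubectl get cronjobs -n drns
+kubectl get jobs -n drns
+```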
+
+### What the Scripts Will Not Do
+
+While the scripts perform the majority of the deployment, they do not perform the following tasks:
+
+* Deploy Container Runtime Environment, Kubernetes, or Helm.
+* Configure the load balancer, or ensure that any SSL certificates are consistent between the load balancers on the primary and standby systems.
+* Deploy your primary site.
+* Install the WebLogic Operator.
+* Install Ingress.
+* Install Oracle HTTP Server.
+* Create a Data Guard database on the standby site.
+* Enable Disaster Recovery for Prometheus and Grafana.
+* Enable Disaster Recovery for Elastic Search and Kibana.
+
+
+## Key Concepts of the Scripts
+
+To make things simple and easy to manage the scripts are based around two concepts:
+
+* A response file with details of your environment.
+* Template files you can easily modify or add to as required.
+
+> Note: Scripts are reentrant; if something fails, the process can be restarted at the point at which it failed.
+
+
+## Getting Started
+
+All operations are controlled via two scripts which are located in the `utils` directory.
+
+* enable_dr.sh - This script takes one parameter, the product. Valid products are ohs, oud, oam, oig, oaa and oiri.
+
+  For example, to enable DR for OAM:
+
+  On the primary site:
+
+  ```
+  utils/enable_dr.sh oam
+  ```
+
+  Then on the standby site:
+
+  ```
+  utils/enable_dr.sh oam
+  ```
+
+* idmdrctl.sh - This script takes two arguments.
+
+  -a Action
+  -p Product
+
+  Valid actions are:
+
+  * initial (Create an initialization job)
+  * switch (Switch the site's role STANDBY/PRIMARY)
+  * stop (Stop a deployment)
+  * start (Start a deployment)
+  * suspend (Suspend the PV backup/restore job)
+  * resume (Resume the PV backup/restore job)
+
+  Valid products are:
+
+  * oud
+  * oam
+  * oig
+  * oaa
+  * oiri
+
+  For example, to shut down OAM, issue the command:
+
+  ```
+  utils/idmdrctl.sh -a stop -p oam
+  ```
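+
+Similarly, to suspend and later resume the cron job responsible for PV synchronization for a product (OAM in this sketch):
+
+```
+utils/idmdrctl.sh -a suspend -p oam
+utils/idmdrctl.sh -a resume -p oam
+```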
+
+## Creating a Response File
+
+Sample response and password files are created for you in the `responsefile` directory. These files can be edited directly.
+
+Values are stored in the files `dr.rsp` and `.drpwd`.
+
+> Note:
+> * The file consists of key/value pairs. There should be no spaces between the name of the key and its value. For example:
+> `Key=value`
+> * If you are using complex passwords, that is, passwords which contain characters such as `!`, `*`, and `$`, then each of these characters must be escaped with a `\`. For example: 'hello!$' should be entered as `hello\!\$`.
+
+> Note: The reference sections below detail all parameters. Parameters associated with passwords are stored in a hidden file in the same directory. This is an added security measure.
+
+## Log Files
+
+The `enable_dr.sh` script creates log files in a `logs` sub-directory of `<LOCAL_WORKDIR>/<PRODUCT>/DR`. This directory also contains the following two files:
+
+* `progressfile` – This file contains the last successfully executed step. If you want to restart the process at a different step, update this file.
+
+* `timings.log` – This file is used for informational purposes to show how much time was spent on each stage of the provisioning process.
+
+For example:
+`/workdir/OAM/DR/logs`
+
+
+## Reference – Response File
+
+The following sections describe the parameters in the response file that is used to control the provisioning of the various products in the Kubernetes cluster. The parameters are divided into generic and product-specific parameters.
+
+
+
+### Products to Deploy
+These parameters determine which products the deployment scripts attempt to deploy.
+
+| **Parameter** | **Sample Value** | **Comments** |
+| --- | --- | --- |
+| **DR\_OHS** | `true` | Set to `true` to enable DR for Oracle HTTP Server. |
+| **DR\_OUD** | `true` | Set to `true` to enable DR for OUD. |
+| **DR\_OAM** | `true` | Set to `true` to enable DR for OAM. |
+| **DR\_OIG** | `true` | Set to `true` to enable DR for OIG. |
+| **DR\_OIRI** | `true` | Set to `true` to enable DR for OIRI. |
+| **DR\_OAA** | `true` | Set to `true` to enable DR for OAA.|
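+
+A hedged sketch of the corresponding `dr.rsp` entries, using the key/value format described above:
+
+```
+DR_OHS=true
+DR_OUD=true
+DR_OAM=true
+DR_OIG=true
+DR_OIRI=false
+DR_OAA=false
+```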
+
+
+### Control Parameters
+These parameters are used to specify the type of Kubernetes deployment and the names of the temporary directories you want the deployment to use, during the provisioning process.
+
+| **Parameter** | **Sample Value** | **Comments** |
+| --- | --- | --- |
+|**USE\_REGISTRY** | `true` | Set to `true` to obtain images from a container registry.|
+| **USE\_INGESS** | `true` | Set to `true` if using an Ingress controller.|
+
+### Generic Parameters
+These parameters are used to specify generic properties.
+
+| **Parameter** | **Sample Value** | **Comments** |
+| --- | --- | --- |
+| **LOCAL\_WORKDIR** | `/workdir` | The location where you want to create the working directory.|
+| **K8\_DRDIR** | `/u01/oracle/user_projects/dr_scripts` | The location inside the Kubernetes containers to which DR files are copied.|
+
+
+### Container Registry Parameters
+These parameters are used to determine whether or not you are using a container registry. If you are, they allow you to store the login credentials to the repository as registry secrets in the individual product namespaces.
+
+If you are pulling images from GitHub or Docker hub, then you can also specify the login parameters here so that you can create the appropriate Kubernetes secrets.
+
+| **Parameter** | **Sample Value** | **Comments** |
+| --- | --- | --- |
+|**REGISTRY** | `iad.ocir.io/mytenancy` | Set to the location of your container registry.|
+|**REG\_USER** | `mytenancy/oracleidentitycloudservice/email@example.com` | Set to your registry user name.|
+|**REG\_PWD** | *``* | Set to your registry password. |
+|**CREATE\_REGSECRET** | `false` | Set to `true` to create a registry secret for automatically pulling images.|
+
+
+### Image Parameters
+These parameters are used to specify the names and versions of the container images you want to use for the deployment. These images must be available either locally or in your container registry. The names and versions must be identical to the images in the registry or the images stored locally.
+
+These can include registry prefixes if you use a registry. Use the `local/` prefix if you use the Oracle Cloud Native Environment.
+
+| **Parameter** | **Sample Value** | **Comments** |
+| --- | --- | --- |
+|**RSYNC\_IMAGE** | `ghcr.io/oracle/weblogic-kubernetes-operator` | The name of the rsync image you wish to use. |
+|**OPER\_VER** | `4.1.2` | The version of the WebLogic Kubernetes Operator.|
+|**RSYNC\_VER** | `latest` | The version of the RSYNC image.|
+
+### DR Parameters
+These parameters are specific to DR.
+
+| **Parameter** | **Sample Value** | **Comments** |
+| --- | --- | --- |
+|**DR\_TYPE** | `PRIMARY` | The role of the current site: PRIMARY or STANDBY.|
+|**DRNS** | `drns` | The Kubernetes namespace used to hold the DR PV sync jobs.|
+
+### NFS Parameters
+These parameters specify the NFS filesystem locations on the primary and standby sites.
+
+| **Parameter** | **Sample Value** | **Comments** |
+| --- | --- | --- |
+|**DR\_PRIMARY\_PVSERVER** | `primnfsserver.example.com` | The name or IP address of the NFS server used for persistent volumes in the primary site. **Note**: If you use a name, then the name must be resolvable inside the Kubernetes cluster. If it is not resolvable, you can add it by updating CoreDNS. See [Adding Individual Host Entries to CoreDNS](https://docs.oracle.com/en/middleware/fusion-middleware/12.2.1.4/ikedg/preparing-premises-enterprise-deployment.html#GUID-CC0AE601-6D0A-4000-A8CE-F83D2E1F836E).|
+|**DR\_PRIMARY\_NFS\_EXPORT** | `/export/IAMPVS` | The export path on the primary NFS where persistent volumes are located.|
+|**DR\_STANDBY\_PVSERVER** | `stbynfsserver.example.com` | The name or IP address of the NFS server used for persistent volumes in the standby site. **Note**: If you use a name, then the name must be resolvable inside the Kubernetes cluster. If it is not resolvable, you can add it by updating CoreDNS. See [Adding Individual Host Entries to CoreDNS](https://docs.oracle.com/en/middleware/fusion-middleware/12.2.1.4/ikedg/preparing-premises-enterprise-deployment.html#GUID-CC0AE601-6D0A-4000-A8CE-F83D2E1F836E).|
+|**DR\_STANDBY\_NFS\_EXPORT** | `/export/IAMPVS` | The export path on the standby NFS where persistent volumes are located.|
+
+### OUD Parameters
+These parameters are specific to OUD. When deploying OUD, you also require the generic LDAP parameters.
+
+| **Parameter** | **Sample Value** | **Comments** |
+| --- | --- | --- |
+|**OUDNS** | `oudns` | The Kubernetes namespace used to hold the OUD objects.|
+|**OUD\_POD\_PREFIX** | `edg`| The prefix used for the OUD pods.|
+|**OUD\_REPLICAS** | `2`| The number of OUD replicas to create.|
+|**OUD\_PRIMARY\_SHARE** | `$DR_PRIMARY_NFS_EXPORT/oudpv` | The mount point on the primary NFS where the OUD persistent volume is exported.|
+|**OUD\_PRIMARY\_CONFIG\_SHARE** | `$DR_PRIMARY_NFS_EXPORT/oudconfigpv`| The mount point on the primary NFS where the OUD configuration persistent volume is exported.|
+|**OUD\_STANDBY\_SHARE** | `$DR_STANDBY_NFS_EXPORT/oudpv` | The mount point on the standby NFS where the OUD persistent volume is exported.|
+|**OUD\_STANDBY\_CONFIG\_SHARE** | `$DR_STANDBY_NFS_EXPORT/oudconfigpv`| The mount point on the standby NFS where the OUD configuration persistent volume is exported.|
+|**OUD\_LOCAL\_SHARE** | `/nfs_volumes/oudconfigpv` | The local directory where the local OUD\_CONFIG\_SHARE is mounted. Used to hold seed files.|
+|**DR\_OUD\_MINS** | `5`| The frequency in minutes to run the PV sync job.|
+|**DR\_CREATE\_OUD\_JOB** | `true` | Determines whether or not to create a cron job to sync the PVs. Set to `false` if using hardware synchronization.|
+
+
+### Oracle HTTP Server Parameters
+These parameters are specific to OHS. They are used when backing up the Oracle HTTP Server configuration on the primary site and restoring it on the standby.
+ +| **Parameter** | **Sample Value** | **Comments** | +| --- | --- | --- | +|**OHS\_BASE** |`/u02/private`| The location of your OHS base directory. Binaries and Configuration files are below this location. The OracleInventory is also placed into this location when installing the Oracle HTTP Server.| +|**OHS\_ORACLE\_HOME** |`$OHS_BASE/oracle/products/ohs`| The location of your OHS binaries.| +|**OHS\_DOMAIN** |`$OHS_BASE/oracle/config/domains/ohsDomain`| The location of your OHS domain.| +|**OHS\_USER** |`opc`| The Oracle HTTP Software account user.| +|**OHS\_HOST1** |`webhost1.example.com`| The fully qualified name of the host running the first Oracle HTTP Server.| +|**OHS\_HOST2** |`webhost2.example.com`| The fully qualified name of the host running the second Oracle HTTP Server, leave it blank if you do not have a second Oracle HTTP Server.| +|**OHS1\_NAME** |`ohs1`| The component name of your first OHS instance.| +|**OHS2\_NAME** |`ohs2`| The component name of your second OHS instance.| + + +### OAM Parameters +These parameters determine how OAM is deployed and configured. + +| **Parameter** | **Sample Value** | **Comments** | +| --- | --- | --- | +|**OAMNS** | `oamns` | The Kubernetes namespace used to hold the OAM objects.| +|**OAM\_DOMAIN\_NAME** | `accessdomain` | The name of the OAM domain you want to create.| +|**OAM\_PRIMARY\_SHARE** | `$DR_PRIMARY_NFS_EXPORT/oampv` | The mount point on the primary NFS where OAM persistent volume is exported.| +|**OAM\_STANDBY\_SHARE** | `$DR_STANDBY_NFS_EXPORT/oampv` | The mount point on the standby NFS where OAM persistent volume is exported.| +|**OAM\_LOCAL\_SHARE** | `/nfs_volumes/oampv` | The local directory where OAM_SHARE is mounted. It is used by the deletion procedure.| +|**OAM\_SERVER\_INITIAL** | `2` | The number of OAM Managed Servers you want to start for normal running. You will need at least two servers for high availability.| +|**OAM\_PRIMARY\_DB\_SCAN** | `dbscan.example.com` | The database scan address of the primary grid infrastructure.| +|**OAM\_PRIMARY\_DB\_SERVICE** | `iadedg.example.com` | The database service that connects to the primary database where the OAM schemas are located.| +|**OAM\_STANDBY\_DB\_SCAN** | `stbyscan.example.com` | The database scan address of the standby grid infrastructure.| +|**OAM\_STANDBY\_DB\_SERVICE** | `iadedg.example.com` | The database service that connects to the standby database where the OAM schemas are located.| +|**OAM\_DB\_LISTENER** | `1521` | The database listener port.| +|**COPY\_WG\_FILES** | `true` | Set to true if you wish the DR scripts to copy the WebGate Artifacts to your Oracle HTTP Server(s).| +|**DR\_OAM\_MINS** | `720`| The frequency in minutes to run the PV sync job.| +|**DR\_CREATE\_OAM\_JOB** | `true` | Determines whether or not to create a cron job to sync the PVs. Set to false if using hardware synchronization.| + +### OIG Parameters +These parameters determine how OIG is provisioned and configured. 
+
+| **Parameter** | **Sample Value** | **Comments** |
+| --- | --- | --- |
+|**OIGNS** | `oigns` | The Kubernetes namespace used to hold the OIG objects.|
+|**OIG\_DOMAIN\_NAME** | `governancedomain` | The name of the OIG domain you want to create.|
+|**OIG\_PRIMARY\_SHARE** | `$DR_PRIMARY_NFS_EXPORT/oigpv` | The mount point on the primary NFS where the OIG persistent volume is exported.|
+|**OIG\_STANDBY\_SHARE** | `$DR_STANDBY_NFS_EXPORT/oigpv` | The mount point on the standby NFS where the OIG persistent volume is exported.|
+|**OIG\_LOCAL\_SHARE** | `/local_volumes/oigpv` | The local directory where OIG\_SHARE is mounted. It is used by the deletion procedure.|
+|**OIG\_SERVER\_INITIAL** | `2` | The number of OIM/SOA Managed Servers you want to start for normal running. You will need at least two servers for high availability.|
+|**OIG\_PRIMARY\_DB\_SCAN** | `dbscan.example.com` | The database scan address used by the primary grid infrastructure.|
+|**OIG\_STANDBY\_DB\_SCAN** | `stbyscan.example.com` | The database scan address used by the standby grid infrastructure.|
+|**OIG\_DB\_LISTENER** | `1521` | The database listener port.|
+|**OIG\_PRIMARY\_DB\_SERVICE** | `edgigd.example.com` | The database service that connects to the primary database where the OIG schemas are located.|
+|**OIG\_STANDBY\_DB\_SERVICE** | `edgigd.example.com` | The database service that connects to the standby database where the OIG schemas are located.|
+|**DR\_OIG\_MINS** | `720`| The frequency in minutes to run the PV sync job.|
+|**DR\_CREATE\_OIG\_JOB** | `true` | Determines whether or not to create a cron job to sync the PVs. Set to `false` if using hardware synchronization.|
+
+
+### OIRI Parameters
+These parameters determine how OIRI is provisioned and configured.
+
+| **Parameter** | **Sample Value** | **Comments** |
+| --- | --- | --- |
+|**OIRINS** | `oirins` | The Kubernetes namespace used to hold the OIRI objects.|
+|**DINGNS** | `dingns` | The Kubernetes namespace used to hold the OIRI DING objects.|
+|**OIRI\_PRIMARY\_SHARE** |`$DR_PRIMARY_NFS_EXPORT/oiripv`| The mount point on the primary NFS where the OIRI persistent volume is exported.|
+|**OIRI\_STANDBY\_SHARE** |`$DR_STANDBY_NFS_EXPORT/oiripv`| The mount point on the standby NFS where the OIRI persistent volume is exported.|
+|**OIRI\_LOCAL\_SHARE** |`/nfs_volumes/oiripv`| The local directory where the local OIRI_SHARE is mounted. It is used by the deletion procedure.|
+|**OIRI\_PRIMARY\_DING\_SHARE** |`$DR_PRIMARY_NFS_EXPORT/dingpv`| The mount point on the primary NFS where the OIRI DING persistent volume is exported.|
+|**OIRI\_STANDBY\_DING\_SHARE** |`$DR_STANDBY_NFS_EXPORT/dingpv`| The mount point on the standby NFS where the OIRI DING persistent volume is exported.|
+|**OIRI\_DING\_LOCAL\_SHARE** |`/nfs_volumes/dingpv`| The local directory where the local DING_SHARE is mounted.
It is used by the deletion procedure.| +|**OIRI\_PRIMARY\_WORK\_SHARE** |`$DR_PRIMARY_NFS_EXPORT/workpv`| The mount point on primary NFS where OIRI work persistent volume is exported.| +|**OIRI\_STANDBY\_WORK\_SHARE** |`$DR_STANDBY_NFS_EXPORT/workpv`| The mount point on standby NFS where OIRI work persistent volume is exported.| +|**OIRI\_PRIMARY\_DB\_SCAN** |`dbscan.example.com`| The database SCAN address of the primary grid infrastructure.| +|**OIRI\_STANDBY\_DB\_SCAN** |`stbyscan.example.com`| The database SCAN address of the standby grid infrastructure.| +|**OIRI\_DB\_LISTENER** |`1521`| The database listener port.| +|**OIRI\_DB\_PRIMARY\_SERVICE** |`edgoiri.example.com`| The database service that connects to the primary database where the OIRI schemas are located.| +|**OIRI\_STANDBY\_DB\_SERVICE** | `edgoiri.example.com` | The database service that connects to the standby database where the OIRI schemas are located.| +|**OIRI\_PRIMARY\_K8CONFIG** |`primary_k8config`| The name to call the Kubernetes configfile for the primary Kubernetes cluster| +|**OIRI\_STANDBY\_K8CONFIG** |`standby_k8config`| The name to call the Kubernetes configfile for the standby Kubernetes cluster| +|**OIRI\_PRIMARY\_K8CA** |`primary_ca.crt`| The name to call the Kubernetes certificate authority for the primary Kubernetes cluster.| +|**OIRI\_STANDBY\_K8CA** |`standby_ca.crt`| The name to call the Kubernetes certificate authority for the standby Kubernetes cluster.| +|**OIRI\_PRIMARY\_K8** |`10.0.0.5:6443`| Host and port of the Kubernetes primary cluster (obtained from kubeconfig file).| +|**OIRI\_STANDBY\_K8** |`10.1.0.5:6443`| Host and port of the Kubernetes standby cluster (obtained from kubeconfig file).| +|**DR\_OIRI\_MINS** | `720`| The frequency in minutes to run the PV sync job.| +|**DR\_CREATE\_OIRI\_JOB** | `true` | Determines whether or not to create a cron job to sync the PVs. Set to false if using hardware synchronization.| + +### OAA Parameters +These parameters determine how OAA is provisioned and configured. 
+ +| **Parameter** | **Sample Value** | **Comments** | +| --- | --- | --- | +|**OAANS** |`oaans`| The Kubernetes namespace used to hold the OAA objects.| +|**OAA\_MGT\_IMAGE** |`$REGISTRY/oaa-mgmt`| The OAA Management container image.| +|**OAAMGT\_VER** |`latest`| The OAA version.| +|**OAA\_PRIMARY\_CONFIG\_SHARE** |`$DR_PRIMARY_NFS_EXPORT/oaaconfigpv`| The mount point on primary NFS where OAA config persistent volume is exported.| +|**OAA\_STANDBY\_CONFIG\_SHARE** |`$DR_STANDBY_NFS_EXPORT/oaaconfigpv`| The mount point on standby NFS where OAA config persistent volume is exported.| +|**OAA\_PRIMARY\_CRED\_SHARE** |`$DR_PRIMARY_NFS_EXPORT/oaacredpv`| The mount point on the primary NFS where OAA credentials persistent volume is exported.| +|**OAA\_STANDBY\_CRED\_SHARE** |`$DR_STANDBY_NFS_EXPORT/oaacredpv`| The mount point on the standby NFS where OAA credentials persistent volume is exported.| +|**OAA\_PRIMARY\_LOG\_SHARE** |`$DR_PRIMARY_NFS_EXPORT/oaalogpv`| The mount point on the primary NFS where OAA logfiles persistent volume is exported.| +|**OAA\_PRIMARY\_VAULT\_SHARE** |`$DR_PRIMARY_NFS_EXPORT/oaavaultpv`| The mount point on the primary NFS where OAA vault persistent volume is exported.| +|**OAA\_STANDBY\_VAULT\_SHARE** |`$DR_STANDBY_NFS_EXPORT/oaavaultpv`| The mount point on the standby NFS where OAA vault persistent volume is exported.| +|**OAA\_STANDBY\_LOG\_SHARE** |`$DR_STANDBY_NFS_EXPORT/oaalogpv`| The mount point on the standby NFS where OAA logfiles persistent volume is exported.| +|**OAA\_LOCAL\_CONFIG\_SHARE** |`/nfs_volumes/oaaconfigpv`| The local directory where the local OAA_CONFIG_SHARE PV is mounted. It is used by the deletion procedure. | +|**OAA\_LOCAL\_CRED\_SHARE** |`/nfs_volumes/oaacredpv`| The local directory where the local OAA_CRED PV is mounted. It is used by the deletion procedure.| +|**OAA\_LOCAL\_LOG_SHARE** |`/nfs_volumes/oaalogpv`| The local directory where local OAA_LOG PV is mounted. It is used by the deletion procedure. | +|**OAA\_VAULT\_LOG_SHARE** |`/nfs_volumes/oaavaultpv`| The local directory where local OAA_VAULT PV is mounted. It is used by the deletion procedure. | +|**OAA\_PRIMARY\_DB\_SCAN** |`dbscan.example.com`| The database SCAN address of the primary grid infrastructure.| +|**OAA\_STANDBY\_DB\_SCAN** |`stbyscan.example.com`| The database SCAN address of the standby grid infrastructure.| +|**OAA\_DB\_LISTENER** |`1521`| The database listener port.| +|**OAA\_DB\_PRIMARY\_SERVICE** |`edgoaa.example.com`| The database service that connects to the primary database where the OAA schemas are located.| +|**OAA\_DB\_STANDBY\_SERVICE** |`edgoaa.example.com`| The database service that connects to the standby database where the OAA schemas are located.| +|**OAA\_VAULT\_TYPE** |`file or oci`| The type of vault to use: file system or OCI.| +|**OAA\_REPLICAS** |`2`| The number of OAA service pods to be created. 
For HA, the minimum number is two.| + diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/functions.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/functions.sh index 78ae5fc39..c30edd185 100755 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/functions.sh +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/functions.sh @@ -161,6 +161,18 @@ check_oper_exists() fi } +check_oper_running() +{ + print_msg "Check Operator is Running" + kubectl get pods -ALL | grep operator | head -1 > /dev/null 2>&1 + if [ $? = 0 ] + then + echo "Success" + else + echo "Failed Start WebLogic Kubernetes Operator before continuing." + exit 1 + fi +} # Helm Functions # install_operator() @@ -526,6 +538,27 @@ copy_samples() } +# Download MAA Samples to Directory +# +download_maa_samples() +{ + ST=$(date +%s) + print_msg "Downloading Oracle MAA Samples " + cd $LOCAL_WORKDIR + + if [ -d maa ] + then + echo "Already Exists - Skipping" + else + git clone -q $MAA_SAMPLES_REP > $LOCAL_WORKDIR/maa_sample_download.log 2>&1 + print_status $? $LOCAL_WORKDIR/maa_sample_download.log + chmod 700 $LOCAL_WORKDIR/maa/kubernetes-maa/*.sh >> $LOCAL_WORKDIR/maa_sample_download.log 2>&1 + fi + ET=$(date +%s) + + print_time STEP "Download MAA Samples" $ST $ET >> $LOGDIR/timings.log +} + # Create helper pod # create_helper_pod () @@ -635,6 +668,8 @@ scale_cluster() ;; esac + sleep 60 + if [ $REPLICAS = $CURRENT ] then printf "\t\t\tNo change\n" @@ -1095,7 +1130,7 @@ check_running() X=0 RETRIES=1 - MAX_RETRIES=50 + MAX_RETRIES=30 POD_RUNNING=false while [ $X -lt $MAX_RETRIES ] do @@ -1288,7 +1323,7 @@ print_msg() msg=$1 if [ "$STEPNO" = "" ] then - printf "$msg" + printf "$msg - " else printf "Executing Step $STEPNO:\t$msg - " fi @@ -1828,3 +1863,460 @@ check_ldapsearch() return $? } + +# Suspend DR cronjob +# +suspend_cronjob() +{ + NAMESPACE=$1 + JOBNAME=$2 + + ST=$(date +%s) + print_msg "Suspending DR Cron Job" + kubectl patch cronjobs $JOBNAME -p '{"spec" : {"suspend" : true }}' -n $NAMESPACE > $LOGDIR/suspend_cron.log 2>&1 + + print_status $? $LOGDIR/suspend_cron.log + + ET=$(date +%s) + print_time STEP "Suspend Cron Job" $ST $ET >> $LOGDIR/timings.log +} + +# Restart DR Cronjob +# +resume_cronjob() +{ + NAMESPACE=$1 + JOBNAME=$2 + + ST=$(date +%s) + print_msg "Resuming DR Cron Job" + kubectl patch cronjobs $JOBNAME -p '{"spec" : {"suspend" : false }}' -n $NAMESPACE > $LOGDIR/resume_cron.log 2>&1 + + print_status $? resume_cron.log + + ET=$(date +%s) +} + +# If creating a backup job create persistent volumes pointing to the file systems on both the primary and standby sites. +# +create_dr_pvs() +{ + PRODUCT=$1 + + PRIMARY_SHARE_VAR=${PRODUCT}_PRIMARY_SHARE + STANDBY_SHARE_VAR=${PRODUCT}_STANDBY_SHARE + ST=$(date +%s) + + + print_msg "Creating DR Persistent Volume Files" + cp $TEMPLATE_DIR/dr_pv.yaml $WORKDIR/dr_primary_pv.yaml + cp $TEMPLATE_DIR/dr_pv.yaml $WORKDIR/dr_dr_pv.yaml + if [ "$DR_TYPE" = "PRIMARY" ] + then + update_variable "" $DR_PRIMARY_PVSERVER $WORKDIR/dr_primary_pv.yaml + update_variable "" $DR_STANDBY_PVSERVER $WORKDIR/dr_dr_pv.yaml + if [ ! 
"$PRODUCT" = "OAA" ] + then + update_variable "<${PRODUCT}_SHARE_PATH>" ${!PRIMARY_SHARE_VAR} $WORKDIR/dr_primary_pv.yaml + update_variable "<${PRODUCT}_SHARE_PATH>" ${!STANDBY_SHARE_VAR} $WORKDIR/dr_dr_pv.yaml + else + update_variable "" $OAA_PRIMARY_CONFIG_SHARE $WORKDIR/dr_primary_pv.yaml + update_variable "" $OAA_STANDBY_CONFIG_SHARE $WORKDIR/dr_dr_pv.yaml + update_variable "" $OAA_PRIMARY_VAULT_SHARE $WORKDIR/dr_primary_pv.yaml + update_variable "" $OAA_STANDBY_VAULT_SHARE $WORKDIR/dr_dr_pv.yaml + update_variable "" $OAA_PRIMARY_CRED_SHARE $WORKDIR/dr_primary_pv.yaml + update_variable "" $OAA_STANDBY_CRED_SHARE $WORKDIR/dr_dr_pv.yaml + update_variable "" $OAA_PRIMARY_LOG_SHARE $WORKDIR/dr_primary_pv.yaml + update_variable "" $OAA_STANDBY_LOG_SHARE $WORKDIR/dr_dr_pv.yaml + fi + + if [ "$PRODUCT" = "OIRI" ] + then + update_variable "" ${OIRI_DING_PRIMARY_SHARE} $WORKDIR/dr_primary_pv.yaml + update_variable "" ${OIRI_WORK_PRIMARY_SHARE} $WORKDIR/dr_primary_pv.yaml + update_variable "" ${OIRI_DING_STANDBY_SHARE} $WORKDIR/dr_dr_pv.yaml + update_variable "" ${OIRI_WORK_STANDBY_SHARE} $WORKDIR/dr_dr_pv.yaml + update_variable "" $DR_STANDBY_PVSERVER $WORKDIR/dr_dr_pv.yaml + fi + else + update_variable "" $DR_PRIMARY_PVSERVER $WORKDIR/dr_dr_pv.yaml + update_variable "" $DR_STANDBY_PVSERVER $WORKDIR/dr_primary_pv.yaml + + if [ ! "$PRODUCT" = "OAA" ] + then + update_variable "<${PRODUCT}_SHARE_PATH>" ${!STANDBY_SHARE_VAR} $WORKDIR/dr_primary_pv.yaml + update_variable "<${PRODUCT}_SHARE_PATH>" ${!PRIMARY_SHARE_VAR} $WORKDIR/dr_dr_pv.yaml + else + update_variable "" $OAA_STANDBY_CONFIG_SHARE $WORKDIR/dr_primary_pv.yaml + update_variable "" $OAA_PRIMARY_CONFIG_SHARE $WORKDIR/dr_dr_pv.yaml + update_variable "" $OAA_STANDBY_VAULT_SHARE $WORKDIR/dr_primary_pv.yaml + update_variable "" $OAA_PRIMARY_VAULT_SHARE $WORKDIR/dr_dr_pv.yaml + update_variable "" $OAA_STANDBY_CRED_SHARE $WORKDIR/dr_primary_pv.yaml + update_variable "" $OAA_PRIMARY_CRED_SHARE $WORKDIR/dr_dr_pv.yaml + update_variable "" $OAA_STANDBY_LOG_SHARE $WORKDIR/dr_primary_pv.yaml + update_variable "" $OAA_PRIMARY_LOG_SHARE $WORKDIR/dr_dr_pv.yaml + fi + if [ "$PRODUCT" = "OIRI" ] + then + update_variable "" ${OIRI_DING_STANDBY_SHARE} $WORKDIR/dr_primary_pv.yaml + update_variable "" ${OIRI_WORK_STANDBY_SHARE} $WORKDIR/dr_primary_pv.yaml + update_variable "" ${OIRI_DING_PRIMARY_SHARE} $WORKDIR/dr_dr_pv.yaml + update_variable "" ${OIRI_WORK_PRIMARY_SHARE} $WORKDIR/dr_dr_pv.yaml + fi + fi + update_variable "" primary $WORKDIR/dr_primary_pv.yaml + update_variable "" standby $WORKDIR/dr_dr_pv.yaml + + print_status $? + + printf "\t\t\tCreating DR Primary PV - " + kubectl create -f $WORKDIR/dr_primary_pv.yaml > $LOGDIR/create_pv.log 2>&1 + print_status $? $LOGDIR/create_pv.log + printf "\t\t\tCreating DR Standby PV - " + kubectl create -f $WORKDIR/dr_dr_pv.yaml >> $LOGDIR/create_pv.log 2>&1 + print_status $? $LOGDIR/create_pv.log + + ET=$(date +%s) + print_time STEP "Create DR Persistent Volumes" $ST $ET >> $LOGDIR/timings.log +} + +# Create persistent volumes used by DR Cron job. +# +create_dr_pv() +{ + ST=$(date +%s) + print_msg "Creating DR Persistent Volume" + + kubectl create -f $WORKDIR/dr_dr_pv.yaml > $LOGDIR/create_dr_pv.log 2>&1 + print_status $? $LOGDIR/create_dr_pv.log + + ET=$(date +%s) + print_time STEP "Create DR Persistent Volume " $ST $ET >> $LOGDIR/timings.log +} + +# Create persistent volume claims used by DR Cron job. 
+# +create_dr_pvcs() +{ + ST=$(date +%s) + print_msg "Creating DR Persistent Volume Claim Files" + cp $TEMPLATE_DIR/dr_pvc.yaml $WORKDIR/dr_primary_pvc.yaml + cp $TEMPLATE_DIR/dr_pvc.yaml $WORKDIR/dr_dr_pvc.yaml + + update_variable "" $DRNS $WORKDIR/dr_primary_pvc.yaml + update_variable "" primary $WORKDIR/dr_primary_pvc.yaml + + update_variable "" $DRNS $WORKDIR/dr_dr_pvc.yaml + update_variable "" standby $WORKDIR/dr_dr_pvc.yaml + + print_status $? + + printf "\t\t\tCreating Primary Persistent Volume Claim - " + kubectl create -f $WORKDIR/dr_primary_pvc.yaml > $LOGDIR/create_pvc.log 2>&1 + print_status $? $LOGDIR/create_pvc.log + + printf "\t\t\tCreating Standby Persistent Volume Claim - " + kubectl create -f $WORKDIR/dr_dr_pvc.yaml >> $LOGDIR/create_pvc.log 2>&1 + print_status $? $LOGDIR/create_pvc.log + + ET=$(date +%s) + print_time STEP "Create DR Persistent Volume Claim Files" $ST $ET >> $LOGDIR/timings.log +} + +create_dr_pvc() +{ + ST=$(date +%s) + print_msg "Creating DR Persistent Volume Claim" + kubectl create -f $WORKDIR/dr_dr_pvc.yaml > $LOGDIR/create_dr_pvc.log 2>&1 + print_status $? $LOGDIR/create_dr_pvc.log + + ET=$(date +%s) + print_time STEP "Create DR Persistent Volume Claim " $ST $ET >> $LOGDIR/timings.log +} + +# Create a DR Config Map to control how DR Cronjobs work. +# +create_dr_configmap() +{ + ST=$(date +%s) + print_msg "Creating DR Config Map" + cp $TEMPLATE_DIR/../general/dr_cm.yaml $WORKDIR/dr_cm.yaml + update_variable "" $DRNS $WORKDIR/dr_cm.yaml + update_variable "" $ENV_TYPE $WORKDIR/dr_cm.yaml + update_variable "" $DR_TYPE $WORKDIR/dr_cm.yaml + update_variable "" $OIG_DOMAIN_NAME $WORKDIR/dr_cm.yaml + update_variable "" $OAM_DOMAIN_NAME $WORKDIR/dr_cm.yaml + + if [ "$DR_TYPE" = "PRIMARY" ] + then + update_variable "" $OAM_PRIMARY_DB_SCAN $WORKDIR/dr_cm.yaml + update_variable "" $OAM_STANDBY_DB_SCAN $WORKDIR/dr_cm.yaml + update_variable "" $OIG_PRIMARY_DB_SCAN $WORKDIR/dr_cm.yaml + update_variable "" $OIG_STANDBY_DB_SCAN $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_PRIMARY_DB_SCAN $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_STANDBY_DB_SCAN $WORKDIR/dr_cm.yaml + update_variable "" $OAA_PRIMARY_DB_SCAN $WORKDIR/dr_cm.yaml + update_variable "" $OAA_STANDBY_DB_SCAN $WORKDIR/dr_cm.yaml + update_variable "" $OAM_PRIMARY_DB_SERVICE $WORKDIR/dr_cm.yaml + update_variable "" $OAM_STANDBY_DB_SERVICE $WORKDIR/dr_cm.yaml + update_variable "" $OIG_PRIMARY_DB_SERVICE $WORKDIR/dr_cm.yaml + update_variable "" $OIG_STANDBY_DB_SERVICE $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_PRIMARY_DB_SERVICE $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_STANDBY_DB_SERVICE $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_PRIMARY_K8CONFIG $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_STANDBY_K8CONFIG $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_PRIMARY_K8CA $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_STANDBY_K8CA $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_STANDBY_K8 $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_PRIMARY_K8 $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_STANDBY_K8 $WORKDIR/dr_cm.yaml + update_variable "" $OAA_PRIMARY_DB_SERVICE $WORKDIR/dr_cm.yaml + update_variable "" $OAA_STANDBY_DB_SERVICE $WORKDIR/dr_cm.yaml + else + update_variable "" $OAM_STANDBY_DB_SCAN $WORKDIR/dr_cm.yaml + update_variable "" $OAM_PRIMARY_DB_SCAN $WORKDIR/dr_cm.yaml + update_variable "" $OIG_STANDBY_DB_SCAN $WORKDIR/dr_cm.yaml + update_variable "" $OIG_PRIMARY_DB_SCAN $WORKDIR/dr_cm.yaml + update_variable "" $OAA_STANDBY_DB_SCAN $WORKDIR/dr_cm.yaml + update_variable "" 
$OAA_PRIMARY_DB_SCAN $WORKDIR/dr_cm.yaml + update_variable "" $OAM_STANDBY_DB_SERVICE $WORKDIR/dr_cm.yaml + update_variable "" $OAM_PRIMARY_DB_SERVICE $WORKDIR/dr_cm.yaml + update_variable "" $OIG_STANDBY_DB_SERVICE $WORKDIR/dr_cm.yaml + update_variable "" $OIG_PRIMARY_DB_SERVICE $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_STANDBY_DB_SERVICE $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_PRIMARY_DB_SERVICE $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_STANDBY_DB_SCAN $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_PRIMARY_DB_SCAN $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_STANDBY_K8CONFIG $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_PRIMARY_K8CONFIG $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_STANDBY_K8CA $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_PRIMARY_K8CA $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_PRIMARY_K8 $WORKDIR/dr_cm.yaml + update_variable "" $OIRI_STANDBY_K8 $WORKDIR/dr_cm.yaml + update_variable "" $OAA_STANDBY_DB_SERVICE $WORKDIR/dr_cm.yaml + update_variable "" $OAA_PRIMARY_DB_SERVICE $WORKDIR/dr_cm.yaml + fi + + kubectl apply -f $WORKDIR/dr_cm.yaml > $LOGDIR/dr_cm.log 2>&1 + if [ $? = 0 ] + then + echo "Success" + else + grep -q Exists $LOGDIR/dr_cm.log + if [ $? = 0 ] + then + echo "Already Exists" + else + echo "Failed - See $LOGDIR/dr_cm.log." + exit 1 + fi + fi + + ET=$(date +%s) + print_time STEP "Create DR Config Map" $ST $ET >> $LOGDIR/timings.log +} + +# Copy the script that the DR CronJob uses to the product PV +# +copy_dr_script() +{ + ST=$(date +%s) + + PV_MOUNT_VAR=${PRODUCT}_LOCAL_SHARE + + print_msg "Copy $PRODUCT DR Script to PV" + printf "\n\t\t\tChecking ${!PV_MOUNT_VAR} is mounted locally - " + + LOCAL_SHARE=${!PV_MOUNT_VAR} + df $LOCAL_SHARE > /dev/null 2>&1 + print_status $? + + printf "\t\t\tCreating DR Script Directory - " + if [ -e $LOCAL_SHARE/dr_scripts ] + then + echo " Already Exists" + else + mkdir $LOCAL_SHARE/dr_scripts > $LOGDIR/copy_drscripts.log 2>&1 + print_status $? $LOGDIR/copy_drscripts.log + fi + + printf "\t\t\tCopy DR Script to Container - " + cp $TEMPLATE_DIR/${product_type}_dr.sh $LOCAL_SHARE/dr_scripts + print_status $? $LOGDIR/copy_drscripts.log + + printf "\t\t\tSet execute permission - " + chmod 700 $LOCAL_SHARE/dr_scripts/${product_type}_dr.sh + print_status $? $LOGDIR/copy_drscripts.log + + ET=$(date +%s) + print_time STEP "Copy DR Script" $ST $ET >> $LOGDIR/timings.log +} + +# Create a cronjob to Rsync PVs to the standby site, and update DB connections +# +create_dr_cronjob() +{ + ST=$(date +%s) + print_msg "Creating DR Cron Job" + kubectl create -f $WORKDIR/dr_cron.yaml > $LOGDIR/create_dr_cron.log 2>&1 + + print_status $? $LOGDIR/create_dr_cron.log + + ET=$(date +%s) + print_time STEP "Create DR Cron Job" $ST $ET >> $LOGDIR/timings.log +} + +# Create a one-of job to initialise the PVs, based on the cronjob +# +initialise_dr() +{ + ST=$(date +%s) + print_msg "Creating Job to Initialise $PRODUCT DR " + kubectl create job --from=cronjob.batch/${product_type}rsyncdr ${product_type}-initialise-$ST -n $DRNS > $LOGDIR/initialise_dr-$ST.log 2>&1 + print_status $? $LOGDIR/initialise_dr-$ST.log + printf "\t\t\tJob - ${product_type}-initialise-$ST Created in namespace $DRNS - " + sleep 10 + PODNAME=`kubectl get pod -n $DRNS | grep ${product_type}-initialise-$ST | tail -1 | awk '{ print $1 }'` + if [ "$PODNAME" = "" ] + then + echo "Failed to create job." 
+
+# Create a one-off job to initialise the PVs, based on the cronjob
+#
+initialise_dr()
+{
+ ST=$(date +%s)
+ print_msg "Creating Job to Initialise $PRODUCT DR "
+ kubectl create job --from=cronjob.batch/${product_type}rsyncdr ${product_type}-initialise-$ST -n $DRNS > $LOGDIR/initialise_dr-$ST.log 2>&1
+ print_status $? $LOGDIR/initialise_dr-$ST.log
+ printf "\t\t\tJob - ${product_type}-initialise-$ST Created in namespace $DRNS - "
+ sleep 10
+ PODNAME=`kubectl get pod -n $DRNS | grep ${product_type}-initialise-$ST | tail -1 | awk '{ print $1 }'`
+ if [ "$PODNAME" = "" ]
+ then
+ echo "Failed to create job."
+ exit 1
+ else
+ echo "Success"
+ fi
+ printf "\n\n\t\t\tMonitor job using the command: kubectl logs -n $DRNS $PODNAME\n\n"
+
+ ET=$(date +%s)
+ print_time STEP "Initialise $PRODUCT DR" $ST $ET >> $LOGDIR/timings.log
+}
+
+# Switch a site's role from Primary to Standby and vice versa.
+#
+switch_dr_mode()
+{
+ current_mode=$(kubectl get cm -n $DRNS dr-cm -o yaml | grep DR_TYPE | cut -f2 -d: | tr -d ' ')
+
+ if [ "$current_mode" = "PRIMARY" ]
+ then
+ new_mode="STANDBY"
+ else
+ new_mode="PRIMARY"
+ fi
+
+ echo -n "You are requesting to switch this site's DR mode from $current_mode to $new_mode. Is this correct (y/n) ?"
+ read ANS
+
+ if [ "$ANS" = "y" ]
+ then
+ CMD="kubectl patch configmap -n $DRNS dr-cm -p '{\"data\":{\"DR_TYPE\":\"$new_mode\"}}'"
+ eval $CMD
+ if [ $? -eq 0 ]
+ then
+ echo "Mode changed successfully."
+ else
+ echo "Unable to change the mode."
+ fi
+ fi
+}
+
+# Stop all deployments running in a namespace.
+#
+stop_deployment()
+{
+ DEPLOYNS=$1
+ ST=$(date +%s)
+ print_msg "Stop Deployments in namespace $DEPLOYNS"
+ deployments=$(kubectl get deployment -n $DEPLOYNS | grep -v NAME | awk '{print $1}')
+ for deployment in $deployments
+ do
+ echo Stopping Deployment : $deployment
+ kubectl patch deployment -p '{"spec" : {"replicas" : 0 }}' -n $DEPLOYNS $deployment
+ done
+ ET=$(date +%s)
+ print_time STEP "Stop Deployments in $DEPLOYNS" $ST $ET >> $LOGDIR/timings.log
+}
+
+# Start all deployments in a namespace
+#
+start_deployment()
+{
+ DEPLOYNS=$1
+ REPLICAS=$2
+ ST=$(date +%s)
+ print_msg "Start Deployments in namespace $DEPLOYNS"
+ deployments=$(kubectl get deployment -n $DEPLOYNS | grep -v NAME | awk '{print $1}')
+ for deployment in $deployments
+ do
+ echo Starting Deployment : $deployment
+ kubectl patch deployment -p "{\"spec\" : {\"replicas\" : $REPLICAS }}" -n $DEPLOYNS $deployment
+ done
+ ET=$(date +%s)
+ print_time STEP "Start Deployments in $DEPLOYNS" $ST $ET >> $LOGDIR/timings.log
+}
+
+# Back up the primary OHS config.
+#
+get_ohs_config()
+{
+ OHS_SERVERS=$1
+
+ ST=$(date +%s)
+ print_msg "Copying OHS configuration Files to $LOCAL_WORKDIR/OHS"
+
+ $SCP $OHS_HOST1:$OHS_DOMAIN/config/fmwconfig/components/OHS/$OHS1_NAME/moduleconf/*vh.conf $WORKDIR > $LOGDIR/copy_ohs.log 2>&1
+ print_status $? $LOGDIR/copy_ohs.log
+
+ ET=$(date +%s)
+ print_time STEP "Copy OHS Configuration " $ST $ET >> $LOGDIR/timings.log
+}
+
+# Create a tar file of the OHS config
+#
+tar_ohs_config()
+{
+
+ ST=$(date +%s)
+ print_msg "Tarring OHS configuration Files "
+
+ cd $WORKDIR
+ tar cvfz ohs_config.tar.gz *vh.conf > $LOGDIR/ohs_tar.log 2>&1
+ print_status $? $LOGDIR/ohs_tar.log
+
+ ET=$(date +%s)
+ print_time STEP "Tarring OHS Configuration " $ST $ET >> $LOGDIR/timings.log
+}
+
+# Untar the OHS config on the DR site.
+#
+untar_ohs_config()
+{
+
+ ST=$(date +%s)
+ print_msg "Untarring OHS configuration Files "
+
+ cd $WORKDIR
+ tar xvfz $WORKDIR/ohs_config.tar.gz *vh.conf > $LOGDIR/ohs_untar.log 2>&1
+ print_status $? $LOGDIR/ohs_untar.log
+
+ ET=$(date +%s)
+ print_time STEP "Untarring OHS Configuration " $ST $ET >> $LOGDIR/timings.log
+}
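+
+# For illustration, the per-product DR scripts run by the cron job can consume
+# the site role toggled by switch_dr_mode above, along these lines:
+#
+#   DR_TYPE=$(kubectl get cm -n $DRNS dr-cm -o jsonpath='{.data.DR_TYPE}')
+#   [ "$DR_TYPE" = "STANDBY" ] && echo "Standby site - not pushing changes"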
+
+# Copy files to the DR Host
+#
+copy_files_to_dr()
+{
+
+ FILE=$1
+ ST=$(date +%s)
+ print_msg "Copying $FILE to DR System"
+
+ DIR=$(dirname $FILE)
+ printf "\n\t\t\tCreate Directory $DIR on $DR_HOST - "
+ $SSH -o ConnectTimeout=4 $DR_USER@$DR_HOST "mkdir -p $DIR" > $LOGDIR/copy_file_to_dr.log 2>&1
+ print_status $? $LOGDIR/copy_file_to_dr.log
+ printf "\t\t\tCopying file $FILE to $DR_HOST - "
+ $SCP $FILE $DR_USER@$DR_HOST:$FILE >> $LOGDIR/copy_file_to_dr.log 2>&1
+ print_status $? $LOGDIR/copy_file_to_dr.log
+ ET=$(date +%s)
+ print_time STEP "Copying $FILE to $DR_HOST" $ST $ET >> $LOGDIR/timings.log
+}
diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oaa_functions.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oaa_functions.sh
index 7df8f0be6..de9bb4e39 100755
--- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oaa_functions.sh
+++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oaa_functions.sh
@@ -50,16 +50,23 @@ create_helper()
 print_status $? $LOGDIR/create_mgmt.log
 check_running $OAANS oaa-mgmt
- printf "\t\t\tCopying Settings file - "
-
- kubectl exec -it -n $OAANS oaa-mgmt -- cp /u01/oracle/installsettings/installOAA.properties /u01/oracle/scripts/settings/ >> $LOGDIR/create_mgmt.log
- print_status $? $LOGDIR/create_mgmt.log
 fi
 ET=$(date +%s)
 print_time STEP "Create OAA Management container" $ST $ET >> $LOGDIR/timings.log
 }
+copy_settings_file()
+{
+ print_msg "Copying Template OAA Property file"
+ ST=$(date +%s)
+
+ kubectl exec -it -n $OAANS oaa-mgmt -- cp /u01/oracle/installsettings/installOAA.properties /u01/oracle/scripts/settings/ >> $LOGDIR/create_mgmt.log
+ print_status $? $LOGDIR/create_mgmt.log
+ ET=$(date +%s)
+ print_time STEP "Copy OAA settings file" $ST $ET >> $LOGDIR/timings.log
+}
+
 # Copy file to Kubernetes Container
 #
 copy_to_oaa()
@@ -242,6 +249,23 @@ prepare_property_file()
 sed -i "/sms:/{n;s/replicaCount.*/replicaCount: $OAA_SMS_REPLICAS/}" $override
 sed -i "/oaa-policy:/{n;s/replicaCount.*/replicaCount: $OAA_POLICY_REPLICAS/}" $override
 sed -i "/push:/{n;s/replicaCount.*/replicaCount: $OAA_PUSH_REPLICAS/}" $override
+ echo "resources:" >> $override
+ echo " requests:" >> $override
+ echo " cpu: $OAA_OAA_CPU" >> $override
+ echo " memory: \"$OAA_OAA_MEMORY\"" >> $override
+ sed -i "/spui:/a\ resources:\n requests:\n cpu: $OAA_SPUI_CPU\n memory: \"$OAA_SPUI_MEMORY\"" $override
+ sed -i "/totp:/a\ resources:\n requests:\n cpu: $OAA_TOTP_CPU\n memory: \"$OAA_TOTP_MEMORY\"" $override
+ sed -i "/yotp:/a\ resources:\n requests:\n cpu: $OAA_YOTP_CPU\n memory: \"$OAA_YOTP_MEMORY\"" $override
+ sed -i "/fido:/a\ resources:\n requests:\n cpu: $OAA_FIDO_CPU\n memory: \"$OAA_FIDO_MEMORY\"" $override
+ sed -i "/email:/a\ resources:\n requests:\n cpu: $OAA_EMAIL_CPU\n memory: \"$OAA_EMAIL_MEMORY\"" $override
+ sed -i "/push:/a\ resources:\n requests:\n cpu: $OAA_PUSH_CPU\n memory: \"$OAA_PUSH_MEMORY\"" $override
+ sed -i "/sms:/a\ resources:\n requests:\n cpu: $OAA_SMS_CPU\n memory: \"$OAA_SMS_MEMORY\"" $override
+ sed -i "/oaa-kba:/a\ resources:\n requests:\n cpu: $OAA_KBA_CPU\n memory: \"$OAA_KBA_MEMORY\"" $override
+ sed -i "/oaa-policy:/a\ resources:\n requests:\n cpu: $OAA_POLICY_CPU\n memory: \"$OAA_POLICY_MEMORY\"" $override
+ sed -i "/customfactor:/a\ resources:\n requests:\n cpu: $OAA_CUSTOM_CPU\n memory: \"$OAA_CUSTOM_MEMORY\"" $override
+ sed -i "/risk:/a\ resources:\n requests:\n cpu: $OAA_RISK_CPU\n memory: \"$OAA_RISK_MEMORY\"" $override
+ sed -i "/^riskcc:/a\ resources:\n requests:\n cpu: $OAA_RISKCC_CPU\n memory: \"$OAA_RISKCC_MEMORY\"" $override
+ sed -i "/oaa-admin-ui:/a\ resources:\n requests:\n cpu: $OAA_ADMIN_CPU\n memory: \"$OAA_ADMIN_MEMORY\"" $override
 copy_to_oaa $propfile /u01/oracle/scripts/settings/installOAA.properties $OAANS oaa-mgmt >> $LOGDIR/create_property.log 2>&1
@@ -250,6 +274,7 @@
 ET=$(date +%s)
print_time STEP "Create property_file" $ST $ET >> $LOGDIR/timings.log + } @@ -772,6 +797,33 @@ deploy_oaa() print_time STEP "Deploy OAA" $ST $ET >> $LOGDIR/timings.log } +# Deploy OAA on DR +# +deploy_oaa_dr() +{ + + print_msg "Deploy OAA" + ST=$(date +%s) + + oaa_mgmt "/u01/oracle/OAA.sh -f installOAA.properties" > $LOGDIR/deploy_oaa.log 2>&1 + if [ $? -gt 0 ] + then + grep -q "OAUTH validation failed" $LOGDIR/deploy_oaa.log + if [ $? = 0 ] + + then + echo "Executing command /u01/oracle/scripts/validateOauthForOAA.sh -f /u01/oracle/scripts/settings/installOAA.properties -d true to get more information." >> $LOGDIR/deploy_oaa.log + oaa_mgmt "/u01/oracle/scripts/validateOauthForOAA.sh -f /u01/oracle/scripts/settings/installOAA.properties -d true" >> $LOGDIR/deploy_oaa.log 2>&1 + fi + echo "Failed - See Logfile $LOGDIR/deploy_oaa.log" + exit 1 + else + echo "Success." + fi + + ET=$(date +%s) + print_time STEP "Deploy OAA" $ST $ET >> $LOGDIR/timings.log +} # Deploy OAA Snapshot # import_snapshot() @@ -1306,3 +1358,79 @@ create_test_user() ET=$(date +%s) print_time STEP "Create Test User $OAA_USER in LDAP" $ST $ET >> $LOGDIR/timings.log } + +# Modify the template to create a cronjob +# +create_dr_cronjob_files() +{ + ST=$(date +%s) + print_msg "Creating Cron Job Files" + + cp $TEMPLATE_DIR/dr_cron.yaml $WORKDIR/dr_cron.yaml + update_variable "" $DRNS $WORKDIR/dr_cron.yaml + update_variable "" $DR_OAA_MINS $WORKDIR/dr_cron.yaml + update_variable "" $RSYNC_IMAGE $WORKDIR/dr_cron.yaml + update_variable "" $RSYNC_VER $WORKDIR/dr_cron.yaml + + print_status $? + + ET=$(date +%s) + print_time STEP "Create DR Cron Job Files" $ST $ET >> $LOGDIR/timings.log +} + +# Create Persistent Volumes used by DR Job. +# +create_dr_pv() +{ + ST=$(date +%s) + print_msg "Creating DR Persistent Volume" + + kubectl create -f $WORKDIR/dr_dr_pv.yaml > $LOGDIR/create_dr_pv.log 2>&1 + print_status $? $LOGDIR/create_dr_pv.log + + ET=$(date +%s) + print_time STEP "Create DR Persistent Volume " $ST $ET >> $LOGDIR/timings.log +} + +# Create Persistent Volume Claims used by DR Job. +# +create_dr_pvc() +{ + ST=$(date +%s) + print_msg "Creating DR Persistent Volume Claim" + kubectl create -f $WORKDIR/dr_dr_pvc.yaml > $LOGDIR/create_dr_pvc.log 2>&1 + print_status $? $LOGDIR/create_dr_pvc.log + + ET=$(date +%s) + print_time STEP "Create DR Persistent Volume Claim " $ST $ET >> $LOGDIR/timings.log +} + +# Delete the OAA files created by a fresh installation. +# +delete_oaa_files() +{ + ST=$(date +%s) + print_msg "Delete OAA Files" + + if [ -e $OAA_LOCAL_CONFIG_SHARE ] && [ ! "$OAA_LOCAL_CONFIG_SHARE" = "" ] + then + echo rm -rf $OAA_LOCAL_CONFIG_SHARE/helm $OAA_LOCAL_CONFIG_SHARE/installOAA.properties $OAA_LOCAL_CONFIG_SHARE/oaaoverride.yaml > $LOGDIR/delete_oaa.log 2>&1 + rm -rf $OAA_LOCAL_CONFIG_SHARE/helm $OAA_LOCAL_CONFIG_SHARE/installOAA.properties $OAA_LOCAL_CONFIG_SHARE/oaaoverride.yaml >> $LOGDIR/delete_oaa.log 2>&1 + else + echo "Share does not exist, or OAA_LOCAL_CONFIG_SHARE is not defined." + fi + + if [ -e $OAA_LOCAL_VAULT_SHARE ] && [ ! "$OAA_LOCAL_VAULT_SHARE" = "" ] + then + echo rm -rf $OAA_LOCAL_VAULT_SHARE/.accessstore.pkcs12 > $LOGDIR/delete_oaa.log 2>&1 + rm -rf $OAA_LOCAL_VAULT_SHARE/.accessstore.pkcs12 >> $LOGDIR/delete_oaa.log 2>&1 + else + echo "Share does not exist, or OAA_LOCAL_VAULT_SHARE is not defined." + fi + print_status $? 
$LOGDIR/delete_oaa.log + + ET=$(date +%s) + print_time STEP "Delete OAA Files" $ST $ET >> $LOGDIR/timings.log +} + + diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oam_functions.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oam_functions.sh index 6f5d1ae3a..d702f7e34 100755 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oam_functions.sh +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oam_functions.sh @@ -81,6 +81,10 @@ update_java_parameters() printf "\t\t\tUpdating Java Parameters - " cp $TEMPLATE_DIR/oamDomain.sedfile $WORKDIR update_variable "" "$OAMSERVER_JAVA_PARAMS" $WORKDIR/oamDomain.sedfile + update_variable "" "$OAM_MEMORY" $WORKDIR/oamDomain.sedfile + update_variable "" "$OAM_MAX_MEMORY" $WORKDIR/oamDomain.sedfile + update_variable "" "$OAM_MAX_CPU" $WORKDIR/oamDomain.sedfile + update_variable "" "$OAM_CPU" $WORKDIR/oamDomain.sedfile cd $WORKDIR/samples/create-access-domain/domain-home-on-pv sed -i -f $WORKDIR/oamDomain.sedfile output/weblogic-domains/$OAM_DOMAIN_NAME/domain.yaml @@ -733,21 +737,26 @@ create_oam_ohs_config() if [ ! "$OHS_HOST1" = "" ] then + if [ ! "$INGRESS_HOST" = "" ] + then + K8_WORKER_HOST1=$INGRESS_HOST + K8_WORKER_HOST2=$INGRESS_HOST + fi cp $TEMPLATE_DIR/iadadmin_vh.conf $OHS_PATH/$OHS_HOST1/iadadmin_vh.conf cp $TEMPLATE_DIR/login_vh.conf $OHS_PATH/$OHS_HOST1/login_vh.conf update_variable "" $OHS_HOST1 $OHS_PATH/$OHS_HOST1/iadadmin_vh.conf update_variable "" $OHS_PORT $OHS_PATH/$OHS_HOST1/iadadmin_vh.conf update_variable "" $OAM_ADMIN_LBR_HOST $OHS_PATH/$OHS_HOST1/iadadmin_vh.conf update_variable "" $OAM_ADMIN_LBR_PORT $OHS_PATH/$OHS_HOST1/iadadmin_vh.conf - update_variable "" ${INGRESS_HOST:=$K8_WORKER_HOST1} $OHS_PATH/$OHS_HOST1/iadadmin_vh.conf - update_variable "" ${INGRESS_HOST:=$K8_WORKER_HOST2} $OHS_PATH/$OHS_HOST1/iadadmin_vh.conf + update_variable "" $K8_WORKER_HOST1 $OHS_PATH/$OHS_HOST1/iadadmin_vh.conf + update_variable "" $K8_WORKER_HOST2 $OHS_PATH/$OHS_HOST1/iadadmin_vh.conf update_variable "" $OHS_HOST1 $OHS_PATH/$OHS_HOST1/login_vh.conf update_variable "" $OHS_PORT $OHS_PATH/$OHS_HOST1/login_vh.conf update_variable "" $OAM_LOGIN_LBR_PROTOCOL $OHS_PATH/$OHS_HOST1/login_vh.conf update_variable "" $OAM_LOGIN_LBR_HOST $OHS_PATH/$OHS_HOST1/login_vh.conf update_variable "" $OAM_LOGIN_LBR_PORT $OHS_PATH/$OHS_HOST1/login_vh.conf - update_variable "" ${INGRESS_HOST:=$K8_WORKER_HOST1} $OHS_PATH/$OHS_HOST1/login_vh.conf - update_variable "" ${INGRESS_HOST:=$K8_WORKER_HOST2} $OHS_PATH/$OHS_HOST1/login_vh.conf + update_variable "" $K8_WORKER_HOST1 $OHS_PATH/$OHS_HOST1/login_vh.conf + update_variable "" $K8_WORKER_HOST2 $OHS_PATH/$OHS_HOST1/login_vh.conf if [ "$USE_INGRESS" = "true" ] then @@ -773,6 +782,7 @@ create_oam_ohs_config() print_status $? + ET=`date +%s` print_time STEP "Creating OHS config" $ST $ET >> $LOGDIR/timings.log } @@ -874,7 +884,7 @@ deploy_wls_monitor() enable_monitor() { - ST=`date +%s` + ST=$(date +%s) print_msg "Configuring Prometheus Operator" ENC_WEBLOGIC_USER=`encode_pwd $OAM_WEBLOGIC_USER` @@ -894,7 +904,87 @@ enable_monitor() kubectl apply -f $WORKDIR/samples/monitoring-service/manifests/ > $LOGDIR/enable_monitor.log print_status $? 
$LOGDIR/enable_monitor.log - ET=`date +%s` + ET=$(date +%s) print_time STEP "Configure Prometheus Operator" $ST $ET >> $LOGDIR/timings.log } + +create_dr_cronjob_files() +{ + ST=$(date +%s) + print_msg "Creating Cron Job Files" + + cp $TEMPLATE_DIR/dr_cron.yaml $WORKDIR/dr_cron.yaml + update_variable "" $DRNS $WORKDIR/dr_cron.yaml + update_variable "" $DR_OAM_MINS $WORKDIR/dr_cron.yaml + update_variable "" $RSYNC_IMAGE $WORKDIR/dr_cron.yaml + update_variable "" $RSYNC_VER $WORKDIR/dr_cron.yaml + update_variable "" $OAM_DOMAIN_NAME $WORKDIR/dr_cron.yaml + + print_status $? + + ET=$(date +%s) + print_time STEP "Create DR Cron Job Files" $ST $ET >> $LOGDIR/timings.log +} + + +create_dr_pv() +{ + ST=$(date +%s) + print_msg "Creating DR Persistent Volume" + + kubectl create -f $WORKDIR/dr_dr_pv.yaml > $LOGDIR/create_dr_pv.log 2>&1 + print_status $? $LOGDIR/create_dr_pv.log + + ET=$(date +%s) + print_time STEP "Create DR Persistent Volume " $ST $ET >> $LOGDIR/timings.log +} + +create_dr_pvc() +{ + ST=$(date +%s) + print_msg "Creating DR Persistent Volume Claim" + kubectl create -f $WORKDIR/dr_dr_pvc.yaml > $LOGDIR/create_dr_pvc.log 2>&1 + print_status $? $LOGDIR/create_dr_pvc.log + + ET=$(date +%s) + print_time STEP "Create DR Persistent Volume Claim " $ST $ET >> $LOGDIR/timings.log +} + + +delete_oam_files() +{ + ST=$(date +%s) + print_msg "Delete OAM Domain Files" + + if [ -e $OAM_LOCAL_SHARE ] && [ ! "$OAM_LOCAL_SHARE" = "" ] + then + echo rm -rf $OAM_LOCAL_SHARE/domains $OAM_LOCAL_SHARE/applications $OAM_LOCAL_SHARE/stores $OAM_LOCAL_SHARE/keystores > $LOGDIR/delete_oam_domain.log 2>&1 + rm -rf $OAM_LOCAL_SHARE/domains $OAM_LOCAL_SHARE/applications $OAM_LOCAL_SHARE/stores $OAM_LOCAL_SHARE/keystores >> $LOGDIR/delete_oam_domain.log 2>&1 + else + echo "Share does not exist, or OAM_LOCAL_SHARE is not defined." + fi + + print_status $? $LOGDIR/delete_oam_domain.log + + ET=$(date +%s) + print_time STEP "Delete OAM Domain Files" $ST $ET >> $LOGDIR/timings.log +} + +create_dr_source_pv() +{ + ST=$(date +%s) + print_msg "Creating OAM Persistent Volume" + + cp $TEMPLATE_DIR/dr_oampv.yaml $WORKDIR/dr_oampv.yaml + update_variable "" $OAM_DOMAIN_NAME $WORKDIR/dr_oampv.yaml + update_variable "" $DR_STANDBY_PVSERVER $WORKDIR/dr_oampv.yaml + update_variable "" $OAM_STANDBY_SHARE $WORKDIR/dr_oampv.yaml + + kubectl create -f $WORKDIR/dr_oampv.yaml > $LOGDIR/dr_oampv.log 2>&1 + print_status $? 
$LOGDIR/dr_oampv.log
+
+ ET=$(date +%s)
+ print_time STEP "Create OAM Persistent Volume" $ST $ET >> $LOGDIR/timings.log
+}
+
diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/ohs_functions.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/ohs_functions.sh
index 76568507e..8c87a500c 100755
--- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/ohs_functions.sh
+++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/ohs_functions.sh
@@ -408,3 +408,72 @@ copy_lbr_cert()
 ET=$(date +%s)
 print_time STEP "Copy $OAM_LOGIN_LBR_HOST Certificate to WebGate on $HOSTNAME" $ST $ET >> $LOGDIR/timings.log
 }
+
+update_ohs_route()
+{
+ print_msg "Change OHS Routing"
+
+ ST=$(date +%s)
+
+ OLD_HOST1=$(grep WebLogicCluster $WORKDIR/*_vh.conf | sed "s/WebLogicCluster//" | tr -d ' ' | sed 's/,/:/' | cut -f2,4 -d: | tr ":" "\n" |sort | uniq | head -1 )
+ OLD_HOST2=$(grep WebLogicCluster $WORKDIR/*_vh.conf | sed "s/WebLogicCluster//" | tr -d ' ' | sed 's/,/:/' | cut -f2,4 -d: | tr ":" "\n" |sort | uniq | tail -1 )
+ NEW_HOST1=$(kubectl get nodes | cut -f1 -d " " | sed "/NAME/d" | head -1)
+ NEW_HOST2=$(kubectl get nodes | cut -f1 -d " " | sed "/NAME/d" | tail -1)
+
+ printf "\n\t\t\tChanging $OLD_HOST1 to $NEW_HOST1 - "
+ sed -i "s/$OLD_HOST1/$NEW_HOST1/g" $WORKDIR/*_vh.conf > $LOGDIR/update_ohs_route.log 2>&1
+ print_status $? $LOGDIR/update_ohs_route.log
+ printf "\n\t\t\tChanging $OLD_HOST2 to $NEW_HOST2 - "
+ sed -i "s/$OLD_HOST2/$NEW_HOST2/g" $WORKDIR/*_vh.conf >> $LOGDIR/update_ohs_route.log 2>&1
+ print_status $? $LOGDIR/update_ohs_route.log
+
+ ET=$(date +%s)
+ print_time STEP "Change OHS Routing" $ST $ET >> $LOGDIR/timings.log
+}
+
+
+update_ohs_hostname()
+{
+ print_msg "Change OHS Virtual Host Name "
+ ST=$(date +%s)
+ # Derive the existing virtual host name from the first <VirtualHost> entry
+ OLD_HOSTNAME=$( grep "<VirtualHost " $WORKDIR/*vh.conf | head -1 | awk '{print $2}' | cut -f1 -d: )
+ mkdir $WORKDIR/$OHS_HOST1 2>/dev/null
+ cp $WORKDIR/*.conf $WORKDIR/$OHS_HOST1
+ if [ ! "$OLD_HOSTNAME" = "$OHS_HOST1" ]
+ then
+ printf "\n\t\t\tChanging $OLD_HOSTNAME to $OHS_HOST1 - "
+ sed -i "s/$OLD_HOSTNAME/$OHS_HOST1/" $WORKDIR/$OHS_HOST1/*.conf > $LOGDIR/update_vh.log 2>&1
+ print_status $? $LOGDIR/update_vh.log
+ fi
+
+ if [ ! "$OHS_HOST2" = "" ]
+ then
+ mkdir $WORKDIR/$OHS_HOST2 2>/dev/null
+ cp $WORKDIR/*.conf $WORKDIR/$OHS_HOST2
+ printf "\n\t\t\tChanging $OLD_HOSTNAME to $OHS_HOST2 - "
+ sed -i "s/$OLD_HOSTNAME/$OHS_HOST2/" $WORKDIR/$OHS_HOST2/*.conf >> $LOGDIR/update_vh.log 2>&1
+ print_status $? $LOGDIR/update_vh.log
+ fi
+ ET=$(date +%s)
+ print_time STEP "Change OHS Virtual HostName" $ST $ET >> $LOGDIR/timings.log
+}
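+
+# For context, the directives rewritten by the two functions above have this
+# shape inside a *_vh.conf file (host names and ports are illustrative):
+#
+#   <VirtualHost ohs1.example.com:7777>
+#     <Location /console>
+#       WLSRequest ON
+#       WebLogicCluster k8worker1.example.com:30701,k8worker2.example.com:30701
+#     </Location>
+#   </VirtualHost>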
+
+# Copy the OHS config to the OHS servers on the DR site
+#
+copy_ohs_dr_config()
+{
+ print_msg "Copy OHS Config"
+ ST=$(date +%s)
+
+ printf "\t\t\tCopy OHS Config to $OHS_HOST1 - "
+ $SCP $WORKDIR/$OHS_HOST1/*vh.conf $OHS_HOST1:$OHS_DOMAIN/config/fmwconfig/components/OHS/$OHS1_NAME/moduleconf/ > $LOGDIR/copy_ohs_config.log 2>&1
+ print_status $? $LOGDIR/copy_ohs_config.log
+
+ if [ ! "$OHS_HOST2" = "" ]
+ then
+ printf "\t\t\tCopy OHS Config to $OHS_HOST2 - "
+ $SCP $WORKDIR/$OHS_HOST2/*vh.conf $OHS_HOST2:$OHS_DOMAIN/config/fmwconfig/components/OHS/$OHS2_NAME/moduleconf/ >> $LOGDIR/copy_ohs_config.log 2>&1
+ print_status $? $LOGDIR/copy_ohs_config.log
+ fi
+ ET=$(date +%s)
+ print_time STEP "Copy OHS Config" $ST $ET >> $LOGDIR/timings.log
+}
diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oig_functions.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oig_functions.sh
index a8b299f45..24ce9433b 100755
--- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oig_functions.sh
+++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oig_functions.sh
@@ -141,6 +141,14 @@ update_java_parameters()
 fi
 update_variable "" "$OIMSERVER_JAVA_PARAMS" $WORKDIR/oigDomain.sedfile
 update_variable "" "$SOASERVER_JAVA_PARAMS" $WORKDIR/oigDomain.sedfile
+ update_variable "" "$OIM_MEMORY" $WORKDIR/oigDomain.sedfile
+ update_variable "" "$OIM_MAX_MEMORY" $WORKDIR/oigDomain.sedfile
+ update_variable "" "$OIM_MAX_CPU" $WORKDIR/oigDomain.sedfile
+ update_variable "" "$OIM_CPU" $WORKDIR/oigDomain.sedfile
+ update_variable "" "$SOA_MEMORY" $WORKDIR/oigDomain.sedfile
+ update_variable "" "$SOA_MAX_MEMORY" $WORKDIR/oigDomain.sedfile
+ update_variable "" "$SOA_MAX_CPU" $WORKDIR/oigDomain.sedfile
+ update_variable "" "$SOA_CPU" $WORKDIR/oigDomain.sedfile
 OUTPUT_DIR=$WORKDIR/samples/create-oim-domain/domain-home-on-pv/output/weblogic-domains/$OIG_DOMAIN_NAME
 cp output/weblogic-domains/$OIG_DOMAIN_NAME/domain.yaml $OUTPUT_DIR/domain.orig
@@ -394,6 +402,21 @@ update_mds()
 print_time STEP "Update MDS Datasource" $ST $ET >> $LOGDIR/timings.log
 }
+# Update the OIM Datasources to increase timeout for bootstrap
+#
+increase_to()
+{
+ ST=$(date +%s)
+ print_msg "Increasing Datasource Timeout"
+
+ sed -i "s/300/600/" $OIG_LOCAL_SHARE/domains/$OIG_DOMAIN_NAME/config/jdbc/oimJMSStoreDS-0269-jdbc.xml > $LOGDIR/timeouts.log 2>&1
+ sed -i "s/300/600/" $OIG_LOCAL_SHARE/domains/$OIG_DOMAIN_NAME/config/jdbc/oimOperationsDB-0237-jdbc.xml >> $LOGDIR/timeouts.log 2>&1
+
+ print_status $? $LOGDIR/timeouts.log
+
+ ET=$(date +%s)
+ print_time STEP "Increase Datasource Timeout" $ST $ET >> $LOGDIR/timings.log
+}
 # Fix Gridlink Datasoureces
 #
 fix_gridlink()
@@ -1077,3 +1100,92 @@ enable_monitor()
 print_time STEP "Configure Prometheus Operator" $ST $ET >> $LOGDIR/timings.log
 }
+
+
+# Modify the template to create a cronjob
+#
+create_dr_cronjob_files()
+{
+ ST=$(date +%s)
+ print_msg "Creating Cron Job Files"
+
+ cp $TEMPLATE_DIR/dr_cron.yaml $WORKDIR/dr_cron.yaml
+ update_variable "" $DRNS $WORKDIR/dr_cron.yaml
+ update_variable "" $DR_OIG_MINS $WORKDIR/dr_cron.yaml
+ update_variable "" $RSYNC_IMAGE $WORKDIR/dr_cron.yaml
+ update_variable "" $RSYNC_VER $WORKDIR/dr_cron.yaml
+ update_variable "" $OIG_DOMAIN_NAME $WORKDIR/dr_cron.yaml
+
+ print_status $?
+
+ ET=$(date +%s)
+ print_time STEP "Create DR Cron Job Files" $ST $ET >> $LOGDIR/timings.log
+}
+
+# Create Persistent Volumes used by DR Job.
+#
+create_dr_pv()
+{
+ ST=$(date +%s)
+ print_msg "Creating DR Persistent Volume"
+
+ kubectl create -f $WORKDIR/dr_dr_pv.yaml > $LOGDIR/create_dr_pv.log 2>&1
+ print_status $? $LOGDIR/create_dr_pv.log
+
+ ET=$(date +%s)
+ print_time STEP "Create DR Persistent Volume " $ST $ET >> $LOGDIR/timings.log
+}
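+
+# A quick sanity check once create_dr_pv above and create_dr_pvc below have
+# run (object names depend on the dr_pv/dr_pvc templates):
+#
+#   kubectl get pv | grep dr
+#   kubectl get pvc -n $DRNS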
+
+# Create Persistent Volume Claims used by DR Job.
+#
+create_dr_pvc()
+{
+ ST=$(date +%s)
+ print_msg "Creating DR Persistent Volume Claim"
+ kubectl create -f $WORKDIR/dr_dr_pvc.yaml > $LOGDIR/create_dr_pvc.log 2>&1
+ print_status $? $LOGDIR/create_dr_pvc.log
+
+ ET=$(date +%s)
+ print_time STEP "Create DR Persistent Volume Claim " $ST $ET >> $LOGDIR/timings.log
+}
+
+# Delete the OIG files created by a fresh installation.
+#
+delete_oig_files()
+{
+ ST=$(date +%s)
+ print_msg "Delete OIG Domain Files"
+
+ if [ -e $OIG_LOCAL_SHARE ] && [ ! "$OIG_LOCAL_SHARE" = "" ]
+ then
+ echo rm -rf $OIG_LOCAL_SHARE/domains $OIG_LOCAL_SHARE/applications $OIG_LOCAL_SHARE/stores $OIG_LOCAL_SHARE/keystores > $LOGDIR/delete_oig_domain.log 2>&1
+ rm -rf $OIG_LOCAL_SHARE/domains $OIG_LOCAL_SHARE/applications $OIG_LOCAL_SHARE/stores $OIG_LOCAL_SHARE/keystores >> $LOGDIR/delete_oig_domain.log 2>&1
+ else
+ echo "Share does not exist, or OIG_LOCAL_SHARE is not defined."
+ fi
+
+ print_status $? $LOGDIR/delete_oig_domain.log
+
+ ET=$(date +%s)
+ print_time STEP "Delete OIG Domain Files" $ST $ET >> $LOGDIR/timings.log
+}
+
+
+# Create OIG PVs on the Standby Site
+#
+create_dr_source_pv()
+{
+ ST=$(date +%s)
+ print_msg "Creating OIG Persistent Volume"
+
+ cp $TEMPLATE_DIR/dr_oigpv.yaml $WORKDIR/dr_oigpv.yaml
+ update_variable "" $OIG_DOMAIN_NAME $WORKDIR/dr_oigpv.yaml
+ update_variable "" $DR_STANDBY_PVSERVER $WORKDIR/dr_oigpv.yaml
+ update_variable "" $OIG_STANDBY_SHARE $WORKDIR/dr_oigpv.yaml
+
+ kubectl create -f $WORKDIR/dr_oigpv.yaml > $LOGDIR/dr_oigpv.log 2>&1
+ print_status $? $LOGDIR/dr_oigpv.log
+
+ ET=$(date +%s)
+ print_time STEP "Create OIG Persistent Volume" $ST $ET >> $LOGDIR/timings.log
+}
diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oiri_functions.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oiri_functions.sh
index ddfafbc1c..0bc74cbdc 100755
--- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oiri_functions.sh
+++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oiri_functions.sh
@@ -71,7 +71,7 @@ create_ding_helper()
 kubectl create -f $filename > $LOGDIR/create_helper.log
 print_status $? $LOGDIR/create_helper.log
- check_running $OIRINS oiri-cli 15
+ check_running $DINGNS oiri-ding-cli 15
 ET=`date +%s`
 print_time STEP "Create DING Helper container" $ST $ET >> $LOGDIR/timings.log
 }
@@ -128,7 +128,7 @@ create_rbac()
 TOKENNAME=`kubectl -n $OIRINS get serviceaccount/oiri-service-account -o jsonpath='{.secrets[0].name}'`
 fi
- TOKEN=`kubectl -n $OIRINS get secret $TOKENNAME -o jsonpath='{.data.token}'| base64 --decode`
+ TOKEN=$(kubectl -n $OIRINS get secret $TOKENNAME -o jsonpath='{.data.token}'| base64 --decode)
 k8url=`grep server: $KUBECONFIG | sed 's/server://;s/ //g'`
@@ -715,3 +715,124 @@ create_logstash_cm()
 ET=`date +%s`
 print_time STEP "Create Logstash Config Map" $ST $ET >> $LOGDIR/timings.log
 }
+
+# Modify the template to create a cronjob
+#
+create_dr_cronjob_files()
+{
+ ST=$(date +%s)
+ print_msg "Creating Cron Job Files"
+
+ cp $TEMPLATE_DIR/dr_cron.yaml $WORKDIR/dr_cron.yaml
+ update_variable "" $DRNS $WORKDIR/dr_cron.yaml
+ update_variable "" $DR_OIRI_MINS $WORKDIR/dr_cron.yaml
+ update_variable "" $RSYNC_IMAGE $WORKDIR/dr_cron.yaml
+ update_variable "" $RSYNC_VER $WORKDIR/dr_cron.yaml
+
+ print_status $?
+
+ ET=$(date +%s)
+ print_time STEP "Create DR Cron Job Files" $ST $ET >> $LOGDIR/timings.log
+}
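+
+# In the dr_cron.yaml template the minutes value substituted above feeds a
+# standard cron schedule; with DR_OIRI_MINS=30 the rendered job would contain
+# a line of the form (illustrative):
+#
+#   schedule: "*/30 * * * *"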
+
+# Create Persistent Volumes used by DR Job.
+#
+create_dr_pv()
+{
+ ST=$(date +%s)
+ print_msg "Creating DR Persistent Volume"
+
+ kubectl create -f $WORKDIR/dr_dr_pv.yaml > $LOGDIR/create_dr_pv.log 2>&1
+ print_status $? $LOGDIR/create_dr_pv.log
+
+ ET=$(date +%s)
+ print_time STEP "Create DR Persistent Volume " $ST $ET >> $LOGDIR/timings.log
+}
+
+# Create Persistent Volume Claims used by DR Job.
+#
+create_dr_pvc()
+{
+ ST=$(date +%s)
+ print_msg "Creating DR Persistent Volume Claim"
+ kubectl create -f $WORKDIR/dr_dr_pvc.yaml > $LOGDIR/create_dr_pvc.log 2>&1
+ print_status $? $LOGDIR/create_dr_pvc.log
+
+ ET=$(date +%s)
+ print_time STEP "Create DR Persistent Volume Claim " $ST $ET >> $LOGDIR/timings.log
+}
+
+# Delete the OIRI files created by a fresh installation.
+#
+delete_oiri_files()
+{
+ ST=$(date +%s)
+ print_msg "Delete OIRI Files"
+
+ if [ -e $OIRI_LOCAL_SHARE ] && [ ! "$OIRI_LOCAL_SHARE" = "" ]
+ then
+ echo rm -rf $OIRI_LOCAL_SHARE/domains $OIRI_LOCAL_SHARE/applications $OIRI_LOCAL_SHARE/stores $OIRI_LOCAL_SHARE/keystores > $LOGDIR/delete_oiri.log 2>&1
+ rm -rf $OIRI_LOCAL_SHARE/domains $OIRI_LOCAL_SHARE/applications $OIRI_LOCAL_SHARE/stores $OIRI_LOCAL_SHARE/keystores >> $LOGDIR/delete_oiri.log 2>&1
+ else
+ echo "Share does not exist, or OIRI_LOCAL_SHARE is not defined."
+ fi
+
+ if [ -e $OIRI_DING_LOCAL_SHARE ] && [ ! "$OIRI_DING_LOCAL_SHARE" = "" ]
+ then
+ echo rm -rf $OIRI_DING_LOCAL_SHARE/domains $OIRI_DING_LOCAL_SHARE/applications $OIRI_DING_LOCAL_SHARE/stores $OIRI_DING_LOCAL_SHARE/keystores >> $LOGDIR/delete_oiri.log 2>&1
+ rm -rf $OIRI_DING_LOCAL_SHARE/domains $OIRI_DING_LOCAL_SHARE/applications $OIRI_DING_LOCAL_SHARE/stores $OIRI_DING_LOCAL_SHARE/keystores >> $LOGDIR/delete_oiri.log 2>&1
+ else
+ echo "Share does not exist, or OIRI_DING_LOCAL_SHARE is not defined."
+ fi
+ print_status $? $LOGDIR/delete_oiri.log
+
+ ET=$(date +%s)
+ print_time STEP "Delete OIRI Files" $ST $ET >> $LOGDIR/timings.log
+}
+
+# Create OIRI PVs on the Standby Site
+#
+create_dr_source_pv()
+{
+ ST=$(date +%s)
+ print_msg "Creating OIRI Persistent Volume"
+
+ cp $TEMPLATE_DIR/dr_oiripv.yaml $WORKDIR/dr_oiripv.yaml
+ update_variable "" $DR_STANDBY_PVSERVER $WORKDIR/dr_oiripv.yaml
+ update_variable "" $OIRI_STANDBY_SHARE $WORKDIR/dr_oiripv.yaml
+ update_variable "" $OIRI_DING_STANDBY_SHARE $WORKDIR/dr_oiripv.yaml
+
+ kubectl create -f $WORKDIR/dr_oiripv.yaml > $LOGDIR/dr_oiripv.log 2>&1
+ print_status $? $LOGDIR/dr_oiripv.log
+
+ ET=$(date +%s)
+ print_time STEP "Create OIRI Persistent Volumes" $ST $ET >> $LOGDIR/timings.log
+}
+
+# Take a backup of the Kubernetes Configuration files
+#
+backup_k8_files()
+{
+ ST=$(date +%s)
+ print_msg "Backing up local Kubernetes config files "
+
+ if [ ! "$OIRI_PRIMARY_K8CA" = "" ]
+ then
+ printf "\n\t\t\tBacking up DING ca.crt - "
+ cp $OIRI_DING_LOCAL_SHARE/ca.crt $OIRI_DING_LOCAL_SHARE/$OIRI_PRIMARY_K8CA
+ print_status $?
+ printf "\t\t\tBacking up WORK ca.crt - "
+ cp $OIRI_WORK_LOCAL_SHARE/ca.crt $OIRI_WORK_LOCAL_SHARE/$OIRI_PRIMARY_K8CA
+ print_status $?
+ fi
+
+ if [ ! "$OIRI_PRIMARY_K8CONFIG" = "" ]
+ then
+ printf "\t\t\tBacking up WORK config - "
+ cp $OIRI_WORK_LOCAL_SHARE/config $OIRI_WORK_LOCAL_SHARE/$OIRI_PRIMARY_K8CONFIG
+ print_status $?
+ fi + + ET=$(date +%s) + print_time STEP "Backup local Kubernetes configuration " $ST $ET >> $LOGDIR/timings.log +} diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oud_functions.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oud_functions.sh index 27e98f191..7b9e9bdb2 100755 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oud_functions.sh +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/common/oud_functions.sh @@ -71,6 +71,12 @@ create_override() update_variable "" $OUD_IMAGE $OVERRIDE_FILE update_variable "" $OUD_VER $OVERRIDE_FILE update_variable "" $USE_INGRESS $OVERRIDE_FILE + update_variable "" $OUD_MAX_MEMORY $OVERRIDE_FILE + update_variable "" $OUD_MAX_CPU $OVERRIDE_FILE + update_variable "" $OUD_MEMORY $OVERRIDE_FILE + update_variable "" $OUD_CPU $OVERRIDE_FILE + update_variable "" "$OUDSERVER_TUNING_PARAMS" $OVERRIDE_FILE + update_variable "" $USE_ELK $OVERRIDE_FILE update_variable "" $ELK_VER $OVERRIDE_FILE @@ -496,186 +502,81 @@ create_oudsm_logstash_cm() print_time STEP "Create Logstash Config Map" $ST $ET >> $LOGDIR/timings.log } -create_dr_pv_files() +# Create OUD PVs on the Standby Site +# +create_dr_source_pv() { ST=$(date +%s) - print_msg "Creating DR Persistent Volume Files" - cp $TEMPLATE_DIR/dr_pv.yaml $WORKDIR/dr_primary_pv.yaml - update_variable "" $OUD_PRIMARY_SHARE $WORKDIR/dr_primary_pv.yaml - update_variable "" $DR_PRIMARY_PVSERVER $WORKDIR/dr_primary_pv.yaml - - cp $TEMPLATE_DIR/dr_pv.yaml $WORKDIR/dr_dr_pv.yaml - update_variable "" $OUD_SHARE $WORKDIR/dr_dr_pv.yaml - update_variable "" $OUD_STANDBY_SHARE $WORKDIR/dr_dr_pv.yaml - - print_status $? + print_msg "Creating OUD Persistent Volume" - ET=$(date +%s) - print_time STEP "Create DR Persistent Volume File" $ST $ET >> $LOGDIR/timings.log -} + cp $TEMPLATE_DIR/dr_oudpv.yaml $WORKDIR/dr_oudpv.yaml + update_variable "" $OUD_POD_PREFIX $WORKDIR/dr_oudpv.yaml + update_variable "" $DR_STANDBY_PVSERVER $WORKDIR/dr_oudpv.yaml + update_variable "" $OUD_STANDBY_SHARE $WORKDIR/dr_oudpv.yaml + update_variable "" $OUD_STANDBY_CONFIG_SHARE $WORKDIR/dr_oudpv.yaml + update_variable "" $OUDNS $WORKDIR/dr_oudpv.yaml -create_dr_pvc_files() -{ - ST=$(date +%s) - print_msg "Creating DR Persistent Volume Claim Files" - cp $TEMPLATE_DIR/dr_pvc.yaml $WORKDIR/dr_primary_pvc.yaml - update_variable "" $OUDNS $WORKDIR/dr_primary_pvc.yaml - - cp $WORKDIR/dr_primary_pvc.yaml $WORKDIR/dr_dr_pvc.yaml - - print_status $? + kubectl create -f $WORKDIR/dr_oudpv.yaml > $LOGDIR/dr_oudpv.log 2>&1 + print_status $? 
$LOGDIR/dr_oudpv.log ET=$(date +%s) - print_time STEP "Create DR Persistent Volume Claim Files" $ST $ET >> $LOGDIR/timings.log + print_time STEP "Create OUD Persistent Volume" $ST $ET >> $LOGDIR/timings.log } + +# Modify the template to create a cronjob +# create_dr_cronjob_files() { ST=$(date +%s) print_msg "Creating Cron Job Files" cp $TEMPLATE_DIR/dr_cron.yaml $WORKDIR/dr_cron.yaml - update_variable "" $OUDNS $WORKDIR/dr_cron.yaml + update_variable "" $DRNS $WORKDIR/dr_cron.yaml update_variable "" $DR_OUD_MINS $WORKDIR/dr_cron.yaml update_variable "" $RSYNC_IMAGE $WORKDIR/dr_cron.yaml update_variable "" $RSYNC_VER $WORKDIR/dr_cron.yaml update_variable "" $OUD_POD_PREFIX $WORKDIR/dr_cron.yaml - #cp $TEMPLATE_DIR/dr_cron.yaml $WORKDIR/dr_dr_cron.yaml - #update_variable "" $OUDNS $WORKDIR/dr_dr_cron.yaml - #update_variable "" $DR_OUD_MINS $WORKDIR/dr_dr_cron.yaml - #update_variable "" $RSYNC_IMAGE $WORKDIR/dr_dr_cron.yaml - #update_variable "" $RSYNC_VER $WORKDIR/dr_dr_cron.yaml - #update_variable "" $OUD_POD_PREFIX $WORKDIR/dr_dr_cron.yaml - print_status $? ET=$(date +%s) print_time STEP "Create DR Cron Job Files" $ST $ET >> $LOGDIR/timings.log } -create_dr_pv() -{ - ST=$(date +%s) - print_msg "Creating DR Persistent Volume" - - kubectl create -f $WORKDIR/dr_dr_pv.yaml > $LOGDIR/create_dr_pv.log 2>&1 - print_status $? $LOGDIR/create_dr_pv.log - - ET=$(date +%s) - print_time STEP "Create DR Persistent Volume " $ST $ET >> $LOGDIR/timings.log -} - -create_dr_pvc() -{ - ST=$(date +%s) - print_msg "Creating DR Persistent Volume Claim" - kubectl create -f $WORKDIR/dr_dr_pvc.yaml > $LOGDIR/create_dr_pvc.log 2>&1 - print_status $? $LOGDIR/create_dr_pvc.log - - ET=$(date +%s) - print_time STEP "Create DR Persistent Volume Claim " $ST $ET >> $LOGDIR/timings.log -} - -copy_dr_script() -{ - ST=$(date +%s) - print_msg "Creating DR Script Directory" - kubectl exec -n $OUDNS -ti $OUD_POD_PREFIX-oud-ds-rs-0 -- mkdir /u01/oracle/user_projects/dr_scripts > $LOGDIR/copy_drscripts.log 2>&1 - if [ $? -gt 0 ] - then - grep -q exists $LOGDIR/copy_drscripts.log - if [ $? = 0 ] - then - echo " Already Exists" - else - echo " Failed - Check logfile $LOGDIR/copy_drscripts.log" - exit 1 - fi - else - echo "Success" - fi - - printf "\t\t\tCopy DR Script to Container - " - cp $TEMPLATE_DIR/oud_dr.sh $WORKDIR - update_variable "" $ENV_TYPE $WORKDIR/oud_dr.sh - kubectl cp $WORKDIR/oud_dr.sh $OUDNS/$OUD_POD_PREFIX-oud-ds-rs-0:/u01/oracle/user_projects/dr_scripts >>$LOGDIR/copy_drscripts.log 2>&1 - print_status $? $LOGDIR/copy_drscripts.log - - printf "\t\t\tSet execute permission - " - kubectl exec -n $OUDNS -ti $OUD_POD_PREFIX-oud-ds-rs-0 -- chmod 750 /u01/oracle/user_projects/dr_scripts/oud_dr.sh >> $LOGDIR/copy_drscripts.log 2>&1 - print_status $? $LOGDIR/copy_drscripts.log - - printf "\t\t\tSet DR Site Type - " - echo $DR_TYPE > $WORKDIR/dr_type - kubectl cp $WORKDIR/dr_type $OUDNS/$OUD_POD_PREFIX-oud-ds-rs-0:/u01/oracle/user_projects/dr_scripts >>$LOGDIR/copy_drscripts.log 2>&1 - print_status $? $LOGDIR/copy_drscripts.log - - ET=$(date +%s) - print_time STEP "Copy DR Script" $ST $ET >> $LOGDIR/timings.log -} - -create_dr_cronjob() -{ - ST=$(date +%s) - print_msg "Creating DR Cron Job" - kubectl create -f $WORKDIR/dr_cron.yaml > $LOGDIR/create_dr_cron.log 2>&1 - - print_status $? 
$LOGDIR/create_dr_cron.log
-
- ET=$(date +%s)
- print_time STEP "Create DR Cron Job" $ST $ET >> $LOGDIR/timings.log
-}
-suspend_cronjob()
-{
- ST=$(date +%s)
- print_msg "Suspending DR Cron Job - "
- kubectl patch cronjobs rsyncdr -p '{"spec" : {"suspend" : true }}' -n $OUDNS > $LOGDIR/suspend_cron.log 2>&1
-
- print_status $? $LOGDIR/suspend_cron.log
-
- ET=$(date +%s)
- print_time STEP "Suspend Cron Job" $ST $ET >> $LOGDIR/timings.log
-}
-
-resume_cronjob()
+# Stop OUD Instance
+#
+stop_oud()
 {
 ST=$(date +%s)
- print_msg "Resuming DR Cron Job - "
- kubectl patch cronjobs rsyncdr -p '{"spec" : {"suspend" : false }}' -n $OUDNS > $LOGDIR/resume_cron.log 2>&1
-
- print_status $? resume_cron.log
-
- ET=$(date +%s)
-}
+ print_msg "Stopping OUD"
+ echo helm upgrade -n $OUDNS --set replicaCount=0 $OUD_POD_PREFIX $WORKDIR/samples/kubernetes/helm/oud-ds-rs --reuse-values > $LOGDIR/stop_oud.log
+ helm upgrade -n $OUDNS --set replicaCount=0 $OUD_POD_PREFIX $WORKDIR/samples/kubernetes/helm/oud-ds-rs --reuse-values >> $LOGDIR/stop_oud.log 2>&1
+ print_status $? $LOGDIR/stop_oud.log
+ check_stopped $OUDNS ${OUD_POD_PREFIX}-oud-ds-rs-0
 
-initialise_dr()
-{
- ST=$(date +%s)
- print_msg "Creating Job to Initialise OUD DR - "
- kubectl create job --from=cronjob.batch/rsyncdr initialise-dr -n $OUDNS > $LOGDIR/oud_initialise.log 2>&1
- print_status $? $LOGDIR/oud_initialise.log
- printf "Job - initialise-dr Created in namespace $OUDNS"
- PODNAME=`kubectl get pod -n $OUDNS | grep cron | tail -1 | awk '{ print $1 }'`
- printf "Monitor job using the command: kubectl logs -n $OUDNS $PODNAME"
 ET=$(date +%s)
+ print_time STEP "Stop OUD" $ST $ET >> $LOGDIR/timings.log
 }
-stop_oud()
+# Start OUD instance
+#
+start_oud()
 {
 ST=$(date +%s)
- print_msg "Stopping OUD"
- echo helm upgrade -n $OUDNS --set replicaCount=0 $OUD_POD_PREFIX $WORKDIR/samples/kubernetes/helm/oud-ds-rs --reuse-values > $LOGDIR/stop_oud.log
- helm upgrade -n $OUDNS --set replicaCount=0 $OUD_POD_PREFIX $WORKDIR/samples/kubernetes/helm/oud-ds-rs --reuse-values >> $LOGDIR/stop_oud.log 2>&1
+ print_msg "Starting OUD"
+ echo helm upgrade -n $OUDNS --set replicaCount=$OUD_REPLICAS $OUD_POD_PREFIX $WORKDIR/samples/kubernetes/helm/oud-ds-rs --reuse-values > $LOGDIR/start_oud.log
+ helm upgrade -n $OUDNS --set replicaCount=$OUD_REPLICAS $OUD_POD_PREFIX $WORKDIR/samples/kubernetes/helm/oud-ds-rs --reuse-values >> $LOGDIR/start_oud.log 2>&1
- print_status $? $LOGDIR/stop_oud.log
- check_stopped $OUDNS ${OUD_POD_PREFIX}-oud-ds-rs-0
+ print_status $? $LOGDIR/start_oud.log
+ check_running $OUDNS ${OUD_POD_PREFIX}-oud-ds-rs-0
 
 ET=$(date +%s)
- print_time STEP "Create Cron Job Files" $ST $ET >> $LOGDIR/timings.log
+ print_time STEP "Start OUD" $ST $ET >> $LOGDIR/timings.log
 }
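+
+# After either scale operation the pod count can be confirmed with, for
+# example (pod names follow the oud-ds-rs chart naming used above):
+#
+#   kubectl get pods -n $OUDNS | grep $OUD_POD_PREFIX-oud-ds-rs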
- cmd="ssh -q -i $SSH_ID_KEYFILE -t -o 'StrictHostKeyChecking no' -o ProxyCommand='ssh -q -i $SSH_ID_KEYFILE opc@$BASTIONIP -W %h:%p' oracle@$DBIP 'srvctl add service -db ${DB_NAME}_${DB_SUFFIX} -service $2 -pdb $1 -preferred $DBINSTANCES'" + cmd="ssh -q -i $SSH_ID_KEYFILE -t -o 'StrictHostKeyChecking no' -o ProxyCommand='ssh -q -i $SSH_ID_KEYFILE opc@$BASTIONIP -W %h:%p' oracle@$DBIP 'srvctl add service -db ${DB_NAME}_${DB_SUFFIX} -service $2 -pdb $1 -role PRIMARY,SNAPSHOT_STANDBY -preferred $DBINSTANCES'" execute "$cmd" print_msg end else diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/oke_utils/create_dr.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/oke_utils/create_dr.sh new file mode 100755 index 000000000..707a2b64f --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/oke_utils/create_dr.sh @@ -0,0 +1,504 @@ +#!/bin/bash +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example of an Umbrella script that will create all of the OCI infrastructure components +# that are required for setting up a Disaster Recovery configuration between 2 OCI regions. +# +# Dependencies: ./responsefile/prim_oke.rsp +# ./responsefile/stby_oke.rsp +# ./common/oci_util_functions.sh +# ./common/oci_dr_functions.sh +# +# Usage: create_dr_oci.sh +# + +export USE_ACTIVE_DATAGUARD=true + +if [[ $# -eq 0 ]]; then + echo "Usage: $0 " + exit 1 +fi + +DIRNAME=$(dirname $0) +if test -f $DIRNAME/responsefile/$1 ; then + PRIMARY_RSP=$DIRNAME/responsefile/$1 + PRIMARY_TEMPLATE=$(basename $DIRNAME/responsefile/$1 | sed 's/.rsp//') +else + echo "Error, Unable to read template file '$DIRNAME/responsefile/$1'" + exit 1 +fi + +if test -f $DIRNAME/responsefile/$2 ; then + STBY_RSP=$DIRNAME/responsefile/$2 + STBY_TEMPLATE=$(basename $DIRNAME/responsefile/$2 | sed 's/.rsp//') +else + echo "Error, Unable to read template file '$DIRNAME/responsefile/$2'" + exit 1 +fi + +source $DIRNAME/common/oci_util_functions.sh +source $DIRNAME/common/oci_dr_functions.sh + +WORKDIR=$(get_rsp_value WORKDIR $PRIMARY_RSP ) + +LOGDIR=$WORKDIR/dr/logs/${PRIMARY_TEMPLATE}_${STBY_TEMPLATE} +LOGFILE="dr_oci.log" +PRIMARY_OUTDIR=$WORKDIR/$PRIMARY_TEMPLATE/output +PRIMARY_RESOURCE_OCID_FILE=$PRIMARY_OUTDIR/$PRIMARY_TEMPLATE.ocid +STBY_OUTDIR=$WORKDIR/$STBY_TEMPLATE/output +STBY_RESOURCE_OCID_FILE=$STBY_OUTDIR/$STBY_TEMPLATE.ocid + +COMPARTMENT_NAME=$(get_rsp_value COMPARTMENT_NAME $PRIMARY_RSP ) + +echo -e "Getting the OCID of the '$COMPARTMENT_NAME' compartment..." +get_compartment_ocid + +echo -e "\n============================================================" +echo -e "Compartment Name: $COMPARTMENT_NAME" +echo -e "Compartment OCID: $COMPARTMENT_ID" +echo -e "Created Date/Time: $COMPARTMENT_CREATED" +echo -e "Create Using Primary Template $PRIMARY_TEMPLATE" +echo -e "Create Using Standby Template $STBY_TEMPLATE" +echo -e "============================================================\n" + +echo -e "Are you sure you wish to continue and setup Disaster Recovery" +echo -e "components into the above compartment ($COMPARTMENT_NAME) using the specified" +read -r -p "templates named '$PRIMARY_TEMPLATE and $STBY_TEMPLATE' [Y|N]? " confirm +if ! 
[[ $confirm =~ ^[Yy]$ ]]; then + echo "Exiting without making any changes" + exit 1 +fi + +START_TIME=$(date +%s) + +d=$(date +%m-%d-%Y-%H-%M-%S) +mkdir -p $LOGDIR > /dev/null +mv $LOGDIR/$LOGFILE $LOGDIR/$LOGFILE-${d} 2>/dev/null +mv $LOGDIR/timings.log $LOGDIR/timings.log-${d} 2>/dev/null + +d1=$(date +"%a %d %b %Y %T") +echo -e "Provisioning the OCI Disaster Recovery Started on $d1" > $LOGDIR/timings.log + +STEPNO=0 +PROGRESS=$(get_progress) + +PRIMARY_REGION=$(get_rsp_value REGION $PRIMARY_RSP ) +STBY_REGION=$(get_rsp_value REGION $STBY_RSP ) + + +PRIMARY_VCN_NAME=$(get_rsp_value VCN_DISPLAY_NAME $PRIMARY_RSP ) +STBY_VCN_NAME=$(get_rsp_value VCN_DISPLAY_NAME $STBY_RSP ) + +PRIMARY_VCN_ID=$( grep $PRIMARY_VCN_NAME $PRIMARY_RESOURCE_OCID_FILE | cut -d: -f2) +STBY_VCN_ID=$( grep $STBY_VCN_NAME $STBY_RESOURCE_OCID_FILE | cut -d: -f2) + +print_msg screen "Setting up the Dynamic Routing Gateway Resources..." + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$PRIMARY_RESOURCE_OCID_FILE + create_drg $PRIMARY_REGION +fi + +PRIMARY_DRG_ID=$(grep "DRG-${PRIMARY_REGION}" $PRIMARY_RESOURCE_OCID_FILE | cut -f2 -d:) + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$PRIMARY_RESOURCE_OCID_FILE + create_drg_attach $PRIMARY_REGION $PRIMARY_DRG_ID $PRIMARY_VCN_ID +fi + +PRIMARY_ATTACH_ID=$(grep "DRG-ATTACHMENT-${PRIMARY_REGION}" $PRIMARY_RESOURCE_OCID_FILE | cut -f2 -d:) + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$PRIMARY_RESOURCE_OCID_FILE + create_rpc $PRIMARY_REGION $PRIMARY_DRG_ID +fi + +PRIMARY_RPC_ID=$(grep "RPC-${PRIMARY_REGION}" $PRIMARY_RESOURCE_OCID_FILE | cut -f2 -d:) + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE + create_drg $STBY_REGION +fi + +STBY_DRG_ID=$(grep "DRG-${STBY_REGION}" $STBY_RESOURCE_OCID_FILE | cut -f2 -d:) + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE + create_drg_attach $STBY_REGION $STBY_DRG_ID $STBY_VCN_ID +fi + +STBY_ATTACH_ID=$(grep "DRG-ATTACHMENT-${STBY_REGION}" $STBY_RESOURCE_OCID_FILE | cut -f2 -d:) + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE + create_rpc $STBY_REGION $STBY_DRG_ID +fi + + +STBY_RPC_ID=$(grep "RPC-${STBY_REGION}" $STBY_RESOURCE_OCID_FILE | cut -f2 -d:) + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE + connect_rpc $PRIMARY_REGION $PRIMARY_RPC_ID $STBY_REGION $STBY_RPC_ID +fi + + +print_msg screen "Setting up the Routing to Gateway ..." 
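+
+# The routing steps below add a rule for the peer region's DB and OKE CIDRs
+# via the local DRG. A sketch of the equivalent raw CLI call (JSON abbreviated,
+# and note that update replaces the existing rule list):
+#
+#   oci network route-table update --rt-id $PRIMARY_DB_ROUTE_ID \
+#     --route-rules '[{"destination":"'$STBY_DB_CIDR'","networkEntityId":"'$PRIMARY_DRG_ID'"}]'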
+PRIMARY_DB_ROUTE_NAME=$(get_rsp_value DB_ROUTE_TABLE_DISPLAY_NAME $PRIMARY_RSP ) +PRIMARY_DB_ROUTE_ID=$(grep "$PRIMARY_DB_ROUTE_NAME" $PRIMARY_RESOURCE_OCID_FILE | cut -f2 -d:) + +STBY_DB_CIDR=$(get_rsp_value DB_SUBNET_CIDR $STBY_RSP ) +STBY_K8_CIDR=$(get_rsp_value OKE_NODE_SUBNET_CIDR $STBY_RSP ) + + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$PRIMARY_RESOURCE_OCID_FILE + OUTDIR=$PRIMARY_OUTDIR + update_drg_route $PRIMARY_REGION $PRIMARY_DB_ROUTE_ID $PRIMARY_DRG_ID $STBY_DB_CIDR +fi + +PRIMARY_K8_ROUTE_NAME=$(get_rsp_value VCN_PRIVATE_ROUTE_TABLE_DISPLAY_NAME $PRIMARY_RSP ) +PRIMARY_K8_ROUTE_ID=$(grep "$PRIMARY_K8_ROUTE_NAME" $PRIMARY_RESOURCE_OCID_FILE | cut -f2 -d:) + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$PRIMARY_RESOURCE_OCID_FILE + OUTDIR=$PRIMARY_OUTDIR + update_drg_route $PRIMARY_REGION $PRIMARY_K8_ROUTE_ID $PRIMARY_DRG_ID $STBY_K8_CIDR +fi + + +STBY_DB_ROUTE_NAME=$(get_rsp_value DB_ROUTE_TABLE_DISPLAY_NAME $STBY_RSP ) +STBY_DB_ROUTE_ID=$(grep "$STBY_DB_ROUTE_NAME" $STBY_RESOURCE_OCID_FILE | cut -f2 -d:) + +PRIMARY_DB_CIDR=$(get_rsp_value DB_SUBNET_CIDR $PRIMARY_RSP ) +PRIMARY_K8_CIDR=$(get_rsp_value OKE_NODE_SUBNET_CIDR $PRIMARY_RSP ) + + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE + OUTDIR=$STBY_OUTDIR + update_drg_route $STBY_REGION $STBY_DB_ROUTE_ID $STBY_DRG_ID $PRIMARY_DB_CIDR +fi + +STBY_K8_ROUTE_NAME=$(get_rsp_value VCN_PRIVATE_ROUTE_TABLE_DISPLAY_NAME $STBY_RSP ) +STBY_K8_ROUTE_ID=$(grep "$STBY_K8_ROUTE_NAME" $STBY_RESOURCE_OCID_FILE | cut -f2 -d:) + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE + OUTDIR=$STBY_OUTDIR + update_drg_route $STBY_REGION $STBY_K8_ROUTE_ID $STBY_DRG_ID $PRIMARY_K8_CIDR +fi + +print_msg screen "Updating Security Lists ..." 
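+
+# Port usage assumed by the rules below: the SQL*Net listener port comes from
+# DB_SQLNET_PORT, 6200 is the Oracle Notification Service (ONS) port, 111
+# (TCP/UDP) and 2048-2050 are portmapper/NFS ports used for cross-region PV
+# replication, and 31444 is the NodePort assumed for the DR rsync traffic.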
+PRIMARY_DB_SECLIST_DISPLAY_NAME=$(get_rsp_value DB_SECLIST_DISPLAY_NAME $PRIMARY_RSP ) +PRIMARY_DB_SECLIST=$(grep "$PRIMARY_DB_SECLIST_DISPLAY_NAME" $PRIMARY_RESOURCE_OCID_FILE | cut -f2 -d:) +PRIMARY_DB_LISTENER=$(get_rsp_value DB_SQLNET_PORT $PRIMARY_RSP ) +STBY_DB_SECLIST_DISPLAY_NAME=$(get_rsp_value DB_SECLIST_DISPLAY_NAME $STBY_RSP ) +STBY_DB_SECLIST=$(grep "$STBY_DB_SECLIST_DISPLAY_NAME" $STBY_RESOURCE_OCID_FILE | cut -f2 -d:) +STBY_DB_LISTENER=$(get_rsp_value DB_SQLNET_PORT $STBY_RSP ) + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$PRIMARY_RESOURCE_OCID_FILE + OUTDIR=$PRIMARY_OUTDIR + update_seclist_ingress $PRIMARY_REGION $PRIMARY_DB_SECLIST $PRIMARY_DB_SECLIST_DISPLAY_NAME TCP $STBY_DB_CIDR $STBY_DB_LISTENER +fi + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$PRIMARY_RESOURCE_OCID_FILE + OUTDIR=$PRIMARY_OUTDIR + update_seclist_ingress $PRIMARY_REGION $PRIMARY_DB_SECLIST $PRIMARY_DB_SECLIST_DISPLAY_NAME TCP $STBY_DB_CIDR 6200 +fi + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE + OUTDIR=$STBY_OUTDIR + update_seclist_ingress $STBY_REGION $STBY_DB_SECLIST $STBY_DB_SECLIST_DISPLAY_NAME TCP $PRIMARY_DB_CIDR $PRIMARY_DB_LISTENER +fi + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE + OUTDIR=$STBY_OUTDIR + update_seclist_ingress $STBY_REGION $STBY_DB_SECLIST $STBY_DB_SECLIST_DISPLAY_NAME TCP $PRIMARY_DB_CIDR 6200 +fi + +PRIMARY_OKE_SECLIST_DISPLAY_NAME=$(get_rsp_value PV_SECLIST_DISPLAY_NAME $PRIMARY_RSP ) +PRIMARY_OKE_SECLIST=$(grep "$PRIMARY_OKE_SECLIST_DISPLAY_NAME" $PRIMARY_RESOURCE_OCID_FILE | cut -f2 -d:) +PRIMARY_OKE_CIDR=$(get_rsp_value OKE_NODE_SUBNET_CIDR $PRIMARY_RSP ) +STBY_OKE_SECLIST_DISPLAY_NAME=$(get_rsp_value PV_SECLIST_DISPLAY_NAME $STBY_RSP ) +STBY_OKE_SECLIST=$(grep "$STBY_OKE_SECLIST_DISPLAY_NAME" $STBY_RESOURCE_OCID_FILE | cut -f2 -d:) +STBY_OKE_CIDR=$(get_rsp_value OKE_NODE_SUBNET_CIDR $STBY_RSP ) + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$PRIMARY_RESOURCE_OCID_FILE + OUTDIR=$PRIMARY_OUTDIR + update_seclist_ingress $PRIMARY_REGION $PRIMARY_OKE_SECLIST $PRIMARY_OKE_SECLIST_DISPLAY_NAME TCP $STBY_OKE_CIDR 31444 +fi + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$PRIMARY_RESOURCE_OCID_FILE + OUTDIR=$PRIMARY_OUTDIR + update_seclist_ingress $PRIMARY_REGION $PRIMARY_OKE_SECLIST $PRIMARY_OKE_SECLIST_DISPLAY_NAME TCP $STBY_OKE_CIDR 111 +fi + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$PRIMARY_RESOURCE_OCID_FILE + OUTDIR=$PRIMARY_OUTDIR + update_seclist_ingress $PRIMARY_REGION $PRIMARY_OKE_SECLIST $PRIMARY_OKE_SECLIST_DISPLAY_NAME TCP $STBY_OKE_CIDR 2048 2050 +fi + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$PRIMARY_RESOURCE_OCID_FILE + OUTDIR=$PRIMARY_OUTDIR + update_seclist_ingress $PRIMARY_REGION $PRIMARY_OKE_SECLIST $PRIMARY_OKE_SECLIST_DISPLAY_NAME UDP $STBY_OKE_CIDR 111 +fi + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$PRIMARY_RESOURCE_OCID_FILE + OUTDIR=$PRIMARY_OUTDIR + update_seclist_ingress $PRIMARY_REGION $PRIMARY_OKE_SECLIST $PRIMARY_OKE_SECLIST_DISPLAY_NAME UDP $STBY_OKE_CIDR 2048 +fi + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$PRIMARY_RESOURCE_OCID_FILE + OUTDIR=$PRIMARY_OUTDIR + update_seclist_egress $PRIMARY_REGION $PRIMARY_OKE_SECLIST 
$PRIMARY_OKE_SECLIST_DISPLAY_NAME TCP $STBY_OKE_CIDR 111
+fi
+
+STEPNO=$((STEPNO+1))
+if [[ $STEPNO -gt $PROGRESS ]]
+then
+ RESOURCE_OCID_FILE=$PRIMARY_RESOURCE_OCID_FILE
+ OUTDIR=$PRIMARY_OUTDIR
+ update_seclist_egress $PRIMARY_REGION $PRIMARY_OKE_SECLIST $PRIMARY_OKE_SECLIST_DISPLAY_NAME TCP $STBY_OKE_CIDR 2048 2050
+fi
+
+STEPNO=$((STEPNO+1))
+if [[ $STEPNO -gt $PROGRESS ]]
+then
+ RESOURCE_OCID_FILE=$PRIMARY_RESOURCE_OCID_FILE
+ OUTDIR=$PRIMARY_OUTDIR
+ update_seclist_egress $PRIMARY_REGION $PRIMARY_OKE_SECLIST $PRIMARY_OKE_SECLIST_DISPLAY_NAME UDP $STBY_OKE_CIDR 111
+fi
+
+STEPNO=$((STEPNO+1))
+if [[ $STEPNO -gt $PROGRESS ]]
+then
+ RESOURCE_OCID_FILE=$PRIMARY_RESOURCE_OCID_FILE
+ OUTDIR=$PRIMARY_OUTDIR
+ update_seclist_egress $PRIMARY_REGION $PRIMARY_OKE_SECLIST $PRIMARY_OKE_SECLIST_DISPLAY_NAME UDP $STBY_OKE_CIDR 2048
+fi
+
+STEPNO=$((STEPNO+1))
+if [[ $STEPNO -gt $PROGRESS ]]
+then
+ RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE
+ OUTDIR=$STBY_OUTDIR
+ update_seclist_ingress $STBY_REGION $STBY_OKE_SECLIST $STBY_OKE_SECLIST_DISPLAY_NAME TCP $PRIMARY_OKE_CIDR 31444
+fi
+
+STEPNO=$((STEPNO+1))
+if [[ $STEPNO -gt $PROGRESS ]]
+then
+ RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE
+ OUTDIR=$STBY_OUTDIR
+ update_seclist_ingress $STBY_REGION $STBY_OKE_SECLIST $STBY_OKE_SECLIST_DISPLAY_NAME TCP $PRIMARY_OKE_CIDR 111
+fi
+
+STEPNO=$((STEPNO+1))
+if [[ $STEPNO -gt $PROGRESS ]]
+then
+ RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE
+ OUTDIR=$STBY_OUTDIR
+ update_seclist_ingress $STBY_REGION $STBY_OKE_SECLIST $STBY_OKE_SECLIST_DISPLAY_NAME TCP $PRIMARY_OKE_CIDR 2048 2050
+fi
+
+STEPNO=$((STEPNO+1))
+if [[ $STEPNO -gt $PROGRESS ]]
+then
+ RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE
+ OUTDIR=$STBY_OUTDIR
+ update_seclist_ingress $STBY_REGION $STBY_OKE_SECLIST $STBY_OKE_SECLIST_DISPLAY_NAME UDP $PRIMARY_OKE_CIDR 111
+fi
+
+STEPNO=$((STEPNO+1))
+if [[ $STEPNO -gt $PROGRESS ]]
+then
+ RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE
+ OUTDIR=$STBY_OUTDIR
+ update_seclist_ingress $STBY_REGION $STBY_OKE_SECLIST $STBY_OKE_SECLIST_DISPLAY_NAME UDP $PRIMARY_OKE_CIDR 2048
+fi
+
+STEPNO=$((STEPNO+1))
+if [[ $STEPNO -gt $PROGRESS ]]
+then
+ RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE
+ OUTDIR=$STBY_OUTDIR
+ update_seclist_egress $STBY_REGION $STBY_OKE_SECLIST $STBY_OKE_SECLIST_DISPLAY_NAME TCP $PRIMARY_OKE_CIDR 111
+fi
+
+STEPNO=$((STEPNO+1))
+if [[ $STEPNO -gt $PROGRESS ]]
+then
+ RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE
+ OUTDIR=$STBY_OUTDIR
+ update_seclist_egress $STBY_REGION $STBY_OKE_SECLIST $STBY_OKE_SECLIST_DISPLAY_NAME TCP $PRIMARY_OKE_CIDR 2048 2050
+fi
+
+STEPNO=$((STEPNO+1))
+if [[ $STEPNO -gt $PROGRESS ]]
+then
+ RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE
+ OUTDIR=$STBY_OUTDIR
+ update_seclist_egress $STBY_REGION $STBY_OKE_SECLIST $STBY_OKE_SECLIST_DISPLAY_NAME UDP $PRIMARY_OKE_CIDR 111
+fi
+
+STEPNO=$((STEPNO+1))
+if [[ $STEPNO -gt $PROGRESS ]]
+then
+ RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE
+ OUTDIR=$STBY_OUTDIR
+ update_seclist_egress $STBY_REGION $STBY_OKE_SECLIST $STBY_OKE_SECLIST_DISPLAY_NAME UDP $PRIMARY_OKE_CIDR 2048
+fi
+
+print_msg screen "Setting Up Dataguard ..."
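+
+# create_dataguard builds the standby DB system from a Data Guard association;
+# while it provisions, progress can be checked by hand, for example:
+#
+#   oci db data-guard-association list --database-id $PRIMARY_DB_ID \
+#     --query 'data[0]."lifecycle-state"'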
+PRIMARY_DB_NAME=$(get_rsp_value DB_NAME $PRIMARY_RSP ) +PRIMARY_DB_SUFFIX=$(get_rsp_value DB_SUFFIX $PRIMARY_RSP ) +PRIMARY_DB_SYS_NAME=$(get_rsp_value DB_DISPLAY_NAME $PRIMARY_RSP ) +PRIMARY_DB_SYSID=$(grep "$PRIMARY_DB_SYS_NAME" $PRIMARY_RESOURCE_OCID_FILE | cut -f2 -d:) +PRIMARY_DB_PWD=$(get_rsp_value DB_PWD $PRIMARY_RSP ) +STBY_DB_SYS_NAME=$(get_rsp_value DB_DISPLAY_NAME $STBY_RSP) +STBY_AD=$(get_rsp_value DB_AD $STBY_RSP) +STBY_DB_SUBNET=$(get_rsp_value DB_SUBNET_DISPLAY_NAME $STBY_RSP) +STBY_DB_SUBNET_ID=$(grep "$STBY_DB_SUBNET" $STBY_RESOURCE_OCID_FILE | cut -f2 -d:) +STBY_DB_SYSID=$(grep "$STBY_DB_SYS_NAME" $STBY_RESOURCE_OCID_FILE | cut -f2 -d:) +STBY_DB_IMAGE=$(get_rsp_value DB_SUBNET_DISPLAY_NAME $STBY_RSP) +STBY_DB_LICENCE=$(get_rsp_value DB_LICENSE $STBY_RSP) +STBY_DB_TIMEZONE=$(get_rsp_value DB_TIMEZONE $STBY_RSP) + +get_ad_list $STBY_REGION +PRIMARY_DB_ID=$(oci db database list --compartment-id $COMPARTMENT_ID --db-system-id $PRIMARY_DB_SYSID --query "data[?contains(\"db-name\", '$PRIMARY_DB_NAME')].{ocid:id}" | jq -r '.[].ocid') + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE + OUTDIR=$STBY_OUTDIR + create_dataguard $STBY_REGION $PRIMARY_DB_SYSID $PRIMARY_DB_NAME ${!STBY_AD} $STBY_TEMPLATE $STBY_DB_SYS_NAME $STBY_DB_SUBNET_ID $PRIMARY_DB_PWD $STBY_DB_LICENCE $STBY_DB_TIMEZONE +fi + +DB_STATE=$(oci db database get --database-id $PRIMARY_DB_ID --query 'data."lifecycle-state"' --raw-output) + +echo " Primary Database State : " $DB_STATE + +if [ "$DB_STATE" = "UPDATING" ] +then + echo "Database is still Updating - Try again later." + exit 1 +fi + +DG_STATE=$(oci db data-guard-association list --database-id $PRIMARY_DB_ID | jq -r '.data[0]["lifecycle-state"]') + +echo " Dataguard Association State : " $DG_STATE +echo "" + +if [ "$DG_STATE" != "AVAILABLE" ] +then + echo "Dataguard is not ready, please try again later." 
+ exit 1 +fi + + + +PRIMARY_SSH_KEYFILE=$(get_rsp_value SSH_ID_KEYFILE $PRIMARY_RSP ) +PRIMARY_BASTION_HOSTNAME=$(get_rsp_value BASTION_HOSTNAME $PRIMARY_RSP ) +PRIMARY_BASTION_ID=$(grep $PRIMARY_BASTION_HOSTNAME $PRIMARY_RESOURCE_OCID_FILE | cut -f2 -d:) +PRIMARY_BASTION_IP=$(oci compute instance list-vnics --region $PRIMARY_REGION --compartment-id $COMPARTMENT_ID --instance-id $PRIMARY_BASTION_ID --query 'data[0]."public-ip"' --raw-output) +PRIMARY_DB_VNIC=$(oci db system get --db-system-id $PRIMARY_DB_SYSID --region $PRIMARY_REGION --query 'data."scan-ip-ids"[0]' --raw-output) +PRIMARY_DB_IP=$(oci network private-ip get --region $PRIMARY_REGION --private-ip-id $PRIMARY_DB_VNIC --query 'data."ip-address"' --raw-output) +PRIMARY_DB_DOMAIN=$(oci db system get --db-system-id $PRIMARY_DB_SYSID --region $PRIMARY_REGION --query 'data."domain"' --raw-output) + +STBY_SSH_KEYFILE=$(get_rsp_value SSH_ID_KEYFILE $STBY_RSP ) +STBY_BASTION_HOSTNAME=$(get_rsp_value BASTION_HOSTNAME $STBY_RSP ) +STBY_BASTION_ID=$(grep $STBY_BASTION_HOSTNAME $STBY_RESOURCE_OCID_FILE | cut -f2 -d:) +STBY_BASTION_IP=$(oci compute instance list-vnics --region $STBY_REGION --compartment-id $COMPARTMENT_ID --instance-id $STBY_BASTION_ID --query 'data[0]."public-ip"' --raw-output) +STBY_DB_SYSID=$(oci db system list --compartment-id $COMPARTMENT_ID --region $STBY_REGION --query "data[?contains(\"display-name\", '$PRIMARY_DB_SYS_NAME-Dataguard')].{ocid:id}" | jq -r '.[].ocid') +STBY_DB_VNIC=$(oci db system get --db-system-id $STBY_DB_SYSID --region $STBY_REGION --query 'data."scan-ip-ids"[0]' --raw-output) +STBY_DB_IP=$(oci network private-ip get --region $STBY_REGION --private-ip-id $STBY_DB_VNIC --query 'data."ip-address"' --raw-output) +STBY_DB_NAME=$(oci db database list --compartment-id $COMPARTMENT_ID --region $STBY_REGION --db-system-id $STBY_DB_SYSID --query 'data[0]."db-unique-name"' --raw-output) + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE + OUTDIR=$STBY_OUTDIR + + copy_auth_keys $STBY_SSH_KEYFILE $STBY_BASTION_IP $STBY_DB_IP + +fi + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE + OUTDIR=$STBY_OUTDIR + + set_key_permission $STBY_SSH_KEYFILE $STBY_BASTION_IP $STBY_DB_IP + +fi + +STEPNO=$((STEPNO+1)) +if [[ $STEPNO -gt $PROGRESS ]] +then + RESOURCE_OCID_FILE=$STBY_RESOURCE_OCID_FILE + OUTDIR=$STBY_OUTDIR + SERVICES=$(get_db_services $PRIMARY_SSH_KEYFILE $PRIMARY_BASTION_IP $PRIMARY_DB_IP $PRIMARY_DB_NAME $PRIMARY_DB_SUFFIX) + INSTANCES=$(get_db_instances $STBY_SSH_KEYFILE $STBY_BASTION_IP $STBY_DB_IP $STBY_DB_NAME ) + + for svc in $SERVICES + do + create_dg_service $STBY_SSH_KEYFILE $STBY_BASTION_IP $STBY_DB_IP $STBY_DB_NAME $PRIMARY_DB_DOMAIN $INSTANCES $svc + done +fi diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/oke_utils/responsefile/oci_oke.rsp b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/oke_utils/responsefile/oci_oke.rsp index 1c3deca78..92c05a3f7 100644 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/oke_utils/responsefile/oci_oke.rsp +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/oke_utils/responsefile/oci_oke.rsp @@ -68,16 +68,16 @@ DB_MEMORY_CONFIG="dev" CONFIGURE_DATABASE="true" CREATE_OAM_PDB="true" OAM_PDB_NAME="oampdb" -OAM_SERVICE_NAME="oam_s" +OAM_SERVICE_NAME="oamsvc.$DNS_DOMAIN_NAME" CREATE_OIG_PDB="true" 
OIG_PDB_NAME="oigpdb" -OIG_SERVICE_NAME="oig_s" +OIG_SERVICE_NAME="oigsvc.$DNS_DOMAIN_NAME" CREATE_OAA_PDB="false" OAA_PDB_NAME="oaapdb" -OAA_SERVICE_NAME="oaa_s" +OAA_SERVICE_NAME="oaasvc.$DNS_DOMAIN_NAME" CREATE_OIRI_PDB="false" OIRI_PDB_NAME="oiripdb" -OIRI_SERVICE_NAME="oiri_s" +OIRI_SERVICE_NAME="oirisvc.$DNS_DOMAIN_NAME" # OCI images names to use for the underlying OS instances. Note that they default to use the same as defined for the # Bastion host but each can be set separately if desired. diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oaa.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oaa.sh index 95a9d6ff6..545dc13bc 100755 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oaa.sh +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oaa.sh @@ -81,6 +81,8 @@ echo create_local_workdir create_logdir +printf "Using Image:" +printf "\n\t$OAA_MGT_IMAGE:$OAAMGT_VER\n\n" echo -n "Provisioning Oracle Advanced Authentication on " >> $LOGDIR/timings.log date +"%a %d %b %Y %T" >> $LOGDIR/timings.log @@ -155,6 +157,12 @@ then update_progress fi +new_step +if [ $STEPNO -gt $PROGRESS ] +then + copy_settings_file + update_progress +fi new_step if [ $STEPNO -gt $PROGRESS ] @@ -225,42 +233,46 @@ then update_progress fi -# Create OHS rewrite Rules -# -new_step -if [ $STEPNO -gt $PROGRESS ] +if [ "$OAA_CREATE_OHS" = "true" ] then - if [ "$UPDATE_OHS" = "true" ] - then - add_ohs_rewrite_rules - update_progress - fi -fi + # Create OHS rewrite Rules + # + new_step + if [ $STEPNO -gt $PROGRESS ] + then + if [ "$UPDATE_OHS" = "true" ] + then + add_ohs_rewrite_rules + update_progress + fi + fi -# Add OHS entries for OAA to OAM ohs config files if Ingress is being used -# -if [ "$USE_INGRESS" = "true" ] && [ "$OAA_CREATE_OHS" = "true" ] -then - new_step - if [ $STEPNO -gt $PROGRESS ] - then - create_ohs_entries - update_progress - fi -fi -# Copy OHS config to OHS servers if required -# -new_step -if [ $STEPNO -gt $PROGRESS ] -then - if [ "$UPDATE_OHS" = "true" ] && [ "$OAA_CREATE_OHS" = "true" ] - then - copy_ohs_config - update_progress - fi + # Add OHS entries for OAA to OAM ohs config files if Ingress is being used + # + if [ "$USE_INGRESS" = "true" ] + then + new_step + if [ $STEPNO -gt $PROGRESS ] + then + create_ohs_entries + update_progress + fi + fi + + # Copy OHS config to OHS servers if required + # + new_step + if [ $STEPNO -gt $PROGRESS ] + then + if [ "$UPDATE_OHS" = "true" ] && [ "$OAA_CREATE_OHS" = "true" ] + then + copy_ohs_config + update_progress + fi + fi fi diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oam.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oam.sh index 84e646560..4c2ee31ba 100755 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oam.sh +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oam.sh @@ -64,11 +64,13 @@ then then INGRESS_HTTP_PORT=`get_k8_port $INGRESS_NAME $INGRESSNS http ` INGRESS_HTTPS_PORT=`get_k8_port $INGRESS_NAME $INGRESSNS https` + INGRESS_HOST="" else INGRESS_HTTP_PORT=$INGRESS_HTTP INGRESS_HTTPS_PORT=$INGRESS_HTTPS - INGRESS_HOST=`kubectl get svc -n ingressns | awk '{print $4}' | grep -v EXTERNAL` + INGRESS_HOST=`kubectl get svc -n $INGRESSNS | awk '{print $4}' | grep -v EXTERNAL` 
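+    # Assumption: a single LoadBalancer service exists in $INGRESSNS, so the
+    # fourth column of 'kubectl get svc' (the EXTERNAL-IP column) yields the
+    # ingress controller address once the load balancer has been provisioned.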
fi + if [ "$INGRESS_HTTP_PORT" = "" ] then echo "Unable to get Ingress Ports - Check Ingress is running" @@ -90,6 +92,8 @@ echo create_local_workdir create_logdir +printf "Using Image:" +printf "\n\t$OAM_IMAGE:$OAM_VER\n\n" echo -n "Provisioning OAM on " >> $LOGDIR/timings.log date +"%a %d %b %Y %T" >> $LOGDIR/timings.log diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oig.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oig.sh index e7595a96e..5e73ac2c1 100755 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oig.sh +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oig.sh @@ -84,11 +84,15 @@ echo create_local_workdir create_logdir +printf "Using Image:" +printf "\n\t$OIG_IMAGE:$OIG_VER\n\n" echo -n "Provisioning OIG on " >> $LOGDIR/timings.log date +"%a %d %b %Y %T" >> $LOGDIR/timings.log echo "-----------------------------------------------" >> $LOGDIR/timings.log - +echo >> $LOGDIR/timings.log +printf "Using Image:">> $LOGDIR/timings.log +printf "\n\t$OIG_IMAGE:$OIG_VER">> $LOGDIR/timings.log STEPNO=1 PROGRESS=$(get_progress) @@ -232,6 +236,15 @@ then update_progress fi +# Increase Timeouts +# +new_step +if [ $STEPNO -gt $PROGRESS ] +then + increase_to + update_progress +fi + # Perform Initial Domain Start # new_step diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oiri.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oiri.sh index 6ddaf1502..684dc6bc4 100755 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oiri.sh +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oiri.sh @@ -80,10 +80,21 @@ echo create_local_workdir create_logdir +printf "Using Images:" +printf "\n\t$OIRI_CLI_IMAGE:$OIRICLI_VER" +printf "\n\t$OIRI_IMAGE:$OIRI_VER" +printf "\n\t$OIRI_UI_IMAGE:$OIRIUI_VER" +printf "\n\t$OIRI_DING_IMAGE:$OIRIDING_VER\n\n" echo -n "Provisioning Oracle Identity Role Intelligence on " >> $LOGDIR/timings.log date +"%a %d %b %Y %T" >> $LOGDIR/timings.log echo "---------------------------------------------------------------------------" >> $LOGDIR/timings.log +echo "" +printf "Using Images:" >> $LOGDIR/timings.log +printf "\n\t$OIRI_CLI_IMAGE:$OIRICLI_VER" >> $LOGDIR/timings.log +printf "\n\t$OIRI_IMAGE:$OIRI_VER" >> $LOGDIR/timings.log +printf "\n\t$OIRI_UI_IMAGE:$OIRIUI_VER" >> $LOGDIR/timings.log +printf "\n\t$OIRI_DING_IMAGE:$OIRIDING_VER\n\n" >> $LOGDIR/timings.log STEPNO=1 PROGRESS=$(get_progress) diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oud.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oud.sh index 02105f0c4..9ec260f79 100755 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oud.sh +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oud.sh @@ -72,10 +72,14 @@ echo create_local_workdir create_logdir - +printf "Using Image:" +printf "\n\t$OUD_IMAGE:$OUD_VER\n\n" echo -n "Provisioning OUD on " >> $LOGDIR/timings.log date +"%a %d %b %Y %T" >> $LOGDIR/timings.log echo "------------------------------------------------" >> $LOGDIR/timings.log +echo >> $LOGDIR/timings.log +printf "Using Image:">> $LOGDIR/timings.log +printf 
"\n\t$OUD_IMAGE:$OUD_VER">> $LOGDIR/timings.log STEPNO=1 PROGRESS=$(get_progress) diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oudsm.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oudsm.sh index 7e057743a..771d0c457 100755 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oudsm.sh +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/provision_oudsm.sh @@ -82,10 +82,15 @@ echo create_local_workdir create_logdir +printf "Using Image:" +printf "\n\t$OUDSM_IMAGE:$OUDSM_VER" echo -n "Provisioning OUDSM on " >> $LOGDIR/timings.log date +"%a %d %b %Y %T" >> $LOGDIR/timings.log echo "-------------------------------------------------" >> $LOGDIR/timings.log +echo >> $LOGDIR/timings.log +printf "Using Image:">> $LOGDIR/timings.log +printf "\n\t$OUDSM_IMAGE:$OUDSM_VER">> $LOGDIR/timings.log STEPNO=1 PROGRESS=$(get_progress) diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/responsefile/.drpwd b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/responsefile/.drpwd new file mode 100644 index 000000000..b534bee11 --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/responsefile/.drpwd @@ -0,0 +1,9 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example of a file containing setup passwords for IDM +# + +# Registry Passwords +# +REG_PWD="" diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/responsefile/dr.rsp b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/responsefile/dr.rsp new file mode 100755 index 000000000..ade9b1553 --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/responsefile/dr.rsp @@ -0,0 +1,225 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. 
+# +# This is an example of a responsefile for IDM Provisioning on Kubernetes +# + +############################################################################################ +# CONTROL Parameters # +############################################################################################ +# + +# Products to Deploy +# +DR_OUD=true +DR_OAM=true +DR_OIG=true +DR_OIRI=true +DR_OAA=true +DR_OHS=true + +# Control Parameters +# +USE_REGISTRY=true +USE_INGRESS=true +COPY_FILES_TO_DR=true + +DR_HOST=drhost +DR_USER=opc + +ENV_TYPE=OTHER +USE_MAA_SCRIPTS=true + +############################################################################################ +# GENERIC Parameters # +############################################################################################ +# + +# Local Work Directories + +LOCAL_WORKDIR=/home/opc/workdir +K8_DRDIR=/u01/oracle/user_projects/dr_scripts + + +############################################################################################ +# Container Registry Parameters # +############################################################################################ +# +REGISTRY=iad.ocir.io/mytenancy/idm +REG_USER=mytenancy/oracleidentitycloudservice/myemail@example.com +CREATE_REGSECRET=true + +############################################################################################ +# IMAGE Parameters # +############################################################################################ +# +# Images +# + +# Image Versions +# +RSYNC_VER=latest + + +############################################################################################ +# DR Parameters # +############################################################################################ +DR_TYPE=STANDBY +DRNS=drns + +############################################################################################ +# NFS Parameters # +############################################################################################ +# +DR_PRIMARY_NFS_EXPORT=/export/IAMPVS +DR_PRIMARY_PVSERVER=site1pvserver.example.com + +DR_STANDBY_PVSERVER=site2pvserver.example.com +DR_STANDBY_NFS_EXPORT=/export/IAMPVS + +############################################################################################ +# OUD Parameters # +############################################################################################ +# +OUDNS=oudns +OUD_POD_PREFIX=edg +OUD_REPLICAS=2 + +OUD_PRIMARY_SHARE=$DR_PRIMARY_NFS_EXPORT/oudpv +OUD_PRIMARY_CONFIG_SHARE=$DR_PRIMARY_NFS_EXPORT/oudconfigpv +OUD_STANDBY_SHARE=$DR_STANDBY_NFS_EXPORT/oudpv +OUD_STANDBY_CONFIG_SHARE=$DR_STANDBY_NFS_EXPORT/oudconfigpv + +OUD_LOCAL_CONFIG_SHARE=/nfs_volumes/oudconfigpv +OUD_LOCAL_SHARE=/nfs_volumes/oudpv + +DR_OUD_MINS=5 +DR_CREATE_OUD_JOB=true + +############################################################################################ +# OHS Parameters # +############################################################################################ +# + + +OHS_BASE=/u02/private +OHS_ORACLE_HOME=$OHS_BASE/oracle/products/ohs + +OHS_USER=opc +OHS_HOST1=webhost1.example.com +OHS1_NAME=ohs1 +OHS_HOST2=webhost2.example.com +OHS2_NAME=ohs2 + +OHS_DOMAIN=/u02/private/oracle/config/domains/ohsDomain + +############################################################################################ +# OAM Parameters # +############################################################################################ +# +OAMNS=oamns +OAM_DOMAIN_NAME=accessdomain +OAM_PRIMARY_SHARE=$DR_PRIMARY_NFS_EXPORT/oampv 
+OAM_STANDBY_SHARE=$DR_STANDBY_NFS_EXPORT/oampv +OAM_LOCAL_SHARE=/nfs_volumes/oampv +OAM_SERVER_INITIAL=2 +OAM_PRIMARY_DB_SCAN=site1-scan.example.com +OAM_PRIMARY_DB_SERVICE=oamsvc.example.com +OAM_STANDBY_DB_SCAN=site2-scan.example.com +OAM_STANDBY_DB_SERVICE=oamsvc.example.com +OAM_DB_LISTENER=1521 + +COPY_WG_FILES=true +DR_OAM_MINS=720 +DR_CREATE_OAM_JOB=true +############################################################################################ +# OIG Parameters # +############################################################################################ +# +OIGNS=oigns +OIG_DOMAIN_NAME=governancedomain +OIG_PRIMARY_SHARE=$DR_PRIMARY_NFS_EXPORT/oigpv +OIG_STANDBY_SHARE=$DR_STANDBY_NFS_EXPORT/oigpv +OIG_LOCAL_SHARE=/nfs_volumes/oigpv +OIG_SERVER_INITIAL=2 +OIG_PRIMARY_DB_SCAN=site1-scan.example.com +OIG_PRIMARY_DB_SERVICE=oigsvc.example.com +OIG_STANDBY_DB_SCAN=site2-scan.example.com +OIG_STANDBY_DB_SERVICE=oigsvc.example.com + + +DR_OIG_MINS=720 +DR_CREATE_OIG_JOB=true + +############################################################################################ +# OIRI Parameters # +############################################################################################ +# +OIRINS=oirins +DINGNS=dingns + +# NFS Parameters +# +OIRI_PRIMARY_SHARE=$DR_PRIMARY_NFS_EXPORT/oiripv +OIRI_STANDBY_SHARE=$DR_STANDBY_NFS_EXPORT/oiripv +OIRI_DING_PRIMARY_SHARE=$DR_PRIMARY_NFS_EXPORT/dingpv +OIRI_DING_STANDBY_SHARE=$DR_STANDBY_NFS_EXPORT/dingpv +OIRI_WORK_PRIMARY_SHARE=$DR_PRIMARY_NFS_EXPORT/workpv +OIRI_WORK_STANDBY_SHARE=$DR_STANDBY_NFS_EXPORT/workpv +OIRI_LOCAL_SHARE=/nfs_volumes/oiripv +OIRI_DING_LOCAL_SHARE=/nfs_volumes/dingpv +OIRI_WORK_LOCAL_SHARE=/nfs_volumes/workpv +OIRI_PRIMARY_DB_SCAN=site1-scan.example.com +OIRI_STANDBY_DB_SCAN=site2-scan.example.com +OIRI_PRIMARY_DB_SERVICE=oirisvc.example.com +OIRI_STANDBY_DB_SERVICE=oirisvc.example.com + +OIRI_PRIMARY_K8CONFIG=primary_k8config +OIRI_STANDBY_K8CONFIG=standby_k8config +OIRI_PRIMARY_K8CA=primary_ca.crt +OIRI_STANDBY_K8CA=standby_ca.crt +OIRI_PRIMARY_K8=10.0.0.5:6443 +OIRI_STANDBY_K8=10.1.0.10:6443 + +DR_OIRI_MINS=720 +DR_CREATE_OIRI_JOB=true + +############################################################################################ +# OAA Parameters # +############################################################################################ +# +OAANS=oaans +OAA_MGT_IMAGE=$REGISTRY/oaa-mgmt +OAAMGT_VER=12.2.1.4-jdk8-ol7-DATE + +# NFS Parameters +# +OAA_PRIMARY_CONFIG_SHARE=$DR_PRIMARY_NFS_EXPORT/oaaconfigpv +OAA_STANDBY_CONFIG_SHARE=$DR_STANDBY_NFS_EXPORT/oaaconfigpv +OAA_PRIMARY_CRED_SHARE=$DR_PRIMARY_NFS_EXPORT/oaacredpv +OAA_STANDBY_CRED_SHARE=$DR_STANDBY_NFS_EXPORT/oaacredpv +OAA_PRIMARY_LOG_SHARE=$DR_PRIMARY_NFS_EXPORT/oaalogpv +OAA_STANDBY_LOG_SHARE=$DR_STANDBY_NFS_EXPORT/oaalogpv +OAA_PRIMARY_VAULT_SHARE=$DR_PRIMARY_NFS_EXPORT/oaavaultpv +OAA_STANDBY_VAULT_SHARE=$DR_STANDBY_NFS_EXPORT/oaavaultpv +OAA_LOCAL_CONFIG_SHARE=/nfs_volumes/oaaconfigpv +OAA_LOCAL_CRED_SHARE=/nfs_volumes/oaacredpv +OAA_LOCAL_LOG_SHARE=/nfs_volumes/oaalogpv +OAA_LOCAL_VAULT_SHARE=/nfs_volumes/oaavaultpv +OAA_LOCAL_SHARE=$OAA_LOCAL_CONFIG_SHARE + +OAA_VAULT_TYPE=file +OAA_REPLICAS=2 + + +# DB Parameters +# +OAA_PRIMARY_DB_SCAN=site1-scan.example.com +OAA_STANDBY_DB_SCAN=site2-scan.example.com +OAA_PRIMARY_DB_SERVICE=oaasvc.example.com +OAA_STANDBY_DB_SERVICE=oaasvc.example.com + +DR_OAA_MINS=720 +DR_CREATE_OAA_JOB=true +SAMPLES_REP="https://github.com/oracle/fmw-kubernetes.git" +MAA_SAMPLES_REP="https://github.com/oracle-samples/maa" diff 
--git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/responsefile/idm.rsp b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/responsefile/idm.rsp index c9eeee8dd..7d040c974 100755 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/responsefile/idm.rsp +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/responsefile/idm.rsp @@ -3,7 +3,7 @@ # # This is an example of a responsefile for IDM Provisioning on Kubernetes # -# Version: 4.0 +# Version: 4.1 ############################################################################################ # CONTROL Parameters # @@ -37,7 +37,7 @@ ENV_TYPE=OCI IMAGE_TYPE=crio ############################################################################################ -# GENERIC Registry Parameters # +# GENERIC Parameters # ############################################################################################ # # Image Download Location @@ -117,7 +117,7 @@ OIRIDING_VER=12.2.1.4-jdk8-ol7-DATE OAAMGT_VER=12.2.1.4-jdk8-ol7-DATE OAA_VER=12.2.1.4-jdk8-ol7-DATE -OPER_VER=4.0.4 +OPER_VER=4.1.2 ############################################################################################ # NFS Parameters # @@ -217,6 +217,13 @@ OUD_LDAP_K8=31389 OUD_LDAPS_K8=31636 OUD_ADMIN_K8=31444 +# Pod Resource Allocation +# +OUD_MAX_CPU=1 # Max CPU Cores pod is allowed to consume. +OUD_MAX_MEMORY=4Gi # Max Memory pod is allowed to consume. +OUD_CPU=500m # Initial CPU Units 1000m = 1 CPU core +OUD_MEMORY=2Gi # Initial Memory allocated to pod. +OUDSERVER_TUNING_PARAMS="-Xms1024m -Xmx2048m " ############################################################################################ # OUDSM Parameters # @@ -248,7 +255,7 @@ OAM_SERVER_COUNT=5 OAM_SERVER_INITIAL=2 OAM_DB_SCAN=db-scan.example.com OAM_DB_LISTENER=1521 -OAM_DB_SERVICE=oam_s.example.com +OAM_DB_SERVICE=oamsvc.example.com OAM_RCU_PREFIX=IAD OAM_WEBLOGIC_USER=weblogic OAM_DOMAIN_NAME=accessdomain @@ -264,7 +271,13 @@ OAM_OIG_INTEG=true OAM_OAMADMIN_USER=$LDAP_OAMADMIN_USER +# Resource Parameters +# OAMSERVER_JAVA_PARAMS="-Xms2048m -Xmx8192m " +OAM_MAX_CPU=2 # Max CPU Cores pod is allowed to consume. +OAM_CPU=1000m # Initial CPU Units 1000m = 1 CPU core +OAM_MAX_MEMORY=10Gi # Max Memory pod is allowed to consume. +OAM_MEMORY=2Gi # Initial Memory allocated to pod. # OAM Ports # @@ -289,7 +302,7 @@ OIG_SERVER_INITIAL=2 OIG_DOMAIN_NAME=governancedomain OIG_DB_SCAN=db-scan.example.com OIG_DB_LISTENER=1521 -OIG_DB_SERVICE=oig_s.example.com +OIG_DB_SERVICE=oigsvc.example.com OIG_RCU_PREFIX=IGD OIG_WEBLOGIC_USER=weblogic OIG_ADMIN_LBR_HOST=igdadmin.example.com @@ -314,8 +327,18 @@ OIG_EMAIL_ADDRESS=email@example.com OIG_EMAIL_FROM_ADDRESS=fromaddress@example.com OIG_EMAIL_REPLY_ADDRESS=noreplies@example.com +# Pod Resource Allocation +# OIMSERVER_JAVA_PARAMS="-Xms4096m -Xmx8192m " SOASERVER_JAVA_PARAMS="-Xms4096m -Xmx8192m " +OIM_MAX_CPU=2 # Max CPU Cores pod is allowed to consume. +OIM_CPU=1000m # Initial CPU Units 1000m = 1 CPU core +OIM_MAX_MEMORY=10Gi # Max Memory pod is allowed to consume. +OIM_MEMORY=4Gi # Initial Memory allocated to pod. +SOA_MAX_CPU=2 # Max CPU Cores pod is allowed to consume. +SOA_CPU=1000m # Initial CPU Units 1000m = 1 CPU core +SOA_MAX_MEMORY=10Gi # Max Memory pod is allowed to consume. +SOA_MEMORY=4Gi # Initial Memory allocated to pod. 
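+# Illustrative note (assuming the rendering shown in the oigDomain.sedfile
+# template): the *_CPU/*_MEMORY values above become pod resource requests and
+# the *_MAX_* values become limits, e.g. with the sample values this renders as:
+#
+#   resources:
+#     limits:
+#       cpu: "2"
+#       memory: 10Gi
+#     requests:
+#       cpu: 1000m
+#       memory: 4Gi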
# OIG Ports # @@ -374,7 +397,7 @@ OIRI_WORK_SHARE=$IAM_PVS/workpv # OIRI_DB_SCAN=db-scan.example.com OIRI_DB_LISTENER=1521 -OIRI_DB_SERVICE=oiri_s.example.com +OIRI_DB_SERVICE=oirisvc.example.com OIRI_RCU_PREFIX=ORI # Ingress Parameters @@ -431,7 +454,7 @@ OAA_CREATE_OHS=true # OAA_DB_SCAN=db-scan.example.com OAA_DB_LISTENER=1521 -OAA_DB_SERVICE=oaa_s.example.com +OAA_DB_SERVICE=oaasvc.example.com OAA_RCU_PREFIX=OAA # Users/Groups @@ -492,6 +515,38 @@ OAA_PUSH_REPLICAS=2 OAA_RISK_REPLICAS=2 OAA_RISKCC_REPLICAS=2 +# Resource Allocations +# +OAA_OAA_CPU=200m # Initial CPU Units 1000m = 1 CPU core +OAA_OAA_MEMORY=1Gi # Initial Memory allocated to pod. +OAA_ADMIN_CPU=200m # Initial CPU Units 1000m = 1 CPU core +OAA_ADMIN_MEMORY=1Gi # Initial Memory allocated to pod. +OAA_POLICY_CPU=200m # Initial CPU Units 1000m = 1 CPU core +OAA_POLICY_MEMORY=1Gi # Initial Memory allocated to pod. +OAA_SPUI_CPU=200m # Initial CPU Units 1000m = 1 CPU core +OAA_SPUI_MEMORY=1Gi # Initial Memory allocated to pod. +OAA_TOTP_CPU=200m # Initial CPU Units 1000m = 1 CPU core +OAA_TOTP_MEMORY=1Gi # Initial Memory allocated to pod. +OAA_YOTP_CPU=200m # Initial CPU Units 1000m = 1 CPU core +OAA_YOTP_MEMORY=1Gi # Initial Memory allocated to pod. +OAA_FIDO_CPU=200m # Initial CPU Units 1000m = 1 CPU core +OAA_FIDO_MEMORY=1Gi # Initial Memory allocated to pod. +OAA_EMAIL_CPU=200m # Initial CPU Units 1000m = 1 CPU core +OAA_EMAIL_MEMORY=1Gi # Initial Memory allocated to pod. +OAA_PUSH_CPU=200m # Initial CPU Units 1000m = 1 CPU core +OAA_PUSH_MEMORY=1Gi # Initial Memory allocated to pod. +OAA_SMS_CPU=200m # Initial CPU Units 1000m = 1 CPU core +OAA_SMS_MEMORY=1Gi # Initial Memory allocated to pod. +OAA_KBA_CPU=200m # Initial CPU Units 1000m = 1 CPU core +OAA_KBA_MEMORY=1Gi # Initial Memory allocated to pod. +OAA_CUSTOM_CPU=200m # Initial CPU Units 1000m = 1 CPU core +OAA_CUSTOM_MEMORY=1Gi # Initial Memory allocated to pod. +OAA_RISK_CPU=200m # Initial CPU Units 1000m = 1 CPU core +OAA_RISK_MEMORY=1Gi # Initial Memory allocated to pod. +OAA_RISKCC_CPU=200m # Initial CPU Units 1000m = 1 CPU core +OAA_RISKCC_MEMORY=1Gi # Initial Memory allocated to pod. + + ############################################################################################ # INTERNAL Parameters - DO NOT CHANGE # diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/general/dr_cm.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/general/dr_cm.yaml new file mode 100644 index 000000000..5514c6090 --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/general/dr_cm.yaml @@ -0,0 +1,35 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. 
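+#
+# The values below are injected into the DR rsync cron-job pods via envFrom
+# (see the dr_cron.yaml templates); the *_LOCAL_*/*_REMOTE_* pairs drive the
+# connection rewrites performed by the *_dr.sh scripts.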
+#
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: dr-cm
+  namespace:
+data:
+  ENV_TYPE:
+  DR_TYPE:
+  OAM_DOMAIN_NAME:
+  OAM_LOCAL_SCAN:
+  OAM_REMOTE_SCAN:
+  OAM_LOCAL_SERVICE:
+  OAM_REMOTE_SERVICE:
+  OIG_DOMAIN_NAME:
+  OIG_LOCAL_SCAN:
+  OIG_REMOTE_SCAN:
+  OIG_LOCAL_SERVICE:
+  OIG_REMOTE_SERVICE:
+  OIRI_LOCAL_SCAN:
+  OIRI_REMOTE_SCAN:
+  OIRI_LOCAL_SERVICE:
+  OIRI_REMOTE_SERVICE:
+  OIRI_REMOTE_K8:
+  OIRI_REMOTE_K8CONFIG:
+  OIRI_REMOTE_K8CA:
+  OIRI_LOCAL_K8:
+  OIRI_LOCAL_K8CONFIG:
+  OIRI_LOCAL_K8CA:
+  OAA_LOCAL_SCAN:
+  OAA_REMOTE_SCAN:
+  OAA_LOCAL_SERVICE:
+  OAA_REMOTE_SERVICE:
diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oaa/dr_cron.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oaa/dr_cron.yaml
new file mode 100644
index 000000000..fb5279d11
--- /dev/null
+++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oaa/dr_cron.yaml
@@ -0,0 +1,73 @@
+# Copyright (c) 2023, Oracle and/or its affiliates.
+# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
+#
+# This is an example file to deploy a cron job to replicate the primary OAA PV to the DR OAA PV
+#
+apiVersion: batch/v1
+kind: CronJob
+metadata:
+  name: oaarsyncdr
+  namespace:
+spec:
+  schedule: "*/ * * * *"
+  jobTemplate:
+    spec:
+      backoffLimit: 1
+      template:
+        spec:
+          imagePullSecrets:
+          - name: regcred
+          containers:
+          - name: alpine-rsync
+            image: :
+            imagePullPolicy: IfNotPresent
+            envFrom:
+            - configMapRef:
+                name: dr-cm
+            volumeMounts:
+            - mountPath: "/u01/primary_oaaconfigpv"
+              name: oaaconfigpv
+            - mountPath: "/u01/dr_oaaconfigpv"
+              name: oaaconfigpv-dr
+            - mountPath: "/u01/primary_oaavaultpv"
+              name: oaavaultpv
+            - mountPath: "/u01/dr_oaavaultpv"
+              name: oaavaultpv-dr
+            - mountPath: "/u01/primary_oaacredpv"
+              name: oaacredpv
+            - mountPath: "/u01/dr_oaacredpv"
+              name: oaacredpv-dr
+            - mountPath: "/u01/primary_oaalogpv"
+              name: oaalogpv
+            - mountPath: "/u01/dr_oaalogpv"
+              name: oaalogpv-dr
+            command:
+            - /bin/sh
+            - -c
+            - /u01/primary_oaaconfigpv/dr_scripts/oaa_dr.sh
+          restartPolicy: Never
+          volumes:
+          - name: oaaconfigpv
+            persistentVolumeClaim:
+              claimName: primary-oaa-config-pvc
+          - name: oaaconfigpv-dr
+            persistentVolumeClaim:
+              claimName: standby-oaa-config-pvc
+          - name: oaavaultpv
+            persistentVolumeClaim:
+              claimName: primary-oaa-vault-pvc
+          - name: oaavaultpv-dr
+            persistentVolumeClaim:
+              claimName: standby-oaa-vault-pvc
+          - name: oaacredpv
+            persistentVolumeClaim:
+              claimName: primary-oaa-cred-pvc
+          - name: oaacredpv-dr
+            persistentVolumeClaim:
+              claimName: standby-oaa-cred-pvc
+          - name: oaalogpv
+            persistentVolumeClaim:
+              claimName: primary-oaa-log-pvc
+          - name: oaalogpv-dr
+            persistentVolumeClaim:
+              claimName: standby-oaa-log-pvc
diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oaa/dr_pv.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oaa/dr_pv.yaml
new file mode 100644
index 000000000..1fe4c49ff
--- /dev/null
+++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oaa/dr_pv.yaml
@@ -0,0 +1,69 @@
+# Copyright (c) 2023, Oracle and/or its affiliates.
+# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
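+#
+# Four volumes are defined below (config, vault, cred and log), one for each
+# OAA_*_SHARE parameter in dr.rsp; the empty path and server values are
+# expected to be filled in from those parameters.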
+# +# This is an example file to setup a Persistent Volume for OAA DR +# +# +apiVersion: v1 +kind: PersistentVolume +metadata: + name: -oaa-config-pv + labels: + type: -oaa-config-pv +spec: + storageClassName: manual + capacity: + storage: 30Gi + accessModes: + - ReadWriteMany + nfs: + path: + server: +--- +apiVersion: v1 +kind: PersistentVolume +metadata: + name: -oaa-vault-pv + labels: + type: -oaa-vault-pv +spec: + storageClassName: manual + capacity: + storage: 30Gi + accessModes: + - ReadWriteMany + nfs: + path: + server: +--- +apiVersion: v1 +kind: PersistentVolume +metadata: + name: -oaa-cred-pv + labels: + type: -oaa-cred-pv +spec: + storageClassName: manual + capacity: + storage: 30Gi + accessModes: + - ReadWriteMany + nfs: + path: + server: +--- +apiVersion: v1 +kind: PersistentVolume +metadata: + name: -oaa-log-pv + labels: + type: -oaa-log-pv +spec: + storageClassName: manual + capacity: + storage: 30Gi + accessModes: + - ReadWriteMany + nfs: + path: + server: diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oaa/dr_pvc.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oaa/dr_pvc.yaml new file mode 100644 index 000000000..35f3e23c0 --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oaa/dr_pvc.yaml @@ -0,0 +1,76 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example file to setup a Persistent Volume Claim for OAA DR +# +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: -oaa-config-pvc + namespace: + labels: + type: -oaa-config-pvc +spec: + storageClassName: manual + accessModes: + - ReadWriteMany + resources: + requests: + storage: 30Gi + selector: + matchLabels: + type: -oaa-config-pv +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: -oaa-vault-pvc + namespace: + labels: + type: -oaa-vault-pvc +spec: + storageClassName: manual + accessModes: + - ReadWriteMany + resources: + requests: + storage: 30Gi + selector: + matchLabels: + type: -oaa-vault-pv +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: -oaa-cred-pvc + namespace: + labels: + type: -oaa-cred-pvc +spec: + storageClassName: manual + accessModes: + - ReadWriteMany + resources: + requests: + storage: 30Gi + selector: + matchLabels: + type: -oaa-cred-pv +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: -oaa-log-pvc + namespace: + labels: + type: -oaa-log-pvc +spec: + storageClassName: manual + accessModes: + - ReadWriteMany + resources: + requests: + storage: 30Gi + selector: + matchLabels: + type: -oaa-log-pv diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oaa/oaa_dr.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oaa/oaa_dr.sh new file mode 100644 index 000000000..bd948cfae --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oaa/oaa_dr.sh @@ -0,0 +1,287 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example of a script which can be used to create local backups and transfer and restore them to a DR system. 
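+# It is intended to run inside the oaarsyncdr cron-job pod with the primary and
+# standby volumes mounted under /u01/primary_* and /u01/dr_*, and with DR_TYPE,
+# ENV_TYPE and the OAA_* connection values supplied by the dr-cm config map.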
+# +# +# Usage: oaa_dr.sh +# + +COPIES=3 +EXCLUDE_LIST="--exclude=\".snapshot\" --exclude=\"backups\" --exclude=\"dr_scripts\" --exclude=\"backup_running\" --exclude=\"k8sconfig\" --exclude=\"ca.crt\"" + + +create_oci_snapshot() +{ + PRIMARY_BASE=$1 + BACKUP_DIR=$2 + echo -n "Creating Snapshot : $BACKUP_DIR - " + mkdir $PRIMARY_BASE/.snapshot/$BACKUP_DIR + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi + copy_to_remote $PRIMARY_BASE/.snapshot/$BACKUP_DIR $BACKUP_DIR +} + +create_backup() +{ + PRIMARY_BASE=$1 + BACKUP_DIR=$2 + echo "Creating Backup of $PRIMARY_BASE into $PRIMARY_BASE/backups/$BACKUP_DIR - " + mkdir -p $PRIMARY_BASE/backups/$BACKUP_DIR + CMD="rsync -avz $EXCLUDE_LIST $PRIMARY_BASE/ $PRIMARY_BASE/backups/$BACKUP_DIR" + eval $CMD + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi +} + + +restore_backup() +{ + PRIMARY_BASE=$1 + BACKUP_DIR=$2 + echo "Restoring Backup :$PRIMARY_BASE/backups/$BACKUP_DIR to $PRIMARY_BASE - " + CMD="rsync -avz $EXCLUDE_LIST $PRIMARY_BASE/backups/$BACKUP_DIR/ $PRIMARY_BASE" + eval $CMD + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi +} + +update_db_connection() +{ + PRIMARY_BASE=$1 + echo "Updating Database Connections" + cd $PRIMARY_BASE + db_files=$(grep -rl "$OAA_REMOTE_SCAN" . | grep -v backup) + if [ "$db_files" = "" ] + then + echo "No database connections found to change. Check the LOCAL and REMOTE SCAN addresses are set correctly in the config map dr-cm in the DR namespace." + fi + + for file in $db_files + do + echo "Changing scan address from $OAA_REMOTE_SCAN to $OAA_LOCAL_SCAN in file $file" + sed -i "s/$OAA_REMOTE_SCAN/$OAA_LOCAL_SCAN/g" $file + if [ ! "$OAA_REMOTE_SERVICE" = "$OAA_LOCAL_SERVICE" ] + then + echo "Changing service from $OAA_REMOTE_SERVICE to $OAA_LOCAL_SERVICE in file $file" + sed -i "s/$OAA_REMOTE_SERVICE/$OAA_LOCAL_SERVICE/g" $file + fi + done + + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi +} + +update_property_file() +{ + PRIMARY_BASE=$1 + file=$PRIMARY_BASE/installOAA.properties + echo "Updating OAA Property File" + cd $PRIMARY_BASE + echo "Unset Create DB Schemas in file $file" + sed -i "s/database.createschema=true/database.createschema=false/g" $file + + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi +} + +update_k8_connection() +{ + echo "Updating Kubernetes Connections" + cd $PRIMARY_BASE + k8_files="$PRIMARY_BASE/data/conf/data-ingestion-config.yaml $VAULT_PRIMARY_BASE/data/conf/env.properties " + if [ "$k8_files" = "" ] + then + echo "No Kubernetes connections found to change. Check the LOCAL and REMOTE K8 addresses are set correctly in the config map dr-cm in the DR namespace." + fi + + for file in $k8_files + do + echo "Changing Kubernetes Cluster address from $OAA_REMOTE_K8 to $OAA_LOCAL_K8 in file $file" + sed -i "s/$OAA_REMOTE_K8/$OAA_LOCAL_K8/g" $file + done + + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi +} +check_backup_running() +{ + PRIMARY_BASE=$1 + DR_BASE=$2 + if [ -e $PRIMARY_BASE/backup_running ] + then + echo "Previous Backup Still running, exiting." 
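+      # Note: if an earlier run was killed before cleanup, the stale
+      # backup_running marker file must be removed by hand before the next
+      # run can proceed.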
+ exit 1 + else + if [ "$DR_TYPE" = "PRIMARY" ] + then + touch $PRIMARY_BASE/backup_running + touch $DR_BASE/backup_running + fi + fi +} + +copy_to_remote() +{ + PRIMARY_BASE=$1 + DR_BASE=$2 + BACKUP_DIR=$3 + + echo "Remote Copy of Backup :$PRIMARY_BASE/backups/$BACKUP_DIR" + + mkdir -p $DR_BASE/backups/$BACKUP_DIR + CMD="rsync -avz $EXCLUDE_LIST $PRIMARY_BASE/backups/$BACKUP_DIR/ $DR_BASE/backups/$BACKUP_DIR" + echo CMD:$CMD + eval $CMD + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi + +} +check_restore_running() +{ + PRIMARY_BASE=$1 + if [ -e $PRIMARY_BASE/restore_running ] + then + echo "Previous restore Still running, exiting." + exit 1 + else + touch $PRIMARY_BASE/restore_running + fi +} + +remove_old_backups() +{ + BACKUP_DIR=$1 + NO_BACKUPS=$(ls -lstd $BACKUP_DIR/20* | wc -l ) + TO_MANY=$((NO_BACKUPS-COPIES)) + if [ $TO_MANY -gt 0 ] + then + BACKUPS_TO_DELETE=$( ls -lstd $BACKUP_DIR/20* | awk '{print $10}' | head -$TO_MANY) + for file in $BACKUPS_TO_DELETE + do + echo "Deleting Backup : $file" + rm -rf $file + done + fi +} + + +ST=$(date +%s) + +CONFIG_PRIMARY_BASE=/u01/primary_oaaconfigpv +CONFIG_DR_BASE=/u01/dr_oaaconfigpv +VAULT_PRIMARY_BASE=/u01/primary_oaavaultpv +VAULT_DR_BASE=/u01/dr_oaavaultpv +CRED_PRIMARY_BASE=/u01/primary_oaacredpv +CRED_DR_BASE=/u01/dr_oaacredpv +LOG_PRIMARY_BASE=/u01/primary_oaalogpv +LOG_DR_BASE=/u01/dr_oaalogpv + +check_backup_running $CONFIG_PRIMARY_BASE $CONFIG_DR_BASE + +if [ "$DR_TYPE" = "PRIMARY" ] +then + BACKUP=$(date +%F_%H-%M-%S) + + if [ "$ENV_TYPE" = "OCI" ] + then + create_oci_snapshot $CONFIG_PRIMARY_BASE $BACKUP + remove_old_backups $CONFIG_PRIMARY_BASE/.snapshot + create_oci_snapshot $VAULT_PRIMARY_BASE $BACKUP + remove_old_backups $VAULT_PRIMARY_BASE/.snapshot + create_oci_snapshot $CRED_PRIMARY_BASE $BACKUP + remove_old_backups $CRED_PRIMARY_BASE/.snapshot + create_oci_snapshot $LOG_PRIMARY_BASE $BACKUP + remove_old_backups $LOG_PRIMARY_BASE/.snapshot + else + create_backup $CONFIG_PRIMARY_BASE $BACKUP + copy_to_remote $CONFIG_PRIMARY_BASE $CONFIG_DR_BASE $BACKUP + remove_old_backups $CONFIG_PRIMARY_BASE/backups + create_backup $VAULT_PRIMARY_BASE $BACKUP + copy_to_remote $VAULT_PRIMARY_BASE $VAULT_DR_BASE $BACKUP + remove_old_backups $VAULT_PRIMARY_BASE/backups + create_backup $CRED_PRIMARY_BASE $BACKUP + copy_to_remote $CRED_PRIMARY_BASE $CRED_DR_BASE $BACKUP + remove_old_backups $CRED_PRIMARY_BASE/backups + create_backup $LOG_PRIMARY_BASE $BACKUP + copy_to_remote $LOG_PRIMARY_BASE $LOG_DR_BASE $BACKUP + remove_old_backups $LOG_PRIMARY_BASE/backups + fi + + echo "Backup Complete" + rm $CONFIG_PRIMARY_BASE/backup_running + rm $CONFIG_DR_BASE/backup_running + +elif [ "$DR_TYPE" = "STANDBY" ] +then + BACKUP=$(ls -lstr $CONFIG_PRIMARY_BASE/backups | tail -1 | awk '{ print $10 }') + + check_restore_running $CONFIG_PRIMARY_BASE + check_backup_running $CONFIG_PRIMARY_BASE + restore_backup $CONFIG_PRIMARY_BASE $BACKUP + update_db_connection $CONFIG_PRIMARY_BASE + update_property_file $CONFIG_PRIMARY_BASE + remove_old_backups $CONFIG_PRIMARY_BASE/backups + + restore_backup $VAULT_PRIMARY_BASE $BACKUP + remove_old_backups $VAULT_PRIMARY_BASE/backups + echo "Restore Complete" + + restore_backup $CRED_PRIMARY_BASE $BACKUP + remove_old_backups $CRED_PRIMARY_BASE/backups + echo "Restore Complete" + + restore_backup $LOG_PRIMARY_BASE $BACKUP + remove_old_backups $LOG_PRIMARY_BASE/backups + echo "Restore Complete" + + rm $CONFIG_PRIMARY_BASE/restore_running +fi + + +ET=$(date +%s) +time_taken=$((ET-ST)) + +if [ 
"$DR_TYPE" = "PRIMARY" ] +then + eval "echo Total Time taken to create Backup: $(date -ud "@$time_taken" +' %H hours %M minutes %S seconds')" +else + eval "echo Total Time taken to Restore Backup: $(date -ud "@$time_taken" +' %H hours %M minutes %S seconds')" +fi + +exit diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/configoam.props b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/configoam.props index 2d1f0f9e8..b08f2e40b 100644 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/configoam.props +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/configoam.props @@ -45,8 +45,8 @@ OAM11G_IDM_DOMAIN_OHS_PROTOCOL: OAM11G_SERVER_LBR_HOST: OAM11G_SERVER_LBR_PORT: OAM11G_SERVER_LBR_PROTOCOL: -OAM11G_OAM_SERVER_TRANSFER_MODE: simple -OAM_TRANSFER_MODE: simple +OAM11G_OAM_SERVER_TRANSFER_MODE: open +OAM_TRANSFER_MODE: open OAM11G_SSO_ONLY_FLAG: false OAM11G_IMPERSONATION_FLAG: false OAM11G_IDM_DOMAIN_LOGOUT_URLS: /console/jsp/common/logout.jsp,/em/targetauth/emaslogout.jsp diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/dr_cron.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/dr_cron.yaml new file mode 100644 index 000000000..2ea7d65df --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/dr_cron.yaml @@ -0,0 +1,42 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example file to deploy a cron job to replicate the primary OAM PV to the DR OAM PV +# +apiVersion: batch/v1 +kind: CronJob +metadata: + name: oamrsyncdr + namespace: +spec: + schedule: "*/ * * * *" + jobTemplate: + spec: + template: + spec: + imagePullSecrets: + - name: regcred + containers: + - name: alpine-rsync + image: : + imagePullPolicy: IfNotPresent + envFrom: + - configMapRef: + name: dr-cm + volumeMounts: + - mountPath: "/u01/primary_oampv" + name: oampv + - mountPath: "/u01/dr_oampv" + name: oampv-dr + command: + - /bin/sh + - -c + - /u01/primary_oampv/dr_scripts/oam_dr.sh + volumes: + - name: oampv + persistentVolumeClaim: + claimName: primary-oampv-pvc + - name: oampv-dr + persistentVolumeClaim: + claimName: standby-oampv-pvc + restartPolicy: OnFailure diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/dr_oampv.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/dr_oampv.yaml new file mode 100644 index 000000000..18888c868 --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/dr_oampv.yaml @@ -0,0 +1,20 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. 
+# +# +apiVersion: v1 +kind: PersistentVolume +metadata: + name: -domain-pv + labels: + weblogic.domainUID: +spec: + storageClassName: -domain-storage-class + capacity: + storage: 10Gi + accessModes: + - ReadWriteMany + persistentVolumeReclaimPolicy: Retain + nfs: + server: + path: diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/dr_pv.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/dr_pv.yaml new file mode 100644 index 000000000..5b12d0fe0 --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/dr_pv.yaml @@ -0,0 +1,21 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example file to setup a Persistent Volume for OAM DR +# +# +apiVersion: v1 +kind: PersistentVolume +metadata: + name: -oam-pv + labels: + type: -oam-pv +spec: + storageClassName: manual + capacity: + storage: 30Gi + accessModes: + - ReadWriteMany + nfs: + path: + server: diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/dr_pvc.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/dr_pvc.yaml new file mode 100644 index 000000000..463b64c0a --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/dr_pvc.yaml @@ -0,0 +1,22 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example file to setup a Persistent Volume Claim for OAM DR +# +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: -oampv-pvc + namespace: + labels: + type: -oampv-pvc +spec: + storageClassName: manual + accessModes: + - ReadWriteMany + resources: + requests: + storage: 30Gi + selector: + matchLabels: + type: -oam-pv diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/iadadmin_vh.conf b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/iadadmin_vh.conf index 47ffa238e..83551a764 100644 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/iadadmin_vh.conf +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/iadadmin_vh.conf @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2022, Oracle and/or its affiliates. +# Copyright (c) 2021, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # # This is an example of an OHS virtual host conf file for iadadmin_vh.conf @@ -78,4 +78,10 @@ WebLogicCluster :,: + + WLSRequest ON + DynamicServerList OFF + WebLogicCluster :,: + + diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/login_vh.conf b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/login_vh.conf index ddf8d83cb..ac875df77 100644 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/login_vh.conf +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/login_vh.conf @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2022, Oracle and/or its affiliates. 
+# Copyright (c) 2021, 2023, Oracle and/or its affiliates.
 # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
 #
 # This is an example of an OHS virtual host conf file for login_vh.conf
@@ -68,7 +68,7 @@
     WLSRequest ON
     DynamicServerList OFF
-    webLogicCluster :,:
+    WebLogicCluster :,:
     WLCookieName OAMJSESSIONID
     WLProxySSL ON
     WLProxySSLPassThrough ON
@@ -77,7 +77,7 @@
     WLSRequest ON
     DynamicServerList OFF
-    webLogicCluster :,:
+    WebLogicCluster :,:
     WLCookieName OAMJSESSIONID
     WLProxySSL ON
     WLProxySSLPassThrough ON
@@ -86,7 +86,7 @@
     WLSRequest ON
     DynamicServerList OFF
-    webLogicCluster :,:
+    WebLogicCluster :,:
     WLCookieName OAMJSESSIONID
     PathTrim /.well-known
     PathPrepend /oauth2/rest
@@ -97,7 +97,7 @@
     WLSRequest ON
     DynamicServerList OFF
-    webLogicCluster :,:
+    WebLogicCluster :,:
     WLCookieName OAMJSESSIONID
     PathTrim /.well-known
     PathPrepend /oauth2/rest
@@ -108,7 +108,7 @@
     WLSRequest ON
     DynamicServerList OFF
-    webLogicCluster :,:
+    WebLogicCluster :,:
     WLCookieName OAMJSESSIONID
     WLProxySSL ON
     WLProxySSLPassThrough ON
diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/oamDomain.sedfile b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/oamDomain.sedfile
index 05b200ec4..5c1e5819d 100644
--- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/oamDomain.sedfile
+++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/oamDomain.sedfile
@@ -10,7 +10,14 @@
 /replicas:/a\
 serverPod:\
 \  env: \
 \  - name: USER_MEM_ARGS \
-\    value: "-Djava.security.egd=file:/dev/./urandom "
+\    value: "-Djava.security.egd=file:/dev/./urandom " \
+\  resources: \
+\    limits: \
+\      cpu: "" \
+\      memory: \
+\    requests: \
+\      cpu: \
+\      memory: 
 }
diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/oam_dr.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/oam_dr.sh
new file mode 100644
index 000000000..5f222f26a
--- /dev/null
+++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/oam_dr.sh
@@ -0,0 +1,202 @@
+# Copyright (c) 2023, Oracle and/or its affiliates.
+# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
+#
+# This is an example of a script which can be used to create local backups and transfer and restore them to a DR system.
+#
+#
+# Usage: oam_dr.sh
+#
+
+COPIES=3
+EXCLUDE_LIST="--exclude=\".snapshot\" --exclude=\"backups\" --exclude=\"domains/*/servers/*/tmp\" --exclude=\"logs/*\" --exclude=\"dr_scripts\" --exclude=\"network\" --exclude \"domains/*/servers/*/data/nodemanager/*.lck\" --exclude \"domains/*/servers/*/data/nodemanager/*.pid\" --exclude \"domains/*/servers/*/data/nodemanager/*.state\" --exclude \"domains/ConnectorDefaultDirectory\" --exclude=\"backup_running\""
+
+
+create_oci_snapshot()
+{
+   BACKUP_DIR=$1
+   echo -n "Creating Snapshot : $BACKUP_DIR - "
+   mkdir $PRIMARY_BASE/.snapshot/$BACKUP_DIR
+   if [ $? -gt 0 ]
+   then
+      echo Failed.
+ exit 1 + else + echo "Success" + fi + copy_to_remote $PRIMARY_BASE/.snapshot/$BACKUP_DIR $BACKUP_DIR +} + +create_backup() +{ + BACKUP_DIR=$1 + echo "Creating Backup of $PRIMARY_BASE into $PRIMARY_BASE/backups/$BACKUP_DIR - " + mkdir -p $PRIMARY_BASE/backups/$BACKUP_DIR + CMD="rsync -avz $EXCLUDE_LIST $PRIMARY_BASE/ $PRIMARY_BASE/backups/$BACKUP_DIR" + eval $CMD + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi + copy_to_remote $PRIMARY_BASE/backups/$BACKUP_DIR $BACKUP_DIR +} + + +restore_backup() +{ + BACKUP_DIR=$1 + echo "Restoring Backup : $PRIMARY_BASE/backups/$BACKUP_DIR - " + CMD="rsync -avz $EXCLUDE_LIST $PRIMARY_BASE/backups/$BACKUP_DIR/ $PRIMARY_BASE" + eval $CMD + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi +} + +update_db_connection() +{ + echo "Updating Database Connections" + cd $PRIMARY_BASE/domains/$OAM_DOMAIN_NAME/config + db_files=$(grep -rl "$OAM_REMOTE_SCAN" . | grep -v backup) + if [ "$db_files" = "" ] + then + echo "No database connections found to change. Check the LOCAL and REMOTE SCAN addresses are set correctly in the config map dr-cm in the DR namespace." + fi + + for file in $db_files + do + echo "Changing scan address from $OAM_REMOTE_SCAN to $OAM_LOCAL_SCAN in file $file" + sed -i "s/$OAM_REMOTE_SCAN/$OAM_LOCAL_SCAN/g" $file + if [ ! "$OAM_REMOTE_SERVICE" = "$OAM_LOCAL_SERVICE" ] + then + echo "Changing service from $OAM_REMOTE_SERVICE to $OAM_LOCAL_SERVICE in file $file" + sed -i "s/$OAM_REMOTE_SERVICE/$OAM_LOCAL_SERVICE/g" $file + fi + done + + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi +} +check_backup_running() +{ + if [ -e $PRIMARY_BASE/backup_running ] + then + echo "Previous Backup Still running, exiting." + exit + else + if [ "$DR_TYPE" = "PRIMARY" ] + then + touch $PRIMARY_BASE/backup_running + touch $DR_BASE/backup_running + fi + fi +} + +copy_to_remote() +{ + source=$1 + remote=$2 + + echo "Remote Copy of Backup :$BACKUP_DIR" + + mkdir -p $DR_BASE/backups/$remote + CMD="rsync -avz $EXCLUDE_LIST $source/ $DR_BASE/backups/$remote" + echo CMD:$CMD + eval $CMD + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi + +} +check_restore_running() +{ + if [ -e $PRIMARY_BASE/restore_running ] + then + echo "Previous restore Still running, exiting." 
+ exit + else + touch $PRIMARY_BASE/restore_running + fi +} + +remove_old_backups() +{ + BACKUP_DIR=$1 + echo NO_BACKUPS=`ls -lstd $BACKUP_DIR/20* | wc -l ` + NO_BACKUPS=`ls -lstrd $BACKUP_DIR/20* | wc -l ` + TO_MANY=$((NO_BACKUPS-COPIES)) + if [ $TO_MANY -gt 0 ] + then + BACKUPS_TO_DELETE=` ls -lstd $BACKUP_DIR/20* | awk '{print $10}' | head -$TO_MANY` + for file in $BACKUPS_TO_DELETE + do + echo "Deleting Backup : $file" + rm -rf $file + done + fi +} + +ST=$(date +%s) + +PRIMARY_BASE=/u01/primary_oampv +DR_BASE=/u01/dr_oampv +check_backup_running + +if [ "$DR_TYPE" = "PRIMARY" ] +then + BACKUP=$(date +%F_%H-%M-%S) + + if [ "$ENV_TYPE" = "OCI" ] + then + create_oci_snapshot $BACKUP + remove_old_backups $PRIMARY_BASE/.snapshot + else + create_backup $BACKUP + remove_old_backups $PRIMARY_BASE/backups + fi + + echo "Backup Complete" + rm $PRIMARY_BASE/backup_running + rm $DR_BASE/backup_running + +elif [ "$DR_TYPE" = "STANDBY" ] +then + BACKUP=`ls -lstr $PRIMARY_BASE/backups | tail -1 | awk '{ print $10 }'` + + check_restore_running + restore_backup $BACKUP + update_db_connection + remove_old_backups $PRIMARY_BASE/backups + + echo "Restore Complete" + rm $PRIMARY_BASE/restore_running + +fi + + +ET=$(date +%s) +time_taken=$((ET-ST)) + +if [ "$DR_TYPE" = "PRIMARY" ] +then + eval "echo Total Time taken to create Backup: $(date -ud "@$time_taken" +' %H hours %M minutes %S seconds')" +else + eval "echo Total Time taken to Restore Backup: $(date -ud "@$time_taken" +' %H hours %M minutes %S seconds')" +fi + +exit diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/oamconfig_modify_template.xml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/oamconfig_modify_template.xml index 0e91e83c1..e32eb2d24 100644 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/oamconfig_modify_template.xml +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/oamconfig_modify_template.xml @@ -12,8 +12,8 @@ ://: ://:/oam/server/logout -simple -simple +open +open ://:/oam/server/logout 15 M 5 M diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/resource_list.txt b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/resource_list.txt index 5011098d0..d57f1da46 100644 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/resource_list.txt +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oam/resource_list.txt @@ -27,3 +27,4 @@ /oaa-yotp-factor/**:EXCLUDED:: /risk-analyzer/**:EXCLUDED:: /risk-cc/**:EXCLUDED:: +/dms/**:EXCLUDED:: diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/dr_cron.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/dr_cron.yaml new file mode 100644 index 000000000..9d6fc4e0e --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/dr_cron.yaml @@ -0,0 +1,42 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. 
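+#
+# Note: the cron schedule in this template is left as "*/ * * * *"; the minutes
+# interval is presumably completed from DR_OIG_MINS in dr.rsp by the
+# provisioning scripts.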
+# +# This is an example file to deploy a cron job to replicate the primary OIG PV to the DR OIG PV +# +apiVersion: batch/v1 +kind: CronJob +metadata: + name: oigrsyncdr + namespace: +spec: + schedule: "*/ * * * *" + jobTemplate: + spec: + template: + spec: + imagePullSecrets: + - name: regcred + containers: + - name: alpine-rsync + image: : + imagePullPolicy: IfNotPresent + envFrom: + - configMapRef: + name: dr-cm + volumeMounts: + - mountPath: "/u01/primary_oigpv" + name: oigpv + - mountPath: "/u01/dr_oigpv" + name: oigpv-dr + command: + - /bin/sh + - -c + - /u01/primary_oigpv/dr_scripts/oig_dr.sh + volumes: + - name: oigpv + persistentVolumeClaim: + claimName: primary-oigpv-pvc + - name: oigpv-dr + persistentVolumeClaim: + claimName: standby-oigpv-pvc + restartPolicy: OnFailure diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/dr_oigpv.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/dr_oigpv.yaml new file mode 100644 index 000000000..8da297fb7 --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/dr_oigpv.yaml @@ -0,0 +1,21 @@ +# Copyright (c) 2020, 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +apiVersion: v1 +kind: PersistentVolume +metadata: + name: -domain-pv + labels: + weblogic.domainUID: +spec: + storageClassName: -domain-storage-class + capacity: + storage: 10Gi + accessModes: + - ReadWriteMany + # Valid values are Retain, Delete or Recycle + persistentVolumeReclaimPolicy: Retain + # hostPath: + nfs: + server: + path: "" diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/dr_pv.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/dr_pv.yaml new file mode 100644 index 000000000..ceb2fa558 --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/dr_pv.yaml @@ -0,0 +1,21 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example file to setup a Persistent Volume for OIG DR +# +# +apiVersion: v1 +kind: PersistentVolume +metadata: + name: -oig-pv + labels: + type: -oig-pv +spec: + storageClassName: manual + capacity: + storage: 30Gi + accessModes: + - ReadWriteMany + nfs: + path: + server: diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/dr_pvc.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/dr_pvc.yaml new file mode 100644 index 000000000..3b6690dbe --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/dr_pvc.yaml @@ -0,0 +1,22 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. 
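+#
+# The claim below binds to its matching volume through storageClassName
+# "manual" and the type label selector rather than dynamic provisioning.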
+# +# This is an example file to setup a Persistent Volume Claim for OIG DR +# +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: -oigpv-pvc + namespace: + labels: + type: -oigpv-pvc +spec: + storageClassName: manual + accessModes: + - ReadWriteMany + resources: + requests: + storage: 30Gi + selector: + matchLabels: + type: -oig-pv diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/igdadmin_vh.conf b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/igdadmin_vh.conf index 46e82abb8..c93459200 100644 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/igdadmin_vh.conf +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/igdadmin_vh.conf @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2022, Oracle and/or its affiliates. +# Copyright (c) 2021, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # # This is an example of an OHS conf file for igdadmin_vh.conf @@ -87,4 +87,10 @@ WebLogicCluster :,: + + WLSRequest ON + DynamicServerList OFF + WebLogicCluster :,: + + diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/oamoig.sedfile b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/oamoig.sedfile index dbc1edb8b..773aba2dc 100644 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/oamoig.sedfile +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/oamoig.sedfile @@ -47,7 +47,7 @@ s/OAM_PORT:.*/OAM_PORT: / s/OAM_SERVER_VERSION:.*/OAM_SERVER_VERSION: 12c/ s/WEBGATE_TYPE:.*/WEBGATE_TYPE: Webgate12c/ s/COOKIE_DOMAIN:.*/COOKIE_DOMAIN: / -s/OAM_TRANSFER_MODE:.*/OAM_TRANSFER_MODE: Simple/ +s/OAM_TRANSFER_MODE:.*/OAM_TRANSFER_MODE: open/ s/OIM_LOGINATTRIBUTE:.*/OIM_LOGINATTRIBUTE: uid/ s/OAM11G_WLS_ADMIN_HOST:.*/OAM11G_WLS_ADMIN_HOST: -adminserver..svc.cluster.local/ s/OAM11G_WLS_ADMIN_PORT:.*/OAM11G_WLS_ADMIN_PORT: 30012/ diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/oigDomain.sedfile b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/oigDomain.sedfile index d2a417b5d..26e27beb8 100644 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/oigDomain.sedfile +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/oigDomain.sedfile @@ -10,13 +10,27 @@ /replicas:/a\ serverPod:\ \ env: \ \ - name: USER_MEM_ARGS \ -\ value: "-Djava.security.egd=file:/dev/./urandom " +\ value: "-Djava.security.egd=file:/dev/./urandom " \ +\ resources: \ +\ limits: \ +\ cpu: "" \ +\ memory: \ +\ requests: \ +\ cpu: \ +\ memory: } /soa_cluster/,/replicas/{ /replicas/a\ serverPod:\ \ env: \ \ - name: USER_MEM_ARGS \ -\ value: "" +\ value: "" \ +\ resources: \ +\ limits: \ +\ cpu: "2" \ +\ memory: \ +\ requests: \ +\ cpu: \ +\ memory: } diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/oig_dr.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/oig_dr.sh new file mode 100644 index 000000000..24b3a6f6f --- /dev/null +++ 
b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oig/oig_dr.sh @@ -0,0 +1,202 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example of a script which can be used to create local backups and transfer and restore them to a DR system. +# +# +# Usage: oig_dr.sh +# + +COPIES=3 +EXCLUDE_LIST="--exclude=\".snapshot\" --exclude=\"backups\" --exclude=\"domains/*/servers/*/tmp\" --exclude=\"logs/*\" --exclude=\"dr_scripts\" --exclude=\"network\" --exclude \"domains/*/servers/*/data/nodemanager/*.lck\" --exclude \"domains/*/servers/*/data/nodemanager/*.pid\" --exclude \"domains/*/servers/*/data/nodemager/*.state\" --exclude \"domains/ConnectorDefaultDirectory\" --exclude=\"backup_running\"" + + +create_oci_snapshot() +{ + BACKUP_DIR=$1 + echo -n "Creating Snapshot : $BACKUP_DIR - " + mkdir $PRIMARY_BASE/.snapshot/$BACKUP_DIR + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi + copy_to_remote $PRIMARY_BASE/.snapshot/$BACKUP_DIR $BACKUP_DIR +} + +create_backup() +{ + BACKUP_DIR=$1 + echo "Creating Backup of $PRIMARY_BASE into $PRIMARY_BASE/backups/$BACKUP_DIR - " + mkdir -p $PRIMARY_BASE/backups/$BACKUP_DIR + CMD="rsync -avz $EXCLUDE_LIST $PRIMARY_BASE/ $PRIMARY_BASE/backups/$BACKUP_DIR" + eval $CMD + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi + copy_to_remote $PRIMARY_BASE/backups/$BACKUP_DIR $BACKUP_DIR +} + + +restore_backup() +{ + BACKUP_DIR=$1 + echo "Restoring Backup : $PRIMARY_BASE/backups/$BACKUP_DIR - " + CMD="rsync -avz $EXCLUDE_LIST $PRIMARY_BASE/backups/$BACKUP_DIR/ $PRIMARY_BASE" + eval $CMD + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi +} + +update_db_connection() +{ + echo "Updating Database Connections" + cd $PRIMARY_BASE/domains/$OIG_DOMAIN_NAME/config + db_files=$(grep -rl "$OIG_REMOTE_SCAN" . | grep -v backup) + if [ "$db_files" = "" ] + then + echo "No database connections found to change. Check the LOCAL and REMOTE SCAN addresses are set correctly in the config map dr-cm in the DR namespace." + fi + + for file in $db_files + do + echo "Changing scan address from $OIG_REMOTE_SCAN to $OIG_LOCAL_SCAN in file $file" + sed -i "s/$OIG_REMOTE_SCAN/$OIG_LOCAL_SCAN/g" $file + if [ ! "$OIG_REMOTE_SERVICE" = "$OIG_LOCAL_SERVICE" ] + then + echo "Changing service from $OIG_REMOTE_SERVICE to $OIG_LOCAL_SERVICE in file $file" + sed -i "s/$OIG_REMOTE_SERVICE/$OIG_LOCAL_SERVICE/g" $file + fi + done + + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi +} +check_backup_running() +{ + if [ -e $PRIMARY_BASE/backup_running ] + then + echo "Previous Backup Still running, exiting." + exit + else + if [ "$DR_TYPE" = "PRIMARY" ] + then + touch $PRIMARY_BASE/backup_running + touch $DR_BASE/backup_running + fi + fi +} + +copy_to_remote() +{ + source=$1 + remote=$2 + + echo "Remote Copy of Backup :$BACKUP_DIR" + + mkdir -p $DR_BASE/backups/$remote + CMD="rsync -avz $EXCLUDE_LIST $source/ $DR_BASE/backups/$remote" + echo CMD:$CMD + eval $CMD + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi + +} +check_restore_running() +{ + if [ -e $PRIMARY_BASE/restore_running ] + then + echo "Previous restore Still running, exiting." 
+ exit
+ else
+ touch $PRIMARY_BASE/restore_running
+ fi
+}
+
+remove_old_backups()
+{
+ BACKUP_DIR=$1
+ NO_BACKUPS=`ls -1d $BACKUP_DIR/20* | wc -l`
+ echo NO_BACKUPS=$NO_BACKUPS
+ TOO_MANY=$((NO_BACKUPS-COPIES))
+ if [ $TOO_MANY -gt 0 ]
+ then
+ BACKUPS_TO_DELETE=`ls -1trd $BACKUP_DIR/20* | head -$TOO_MANY`
+ for file in $BACKUPS_TO_DELETE
+ do
+ echo "Deleting Backup : $file"
+ rm -rf $file
+ done
+ fi
+}
+
+ST=$(date +%s)
+
+PRIMARY_BASE=/u01/primary_oigpv
+DR_BASE=/u01/dr_oigpv
+check_backup_running
+
+if [ "$DR_TYPE" = "PRIMARY" ]
+then
+ BACKUP=$(date +%F_%H-%M-%S)
+
+ if [ "$ENV_TYPE" = "OCI" ]
+ then
+ create_oci_snapshot $BACKUP
+ remove_old_backups $PRIMARY_BASE/.snapshot
+ else
+ create_backup $BACKUP
+ remove_old_backups $PRIMARY_BASE/backups
+ fi
+
+ echo "Backup Complete"
+ rm $PRIMARY_BASE/backup_running
+ rm $DR_BASE/backup_running
+
+elif [ "$DR_TYPE" = "STANDBY" ]
+then
+ BACKUP=`ls -1tr $PRIMARY_BASE/backups | tail -1`
+
+ check_restore_running
+ restore_backup $BACKUP
+ update_db_connection
+ remove_old_backups $PRIMARY_BASE/backups
+
+ echo "Restore Complete"
+ rm $PRIMARY_BASE/restore_running
+
+fi
+
+
+ET=$(date +%s)
+time_taken=$((ET-ST))
+
+if [ "$DR_TYPE" = "PRIMARY" ]
+then
+ eval "echo Total Time taken to create Backup: $(date -ud "@$time_taken" +' %H hours %M minutes %S seconds')"
+else
+ eval "echo Total Time taken to Restore Backup: $(date -ud "@$time_taken" +' %H hours %M minutes %S seconds')"
+fi
+
+exit
diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/dr_cron.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/dr_cron.yaml
new file mode 100644
index 000000000..0526c6eb5
--- /dev/null
+++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/dr_cron.yaml
@@ -0,0 +1,63 @@
+# Copyright (c) 2023, Oracle and/or its affiliates.
+# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
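
oig_dr.sh takes all of its settings (DR_TYPE, ENV_TYPE, the domain name, and the local and remote database SCAN addresses and services) from environment variables injected through the dr-cm config map referenced in the cron job's envFrom. A sketch of creating that config map by hand, with illustrative namespace and host names; enable_dr.sh builds the real one from responsefile/dr.rsp:

```bash
kubectl create configmap dr-cm -n drns \
  --from-literal=DR_TYPE=STANDBY \
  --from-literal=ENV_TYPE=OCI \
  --from-literal=OIG_DOMAIN_NAME=governancedomain \
  --from-literal=OIG_LOCAL_SCAN=db-scan.site2.example.com \
  --from-literal=OIG_REMOTE_SCAN=db-scan.site1.example.com \
  --from-literal=OIG_LOCAL_SERVICE=oig_s2.example.com \
  --from-literal=OIG_REMOTE_SERVICE=oig_s1.example.com
```
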
+# +# This is an example file to deploy a cron job to replicate the primary OIRI PV to the DR OIRI PV +# +apiVersion: batch/v1 +kind: CronJob +metadata: + name: oirirsyncdr + namespace: +spec: + schedule: "*/ * * * *" + jobTemplate: + spec: + backoffLimit: 1 + template: + spec: + imagePullSecrets: + - name: regcred + containers: + - name: alpine-rsync + image: : + imagePullPolicy: IfNotPresent + envFrom: + - configMapRef: + name: dr-cm + volumeMounts: + - mountPath: "/u01/primary_oiripv" + name: oiripv + - mountPath: "/u01/dr_oiripv" + name: oiripv-dr + - mountPath: "/u01/primary_dingpv" + name: dingpv + - mountPath: "/u01/primary_workpv" + name: workpv + - mountPath: "/u01/dr_dingpv" + name: dingpv-dr + - mountPath: "/u01/dr_workpv" + name: workpv-dr + command: + - /bin/sh + - -c + - /u01/primary_oiripv/dr_scripts/oiri_dr.sh + restartPolicy: Never + volumes: + - name: oiripv + persistentVolumeClaim: + claimName: primary-oiripv-pvc + - name: oiripv-dr + persistentVolumeClaim: + claimName: standby-oiripv-pvc + - name: dingpv + persistentVolumeClaim: + claimName: primary-dingpv-pvc + - name: dingpv-dr + persistentVolumeClaim: + claimName: standby-dingpv-pvc + - name: workpv + persistentVolumeClaim: + claimName: primary-workpv-pvc + - name: workpv-dr + persistentVolumeClaim: + claimName: standby-workpv-pvc diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/dr_oiripv.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/dr_oiripv.yaml new file mode 100644 index 000000000..c88af5494 --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/dr_oiripv.yaml @@ -0,0 +1,44 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +apiVersion: v1 +kind: PersistentVolume +metadata: + annotations: + meta.helm.sh/release-name: oiri + meta.helm.sh/release-namespace: oirins + labels: + app.kubernetes.io/managed-by: Helm + type: nfs + name: oiri-pv +spec: + accessModes: + - ReadWriteMany + capacity: + storage: 10Gi + nfs: + path: + server: + storageClassName: oiri-storage-class + volumeMode: Filesystem +--- +apiVersion: v1 +kind: PersistentVolume +metadata: + annotations: + meta.helm.sh/release-name: oiri + meta.helm.sh/release-namespace: oirins + labels: + app.kubernetes.io/managed-by: Helm + type: nfs + name: ding-pv +spec: + accessModes: + - ReadWriteMany + capacity: + storage: 10Gi + nfs: + path: + server: + storageClassName: ding-storage-class + volumeMode: Filesystem diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/dr_pv.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/dr_pv.yaml new file mode 100644 index 000000000..da0983eca --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/dr_pv.yaml @@ -0,0 +1,53 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. 
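
The OIRI replication job above mounts three primary/standby claim pairs, so it is worth confirming that all six PVCs are Bound before the first scheduled run. A quick check, assuming the DR namespace is drns:

```bash
for pvc in primary-oiripv-pvc standby-oiripv-pvc \
           primary-dingpv-pvc standby-dingpv-pvc \
           primary-workpv-pvc standby-workpv-pvc
do
  # Print each claim with its binding phase; every line should end "Bound".
  kubectl get pvc "$pvc" -n drns \
    -o jsonpath='{.metadata.name}{"\t"}{.status.phase}{"\n"}'
done
```
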
+# +# This is an example file to setup a Persistent Volume for OIRI DR +# +# +apiVersion: v1 +kind: PersistentVolume +metadata: + name: -oiri-pv + labels: + type: -oiri-pv +spec: + storageClassName: manual + capacity: + storage: 30Gi + accessModes: + - ReadWriteMany + nfs: + path: + server: +--- +apiVersion: v1 +kind: PersistentVolume +metadata: + name: -ding-pv + labels: + type: -ding-pv +spec: + storageClassName: manual + capacity: + storage: 30Gi + accessModes: + - ReadWriteMany + nfs: + path: + server: +--- +apiVersion: v1 +kind: PersistentVolume +metadata: + name: -work-pv + labels: + type: -work-pv +spec: + storageClassName: manual + capacity: + storage: 30Gi + accessModes: + - ReadWriteMany + nfs: + path: + server: diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/dr_pvc.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/dr_pvc.yaml new file mode 100644 index 000000000..b32cd446f --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/dr_pvc.yaml @@ -0,0 +1,58 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example file to setup a Persistent Volume Claim for OIRI DR +# +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: -oiripv-pvc + namespace: + labels: + type: -oiripv-pvc +spec: + storageClassName: manual + accessModes: + - ReadWriteMany + resources: + requests: + storage: 30Gi + selector: + matchLabels: + type: -oiri-pv +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: -dingpv-pvc + namespace: + labels: + type: -dingpv-pvc +spec: + storageClassName: manual + accessModes: + - ReadWriteMany + resources: + requests: + storage: 30Gi + selector: + matchLabels: + type: -ding-pv +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: -workpv-pvc + namespace: + labels: + type: -workpv-pvc +spec: + storageClassName: manual + accessModes: + - ReadWriteMany + resources: + requests: + storage: 30Gi + selector: + matchLabels: + type: -work-pv diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/oiri_dr.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/oiri_dr.sh new file mode 100644 index 000000000..d2f05fae5 --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/oiri_dr.sh @@ -0,0 +1,292 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example of a script which can be used to create local backups and transfer and restore them to a DR system. +# +# +# Usage: oiri_dr.sh +# + +COPIES=3 +EXCLUDE_LIST="--exclude=\".snapshot\" --exclude=\"backups\" --exclude=\"dr_scripts\" --exclude=\"backup_running\"" + + +create_oci_snapshot() +{ + PRIMARY_BASE=$1 + BACKUP_DIR=$2 + echo -n "Creating Snapshot : $BACKUP_DIR - " + mkdir $PRIMARY_BASE/.snapshot/$BACKUP_DIR + if [ $? -gt 0 ] + then + echo Failed. 
+ exit 1 + else + echo "Success" + fi + copy_to_remote $PRIMARY_BASE/.snapshot/$BACKUP_DIR $BACKUP_DIR +} + +create_backup() +{ + PRIMARY_BASE=$1 + BACKUP_DIR=$2 + echo "Creating Backup of $PRIMARY_BASE into $PRIMARY_BASE/backups/$BACKUP_DIR - " + mkdir -p $PRIMARY_BASE/backups/$BACKUP_DIR + CMD="rsync -avz $EXCLUDE_LIST $PRIMARY_BASE/ $PRIMARY_BASE/backups/$BACKUP_DIR" + eval $CMD + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi +} + + +restore_backup() +{ + PRIMARY_BASE=$1 + BACKUP_DIR=$2 + echo "Restoring Backup :$PRIMARY_BASE/backups/$BACKUP_DIR to $PRIMARY_BASE - " + CMD="rsync -avz $EXCLUDE_LIST $PRIMARY_BASE/backups/$BACKUP_DIR/ $PRIMARY_BASE" + eval $CMD + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi +} + +update_db_connection() +{ + PRIMARY_BASE=$1 + echo "Updating Database Connections" + cd $PRIMARY_BASE + db_files=$(grep -rl "$OIRI_REMOTE_SCAN" . | grep -v backup) + if [ "$db_files" = "" ] + then + echo "No database connections found to change. Check the LOCAL and REMOTE SCAN addresses are set correctly in the config map dr-cm in the DR namespace." + fi + + for file in $db_files + do + echo "Changing scan address from $OIRI_REMOTE_SCAN to $OIRI_LOCAL_SCAN in file $file" + sed -i "s/$OIRI_REMOTE_SCAN/$OIRI_LOCAL_SCAN/g" $file + if [ ! "$OIRI_REMOTE_SERVICE" = "$OIRI_LOCAL_SERVICE" ] + then + echo "Changing service from $OIRI_REMOTE_SERVICE to $OIRI_LOCAL_SERVICE in file $file" + sed -i "s/$OIRI_REMOTE_SERVICE/$OIRI_LOCAL_SERVICE/g" $file + fi + done + + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi +} + +update_k8_connection() +{ + echo "Updating Kubernetes Connections" + cd $PRIMARY_BASE + k8_files="$DING_PRIMARY_BASE/data/conf/data-ingestion-config.yaml $DING_PRIMARY_BASE/data/conf/env.properties " + if [ "$k8_files" = "" ] + then + echo "No Kubernetes connections found to change. Check the LOCAL and REMOTE K8 addresses are set correctly in the config map dr-cm in the DR namespace." + fi + + for file in $k8_files + do + echo "Changing Kubernetes Cluster address from $OIRI_REMOTE_K8 to $OIRI_LOCAL_K8 in file $file" + sed -i "s/$OIRI_REMOTE_K8/$OIRI_LOCAL_K8/g" $file + done + + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi +} +check_backup_running() +{ + PRIMARY_BASE=$1 + DR_BASE=$2 + if [ -e $PRIMARY_BASE/backup_running ] + then + echo "Previous Backup Still running, exiting." + exit 1 + else + if [ "$DR_TYPE" = "PRIMARY" ] + then + touch $PRIMARY_BASE/backup_running + touch $DR_BASE/backup_running + fi + fi +} + +copy_to_remote() +{ + PRIMARY_BASE=$1 + DR_BASE=$2 + BACKUP_DIR=$3 + + echo "Remote Copy of Backup :$PRIMARY_BASE/backups/$BACKUP_DIR" + + mkdir -p $DR_BASE/backups/$BACKUP_DIR + CMD="rsync -avz $EXCLUDE_LIST $PRIMARY_BASE/backups/$BACKUP_DIR/ $DR_BASE/backups/$BACKUP_DIR" + echo CMD:$CMD + eval $CMD + if [ $? -gt 0 ] + then + echo Failed. + exit 1 + else + echo "Success" + fi + +} +check_restore_running() +{ + PRIMARY_BASE=$1 + if [ -e $PRIMARY_BASE/restore_running ] + then + echo "Previous restore Still running, exiting." 
+ exit 1
+ else
+ touch $PRIMARY_BASE/restore_running
+ fi
+}
+
+remove_old_backups()
+{
+ BACKUP_DIR=$1
+ NO_BACKUPS=$(ls -1d $BACKUP_DIR/20* | wc -l )
+ TOO_MANY=$((NO_BACKUPS-COPIES))
+ if [ $TOO_MANY -gt 0 ]
+ then
+ BACKUPS_TO_DELETE=$(ls -1trd $BACKUP_DIR/20* | head -$TOO_MANY)
+ for file in $BACKUPS_TO_DELETE
+ do
+ echo "Deleting Backup : $file"
+ rm -rf $file
+ done
+ fi
+}
+
+switch_k8_files()
+{
+
+ echo -n "Switching Kubernetes ca file for DING - "
+ cp $DING_PRIMARY_BASE/$OIRI_LOCAL_K8CA $DING_PRIMARY_BASE/ca.crt
+ if [ $? = 0 ]
+ then
+ echo "Success"
+ else
+ echo "Failed."
+ fi
+ echo -n "Switching Kubernetes ca file for OIRI - "
+ cp $WORK_PRIMARY_BASE/$OIRI_LOCAL_K8CA $WORK_PRIMARY_BASE/ca.crt
+ if [ $? = 0 ]
+ then
+ echo "Success"
+ else
+ echo "Failed."
+ fi
+
+ echo -n "Switching Kubernetes config file for OIRI - "
+ cp $WORK_PRIMARY_BASE/$OIRI_LOCAL_K8CONFIG $WORK_PRIMARY_BASE/config
+ if [ $? = 0 ]
+ then
+ echo "Success"
+ else
+ echo "Failed."
+ fi
+}
+
+
+ST=$(date +%s)
+
+OIRI_PRIMARY_BASE=/u01/primary_oiripv
+OIRI_DR_BASE=/u01/dr_oiripv
+DING_PRIMARY_BASE=/u01/primary_dingpv
+DING_DR_BASE=/u01/dr_dingpv
+WORK_PRIMARY_BASE=/u01/primary_workpv
+WORK_DR_BASE=/u01/dr_workpv
+
+check_backup_running $OIRI_PRIMARY_BASE $OIRI_DR_BASE
+
+if [ "$DR_TYPE" = "PRIMARY" ]
+then
+ BACKUP=$(date +%F_%H-%M-%S)
+
+ if [ "$ENV_TYPE" = "OCI" ]
+ then
+ create_oci_snapshot $OIRI_PRIMARY_BASE $BACKUP
+ remove_old_backups $OIRI_PRIMARY_BASE/.snapshot
+ create_oci_snapshot $DING_PRIMARY_BASE $BACKUP
+ remove_old_backups $DING_PRIMARY_BASE/.snapshot
+ create_oci_snapshot $WORK_PRIMARY_BASE $BACKUP
+ remove_old_backups $WORK_PRIMARY_BASE/.snapshot
+ else
+ create_backup $OIRI_PRIMARY_BASE $BACKUP
+ copy_to_remote $OIRI_PRIMARY_BASE $OIRI_DR_BASE $BACKUP
+ remove_old_backups $OIRI_PRIMARY_BASE/backups
+ create_backup $DING_PRIMARY_BASE $BACKUP
+ copy_to_remote $DING_PRIMARY_BASE $DING_DR_BASE $BACKUP
+ remove_old_backups $DING_PRIMARY_BASE/backups
+ create_backup $WORK_PRIMARY_BASE $BACKUP
+ copy_to_remote $WORK_PRIMARY_BASE $WORK_DR_BASE $BACKUP
+ remove_old_backups $WORK_PRIMARY_BASE/backups
+ fi
+
+ echo "Backup Complete"
+ rm $OIRI_PRIMARY_BASE/backup_running
+ rm $OIRI_DR_BASE/backup_running
+
+elif [ "$DR_TYPE" = "STANDBY" ]
+then
+ BACKUP=$(ls -1tr $OIRI_PRIMARY_BASE/backups | tail -1)
+
+ check_restore_running $OIRI_PRIMARY_BASE
+ check_backup_running $OIRI_PRIMARY_BASE
+ restore_backup $OIRI_PRIMARY_BASE $BACKUP
+ update_db_connection $OIRI_PRIMARY_BASE
+ remove_old_backups $OIRI_PRIMARY_BASE/backups
+
+ restore_backup $DING_PRIMARY_BASE $BACKUP
+ update_db_connection $DING_PRIMARY_BASE
+ remove_old_backups $DING_PRIMARY_BASE/backups
+ echo "DING Restore Complete"
+
+ restore_backup $WORK_PRIMARY_BASE $BACKUP
+ update_db_connection $WORK_PRIMARY_BASE
+ update_k8_connection $WORK_PRIMARY_BASE
+ remove_old_backups $WORK_PRIMARY_BASE/backups
+ echo "Restore Complete"
+
+ switch_k8_files
+
+ rm $OIRI_PRIMARY_BASE/restore_running
+fi
+
+
+ET=$(date +%s)
+time_taken=$((ET-ST))
+
+if [ "$DR_TYPE" = "PRIMARY" ]
+then
+ eval "echo Total Time taken to create Backup: $(date -ud "@$time_taken" +' %H hours %M minutes %S seconds')"
+else
+ eval "echo Total Time taken to Restore Backup: $(date -ud "@$time_taken" +' %H hours %M minutes %S seconds')"
+fi
+
+exit
diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/oiri_svc_acct_ingress.yaml
b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/oiri_svc_acct_ingress.yaml index c0925c979..8fe7b0d98 100644 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/oiri_svc_acct_ingress.yaml +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/oiri_svc_acct_ingress.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2022, Oracle and/or its affiliates. +# Copyright (c) 2022, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # # This is an example of creating an service account for OIRI @@ -32,7 +32,7 @@ rules: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: - name: oiri-clusterrolebinding + name: -clusterrolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole @@ -45,7 +45,7 @@ subjects: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: - name: oiri-clusteradmin + name: -clusteradmin roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/oiri_svc_acct_noingress.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/oiri_svc_acct_noingress.yaml index b59a17d4c..fc6df4b47 100644 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/oiri_svc_acct_noingress.yaml +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oiri/oiri_svc_acct_noingress.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2022, Oracle and/or its affiliates. +# Copyright (c) 2021, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # # This is an example of creating an service account for OIRI @@ -32,7 +32,7 @@ rules: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: - name: oiri-clusterrolebinding + name: -clusterrolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole @@ -45,7 +45,7 @@ subjects: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: - name: oiri-rolebinding + name: -rolebinding namespace: roleRef: apiGroup: rbac.authorization.k8s.io diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/dr_cron.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/dr_cron.yaml new file mode 100644 index 000000000..363db4d1d --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/dr_cron.yaml @@ -0,0 +1,42 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. 
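
The renames above (oiri-clusterrolebinding to a prefixed -clusterrolebinding, and likewise for the cluster-admin and role bindings) matter because ClusterRoleBindings are cluster-scoped: two OIRI deployments in different namespaces would otherwise collide on the same object. A quick pre-flight check for a clash, assuming the generated prefix is oiriprod:

```bash
# Cluster-scoped names must be unique across the whole cluster.
if kubectl get clusterrolebinding oiriprod-clusterrolebinding >/dev/null 2>&1
then
  echo "oiriprod-clusterrolebinding already exists - choose another prefix"
fi
```
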
+# +# This is an example file to create a DR Cron Job +# +apiVersion: batch/v1 +kind: CronJob +metadata: + name: oudrsyncdr + namespace: +spec: + schedule: "*/ * * * *" + jobTemplate: + spec: + template: + spec: + imagePullSecrets: + - name: regcred + containers: + - name: alpine-rsync + image: : + imagePullPolicy: IfNotPresent + envFrom: + - configMapRef: + name: dr-cm + volumeMounts: + - mountPath: "/u01/primary_oudpv" + name: oudpv + - mountPath: "/u01/dr_oudpv" + name: oudpv-dr + command: + - /bin/sh + - -c + - /u01/primary_oudpv/dr_scripts/oud_dr.sh + volumes: + - name: oudpv + persistentVolumeClaim: + claimName: primary-oudpv-pvc + - name: oudpv-dr + persistentVolumeClaim: + claimName: standby-oudpv-pvc + restartPolicy: OnFailure diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/dr_oudpv.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/dr_oudpv.yaml new file mode 100644 index 000000000..c644a7781 --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/dr_oudpv.yaml @@ -0,0 +1,54 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# +apiVersion: v1 +kind: PersistentVolume +metadata: + annotations: + meta.helm.sh/release-name: + meta.helm.sh/release-namespace: + labels: + app.kubernetes.io/instance: + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: oud-ds-rs + app.kubernetes.io/version: 12.2.1.4.0 + type: -oud-ds-rs-pv + name: -oud-ds-rs-pv +spec: + accessModes: + - ReadWriteMany + capacity: + storage: 30Gi + nfs: + path: + server: + persistentVolumeReclaimPolicy: Delete + storageClassName: manual + volumeMode: Filesystem +--- +apiVersion: v1 +kind: PersistentVolume +metadata: + annotations: + meta.helm.sh/release-name: + meta.helm.sh/release-namespace: + labels: + app.kubernetes.io/instance: + app.kubernetes.io/managed-by: Helm + app.kubernetes.io/name: oud-ds-rs + app.kubernetes.io/version: 12.2.1.4.0 + helm.sh/chart: oud-ds-rs-0.2 + type: -oud-ds-rs-pv-config + name: -oud-ds-rs-pv-config +spec: + accessModes: + - ReadWriteMany + capacity: + storage: 10Gi + nfs: + path: + server: + persistentVolumeReclaimPolicy: Retain + storageClassName: manual + volumeMode: Filesystem diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/dr_override_oud.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/dr_override_oud.yaml new file mode 100644 index 000000000..997fb2639 --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/dr_override_oud.yaml @@ -0,0 +1,77 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example of a helm override file to deploy OUD +# It will also seed users and groups and create ACIs for integration with other Oracle Identity Products. 
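
Once the oudrsyncdr job above is deployed, replication can be paused and resumed without deleting the job, for example around planned maintenance. The idmdrctl.sh utility later in this change wraps the same calls; drns is an illustrative DR namespace:

```bash
kubectl patch cronjob oudrsyncdr -n drns -p '{"spec":{"suspend":true}}'   # pause
kubectl patch cronjob oudrsyncdr -n drns -p '{"spec":{"suspend":false}}'  # resume
```
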
+# +# Dependencies: ./templates/oud/base.ldif +# ./templates/oud/99-user.ldif +# +# Usage: Used and Input to Helm command +# + +image: + repository: + tag: + pullPolicy: IfNotPresent + +imagePullSecrets: + - name: regcred + +oudConfig: + baseDN: + rootUserDN: + rootUserPassword: + sleepBeforeConfig: 300 + +persistence: + type: networkstorage + networkstorage: + nfs: + server: + path: + size: 30Gi + +configVolume: + enabled: true + type: networkstorage + networkstorage: + nfs: + server: + path: + mountPath: /u01/oracle/config-input + +replicaCount: + +ingress: + enabled: false + type: nginx + tlsEnabled: false + +elk: + enabled: false + imagePullSecrets: + - name: dockercred + + + +cronJob: + kubectlImage: + repository: bitnami/kubectl + tag: + pullPolicy: IfNotPresent + + imagePullSecrets: + - name: dockercred + +baseOUD: + envVars: + - name: schemaConfigFile_1 + value: /u01/oracle/config-input/99-user.ldif + - name: restartAfterSchemaConfig + value: "true" + +replOUD: + envVars: + - name: dsconfig_1 + value: set-global-configuration-prop --set lookthrough-limit:75000 diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/dr_pv.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/dr_pv.yaml new file mode 100644 index 000000000..0af99e10a --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/dr_pv.yaml @@ -0,0 +1,21 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example file to deploy logstash +# +# +apiVersion: v1 +kind: PersistentVolume +metadata: + name: -oud-pv + labels: + type: -oud-pv +spec: + storageClassName: manual + capacity: + storage: 30Gi + accessModes: + - ReadWriteMany + nfs: + path: + server: diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/dr_pvc.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/dr_pvc.yaml new file mode 100644 index 000000000..598b6eb35 --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/dr_pvc.yaml @@ -0,0 +1,22 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example file to deploy logstash +# +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: -oudpv-pvc + namespace: + labels: + type: -oudpv-pvc +spec: + storageClassName: manual + accessModes: + - ReadWriteMany + resources: + requests: + storage: 30Gi + selector: + matchLabels: + type: -oud-pv diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/oud_dr.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/oud_dr.sh new file mode 100755 index 000000000..63849793d --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/oud_dr.sh @@ -0,0 +1,172 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example of a script which can be used to create local backups and transfer and restore them to a DR system. 
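
dr_override_oud.yaml above is an ordinary Helm values override, so on the standby site it would typically be passed to the oud-ds-rs chart when the directory is redeployed. A sketch where the release name, namespace, and chart location are assumptions to be adjusted for your checkout:

```bash
helm upgrade --install oud-ds-rs \
  fmw-kubernetes/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs \
  --namespace oudns \
  --values dr_override_oud.yaml
```
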
+#
+#
+# Usage: oud_dr.sh
+#
+
+COPIES=3
+EXCLUDE_LIST="--exclude=\".snapshot\" "
+
+
+create_oci_snapshot()
+{
+ BACKUP_DIR=$1
+ echo -n "Creating Snapshot : $BACKUP_DIR - "
+ mkdir $PRIMARY_BASE/.snapshot/$BACKUP_DIR
+ if [ $? -gt 0 ]
+ then
+ echo Failed.
+ exit 1
+ else
+ echo "Success"
+ fi
+ copy_to_remote $PRIMARY_BASE/.snapshot/$BACKUP_DIR $BACKUP_DIR
+}
+
+create_backup()
+{
+ BACKUP_DIR=$1
+ echo "Creating Backup of $PRIMARY_BASE into $PRIMARY_BASE/backups/$BACKUP_DIR - "
+ mkdir -p $PRIMARY_BASE/backups/$BACKUP_DIR
+ CMD="rsync -avz $EXCLUDE_LIST $PRIMARY_BASE/ $PRIMARY_BASE/backups/$BACKUP_DIR"
+ eval $CMD
+ if [ $? -gt 0 ]
+ then
+ echo Failed.
+ exit 1
+ else
+ echo "Success"
+ fi
+ copy_to_remote $PRIMARY_BASE/backups/$BACKUP_DIR $BACKUP_DIR
+}
+
+
+restore_backup()
+{
+ BACKUP_DIR=$1
+ echo "Restoring Backup : $PRIMARY_BASE/backups/$BACKUP_DIR - "
+ CMD="rsync -avz $EXCLUDE_LIST $PRIMARY_BASE/backups/$BACKUP_DIR/ $PRIMARY_BASE"
+ eval $CMD
+ if [ $? -gt 0 ]
+ then
+ echo Failed.
+ exit 1
+ else
+ echo "Success"
+ fi
+}
+
+check_backup_running()
+{
+ if [ -e $PRIMARY_BASE/backup_running ]
+ then
+ echo "Previous Backup Still running, exiting."
+ exit
+ else
+ if [ "$DR_TYPE" = "PRIMARY" ]
+ then
+ touch $PRIMARY_BASE/backup_running
+ touch $DR_BASE/backup_running
+ fi
+ fi
+}
+
+copy_to_remote()
+{
+ source=$1
+ remote=$2
+
+ echo "Remote Copy of Backup :$BACKUP_DIR"
+
+ mkdir -p $DR_BASE/backups/$remote
+ CMD="rsync -avz $EXCLUDE_LIST $source/ $DR_BASE/backups/$remote"
+ echo CMD:$CMD
+ eval $CMD
+ if [ $? -gt 0 ]
+ then
+ echo Failed.
+ exit 1
+ else
+ echo "Success"
+ fi
+
+}
+check_restore_running()
+{
+ if [ -e $PRIMARY_BASE/restore_running ]
+ then
+ echo "Previous restore Still running, exiting."
+ exit
+ else
+ touch $PRIMARY_BASE/restore_running
+ fi
+}
+
+remove_old_backups()
+{
+ BACKUP_DIR=$1
+ NO_BACKUPS=`ls -1d $BACKUP_DIR/20* | wc -l`
+ echo NO_BACKUPS=$NO_BACKUPS
+ TOO_MANY=$((NO_BACKUPS-COPIES))
+ if [ $TOO_MANY -gt 0 ]
+ then
+ BACKUPS_TO_DELETE=`ls -1trd $BACKUP_DIR/20* | head -$TOO_MANY`
+ for file in $BACKUPS_TO_DELETE
+ do
+ echo "Deleting Backup : $file"
+ rm -rf $file
+ done
+ fi
+}
+
+ST=$(date +%s)
+
+PRIMARY_BASE=/u01/primary_oudpv
+DR_BASE=/u01/dr_oudpv
+check_backup_running
+
+if [ "$DR_TYPE" = "PRIMARY" ]
+then
+ BACKUP=$(date +%F_%H-%M-%S)
+
+ if [ "$ENV_TYPE" = "OCI" ]
+ then
+ create_oci_snapshot $BACKUP
+ remove_old_backups $PRIMARY_BASE/.snapshot
+ else
+ create_backup $BACKUP
+ remove_old_backups $PRIMARY_BASE/backups
+ fi
+
+ echo "Backup Complete"
+ rm $PRIMARY_BASE/backup_running
+ rm $DR_BASE/backup_running
+
+elif [ "$DR_TYPE" = "STANDBY" ]
+then
+ BACKUP=`ls -1tr $PRIMARY_BASE/backups | tail -1`
+
+ check_restore_running
+ restore_backup $BACKUP
+ remove_old_backups $PRIMARY_BASE/backups
+
+ echo "Restore Complete"
+ rm $PRIMARY_BASE/restore_running
+
+fi
+
+
+ET=$(date +%s)
+time_taken=$((ET-ST))
+
+if [ "$DR_TYPE" = "PRIMARY" ]
+then
+ eval "echo Total Time taken to create Backup: $(date -ud "@$time_taken" +' %H hours %M minutes %S seconds')"
+else
+ eval "echo Total Time taken to Restore Backup: $(date -ud "@$time_taken" +' %H hours %M minutes %S seconds')"
+fi
+
+exit
diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/override_oud.yaml b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/override_oud.yaml
index a2bccb63f..ea3d1243f 100755
---
a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/override_oud.yaml +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/templates/oud/override_oud.yaml @@ -24,6 +24,15 @@ oudConfig: rootUserPassword: sleepBeforeConfig: 300 + # memory, cpu parameters for both requests and limits for oud instances + resources: + limits: + memory: "" + cpu: "" + requests: + memory: "" + cpu: "" + persistence: type: networkstorage networkstorage: @@ -62,6 +71,7 @@ cronJob: imagePullSecrets: - name: dockercred + busybox: image: @@ -74,7 +84,7 @@ baseOUD: - name: importLdif_1 value: --append --replaceExisting --includeBranch ${baseDN} --backendID userRoot --ldifFile /u01/oracle/config-input/base.ldif --rejectFile /u01/oracle/config-input/rejects.ldif --skipFile /u01/oracle/config-input/skip.ldif - name: serverTuning - value: -Xms1024m -Xmx2048m -d64 -XX:+UseCompressedOops -server -Xmn1g -XX:MaxTenuringThreshold=1 -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=60 + value: -d64 -XX:+UseCompressedOops -server -Xmn1g -XX:MaxTenuringThreshold=1 -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=60 - name: dsconfig_1 value: set-global-configuration-prop --set lookthrough-limit:75000 - name: dsconfig_2 @@ -144,6 +154,8 @@ baseOUD: replOUD: envVars: + - name: serverTuning + value: -d64 -XX:+UseCompressedOops -server -Xmn1g -XX:MaxTenuringThreshold=1 -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=60 - name: dsconfig_1 value: set-global-configuration-prop --set lookthrough-limit:75000 - name: dsconfig_2 diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/utils/delete_oam.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/utils/delete_oam.sh index 4ed04f2ad..5c72c02fc 100755 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/utils/delete_oam.sh +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/utils/delete_oam.sh @@ -50,7 +50,11 @@ fi . $SCRIPTDIR/common/functions.sh WORKDIR=$LOCAL_WORKDIR/OAM -LOGDIR=$WORKDIR/logs +LOGDIR=$LOCAL_WORKDIR/delete_logs/OAM +if [ ! -e $LOGDIR ] +then + mkdir -p $LOGDIR +fi mkdir $LOCAL_WORKDIR/deleteLogs > /dev/null 2>&1 @@ -104,6 +108,13 @@ check_stopped $OAMNS adminserver # ST=`date +%s` printf "Dropping Schemas - " +kubectl get pod -n $OAMNS helper > /dev/null 2>&1 + +if [ $? -gt 0 ] +then + create_helper_pod $OAMNS $OAM_IMAGE:$OAM_VER +fi + drop_schemas $OAMNS $OAM_DB_SCAN $OAM_DB_LISTENER $OAM_DB_SERVICE $OAM_RCU_PREFIX OAM $OAM_DB_SYS_PWD $OAM_SCHEMA_PWD >> $LOG 2>&1 ET=`date +%s` @@ -129,15 +140,22 @@ else fi fi +echo "Deleting Namespace $OAMNS" +kubectl delete namespace $OAMNS >> $LOG 2>&1 # Delete All contents in the Persistent Volumes # Requires that the PV is mounted locally +# Remove Persistent Volume & Claim from Kubernetes +# +echo "Removing Persistent Volumes" +kubectl delete pv $OAM_DOMAIN_NAME-domain-pv >> $LOG 2>&1 + echo "Deleting Volumes" if [ ! "$OAM_LOCAL_SHARE" = "" ] then - rm -rf $OAM_LOCAL_SHARE/>> $LOG 2>&1 + rm -rf $OAM_LOCAL_SHARE/applications $OAM_LOCAL_SHARE/domains $OAM_LOCAL_SHARE/dr_scripts $OAM_LOCAL_SHARE/keystores $OAM_LOCAL_SHARE/logs $OAM_LOCAL_SHARE/workdir >> $LOG 2>&1 fi @@ -149,15 +167,6 @@ else echo "Unable to delete volumes." 
fi -# Remove Persistent Volume & Claim from Kubernetes -# -echo "Removing Persistent Volumes" -kubectl delete pvc -n $OAMNS $OAM_DOMAIN_NAME-domain-pvc >> $LOG 2>&1 -kubectl delete pv $OAM_DOMAIN_NAME-domain-pv >> $LOG 2>&1 - - -echo "Deleting Namespace $OAMNS" -kubectl delete namespace $OAMNS >> $LOG 2>&1 FINISH_TIME=`date +%s` print_time TOTAL "Delete OAM " $START_TIME $FINISH_TIME diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/utils/delete_oig.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/utils/delete_oig.sh index 45d9f2709..644c4636b 100755 --- a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/utils/delete_oig.sh +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/utils/delete_oig.sh @@ -49,7 +49,12 @@ fi . $SCRIPTDIR/common/functions.sh export WORKDIR=$LOCAL_WORKDIR/OIG -LOGDIR=$WORKDIR/logs +LOGDIR=$LOCAL_WORKDIR/delete_logs/OIG +if [ ! -e $LOGDIR ] +then + mkdir -p $LOGDIR +fi + START_TIME=`date +%s` mkdir $LOCAL_WORKDIR/deleteLogs > /dev/null 2>&1 @@ -113,9 +118,16 @@ check_stopped $OIGNS adminserver # Drop the OIG schemas # -printf "Drop Schemas - " ST=`date +%s` +kubectl get pod -n $OIGNS helper > /dev/null 2>&1 + +if [ $? -gt 0 ] +then + create_helper_pod $OIGNS $OIG_IMAGE:$OIG_VER +fi + +printf "Drop Schemas - " drop_schemas $OIGNS $OIG_DB_SCAN $OIG_DB_LISTENER $OIG_DB_SERVICE $OIG_RCU_PREFIX OIG $OIG_DB_SYS_PWD $OIG_SCHEMA_PWD >> $LOG 2>&1 ET=`date +%s` @@ -145,12 +157,21 @@ print_time STEP "Drop Schemas" $ST $ET +echo "Delete Namespace" +kubectl delete namespace $OIGNS >> $LOG 2>&1 + +# Remove Persistent Volume and Claim +# +echo "Remove Persistent Volumes" +kubectl delete pv $OIG_DOMAIN_NAME-domain-pv >> $LOG 2>&1 + + ST=`date +%s` echo "Deleting Volumes" if [ ! "$OIG_LOCAL_SHARE" = "" ] then - rm -rf $OIG_LOCAL_SHARE/* >> $LOG 2>&1 + rm -rf $OIG_LOCAL_SHARE/applications $OIG_LOCAL_SHARE/domains $OIG_LOCAL_SHARE/dr_scripts $OIG_LOCAL_SHARE/ConnectorDefaultDirectory $OIG_LOCAL_SHARE/keystores $OIG_LOCAL_SHARE/logs $OIG_LOCAL_SHARE/workdir >> $LOG 2>&1 else echo "Unable to Delete Volumes." fi @@ -167,14 +188,6 @@ fi ET=`date +%s` print_time STEP "Delete Volume" $ST $ET -# Remove Persistent Volume and Claim -# -echo "Remove Persistent Volumes" -kubectl delete pvc -n $OIGNS $OIG_DOMAIN_NAME-domain-pvc >> $LOG 2>&1 -kubectl delete pv $OIG_DOMAIN_NAME-domain-pv >> $LOG 2>&1 - -echo "Delete Namespace" -kubectl delete namespace $OIGNS >> $LOG 2>&1 FINISH_TIME=`date +%s` print_time TOTAL "Delete OIG Domain" $START_TIME $FINISH_TIME diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/utils/enable_dr.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/utils/enable_dr.sh new file mode 100755 index 000000000..9f46d3412 --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/utils/enable_dr.sh @@ -0,0 +1,670 @@ +#!/bin/bash +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. 
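
The delete_oam.sh and delete_oig.sh changes above create the RCU helper pod on demand because drop_schemas runs inside it and the pod may already have been removed from the namespace. A standalone equivalent of that guard, with an illustrative namespace and image tag (create_helper_pod and drop_schemas come from common/functions.sh):

```bash
if ! kubectl get pod -n oigns helper >/dev/null 2>&1
then
  # Recreate the helper pod only when it is missing; image tag is illustrative.
  create_helper_pod oigns container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7
fi
drop_schemas $OIGNS $OIG_DB_SCAN $OIG_DB_LISTENER $OIG_DB_SERVICE \
  $OIG_RCU_PREFIX OIG $OIG_DB_SYS_PWD $OIG_SCHEMA_PWD
```
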
+# +# This is an example of deploying Oracle Identity and Access Management, disaster recovery +# +# Dependencies: ./common/functions.sh +# ./common/oud_functions.sh +# ./common/oam_functions.sh +# ./common/oig_functions.sh +# ./common/oiri_functions.sh +# ./common/oaa_functions.sh +# ./common/ohs_functions.sh +# ./responsefile/dr.rsp +# ./templates/oig +# +# Usage: enable_dr.sh +# +SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" +SCRIPTDIR=$SCRIPTDIR/.. + +# Check for the existence of a responsefile. + +if [ ! -e $SCRIPTDIR/responsefile/dr.rsp ] +then + echo "Responsefile $SCRIPTDIR/responsefile/dr.rsp not found." + exit 1 +fi +if [ ! -e $SCRIPTDIR/responsefile/.drpwd ] +then + echo "Password File $SCRIPTDIR/responsefile/.drpwd not found." + exit 1 +fi + +. $SCRIPTDIR/responsefile/dr.rsp +. $SCRIPTDIR/responsefile/.drpwd +. $SCRIPTDIR/common/functions.sh + +# Validate Product Type and setup Namespace accordingly. +# +product_type=$1 +case "$product_type" in + "oud") + . $SCRIPTDIR/common/oud_functions.sh + OPER_DIR=OracleUnifiedDirectory + PRODUCTNS=$OUDNS + ;; + "oam") + . $SCRIPTDIR/common/oam_functions.sh + OPER_DIR=OracleAccessManagement + PRODUCTNS=$OAMNS + ;; + "oig") + . $SCRIPTDIR/common/oig_functions.sh + OPER_DIR=OracleIdentityGovernance + PRODUCTNS=$OIGNS + ;; + "oiri") + . $SCRIPTDIR/common/oiri_functions.sh + if [ ! "$DINGNS" = "$OIRINS" ] + then + PRODUCTNS="'$OIRINS $DINGNS'" + else + PRODUCTNS=$OIRINS + fi + ;; + "oaa") + . $SCRIPTDIR/common/oaa_functions.sh + PRODUCTNS=$OAANS + ;; + "ohs") + . $SCRIPTDIR/common/ohs_functions.sh + ;; + *) + echo "Usage: enable_dr.sh oam|oig|oiri|oaa|ohs" + exit 1 + ;; +esac + +PRODUCT=${product_type^^} + +TEMPLATE_DIR=$SCRIPTDIR/templates/$product_type + +START_TIME=`date +%s` +WORKDIR=$LOCAL_WORKDIR/${PRODUCT}/DR +LOGDIR=$WORKDIR/logs + +DR_ENABLED=DR_$PRODUCT +if [ "${!DR_ENABLED}" != "true" ] && [ "{!DR_ENABLED}" != "TRUE" ] +then + echo "You have not requested $PRODUCT DR installation" + exit 1 +fi + +echo +echo -n "Enabling $PRODUCT Disaster Recovery - $DR_TYPE " +date +"%a %d %b %Y %T" +echo "--------------------------------------------------------------------" +echo + + +# If the MAA scripts are not being used, the program will delete files on the standby system, make user aware. +# +if [ "$DR_TYPE" = "STANDBY" ] && [ "$USE_MAA_SCRIPTS" = "false" ] +then + echo "WARNING: CREATING A STANDBY SITE WILL REPLACE THE $PRODUCT WITH THE PRIMARY $PRODUCT." + echo "TAKING A BACKUP IS HIGHLY RECOMMENDED." + echo "" +fi + +echo -n "You are requesting to set this site up as an $DR_TYPE $PRODUCT DR site proceed (y/n) ? " +read ANS + +if [ ! "$ANS" = "y" ] && [ ! "$ANS" = "Y" ] +then + echo "Operation Cancelled." + exit +fi + +echo + +create_local_workdir +create_logdir + +echo -n "Enabling $PRODUCT Disaster Recovery - $DR_TYPE " >> $LOGDIR/timings.log +date +"%a %d %b %Y %T" >> $LOGDIR/timings.log +echo "----------------------------------------------------------------" >> $LOGDIR/timings.log + +STEPNO=0 +PROGRESS=$(get_progress) + +if [ ! "$product_type" = "ohs" ] +then + + # If using MAA Scripts make sure they are available. + # MAA scripts are not supported by OAA DR + # + if [ "$USE_MAA_SCRIPTS" = "true" ] && [ ! "$PRODUCT" = "OAA" ] + then + new_step + if [ $STEPNO -gt $PROGRESS ] + then + download_maa_samples + update_progress + fi + + new_step + if [ $STEPNO -gt $PROGRESS ] + then + print_msg "Create MAA Directory $WORKDIR/MAA" + if [ -e $LOCAL_WORKDIR/MAA ] + then + echo "Already Exists." 
+ else + mkdir -p $WORKDIR/MAA > /dev/null 2>&1 + print_status $? + fi + update_progress + fi + fi + + # Create Persistent Volumes for the DR job. + # + new_step + if [ $STEPNO -gt $PROGRESS ] + then + create_dr_pvs $PRODUCT + update_progress + fi + + # Create namespace on DR system + # + new_step + if [ $STEPNO -gt $PROGRESS ] + then + create_namespace $DRNS + update_progress + fi + + # Create a Container Registry Secret if requested + # + new_step + if [ $STEPNO -gt $PROGRESS ] + then + if [ "$CREATE_REGSECRET" = "true" ] + then + create_registry_secret $REGISTRY $REG_USER $REG_PWD $DRNS + fi + update_progress + fi + + # Create Persistent Volume Claims for the DR Job PVs + # + new_step + if [ $STEPNO -gt $PROGRESS ] + then + create_dr_pvcs + update_progress + fi + + # DR jobs are controlled via a configmap, create that CM here. + # + new_step + if [ $STEPNO -gt $PROGRESS ] + then + create_dr_configmap + update_progress + fi + + # Copy the product specific DR shell script to the product Persistent Volume. + # + new_step + if [ $STEPNO -gt $PROGRESS ] + then + copy_dr_script + update_progress + fi + + if [ "$DR_TYPE" = "PRIMARY" ] + then + # Make a backup of the Kubeconfig files used by OIRI + # + if [ "$product_type" = "oiri" ] + then + new_step + if [ $STEPNO -gt $PROGRESS ] + then + backup_k8_files + update_progress + fi + fi + fi + + # Create DR Job for if requested + # + + CREATE_CRON_VAR=DR_CREATE_${PRODUCT}_JOB + CREATE_CRON_JOB=${!CREATE_CRON_VAR} + + if [ "$CREATE_CRON_JOB" = "true" ] + then + + new_step + if [ $STEPNO -gt $PROGRESS ] + then + create_dr_cronjob_files + update_progress + fi + + new_step + if [ $STEPNO -gt $PROGRESS ] + then + create_dr_cronjob + update_progress + fi + + # Stop automatic replication of PVs using the cronjob. Initialisation will occur using a one-of job + # + new_step + if [ $STEPNO -gt $PROGRESS ] + then + suspend_cronjob $DRNS ${product_type}rsyncdr + update_progress + fi + fi + if [ "$USE_MAA_SCRIPTS" = "false" ] + then + # Ensure any existing deployment on the standby system is stopped. + # + if [ "$DR_TYPE" = "STANDBY" ] + then + new_step + if [ $STEPNO -gt $PROGRESS ] + then + case "$PRODUCT" in + "OAM") stop_domain $OAMNS $OAM_DOMAIN_NAME + ;; + "OIG") stop_domain $OIGNS $OIG_DOMAIN_NAME + ;; + "OIRI") + stop_deployment $DINGNS + stop_deployment $OIRINS + ;; + "OAA") + stop_deployment $OAANS + ;; + esac + update_progress + fi + + # If you are not using the MAA scripts, you are using a mirrored install. Delete the files that this install created. + # + new_step + if [ $STEPNO -gt $PROGRESS ] + then + case "$PRODUCT" in + "OAM") delete_oam_files + ;; + "OIG") delete_oig_files + ;; + "OIRI") delete_oiri_files + ;; + "OAA") delete_oaa_files + ;; + esac + update_progress + fi + fi + fi + + # Create a job to initialise the DR PVs from the Primary. This is a one time operation. + # if run on the primary it creates a backup and ships it to the standby. + # if run on the standby it restores the backup. 
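
The initialisation described below runs as a one-off Kubernetes job whose pod name starts with the product type (oig-initial, oud-initial, and so on); enable_dr.sh polls it on the standby, but it can also be followed by hand. An illustrative check for OIG, assuming the DR namespace is drns:

```bash
kubectl get pods -n drns | grep oig-initial        # locate the one-off job pod
kubectl logs -n drns -f \
  "$(kubectl get pods -n drns -o name | grep oig-initial)"
```
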
+ # + new_step + if [ $STEPNO -gt $PROGRESS ] + then + initialise_dr + update_progress + fi + if [ "$DR_TYPE" = "STANDBY" ] + then + new_step + if [ $STEPNO -gt $PROGRESS ] + then + RUNNING_POD=$(kubectl get pods -n $DRNS | grep ${product_type}-initial) + POD_NAME=$(echo $RUNNING_POD | awk '{print $1}') + POD_STATUS=$(echo $RUNNING_POD | awk '{print $3}') + print_msg "Waiting for pod $POD_NAME to complete " + if [ "$POD_STATUS" = "Pending" ] || [ "$RUNNING_POD" = "" ] + then + echo "Failed to Start Pod - $POD_NAME" + exit 1 + elif [ "$POD_STATUS" = "Error" ]|| [ "$POD_STATUS" = "CrashLoopBackOff" ] + then + echo "$POD_NAME has exited with an error" + exit 1 + fi + + RUNNING=0 + while [ "$RUNNING" -eq 0 ] + do + printf "." + sleep 60 + RUNNING_POD=$(kubectl get pods -n $DRNS | grep $POD_NAME) + POD_NAME=$(echo $RUNNING_POD | awk '{print $1}') + POD_STATUS=$(echo $RUNNING_POD | awk '{print $3}') + if [ "$RUNNING_POD" = "" ] + then + echo "Pod is not running." + exit 1 + fi + if [ "$POD_STATUS" = "Error" ] || [ "$POD_STATUS" = "CrashLoopBackOff" ] + then + echo "Job has exited with an error" + exit 1 + elif [ "$POD_STATUS" = "Pending" ] + then + echo "Pod stuck in Pending state - check kubectl describe pod -n $DRNS $POD_NAME." + exit 1 + elif [ "$POD_STATUS" = "Completed" ] + then + RUNNING=1 + fi + + done + + printf " Success. \n" + update_progress + fi + + fi + + if [ "$DR_TYPE" = "STANDBY" ] + then + # If the backup is of an WLS domain ensure that the WebLogic Operator is running. + # + if [ "$product_type" = "oam" ] || [ "$product_type" = "oig" ] + then + new_step + if [ $STEPNO -gt $PROGRESS ] + then + check_oper_running + update_progress + fi + fi + fi + + if [ "$USE_MAA_SCRIPTS" = "true" ] && [ ! "$PRODUCT" = "OAA" ] + then + if [ "$DR_TYPE" = "STANDBY" ] + then + + # Manually create application persistent volumes on the DR site, pointing to the DR NFS server. + # + new_step + if [ $STEPNO -gt $PROGRESS ] + then + create_dr_source_pv + update_progress + fi + + + # Create Kubernetes Objects using MAA scripts. + # + new_step + if [ $STEPNO -gt $PROGRESS ] + then + print_msg "Restoring Backup of Kubernetes Objects " + BACKUP_DIR=$(ls -str $WORKDIR/MAA | tail -1 | awk '{ print $2 }') + if [ "$BACKUP_DIR" = "" ] + then + echo "No Kubernetes backups exist - Copy from primary." + exit 1 + fi + BACKUP_FILE=$(ls $WORKDIR/MAA/$BACKUP_DIR/*.gz) + if [ "$BACKUP_FILE" = "" ] + then + echo "No Kubernetes backups exist - Copy from primary." + exit 1 + else + printf " From $BACKUP_FILE -" + fi + $LOCAL_WORKDIR/maa/kubernetes-maa/maak8-push-all-artifacts.sh $BACKUP_FILE $WORKDIR/MAA/$BACKUP_DIR/restore > $LOGDIR/restore_k8.log 2>&1 + print_status $? $LOGDIR/restore_k8.log + update_progress + fi + + else + # Create a snapshot backup of the Kubernetes objects using MAA scripts. + new_step + if [ $STEPNO -gt $PROGRESS ] + then + print_msg "Creating Backup of Kubernetes Objects in Namespace(s) $PRODUCTNS" + CMD="$LOCAL_WORKDIR/maa/kubernetes-maa/maak8-get-all-artifacts.sh $WORKDIR/MAA $PRODUCTNS" + echo $CMD > $LOGDIR/backup_k8.log + eval $CMD >> $LOGDIR/backup_k8.log 2>&1 + print_status $? 
$LOGDIR/backup_k8.log + BACKUP_DIR=$(ls -str $WORKDIR/MAA | tail -1 | awk '{ print $2 }') + if [ "$COPY_FILES_TO_DR" = "true" ] + then + new_step + if [ $STEPNO -gt $PROGRESS ] + then + copy_files_to_dr $WORKDIR/MAA/$BACKUP_DIR/*.gz + update_progress + fi + else + printf "\n\t\t\tCopy $WORKDIR/MAA/$BACKUP_DIR to your standby system.\n\n" + update_progress + fi + fi + fi + fi + + if [ "$DR_TYPE" = "PRIMARY" ] + then + RUNNING_POD=$(kubectl get pods -n $DRNS | grep ${product_type}-initial ) + POD_NAME=$(echo $RUNNING_POD | awk '{print $1}') + POD_RUNSTATUS=$(echo $RUNNING_POD | awk '{print $2}') + POD_STATUS=$(echo $RUNNING_POD | awk '{print $3}') + if [ "$POD_STATUS" = "Pending" ] + then + echo "Failed to start pod - $POD_NAME." + exit 1 + fi + if [ ! "$RUNNING_POD" = "" ] && [ "$POD_RUNSTATUS" = "1/1" ] + then + printf "\n\nWait for pod $POD_NAME to complete before enabling DR on the standby site.\n" + fi + fi + + # Copy WebGate artifacts to OHS servers on the DR Site. And restart OHS. + # + if [ "$DR_TYPE" = "STANDBY" ] + then + if [ "$product_type" = "oam" ] + then + if [ "$COPY_WG_FILES" = "true" ] + then + new_step + if [ $STEPNO -gt $PROGRESS ] + then + print_msg "Copy Webate Files to OHS" + + if [ ! "$OHS_HOST1" = "" ] + then + printf "\n\t\t\tCopying Webgate file to $OHS_HOST1 - " + + $SCP -r $OAM_LOCAL_SHARE/domains/$OAM_DOMAIN_NAME/output/Webgate_IDM/wallet ${OHS_USER}@$OHS_HOST1:$OHS_DOMAIN/config/fmwconfig/components/OHS/$OHS1_NAME/webgate/config >> $LOGDIR/copy_ohs.log 2>&1 + $SCP -r $OAM_LOCAL_SHARE/domains/$OAM_DOMAIN_NAME/output/Webgate_IDM/password.xml ${OHS_USER}@$OHS_HOST1:$OHS_DOMAIN/config/fmwconfig/components/OHS/$OHS1_NAME/webgate/config >> $LOGDIR/copy_ohs.log 2>&1 + $SCP -r $OAM_LOCAL_SHARE/domains/$OAM_DOMAIN_NAME/output/Webgate_IDM/aaa* ${OHS_USER}@$OHS_HOST1:$OHS_DOMAIN/config/fmwconfig/components/OHS/$OHS1_NAME/webgate/config/simple >> $LOGDIR/copy_ohs.log 2>&1 + $SCP -r $OAM_LOCAL_SHARE/domains/$OAM_DOMAIN_NAME/output/Webgate_IDM/ObAccessClient.xml ${OHS_USER}@$OHS_HOST1:$OHS_DOMAIN/config/fmwconfig/components/OHS/$OHS1_NAME/webgate/config >> $LOGDIR/copy_ohs.log 2>&1 + print_status $? $LOGDIR/copy_ohs.log 2>&1 + fi + + if [ ! "$OHS_HOST2" = "" ] + then + printf "\t\t\tCopying Webgate file to $OHS_HOST2 - " + $SCP -r $OAM_LOCAL_SHARE/domains/$OAM_DOMAIN_NAME/output/Webgate_IDM/wallet ${OHS_USER}@$OHS_HOST2:$OHS_DOMAIN/config/fmwconfig/components/OHS/$OHS2_NAME/webgate/config >> $LOGDIR/copy_ohs.log 2>&1 + $SCP -r $OAM_LOCAL_SHARE/domains/$OAM_DOMAIN_NAME/output/Webgate_IDM/password.xml ${OHS_USER}@$OHS_HOST2:$OHS_DOMAIN/config/fmwconfig/components/OHS/$OHS2_NAME/webgate/config >> $LOGDIR/copy_ohs.log 2>&1 + $SCP -r $OAM_LOCAL_SHARE/domains/$OAM_DOMAIN_NAME/output/Webgate_IDM/aaa* ${OHS_USER}@$OHS_HOST2:$OHS_DOMAIN/config/fmwconfig/components/OHS/$OHS2_NAME/webgate/config/simple >> $LOGDIR/copy_ohs.log 2>&1 + $SCP -r $OAM_LOCAL_SHARE/domains/$OAM_DOMAIN_NAME/output/Webgate_IDM/ObAccessClient.xml ${OHS_USER}@$OHS_HOST2:$OHS_DOMAIN/config/fmwconfig/components/OHS/$OHS2_NAME/webgate/config >> $LOGDIR/copy_ohs.log 2>&1 + print_status $? $LOGDIR/copy_ohs.log 2>&1 + fi + update_progress + new_step + if [ $STEPNO -gt $PROGRESS ] + then + print_msg "Restarting OHS Servers" + + if [ ! "$OHS_HOST1" = "" ] + then + printf "\n\t\t\tRestarting $OHS_HOST1 - " + $SSH ${OHS_USER}@$OHS_HOST1 "$OHS_DOMAIN/bin/restartComponent.sh $OHS1_NAME" > $LOGDIR/restart_ohs.log 2>&1 + print_status $? $LOGDIR/restart_ohs.log + fi + + if [ ! 
"$OHS_HOST2" = "" ] + then + printf "\t\t\tRestarting $OHS_HOST2 - " + $SSH ${OHS_USER}@$OHS_HOST2 "$OHS_DOMAIN/bin/restartComponent.sh $OHS2_NAME" >> $LOGDIR/restart_ohs.log 2>&1 + print_status $? $LOGDIR/restart_ohs.log + fi + update_progress + fi + fi + else + echo "Copy the WebGate agent files to your Oracle HTTP Server and Restart." + fi + + # For OAA DR + # Create OAA Namespace and a Registry secret to obtain images. + # Create a managment container with access to the local Kubernetes cluster. + # Use OAA.sh to create Kubernetes objects on DR site. + # + elif [ "$product_type" = "oaa" ] + then + + # Create Kubernetes Namespace(s) + # + new_step + if [ $STEPNO -gt $PROGRESS ] + then + create_namespace $OAANS + update_progress + fi + + + # Create a Container Registry Secret if requested + # + new_step + if [ $STEPNO -gt $PROGRESS ] && [ "$CREATE_REGSECRET" = "true" ] + then + create_registry_secret $REGISTRY $REG_USER $REG_PWD $OAANS + update_progress + fi + + + # Create a Management Container + # + new_step + if [ $STEPNO -gt $PROGRESS ] + then + PVSERVER=$DR_STANDBY_PVSERVER + OAA_CONFIG_SHARE=$OAA_STANDBY_CONFIG_SHARE + OAA_CRED_SHARE=$OAA_STANDBY_CRED_SHARE + OAA_LOG_SHARE=$OAA_STANDBY_LOG_SHARE + OAA_VAULT_SHARE=$OAA_STANDBY_VAULT_SHARE + create_helper + update_progress + fi + + new_step + if [ $STEPNO -gt $PROGRESS ] + then + create_rbac + update_progress + fi + + new_step + if [ $STEPNO -gt $PROGRESS ] + then + deploy_oaa_dr + update_progress + fi + fi + fi +else + # For OHS create a copy of the OHS configuration on the Primary Site. + # On the DR site update the OHS routing and send to DR OHS servers, before restarting the OHS servers. + # + if [ "$DR_TYPE" = "PRIMARY" ] + then + new_step + if [ $STEPNO -gt $PROGRESS ] + then + get_ohs_config + update_progress + fi + + new_step + if [ $STEPNO -gt $PROGRESS ] + then + tar_ohs_config + update_progress + fi + if [ "$COPY_FILES_TO_DR" = "true" ] + then + new_step + if [ $STEPNO -gt $PROGRESS ] + then + copy_files_to_dr $WORKDIR/ohs_config.tar.gz + update_progress + fi + else + printf "\n\nCopy the file $WORKDIR/ohs_config.tar.gz to $WORKDIR/ohs_config.tar.gz on your DR system." + fi + else + new_step + if [ $STEPNO -gt $PROGRESS ] + then + untar_ohs_config + update_progress + fi + new_step + if [ $STEPNO -gt $PROGRESS ] + then + update_ohs_route + update_progress + fi + new_step + if [ $STEPNO -gt $PROGRESS ] + then + update_ohs_hostname + update_progress + fi + new_step + if [ $STEPNO -gt $PROGRESS ] + then + copy_ohs_dr_config + update_progress + fi + new_step + if [ $STEPNO -gt $PROGRESS ] + then + print_msg "Restarting OHS Servers" + + if [ ! "$OHS_HOST1" = "" ] + then + printf "\n\t\t\tRestarting $OHS_HOST1 - " + $SSH ${OHS_USER}@$OHS_HOST1 "$OHS_DOMAIN/bin/restartComponent.sh $OHS1_NAME" > $LOGDIR/restart_ohs.log 2>&1 + print_status $? $LOGDIR/restart_ohs.log + fi + + if [ ! "$OHS_HOST2" = "" ] + then + printf "\t\t\tRestarting $OHS_HOST2 - " + $SSH ${OHS_USER}@$OHS_HOST2 "$OHS_DOMAIN/bin/restartComponent.sh $OHS2_NAME" >> $LOGDIR/restart_ohs.log 2>&1 + print_status $? 
$LOGDIR/restart_ohs.log + fi + update_progress + fi + fi +fi + FINISH_TIME=`date +%s` + print_time TOTAL "Enable $PRODUCT Disaster Recovery - $DR_TYPE" $START_TIME $FINISH_TIME + print_time TOTAL "Enable $PRODUCT Disaster Recovery - $DR_TYPE" $START_TIME $FINISH_TIME >> $LOGDIR/timings.log + +touch $LOCAL_WORKDIR/dr_${product_type}_installed diff --git a/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/utils/idmdrctl.sh b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/utils/idmdrctl.sh new file mode 100755 index 000000000..f9dd21bae --- /dev/null +++ b/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement/utils/idmdrctl.sh @@ -0,0 +1,153 @@ +#!/bin/bash +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example of a script which controls DR actions +# +# +# Usage: idmdrctl.sh ACTION -p product_type +# Actions: suspend | resume | initial | switch +# +MYDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" + +. $MYDIR/../common/functions.sh +. $MYDIR/../responsefile/dr.rsp + +while getopts 'a:p:' OPTION +do + case "$OPTION" in + a) + ACTION=$OPTARG + ;; + p) + product_type=$OPTARG + ;; + ?) + echo "script usage: $(basename $0) ACTION [-p product] " >&2 + exit 1 + ;; + esac +done + +PRODUCT=${product_type^^} + +LOGDIR=/tmp + +if [ ! "$ACTION" = "switch" ] +then + if [ "$product_type" = "oud" ] + then + . $MYDIR/../common/oud_functions.sh + NAMESPACE=$OUDNS + elif [ "$product_type" = "oam" ] + then + . $MYDIR/../common/oam_functions.sh + NAMESPACE=$OAMNS + elif [ "$product_type" = "oig" ] + then + . $MYDIR/../common/oig_functions.sh + NAMESPACE=$OIGNS + elif [ "$product_type" = "oiri" ] + then + . $MYDIR/../common/oiri_functions.sh + NAMESPACE=$OIRINS + elif [ "$product_type" = "oaa" ] + then + . $MYDIR/../common/oaa_functions.sh + NAMESPACE=$OAANS + else + echo "Usage: idmdrctl.sh -a suspend|resume|initial|switch -p oud|oam|oig|oiri|oaa " + exit + fi +fi + +if [ "$ACTION" = "suspend" ] +then + suspend_cronjob $DRNS ${product_type}rsyncdr +elif [ "$ACTION" = "resume" ] +then + resume_cronjob $DRNS ${product_type}rsyncdr +elif [ "$ACTION" = "initial" ] +then + initialise_dr +elif [ "$ACTION" = "switch" ] +then + switch_dr_mode +elif [ "$ACTION" = "stop" ] +then + case "$PRODUCT" in + OUD) + WORKDIR=$LOCAL_WORKDIR/OUD + stop_oud + ;; + OAM) + stop_domain $OAMNS $OAM_DOMAIN_NAME + ;; + OIG) + stop_domain $OIGNS $OIG_DOMAIN_NAME + ;; + OIRI) + stop_deployment $DINGNS + stop_deployment $OIRINS + ;; + OAA) + stop_deployment $OAANS + ;; + esac +elif [ "$ACTION" = "start" ] +then + case "$PRODUCT" in + OUD) + export WORKDIR=$LOCAL_WORKDIR/OUD + start_oud + ;; + OAM) + start_domain $OAMNS $OAM_DOMAIN_NAME + ;; + OIG) + start_domain $OIGNS $OIG_DOMAIN_NAME + ;; + OIRI) + read -p "How many replicas do you wish to start ? " REPLICAS + if ! 
[[ $REPLICAS =~ ^[0-9]+$ ]] + then + echo "Error: Not a number" + exit 1 + fi + start_deployment $DINGNS $REPLICAS + start_deployment $OIRINS $REPLICAS + if [ "$DR_TYPE" = "PRIMARY" ] + then + PVSERVER=$DR_PRIMARY_PVSERVER + OIRI_SHARE=$OIRI_PRIMARY_SHARE + OIRI_DING_SHARE=$OIRI_DING_PRIMARY_SHARE + OIRI_WORK_SHARE=$OIRI_WORK_PRIMARY_SHARE + else + PVSERVER=$DR_STANDBY_PVSERVER + OIRI_SHARE=$OIRI_STANDBY_SHARE + OIRI_DING_SHARE=$OIRI_DING_STANDBY_SHARE + OIRI_WORK_SHARE=$OIRI_WORK_STANDBY_SHARE + fi + OIRI_IMAGE=$(kubectl describe deployment -n $OIRINS oiri | grep Image | cut -f2 -d: | sed 's/ //g') + OIRI_CLI_IMAGE=${OIRI_IMAGE}-cli + OIRICLI_VER=$(kubectl describe deployment -n $OIRINS oiri | grep Image | cut -f3 -d: | sed 's/ //g') + OIRI_DING_IMAGE=${OIRI_IMAGE}-ding + OIRIDING_VER=$OIRICLI_VER + TEMPLATE_DIR=$MYDIR/../templates/oiri + WORKDIR=$LOCAL_WORKDIR/OIRI + create_helper + create_ding_helper + ;; + OAA) + start_deployment $OAANS $OAA_REPLICAS + ;; + *) + echo "$PRODUCT is not supported at this time." + exit 1 + ;; + esac +else + echo "Usage: idmdrctl.sh -a suspend|resume|initial|switch|start|stop -p oud|oam|oig|oiri|oaa" + exit 1 +fi + diff --git a/OracleAccessManagement/kubernetes/charts/ingress-per-domain/values.yaml b/OracleAccessManagement/kubernetes/charts/ingress-per-domain/values.yaml index 13dbd7686..33aa752b6 100755 --- a/OracleAccessManagement/kubernetes/charts/ingress-per-domain/values.yaml +++ b/OracleAccessManagement/kubernetes/charts/ingress-per-domain/values.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2020, 2023, Oracle and/or its affiliates. +# Copyright (c) 2020, 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # # diff --git a/OracleAccessManagement/kubernetes/charts/traefik/values.yaml b/OracleAccessManagement/kubernetes/charts/traefik/values.yaml index f680d34e3..95a733fe6 100755 --- a/OracleAccessManagement/kubernetes/charts/traefik/values.yaml +++ b/OracleAccessManagement/kubernetes/charts/traefik/values.yaml @@ -2,8 +2,7 @@ # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # image: - name: traefik - tag: 2.6.0 + name: traefik pullPolicy: IfNotPresent ingressRoute: dashboard: diff --git a/OracleAccessManagement/kubernetes/charts/weblogic-operator/Chart.yaml b/OracleAccessManagement/kubernetes/charts/weblogic-operator/Chart.yaml index 6d2acee4e..eb5eb1201 100755 --- a/OracleAccessManagement/kubernetes/charts/weblogic-operator/Chart.yaml +++ b/OracleAccessManagement/kubernetes/charts/weblogic-operator/Chart.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. apiVersion: v1 @@ -6,5 +6,5 @@ name: weblogic-operator description: Helm chart for configuring the WebLogic operator. 
type: application -version: 4.0.4 -appVersion: 4.0.4 +version: 4.1.0-RELEASE-MARKER +appVersion: 4.1.0-RELEASE-MARKER diff --git a/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-clusterrole-general.tpl b/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-clusterrole-general.tpl index a14fa9734..239a2ad8d 100755 --- a/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-clusterrole-general.tpl +++ b/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-clusterrole-general.tpl @@ -1,4 +1,4 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. {{- define "operator.operatorClusterRoleGeneral" }} @@ -27,6 +27,9 @@ rules: resources: ["customresourcedefinitions"] verbs: ["get", "list", "watch", "create", "update", "patch"] {{- end }} +- apiGroups: [""] + resources: ["persistentvolumes"] + verbs: ["get", "list", "create"] - apiGroups: ["weblogic.oracle"] resources: ["domains", "clusters", "domains/status", "clusters/status"] verbs: ["get", "create", "list", "watch", "update", "patch"] diff --git a/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-clusterrole-namespace.tpl b/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-clusterrole-namespace.tpl index b6a554280..b91e082a1 100755 --- a/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-clusterrole-namespace.tpl +++ b/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-clusterrole-namespace.tpl @@ -1,4 +1,4 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. {{- define "operator.operatorClusterRoleNamespace" }} @@ -25,6 +25,9 @@ rules: - apiGroups: [""] resources: ["secrets"] verbs: ["get", "list", "watch"] +- apiGroups: [""] + resources: ["persistentvolumeclaims"] + verbs: ["get", "list", "create"] - apiGroups: [""] resources: ["pods/log"] verbs: ["get", "list"] diff --git a/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-cm.tpl b/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-cm.tpl index d1a06a437..640a5ee03 100755 --- a/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-cm.tpl +++ b/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-cm.tpl @@ -1,10 +1,11 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. {{- define "operator.operatorConfigMap" }} --- apiVersion: "v1" data: + helmChartVersion: {{ .Chart.Version }} {{- if .externalRestEnabled }} {{- if (hasKey . 
"externalRestIdentitySecret") }} externalRestIdentitySecret: {{ .externalRestIdentitySecret | quote }} diff --git a/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-dep.tpl b/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-dep.tpl index 6c97561d5..b56f661e7 100755 --- a/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-dep.tpl +++ b/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-dep.tpl @@ -1,4 +1,4 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. {{- define "operator.operatorDeployment" }} @@ -34,9 +34,10 @@ spec: {{- end }} spec: serviceAccountName: {{ .serviceAccount | quote }} - {{- if .runAsUser }} + {{- if (ne ( .kubernetesPlatform | default "Generic" ) "OpenShift") }} securityContext: - runAsUser: {{ .runAsUser }} + seccompProfile: + type: RuntimeDefault {{- end }} {{- with .nodeSelector }} nodeSelector: @@ -74,16 +75,22 @@ spec: fieldPath: "metadata.uid" - name: "OPERATOR_VERBOSE" value: "false" - - name: "JAVA_LOGGING_LEVEL" - value: {{ .javaLoggingLevel | quote }} {{- if .kubernetesPlatform }} - name: "KUBERNETES_PLATFORM" value: {{ .kubernetesPlatform | quote }} {{- end }} + {{- if and (hasKey . "enableRest") .enableRest }} + - name: "ENABLE_REST_ENDPOINT" + value: "true" + {{- end }} + - name: "JAVA_LOGGING_LEVEL" + value: {{ .javaLoggingLevel | quote }} - name: "JAVA_LOGGING_MAXSIZE" - value: {{ .javaLoggingFileSizeLimit | default 20000000 | quote }} + value: {{ int64 .javaLoggingFileSizeLimit | default 20000000 | quote }} - name: "JAVA_LOGGING_COUNT" value: {{ .javaLoggingFileCount | default 10 | quote }} + - name: "JVM_OPTIONS" + value: {{ .jvmOptions | default "-XshowSettings:vm -XX:MaxRAMPercentage=70" | quote }} {{- if .remoteDebugNodePortEnabled }} - name: "REMOTE_DEBUG_PORT" value: {{ .internalDebugHttpPort | quote }} @@ -109,15 +116,15 @@ spec: {{- if .memoryLimits}} memory: {{ .memoryLimits }} {{- end }} - {{- if (eq ( .kubernetesPlatform | default "Generic" ) "OpenShift") }} securityContext: + {{- if (ne ( .kubernetesPlatform | default "Generic" ) "OpenShift") }} + runAsUser: {{ .runAsUser | default 1000 }} + {{- end }} + runAsNonRoot: true + privileged: false allowPrivilegeEscalation: false capabilities: drop: ["ALL"] - runAsNonRoot: true - seccompProfile: - type: RuntimeDefault - {{- end }} volumeMounts: - name: "weblogic-operator-cm-volume" mountPath: "/deployment/config" @@ -217,6 +224,12 @@ spec: namespace: {{ .Release.Namespace | quote }} data: serviceaccount: {{ .serviceAccount | quote }} + {{- if .featureGates }} + featureGates: {{ .featureGates | quote }} + {{- end }} + {{- if .domainNamespaceSelectionStrategy }} + domainNamespaceSelectionStrategy: {{ .domainNamespaceSelectionStrategy | quote }} + {{- end }} --- # webhook does not exist or chart version is newer, create a new webhook apiVersion: "apps/v1" @@ -259,17 +272,18 @@ spec: {{- end }} spec: serviceAccountName: {{ .serviceAccount | quote }} - {{- if .runAsUser }} + {{- if (ne ( .kubernetesPlatform | default "Generic" ) "OpenShift") }} securityContext: - runAsUser: {{ .runAsUser }} + seccompProfile: + type: RuntimeDefault {{- end }} {{- with .nodeSelector }} nodeSelector: - {{- toYaml . | nindent 8 }} + {{- toYaml . | nindent 12 }} {{- end }} {{- with .affinity }} affinity: - {{- toYaml . 
| nindent 8 }} + {{- toYaml . | nindent 12 }} {{- end }} containers: - name: "weblogic-operator-webhook" @@ -296,7 +310,7 @@ spec: - name: "JAVA_LOGGING_LEVEL" value: {{ .javaLoggingLevel | quote }} - name: "JAVA_LOGGING_MAXSIZE" - value: {{ .javaLoggingFileSizeLimit | default 20000000 | quote }} + value: {{ int64 .javaLoggingFileSizeLimit | default 20000000 | quote }} - name: "JAVA_LOGGING_COUNT" value: {{ .javaLoggingFileCount | default 10 | quote }} {{- if .remoteDebugNodePortEnabled }} @@ -320,15 +334,15 @@ spec: {{- if .memoryLimits}} memory: {{ .memoryLimits }} {{- end }} - {{- if (eq ( .kubernetesPlatform | default "Generic") "OpenShift") }} securityContext: + {{- if (ne ( .kubernetesPlatform | default "Generic" ) "OpenShift") }} + runAsUser: {{ .runAsUser | default 1000 }} + {{- end }} + runAsNonRoot: true + privileged: false allowPrivilegeEscalation: false capabilities: - drop: ["ALL"] - runAsNonRoot: true - seccompProfile: - type: RuntimeDefault - {{- end }} + drop: ["ALL"] volumeMounts: - name: "weblogic-webhook-cm-volume" mountPath: "/deployment/config" diff --git a/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-external-svc.tpl b/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-external-svc.tpl index 0fd2ee202..f7936f537 100755 --- a/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-external-svc.tpl +++ b/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-external-svc.tpl @@ -1,8 +1,8 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. {{- define "operator.operatorExternalService" }} -{{- if or .externalRestEnabled .remoteDebugNodePortEnabled }} +{{- if or (and (hasKey . "enableRest") .enableRest .externalRestEnabled) .remoteDebugNodePortEnabled }} --- apiVersion: "v1" kind: "Service" diff --git a/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-internal-svc.tpl b/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-internal-svc.tpl index 5e7725825..c8c91bc1e 100755 --- a/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-internal-svc.tpl +++ b/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator-internal-svc.tpl @@ -1,7 +1,8 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. {{- define "operator.operatorInternalService" }} +{{- if and (hasKey . "enableRest") .enableRest }} --- apiVersion: "v1" kind: "Service" @@ -21,6 +22,7 @@ spec: - port: 8083 name: "metrics" appProtocol: http +{{- end }} --- {{- if not .operatorOnly }} apiVersion: "v1" diff --git a/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator.tpl b/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator.tpl index b2bb5d8a3..ed98f7eb8 100755 --- a/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator.tpl +++ b/OracleAccessManagement/kubernetes/charts/weblogic-operator/templates/_operator.tpl @@ -1,4 +1,4 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. 
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. {{- if and (not (empty .Capabilities.APIVersions)) (not (.Capabilities.APIVersions.Has "policy/v1")) }} @@ -14,8 +14,6 @@ {{- include "operator.operatorClusterRoleOperatorAdmin" . }} {{- include "operator.operatorClusterRoleDomainAdmin" . }} {{- include "operator.clusterRoleBindingGeneral" . }} -{{- include "operator.clusterRoleBindingAuthDelegator" . }} -{{- include "operator.clusterRoleBindingDiscovery" . }} {{- if not (eq .domainNamespaceSelectionStrategy "Dedicated") }} {{- include "operator.clusterRoleBindingNonResource" . }} {{- end }} diff --git a/OracleAccessManagement/kubernetes/charts/weblogic-operator/values.yaml b/OracleAccessManagement/kubernetes/charts/weblogic-operator/values.yaml index f2bfed813..b62e1691d 100755 --- a/OracleAccessManagement/kubernetes/charts/weblogic-operator/values.yaml +++ b/OracleAccessManagement/kubernetes/charts/weblogic-operator/values.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # serviceAccount specifies the name of the ServiceAccount in the operator's namespace that the @@ -54,7 +54,7 @@ domainNamespaceSelectionStrategy: LabelSelector enableClusterRoleBinding: true # image specifies the container image containing the operator. -image: "ghcr.io/oracle/weblogic-kubernetes-operator:4.0.4" +image: "4.1.0-RELEASE-MARKER" # imagePullPolicy specifies the image pull policy for the operator's container image. imagePullPolicy: IfNotPresent @@ -69,9 +69,13 @@ imagePullPolicy: IfNotPresent # imagePullSecrets: # - name: "my-operator-secret" +# enableRest specifies whether the operator's REST interface is enabled. Beginning with version 4.0.5, +# the REST interface will be disabled by default. +# enableRest: true + # externalRestEnabled specifies whether the operator's REST interface is exposed # outside the Kubernetes cluster on the port specified by the 'externalRestHttpsPort' -# property. +# property. Ignored if 'enableRest' is not true. # # If set to true, then the customer must provide the SSL certificate and private key for # the operator's external REST interface by specifying the 'externalOperatorCert' and @@ -265,3 +269,8 @@ clusterSizePaddingValidationEnabled: true # runAsuser specifies the UID to run the operator and conversion webhook container processes. # If not specified, it defaults to the user specified in the operator's container image. #runAsUser: 1000 + +# jvmOptions specifies a value used to control the Java process that runs the operator, such as the maximum heap size +# that will be allocated. +#jvmOptions: -XshowSettings:vm -XX:MaxRAMPercentage=70 + diff --git a/OracleAccessManagement/kubernetes/common/utility.sh b/OracleAccessManagement/kubernetes/common/utility.sh index 6c03e0f42..b008e1f76 100755 --- a/OracleAccessManagement/kubernetes/common/utility.sh +++ b/OracleAccessManagement/kubernetes/common/utility.sh @@ -1,5 +1,5 @@ #!/usr/bin/env bash -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. 
# @@ -855,7 +855,7 @@ checkPodDelete() { checkPodState() { status="NotReady" - max=60 + max=120 count=1 pod=$1 @@ -880,7 +880,7 @@ checkPodState() { count=`expr $count + 1` done if [ $count -gt $max ] ; then - echo "[ERROR] Unable to start the Pod [$pod] after 300s "; + echo "[ERROR] Unable to start the Pod [$pod] after 600s "; exit 1 fi @@ -969,11 +969,11 @@ getPodName() { detectPod() { ns=$1 startSecs=$SECONDS - maxWaitSecs=10 + maxWaitSecs=120 while [ -z "`${KUBERNETES_CLI:-kubectl} get pod -n ${ns} -o jsonpath={.items[0].metadata.name}`" ]; do if [ $((SECONDS - startSecs)) -lt $maxWaitSecs ]; then echo "Pod not found after $((SECONDS - startSecs)) seconds, retrying ..." - sleep 2 + sleep 5 else echo "[Error] Could not find Pod after $((SECONDS - startSecs)) seconds" exit 1 diff --git a/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/common/createOAMDomain.py b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/common/createOAMDomain.py index 694ad08b0..5b1871e2d 100755 --- a/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/common/createOAMDomain.py +++ b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/common/createOAMDomain.py @@ -266,6 +266,13 @@ def extendOamDomain(self, domainHome, db, dbPrefix, dbPassword, managedNameBase, self.targetCluster(self.ADDL_CLUSTER); cd('/') + # Assign servers to clusters + for managedName in self.MANAGED_SERVERS: + assign('Server', managedName, 'Cluster', clusterName) + + for managedName in self.ADDL_MANAGED_SERVERS: + assign('Server', managedName, 'Cluster', self.ADDL_CLUSTER) + #configure Active Gridlink datasource based on inputs print('Using datasource type: ' + dstype) diff --git a/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/common/oamconfig_modify.sh b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/common/oamconfig_modify.sh index c3b61ae2c..09cc374f4 100755 --- a/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/common/oamconfig_modify.sh +++ b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/common/oamconfig_modify.sh @@ -1,6 +1,6 @@ #!/bin/bash -# Copyright (c) 2020, 2022, Oracle and/or its affiliates. +# Copyright (c) 2020, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. cur_dir=`dirname $(readlink -f "$0")` @@ -134,14 +134,11 @@ do fi ret1=`${KUBERNETES_CLI:-kubectl} get po -n $OAM_NAMESPACE | grep ${OAM_SERVER}1` - ret2=`${KUBERNETES_CLI:-kubectl} get po -n $OAM_NAMESPACE | grep ${OAM_SERVER}2` echo $ret1 | grep '1/1' rc1=$? - echo $ret2 | grep '1/1' - rc2=$? 
- if [[ ($rc1 -eq 0) && ($rc2 -eq 0) ]]; then + if [ $rc1 -eq 0 ]; then echo "OAM servers started successfully" break else diff --git a/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/create-domain-inputs.yaml b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/create-domain-inputs.yaml index d9435dc0a..ec5d55813 100755 --- a/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/create-domain-inputs.yaml +++ b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/create-domain-inputs.yaml @@ -33,7 +33,7 @@ clusterName: oam_cluster configuredManagedServerCount: 5 # Number of managed servers to initially start for the domain -initialManagedServerReplicas: 2 +initialManagedServerReplicas: 1 # Base string used to generate managed server names managedServerNameBase: oam_server diff --git a/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/domain-resources/domain.yaml b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/domain-resources/domain.yaml new file mode 100644 index 000000000..0056bb45e --- /dev/null +++ b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/domain-resources/domain.yaml @@ -0,0 +1,183 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# This is an example of how to define a Domain resource. +# + +apiVersion: "weblogic.oracle/v9" +kind: Domain +metadata: + name: accessdomain + namespace: oamns + labels: + weblogic.domainUID: accessdomain +spec: + # The WebLogic Domain Home + domainHome: /u01/oracle/user_projects/domains/accessdomain + + # The domain home source type + # Set to PersistentVolume for domain-in-pv, Image for domain-in-image, or FromModel for model-in-image + domainHomeSourceType: PersistentVolume + + # The WebLogic Server image that the Operator uses to start the domain + image: "oracle/oam:12.2.1.4.0" + + # imagePullPolicy defaults to "Always" if image version is :latest + imagePullPolicy: IfNotPresent + + # Identify which Secret contains the credentials for pulling an image + imagePullSecrets: + - name: orclcred + + # Identify which Secret contains the WebLogic Admin credentials + webLogicCredentialsSecret: + name: accessdomain-weblogic-credentials + + # Whether to include the server out file into the pod's stdout, default is true + includeServerOutInPodLog: true + + # Whether to enable log home + logHomeEnabled: true + + # Whether to write HTTP access log file to log home + httpAccessLogInLogHome: true + + # The in-pod location for domain log, server logs, server out, introspector out, and Node Manager log files + logHome: /u01/oracle/user_projects/domains/logs/accessdomain + # An (optional) in-pod location for data storage of default and custom file stores. + # If not specified or the value is either not set or empty (e.g. dataHome: "") then the + # data storage directories are determined from the WebLogic domain home configuration. 
+ dataHome: "" + + # serverStartPolicy legal values are "Never", "IfNeeded", or "AdminOnly" + # This determines which WebLogic Servers the Operator will start up when it discovers this Domain + # - "Never" will not start any server in the domain + # - "AdminOnly" will start up only the administration server (no managed servers will be started) + # - "IfNeeded" will start all non-clustered servers, including the administration server and clustered servers up to the replica count + serverStartPolicy: IfNeeded + + serverPod: + initContainers: + #DO NOT CHANGE THE NAME OF THIS INIT CONTAINER + - name: compat-connector-init + image: "oracle/oam:12.2.1.4.0" + #OAM Product image, same as spec.image mentioned above + imagePullPolicy: IfNotPresent + command: [ "/bin/bash", "-c", "mkdir -p /u01/oracle/user_projects/domains/wdt-logs"] + volumeMounts: + - mountPath: /u01/oracle/user_projects/ + name: weblogic-domain-storage-volume + + # a list of environment variables to be set on the servers + env: + - name: JAVA_OPTIONS + value: "-Dweblogic.StdoutDebugEnabled=false" + - name: WLSDEPLOY_LOG_DIRECTORY + value: "/u01/oracle/user_projects/domains/wdt-logs" + - name: USER_MEM_ARGS + value: "-Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx1024m " + volumes: + - name: weblogic-domain-storage-volume + persistentVolumeClaim: + claimName: accessdomain-domain-pvc + volumeMounts: + - mountPath: /u01/oracle/user_projects + name: weblogic-domain-storage-volume + + # adminServer is used to configure the desired behavior for starting the administration server. + adminServer: + # adminService: + # channels: + # The Admin Server's NodePort + # - channelName: default + # nodePort: 30701 + # Uncomment to export the T3Channel as a service + # - channelName: T3Channel + serverPod: + # an (optional) list of environment variables to be set on the admin servers + env: + - name: USER_MEM_ARGS + value: "-Djava.security.egd=file:/dev/./urandom -Xms512m -Xmx1024m " + - name: CLASSPATH + value: "/u01/oracle/wlserver/server/lib/weblogic.jar" + + configuration: + secrets: [ accessdomain-rcu-credentials ] + initializeDomainOnPV: + persistentVolume: + metadata: + name: accessdomain-domain-pv + spec: + storageClassName: accessdomain-domain-storage-class + capacity: + # Total storage allocated to the persistent storage. + storage: 10Gi + # Reclaim policy of the persistent storage + # The valid values are: 'Retain', 'Delete', and 'Recycle' + persistentVolumeReclaimPolicy: Retain + # Persistent volume type for the persistent storage. + # The value must be 'hostPath' or 'nfs'. + # If using 'nfs', server must be specified. + nfs: + server: nfsServer + path: "/scratch/k8s_dir" + #hostPath: + #path: "/scratch/k8s_dir" + persistentVolumeClaim: + metadata: + name: accessdomain-domain-pvc + namespace: oamns + spec: + storageClassName: accessdomain-domain-storage-class + resources: + requests: + storage: 10Gi + volumeName: accessdomain-domain-pv + domain: + # Domain | DomainAndRCU + createIfNotExists: Domain + domainCreationImages: + - image: 'oracle/oam:oct23-aux-12.2.1.4.0' + domainType: OAM + # References to Cluster resources that describe the lifecycle options for all + # the Managed Server members of a WebLogic cluster, including Java + # options, environment variables, additional Pod content, and the ability to + # explicitly start, stop, or restart cluster members. The Cluster resource + # must describe a cluster that already exists in the WebLogic domain + # configuration. 
+ clusters: + - name: accessdomain-oam-cluster + - name: accessdomain-policy-cluster + + # The number of managed servers to start for unlisted clusters + # replicas: 1 + +--- +# This is an example of how to define a Cluster resource. +apiVersion: weblogic.oracle/v1 +kind: Cluster +metadata: + name: accessdomain-oam-cluster + namespace: oamns +spec: + clusterName: oam_cluster + serverService: + precreateService: true + serverPod: + env: + - name: USER_MEM_ARGS + value: "-XX:+UseContainerSupport -Djava.security.egd=file:/dev/./urandom -Xms8192m -Xmx8192m" + replicas: 1 + +--- +# This is an example of how to define a Cluster resource. +apiVersion: weblogic.oracle/v1 +kind: Cluster +metadata: + name: accessdomain-policy-cluster + namespace: oamns +spec: + clusterName: policy_cluster + serverService: + precreateService: true + replicas: 1 + diff --git a/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/OAM.json b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/OAM.json new file mode 100644 index 000000000..10f9499d3 --- /dev/null +++ b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/OAM.json @@ -0,0 +1,31 @@ +{ + "copyright": "Copyright (c) 2023, Oracle and/or its affiliates.", + "license": "Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl", + "name": "OAM", + "description": "Oracle Access Manager domain definition", + "versions": { + "12.2.1.4": "OAM_12CR2" + }, + "definitions": { + "OAM_12CR2": { + "baseTemplate": "Basic WebLogic Server Domain", + "extensionTemplates": [ + "Oracle Access Management Suite" + ], + "serverGroupsToTarget": [ + "OAM-MGD-SVRS", + "OAM-POLICY-MANAGED-SERVER" + ], + "rcuSchemas": [ + "STB", + "WLS", + "MDS", + "IAU", + "IAU_VIEWER", + "IAU_APPEND", + "OPSS", + "OAM" + ] + } + } +} diff --git a/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/agl_jdbc.yaml b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/agl_jdbc.yaml new file mode 100644 index 000000000..71fa4d9f5 --- /dev/null +++ b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/agl_jdbc.yaml @@ -0,0 +1,94 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. 
+# This is an example of how to define Active GridLink type datasources for OAM domain +# For details regarding how to work with WDT model files and WDT model attributes, please refer below links +# https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/domain-on-pv/model-files/ +# https://oracle.github.io/weblogic-deploy-tooling/concepts/model/ +# + +resources: + JDBCSystemResource: + LocalSvcTblDataSource: + JdbcResource: + DatasourceType: AGL + JDBCConnectionPoolParams: + ConnectionReserveTimeoutSeconds: 10 + InitialCapacity: 0 + MaxCapacity: 400 + TestConnectionsOnReserve: true + CapacityIncrement: 1 + TestFrequencySeconds: 0 + SecondsToTrustAnIdlePoolConnection: 0 + TestTableName: SQL ISVALID + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + JDBCDriverParams: + URL: 'jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@ )(PORT= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@)))' + opss-audit-DBDS: + JdbcResource: + DatasourceType: AGL + JDBCConnectionPoolParams: + TestFrequencySeconds: 0 + TestConnectionsOnReserve: true + TestTableName: SQL ISVALID + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + JDBCDriverParams: + URL: 'jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@ )(PORT= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@)))' + opss-audit-viewDS: + JdbcResource: + DatasourceType: AGL + JDBCConnectionPoolParams: + TestFrequencySeconds: 0 + TestConnectionsOnReserve: true + TestTableName: SQL ISVALID + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + JDBCDriverParams: + URL: 'jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@ )(PORT= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@)))' + opss-data-source: + JdbcResource: + DatasourceType: AGL + JDBCConnectionPoolParams: + TestFrequencySeconds: 0 + TestConnectionsOnReserve: true + TestTableName: SQL ISVALID + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + JDBCDriverParams: + URL: 'jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@ )(PORT= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@)))' + WLSSchemaDataSource: + JdbcResource: + DatasourceType: AGL + JDBCConnectionPoolParams: + MaxCapacity: 150 + TestConnectionsOnReserve: true + TestFrequencySeconds: 0 + TestTableName: SQL ISVALID + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + JDBCDriverParams: + URL: 'jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@ )(PORT= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@)))' + oamDS: + JdbcResource: + DatasourceType: AGL + JDBCConnectionPoolParams: + MaxCapacity: 200 + TestConnectionsOnReserve: true + TestTableName: SQL ISVALID + CapacityIncrement: 1 + ConnectionCreationRetryFrequencySeconds: 10 + InitialCapacity: 20 + 
SecondsToTrustAnIdlePoolConnection: 0 + TestFrequencySeconds: 0 + InactiveConnectionTimeoutSeconds: 300 + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + JDBCDriverParams: + URL: 'jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@ )(PORT= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME= @@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@)))' diff --git a/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/domainInfo.yaml b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/domainInfo.yaml new file mode 100644 index 000000000..916f41d8f --- /dev/null +++ b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/domainInfo.yaml @@ -0,0 +1,20 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# This is an example of how to define the domainInfo section of WDT Model for OAM domain. +# For details regarding how to work with WDT model files and WDT model attributes, please refer below links +# https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/domain-on-pv/model-files/ +# https://oracle.github.io/weblogic-deploy-tooling/concepts/model/ +# + +domainInfo: + AdminUserName: '@@SECRET:__weblogic-credentials__:username@@' + AdminPassword: '@@SECRET:__weblogic-credentials__:password@@' + ServerGroupTargetingLimits: + 'OAM-MGD-SVRS': oam_cluster + 'OAM-POLICY-MANAGED-SERVER': policy_cluster + RCUDbInfo: + rcu_prefix: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:rcu_prefix@@' + rcu_schema_password: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:rcu_schema_password@@' + rcu_db_conn_string: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@:@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@/@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@' + rcu_db_user: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:dba_user@@' + rcu_admin_password: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:dba_password@@' diff --git a/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/oam.properties b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/oam.properties new file mode 100644 index 000000000..c6c354686 --- /dev/null +++ b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/oam.properties @@ -0,0 +1,14 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# This is an example of how to define the properties section of WDT Model for OAM domain. 
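+# For example, each key below supplies the value for the matching '@@PROP:<key>@@' token in the +# WDT model files: the topology.yaml entry ListenPort: '@@PROP:Server.oam_server.ListenPort@@' +# picks up the value 14100 defined here.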
+# For details regarding how to work with WDT model files and WDT model attributes, please refer below links +# https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/domain-on-pv/model-files/ +# https://oracle.github.io/weblogic-deploy-tooling/concepts/model/ +# + +Server.AdminServer.T3Channel.ListenPort=30012 +Server.AdminServer.ListenPort=7001 +Server.oam_policy_mgr.ListenPort=15100 +Server.oam_server.ListenPort=14100 +Server.oam_policy_mgr.ListenAddress=oam-policy-mgr +Server.oam_server.ListenAddress=oam-server diff --git a/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/resource.yaml b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/resource.yaml new file mode 100644 index 000000000..789e77ccc --- /dev/null +++ b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/resource.yaml @@ -0,0 +1,11 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# This is an example of how to define the resource section in WDT Model for an OAM Domain +# For details regarding how to work with WDT model files and WDT model attributes, please refer below links +# https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/domain-on-pv/model-files/ +# https://oracle.github.io/weblogic-deploy-tooling/concepts/model/ +# + +resources: + WebAppContainer: + WeblogicPluginEnabled: true diff --git a/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/topology.yaml b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/topology.yaml new file mode 100644 index 000000000..8e2a5a1af --- /dev/null +++ b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-artifacts/topology.yaml @@ -0,0 +1,90 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. 
+# This is an example of how to define the topology section in WDT model for an OAM domain +# For details regarding how to work with WDT model files and WDT model attributes, please refer below links +# https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/domain-on-pv/model-files/ +# https://oracle.github.io/weblogic-deploy-tooling/concepts/model/ +# + +topology: + Name: '@@ENV:DOMAIN_UID@@' + ProductionModeEnabled: true + Cluster: + oam_cluster: + CoherenceClusterSystemResource: defaultCoherenceCluster + policy_cluster: + CoherenceClusterSystemResource: defaultCoherenceCluster + Server: + AdminServer: + ListenPort: '@@PROP:Server.AdminServer.ListenPort@@' + oam_policy_mgr1: + ListenPort: '@@PROP:Server.oam_policy_mgr.ListenPort@@' + Cluster: policy_cluster + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.oam_policy_mgr.ListenAddress@@1' + CoherenceClusterSystemResource: defaultCoherenceCluster + NumOfRetriesBeforeMsiMode: 0 + RetryIntervalBeforeMsiMode: 1 + oam_policy_mgr2: + ListenPort: '@@PROP:Server.oam_policy_mgr.ListenPort@@' + Cluster: policy_cluster + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.oam_policy_mgr.ListenAddress@@2' + CoherenceClusterSystemResource: defaultCoherenceCluster + NumOfRetriesBeforeMsiMode: 0 + RetryIntervalBeforeMsiMode: 1 + oam_policy_mgr3: + ListenPort: '@@PROP:Server.oam_policy_mgr.ListenPort@@' + Cluster: policy_cluster + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.oam_policy_mgr.ListenAddress@@3' + CoherenceClusterSystemResource: defaultCoherenceCluster + NumOfRetriesBeforeMsiMode: 0 + RetryIntervalBeforeMsiMode: 1 + oam_policy_mgr4: + ListenPort: '@@PROP:Server.oam_policy_mgr.ListenPort@@' + Cluster: policy_cluster + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.oam_policy_mgr.ListenAddress@@4' + CoherenceClusterSystemResource: defaultCoherenceCluster + NumOfRetriesBeforeMsiMode: 0 + RetryIntervalBeforeMsiMode: 1 + oam_policy_mgr5: + ListenPort: '@@PROP:Server.oam_policy_mgr.ListenPort@@' + Cluster: policy_cluster + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.oam_policy_mgr.ListenAddress@@5' + CoherenceClusterSystemResource: defaultCoherenceCluster + NumOfRetriesBeforeMsiMode: 0 + RetryIntervalBeforeMsiMode: 1 + + oam_server1: + ListenPort: '@@PROP:Server.oam_server.ListenPort@@' + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.oam_server.ListenAddress@@1' + Cluster: oam_cluster + CoherenceClusterSystemResource: defaultCoherenceCluster + NumOfRetriesBeforeMsiMode: 0 + RetryIntervalBeforeMsiMode: 1 + oam_server2: + ListenPort: '@@PROP:Server.oam_server.ListenPort@@' + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.oam_server.ListenAddress@@2' + Cluster: oam_cluster + CoherenceClusterSystemResource: defaultCoherenceCluster + NumOfRetriesBeforeMsiMode: 0 + RetryIntervalBeforeMsiMode: 1 + oam_server3: + ListenPort: '@@PROP:Server.oam_server.ListenPort@@' + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.oam_server.ListenAddress@@3' + Cluster: oam_cluster + CoherenceClusterSystemResource: defaultCoherenceCluster + NumOfRetriesBeforeMsiMode: 0 + RetryIntervalBeforeMsiMode: 1 + oam_server4: + ListenPort: '@@PROP:Server.oam_server.ListenPort@@' + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.oam_server.ListenAddress@@4' + Cluster: oam_cluster + CoherenceClusterSystemResource: defaultCoherenceCluster + NumOfRetriesBeforeMsiMode: 0 + RetryIntervalBeforeMsiMode: 1 + oam_server5: + ListenPort: '@@PROP:Server.oam_server.ListenPort@@' + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.oam_server.ListenAddress@@5' + 
Cluster: oam_cluster + CoherenceClusterSystemResource: defaultCoherenceCluster + NumOfRetriesBeforeMsiMode: 0 + RetryIntervalBeforeMsiMode: 1 diff --git a/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-utils/create-configmap.sh b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-utils/create-configmap.sh new file mode 100755 index 000000000..fce36a014 --- /dev/null +++ b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-utils/create-configmap.sh @@ -0,0 +1,120 @@ +#!/bin/sh +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. + +usage() { + + cat << EOF + + This is a helper script for creating and labeling a Kubernetes configmap. + The configmap is labeled with the specified domain-uid. + + Usage: + + $(basename $0) -c configmapname \\ + [-n mynamespace] \\ + [-d mydomainuid] \\ + [-f filename_or_dir] [-f filename_or_dir] ... + + -d : Defaults to 'sample-domain1'. + + -n : Defaults to 'sample-domain1-ns' otherwise. + + -c : Name of configmap. Required. + + -f : File or directory location. Can be specified + more than once. Key will be the file-name(s), + value will be file contents. Required. + + -dry ${KUBERNETES_CLI:-kubectl} : Show the ${KUBERNETES_CLI:-kubectl} commands (prefixed with 'dryrun:') + but do not perform them. + + -dry yaml : Show the yaml (prefixed with 'dryrun:') + but do not execute it. + +EOF +} + +set -e +set -o pipefail + +DOMAIN_UID="sample-domain1" +DOMAIN_NAMESPACE="sample-domain1-ns" +CONFIGMAP_NAME="" +FILENAMES="" +DRY_RUN="" + +while [ ! "$1" = "" ]; do + if [ ! "$1" = "-?" ] && [ "$2" = "" ]; then + echo "Syntax Error. Pass '-?' for help." + exit 1 + fi + case "$1" in + -c) CONFIGMAP_NAME="${2}" ;; + -n) DOMAIN_NAMESPACE="${2}" ;; + -d) DOMAIN_UID="${2}" ;; + -f) FILENAMES="${FILENAMES}--from-file=${2} " ;; + -dry) DRY_RUN="${2}" + case "$DRY_RUN" in + ${KUBERNETES_CLI:-kubectl}|yaml) ;; + *) echo "Error: Syntax Error. Pass '-?' for usage." + exit 1 + ;; + esac + ;; + -?) usage ; exit 1 ;; + *) echo "Syntax Error. Pass '-?' for help." ; exit 1 ;; + esac + shift + shift +done + +if [ -z "$CONFIGMAP_NAME" ]; then + echo "Error: Missing '-c' argument. Pass '-?' for help." + exit 1 +fi + +if [ -z "$FILENAMES" ]; then + echo "Error: Missing '-f' argument. Pass '-?' for help." 
+ exit 1 +fi + +set -eu + +if [ "$DRY_RUN" = "${KUBERNETES_CLI:-kubectl}" ]; then + +cat << EOF +dryrun:${KUBERNETES_CLI:-kubectl} -n $DOMAIN_NAMESPACE delete configmap $CONFIGMAP_NAME --ignore-not-found +dryrun:${KUBERNETES_CLI:-kubectl} -n $DOMAIN_NAMESPACE create configmap $CONFIGMAP_NAME $FILENAMES +dryrun:${KUBERNETES_CLI:-kubectl} -n $DOMAIN_NAMESPACE label configmap $CONFIGMAP_NAME weblogic.domainUID=$DOMAIN_UID +EOF + +elif [ "$DRY_RUN" = "yaml" ]; then + + echo "dryrun:---" + echo "dryrun:" + + # don't change indent of the sed append commands - the spaces are significant + # (we use an ancient form of sed append to stay compatible with old bash on mac) + ${KUBERNETES_CLI:-kubectl} -n $DOMAIN_NAMESPACE \ + create configmap $CONFIGMAP_NAME $FILENAMES \ + --dry-run=client -o yaml \ + \ + | sed -e '/ name:/a\ + labels:' \ + | sed -e '/labels:/a\ + weblogic.domainUID:' \ + | sed "s/domainUID:/domainUID: $DOMAIN_UID/" \ + | grep -v creationTimestamp \ + | sed "s/^/dryrun:/" + +else + + set -x + + ${KUBERNETES_CLI:-kubectl} -n $DOMAIN_NAMESPACE delete configmap $CONFIGMAP_NAME --ignore-not-found + ${KUBERNETES_CLI:-kubectl} -n $DOMAIN_NAMESPACE create configmap $CONFIGMAP_NAME $FILENAMES + ${KUBERNETES_CLI:-kubectl} -n $DOMAIN_NAMESPACE label configmap $CONFIGMAP_NAME weblogic.domainUID=$DOMAIN_UID + +fi + diff --git a/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-utils/create-secret.sh b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-utils/create-secret.sh new file mode 100755 index 000000000..83e771dea --- /dev/null +++ b/OracleAccessManagement/kubernetes/create-access-domain/domain-home-on-pv/wdt-utils/create-secret.sh @@ -0,0 +1,159 @@ +#!/bin/bash +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# + +usage() { + + cat << EOF + + This is a helper script for creating and labeling a Kubernetes secret. + The secret is labeled with the specified domain-uid. + + Usage: + + $(basename $0) [-n mynamespace] [-d mydomainuid] \\ + -s mysecretname [-l key1=val1] [-l key2=val2] [-f key=fileloc ]... + + -d : Defaults to 'sample-domain1' otherwise. + + -n : Defaults to 'sample-domain1-ns' otherwise. + + -s : Name of secret. Required. + + -l : Secret 'literal' key/value pair, for example + '-l "password=abc123"'. Can be specified more than once. + + -f : Secret 'file-name' key/file pair, for example + '-f walletFile=./ewallet.p12'. + Can be specified more than once. + + -dry ${KUBERNETES_CLI} : Show the ${KUBERNETES_CLI} commands (prefixed with 'dryrun:') + but do not perform them. + + -dry yaml : Show the yaml (prefixed with 'dryrun:') but do not + apply it. + + -? : This help. + + Note: Spaces are not supported in the '-f' or '-l' parameters. + +EOF +} + +set -e +set -o pipefail + +KUBERNETES_CLI="${KUBERNETES_CLI:-kubectl}" +DOMAIN_UID="sample-domain1" +NAMESPACE="sample-domain1-ns" +SECRET_NAME="" +LITERALS="" +FILENAMES="" +DRY_RUN="false" + +while [ ! "${1:-}" = "" ]; do + if [ ! "$1" = "-?" ] && [ "${2:-}" = "" ]; then + echo "Syntax Error. Pass '-?' for usage." + exit 1 + fi + case "$1" in + -s) SECRET_NAME="${2}" ;; + -n) NAMESPACE="${2}" ;; + -d) DOMAIN_UID="${2}" ;; + -l) LITERALS="${LITERALS} --from-literal='${2}'" ;; + -f) FILENAMES="${FILENAMES} --from-file=${2}" ;; + -dry) DRY_RUN="${2}" + case "$DRY_RUN" in + ${KUBERNETES_CLI}|yaml) ;; + *) echo "Error: Syntax Error. Pass '-?' for usage." 
+ exit 1 + ;; + esac + ;; + -?) usage ; exit 1 ;; + *) echo "Syntax Error. Pass '-?' for usage." ; exit 1 ;; + esac + shift + shift +done + +if [ -z "$SECRET_NAME" ]; then + echo "Error: Syntax Error. Must specify '-s'. Pass '-?' for usage." + exit 1 +fi + +if [ -z "${LITERALS}${FILENAMES}" ]; then + echo "Error: Syntax Error. Must specify at least one '-l' or '-f'. Pass '-?' for usage." + exit 1 +fi + +set -eu + +kubernetesCLIDryRunDelete() { +cat << EOF +dryrun:${KUBERNETES_CLI} -n $NAMESPACE delete secret \\ +dryrun: $SECRET_NAME \\ +dryrun: --ignore-not-found +EOF +} + +kubernetesCLIDryRunCreate() { +local moredry="" +if [ "$DRY_RUN" = "yaml" ]; then + local moredry="--dry-run=client -o yaml" +fi +cat << EOF +dryrun:${KUBERNETES_CLI} -n $NAMESPACE create secret generic \\ +dryrun: $SECRET_NAME \\ +dryrun: $LITERALS $FILENAMES ${moredry} +EOF +} + +kubernetesCLIDryRunLabel() { +cat << EOF +dryrun:${KUBERNETES_CLI} -n $NAMESPACE label secret \\ +dryrun: $SECRET_NAME \\ +dryrun: weblogic.domainUID=$DOMAIN_UID +EOF +} + +kubernetesCLIDryRun() { +cat << EOF +dryrun: +dryrun:echo "@@ Info: Setting up secret '$SECRET_NAME'." +dryrun: +EOF +kubernetesCLIDryRunDelete +kubernetesCLIDryRunCreate +kubernetesCLIDryRunLabel +cat << EOF +dryrun: +EOF +} + +if [ "$DRY_RUN" = "${KUBERNETES_CLI}" ]; then + + kubernetesCLIDryRun + +elif [ "$DRY_RUN" = "yaml" ]; then + + echo "dryrun:---" + echo "dryrun:" + + # don't change indent of the sed '/a' commands - the spaces are significant + # (we use an old form of sed append to stay compatible with old bash on mac) + + source <( kubernetesCLIDryRunCreate | sed 's/dryrun://') \ + | sed -e '/ name:/a\ + labels:' \ + | sed -e '/labels:/a\ + weblogic.domainUID:' \ + | sed "s/domainUID:/domainUID: $DOMAIN_UID/" \ + | grep -v creationTimestamp \ + | sed "s/^/dryrun:/" + +else + + source <( kubernetesCLIDryRun | sed 's/dryrun://') +fi diff --git a/OracleAccessManagement/kubernetes/create-rcu-credentials/create-rcu-credentials.sh b/OracleAccessManagement/kubernetes/create-rcu-credentials/create-rcu-credentials.sh index 4a00e316d..70af02602 100755 --- a/OracleAccessManagement/kubernetes/create-rcu-credentials/create-rcu-credentials.sh +++ b/OracleAccessManagement/kubernetes/create-rcu-credentials/create-rcu-credentials.sh @@ -1,5 +1,5 @@ #!/usr/bin/env bash -# Copyright (c) 2020, 2023, Oracle and/or its affiliates. +# Copyright (c) 2020, 2022, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # # Description diff --git a/OracleAccessManagement/kubernetes/create-rcu-schema/common/create-rcu-pod.sh b/OracleAccessManagement/kubernetes/create-rcu-schema/common/create-rcu-pod.sh index 4a7277d09..02c9df9d5 100755 --- a/OracleAccessManagement/kubernetes/create-rcu-schema/common/create-rcu-pod.sh +++ b/OracleAccessManagement/kubernetes/create-rcu-schema/common/create-rcu-pod.sh @@ -17,11 +17,11 @@ usage() { echo " Must contain SYSDBA username at key 'sys_username'," echo " SYSDBA password at key 'sys_password'," echo " and RCU schema owner password at key 'password'." 
- echo " -p FMW Infrastructure ImagePullSecret (optional) " + echo " -p OracleAccessManagement ImagePullSecret (optional) " echo " (default: none) " - echo " -i FMW Infrastructure Image (optional) " - echo " (default: container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.4) " - echo " -u FMW Infrastructure ImagePullPolicy (optional) " + echo " -i OracleAccessManagement Image (optional) " + echo " (default: oracle/oam:12.2.1.4.0) " + echo " -u OracleAccessManagement ImagePullPolicy (optional) " echo " (default: IfNotPresent) " echo " -o Output directory for the generated YAML file. (optional)" echo " (default: rcuoutput)" @@ -34,7 +34,7 @@ usage() { namespace="default" credSecret="oracle-rcu-secret" -fmwimage="container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.4" +fmwimage="oracle/oam:12.2.1.4.0" imagePullPolicy="IfNotPresent" rcuOutputDir="rcuoutput" @@ -101,3 +101,4 @@ ${KUBERNETES_CLI:-kubectl} exec -n $namespace -i rcu -- bash -c 'cat > /u01/orac ${KUBERNETES_CLI:-kubectl} get po/rcu -n $namespace echo "[INFO] Pod 'rcu' is running in namespace '$namespace'" + diff --git a/OracleAccessManagement/kubernetes/create-rcu-schema/common/createRepository.sh b/OracleAccessManagement/kubernetes/create-rcu-schema/common/createRepository.sh index be0b6106e..f1d6a4c0b 100755 --- a/OracleAccessManagement/kubernetes/create-rcu-schema/common/createRepository.sh +++ b/OracleAccessManagement/kubernetes/create-rcu-schema/common/createRepository.sh @@ -1,5 +1,5 @@ #!/bin/bash -# Copyright (c) 2020, 2023, Oracle and/or its affiliates. +# Copyright (c) 2020, 2022, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # . /u01/oracle/wlserver/server/bin/setWLSEnv.sh diff --git a/OracleAccessManagement/kubernetes/create-rcu-schema/common/dropRepository.sh b/OracleAccessManagement/kubernetes/create-rcu-schema/common/dropRepository.sh index 8f83c0e12..90c3f4dbd 100755 --- a/OracleAccessManagement/kubernetes/create-rcu-schema/common/dropRepository.sh +++ b/OracleAccessManagement/kubernetes/create-rcu-schema/common/dropRepository.sh @@ -1,5 +1,5 @@ #!/bin/bash -# Copyright (c) 2020, 2023, Oracle and/or its affiliates. +# Copyright (c) 2020, 2022, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # . 
/u01/oracle/wlserver/server/bin/setWLSEnv.sh diff --git a/OracleAccessManagement/kubernetes/create-rcu-schema/drop-rcu-schema.sh b/OracleAccessManagement/kubernetes/create-rcu-schema/drop-rcu-schema.sh index cccf835e0..c96528fd8 100755 --- a/OracleAccessManagement/kubernetes/create-rcu-schema/drop-rcu-schema.sh +++ b/OracleAccessManagement/kubernetes/create-rcu-schema/drop-rcu-schema.sh @@ -44,7 +44,7 @@ rcuType="${rcuType}" namespace="default" createPodArgs="" -while getopts ":s:t:d:n:c:p:i:u:o:v:h:" opt; do +while getopts ":s:t:d:n:c:p:i:u:o:r:h:" opt; do case $opt in s) schemaPrefix="${OPTARG}" ;; @@ -56,7 +56,7 @@ while getopts ":s:t:d:n:c:p:i:u:o:v:h:" opt; do ;; c|p|i|u|o) createPodArgs+=" -${opt} ${OPTARG}" ;; - v) customVariables="${OPTARG}" + r) customVariables="${OPTARG}" ;; h) usage 0 ;; diff --git a/OracleAccessManagement/kubernetes/create-weblogic-domain-credentials/create-weblogic-credentials.sh b/OracleAccessManagement/kubernetes/create-weblogic-domain-credentials/create-weblogic-credentials.sh index 725495166..6c8616f65 100755 --- a/OracleAccessManagement/kubernetes/create-weblogic-domain-credentials/create-weblogic-credentials.sh +++ b/OracleAccessManagement/kubernetes/create-weblogic-domain-credentials/create-weblogic-credentials.sh @@ -1,5 +1,5 @@ #!/usr/bin/env bash -# Copyright (c) 2020, 2023, Oracle and/or its affiliates. +# Copyright (c) 2020, 2022, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # # Description diff --git a/OracleAccessManagement/kubernetes/create-weblogic-domain-pv-pvc/create-pv-pvc-inputs.yaml b/OracleAccessManagement/kubernetes/create-weblogic-domain-pv-pvc/create-pv-pvc-inputs.yaml index b5738ef50..ffa922f43 100755 --- a/OracleAccessManagement/kubernetes/create-weblogic-domain-pv-pvc/create-pv-pvc-inputs.yaml +++ b/OracleAccessManagement/kubernetes/create-weblogic-domain-pv-pvc/create-pv-pvc-inputs.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2020, 2023, Oracle and/or its affiliates. +# Copyright (c) 2020, 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # # The version of this inputs file. Do not modify. diff --git a/OracleAccessManagement/kubernetes/domain-lifecycle/README.md b/OracleAccessManagement/kubernetes/domain-lifecycle/README.md index b30dbb6a2..0d32131f3 100755 --- a/OracleAccessManagement/kubernetes/domain-lifecycle/README.md +++ b/OracleAccessManagement/kubernetes/domain-lifecycle/README.md @@ -27,6 +27,10 @@ For information on how to start, stop, restart, and scale WebLogic Server instan - [`kubectl --watch`](#kubectl---watch) - [`clusterStatus.sh`](#clusterstatussh) - [`waitForDomain.sh`](#waitfordomainsh) +- [Examine, change, or delete PV contents](#examine-change-or-delete-pv-contents) + - [`pv-pvc-helper.sh`](#pv-pvc-helpersh) +- [OPSS Wallet utility](#opss-wallet-utility) + - [`opss-wallet.sh`](#opss-walletsh) ### Prerequisites @@ -274,3 +278,69 @@ Use the following command to wait for a domain to fully shut down: ``` $ waitForDomain.sh -n my-namespace -d my-domain -p 0 ``` + +### Examine, change, or delete PV contents + +#### `pv-pvc-helper.sh` + +Use this helper script for examining, changing permissions, or deleting the contents of the persistent volume (such as domain files or logs) for a WebLogic Domain on PV or Model in Image domain. 
+The script launches a Kubernetes pod named 'pvhelper' using the provided persistent volume claim name and the mount path. +You can run the 'kubectl exec' command to get a shell to the running pod container and run commands to examine or clean up the contents of shared directories on the persistent volume. +Use the 'kubectl delete pod pvhelper -n <namespace>' command to delete the Pod after it's no longer needed. + +Use the following command for script usage: + +``` +$ pv-pvc-helper.sh -h +``` + +The following is an example command to launch the helper pod with the PVC name `sample-domain1-weblogic-sample-pvc` and mount path `/shared`. +Specifying the `-r` argument allows the script to run as the `root` user. + +``` +$ pv-pvc-helper.sh -n sample-domain1-ns -c sample-domain1-weblogic-sample-pvc -m /shared -r +``` + +After the Pod is created, use the following command to get a shell to the running pod container. + +``` +$ kubectl -n sample-domain1-ns exec -it pvhelper -- /bin/sh +``` + +After you get a shell to the running pod container, you can recursively delete the contents of the domain home and applications +directories using the `rm -rf /shared/domains/sample-domain1` and `rm -rf /shared/applications/sample-domain1` commands. Because these +commands will delete files on the persistent storage, we recommend that you understand and execute these commands carefully. + +Use the following command to delete the Pod after it's no longer needed. + +``` +$ kubectl delete pod pvhelper -n <namespace> +``` + +### OPSS Wallet utility + +#### `opss-wallet.sh` + +The OPSS wallet utility is a helper script for JRF-type domains that can save an OPSS key +wallet from a running domain's introspector ConfigMap to a file and +restore an OPSS key wallet file to a Kubernetes Secret for use by a +domain that you're about to run. + +Use the following command for script usage: + +``` +$ opss-wallet.sh -? +``` + +For example, run the following command to save an OPSS key wallet from a running domain to the file './ewallet.p12': + +``` +$ opss-wallet.sh -s +``` + +Run the following command to restore the OPSS key wallet from the file './ewallet.p12' to the secret +'sample-domain1-opss-walletfile-secret' for use by a domain you're about to run: + +``` +$ opss-wallet.sh -r +``` diff --git a/OracleAccessManagement/kubernetes/domain-lifecycle/clusterStatus.sh b/OracleAccessManagement/kubernetes/domain-lifecycle/clusterStatus.sh index db1c98ddf..0dccfcab0 100755 --- a/OracleAccessManagement/kubernetes/domain-lifecycle/clusterStatus.sh +++ b/OracleAccessManagement/kubernetes/domain-lifecycle/clusterStatus.sh @@ -1,18 +1,19 @@ #!/bin/sh -# Copyright (c) 2021, 2022, Oracle and/or its affiliates. +# Copyright (c) 2021, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. set -eu set -o pipefail usage() { -cat<<EOF
+ +flatten() { + while read line + do + __flatten $line + done +} +__flatten() { + local prefix="" + while [ "${1:-}" != "<<" ]; do + [ -z "${1:-}" ] && return + prefix+="$1 " + shift + done + while [ "${1:-}" == "<<" ]; do + local suffix="" + shift + while [ "$1" != ">>" ]; do + suffix+="$1 " + shift + done + shift + echo $prefix $suffix + done +} + +# +# condition +# helper fn to take the thirteen column input +# and collapse some columns into an aggregate status +# +condition() { + while read line + do + __condition $line + done +} +__condition() { + local gen=$1 + local ogen=$2 + local failed=$3 + local completed=${12} + local available=${13} + local condition="IMPOSSIBLE" + if [ "$failed" = "True" ]; then + condition="Failed" + elif [ ! "$gen" = "$ogen" ] || [ "$completed" = "NotSet" ] || [ "$available" = "NotSet" ]; then + condition="Unknown" + elif [ "$completed" = "True" ]; then + condition="Completed" + elif [ "$available" = "True" ]; then + condition="Available" + else + condition="Unavailable" + fi + echo "$4 $5 $6 $condition $available $7 $8 $9 ${10} ${11}" +} + + +# +# clusterStatus +# function to display the domain cluster status in a table +# $1=ns $2=uid $3=cluster, pass "" to mean "any" +# $4=KUBERNETES_CLI +# clusterStatus() { local __ns="${1:-}" if [ -z "$__ns" ]; then @@ -52,7 +164,7 @@ clusterStatus() { local __uid="${2:-}" local __cluster_name="${3:-}" - local __kubernetes_cli="${4:-kubectl}" + local __kubernetes_cli="${4:-${KUBERNETES_CLI:-kubectl}}" if ! [ -x "$(command -v ${__kubernetes_cli})" ]; then echo "@@Error: Kubernetes CLI '${__kubernetes_cli}' is not installed." @@ -64,14 +176,18 @@ clusterStatus() { ( shopt -s nullglob # causes the for loop to silently handle case where no domains match + local _domains + local _val + + _domains="$( + $__kubernetes_cli $__ns_filter get domains.v9.weblogic.oracle \ + -o=jsonpath='{range .items[*]}{.metadata.namespace}{","}{.metadata.name}{","}{.spec.domainUID}{"\n"}{end}' + )" - echo "namespace domain cluster min max goal current ready" - echo "--------- ------ ------- --- --- ---- ------- -----" + echo "namespace domain cluster status available min max goal current ready" + echo "--------- ------ ------- ------ --------- --- --- ---- ------- -----" - local __val - for __val in \ - $($__kubernetes_cli $__ns_filter get domains \ - -o=jsonpath='{range .items[*]}{.metadata.namespace}{","}{.metadata.name}{","}{.spec.domainUID}{"\n"}{end}') + for __val in $_domains do local __ns_cur=$( echo $__val | cut -d ',' -f 1) local __name_cur=$(echo $__val | cut -d ',' -f 2) @@ -81,26 +197,46 @@ clusterStatus() { [ -n "$__uid" ] && [ ! 
"$__uid" = "$__uid_cur" ] && continue + # construct a json path for the domain query + + __jp+='{" ~G"}{.metadata.generation}' + __jp+='{" ~O"}{.status.observedGeneration}' + __jp+='{" ~F"}{.status.conditions[?(@.type=="Failed")].status}' if [ -z "$__cluster_name" ]; then __jp+='{range .status.clusters[*]}' else __jp+='{range .status.clusters[?(@.clusterName=='\"$__cluster_name\"')]}' fi - __jp+='{"'$__ns_cur'"}' + __jp+='{" "}{"<<"}' + __jp+='{" "}{"'$__ns_cur'"}' __jp+='{" "}{"'$__uid_cur'"}' __jp+='{" "}{.clusterName}' - __jp+='{" ~!"}{.minimumReplicas}' - __jp+='{" ~!"}{.maximumReplicas}' - __jp+='{" ~!"}{.replicasGoal}' - __jp+='{" ~!"}{.replicas}' - __jp+='{" ~!"}{.readyReplicas}' - __jp+='{"\n"}' + __jp+='{" "}{"~!"}{.minimumReplicas}' + __jp+='{" "}{"~!"}{.maximumReplicas}' + __jp+='{" "}{"~!"}{.replicasGoal}' + __jp+='{" "}{"~!"}{.replicas}' + __jp+='{" "}{"~!"}{.readyReplicas}' + __jp+='{" "}{"~C"}{.conditions[?(@.type=="Completed")].status}' + __jp+='{" "}{"~A"}{.conditions[?(@.type=="Available")].status}' + __jp+='{" "}{">>"}' __jp+='{end}' + __jp+='{"\n"}' + + # get the values, replace empty values with sentinals or '0' as appropriate, + # and remove all '~?' prefixes - $__kubernetes_cli -n "$__ns_cur" get domain "$__uid_cur" -o=jsonpath="$__jp" + $__kubernetes_cli -n "$__ns_cur" get domains.v9.weblogic.oracle "$__name_cur" -o=jsonpath="$__jp" \ + | sed 's/~G /~GunknownGen /g' \ + | sed 's/~O /~OunknownOGen /g' \ + | sed 's/~F /~FNotSet /g' \ + | sed 's/~C /~CNotSet /g' \ + | sed 's/~A /~ANotSet /g' \ + | sed 's/~[A-Z]//g' \ + | sed 's/~!\([0-9][0-9]*\)/\1/g' \ + | sed 's/~!/0/g' - done | sed 's/~!\([0-9][0-9]*\)/\1/g'\ - | sed 's/~!/0/g' \ + done | flatten \ + | condition \ | sort --version-sort ) | column --table @@ -111,7 +247,7 @@ clusterStatus() { domainNS= domainUID= clusterName= -kubernetesCli=${KUBERNETES_CLI:-kubectl} +kubernetesCli= set +u while [ ! -z ${1+x} ]; do diff --git a/OracleAccessManagement/kubernetes/domain-lifecycle/opss-wallet.sh b/OracleAccessManagement/kubernetes/domain-lifecycle/opss-wallet.sh new file mode 100755 index 000000000..933cb7de0 --- /dev/null +++ b/OracleAccessManagement/kubernetes/domain-lifecycle/opss-wallet.sh @@ -0,0 +1,152 @@ +#!/bin/bash +# Copyright (c) 2019, 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. + +# +# This is a helper script for JRF type domains that can save an OPSS key +# wallet from a running domain's introspector configmap to a file, and/or +# restore an OPSS key wallet file to a Kubernetes secret for use by a +# domain that you're about to run. +# +# For command line details, pass '-?' or see 'usage_exit()' below. +# + +set -e +set -o pipefail + +usage_exit() { +cat << EOF + + Usage: + + $(basename $0) -s [-d domain-uid] [-n namespace] \\ + [-wf wallet-file-name] + + $(basename $0) -r [-d domain-uid] [-n namespace] \\ + [-wf wallet-file-name] [-ws wallet-file-secret] + + $(basename $0) -r -s [-d domain-uid] [-n namespace] \\ + [-wf wallet-file-name] [-ws wallet-file-secret] + + Save an OPSS key wallet from a running JRF domain's introspector + configmap to a file, and/or restore an OPSS key wallet file + to a Kubernetes secret for use by a domain that you're about to run. + + Parameters: + + -d Domain UID. Default 'sample-domain1'. + + -n Kubernetes namespace. Default 'sample-domain1-ns'. + + -s Save an OPSS wallet file from an introspector + configmap to a file. (See also '-wf'.) 
+
+    -r  Restore an OPSS wallet file to a Kubernetes secret.
+        (See also '-wf' and '-ws').
+
+    -wf Name of OPSS wallet file on local file system.
+        Default is './ewallet.p12'.
+
+    -ws Name of Kubernetes secret to create from the
+        wallet file. This must match the
+        'configuration.opss.walletFileSecret'
+        configured in your domain resource.
+        Ignored if '-r' not specified.
+        Default is 'DOMAIN_UID-opss-walletfile-secret'.
+
+    -?  Output this help message.
+
+  Examples:
+
+    Save an OPSS key wallet from a running domain to file './ewallet.p12':
+      $(basename $0) -s
+
+    Restore the OPSS key wallet from file './ewallet.p12' to secret
+    'sample-domain1-opss-walletfile-secret' for use by a domain
+    you're about to run:
+      $(basename $0) -r
+
+EOF
+
+  exit 0
+}
+
+SCRIPTDIR="$( cd "$(dirname "$0")" > /dev/null 2>&1 ; pwd -P )"
+echo "@@ Info: Running '$(basename "$0")'."
+
+DOMAIN_UID="sample-domain1"
+DOMAIN_NAMESPACE="sample-domain1-ns"
+WALLET_FILE="ewallet.p12"
+WALLET_SECRET=""
+
+syntax_error_exit() {
+  echo "@@ Syntax error: Use '-?' for usage."
+  exit 1
+}
+
+SAVE_WALLET=0
+RESTORE_WALLET=0
+
+while [ ! "$1" = "" ]; do
+  case "$1" in
+    -n)  [ -z "$2" ] && syntax_error_exit
+         DOMAIN_NAMESPACE="${2}"
+         shift
+         ;;
+    -d)  [ -z "$2" ] && syntax_error_exit
+         DOMAIN_UID="${2}"
+         shift
+         ;;
+    -s)  SAVE_WALLET=1
+         ;;
+    -r)  RESTORE_WALLET=1
+         ;;
+    -ws) [ -z "$2" ] && syntax_error_exit
+         WALLET_SECRET="${2}"
+         shift
+         ;;
+    -wf) [ -z "$2" ] && syntax_error_exit
+         WALLET_FILE="${2}"
+         shift
+         ;;
+    -?)  usage_exit
+         ;;
+    *)   syntax_error_exit
+         ;;
+  esac
+  shift
+done
+
+[ ${SAVE_WALLET} -eq 0 ] && [ ${RESTORE_WALLET} -eq 0 ] && syntax_error_exit
+
+WALLET_SECRET=${WALLET_SECRET:-$DOMAIN_UID-opss-walletfile-secret}
+
+set -eu
+
+if [ ${SAVE_WALLET} -eq 1 ] ; then
+  echo "@@ Info: Saving wallet from configmap '${DOMAIN_UID}-weblogic-domain-introspect-cm' in namespace '${DOMAIN_NAMESPACE}' to file '${WALLET_FILE}'."
+  ${KUBERNETES_CLI:-kubectl} -n ${DOMAIN_NAMESPACE} \
+    get configmap ${DOMAIN_UID}-weblogic-domain-introspect-cm \
+    -o jsonpath='{.data.ewallet\.p12}' \
+    > ${WALLET_FILE}
+fi
+
+if [ ! -f "$WALLET_FILE" ]; then
+  echo "@@ Error: Wallet file '$WALLET_FILE' not found."
+  exit 1
+fi
+
+FILESIZE=$(du -k "$WALLET_FILE" | cut -f1)
+if [ $FILESIZE = 0 ]; then
+  echo "@@ Error: Wallet file '$WALLET_FILE' is empty. Is this a JRF domain? The wallet file will be empty for a non-RCU/non-JRF domain."
+  exit 1
+fi
+
+if [ ${RESTORE_WALLET} -eq 1 ] ; then
+  echo "@@ Info: Creating secret '${WALLET_SECRET}' in namespace '${DOMAIN_NAMESPACE}' for wallet file '${WALLET_FILE}', domain uid '${DOMAIN_UID}'."
+  $SCRIPTDIR/create-secret.sh \
+    -n ${DOMAIN_NAMESPACE} \
+    -d ${DOMAIN_UID} \
+    -s ${WALLET_SECRET} \
+    -f walletFile=${WALLET_FILE}
+fi
diff --git a/OracleAccessManagement/kubernetes/domain-lifecycle/pv-pvc-helper.sh b/OracleAccessManagement/kubernetes/domain-lifecycle/pv-pvc-helper.sh
new file mode 100755
index 000000000..c49179c5d
--- /dev/null
+++ b/OracleAccessManagement/kubernetes/domain-lifecycle/pv-pvc-helper.sh
@@ -0,0 +1,223 @@
+#!/bin/bash
+# Copyright (c) 2023, Oracle and/or its affiliates.
+# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
+
+# Launch a "Persistent volume cleanup helper" pod for examining or cleaning up the contents
+# of the domain directory on a persistent volume.
+
+script="${BASH_SOURCE[0]}"
+scriptDir="$( cd "$( dirname "${script}" )" && pwd )"
+source ${scriptDir}/helper.sh
+source ${scriptDir}/../common/utility.sh
+set -eu
+
+initGlobals() {
+  KUBERNETES_CLI=${KUBERNETES_CLI:-kubectl}
+  claimName=""
+  mountPath=""
+  namespace="default"
+  image="ghcr.io/oracle/oraclelinux:8-slim"
+  imagePullPolicy="IfNotPresent"
+  pullsecret=""
+  runAsRoot=""
+}
+
+usage() {
+  cat << EOF
+
+  This is a helper script for examining, changing permissions, or deleting the contents of the persistent
+  volume (such as domain files or logs) for a WebLogic Domain on PV or Model in Image domain.
+  The script launches a Kubernetes pod named 'pvhelper' using the provided persistent volume claim name and the mount path.
+  You can run the '${KUBERNETES_CLI} exec' command to get a shell to the running pod container and run commands to examine or clean up the contents of
+  shared directories on the persistent volume.
+  If the helper pod is already running in the namespace with the provided options, then it doesn't create a new pod.
+  If the helper pod is already running and the persistent volume claim name or mount path doesn't match, then the script generates an error.
+  Use the '${KUBERNETES_CLI} delete pod pvhelper -n <namespace>' command to delete the pod when it's no longer needed.
+
+  Please see README.md for more details.
+
+  Usage:
+
+  $(basename $0) -c persistentVolumeClaimName -m mountPath [-n namespace] [-i image] [-u imagePullPolicy] [-o helperOutputDir] [-r] [-h]
+
+  [-c | --claimName]       : Persistent volume claim name. This parameter is required.
+
+  [-m | --mountPath]       : Mount path of the persistent volume in helper pod. This parameter is required.
+
+  [-n | --namespace]       : Domain namespace. Default is 'default'.
+
+  [-i | --image]           : Container image for the helper pod (optional). Default is 'ghcr.io/oracle/oraclelinux:8-slim'.
+
+  [-u | --imagePullPolicy] : Image pull policy for the helper pod (optional). Default is 'IfNotPresent'.
+
+  [-p | --imagePullSecret] : Image pull secret for the helper pod (optional). Default is 'None'.
+
+  [-r | --runAsRoot]       : Option to run the pod as a root user. Default is 'runAsNonRoot'.
+
+  [-h | --help]            : This help.
+
+EOF
+exit $1
+}
+
+processCommandLine() {
+  while [[ "$#" -gt "0" ]]; do
+    key="$1"
+    case $key in
+      -c|--claimName)
+        claimName="$2"
+        shift
+        ;;
+      -m|--mountPath)
+        mountPath="$2"
+        shift
+        ;;
+      -n|--namespace)
+        namespace="$2"
+        shift
+        ;;
+      -i|--image)
+        image="$2"
+        shift
+        ;;
+      -u|--imagePullPolicy)
+        imagePullPolicy="$2"
+        shift
+        ;;
+      -p|--pullsecret)
+        pullsecret="$2"
+        shift
+        ;;
+      -r|--runAsRoot)
+        runAsRoot="#"
+        ;;
+      -h|--help)
+        usage 0
+        ;;
+      -*|--*)
+        echo "Unknown option $1"
+        usage 1
+        ;;
+      *)
+        # unknown option
+        ;;
+    esac
+    shift # past arg or value
+  done
+}
+
+validatePvc() {
+  if [ -z "${claimName}" ]; then
+    printError "${script}: -c persistentVolumeClaimName must be specified."
+    usage 1
+  fi
+
+  pvc=$(${KUBERNETES_CLI} get pvc ${claimName} -n ${namespace} --ignore-not-found)
+  if [ -z "${pvc}" ]; then
+    printError "${script}: Persistent volume claim '$claimName' does not exist in namespace ${namespace}. \
+      Please specify an existing persistent volume claim name using the '-c' parameter."
+    exit 1
+  fi
+}
+
+validateMountPath() {
+  if [ -z "${mountPath}" ]; then
+    printError "${script}: -m mountPath must be specified."
+    usage 1
+  elif [[ ! "$mountPath" =~ '/' ]] && [[ ! "$mountPath" =~ '\' ]]; then
+    printError "${script}: -m mountPath is not a valid path."
+    usage 1
+  fi
+}
+
+checkAndDefaultPullSecret() {
+  if [ -z "${pullsecret}" ]; then
+    pullsecret="none"
+    pullsecretPrefix="#"
+  fi
+}
+
+validateParameters() {
+  validatePvc
+  validateMountPath
+  checkAndDefaultPullSecret
+}
+
+
+processExistingPod() {
+  existingMountPath=$(${KUBERNETES_CLI} get po pvhelper -n ${namespace} -o jsonpath='{.spec.containers[0].volumeMounts[0].mountPath}')
+  existingClaimName=$(${KUBERNETES_CLI} get po pvhelper -n ${namespace} -o jsonpath='{.spec.volumes[0].persistentVolumeClaim.claimName}')
+  if [ "$existingMountPath" != "$mountPath" ]; then
+    printError "Pod 'pvhelper' already exists in namespace '$namespace' but the mount path \
+      '$mountPath' doesn't match the mount path '$existingMountPath' of the existing pod. \
+      Please delete the existing pod using the '${KUBERNETES_CLI} delete pod pvhelper -n $namespace' \
+      command to create a new pod."
+    exit 1
+  fi
+  if [ "$existingClaimName" != "$claimName" ]; then
+    printError "Pod 'pvhelper' already exists but the claim name '$claimName' doesn't match \
+      the claim name '$existingClaimName' of the existing pod. Please delete the existing pod \
+      using the '${KUBERNETES_CLI} delete pod pvhelper -n $namespace' command to create a new pod."
+    exit 1
+  fi
+  printInfo "Pod 'pvhelper' exists in namespace '$namespace'."
+}
+
+createPod() {
+  printInfo "Creating pod 'pvhelper' using image '${image}', persistent volume claim \
+    '${claimName}' and mount path '${mountPath}'."
+
+  pvhelperYamlTemp=${scriptDir}/template/pvhelper.yaml.template
+  template="$(cat ${pvhelperYamlTemp})"
+
+  template=$(echo "$template" | sed -e "s:%NAMESPACE%:${namespace}:g;\
+    s:%WEBLOGIC_IMAGE_PULL_POLICY%:${imagePullPolicy}:g;\
+    s:%WEBLOGIC_IMAGE_PULL_SECRET_NAME%:${pullsecret}:g;\
+    s:%WEBLOGIC_IMAGE_PULL_SECRET_PREFIX%:${pullsecretPrefix}:g;\
+    s:%CLAIM_NAME%:${claimName}:g;s:%VOLUME_MOUNT_PATH%:${mountPath}:g;\
+    s:%RUN_AS_ROOT_PREFIX%:${runAsRoot}:g;\
+    s?image:.*?image: ${image}?g")
+  ${KUBERNETES_CLI} delete po pvhelper -n ${namespace} --ignore-not-found
+  echo "$template" | ${KUBERNETES_CLI} apply -f -
+}
+
+printCommandOutput() {
+  printInfo "Executing the '${KUBERNETES_CLI} -n $namespace exec -i pvhelper -- ls -l ${mountPath}' \
+    command to print the contents of the mount path in the persistent volume."
+
+  cmdOut=$(${KUBERNETES_CLI} -n $namespace exec -i pvhelper -- ls -l ${mountPath})
+  printInfo "=============== Command output ===================="
+  echo "$cmdOut"
+  printInfo "==================================================="
+}
+
+printPodUsage() {
+  printInfo "Use the command '${KUBERNETES_CLI} -n $namespace exec -it pvhelper -- /bin/sh' and \
+    cd to the '${mountPath}' directory to view or delete the contents on the persistent volume."
+  printInfo "Use the command '${KUBERNETES_CLI} -n $namespace delete pod pvhelper' to delete the pod \
+    created by the script."
+}
+
+main() {
+  pvhelperpod=`${KUBERNETES_CLI} get po -n ${namespace} | grep "^pvhelper " | cut -f1 -d " " `
+  if [ "$pvhelperpod" = "pvhelper" ]; then
+    processExistingPod
+  else
+    createPod
+  fi
+
+  checkPod pvhelper $namespace  # exits non zero on error
+
+  checkPodState pvhelper $namespace "1/1"  # exits non zero on error
+
+  sleep 5
+
+  printCommandOutput
+
+  printPodUsage
+}
+
+initGlobals
+processCommandLine "${@}"
+validateParameters
+main
diff --git a/OracleAccessManagement/kubernetes/domain-lifecycle/template/pvhelper.yaml.template b/OracleAccessManagement/kubernetes/domain-lifecycle/template/pvhelper.yaml.template
new file mode 100755
index 000000000..10aa199cf
--- /dev/null
+++ b/OracleAccessManagement/kubernetes/domain-lifecycle/template/pvhelper.yaml.template
@@ -0,0 +1,35 @@
+# Copyright (c) 2023, Oracle and/or its affiliates.
+# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
+#
+apiVersion: v1
+kind: Pod
+metadata:
+  labels:
+    run: pvhelper
+  name: pvhelper
+  namespace: %NAMESPACE%
+spec:
+  containers:
+  - args:
+    - sleep
+    - infinity
+    image: ghcr.io/oracle/oraclelinux:8-slim
+    imagePullPolicy: %WEBLOGIC_IMAGE_PULL_POLICY%
+    name: pvhelper
+    volumeMounts:
+    - name: pv-volume
+      mountPath: %VOLUME_MOUNT_PATH%
+    %RUN_AS_ROOT_PREFIX%securityContext:
+    %RUN_AS_ROOT_PREFIX%  allowPrivilegeEscalation: false
+    %RUN_AS_ROOT_PREFIX%  capabilities:
+    %RUN_AS_ROOT_PREFIX%    drop:
+    %RUN_AS_ROOT_PREFIX%    - ALL
+    %RUN_AS_ROOT_PREFIX%  privileged: false
+    %RUN_AS_ROOT_PREFIX%  runAsNonRoot: true
+    %RUN_AS_ROOT_PREFIX%  runAsUser: 1000
+  volumes:
+  - name: pv-volume
+    persistentVolumeClaim:
+      claimName: %CLAIM_NAME%
+  %WEBLOGIC_IMAGE_PULL_SECRET_PREFIX%imagePullSecrets:
+  %WEBLOGIC_IMAGE_PULL_SECRET_PREFIX%- name: %WEBLOGIC_IMAGE_PULL_SECRET_NAME%
diff --git a/OracleAccessManagement/kubernetes/domain-lifecycle/waitForDomain.sh b/OracleAccessManagement/kubernetes/domain-lifecycle/waitForDomain.sh
index b0e0c2682..cca3456d3 100755
--- a/OracleAccessManagement/kubernetes/domain-lifecycle/waitForDomain.sh
+++ b/OracleAccessManagement/kubernetes/domain-lifecycle/waitForDomain.sh
@@ -305,8 +305,8 @@ getDomainInfo() {
   getDomainAIImages domain_info_goal_aiimages_current
   getDomainValue domain_info_api_version ".apiVersion"
   getDomainValue domain_info_condition_failed_str ".status.conditions[?(@.type==\"Failed\")]" # has full failure messages, if any
-  getDomainValue domain_info_condition_completed ".status.conditions[?(@.type==\"Completed\")].status" # "True" when complete
   getDomainValue domain_info_observed_generation ".status.observedGeneration"
+  getDomainValue domain_info_condition_completed ".status.conditions[?(@.type==\"Completed\")].status" # "True" when complete
 
   domain_info_clusters=$(
     echo "$domain_info_clusters" | sed 's/"name"//g' | tr -d '[]{}:' | sortlist | sed 's/,/ /') # convert to sorted space separated list
diff --git a/OracleAccessManagement/kubernetes/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml b/OracleAccessManagement/kubernetes/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml
index 7ebbb221b..6ba7c02bf 100755
--- a/OracleAccessManagement/kubernetes/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml
+++ b/OracleAccessManagement/kubernetes/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml
@@ -56,13 +56,17 @@ spec:
       securityContext:
         capabilities:
           add: ["SYS_CHROOT"]
-      image: "elasticsearch:6.8.23"
+      image: "elasticsearch:7.8.1"
      ports:
      - containerPort: 9200
      - containerPort: 9300
      env:
+      - name: discovery.type
+ value: single-node - name: ES_JAVA_OPTS value: -Xms1024m -Xmx1024m + - name: bootstrap.memory_lock + value: "false" --- kind: "Service" @@ -103,7 +107,7 @@ spec: spec: containers: - name: "kibana" - image: "kibana:6.8.23" + image: "kibana:7.8.1" ports: - containerPort: 5601 imagePullSecrets: diff --git a/OracleAccessManagement/kubernetes/kubectlserch b/OracleAccessManagement/kubernetes/kubectlserch deleted file mode 100755 index 045605e43..000000000 --- a/OracleAccessManagement/kubernetes/kubectlserch +++ /dev/null @@ -1,111 +0,0 @@ -./charts/apache-samples/custom-sample/README.md:7:$ ${KUBERNETES_CLI:-kubectl} create namespace apache-sample -./charts/apache-webtier/README.md:81:$ ${KUBERNETES_CLI:-kubectl} api-versions | grep rbac -./create-rcu-schema/create-rcu-schema.sh.mustache:59: pname=`${KUBERNETES_CLI:-kubectl} get po -n ${ns} | grep -w ${pod} | awk '{print $1}'` -./create-rcu-schema/create-rcu-schema.sh.mustache:65: rcode=`${KUBERNETES_CLI:-kubectl} get po ${pname} -n ${ns} | grep -w ${pod} | awk '{print $2}'` -./create-rcu-schema/create-rcu-schema.sh.mustache:70: rcode=`${KUBERNETES_CLI:-kubectl} get po/$pod -n ${ns} | grep -v NAME | awk '{print $2}'` -./create-rcu-schema/create-rcu-schema.sh.mustache:80: pname=`${KUBERNETES_CLI:-kubectl} get po -n ${ns} | grep -w ${pod} | awk '{print $1}'` -./create-rcu-schema/create-rcu-schema.sh.mustache:81: ${KUBERNETES_CLI:-kubectl} -n ${ns} get po ${pname} -./create-rcu-schema/create-rcu-schema.sh.mustache:123:${KUBERNETES_CLI:-kubectl} exec -n $namespace -i rcu -- /bin/bash /u01/oracle/createRepository.sh ${dburl} ${schemaPrefix} ${rcuType} ${customVariables} -./create-rcu-schema/README.md.mustache:28:$ ${KUBERNETES_CLI:-kubectl} -n default create secret generic oracle-rcu-secret \ -./create-rcu-schema/README.md.mustache:71:$ ${KUBERNETES_CLI:-kubectl} -n MYNAMESPACE create secret generic oracle-rcu-secret \ -./create-rcu-schema/README.md.mustache:203:$ ${KUBERNETES_CLI:-kubectl} -n default create secret generic oracle-rcu-secret \ -./create-rcu-schema/drop-rcu-schema.sh.mustache:78:#fmwimage=`${KUBERNETES_CLI:-kubectl} get pod/rcu -o jsonpath="{..image}"` -./create-rcu-schema/drop-rcu-schema.sh.mustache:81:${KUBERNETES_CLI:-kubectl} exec -n $namespace -i rcu -- /bin/bash /u01/oracle/dropRepository.sh ${dburl} ${schemaPrefix} ${rcuType} ${customVariables} -./create-rcu-schema/common/create-rcu-pod.sh.mustache:67:rcupod=`${KUBERNETES_CLI:-kubectl} get po -n ${namespace} | grep "^rcu " | cut -f1 -d " " ` -./create-rcu-schema/common/create-rcu-pod.sh.mustache:86: ${KUBERNETES_CLI:-kubectl} delete po rcu -n ${namespace} --ignore-not-found -./create-rcu-schema/common/create-rcu-pod.sh.mustache:87: ${KUBERNETES_CLI:-kubectl} apply -f $rcuYaml -./create-rcu-schema/common/create-rcu-pod.sh.mustache:98:${KUBERNETES_CLI:-kubectl} exec -n $namespace -i rcu -- bash -c 'cat > /u01/oracle/dropRepository.sh' < ${scriptDir}/dropRepository.sh || exit -5 -./create-rcu-schema/common/create-rcu-pod.sh.mustache:99:${KUBERNETES_CLI:-kubectl} exec -n $namespace -i rcu -- bash -c 'cat > /u01/oracle/createRepository.sh' < ${scriptDir}/createRepository.sh || exit -6 -./create-rcu-schema/common/create-rcu-pod.sh.mustache:101:${KUBERNETES_CLI:-kubectl} get po/rcu -n $namespace -./create-rcu-schema/create-image-pull-secret.sh.mustache:57:${KUBERNETES_CLI:-kubectl} delete secret/${secret} --ignore-not-found -./create-rcu-schema/create-image-pull-secret.sh.mustache:59:${KUBERNETES_CLI:-kubectl} create secret docker-registry ${secret} 
--docker-server=container-registry.oracle.com --docker-username=${username} --docker-password=${password} --docker-email=${email} -./create-rcu-credentials/README.md.mustache:36:You can check the secret with the `${KUBERNETES_CLI:-kubectl} describe secret` command. An example is shown below, -./create-rcu-credentials/README.md.mustache:40:$ ${KUBERNETES_CLI:-kubectl} -n <%namespace%> describe secret <%domainUID%>-rcu-credentials -o yaml -./create-rcu-credentials/create-rcu-credentials.sh.mustache:34:# Try to execute ${KUBERNETES_CLI:-kubectl} to see whether ${KUBERNETES_CLI:-kubectl} is available -./create-rcu-credentials/create-rcu-credentials.sh.mustache:36: if ! [ -x "$(command -v ${KUBERNETES_CLI:-kubectl})" ]; then -./create-rcu-credentials/create-rcu-credentials.sh.mustache:37: fail "${KUBERNETES_CLI:-kubectl} is not installed" -./create-rcu-credentials/create-rcu-credentials.sh.mustache:132:result=$(${KUBERNETES_CLI:-kubectl} get secret "${secretName}" -n "${namespace}" --ignore-not-found=true | grep "${secretName}" | wc | awk ' { print $1; }') -./create-rcu-credentials/create-rcu-credentials.sh.mustache:138:${KUBERNETES_CLI:-kubectl} -n "$namespace" create secret generic "$secretName" \ -./create-rcu-credentials/create-rcu-credentials.sh.mustache:146: ${KUBERNETES_CLI:-kubectl} label secret "${secretName}" -n "$namespace" weblogic.domainUID="$domainUID" weblogic.domainName="$domainUID" -./create-rcu-credentials/create-rcu-credentials.sh.mustache:150:SECRET=$(${KUBERNETES_CLI:-kubectl} get secret "${secretName}" -n "${namespace}" | grep "${secretName}" | wc | awk ' { print $1; }') -./logging-services/weblogic-logging-exporter/README.md.mustache:11:$ ${KUBERNETES_CLI:-kubectl} create -f https://raw.githubusercontent.com/oracle/weblogic-kubernetes-operator/master/kubernetes/samples/scripts/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml -./logging-services/weblogic-logging-exporter/README.md.mustache:35: $ ${KUBERNETES_CLI:-kubectl} cp weblogic-logging-exporter.jar <%namespace%>/<%domainUID%>-<%adminServerNameToLegal%>:/u01/oracle/user_projects/domains/<%domainUID%>/ -./logging-services/weblogic-logging-exporter/README.md.mustache:36: $ ${KUBERNETES_CLI:-kubectl} cp snakeyaml-1.27.jar <%namespace%>/<%domainUID%>-<%adminServerNameToLegal%>:/u01/oracle/user_projects/domains/<%domainUID%>/ -./logging-services/weblogic-logging-exporter/README.md.mustache:65: $ ${KUBERNETES_CLI:-kubectl} cp <%namespace%>/<%domainUID%>-<%adminServerNameToLegal%>:/u01/oracle/user_projects/domains/<%domainUID%>/bin/setDomainEnv.sh setDomainEnv.sh -./logging-services/weblogic-logging-exporter/README.md.mustache:75: $ ${KUBERNETES_CLI:-kubectl} cp setDomainEnv.sh <%namespace%>/<%domainUID%>-<%adminServerNameToLegal%>:/u01/oracle/user_projects/domains/<%domainUID%>/bin/setDomainEnv.sh -./logging-services/weblogic-logging-exporter/README.md.mustache:81: $ ${KUBERNETES_CLI:-kubectl} cp WebLogicLoggingExporter.yaml <%namespace%>/<%domainUID%>-<%adminServerNameToLegal%>:/u01/oracle/user_projects/domains/<%domainUID%>/config/ -./logging-services/weblogic-logging-exporter/README.md.mustache:104: $ ${KUBERNETES_CLI:-kubectl} get pods -n <%namespace%> -./logging-services/weblogic-logging-exporter/README.md.mustache:123: $ ${KUBERNETES_CLI:-kubectl} get pods -n <%namespace%> -./logging-services/logstash/README.md.mustache:35: $ ${KUBERNETES_CLI:-kubectl} get pvc -n <%namespace%> -./logging-services/logstash/README.md.mustache:41: $ ${KUBERNETES_CLI:-kubectl} cp logstash.conf 
<%namespace%>/<%domainUID%>-<%adminServerNameToLegal%>:/u01/oracle/user_projects/domains --namespace <%namespace%> -./logging-services/logstash/README.md.mustache:55: $ ${KUBERNETES_CLI:-kubectl} create -f logstash.yaml -./monitoring-service/README.md.mustache:6:- Have Docker and a Kubernetes cluster running and have `${KUBERNETES_CLI:-kubectl}` installed and configured. -./monitoring-service/README.md.mustache:34: $ ${KUBERNETES_CLI:-kubectl} create -f manifests/setup -./monitoring-service/README.md.mustache:35: $ until ${KUBERNETES_CLI:-kubectl} get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done -./monitoring-service/README.md.mustache:36: $ ${KUBERNETES_CLI:-kubectl} create -f manifests/ -./monitoring-service/README.md.mustache:42: $ ${KUBERNETES_CLI:-kubectl} label nodes --all kubernetes.io/os=linux -./monitoring-service/README.md.mustache:48: $ ${KUBERNETES_CLI:-kubectl} patch svc grafana -n monitoring --type=json -p '[{"op": "replace", "path": "/spec/type", "value": "NodePort" },{"op": "replace", "path": "/spec/ports/0/nodePort", "value": 32100 }]' -./monitoring-service/README.md.mustache:50: $ ${KUBERNETES_CLI:-kubectl} patch svc prometheus-k8s -n monitoring --type=json -p '[{"op": "replace", "path": "/spec/type", "value": "NodePort" },{"op": "replace", "path": "/spec/ports/0/nodePort", "value": 32101 }]' -./monitoring-service/README.md.mustache:52: $ ${KUBERNETES_CLI:-kubectl} patch svc alertmanager-main -n monitoring --type=json -p '[{"op": "replace", "path": "/spec/type", "value": "NodePort" },{"op": "replace", "path": "/spec/ports/0/nodePort", "value": 32102 }]' -./monitoring-service/README.md.mustache:100:$ ${KUBERNETES_CLI:-kubectl} cp wls-exporter-deploy /:/u01/oracle -./monitoring-service/README.md.mustache:101:$ ${KUBERNETES_CLI:-kubectl} cp deploy-weblogic-monitoring-exporter.py /:/u01/oracle/wls-exporter-deploy -./monitoring-service/README.md.mustache:102:$ ${KUBERNETES_CLI:-kubectl} exec -it -n -- /u01/oracle/oracle_common/common/bin/wlst.sh /u01/oracle/wls-exporter-deploy/deploy-weblogic-monitoring-exporter.py \ -./monitoring-service/README.md.mustache:116:$ ${KUBERNETES_CLI:-kubectl} cp wls-exporter-deploy <%namespace%>/<%domainUID%>-<%adminServerNameToLegal%>:/u01/oracle -./monitoring-service/README.md.mustache:117:$ ${KUBERNETES_CLI:-kubectl} cp deploy-weblogic-monitoring-exporter.py <%namespace%>/<%domainUID%>-<%adminServerNameToLegal%>:/u01/oracle/wls-exporter-deploy -./monitoring-service/README.md.mustache:118:$ ${KUBERNETES_CLI:-kubectl} exec -it -n <%namespace%> <%domainUID%>-<%adminServerNameToLegal%> -- /u01/oracle/oracle_common/common/bin/wlst.sh /u01/oracle/wls-exporter-deploy/deploy-weblogic-monitoring-exporter.py \ -./monitoring-service/README.md.mustache:151:$ ${KUBERNETES_CLI:-kubectl} apply -f . 
-./monitoring-service/scripts/undeploy-weblogic-monitoring-exporter.sh.mustache:21:username=`${KUBERNETES_CLI:-kubectl} get secrets ${weblogicCredentialsSecretName} -n ${domainNamespace} -o=jsonpath='{.data.username}'|base64 --decode` -./monitoring-service/scripts/undeploy-weblogic-monitoring-exporter.sh.mustache:22:password=`${KUBERNETES_CLI:-kubectl} get secrets ${weblogicCredentialsSecretName} -n ${domainNamespace} -o=jsonpath='{.data.password}'|base64 --decode` -./monitoring-service/scripts/undeploy-weblogic-monitoring-exporter.sh.mustache:36:${KUBERNETES_CLI:-kubectl} cp $scriptDir/undeploy-weblogic-monitoring-exporter.py ${domainNamespace}/${adminServerPodName}:/u01/oracle/undeploy-weblogic-monitoring-exporter.py -./monitoring-service/scripts/undeploy-weblogic-monitoring-exporter.sh.mustache:37:EXEC_UNDEPLOY="${KUBERNETES_CLI:-kubectl} exec -it -n ${domainNamespace} ${adminServerPodName} -- /u01/oracle/oracle_common/common/bin/wlst.sh /u01/oracle/undeploy-weblogic-monitoring-exporter.py ${InputParameterList}" -./monitoring-service/scripts/deploy-weblogic-server-grafana-dashboard.sh.mustache:23:grafanaEndpointIP=$(${KUBERNETES_CLI:-kubectl} get endpoints ${monitoringNamespace}-grafana -n ${monitoringNamespace} -o=jsonpath="{.subsets[].addresses[].ip}") -./monitoring-service/scripts/deploy-weblogic-server-grafana-dashboard.sh.mustache:24:grafanaEndpointPort=$(${KUBERNETES_CLI:-kubectl} get endpoints ${monitoringNamespace}-grafana -n ${monitoringNamespace} -o=jsonpath="{.subsets[].ports[].port}") -./monitoring-service/scripts/deploy-weblogic-server-grafana-dashboard.sh.mustache:26:${KUBERNETES_CLI:-kubectl} cp $scriptDir/../config/weblogic-server-dashboard.json ${domainNamespace}/${adminServerPodName}:/tmp/weblogic-server-dashboard.json -./monitoring-service/scripts/deploy-weblogic-server-grafana-dashboard.sh.mustache:27:EXEC_DEPLOY="${KUBERNETES_CLI:-kubectl} exec -it -n ${domainNamespace} ${adminServerPodName} -- curl --noproxy \"*\" -X POST -H \"Content-Type: application/json\" -d @/tmp/weblogic-server-dashboard.json http://admin:admin@${grafanaEndpoint}/api/dashboards/db" -./monitoring-service/scripts/deploy-weblogic-monitoring-exporter.sh.mustache:22:username=`${KUBERNETES_CLI:-kubectl} get secrets ${weblogicCredentialsSecretName} -n ${domainNamespace} -o=jsonpath='{.data.username}'|base64 --decode` -./monitoring-service/scripts/deploy-weblogic-monitoring-exporter.sh.mustache:23:password=`${KUBERNETES_CLI:-kubectl} get secrets ${weblogicCredentialsSecretName} -n ${domainNamespace} -o=jsonpath='{.data.password}'|base64 --decode` -./monitoring-service/scripts/deploy-weblogic-monitoring-exporter.sh.mustache:36:${KUBERNETES_CLI:-kubectl} cp $scriptDir/wls-exporter-deploy ${domainNamespace}/${adminServerPodName}:/u01/oracle -./monitoring-service/scripts/deploy-weblogic-monitoring-exporter.sh.mustache:37:${KUBERNETES_CLI:-kubectl} cp $scriptDir/deploy-weblogic-monitoring-exporter.py ${domainNamespace}/${adminServerPodName}:/u01/oracle/wls-exporter-deploy -./monitoring-service/scripts/deploy-weblogic-monitoring-exporter.sh.mustache:38:EXEC_DEPLOY="${KUBERNETES_CLI:-kubectl} exec -it -n ${domainNamespace} ${adminServerPodName} -- /u01/oracle/oracle_common/common/bin/wlst.sh /u01/oracle/wls-exporter-deploy/deploy-weblogic-monitoring-exporter.py ${InputParameterList}" -./monitoring-service/delete-monitoring.sh.mustache:106:${KUBERNETES_CLI:-kubectl} delete --ignore-not-found=true -f ${serviceMonitor} -./monitoring-service/setup-monitoring.sh.mustache:133: if test 
"$(${KUBERNETES_CLI:-kubectl} get namespace ${monitoringNamespace} --ignore-not-found | wc -l)" = 0; then -./monitoring-service/setup-monitoring.sh.mustache:135: ${KUBERNETES_CLI:-kubectl} create namespace ${monitoringNamespace} -./monitoring-service/setup-monitoring.sh.mustache:140: ${KUBERNETES_CLI:-kubectl} label nodes --all kubernetes.io/os=linux --overwrite=true -./monitoring-service/setup-monitoring.sh.mustache:149:export username=`${KUBERNETES_CLI:-kubectl} get secrets ${weblogicCredentialsSecretName} -n ${domainNamespace} -o=jsonpath='{.data.username}'|base64 --decode` -./monitoring-service/setup-monitoring.sh.mustache:150:export password=`${KUBERNETES_CLI:-kubectl} get secrets ${weblogicCredentialsSecretName} -n ${domainNamespace} -o=jsonpath='{.data.password}'|base64 --decode` -./monitoring-service/setup-monitoring.sh.mustache:170:${KUBERNETES_CLI:-kubectl} apply -f ${serviceMonitor} -./create-oracle-db-service/README.md.mustache:15:`${KUBERNETES_CLI:-kubectl} -n MYNAMESPACE create secret generic MYSECRETNAME --from-literal='password=MYSYSPASSWORD'` -./create-oracle-db-service/README.md.mustache:41:$ ${KUBERNETES_CLI:-kubectl} -n MYNAMESPACE create secret generic MYSECRETNAME --from-literal='password=MYSYSPASSWORD' -./create-oracle-db-service/start-db-service.sh.mustache:55:domns=`${KUBERNETES_CLI:-kubectl} get ns ${namespace} | grep ${namespace} | awk '{print $1}'` -./create-oracle-db-service/start-db-service.sh.mustache:58: ${KUBERNETES_CLI:-kubectl} create namespace ${namespace} -./create-oracle-db-service/start-db-service.sh.mustache:92:${KUBERNETES_CLI:-kubectl} delete service oracle-db -n ${namespace} --ignore-not-found -./create-oracle-db-service/start-db-service.sh.mustache:95:${KUBERNETES_CLI:-kubectl} apply -f ${dbYaml} -./create-oracle-db-service/start-db-service.sh.mustache:107:${KUBERNETES_CLI:-kubectl} get po -n ${namespace} -./create-oracle-db-service/start-db-service.sh.mustache:108:${KUBERNETES_CLI:-kubectl} get service -n ${namespace} -./create-oracle-db-service/start-db-service.sh.mustache:110:${KUBERNETES_CLI:-kubectl} cp ${scriptDir}/common/checkDbState.sh -n ${namespace} ${dbpod}:/home/oracle/ -./create-oracle-db-service/start-db-service.sh.mustache:112:${KUBERNETES_CLI:-kubectl} exec -it ${dbpod} -n ${namespace} -- /bin/bash /home/oracle/checkDbState.sh -./create-oracle-db-service/stop-db-service.sh.mustache:32:dbpod=`${KUBERNETES_CLI:-kubectl} get po -n ${namespace} | grep oracle-db | cut -f1 -d " " ` -./create-oracle-db-service/stop-db-service.sh.mustache:33:${KUBERNETES_CLI:-kubectl} delete -f ${scriptDir}/common/oracle.db.${namespace}.yaml --ignore-not-found -./create-oracle-db-service/stop-db-service.sh.mustache:40: ${KUBERNETES_CLI:-kubectl} delete svc/oracle-db -n ${namespace} --ignore-not-found -./create-oracle-db-service/create-image-pull-secret.sh.mustache:57:${KUBERNETES_CLI:-kubectl} delete secret/${secret} --ignore-not-found -./create-oracle-db-service/create-image-pull-secret.sh.mustache:59:${KUBERNETES_CLI:-kubectl} create secret docker-registry ${secret} --docker-server=container-registry.oracle.com --docker-username=${username} --docker-password=${password} --docker-email=${email} -./elasticsearch-and-kibana/README.md.mustache:25:$ ${KUBERNETES_CLI:-kubectl} apply -f kubernetes/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml -./elasticsearch-and-kibana/README.md.mustache:30:$ ${KUBERNETES_CLI:-kubectl} delete -f kubernetes/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml 
-./elasticsearch-and-kibana/elasticsearch_and_kibana.yaml.mustache:22:# ${KUBERNETES_CLI:-kubectl} apply -f kubernetes/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml -./elasticsearch-and-kibana/elasticsearch_and_kibana.yaml.mustache:25:# ${KUBERNETES_CLI:-kubectl} delete -f kubernetes/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml -./create-weblogic-domain-pv-pvc/README.md.mustache:23:The `create-pv-pvc.sh` script will create a subdirectory `pv-pvcs` under the given `/path/to/output-directory` directory. By default, the script generates two YAML files, namely `weblogic-sample-pv.yaml` and `weblogic-sample-pvc.yaml`, in the `/path/to/output-directory/pv-pvcs`. These two YAML files can be used to create the Kubernetes resources using the `${KUBERNETES_CLI:-kubectl} create -f` command. -./create-weblogic-domain-pv-pvc/README.md.mustache:26:$ ${KUBERNETES_CLI:-kubectl} create -f <%domainUID%>-domain-pv.yaml -./create-weblogic-domain-pv-pvc/README.md.mustache:27:$ ${KUBERNETES_CLI:-kubectl} create -f <%domainUID%>-domain-pvc.yaml -./create-weblogic-domain-pv-pvc/README.md.mustache:174:$ ${KUBERNETES_CLI:-kubectl} describe pv <%domainUID%>-domain-pv -./create-weblogic-domain-pv-pvc/README.md.mustache:195:$ ${KUBERNETES_CLI:-kubectl} describe pvc <%domainUID%>-domain-pvc -./create-weblogic-domain-pv-pvc/create-pv-pvc.sh.mustache:212: ${KUBERNETES_CLI:-kubectl} create -f ${pvOutput} -./create-weblogic-domain-pv-pvc/create-pv-pvc.sh.mustache:227: ${KUBERNETES_CLI:-kubectl} create -f ${pvcOutput} -./create-weblogic-domain-credentials/README.md.mustache:27:You can check the secret with the `${KUBERNETES_CLI:-kubectl} get secret` command. An example is shown below, -./create-weblogic-domain-credentials/README.md.mustache:31:$ ${KUBERNETES_CLI:-kubectl} -n <%namespace%> get secret <%domainUID%>-weblogic-credentials -o yaml -./create-weblogic-domain-credentials/create-weblogic-credentials.sh.mustache:34:# Try to execute ${KUBERNETES_CLI:-kubectl} to see whether ${KUBERNETES_CLI:-kubectl} is available -./create-weblogic-domain-credentials/create-weblogic-credentials.sh.mustache:36: if ! 
[ -x "$(command -v ${KUBERNETES_CLI:-kubectl})" ]; then -./create-weblogic-domain-credentials/create-weblogic-credentials.sh.mustache:37: fail "${KUBERNETES_CLI:-kubectl} is not installed" -./create-weblogic-domain-credentials/create-weblogic-credentials.sh.mustache:109:result=$(${KUBERNETES_CLI:-kubectl} get secret "${secretName}" -n "${namespace}" --ignore-not-found=true | grep "${secretName}" | wc | awk ' { print $1; }') -./create-weblogic-domain-credentials/create-weblogic-credentials.sh.mustache:115:${KUBERNETES_CLI:-kubectl} -n "$namespace" create secret generic "$secretName" \ -./create-weblogic-domain-credentials/create-weblogic-credentials.sh.mustache:121: ${KUBERNETES_CLI:-kubectl} label secret "${secretName}" -n "$namespace" weblogic.domainUID="$domainUID" weblogic.domainName="$domainUID" -./create-weblogic-domain-credentials/create-weblogic-credentials.sh.mustache:125:SECRET=$(${KUBERNETES_CLI:-kubectl} get secret "${secretName}" -n "${namespace}" | grep "${secretName}" | wc | awk ' { print $1; }') diff --git a/OracleAccessManagement/kubernetes/logging-services/logstash/logstash.conf b/OracleAccessManagement/kubernetes/logging-services/logstash/logstash.conf index 4a54c04ff..97ac13a7f 100755 --- a/OracleAccessManagement/kubernetes/logging-services/logstash/logstash.conf +++ b/OracleAccessManagement/kubernetes/logging-services/logstash/logstash.conf @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. +# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # diff --git a/OracleAccessManagement/kubernetes/logging-services/logstash/logstash.yaml b/OracleAccessManagement/kubernetes/logging-services/logstash/logstash.yaml index 0cc77298a..a33370582 100755 --- a/OracleAccessManagement/kubernetes/logging-services/logstash/logstash.yaml +++ b/OracleAccessManagement/kubernetes/logging-services/logstash/logstash.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. +# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # diff --git a/OracleAccessManagement/kubernetes/monitoring-service/README.md b/OracleAccessManagement/kubernetes/monitoring-service/README.md index cf47e29da..11faf41d7 100755 --- a/OracleAccessManagement/kubernetes/monitoring-service/README.md +++ b/OracleAccessManagement/kubernetes/monitoring-service/README.md @@ -5,6 +5,8 @@ Using the `WebLogic Monitoring Exporter` you can scrape runtime information from - Have Docker and a Kubernetes cluster running and have `${KUBERNETES_CLI:-kubectl}` installed and configured. - Have Helm installed. +- Before installing kube-prometheus-stack (Prometheus, Grafana and Alertmanager), refer [link](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#uninstall-helm-chart) and cleanup if any older CRDs for monitoring services exists in your Kubernetes cluster. + **Note**: Make sure no existing monitoring services is running in the Kubernetes cluster before cleanup. If you do not want to cleanup monitoring services CRDs, refer [link](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#upgrading-chart) for upgrading the CRDs. - An OracleAccessManagement domain deployed by `weblogic-operator` is running in the Kubernetes cluster. 
## Set up monitoring for OracleAccessManagement domain

@@ -182,7 +184,7 @@ The following parameters can be provided in the inputs file.
 | `domainUID` | domainUID of the OracleAccessManagement domain. | `accessdomain` |
 | `domainNamespace` | Kubernetes namespace of the OracleAccessManagement domain. | `oamns` |
 | `setupKubePrometheusStack` | Boolean value indicating whether kube-prometheus-stack (Prometheus, Grafana and Alertmanager) is to be installed | `true` |
-| `additionalParamForKubePrometheusStack` | The script install's kube-prometheus-stack with `service.type` as NodePort and values for `service.nodePort` as per the parameters defined in `monitoring-inputs.yaml`. Use `additionalParamForKubePrometheusStack` parameter to further configure with additional parameters as per [values.yaml](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml). Sample value to disable NodeExporter, Prometheus-Operator TLS support and Admission webhook support for PrometheusRules resources is `--set nodeExporter.enabled=false --set prometheusOperator.tls.enabled=false --set prometheusOperator.admissionWebhooks.enabled=false`| |
+| `additionalParamForKubePrometheusStack` | The script installs kube-prometheus-stack with `service.type` as NodePort and values for `service.nodePort` as per the parameters defined in `monitoring-inputs.yaml`. Use the `additionalParamForKubePrometheusStack` parameter to further configure with additional parameters as per [values.yaml](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml). Sample value to disable NodeExporter, Prometheus-Operator TLS support, Admission webhook support for PrometheusRules resources and a custom Grafana image repository is `--set nodeExporter.enabled=false --set prometheusOperator.tls.enabled=false --set prometheusOperator.admissionWebhooks.enabled=false --set grafana.image.repository=xxxxxxxxx/grafana/grafana`| |
 | `monitoringNamespace` | Kubernetes namespace for monitoring setup. | `monitoring` |
 | `adminServerName` | Name of the Administration Server. | `AdminServer` |
 | `adminServerPort` | Port number for the Administration Server inside the Kubernetes cluster. | `7001` |
@@ -211,7 +213,7 @@ $ ./setup-monitoring.sh \
 ```
 The script will perform the following steps:
-- Helm install `prometheus-community/kube-prometheus-stack` of version "16.5.0" if `setupKubePrometheusStack` is set to `true`.
+- Helm install `prometheus-community/kube-prometheus-stack` if `setupKubePrometheusStack` is set to `true`.
 - Deploys WebLogic Monitoring Exporter to the Administration Server.
 - Deploys WebLogic Monitoring Exporter to `oamCluster` if `wlsMonitoringExporterTooamCluster` is set to `true`.
 - Deploys WebLogic Monitoring Exporter to `policyCluster` if `wlsMonitoringExporterTopolicyCluster` is set to `true`.
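
For orientation, when `setupKubePrometheusStack` is `true` the install that `setup-monitoring.sh` performs (now without the previously pinned chart version, as the change further down in this patch shows) is roughly equivalent to the following standalone Helm command. The NodePort values here are illustrative; the script takes them from `monitoring-inputs.yaml`:

```
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install monitoring prometheus-community/kube-prometheus-stack \
    --namespace monitoring \
    --set prometheus.service.type=NodePort --set prometheus.service.nodePort=32101 \
    --set alertmanager.service.type=NodePort --set alertmanager.service.nodePort=32102 \
    --set grafana.adminPassword=admin --set grafana.service.type=NodePort \
    --set grafana.service.nodePort=32100 \
    --wait
```

Any extra flags supplied through `additionalParamForKubePrometheusStack` are passed through to this command unchanged.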
@@ -235,7 +237,7 @@ Sample output: ```bash $ helm ls -n monitoring NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION -monitoring monitoring 1 2021-06-18 12:58:35.177221969 +0000 UTC deployed kube-prometheus-stack-16.5.0 0.48.0 +monitoring monitoring 1 2023-03-15 10:31:42.44437202 +0000 UTC deployed kube-prometheus-stack-45.7.1 v0.63.0 $ ``` diff --git a/OracleAccessManagement/kubernetes/monitoring-service/config/weblogic-server-dashboard-import.json b/OracleAccessManagement/kubernetes/monitoring-service/config/weblogic-server-dashboard-import.json index c2fa9e2eb..9ee45d900 100755 --- a/OracleAccessManagement/kubernetes/monitoring-service/config/weblogic-server-dashboard-import.json +++ b/OracleAccessManagement/kubernetes/monitoring-service/config/weblogic-server-dashboard-import.json @@ -490,7 +490,7 @@ "lineColor": "rgb(31, 120, 193)", "show": true }, - "tableColumn": "instance", + "tableColumn": "", "targets": [ { "expr": "100 - wls_jvm_heap_free_percent{weblogic_domainUID=\"$domainName\", weblogic_clusterName=\"$clusterName\", weblogic_serverName=\"$serverName\"}", @@ -582,7 +582,7 @@ "lineColor": "rgb(31, 120, 193)", "show": false }, - "tableColumn": "instance", + "tableColumn": "", "targets": [ { "expr": "wls_jvm_uptime{weblogic_domainUID=\"$domainName\", weblogic_clusterName=\"$clusterName\", weblogic_serverName=\"$serverName\"}", @@ -674,7 +674,7 @@ "lineColor": "rgb(31, 120, 193)", "show": false }, - "tableColumn": "instance", + "tableColumn": "", "targets": [ { "expr": "wls_server_open_sockets_current_count{weblogic_domainUID=\"$domainName\", weblogic_clusterName=\"$clusterName\", weblogic_serverName=\"$serverName\"}", diff --git a/OracleAccessManagement/kubernetes/monitoring-service/config/weblogic-server-dashboard.json b/OracleAccessManagement/kubernetes/monitoring-service/config/weblogic-server-dashboard.json index cf6d5f776..b7fda8e9a 100755 --- a/OracleAccessManagement/kubernetes/monitoring-service/config/weblogic-server-dashboard.json +++ b/OracleAccessManagement/kubernetes/monitoring-service/config/weblogic-server-dashboard.json @@ -491,7 +491,7 @@ "lineColor": "rgb(31, 120, 193)", "show": true }, - "tableColumn": "instance", + "tableColumn": "", "targets": [ { "expr": "100 - wls_jvm_heap_free_percent{weblogic_domainUID=\"$domainName\", weblogic_clusterName=\"$clusterName\", weblogic_serverName=\"$serverName\"}", @@ -583,7 +583,7 @@ "lineColor": "rgb(31, 120, 193)", "show": false }, - "tableColumn": "instance", + "tableColumn": "", "targets": [ { "expr": "wls_jvm_uptime{weblogic_domainUID=\"$domainName\", weblogic_clusterName=\"$clusterName\", weblogic_serverName=\"$serverName\"}", @@ -675,7 +675,7 @@ "lineColor": "rgb(31, 120, 193)", "show": false }, - "tableColumn": "instance", + "tableColumn": "", "targets": [ { "expr": "wls_server_open_sockets_current_count{weblogic_domainUID=\"$domainName\", weblogic_clusterName=\"$clusterName\", weblogic_serverName=\"$serverName\"}", diff --git a/OracleAccessManagement/kubernetes/monitoring-service/manifests/prometheus-roleBinding-domain-namespace.yaml b/OracleAccessManagement/kubernetes/monitoring-service/manifests/prometheus-roleBinding-domain-namespace.yaml index dcb492991..e37b9830f 100755 --- a/OracleAccessManagement/kubernetes/monitoring-service/manifests/prometheus-roleBinding-domain-namespace.yaml +++ b/OracleAccessManagement/kubernetes/monitoring-service/manifests/prometheus-roleBinding-domain-namespace.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. 
+# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # apiVersion: rbac.authorization.k8s.io/v1 diff --git a/OracleAccessManagement/kubernetes/monitoring-service/manifests/prometheus-roleSpecific-domain-namespace.yaml b/OracleAccessManagement/kubernetes/monitoring-service/manifests/prometheus-roleSpecific-domain-namespace.yaml index 09a8e0b96..a881c8647 100755 --- a/OracleAccessManagement/kubernetes/monitoring-service/manifests/prometheus-roleSpecific-domain-namespace.yaml +++ b/OracleAccessManagement/kubernetes/monitoring-service/manifests/prometheus-roleSpecific-domain-namespace.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. +# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # apiVersion: rbac.authorization.k8s.io/v1 diff --git a/OracleAccessManagement/kubernetes/monitoring-service/manifests/wls-exporter-ServiceMonitor.yaml b/OracleAccessManagement/kubernetes/monitoring-service/manifests/wls-exporter-ServiceMonitor.yaml index c2b67c303..563ba6955 100755 --- a/OracleAccessManagement/kubernetes/monitoring-service/manifests/wls-exporter-ServiceMonitor.yaml +++ b/OracleAccessManagement/kubernetes/monitoring-service/manifests/wls-exporter-ServiceMonitor.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. +# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # apiVersion: v1 diff --git a/OracleAccessManagement/kubernetes/monitoring-service/manifests/wls-exporter-ServiceMonitor.yaml.template b/OracleAccessManagement/kubernetes/monitoring-service/manifests/wls-exporter-ServiceMonitor.yaml.template index ffdc12798..4cc298778 100755 --- a/OracleAccessManagement/kubernetes/monitoring-service/manifests/wls-exporter-ServiceMonitor.yaml.template +++ b/OracleAccessManagement/kubernetes/monitoring-service/manifests/wls-exporter-ServiceMonitor.yaml.template @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. +# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # apiVersion: v1 diff --git a/OracleAccessManagement/kubernetes/monitoring-service/monitoring-inputs.yaml b/OracleAccessManagement/kubernetes/monitoring-service/monitoring-inputs.yaml index ee8307cf3..b33be9224 100755 --- a/OracleAccessManagement/kubernetes/monitoring-service/monitoring-inputs.yaml +++ b/OracleAccessManagement/kubernetes/monitoring-service/monitoring-inputs.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. +# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # # The version of this inputs file. Do not modify. 
diff --git a/OracleAccessManagement/kubernetes/monitoring-service/scripts/deploy-weblogic-monitoring-exporter.py b/OracleAccessManagement/kubernetes/monitoring-service/scripts/deploy-weblogic-monitoring-exporter.py index 6dd335d52..fccaf5b27 100755 --- a/OracleAccessManagement/kubernetes/monitoring-service/scripts/deploy-weblogic-monitoring-exporter.py +++ b/OracleAccessManagement/kubernetes/monitoring-service/scripts/deploy-weblogic-monitoring-exporter.py @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. +# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # import sys diff --git a/OracleAccessManagement/kubernetes/monitoring-service/scripts/undeploy-weblogic-monitoring-exporter.py b/OracleAccessManagement/kubernetes/monitoring-service/scripts/undeploy-weblogic-monitoring-exporter.py index e0c0581a1..2a48bb569 100755 --- a/OracleAccessManagement/kubernetes/monitoring-service/scripts/undeploy-weblogic-monitoring-exporter.py +++ b/OracleAccessManagement/kubernetes/monitoring-service/scripts/undeploy-weblogic-monitoring-exporter.py @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. +# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # import sys diff --git a/OracleAccessManagement/kubernetes/monitoring-service/scripts/utils.sh b/OracleAccessManagement/kubernetes/monitoring-service/scripts/utils.sh index ee044a920..03b0022ff 100755 --- a/OracleAccessManagement/kubernetes/monitoring-service/scripts/utils.sh +++ b/OracleAccessManagement/kubernetes/monitoring-service/scripts/utils.sh @@ -1,5 +1,5 @@ #!/bin/bash -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. +# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # diff --git a/OracleAccessManagement/kubernetes/monitoring-service/setup-monitoring.sh b/OracleAccessManagement/kubernetes/monitoring-service/setup-monitoring.sh index 1076beb3a..846950d64 100755 --- a/OracleAccessManagement/kubernetes/monitoring-service/setup-monitoring.sh +++ b/OracleAccessManagement/kubernetes/monitoring-service/setup-monitoring.sh @@ -82,14 +82,12 @@ function installKubePrometheusStack { --set prometheus.service.type=NodePort --set prometheus.service.nodePort=${prometheusNodePort} \ --set alertmanager.service.type=NodePort --set alertmanager.service.nodePort=${alertmanagerNodePort} \ --set grafana.adminPassword=admin --set grafana.service.type=NodePort --set grafana.service.nodePort=${grafanaNodePort} \ - --version "16.5.0" --values ${scriptDir}/values.yaml \ - --atomic --wait + --wait else helm install ${monitoringNamespace} prometheus-community/kube-prometheus-stack \ --namespace ${monitoringNamespace} ${additionalParamForKubePrometheusStack} \ --set grafana.adminPassword=admin \ - --version "16.5.0" --values ${scriptDir}/values.yaml \ - --atomic --wait + --wait fi exitIfError $? "ERROR: prometheus-community/kube-prometheus-stack install failed." } diff --git a/OracleAccessManagement/kubernetes/monitoring-service/values.yaml b/OracleAccessManagement/kubernetes/monitoring-service/values.yaml deleted file mode 100755 index 18757f394..000000000 --- a/OracleAccessManagement/kubernetes/monitoring-service/values.yaml +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) 2022, Oracle and/or its affiliates. 
-# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. -# -prometheusOperator: - admissionWebhooks: - patch: - enabled: true - image: - repository: k8s.gcr.io/ingress-nginx/kube-webhook-certgen - tag: v1.0 - sha: "f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068" - pullPolicy: IfNotPresent - diff --git a/OracleIdentityGovernance/kubernetes/charts/ingress-per-domain/templates/nginx-ingress-nonssl.yaml b/OracleIdentityGovernance/kubernetes/charts/ingress-per-domain/templates/nginx-ingress-nonssl.yaml index 9afaad16f..41f19dd30 100755 --- a/OracleIdentityGovernance/kubernetes/charts/ingress-per-domain/templates/nginx-ingress-nonssl.yaml +++ b/OracleIdentityGovernance/kubernetes/charts/ingress-per-domain/templates/nginx-ingress-nonssl.yaml @@ -86,6 +86,13 @@ spec: name: '{{ .Values.wlsDomain.domainUID }}-cluster-{{ .Values.wlsDomain.oimClusterName | lower | replace "_" "-" }}' port: number: {{ .Values.wlsDomain.oimManagedServerPort }} + - path: /dms + pathType: ImplementationSpecific + backend: + service: + name: '{{ .Values.wlsDomain.domainUID }}-{{ .Values.wlsDomain.adminServerName | lower | replace "_" "-" }}' + port: + number: {{ .Values.wlsDomain.adminServerPort }} - path: /oim pathType: ImplementationSpecific backend: diff --git a/OracleIdentityGovernance/kubernetes/charts/ingress-per-domain/templates/nginx-ingress-ssl.yaml b/OracleIdentityGovernance/kubernetes/charts/ingress-per-domain/templates/nginx-ingress-ssl.yaml index 1270f03cd..95213c7fc 100755 --- a/OracleIdentityGovernance/kubernetes/charts/ingress-per-domain/templates/nginx-ingress-ssl.yaml +++ b/OracleIdentityGovernance/kubernetes/charts/ingress-per-domain/templates/nginx-ingress-ssl.yaml @@ -92,6 +92,13 @@ spec: name: '{{ .Values.wlsDomain.domainUID }}-cluster-{{ .Values.wlsDomain.oimClusterName | lower | replace "_" "-" }}' port: number: {{ .Values.wlsDomain.oimManagedServerPort }} + - path: /dms + pathType: ImplementationSpecific + backend: + service: + name: '{{ .Values.wlsDomain.domainUID }}-{{ .Values.wlsDomain.adminServerName | lower | replace "_" "-" }}' + port: + number: {{ .Values.wlsDomain.adminServerPort }} - path: /oim pathType: ImplementationSpecific backend: diff --git a/OracleIdentityGovernance/kubernetes/charts/ingress-per-domain/values.yaml b/OracleIdentityGovernance/kubernetes/charts/ingress-per-domain/values.yaml index eae2c1c44..010c38537 100755 --- a/OracleIdentityGovernance/kubernetes/charts/ingress-per-domain/values.yaml +++ b/OracleIdentityGovernance/kubernetes/charts/ingress-per-domain/values.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2020, 2023, Oracle and/or its affiliates. +# Copyright (c) 2020, 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # # diff --git a/OracleIdentityGovernance/kubernetes/charts/traefik/values.yaml b/OracleIdentityGovernance/kubernetes/charts/traefik/values.yaml index f680d34e3..95a733fe6 100755 --- a/OracleIdentityGovernance/kubernetes/charts/traefik/values.yaml +++ b/OracleIdentityGovernance/kubernetes/charts/traefik/values.yaml @@ -2,8 +2,7 @@ # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. 
# image: - name: traefik - tag: 2.6.0 + name: traefik pullPolicy: IfNotPresent ingressRoute: dashboard: diff --git a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/Chart.yaml b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/Chart.yaml index 6d2acee4e..eb5eb1201 100755 --- a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/Chart.yaml +++ b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/Chart.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. apiVersion: v1 @@ -6,5 +6,5 @@ name: weblogic-operator description: Helm chart for configuring the WebLogic operator. type: application -version: 4.0.4 -appVersion: 4.0.4 +version: 4.1.0-RELEASE-MARKER +appVersion: 4.1.0-RELEASE-MARKER diff --git a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-clusterrole-general.tpl b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-clusterrole-general.tpl index a14fa9734..239a2ad8d 100755 --- a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-clusterrole-general.tpl +++ b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-clusterrole-general.tpl @@ -1,4 +1,4 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. {{- define "operator.operatorClusterRoleGeneral" }} @@ -27,6 +27,9 @@ rules: resources: ["customresourcedefinitions"] verbs: ["get", "list", "watch", "create", "update", "patch"] {{- end }} +- apiGroups: [""] + resources: ["persistentvolumes"] + verbs: ["get", "list", "create"] - apiGroups: ["weblogic.oracle"] resources: ["domains", "clusters", "domains/status", "clusters/status"] verbs: ["get", "create", "list", "watch", "update", "patch"] diff --git a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-clusterrole-namespace.tpl b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-clusterrole-namespace.tpl index b6a554280..b91e082a1 100755 --- a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-clusterrole-namespace.tpl +++ b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-clusterrole-namespace.tpl @@ -1,4 +1,4 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. 
{{- define "operator.operatorClusterRoleNamespace" }} @@ -25,6 +25,9 @@ rules: - apiGroups: [""] resources: ["secrets"] verbs: ["get", "list", "watch"] +- apiGroups: [""] + resources: ["persistentvolumeclaims"] + verbs: ["get", "list", "create"] - apiGroups: [""] resources: ["pods/log"] verbs: ["get", "list"] diff --git a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-cm.tpl b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-cm.tpl index d1a06a437..640a5ee03 100755 --- a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-cm.tpl +++ b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-cm.tpl @@ -1,10 +1,11 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. {{- define "operator.operatorConfigMap" }} --- apiVersion: "v1" data: + helmChartVersion: {{ .Chart.Version }} {{- if .externalRestEnabled }} {{- if (hasKey . "externalRestIdentitySecret") }} externalRestIdentitySecret: {{ .externalRestIdentitySecret | quote }} diff --git a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-dep.tpl b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-dep.tpl index 6c97561d5..b56f661e7 100755 --- a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-dep.tpl +++ b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-dep.tpl @@ -1,4 +1,4 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. {{- define "operator.operatorDeployment" }} @@ -34,9 +34,10 @@ spec: {{- end }} spec: serviceAccountName: {{ .serviceAccount | quote }} - {{- if .runAsUser }} + {{- if (ne ( .kubernetesPlatform | default "Generic" ) "OpenShift") }} securityContext: - runAsUser: {{ .runAsUser }} + seccompProfile: + type: RuntimeDefault {{- end }} {{- with .nodeSelector }} nodeSelector: @@ -74,16 +75,22 @@ spec: fieldPath: "metadata.uid" - name: "OPERATOR_VERBOSE" value: "false" - - name: "JAVA_LOGGING_LEVEL" - value: {{ .javaLoggingLevel | quote }} {{- if .kubernetesPlatform }} - name: "KUBERNETES_PLATFORM" value: {{ .kubernetesPlatform | quote }} {{- end }} + {{- if and (hasKey . 
"enableRest") .enableRest }} + - name: "ENABLE_REST_ENDPOINT" + value: "true" + {{- end }} + - name: "JAVA_LOGGING_LEVEL" + value: {{ .javaLoggingLevel | quote }} - name: "JAVA_LOGGING_MAXSIZE" - value: {{ .javaLoggingFileSizeLimit | default 20000000 | quote }} + value: {{ int64 .javaLoggingFileSizeLimit | default 20000000 | quote }} - name: "JAVA_LOGGING_COUNT" value: {{ .javaLoggingFileCount | default 10 | quote }} + - name: "JVM_OPTIONS" + value: {{ .jvmOptions | default "-XshowSettings:vm -XX:MaxRAMPercentage=70" | quote }} {{- if .remoteDebugNodePortEnabled }} - name: "REMOTE_DEBUG_PORT" value: {{ .internalDebugHttpPort | quote }} @@ -109,15 +116,15 @@ spec: {{- if .memoryLimits}} memory: {{ .memoryLimits }} {{- end }} - {{- if (eq ( .kubernetesPlatform | default "Generic" ) "OpenShift") }} securityContext: + {{- if (ne ( .kubernetesPlatform | default "Generic" ) "OpenShift") }} + runAsUser: {{ .runAsUser | default 1000 }} + {{- end }} + runAsNonRoot: true + privileged: false allowPrivilegeEscalation: false capabilities: drop: ["ALL"] - runAsNonRoot: true - seccompProfile: - type: RuntimeDefault - {{- end }} volumeMounts: - name: "weblogic-operator-cm-volume" mountPath: "/deployment/config" @@ -217,6 +224,12 @@ spec: namespace: {{ .Release.Namespace | quote }} data: serviceaccount: {{ .serviceAccount | quote }} + {{- if .featureGates }} + featureGates: {{ .featureGates | quote }} + {{- end }} + {{- if .domainNamespaceSelectionStrategy }} + domainNamespaceSelectionStrategy: {{ .domainNamespaceSelectionStrategy | quote }} + {{- end }} --- # webhook does not exist or chart version is newer, create a new webhook apiVersion: "apps/v1" @@ -259,17 +272,18 @@ spec: {{- end }} spec: serviceAccountName: {{ .serviceAccount | quote }} - {{- if .runAsUser }} + {{- if (ne ( .kubernetesPlatform | default "Generic" ) "OpenShift") }} securityContext: - runAsUser: {{ .runAsUser }} + seccompProfile: + type: RuntimeDefault {{- end }} {{- with .nodeSelector }} nodeSelector: - {{- toYaml . | nindent 8 }} + {{- toYaml . | nindent 12 }} {{- end }} {{- with .affinity }} affinity: - {{- toYaml . | nindent 8 }} + {{- toYaml . 
| nindent 12 }} {{- end }} containers: - name: "weblogic-operator-webhook" @@ -296,7 +310,7 @@ spec: - name: "JAVA_LOGGING_LEVEL" value: {{ .javaLoggingLevel | quote }} - name: "JAVA_LOGGING_MAXSIZE" - value: {{ .javaLoggingFileSizeLimit | default 20000000 | quote }} + value: {{ int64 .javaLoggingFileSizeLimit | default 20000000 | quote }} - name: "JAVA_LOGGING_COUNT" value: {{ .javaLoggingFileCount | default 10 | quote }} {{- if .remoteDebugNodePortEnabled }} @@ -320,15 +334,15 @@ spec: {{- if .memoryLimits}} memory: {{ .memoryLimits }} {{- end }} - {{- if (eq ( .kubernetesPlatform | default "Generic") "OpenShift") }} securityContext: + {{- if (ne ( .kubernetesPlatform | default "Generic" ) "OpenShift") }} + runAsUser: {{ .runAsUser | default 1000 }} + {{- end }} + runAsNonRoot: true + privileged: false allowPrivilegeEscalation: false capabilities: - drop: ["ALL"] - runAsNonRoot: true - seccompProfile: - type: RuntimeDefault - {{- end }} + drop: ["ALL"] volumeMounts: - name: "weblogic-webhook-cm-volume" mountPath: "/deployment/config" diff --git a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-external-svc.tpl b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-external-svc.tpl index 0fd2ee202..f7936f537 100755 --- a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-external-svc.tpl +++ b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-external-svc.tpl @@ -1,8 +1,8 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. {{- define "operator.operatorExternalService" }} -{{- if or .externalRestEnabled .remoteDebugNodePortEnabled }} +{{- if or (and (hasKey . "enableRest") .enableRest .externalRestEnabled) .remoteDebugNodePortEnabled }} --- apiVersion: "v1" kind: "Service" diff --git a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-internal-svc.tpl b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-internal-svc.tpl index 5e7725825..c8c91bc1e 100755 --- a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-internal-svc.tpl +++ b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator-internal-svc.tpl @@ -1,7 +1,8 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. {{- define "operator.operatorInternalService" }} +{{- if and (hasKey . "enableRest") .enableRest }} --- apiVersion: "v1" kind: "Service" @@ -21,6 +22,7 @@ spec: - port: 8083 name: "metrics" appProtocol: http +{{- end }} --- {{- if not .operatorOnly }} apiVersion: "v1" diff --git a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator.tpl b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator.tpl index b2bb5d8a3..ed98f7eb8 100755 --- a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator.tpl +++ b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/templates/_operator.tpl @@ -1,4 +1,4 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. 
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. {{- if and (not (empty .Capabilities.APIVersions)) (not (.Capabilities.APIVersions.Has "policy/v1")) }} @@ -14,8 +14,6 @@ {{- include "operator.operatorClusterRoleOperatorAdmin" . }} {{- include "operator.operatorClusterRoleDomainAdmin" . }} {{- include "operator.clusterRoleBindingGeneral" . }} -{{- include "operator.clusterRoleBindingAuthDelegator" . }} -{{- include "operator.clusterRoleBindingDiscovery" . }} {{- if not (eq .domainNamespaceSelectionStrategy "Dedicated") }} {{- include "operator.clusterRoleBindingNonResource" . }} {{- end }} diff --git a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/values.yaml b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/values.yaml index f2bfed813..b62e1691d 100755 --- a/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/values.yaml +++ b/OracleIdentityGovernance/kubernetes/charts/weblogic-operator/values.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # serviceAccount specifies the name of the ServiceAccount in the operator's namespace that the @@ -54,7 +54,7 @@ domainNamespaceSelectionStrategy: LabelSelector enableClusterRoleBinding: true # image specifies the container image containing the operator. -image: "ghcr.io/oracle/weblogic-kubernetes-operator:4.0.4" +image: "4.1.0-RELEASE-MARKER" # imagePullPolicy specifies the image pull policy for the operator's container image. imagePullPolicy: IfNotPresent @@ -69,9 +69,13 @@ imagePullPolicy: IfNotPresent # imagePullSecrets: # - name: "my-operator-secret" +# enableRest specifies whether the operator's REST interface is enabled. Beginning with version 4.0.5, +# the REST interface will be disabled by default. +# enableRest: true + # externalRestEnabled specifies whether the operator's REST interface is exposed # outside the Kubernetes cluster on the port specified by the 'externalRestHttpsPort' -# property. +# property. Ignored if 'enableRest' is not true. # # If set to true, then the customer must provide the SSL certificate and private key for # the operator's external REST interface by specifying the 'externalOperatorCert' and @@ -265,3 +269,8 @@ clusterSizePaddingValidationEnabled: true # runAsuser specifies the UID to run the operator and conversion webhook container processes. # If not specified, it defaults to the user specified in the operator's container image. #runAsUser: 1000 + +# jvmOptions specifies a value used to control the Java process that runs the operator, such as the maximum heap size +# that will be allocated. +#jvmOptions: -XshowSettings:vm -XX:MaxRAMPercentage=70 + diff --git a/OracleIdentityGovernance/kubernetes/common/utility.sh b/OracleIdentityGovernance/kubernetes/common/utility.sh index b2b1c9857..1ec9fc1ba 100755 --- a/OracleIdentityGovernance/kubernetes/common/utility.sh +++ b/OracleIdentityGovernance/kubernetes/common/utility.sh @@ -1,5 +1,5 @@ #!/usr/bin/env bash -# Copyright (c) 2018, 2022, Oracle and/or its affiliates. +# Copyright (c) 2018, 2023, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. 
# @@ -855,7 +855,7 @@ checkPodDelete() { checkPodState() { status="NotReady" - max=60 + max=120 count=1 pod=$1 @@ -880,7 +880,7 @@ checkPodState() { count=`expr $count + 1` done if [ $count -gt $max ] ; then - echo "[ERROR] Unable to start the Pod [$pod] after 300s "; + echo "[ERROR] Unable to start the Pod [$pod] after 600s "; exit 1 fi @@ -969,11 +969,11 @@ getPodName() { detectPod() { ns=$1 startSecs=$SECONDS - maxWaitSecs=10 + maxWaitSecs=120 while [ -z "`${KUBERNETES_CLI:-kubectl} get pod -n ${ns} -o jsonpath={.items[0].metadata.name}`" ]; do if [ $((SECONDS - startSecs)) -lt $maxWaitSecs ]; then echo "Pod not found after $((SECONDS - startSecs)) seconds, retrying ..." - sleep 2 + sleep 5 else echo "[Error] Could not find Pod after $((SECONDS - startSecs)) seconds" exit 1 diff --git a/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/domain-resources/domain.yaml b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/domain-resources/domain.yaml new file mode 100644 index 000000000..f531facc7 --- /dev/null +++ b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/domain-resources/domain.yaml @@ -0,0 +1,185 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example of how to define an OIG Domain. For details about the fields in domain specification, refer https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/domain-resource/ +# +apiVersion: "weblogic.oracle/v9" +kind: Domain +metadata: + name: governancedomain + namespace: oigns + labels: + weblogic.domainUID: governancedomain +spec: + # The WebLogic Domain Home + domainHome: /u01/oracle/user_projects/domains/governancedomain + + # The domain home source type + # Set to PersistentVolume for domain-in-pv, Image for domain-in-image, or FromModel for model-in-image + domainHomeSourceType: PersistentVolume + + # The WebLogic Server image that the Operator uses to start the domain + image: "oracle/oig:oct23-12.2.1.4.0" + + # imagePullPolicy defaults to "Always" if image version is :latest + imagePullPolicy: IfNotPresent + + imagePullSecrets: + - name: orclcred + # Identify which Secret contains the WebLogic Admin credentials + webLogicCredentialsSecret: + name: governancedomain-weblogic-credentials + + # Whether to include the server out file into the pod's stdout, default is true + includeServerOutInPodLog: true + + # Whether to enable log home + logHomeEnabled: true + + # Whether to write HTTP access log file to log home + httpAccessLogInLogHome: true + + # The in-pod location for domain log, server logs, server out, introspector out, and Node Manager log files + logHome: /u01/oracle/user_projects/domains/logs/governancedomain + # An (optional) in-pod location for data storage of default and custom file stores. + # If not specified or the value is either not set or empty (e.g. dataHome: "") then the + # data storage directories are determined from the WebLogic domain home configuration. 
+  dataHome: ""
+
+  # serverStartPolicy legal values are "Never", "IfNeeded", or "AdminOnly"
+  # This determines which WebLogic Servers the Operator will start up when it discovers this Domain
+  # - "Never" will not start any server in the domain
+  # - "AdminOnly" will start up only the administration server (no managed servers will be started)
+  # - "IfNeeded" will start all non-clustered servers, including the administration server and clustered servers up to the replica count
+  serverStartPolicy: IfNeeded
+
+  serverPod:
+    initContainers:
+      # DO NOT CHANGE THE NAME OF THIS INIT CONTAINER
+      - name: compat-connector-init
+        # OIG Product image, same as spec.image mentioned above
+        image: "oracle/oig:oct23-12.2.1.4.0"
+        imagePullPolicy: IfNotPresent
+        command: [ "/bin/bash", "-c", "mkdir -p /u01/oracle/user_projects/domains/ConnectorDefaultDirectory && mkdir -p /u01/oracle/user_projects/domains/wdt-logs" ]
+        volumeMounts:
+          - mountPath: /u01/oracle/user_projects/
+            name: weblogic-domain-storage-volume
+    # a mandatory list of environment variables to be set on the servers
+    env:
+      - name: JAVA_OPTIONS
+        value: "-Dweblogic.StdoutDebugEnabled=false"
+      - name: USER_MEM_ARGS
+        value: "-Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx1024m "
+      - name: WLSDEPLOY_LOG_DIRECTORY
+        value: "/u01/oracle/user_projects/domains/wdt-logs"
+      - name: FRONTENDHOST
+        value: "example.com"
+      - name: FRONTENDPORT
+        value: "14000"
+    volumes:
+      - name: weblogic-domain-storage-volume
+        persistentVolumeClaim:
+          claimName: governancedomain-domain-pvc
+    volumeMounts:
+      - mountPath: /u01/oracle/user_projects/
+        name: weblogic-domain-storage-volume
+
+  # adminServer is used to configure the desired behavior for starting the administration server.
+  adminServer:
+    # adminService:
+    #   channels:
+    # The Admin Server's NodePort
+    #    - channelName: default
+    #      nodePort: 30711
+    # Uncomment to export the T3Channel as a service
+    #    - channelName: T3Channel
+    serverPod:
+      # an (optional) list of environment variables to be set on the admin servers
+      env:
+        - name: USER_MEM_ARGS
+          value: "-Djava.security.egd=file:/dev/./urandom -Xms512m -Xmx1024m "
+
+  configuration:
+    secrets: [ governancedomain-rcu-credentials ]
+    initializeDomainOnPV:
+      persistentVolume:
+        metadata:
+          name: governancedomain-domain-pv
+        spec:
+          storageClassName: governancedomain-domain-storage-class
+          capacity:
+            # Total storage allocated to the persistent storage.
+            storage: 10Gi
+          # Reclaim policy of the persistent storage
+          # The valid values are: 'Retain', 'Delete', and 'Recycle'
+          persistentVolumeReclaimPolicy: Retain
+          # Persistent volume type for the persistent storage.
+          # The value must be 'hostPath' or 'nfs'.
+          # If using 'nfs', server must be specified.
+          nfs:
+            path: /scratch/k8s_dir
+            server: nfsServer
+          #hostPath:
+            #path: "/scratch/k8s_dir"
+      persistentVolumeClaim:
+        metadata:
+          name: governancedomain-domain-pvc
+        spec:
+          storageClassName: governancedomain-domain-storage-class
+          resources:
+            requests:
+              storage: 10Gi
+          volumeName: governancedomain-domain-pv
+      domain:
+        # Domain | DomainAndRCU
+        createIfNotExists: Domain
+        # Image containing WDT installer and Model files.
+        domainCreationImages:
+          - image: 'oracle/oig:oct23-aux-12.2.1.4.0'
+        domainType: OIG
+  # References to Cluster resources that describe the lifecycle options for all
+  # the Managed Server members of a WebLogic cluster, including Java
+  # options, environment variables, additional Pod content, and the ability to
+  # explicitly start, stop, or restart cluster members.
+  # The Cluster resource must describe a cluster that already exists in the
+  # WebLogic domain configuration.
+  clusters:
+  - name: governancedomain-oim-cluster
+  - name: governancedomain-soa-cluster
+
+  # The number of managed servers to start for unlisted clusters
+  # replicas: 1
+
+---
+# This is an example of how to define a Cluster resource.
+apiVersion: weblogic.oracle/v1
+kind: Cluster
+metadata:
+  name: governancedomain-oim-cluster
+  namespace: oigns
+spec:
+  clusterName: oim_cluster
+  serverService:
+    precreateService: true
+  replicas: 0
+  serverPod:
+    env:
+      - name: USER_MEM_ARGS
+        value: "-Djava.security.egd=file:/dev/./urandom -Xms4096m -Xmx8192m "
+
+---
+# This is an example of how to define a Cluster resource.
+apiVersion: weblogic.oracle/v1
+kind: Cluster
+metadata:
+  name: governancedomain-soa-cluster
+  namespace: oigns
+spec:
+  clusterName: soa_cluster
+  serverService:
+    precreateService: true
+  replicas: 1
+  serverPod:
+    env:
+      - name: USER_MEM_ARGS
+        value: "-Xms4096m -Xmx8192m"
diff --git a/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/OIG.json b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/OIG.json
new file mode 100644
index 000000000..ee26e1564
--- /dev/null
+++ b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/OIG.json
@@ -0,0 +1,40 @@
+{
+  "copyright": "Copyright (c) 2023, Oracle and/or its affiliates.",
+  "license": "Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl",
+  "name": "OIG",
+  "description": "Oracle Identity Governance Domain Definitions",
+  "versions": {
+    "12.2.1.4": "OIG_12CR2"
+  },
+  "definitions": {
+    "OIG_12CR2": {
+      "baseTemplate": "Basic WebLogic Server Domain",
+      "extensionTemplates": [
+        "Oracle Identity Manager"
+      ],
+      "serverGroupsToTarget": [
+        "OIM-MGD-SVRS",
+        "SOA-MGD-SVRS-ONLY",
+        "JRF-MAN-SVR",
+        "WSMPM-MAN-SVR"
+      ],
+      "rcuSchemas": [
+        "STB",
+        "WLS",
+        "MDS",
+        "IAU",
+        "IAU_VIEWER",
+        "IAU_APPEND",
+        "OPSS",
+        "UCSUMS",
+        "SOAINFRA",
+        "OIM"
+      ],
+      "postCreateDomainScript": {
+        "unixScript": "@@ORACLE_HOME@@/dockertools/oig_post_create_script.sh"
+      }
+    }
+
+  }
+}
+
diff --git a/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/agl_jdbc.yaml b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/agl_jdbc.yaml
new file mode 100644
index 000000000..48e9f6015
--- /dev/null
+++ b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/agl_jdbc.yaml
@@ -0,0 +1,398 @@
+# Copyright (c) 2023, Oracle and/or its affiliates.
+# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
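+#
+# Note (informational): each datasource URL in this file is assembled at domain
+# creation time from WDT secret macros. For example, assuming the sample
+# domain.yaml's DOMAIN_UID of 'governancedomain',
+# '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@' resolves to the
+# 'db_host' key of the Kubernetes secret 'governancedomain-rcu-credentials';
+# the 'db_port' and 'db_service' keys are resolved the same way.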
+# +# This is an example of how to define Active GridLink type datasources for OIG domain +# For details regarding how to work with WDT model files and WDT model attributes, please refer below links +# https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/domain-on-pv/model-files/ +# https://oracle.github.io/weblogic-deploy-tooling/concepts/model/ + +resources: + JDBCSystemResource: + ApplicationDB: + Target: oim_cluster + JdbcResource: + JDBCConnectionPoolParams: + InitialCapacity: 50 + TestConnectionsOnReserve: true + ConnectionCreationRetryFrequencySeconds: 10 + MinCapacity: 200 + TestTableName: SQL ISVALID + TestFrequencySeconds: 0 + InactiveConnectionTimeoutSeconds: 300 + SecondsToTrustAnIdlePoolConnection: 30 + MaxCapacity: 200 + JDBCDataSourceParams: + JNDIName: jdbc/ApplicationDBDS + GlobalTransactionsProtocol: None + JDBCDriverParams: + URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@)(PORT=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@))) + DriverName: oracle.jdbc.OracleDriver + Properties: + oracle.net.CONNECT_TIMEOUT: + Value: '10000' + oracle.net.READ_TIMEOUT: + Value: '300000' + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + EDNDataSource: + Target: soa_cluster + JdbcResource: + JDBCConnectionPoolParams: + InitialCapacity: 0 + RemoveInfectedConnections: false + TestConnectionsOnReserve: true + ConnectionCreationRetryFrequencySeconds: 10 + TestTableName: SQL ISVALID + TestFrequencySeconds: 0 + MaxCapacity: 80 + JDBCDataSourceParams: + JNDIName: jdbc/EDNDataSource + GlobalTransactionsProtocol: TwoPhaseCommit + JDBCDriverParams: + URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@)(PORT=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@))) + DriverName: oracle.jdbc.xa.client.OracleXADataSource + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + JDBCXAParams: + XaRetryDurationSeconds: 300 + EDNLocalTxDataSource: + Target: soa_cluster + JdbcResource: + JDBCConnectionPoolParams: + InitialCapacity: 0 + RemoveInfectedConnections: false + TestConnectionsOnReserve: true + ConnectionCreationRetryFrequencySeconds: 10 + TestTableName: SQL ISVALID + TestFrequencySeconds: 0 + MaxCapacity: 80 + JDBCDataSourceParams: + JNDIName: jdbc/EDNLocalTxDataSource + GlobalTransactionsProtocol: None + JDBCDriverParams: + URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@)(PORT=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@))) + DriverName: oracle.jdbc.OracleDriver + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + LocalSvcTblDataSource: + Target: oim_cluster,AdminServer + JdbcResource: + JDBCConnectionPoolParams: + InitialCapacity: 0 + TestConnectionsOnReserve: true + ConnectionCreationRetryFrequencySeconds: 10 + TestTableName: SQL ISVALID + TestFrequencySeconds: 0 + SecondsToTrustAnIdlePoolConnection: 0 + MaxCapacity: 800 + JDBCDataSourceParams: + JNDIName: jdbc/LocalSvcTblDataSource + GlobalTransactionsProtocol: None + JDBCDriverParams: + URL: 
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@)(PORT=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@))) + DriverName: oracle.jdbc.OracleDriver + Properties: + SendStreamAsBlob: + Value: 'true' + oracle.net.CONNECT_TIMEOUT: + Value: '10000' + weblogic.jdbc.crossPartitionEnabled: + Value: 'true' + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + OraSDPMDataSource: + Target: soa_cluster + JdbcResource: + JDBCConnectionPoolParams: + InitialCapacity: 0 + TestConnectionsOnReserve: true + ConnectionCreationRetryFrequencySeconds: 10 + TestTableName: SQL ISVALID + TestFrequencySeconds: 0 + SecondsToTrustAnIdlePoolConnection: 0 + MaxCapacity: 1200 + JDBCDataSourceParams: + JNDIName: jdbc/OraSDPMDataSource + GlobalTransactionsProtocol: TwoPhaseCommit + JDBCDriverParams: + URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@)(PORT=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@))) + DriverName: oracle.jdbc.xa.client.OracleXADataSource + Properties: + SendStreamAsBlob: + Value: 'true' + oracle.net.CONNECT_TIMEOUT: + Value: '10000' + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + SOADataSource: + Target: soa_cluster + JdbcResource: + JDBCConnectionPoolParams: + InitialCapacity: 0 + RemoveInfectedConnections: false + TestConnectionsOnReserve: true + ConnectionCreationRetryFrequencySeconds: 10 + TestTableName: SQL ISVALID + TestFrequencySeconds: 0 + MaxCapacity: 1200 + JDBCDataSourceParams: + JNDIName: jdbc/SOADataSource + GlobalTransactionsProtocol: TwoPhaseCommit + JDBCDriverParams: + URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@)(PORT=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@))) + DriverName: oracle.jdbc.xa.client.OracleXADataSource + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + JDBCXAParams: + XaRetryDurationSeconds: 300 + SOALocalTxDataSource: + Target: soa_cluster + JdbcResource: + JDBCConnectionPoolParams: + InitialCapacity: 0 + RemoveInfectedConnections: false + TestConnectionsOnReserve: true + ConnectionCreationRetryFrequencySeconds: 10 + TestTableName: SQL ISVALID + TestFrequencySeconds: 0 + MaxCapacity: 1200 + JDBCDataSourceParams: + JNDIName: jdbc/SOALocalTxDataSource + GlobalTransactionsProtocol: None + JDBCDriverParams: + URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@)(PORT=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@))) + DriverName: oracle.jdbc.OracleDriver + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + WLSSchemaDataSource: + Target: soa_cluster,oim_cluster,AdminServer + JdbcResource: + JDBCConnectionPoolParams: + TestConnectionsOnReserve: true + TestTableName: SQL ISVALID + TestFrequencySeconds: 0 + MaxCapacity: 300 + JDBCDataSourceParams: + JNDIName: jdbc/WLSSchemaDataSource + GlobalTransactionsProtocol: None + JDBCDriverParams: + URL: 
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@)(PORT=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@))) + DriverName: oracle.jdbc.OracleDriver + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + mds-oim: + Target: oim_cluster + JdbcResource: + JDBCConnectionPoolParams: + InitialCapacity: 15 + TestConnectionsOnReserve: true + ConnectionCreationRetryFrequencySeconds: 10 + MinCapacity: 60 + TestTableName: SQL ISVALID + TestFrequencySeconds: 0 + InactiveConnectionTimeoutSeconds: 300 + SecondsToTrustAnIdlePoolConnection: 30 + MaxCapacity: 60 + JDBCDataSourceParams: + JNDIName: jdbc/mds/MDS_REPOS + GlobalTransactionsProtocol: None + JDBCDriverParams: + URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@)(PORT=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@))) + DriverName: oracle.jdbc.OracleDriver + Properties: + oracle.net.CONNECT_TIMEOUT: + Value: '10000' + oracle.net.READ_TIMEOUT: + Value: '300000' + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + mds-owsm: + Target: soa_cluster,oim_cluster,AdminServer + JdbcResource: + JDBCConnectionPoolParams: + InitialCapacity: 0 + TestConnectionsOnReserve: true + ConnectionCreationRetryFrequencySeconds: 10 + TestTableName: SQL ISVALID + TestFrequencySeconds: 0 + SecondsToTrustAnIdlePoolConnection: 0 + JDBCDataSourceParams: + JNDIName: jdbc/mds/owsm + GlobalTransactionsProtocol: None + JDBCDriverParams: + URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@)(PORT=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@))) + DriverName: oracle.jdbc.OracleDriver + Properties: + SendStreamAsBlob: + Value: 'true' + oracle.net.CONNECT_TIMEOUT: + Value: '120000' + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + mds-soa: + Target: soa_cluster,AdminServer + JdbcResource: + JDBCConnectionPoolParams: + InitialCapacity: 0 + RemoveInfectedConnections: false + TestConnectionsOnReserve: true + ConnectionCreationRetryFrequencySeconds: 10 + TestTableName: SQL ISVALID + TestFrequencySeconds: 0 + MaxCapacity: 200 + JDBCDataSourceParams: + JNDIName: jdbc/mds/MDS_LocalTxDataSource + GlobalTransactionsProtocol: None + JDBCDriverParams: + URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@)(PORT=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@))) + DriverName: oracle.jdbc.OracleDriver + Properties: + SendStreamAsBlob: + Value: 'true' + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + oimJMSStoreDS: + Target: oim_cluster + JdbcResource: + JDBCConnectionPoolParams: + InitialCapacity: 15 + TestConnectionsOnReserve: true + ConnectionCreationRetryFrequencySeconds: 10 + MinCapacity: 60 + TestTableName: SQL ISVALID + TestFrequencySeconds: 0 + InactiveConnectionTimeoutSeconds: 300 + SecondsToTrustAnIdlePoolConnection: 30 + MaxCapacity: 60 + JDBCDataSourceParams: + JNDIName: jdbc/oimJMSStoreDS + GlobalTransactionsProtocol: None + JDBCDriverParams: + URL: 
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@)(PORT=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@))) + DriverName: oracle.jdbc.OracleDriver + Properties: + oracle.net.CONNECT_TIMEOUT: + Value: '10000' + oracle.net.READ_TIMEOUT: + Value: '300000' + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + oimOperationsDB: + Target: soa_cluster,oim_cluster + JdbcResource: + JDBCConnectionPoolParams: + InitialCapacity: 32 + TestConnectionsOnReserve: true + ConnectionCreationRetryFrequencySeconds: 10 + MinCapacity: 128 + TestTableName: SQL ISVALID + TestFrequencySeconds: 0 + InactiveConnectionTimeoutSeconds: 300 + SecondsToTrustAnIdlePoolConnection: 30 + MaxCapacity: 200 + JDBCDataSourceParams: + JNDIName: jdbc/operationsDB + GlobalTransactionsProtocol: TwoPhaseCommit + JDBCDriverParams: + URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@)(PORT=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@))) + DriverName: oracle.jdbc.xa.client.OracleXADataSource + Properties: + oracle.net.CONNECT_TIMEOUT: + Value: '10000' + oracle.net.READ_TIMEOUT: + Value: '300000' + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + JDBCXAParams: + XaTransactionTimeout: 1260 + XaRetryDurationSeconds: 300 + opss-audit-DBDS: + Target: soa_cluster,oim_cluster,AdminServer + JdbcResource: + JDBCConnectionPoolParams: + TestConnectionsOnReserve: true + TestTableName: SQL ISVALID + TestFrequencySeconds: 0 + JDBCDataSourceParams: + JNDIName: jdbc/AuditAppendDataSource + GlobalTransactionsProtocol: None + JDBCDriverParams: + URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@)(PORT=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@))) + DriverName: oracle.jdbc.OracleDriver + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + opss-audit-viewDS: + Target: soa_cluster,oim_cluster,AdminServer + JdbcResource: + JDBCConnectionPoolParams: + TestConnectionsOnReserve: true + TestTableName: SQL ISVALID + TestFrequencySeconds: 0 + JDBCDataSourceParams: + JNDIName: jdbc/AuditViewDataSource + GlobalTransactionsProtocol: None + JDBCDriverParams: + URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@)(PORT=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@))) + DriverName: oracle.jdbc.OracleDriver + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + opss-data-source: + Target: soa_cluster,oim_cluster,AdminServer + JdbcResource: + JDBCConnectionPoolParams: + TestConnectionsOnReserve: true + TestTableName: SQL ISVALID + TestFrequencySeconds: 0 + JDBCDataSourceParams: + JNDIName: jdbc/OpssDataSource + GlobalTransactionsProtocol: None + JDBCDriverParams: + URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@)(PORT=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@))) 
+ DriverName: oracle.jdbc.OracleDriver + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + soaOIMLookupDB: + Target: soa_cluster,oim_cluster,AdminServer + JdbcResource: + JDBCConnectionPoolParams: + InitialCapacity: 20 + TestConnectionsOnReserve: true + ConnectionCreationRetryFrequencySeconds: 10 + MinCapacity: 80 + TestTableName: SQL ISVALID + TestFrequencySeconds: 0 + InactiveConnectionTimeoutSeconds: 300 + SecondsToTrustAnIdlePoolConnection: 30 + MaxCapacity: 80 + JDBCDataSourceParams: + JNDIName: jdbc/soaOIMLookupDB + GlobalTransactionsProtocol: TwoPhaseCommit + JDBCDriverParams: + URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@)(PORT=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@)))(CONNECT_DATA=(SERVICE_NAME=@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@))) + DriverName: oracle.jdbc.xa.client.OracleXADataSource + Properties: + oracle.net.CONNECT_TIMEOUT: + Value: '10000' + oracle.net.READ_TIMEOUT: + Value: '300000' + JDBCOracleParams: + FanEnabled: true + ActiveGridlink: true + JDBCXAParams: + XaRetryDurationSeconds: 300 \ No newline at end of file diff --git a/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/domainInfo.yaml b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/domainInfo.yaml new file mode 100644 index 000000000..063cb3d7e --- /dev/null +++ b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/domainInfo.yaml @@ -0,0 +1,40 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example of how to define the domainInfo section of WDT Model for OIG domain. 
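+# As an informal illustration of the secret macros used below:
+# '@@SECRET:__weblogic-credentials__:username@@' resolves to the 'username' key
+# of the secret named by the Domain resource's 'webLogicCredentialsSecret'
+# (in the sample domain.yaml, 'governancedomain-weblogic-credentials'), and
+# '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:rcu_prefix@@' resolves to the
+# 'rcu_prefix' key of the '<DOMAIN_UID>-rcu-credentials' secret.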
+# For details regarding how to work with WDT model files and WDT model attributes, please refer below links +# https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/domain-on-pv/model-files/ +# https://oracle.github.io/weblogic-deploy-tooling/concepts/model/ +# + +domainInfo: + AdminUserName: '@@SECRET:__weblogic-credentials__:username@@' + AdminPassword: '@@SECRET:__weblogic-credentials__:password@@' + ServerStartMode: 'prod' + EnableJMSStoreDBPersistence: true + EnableJTATLogDBPersistence: true + OPSSInitialization: + Credential: + oim: + TargetKey: + keystore: + Username: keystore + Password: '@@SECRET:__weblogic-credentials__:password@@' + OIMSchemaPassword: + Username: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:rcu_prefix@@_OIM' + Password: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:rcu_schema_password@@' + sysadmin: + Username: xelsysadm + Password: '@@SECRET:__weblogic-credentials__:password@@' + WeblogicAdminKey: + Username: weblogic + Password: '@@SECRET:__weblogic-credentials__:password@@' + ServerGroupTargetingLimits: + 'OIM-MGD-SVRS': oim_cluster + 'SOA-MGD-SVRS': soa_cluster + RCUDbInfo: + rcu_prefix: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:rcu_prefix@@' + rcu_schema_password: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:rcu_schema_password@@' + rcu_db_conn_string: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_host@@:@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_port@@/@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:db_service@@' + rcu_db_user: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:dba_user@@' + rcu_admin_password: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-credentials:dba_password@@' diff --git a/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/oig.properties b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/oig.properties new file mode 100644 index 000000000..aa84ea429 --- /dev/null +++ b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/oig.properties @@ -0,0 +1,16 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. +# +# This is an example of how to define the variables in WDT for OIG domain +# For details regarding how to work with WDT model files and WDT model attributes, please refer below links +# https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/domain-on-pv/model-files/ +# https://oracle.github.io/weblogic-deploy-tooling/concepts/model/ +# + +Server.oim_server.ListenPort=14000 +Server.soa_server.ListenPort=8001 +Server.oim_server.T3PublicPort=14002 +Server.oim_server.T3ListenPort=14002 +Server.oim_server.ListenAddress=oim-server +Server.soa_server.ListenAddress=soa-server +Server.AdminServer.ListenPort=7001 diff --git a/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/resource.yaml b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/resource.yaml new file mode 100644 index 000000000..23cfbfda1 --- /dev/null +++ b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/resource.yaml @@ -0,0 +1,45 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. 
+# +# This is an example of how to define the resource section in WDT Model for an OIG Domain +# For details regarding how to work with WDT model files and WDT model attributes, please refer below links +# https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/domain-on-pv/model-files/ +# https://oracle.github.io/weblogic-deploy-tooling/concepts/model/ +# + +resources: + WebAppContainer: + WeblogicPluginEnabled: true + JaxRsMonitoringDefaultBehavior: false + ForeignJNDIProvider: + ForeignJNDIProvider-SOA: + PasswordEncrypted: '@@SECRET:__weblogic-credentials__:password@@' + InitialContextFactory: weblogic.jndi.WLInitialContextFactory + ProviderUrl: 'cluster:t3://soa_cluster' + User: '@@SECRET:__weblogic-credentials__:username@@' + Target: oim_cluster + ForeignJNDILink: + /ejb/bpel/services/workflow/TaskServiceGlobalTransactionBean: + RemoteJNDIName: /ejb/bpel/services/workflow/TaskServiceGlobalTransactionBean + LocalJNDIName: /ejb/bpel/services/workflow/TaskServiceGlobalTransactionBean + RuntimeConfigService: + RemoteJNDIName: RuntimeConfigService + LocalJNDIName: RuntimeConfigService + TaskEvidenceServiceBean: + RemoteJNDIName: TaskEvidenceServiceBean + LocalJNDIName: TaskEvidenceServiceBean + TaskQueryService: + RemoteJNDIName: TaskQueryService + LocalJNDIName: TaskQueryService + TaskReportServiceBean: + RemoteJNDIName: TaskReportServiceBean + LocalJNDIName: TaskReportServiceBean + UserMetadataService: + RemoteJNDIName: UserMetadataService + LocalJNDIName: UserMetadataService + ejb/bpel/services/workflow/TaskMetadataServiceBean: + RemoteJNDIName: ejb/bpel/services/workflow/TaskMetadataServiceBean + LocalJNDIName: ejb/bpel/services/workflow/TaskMetadataServiceBean + ejb/bpel/services/workflow/TaskServiceBean: + RemoteJNDIName: ejb/bpel/services/workflow/TaskServiceBean + LocalJNDIName: ejb/bpel/services/workflow/TaskServiceBean diff --git a/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/topology.yaml b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/topology.yaml new file mode 100644 index 000000000..a93e7da8e --- /dev/null +++ b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-artifacts/topology.yaml @@ -0,0 +1,171 @@ +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. 
+# +# This is an example of how to define the topology section in WDT model for an OIG domain +# For details regarding how to work with WDT model files and WDT model attributes, please refer below links +# https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/domain-on-pv/model-files/ +# https://oracle.github.io/weblogic-deploy-tooling/concepts/model/ +# + +topology: + Name: '@@ENV:DOMAIN_UID@@' + ParallelDeployApplicationModules: true + ProductionModeEnabled: true + JTA: + TimeoutSeconds: 1200 + Cluster: + oim_cluster: + CoherenceClusterSystemResource: defaultCoherenceCluster + soa_cluster: + CoherenceClusterSystemResource: defaultCoherenceCluster + Server: + AdminServer: + ServerLifeCycleTimeoutVal: 30 + TransactionLogJDBCStore: + PrefixName: TLOG_ADMINSERVER + Enabled: true + DataSource: WLSSchemaDataSource + ListenPort: '@@PROP:Server.AdminServer.ListenPort@@' + oim_server1: + ListenPort: '@@PROP:Server.oim_server.ListenPort@@' + CoherenceClusterSystemResource: defaultCoherenceCluster + Cluster: oim_cluster + JTAMigratableTarget: + Cluster: oim_cluster + UserPreferredServer: oim_server1 + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.oim_server.ListenAddress@@1' + NumOfRetriesBeforeMsiMode: 0 + RetryIntervalBeforeMsiMode: 1 + NetworkAccessPoint: + 'T3Channel': + PublicPort: '@@PROP:Server.oim_server.T3PublicPort@@' + ListenPort: '@@PROP:Server.oim_server.T3ListenPort@@' + TunnelingEnabled: true + HttpEnabledForThisProtocol: true + TransactionLogJDBCStore: + PrefixName: TLOG_OIM_SERVER1 + Enabled: true + DataSource: WLSSchemaDataSource + oim_server2: + ListenPort: '@@PROP:Server.oim_server.ListenPort@@' + CoherenceClusterSystemResource: defaultCoherenceCluster + Cluster: oim_cluster + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.oim_server.ListenAddress@@2' + NumOfRetriesBeforeMsiMode: 0 + RetryIntervalBeforeMsiMode: 1 + JTAMigratableTarget: + Cluster: oim_cluster + UserPreferredServer: oim_server2 + NetworkAccessPoint: + 'T3Channel': + PublicPort: '@@PROP:Server.oim_server.T3PublicPort@@' + ListenPort: '@@PROP:Server.oim_server.T3ListenPort@@' + TunnelingEnabled: true + HttpEnabledForThisProtocol: true + + oim_server3: + ListenPort: '@@PROP:Server.oim_server.ListenPort@@' + CoherenceClusterSystemResource: defaultCoherenceCluster + Cluster: oim_cluster + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.oim_server.ListenAddress@@3' + NumOfRetriesBeforeMsiMode: 0 + RetryIntervalBeforeMsiMode: 1 + JTAMigratableTarget: + Cluster: oim_cluster + UserPreferredServer: oim_server3 + NetworkAccessPoint: + 'T3Channel': + PublicPort: '@@PROP:Server.oim_server.T3PublicPort@@' + ListenPort: '@@PROP:Server.oim_server.T3ListenPort@@' + TunnelingEnabled: true + HttpEnabledForThisProtocol: true + oim_server4: + ListenPort: '@@PROP:Server.oim_server.ListenPort@@' + CoherenceClusterSystemResource: defaultCoherenceCluster + Cluster: oim_cluster + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.oim_server.ListenAddress@@4' + NumOfRetriesBeforeMsiMode: 0 + RetryIntervalBeforeMsiMode: 1 + JTAMigratableTarget: + Cluster: oim_cluster + UserPreferredServer: oim_server4 + NetworkAccessPoint: + 'T3Channel': + PublicPort: '@@PROP:Server.oim_server.T3PublicPort@@' + ListenPort: '@@PROP:Server.oim_server.T3ListenPort@@' + TunnelingEnabled: true + HttpEnabledForThisProtocol: true + oim_server5: + ListenPort: '@@PROP:Server.oim_server.ListenPort@@' + CoherenceClusterSystemResource: defaultCoherenceCluster + Cluster: oim_cluster + ListenAddress: 
'@@ENV:DOMAIN_UID@@-@@PROP:Server.oim_server.ListenAddress@@5' + NumOfRetriesBeforeMsiMode: 0 + RetryIntervalBeforeMsiMode: 1 + JTAMigratableTarget: + Cluster: oim_cluster + UserPreferredServer: oim_server5 + NetworkAccessPoint: + 'T3Channel': + PublicPort: '@@PROP:Server.oim_server.T3PublicPort@@' + ListenPort: '@@PROP:Server.oim_server.T3ListenPort@@' + TunnelingEnabled: true + HttpEnabledForThisProtocol: true + soa_server1: + ListenPort: '@@PROP:Server.soa_server.ListenPort@@' + Cluster: soa_cluster + CoherenceClusterSystemResource: defaultCoherenceCluster + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.soa_server.ListenAddress@@1' + RetryIntervalBeforeMsiMode: 1 + NumOfRetriesBeforeMsiMode: 0 + JTAMigratableTarget: + Cluster: soa_cluster + UserPreferredServer: soa_server1 + TransactionLogJDBCStore: + PrefixName: TLOG_SOA_SERVER1 + Enabled: true + DataSource: WLSSchemaDataSource + soa_server2: + ListenPort: '@@PROP:Server.soa_server.ListenPort@@' + Cluster: soa_cluster + CoherenceClusterSystemResource: defaultCoherenceCluster + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.soa_server.ListenAddress@@2' + RetryIntervalBeforeMsiMode: 1 + NumOfRetriesBeforeMsiMode: 0 + JTAMigratableTarget: + Cluster: soa_cluster + UserPreferredServer: soa_server2 + soa_server3: + ListenPort: '@@PROP:Server.soa_server.ListenPort@@' + Cluster: soa_cluster + CoherenceClusterSystemResource: defaultCoherenceCluster + RetryIntervalBeforeMsiMode: 1 + NumOfRetriesBeforeMsiMode: 0 + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.soa_server.ListenAddress@@3' + JTAMigratableTarget: + Cluster: soa_cluster + UserPreferredServer: soa_server3 + soa_server4: + ListenPort: '@@PROP:Server.soa_server.ListenPort@@' + Cluster: soa_cluster + CoherenceClusterSystemResource: defaultCoherenceCluster + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.soa_server.ListenAddress@@4' + RetryIntervalBeforeMsiMode: 1 + NumOfRetriesBeforeMsiMode: 0 + JTAMigratableTarget: + Cluster: soa_cluster + UserPreferredServer: soa_server4 + soa_server5: + ListenPort: '@@PROP:Server.soa_server.ListenPort@@' + Cluster: soa_cluster + CoherenceClusterSystemResource: defaultCoherenceCluster + RetryIntervalBeforeMsiMode: 1 + NumOfRetriesBeforeMsiMode: 0 + ListenAddress: '@@ENV:DOMAIN_UID@@-@@PROP:Server.soa_server.ListenAddress@@5' + JTAMigratableTarget: + Cluster: soa_cluster + UserPreferredServer: soa_server5 + SecurityConfiguration: + NodeManagerUsername: '@@SECRET:__weblogic-credentials__:username@@' + UseKSSForDemo: true + NodeManagerPasswordEncrypted: '@@SECRET:__weblogic-credentials__:password@@' \ No newline at end of file diff --git a/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-utils/create-configmap.sh b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-utils/create-configmap.sh new file mode 100755 index 000000000..fce36a014 --- /dev/null +++ b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-utils/create-configmap.sh @@ -0,0 +1,120 @@ +#!/bin/sh +# Copyright (c) 2023, Oracle and/or its affiliates. +# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. + +usage() { + + cat << EOF + + This is a helper script for creating and labeling a Kubernetes configmap. + The configmap is labeled with the specified domain-uid. + + Usage: + + $(basename $0) -c configmapname \\ + [-n mynamespace] \\ + [-d mydomainuid] \\ + [-f filename_or_dir] [-f filename_or_dir] ... + + -d : Defaults to 'sample-domain1'. 
+
+  -n : Defaults to 'sample-domain1-ns'.
+
+  -c : Name of configmap. Required.
+
+  -f : File or directory location. Can be specified
+       more than once. Key will be the file-name(s),
+       value will be file contents. Required.
+
+  -dry ${KUBERNETES_CLI:-kubectl} : Show the ${KUBERNETES_CLI:-kubectl} commands (prefixed with 'dryrun:')
+                but do not perform them.
+
+  -dry yaml : Show the yaml (prefixed with 'dryrun:')
+              but do not execute it.
+
+EOF
+}
+
+set -e
+set -o pipefail
+
+DOMAIN_UID="sample-domain1"
+DOMAIN_NAMESPACE="sample-domain1-ns"
+CONFIGMAP_NAME=""
+FILENAMES=""
+DRY_RUN=""
+
+while [ ! "$1" = "" ]; do
+  if [ ! "$1" = "-?" ] && [ "$2" = "" ]; then
+    echo "Syntax Error. Pass '-?' for help."
+    exit 1
+  fi
+  case "$1" in
+    -c) CONFIGMAP_NAME="${2}" ;;
+    -n) DOMAIN_NAMESPACE="${2}" ;;
+    -d) DOMAIN_UID="${2}" ;;
+    -f) FILENAMES="${FILENAMES}--from-file=${2} " ;;
+    -dry) DRY_RUN="${2}"
+          case "$DRY_RUN" in
+            ${KUBERNETES_CLI:-kubectl}|yaml) ;;
+            *) echo "Error: Syntax Error. Pass '-?' for help."
+               exit 1
+               ;;
+          esac
+          ;;
+    -?) usage ; exit 1 ;;
+    *) echo "Syntax Error. Pass '-?' for help." ; exit 1 ;;
+  esac
+  shift
+  shift
+done
+
+if [ -z "$CONFIGMAP_NAME" ]; then
+  echo "Error: Missing '-c' argument. Pass '-?' for help."
+  exit 1
+fi
+
+if [ -z "$FILENAMES" ]; then
+  echo "Error: Missing '-f' argument. Pass '-?' for help."
+  exit 1
+fi
+
+set -eu
+
+if [ "$DRY_RUN" = "${KUBERNETES_CLI:-kubectl}" ]; then
+
+cat << EOF
+dryrun:${KUBERNETES_CLI:-kubectl} -n $DOMAIN_NAMESPACE delete configmap $CONFIGMAP_NAME --ignore-not-found
+dryrun:${KUBERNETES_CLI:-kubectl} -n $DOMAIN_NAMESPACE create configmap $CONFIGMAP_NAME $FILENAMES
+dryrun:${KUBERNETES_CLI:-kubectl} -n $DOMAIN_NAMESPACE label configmap $CONFIGMAP_NAME weblogic.domainUID=$DOMAIN_UID
+EOF
+
+elif [ "$DRY_RUN" = "yaml" ]; then
+
+  echo "dryrun:---"
+  echo "dryrun:"
+
+  # don't change indent of the sed append commands - the spaces are significant
+  # (we use an ancient form of sed append to stay compatible with old bash on mac)
+  ${KUBERNETES_CLI:-kubectl} -n $DOMAIN_NAMESPACE \
+    create configmap $CONFIGMAP_NAME $FILENAMES \
+    --dry-run=client -o yaml \
+    \
+    | sed -e '/ name:/a\
+    labels:' \
+    | sed -e '/labels:/a\
+      weblogic.domainUID:' \
+    | sed "s/domainUID:/domainUID: $DOMAIN_UID/" \
+    | grep -v creationTimestamp \
+    | sed "s/^/dryrun:/"
+
+else
+
+  set -x
+
+  ${KUBERNETES_CLI:-kubectl} -n $DOMAIN_NAMESPACE delete configmap $CONFIGMAP_NAME --ignore-not-found
+  ${KUBERNETES_CLI:-kubectl} -n $DOMAIN_NAMESPACE create configmap $CONFIGMAP_NAME $FILENAMES
+  ${KUBERNETES_CLI:-kubectl} -n $DOMAIN_NAMESPACE label configmap $CONFIGMAP_NAME weblogic.domainUID=$DOMAIN_UID
+
+fi
+
diff --git a/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-utils/create-secret.sh b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-utils/create-secret.sh
new file mode 100755
index 000000000..83e771dea
--- /dev/null
+++ b/OracleIdentityGovernance/kubernetes/create-oim-domain/domain-home-on-pv/wdt-utils/create-secret.sh
@@ -0,0 +1,159 @@
+#!/bin/bash
+# Copyright (c) 2023, Oracle and/or its affiliates.
+# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
+#
+
+usage() {
+
+  cat << EOF
+
+  This is a helper script for creating and labeling a Kubernetes secret.
+  The secret is labeled with the specified domain-uid.
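+
+  For example (all names and values here are illustrative only), the RCU
+  credentials secret referenced by the sample OIG domain.yaml could be
+  created with:
+
+    $(basename $0) -n oigns -d governancedomain \\
+      -s governancedomain-rcu-credentials -l rcu_prefix=OIGK8S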
+
+  Usage:
+
+    $(basename $0) [-n mynamespace] [-d mydomainuid] \\
+      -s mysecretname [-l key1=val1] [-l key2=val2] [-f key=fileloc ]...
+
+  -d : Defaults to 'sample-domain1'.
+
+  -n : Defaults to 'sample-domain1-ns'.
+
+  -s : Name of secret. Required.
+
+  -l : Secret 'literal' key/value pair, for example
+       '-l "password=abc123"'. Can be specified more than once.
+
+  -f : Secret 'file-name' key/file pair, for example
+       '-f walletFile=./ewallet.p12'.
+       Can be specified more than once.
+
+  -dry ${KUBERNETES_CLI} : Show the ${KUBERNETES_CLI} commands (prefixed with 'dryrun:')
+                but do not perform them.
+
+  -dry yaml : Show the yaml (prefixed with 'dryrun:') but do not
+              apply it.
+
+  -? : This help.
+
+  Note: Spaces are not supported in the '-f' or '-l' parameters.
+
+EOF
+}
+
+set -e
+set -o pipefail
+
+KUBERNETES_CLI="${KUBERNETES_CLI:-kubectl}"
+DOMAIN_UID="sample-domain1"
+NAMESPACE="sample-domain1-ns"
+SECRET_NAME=""
+LITERALS=""
+FILENAMES=""
+DRY_RUN="false"
+
+while [ ! "${1:-}" = "" ]; do
+  if [ ! "$1" = "-?" ] && [ "${2:-}" = "" ]; then
+    echo "Syntax Error. Pass '-?' for usage."
+    exit 1
+  fi
+  case "$1" in
+    -s) SECRET_NAME="${2}" ;;
+    -n) NAMESPACE="${2}" ;;
+    -d) DOMAIN_UID="${2}" ;;
+    -l) LITERALS="${LITERALS} --from-literal='${2}'" ;;
+    -f) FILENAMES="${FILENAMES} --from-file=${2}" ;;
+    -dry) DRY_RUN="${2}"
+          case "$DRY_RUN" in
+            ${KUBERNETES_CLI}|yaml) ;;
+            *) echo "Error: Syntax Error. Pass '-?' for usage."
+               exit 1
+               ;;
+          esac
+          ;;
+    -?) usage ; exit 1 ;;
+    *) echo "Syntax Error. Pass '-?' for usage." ; exit 1 ;;
+  esac
+  shift
+  shift
+done
+
+if [ -z "$SECRET_NAME" ]; then
+  echo "Error: Syntax Error. Must specify '-s'. Pass '-?' for usage."
+  exit 1
+fi
+
+if [ -z "${LITERALS}${FILENAMES}" ]; then
+  echo "Error: Syntax Error. Must specify at least one '-l' or '-f'. Pass '-?' for usage."
+  exit 1
+fi
+
+set -eu
+
+kubernetesCLIDryRunDelete() {
+cat << EOF
+dryrun:${KUBERNETES_CLI} -n $NAMESPACE delete secret \\
+dryrun:  $SECRET_NAME \\
+dryrun:  --ignore-not-found
+EOF
+}
+
+kubernetesCLIDryRunCreate() {
+local moredry=""
+if [ "$DRY_RUN" = "yaml" ]; then
+  local moredry="--dry-run=client -o yaml"
+fi
+cat << EOF
+dryrun:${KUBERNETES_CLI} -n $NAMESPACE create secret generic \\
+dryrun:  $SECRET_NAME \\
+dryrun:  $LITERALS $FILENAMES ${moredry}
+EOF
+}
+
+kubernetesCLIDryRunLabel() {
+cat << EOF
+dryrun:${KUBERNETES_CLI} -n $NAMESPACE label secret \\
+dryrun:  $SECRET_NAME \\
+dryrun:  weblogic.domainUID=$DOMAIN_UID
+EOF
+}
+
+kubernetesCLIDryRun() {
+cat << EOF
+dryrun:
+dryrun:echo "@@ Info: Setting up secret '$SECRET_NAME'."
+dryrun: +EOF +kubernetesCLIDryRunDelete +kubernetesCLIDryRunCreate +kubernetesCLIDryRunLabel +cat << EOF +dryrun: +EOF +} + +if [ "$DRY_RUN" = "${KUBERNETES_CLI}" ]; then + + kubernetesCLIDryRun + +elif [ "$DRY_RUN" = "yaml" ]; then + + echo "dryrun:---" + echo "dryrun:" + + # don't change indent of the sed '/a' commands - the spaces are significant + # (we use an old form of sed append to stay compatible with old bash on mac) + + source <( kubernetesCLIDryRunCreate | sed 's/dryrun://') \ + | sed -e '/ name:/a\ + labels:' \ + | sed -e '/labels:/a\ + weblogic.domainUID:' \ + | sed "s/domainUID:/domainUID: $DOMAIN_UID/" \ + | grep -v creationTimestamp \ + | sed "s/^/dryrun:/" + +else + + source <( kubernetesCLIDryRun | sed 's/dryrun://') +fi diff --git a/OracleIdentityGovernance/kubernetes/create-rcu-credentials/create-rcu-credentials.sh b/OracleIdentityGovernance/kubernetes/create-rcu-credentials/create-rcu-credentials.sh index e39046130..4ac7d12e1 100755 --- a/OracleIdentityGovernance/kubernetes/create-rcu-credentials/create-rcu-credentials.sh +++ b/OracleIdentityGovernance/kubernetes/create-rcu-credentials/create-rcu-credentials.sh @@ -1,5 +1,5 @@ #!/usr/bin/env bash -# Copyright (c) 2020, 2023, Oracle and/or its affiliates. +# Copyright (c) 2020, 2022, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # # Description diff --git a/OracleIdentityGovernance/kubernetes/create-rcu-schema/common/create-rcu-pod.sh b/OracleIdentityGovernance/kubernetes/create-rcu-schema/common/create-rcu-pod.sh index 4a7277d09..c3d7c496c 100755 --- a/OracleIdentityGovernance/kubernetes/create-rcu-schema/common/create-rcu-pod.sh +++ b/OracleIdentityGovernance/kubernetes/create-rcu-schema/common/create-rcu-pod.sh @@ -17,11 +17,11 @@ usage() { echo " Must contain SYSDBA username at key 'sys_username'," echo " SYSDBA password at key 'sys_password'," echo " and RCU schema owner password at key 'password'." - echo " -p FMW Infrastructure ImagePullSecret (optional) " + echo " -p OracleIdentityGovernance ImagePullSecret (optional) " echo " (default: none) " - echo " -i FMW Infrastructure Image (optional) " - echo " (default: container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.4) " - echo " -u FMW Infrastructure ImagePullPolicy (optional) " + echo " -i OracleIdentityGovernance Image (optional) " + echo " (default: oracle/oig:12.2.1.4.0) " + echo " -u OracleIdentityGovernance ImagePullPolicy (optional) " echo " (default: IfNotPresent) " echo " -o Output directory for the generated YAML file. 
(optional)" echo " (default: rcuoutput)" @@ -34,7 +34,7 @@ usage() { namespace="default" credSecret="oracle-rcu-secret" -fmwimage="container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.4" +fmwimage="oracle/oig:12.2.1.4.0" imagePullPolicy="IfNotPresent" rcuOutputDir="rcuoutput" @@ -101,3 +101,4 @@ ${KUBERNETES_CLI:-kubectl} exec -n $namespace -i rcu -- bash -c 'cat > /u01/orac ${KUBERNETES_CLI:-kubectl} get po/rcu -n $namespace echo "[INFO] Pod 'rcu' is running in namespace '$namespace'" + diff --git a/OracleIdentityGovernance/kubernetes/create-rcu-schema/common/createRepository.sh b/OracleIdentityGovernance/kubernetes/create-rcu-schema/common/createRepository.sh index b994e575a..dea2d6f63 100755 --- a/OracleIdentityGovernance/kubernetes/create-rcu-schema/common/createRepository.sh +++ b/OracleIdentityGovernance/kubernetes/create-rcu-schema/common/createRepository.sh @@ -1,5 +1,5 @@ #!/bin/bash -# Copyright (c) 2020, 2023, Oracle and/or its affiliates. +# Copyright (c) 2020, 2022, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # . /u01/oracle/wlserver/server/bin/setWLSEnv.sh diff --git a/OracleIdentityGovernance/kubernetes/create-rcu-schema/common/dropRepository.sh b/OracleIdentityGovernance/kubernetes/create-rcu-schema/common/dropRepository.sh index b0a5036e7..01f9bcab2 100755 --- a/OracleIdentityGovernance/kubernetes/create-rcu-schema/common/dropRepository.sh +++ b/OracleIdentityGovernance/kubernetes/create-rcu-schema/common/dropRepository.sh @@ -1,5 +1,5 @@ #!/bin/bash -# Copyright (c) 2020, 2023, Oracle and/or its affiliates. +# Copyright (c) 2020, 2022, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # . /u01/oracle/wlserver/server/bin/setWLSEnv.sh diff --git a/OracleIdentityGovernance/kubernetes/create-rcu-schema/drop-rcu-schema.sh b/OracleIdentityGovernance/kubernetes/create-rcu-schema/drop-rcu-schema.sh index 75ca2a296..163bd09ba 100755 --- a/OracleIdentityGovernance/kubernetes/create-rcu-schema/drop-rcu-schema.sh +++ b/OracleIdentityGovernance/kubernetes/create-rcu-schema/drop-rcu-schema.sh @@ -44,7 +44,7 @@ rcuType="${rcuType}" namespace="default" createPodArgs="" -while getopts ":s:t:d:n:c:p:i:u:o:v:h:" opt; do +while getopts ":s:t:d:n:c:p:i:u:o:r:h:" opt; do case $opt in s) schemaPrefix="${OPTARG}" ;; @@ -56,7 +56,7 @@ while getopts ":s:t:d:n:c:p:i:u:o:v:h:" opt; do ;; c|p|i|u|o) createPodArgs+=" -${opt} ${OPTARG}" ;; - v) customVariables="${OPTARG}" + r) customVariables="${OPTARG}" ;; h) usage 0 ;; diff --git a/OracleIdentityGovernance/kubernetes/create-weblogic-domain-credentials/create-weblogic-credentials.sh b/OracleIdentityGovernance/kubernetes/create-weblogic-domain-credentials/create-weblogic-credentials.sh index ec1d7878f..4605165d6 100755 --- a/OracleIdentityGovernance/kubernetes/create-weblogic-domain-credentials/create-weblogic-credentials.sh +++ b/OracleIdentityGovernance/kubernetes/create-weblogic-domain-credentials/create-weblogic-credentials.sh @@ -1,5 +1,5 @@ #!/usr/bin/env bash -# Copyright (c) 2020, 2023, Oracle and/or its affiliates. +# Copyright (c) 2020, 2022, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. 
 #
 # Description
diff --git a/OracleIdentityGovernance/kubernetes/create-weblogic-domain-pv-pvc/create-pv-pvc-inputs.yaml b/OracleIdentityGovernance/kubernetes/create-weblogic-domain-pv-pvc/create-pv-pvc-inputs.yaml
index 2258c14e4..be76e22fe 100755
--- a/OracleIdentityGovernance/kubernetes/create-weblogic-domain-pv-pvc/create-pv-pvc-inputs.yaml
+++ b/OracleIdentityGovernance/kubernetes/create-weblogic-domain-pv-pvc/create-pv-pvc-inputs.yaml
@@ -1,4 +1,4 @@
-# Copyright (c) 2020, 2023, Oracle and/or its affiliates.
+# Copyright (c) 2020, 2021, Oracle and/or its affiliates.
 # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
 #
 # The version of this inputs file. Do not modify.
diff --git a/OracleIdentityGovernance/kubernetes/domain-lifecycle/README.md b/OracleIdentityGovernance/kubernetes/domain-lifecycle/README.md
index b30dbb6a2..0d32131f3 100755
--- a/OracleIdentityGovernance/kubernetes/domain-lifecycle/README.md
+++ b/OracleIdentityGovernance/kubernetes/domain-lifecycle/README.md
@@ -27,6 +27,10 @@ For information on how to start, stop, restart, and scale WebLogic Server instan
 - [`kubectl --watch`](#kubectl---watch)
 - [`clusterStatus.sh`](#clusterstatussh)
 - [`waitForDomain.sh`](#waitfordomainsh)
+- [Examine, change permissions or delete PV contents](#examine-change-or-delete-pv-contents)
+  - [`pv-pvc-helper.sh`](#pv-pvc-helpersh)
+- [OPSS Wallet utility](#opss-wallet-utility)
+  - [`opss-wallet.sh`](#opss-walletsh)
 
 ### Prerequisites
@@ -274,3 +278,69 @@ Use the following command to wait for a domain to fully shut down:
 ```
 $ waitForDomain.sh -n my-namespace -d my-domain -p 0
 ```
+
+### Examine, change, or delete PV contents
+
+#### `pv-pvc-helper.sh`
+
+Use this helper script for examining, changing permissions, or deleting the contents of the persistent volume (such as domain files or logs) for a WebLogic Domain on PV or Model in Image domain.
+The script launches a Kubernetes pod named 'pvhelper' using the provided persistent volume claim name and the mount path.
+You can run the 'kubectl exec' command to get a shell to the running pod container and run commands to examine or clean up the contents of shared directories on the persistent volume.
+Use the 'kubectl delete pod pvhelper -n <namespace>' command to delete the Pod after it's no longer needed.
+
+Use the following command for script usage:
+
+```
+$ pv-pvc-helper.sh -h
+```
+
+The following is an example command to launch the helper pod with the PVC name `sample-domain1-weblogic-sample-pvc` and mount path `/shared`.
+Specifying the `-r` argument allows the script to run as the `root` user.
+
+```
+$ pv-pvc-helper.sh -n sample-domain1-ns -c sample-domain1-weblogic-sample-pvc -m /shared -r
+```
+
+After the Pod is created, use the following command to get a shell to the running pod container.
+
+```
+$ kubectl -n sample-domain1-ns exec -it pvhelper -- /bin/sh
+```
+
+After you get a shell to the running pod container, you can recursively delete the contents of the domain home and applications
+directories using the `rm -rf /shared/domains/sample-domain1` and `rm -rf /shared/applications/sample-domain1` commands. Because these
+commands will delete files on the persistent storage, we recommend that you understand and execute these commands carefully.
+
+Use the following command to delete the Pod after it's no longer needed.
+
+```
+$ kubectl delete pod pvhelper -n <namespace>
+```
+
+### OPSS Wallet utility
+
+#### `opss-wallet.sh`
+
+The OPSS wallet utility is a helper script for JRF-type domains that can save an OPSS key
+wallet from a running domain's introspector ConfigMap to a file and
+restore an OPSS key wallet file to a Kubernetes Secret for use by a
+domain that you're about to run.
+
+Use the following command for script usage:
+
+```
+$ opss-wallet.sh -?
+```
+
+For example, run the following command to save an OPSS key wallet from a running domain to the file './ewallet.p12':
+
+```
+$ opss-wallet.sh -s
+```
+
+Run the following command to restore the OPSS key wallet from the file './ewallet.p12' to the secret
+'sample-domain1-opss-walletfile-secret' for use by a domain you're about to run:
+
+```
+$ opss-wallet.sh -r
+```
diff --git a/OracleIdentityGovernance/kubernetes/domain-lifecycle/clusterStatus.sh b/OracleIdentityGovernance/kubernetes/domain-lifecycle/clusterStatus.sh
index db1c98ddf..0dccfcab0 100755
--- a/OracleIdentityGovernance/kubernetes/domain-lifecycle/clusterStatus.sh
+++ b/OracleIdentityGovernance/kubernetes/domain-lifecycle/clusterStatus.sh
@@ -1,18 +1,19 @@
 #!/bin/sh
-# Copyright (c) 2021, 2022, Oracle and/or its affiliates.
+# Copyright (c) 2021, 2023, Oracle and/or its affiliates.
 # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
 
 set -eu
 set -o pipefail
 
 usage() {
-cat<<EOF
+
+# flatten
+# helper fn to convert each stdin line of the form
+#   A B C << D E F >> << H I J >> ...
+# to one stdout line for each "<< >>" clause. E.g:
+#   A B C D E F
+#   A B C H I J
+# stdin lines with no "<< >>" are ignored...
+
+flatten() {
+  while read line
+  do
+    __flatten $line
+  done
+}
+__flatten() {
+  local prefix=""
+  while [ "${1:-}" != "<<" ]; do
+    [ -z "${1:-}" ] && return
+    prefix+="$1 "
+    shift
+  done
+  while [ "${1:-}" == "<<" ]; do
+    local suffix=""
+    shift
+    while [ "$1" != ">>" ]; do
+      suffix+="$1 "
+      shift
+    done
+    shift
+    echo $prefix $suffix
+  done
+}
+
+#
+# condition
+# helper fn to take the thirteen column input
+# and collapse some columns into an aggregate status
+#
+condition() {
+  while read line
+  do
+    __condition $line
+  done
+}
+__condition() {
+  local gen=$1
+  local ogen=$2
+  local failed=$3
+  local completed=${12}
+  local available=${13}
+  local condition="IMPOSSIBLE"
+  if [ "$failed" = "True" ]; then
+    condition="Failed"
+  elif [ ! "$gen" = "$ogen" ] || [ "$completed" = "NotSet" ] || [ "$available" = "NotSet" ]; then
+    condition="Unknown"
+  elif [ "$completed" = "True" ]; then
+    condition="Completed"
+  elif [ "$available" = "True" ]; then
+    condition="Available"
+  else
+    condition="Unavailable"
+  fi
+  echo "$4 $5 $6 $condition $available $7 $8 $9 ${10} ${11}"
+}
+
+
+#
+# clusterStatus
+# function to display the domain cluster status in a table
+# $1=ns $2=uid $3=cluster, pass "" to mean "any"
+# $4=KUBERNETES_CLI
+#
 clusterStatus() {
   local __ns="${1:-}"
   if [ -z "$__ns" ]; then
@@ -52,7 +164,7 @@ clusterStatus() {
   local __uid="${2:-}"
   local __cluster_name="${3:-}"
-  local __kubernetes_cli="${4:-kubectl}"
+  local __kubernetes_cli="${4:-${KUBERNETES_CLI:-kubectl}}"
 
   if ! [ -x "$(command -v ${__kubernetes_cli})" ]; then
     echo "@@Error: Kubernetes CLI '${__kubernetes_cli}' is not installed."
@@ -64,14 +176,18 @@ clusterStatus() {
   (
   shopt -s nullglob # causes the for loop to silently handle case where no domains match
 
+  local _domains
+  local __val
+
+  _domains="$(
+    $__kubernetes_cli $__ns_filter get domains.v9.weblogic.oracle \
+      -o=jsonpath='{range .items[*]}{.metadata.namespace}{","}{.metadata.name}{","}{.spec.domainUID}{"\n"}{end}'
+  )"
 
-  echo "namespace domain cluster min max goal current ready"
-  echo "--------- ------ ------- --- --- ---- ------- -----"
+  echo "namespace domain cluster status available min max goal current ready"
+  echo "--------- ------ ------- ------ --------- --- --- ---- ------- -----"
 
-  local __val
-  for __val in \
-    $($__kubernetes_cli $__ns_filter get domains \
-      -o=jsonpath='{range .items[*]}{.metadata.namespace}{","}{.metadata.name}{","}{.spec.domainUID}{"\n"}{end}')
+  for __val in $_domains
   do
     local __ns_cur=$( echo $__val | cut -d ',' -f 1)
     local __name_cur=$(echo $__val | cut -d ',' -f 2)
@@ -81,26 +197,46 @@ clusterStatus() {
 
     [ -n "$__uid" ] && [ ! "$__uid" = "$__uid_cur" ] && continue
 
+    # construct a json path for the domain query
+
+    __jp+='{" ~G"}{.metadata.generation}'
+    __jp+='{" ~O"}{.status.observedGeneration}'
+    __jp+='{" ~F"}{.status.conditions[?(@.type=="Failed")].status}'
     if [ -z "$__cluster_name" ]; then
       __jp+='{range .status.clusters[*]}'
     else
       __jp+='{range .status.clusters[?(@.clusterName=='\"$__cluster_name\"')]}'
     fi
-    __jp+='{"'$__ns_cur'"}'
+    __jp+='{" "}{"<<"}'
+    __jp+='{" "}{"'$__ns_cur'"}'
     __jp+='{" "}{"'$__uid_cur'"}'
     __jp+='{" "}{.clusterName}'
-    __jp+='{" ~!"}{.minimumReplicas}'
-    __jp+='{" ~!"}{.maximumReplicas}'
-    __jp+='{" ~!"}{.replicasGoal}'
-    __jp+='{" ~!"}{.replicas}'
-    __jp+='{" ~!"}{.readyReplicas}'
-    __jp+='{"\n"}'
+    __jp+='{" "}{"~!"}{.minimumReplicas}'
+    __jp+='{" "}{"~!"}{.maximumReplicas}'
+    __jp+='{" "}{"~!"}{.replicasGoal}'
+    __jp+='{" "}{"~!"}{.replicas}'
+    __jp+='{" "}{"~!"}{.readyReplicas}'
+    __jp+='{" "}{"~C"}{.conditions[?(@.type=="Completed")].status}'
+    __jp+='{" "}{"~A"}{.conditions[?(@.type=="Available")].status}'
+    __jp+='{" "}{">>"}'
     __jp+='{end}'
+    __jp+='{"\n"}'
+
+    # get the values, replace empty values with sentinels or '0' as appropriate,
+    # and remove all '~?' prefixes
 
-    $__kubernetes_cli -n "$__ns_cur" get domain "$__uid_cur" -o=jsonpath="$__jp"
+    $__kubernetes_cli -n "$__ns_cur" get domains.v9.weblogic.oracle "$__name_cur" -o=jsonpath="$__jp" \
+      | sed 's/~G /~GunknownGen /g' \
+      | sed 's/~O /~OunknownOGen /g' \
+      | sed 's/~F /~FNotSet /g' \
+      | sed 's/~C /~CNotSet /g' \
+      | sed 's/~A /~ANotSet /g' \
+      | sed 's/~[A-Z]//g' \
+      | sed 's/~!\([0-9][0-9]*\)/\1/g' \
+      | sed 's/~!/0/g'
 
-  done | sed 's/~!\([0-9][0-9]*\)/\1/g'\
-       | sed 's/~!/0/g' \
+  done | flatten \
+       | condition \
        | sort --version-sort
   ) | column --table
@@ -111,7 +247,7 @@ clusterStatus() {
 domainNS=
 domainUID=
 clusterName=
-kubernetesCli=${KUBERNETES_CLI:-kubectl}
+kubernetesCli=
 
 set +u
 while [ ! -z ${1+x} ]; do
diff --git a/OracleIdentityGovernance/kubernetes/domain-lifecycle/opss-wallet.sh b/OracleIdentityGovernance/kubernetes/domain-lifecycle/opss-wallet.sh
new file mode 100755
index 000000000..933cb7de0
--- /dev/null
+++ b/OracleIdentityGovernance/kubernetes/domain-lifecycle/opss-wallet.sh
@@ -0,0 +1,152 @@
+#!/bin/bash
+# Copyright (c) 2019, 2023, Oracle and/or its affiliates.
+# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
+
+#
+# This is a helper script for JRF type domains that can save an OPSS key
+# wallet from a running domain's introspector configmap to a file, and/or
+# restore an OPSS key wallet file to a Kubernetes secret for use by a
+# domain that you're about to run.
+#
+# For command line details, pass '-?' or see 'usage_exit()' below.
+#
+
+set -e
+set -o pipefail
+
+usage_exit() {
+cat << EOF
+
+  Usage:
+
+    $(basename $0) -s [-d domain-uid] [-n namespace] \\
+      [-wf wallet-file-name]
+
+    $(basename $0) -r [-d domain-uid] [-n namespace] \\
+      [-wf wallet-file-name] [-ws wallet-file-secret]
+
+    $(basename $0) -r -s [-d domain-uid] [-n namespace] \\
+      [-wf wallet-file-name] [-ws wallet-file-secret]
+
+  Save an OPSS key wallet from a running JRF domain's introspector
+  configmap to a file, and/or restore an OPSS key wallet file
+  to a Kubernetes secret for use by a domain that you're about to run.
+
+  Parameters:
+
+    -d  Domain UID. Default 'sample-domain1'.
+
+    -n  Kubernetes namespace. Default 'sample-domain1-ns'.
+
+    -s  Save an OPSS wallet file from an introspector
+        configmap to a file. (See also '-wf'.)
+
+    -r  Restore an OPSS wallet file to a Kubernetes secret.
+        (See also '-wf' and '-ws').
+
+    -wf Name of OPSS wallet file on local file system.
+        Default is './ewallet.p12'.
+
+    -ws Name of Kubernetes secret to create from the
+        wallet file. This must match the
+        'configuration.opss.walletFileSecret'
+        configured in your domain resource.
+        Ignored if '-r' not specified.
+        Default is 'DOMAIN_UID-opss-walletfile-secret'.
+
+    -?  Output this help message.
+
+  Examples:
+
+    Save an OPSS key wallet from a running domain to file './ewallet.p12':
+      $(basename $0) -s
+
+    Restore the OPSS key wallet from file './ewallet.p12' to secret
+    'sample-domain1-opss-walletfile-secret' for use by a domain
+    you're about to run:
+      $(basename $0) -r
+
+EOF
+
+  exit 0
+}
+
+SCRIPTDIR="$( cd "$(dirname "$0")" > /dev/null 2>&1 ; pwd -P )"
+echo "@@ Info: Running '$(basename "$0")'."
+
+DOMAIN_UID="sample-domain1"
+DOMAIN_NAMESPACE="sample-domain1-ns"
+WALLET_FILE="ewallet.p12"
+WALLET_SECRET=""
+
+syntax_error_exit() {
+  echo "@@ Syntax error: Use '-?' for usage."
+  exit 1
+}
+
+SAVE_WALLET=0
+RESTORE_WALLET=0
+
+while [ ! "$1" = "" ]; do
+  case "$1" in
+    -n)  [ -z "$2" ] && syntax_error_exit
+         DOMAIN_NAMESPACE="${2}"
+         shift
+         ;;
+    -d)  [ -z "$2" ] && syntax_error_exit
+         DOMAIN_UID="${2}"
+         shift
+         ;;
+    -s)  SAVE_WALLET=1
+         ;;
+    -r)  RESTORE_WALLET=1
+         ;;
+    -ws) [ -z "$2" ] && syntax_error_exit
+         WALLET_SECRET="${2}"
+         shift
+         ;;
+    -wf) [ -z "$2" ] && syntax_error_exit
+         WALLET_FILE="${2}"
+         shift
+         ;;
+    -?)  usage_exit
+         ;;
+    *)   syntax_error_exit
+         ;;
+  esac
+  shift
+done
+
+[ ${SAVE_WALLET} -eq 0 ] && [ ${RESTORE_WALLET} -eq 0 ] && syntax_error_exit
+
+WALLET_SECRET=${WALLET_SECRET:-$DOMAIN_UID-opss-walletfile-secret}
+
+set -eu
+
+if [ ${SAVE_WALLET} -eq 1 ] ; then
+  echo "@@ Info: Saving wallet from configmap '${DOMAIN_UID}-weblogic-domain-introspect-cm' in namespace '${DOMAIN_NAMESPACE}' to file '${WALLET_FILE}'."
+  ${KUBERNETES_CLI:-kubectl} -n ${DOMAIN_NAMESPACE} \
+    get configmap ${DOMAIN_UID}-weblogic-domain-introspect-cm \
+    -o jsonpath='{.data.ewallet\.p12}' \
+    > ${WALLET_FILE}
+fi
+
+if [ ! -f "$WALLET_FILE" ]; then
+  echo "@@ Error: Wallet file '$WALLET_FILE' not found."
+  exit 1
+fi
+
+FILESIZE=$(du -k "$WALLET_FILE" | cut -f1)
+if [ $FILESIZE = 0 ]; then
+  echo "@@ Error: Wallet file '$WALLET_FILE' is empty. Is this a JRF domain? The wallet file will be empty for a non-RCU/non-JRF domain."
+  exit 1
+fi
+
+if [ ${RESTORE_WALLET} -eq 1 ] ; then
+  echo "@@ Info: Creating secret '${WALLET_SECRET}' in namespace '${DOMAIN_NAMESPACE}' for wallet file '${WALLET_FILE}', domain uid '${DOMAIN_UID}'."
+  $SCRIPTDIR/create-secret.sh \
+    -n ${DOMAIN_NAMESPACE} \
+    -d ${DOMAIN_UID} \
+    -s ${WALLET_SECRET} \
+    -f walletFile=${WALLET_FILE}
+fi
diff --git a/OracleIdentityGovernance/kubernetes/domain-lifecycle/pv-pvc-helper.sh b/OracleIdentityGovernance/kubernetes/domain-lifecycle/pv-pvc-helper.sh
new file mode 100755
index 000000000..c49179c5d
--- /dev/null
+++ b/OracleIdentityGovernance/kubernetes/domain-lifecycle/pv-pvc-helper.sh
@@ -0,0 +1,223 @@
+#!/bin/bash
+# Copyright (c) 2023, Oracle and/or its affiliates.
+# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
+
+# Launch a "Persistent volume cleanup helper" pod for examining or cleaning up the contents
+# of the domain directory on a persistent volume.
+
+script="${BASH_SOURCE[0]}"
+scriptDir="$( cd "$( dirname "${script}" )" && pwd )"
+source ${scriptDir}/helper.sh
+source ${scriptDir}/../common/utility.sh
+set -eu
+
+initGlobals() {
+  KUBERNETES_CLI=${KUBERNETES_CLI:-kubectl}
+  claimName=""
+  mountPath=""
+  namespace="default"
+  image="ghcr.io/oracle/oraclelinux:8-slim"
+  imagePullPolicy="IfNotPresent"
+  pullsecret=""
+  pullsecretPrefix=""  # initialized here so 'set -eu' does not fail when '-p' is passed
+  runAsRoot=""
+}
+
+usage() {
+  cat << EOF
+
+  This is a helper script for examining, changing permissions, or deleting the contents of the persistent
+  volume (such as domain files or logs) for a WebLogic Domain on PV or Model in Image domain.
+  The script launches a Kubernetes pod named 'pvhelper' using the provided persistent volume claim name and the mount path.
+  You can run the '${KUBERNETES_CLI} exec' command to get a shell to the running pod container and run commands to examine or clean up the contents of
+  shared directories on the persistent volume.
+  If the helper pod is already running in the namespace with the provided options, then it doesn't create a new pod.
+  If the helper pod is already running and the persistent volume claim name or mount path doesn't match, then the script generates an error.
+  Use the '${KUBERNETES_CLI} delete pod pvhelper -n <namespace>' command to delete the pod when it's no longer needed.
+
+  Please see README.md for more details.
+
+  Usage:
+
+    $(basename $0) -c persistentVolumeClaimName -m mountPath [-n namespace] [-i image] [-u imagePullPolicy] [-p imagePullSecret] [-r] [-h]
+
+    [-c | --claimName] : Persistent volume claim name. This parameter is required.
+
+    [-m | --mountPath] : Mount path of the persistent volume in helper pod. This parameter is required.
+
+    [-n | --namespace] : Domain namespace. Default is 'default'.
+
+    [-i | --image] : Container image for the helper pod (optional). Default is 'ghcr.io/oracle/oraclelinux:8-slim'.
+
+    [-u | --imagePullPolicy] : Image pull policy for the helper pod (optional). Default is 'IfNotPresent'.
+
+    [-p | --pullsecret] : Image pull secret for the helper pod (optional). Default is 'None'.
+
+    [-r | --runAsRoot] : Option to run the pod as a root user. Default is 'runAsNonRoot'.
+
+    [-h | --help] : This help.
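+
+  Example (values are illustrative and match the README walkthrough):
+
+    $(basename $0) -n sample-domain1-ns -c sample-domain1-weblogic-sample-pvc -m /shared -r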
+ +EOF +exit $1 +} + +processCommandLine() { + while [[ "$#" -gt "0" ]]; do + key="$1" + case $key in + -c|--claimName) + claimName="$2" + shift + ;; + -m|--mountPath) + mountPath="$2" + shift + ;; + -n|--namespace) + namespace="$2" + shift + ;; + -i|--image) + image="$2" + shift + ;; + -u|--imagePullPolicy) + imagePullPolicy="$2" + shift + ;; + -p|--pullsecret) + pullsecret="$2" + shift + ;; + -r|--runAsRoot) + runAsRoot="#" + ;; + -h|--help) + usage 0 + ;; + -*|--*) + echo "Unknown option $1" + usage 1 + ;; + *) + # unknown option + ;; + esac + shift # past arg or value + done +} + +validatePvc() { + if [ -z "${claimName}" ]; then + printError "${script}: -c persistentVolumeClaimName must be specified." + usage 1 + fi + + pvc=$(${KUBERNETES_CLI} get pvc ${claimName} -n ${namespace} --ignore-not-found) + if [ -z "${pvc}" ]; then + printError "${script}: Persistent volume claim '$claimName' does not exist in namespace ${namespace}. \ + Please specify an existing persistent volume claim name using '-c' parameter." + exit 1 + fi +} + +validateMountPath() { + if [ -z "${mountPath}" ]; then + printError "${script}: -m mountPath must be specified." + usage 1 + elif [[ ! "$mountPath" =~ '/' ]] && [[ ! "$mountPath" =~ '\' ]]; then + printError "${script}: -m mountPath is not a valid path." + usage 1 + fi +} + +checkAndDefaultPullSecret() { + if [ -z "${pullsecret}" ]; then + pullsecret="none" + pullsecretPrefix="#" + fi +} + +validateParameters() { + validatePvc + validateMountPath + checkAndDefaultPullSecret +} + + +processExistingPod() { + existingMountPath=$(${KUBERNETES_CLI} get po pvhelper -n ${namespace} -o jsonpath='{.spec.containers[0].volumeMounts[0].mountPath}') + existingClaimName=$(${KUBERNETES_CLI} get po pvhelper -n ${namespace} -o jsonpath='{.spec.volumes[0].persistentVolumeClaim.claimName}') + if [ "$existingMountPath" != "$mountPath" ]; then + printError "Pod 'pvhelper' already exists in namespace '$namespace' but the mount path \ + '$mountPath' doesn't match the mount path '$existingMountPath' for existing pod. \ + Please delete the existing pod using '${KUBERNETES_CLI} delete pod pvhelper -n $namespace'\ + command to create a new pod." + exit 1 + fi + if [ "$existingClaimName" != "$claimName" ]; then + printError "Pod 'pvhelper' already exists but the claim name '$claimName' doesn't match \ + the claim name '$existingClaimName' of existing pod. Please delete the existing pod \ + using '${KUBERNETES_CLI} delete pod pvhelper -n $namespace' command to create a new pod." + exit 1 + fi + printInfo "Pod 'pvhelper' exists in namespace '$namespace'." +} + +createPod() { + printInfo "Creating pod 'pvhelper' using image '${image}', persistent volume claim \ + '${claimName}' and mount path '${mountPath}'." 
+
+  pvhelperYamlTemp=${scriptDir}/template/pvhelper.yaml.template
+  template="$(cat ${pvhelperYamlTemp})"
+
+  template=$(echo "$template" | sed -e "s:%NAMESPACE%:${namespace}:g;\
+    s:%WEBLOGIC_IMAGE_PULL_POLICY%:${imagePullPolicy}:g;\
+    s:%WEBLOGIC_IMAGE_PULL_SECRET_NAME%:${pullsecret}:g;\
+    s:%WEBLOGIC_IMAGE_PULL_SECRET_PREFIX%:${pullsecretPrefix}:g;\
+    s:%CLAIM_NAME%:${claimName}:g;s:%VOLUME_MOUNT_PATH%:${mountPath}:g;\
+    s:%RUN_AS_ROOT_PREFIX%:${runAsRoot}:g;\
+    s?image:.*?image: ${image}?g")
+  ${KUBERNETES_CLI} delete po pvhelper -n ${namespace} --ignore-not-found
+  echo "$template" | ${KUBERNETES_CLI} apply -f -
+}
+
+printCommandOutput() {
+  printInfo "Executing '${KUBERNETES_CLI} -n $namespace exec -i pvhelper -- ls -l ${mountPath}' \
+    command to print the contents of the mount path in the persistent volume."
+
+  cmdOut=$(${KUBERNETES_CLI} -n $namespace exec -i pvhelper -- ls -l ${mountPath})
+  printInfo "=============== Command output ===================="
+  echo "$cmdOut"
+  printInfo "==================================================="
+}
+
+printPodUsage() {
+  printInfo "Use command '${KUBERNETES_CLI} -n $namespace exec -it pvhelper -- /bin/sh' and \
+    cd to '${mountPath}' directory to view or delete the contents on the persistent volume."
+  printInfo "Use command '${KUBERNETES_CLI} -n $namespace delete pod pvhelper' to delete the pod \
+    created by the script."
+}
+
+main() {
+  pvhelperpod=`${KUBERNETES_CLI} get po -n ${namespace} | grep "^pvhelper " | cut -f1 -d " " `
+  if [ "$pvhelperpod" = "pvhelper" ]; then
+    processExistingPod
+  else
+    createPod
+  fi
+
+  checkPod pvhelper $namespace # exits non-zero on error
+
+  checkPodState pvhelper $namespace "1/1" # exits non-zero on error
+
+  sleep 5
+
+  printCommandOutput
+
+  printPodUsage
+}
+
+initGlobals
+processCommandLine "${@}"
+validateParameters
+main
diff --git a/OracleIdentityGovernance/kubernetes/domain-lifecycle/template/pvhelper.yaml.template b/OracleIdentityGovernance/kubernetes/domain-lifecycle/template/pvhelper.yaml.template
new file mode 100755
index 000000000..10aa199cf
--- /dev/null
+++ b/OracleIdentityGovernance/kubernetes/domain-lifecycle/template/pvhelper.yaml.template
@@ -0,0 +1,35 @@
+# Copyright (c) 2023, Oracle and/or its affiliates.
+# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
+# +apiVersion: v1 +kind: Pod +metadata: + labels: + run: pvhelper + name: pvhelper + namespace: %NAMESPACE% +spec: + containers: + - args: + - sleep + - infinity + image: ghcr.io/oracle/oraclelinux:8-slim + imagePullPolicy: %WEBLOGIC_IMAGE_PULL_POLICY% + name: pvhelper + volumeMounts: + - name: pv-volume + mountPath: %VOLUME_MOUNT_PATH% + %RUN_AS_ROOT_PREFIX%securityContext: + %RUN_AS_ROOT_PREFIX% allowPrivilegeEscalation: false + %RUN_AS_ROOT_PREFIX% capabilities: + %RUN_AS_ROOT_PREFIX% drop: + %RUN_AS_ROOT_PREFIX% - ALL + %RUN_AS_ROOT_PREFIX% privileged: false + %RUN_AS_ROOT_PREFIX% runAsNonRoot: true + %RUN_AS_ROOT_PREFIX% runAsUser: 1000 + volumes: + - name: pv-volume + persistentVolumeClaim: + claimName: %CLAIM_NAME% + %WEBLOGIC_IMAGE_PULL_SECRET_PREFIX%imagePullSecrets: + %WEBLOGIC_IMAGE_PULL_SECRET_PREFIX%- name: %WEBLOGIC_IMAGE_PULL_SECRET_NAME% diff --git a/OracleIdentityGovernance/kubernetes/domain-lifecycle/waitForDomain.sh b/OracleIdentityGovernance/kubernetes/domain-lifecycle/waitForDomain.sh index b0e0c2682..cca3456d3 100755 --- a/OracleIdentityGovernance/kubernetes/domain-lifecycle/waitForDomain.sh +++ b/OracleIdentityGovernance/kubernetes/domain-lifecycle/waitForDomain.sh @@ -305,8 +305,8 @@ getDomainInfo() { getDomainAIImages domain_info_goal_aiimages_current getDomainValue domain_info_api_version ".apiVersion" getDomainValue domain_info_condition_failed_str ".status.conditions[?(@.type==\"Failed\")]" # has full failure messages, if any - getDomainValue domain_info_condition_completed ".status.conditions[?(@.type==\"Completed\")].status" # "True" when complete getDomainValue domain_info_observed_generation ".status.observedGeneration" + getDomainValue domain_info_condition_completed ".status.conditions[?(@.type==\"Completed\")].status" # "True" when complete domain_info_clusters=$( echo "$domain_info_clusters" | sed 's/"name"//g' | tr -d '[]{}:' | sortlist | sed 's/,/ /') # convert to sorted space separated list diff --git a/OracleIdentityGovernance/kubernetes/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml b/OracleIdentityGovernance/kubernetes/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml index f92ca5892..aad378cbb 100755 --- a/OracleIdentityGovernance/kubernetes/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml +++ b/OracleIdentityGovernance/kubernetes/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml @@ -56,13 +56,17 @@ spec: securityContext: capabilities: add: ["SYS_CHROOT"] - image: "elasticsearch:6.8.23" + image: "elasticsearch:7.8.1" ports: - containerPort: 9200 - containerPort: 9300 env: + - name: discovery.type + value: single-node - name: ES_JAVA_OPTS value: -Xms1024m -Xmx1024m + - name: bootstrap.memory_lock + value: "false" --- kind: "Service" @@ -103,7 +107,7 @@ spec: spec: containers: - name: "kibana" - image: "kibana:6.8.23" + image: "kibana:7.8.1" ports: - containerPort: 5601 imagePullSecrets: diff --git a/OracleIdentityGovernance/kubernetes/kubectlserch b/OracleIdentityGovernance/kubernetes/kubectlserch deleted file mode 100755 index 045605e43..000000000 --- a/OracleIdentityGovernance/kubernetes/kubectlserch +++ /dev/null @@ -1,111 +0,0 @@ -./charts/apache-samples/custom-sample/README.md:7:$ ${KUBERNETES_CLI:-kubectl} create namespace apache-sample -./charts/apache-webtier/README.md:81:$ ${KUBERNETES_CLI:-kubectl} api-versions | grep rbac -./create-rcu-schema/create-rcu-schema.sh.mustache:59: pname=`${KUBERNETES_CLI:-kubectl} get po -n ${ns} | grep -w ${pod} | awk '{print $1}'` 
-./create-rcu-schema/create-rcu-schema.sh.mustache:65: rcode=`${KUBERNETES_CLI:-kubectl} get po ${pname} -n ${ns} | grep -w ${pod} | awk '{print $2}'` -./create-rcu-schema/create-rcu-schema.sh.mustache:70: rcode=`${KUBERNETES_CLI:-kubectl} get po/$pod -n ${ns} | grep -v NAME | awk '{print $2}'` -./create-rcu-schema/create-rcu-schema.sh.mustache:80: pname=`${KUBERNETES_CLI:-kubectl} get po -n ${ns} | grep -w ${pod} | awk '{print $1}'` -./create-rcu-schema/create-rcu-schema.sh.mustache:81: ${KUBERNETES_CLI:-kubectl} -n ${ns} get po ${pname} -./create-rcu-schema/create-rcu-schema.sh.mustache:123:${KUBERNETES_CLI:-kubectl} exec -n $namespace -i rcu -- /bin/bash /u01/oracle/createRepository.sh ${dburl} ${schemaPrefix} ${rcuType} ${customVariables} -./create-rcu-schema/README.md.mustache:28:$ ${KUBERNETES_CLI:-kubectl} -n default create secret generic oracle-rcu-secret \ -./create-rcu-schema/README.md.mustache:71:$ ${KUBERNETES_CLI:-kubectl} -n MYNAMESPACE create secret generic oracle-rcu-secret \ -./create-rcu-schema/README.md.mustache:203:$ ${KUBERNETES_CLI:-kubectl} -n default create secret generic oracle-rcu-secret \ -./create-rcu-schema/drop-rcu-schema.sh.mustache:78:#fmwimage=`${KUBERNETES_CLI:-kubectl} get pod/rcu -o jsonpath="{..image}"` -./create-rcu-schema/drop-rcu-schema.sh.mustache:81:${KUBERNETES_CLI:-kubectl} exec -n $namespace -i rcu -- /bin/bash /u01/oracle/dropRepository.sh ${dburl} ${schemaPrefix} ${rcuType} ${customVariables} -./create-rcu-schema/common/create-rcu-pod.sh.mustache:67:rcupod=`${KUBERNETES_CLI:-kubectl} get po -n ${namespace} | grep "^rcu " | cut -f1 -d " " ` -./create-rcu-schema/common/create-rcu-pod.sh.mustache:86: ${KUBERNETES_CLI:-kubectl} delete po rcu -n ${namespace} --ignore-not-found -./create-rcu-schema/common/create-rcu-pod.sh.mustache:87: ${KUBERNETES_CLI:-kubectl} apply -f $rcuYaml -./create-rcu-schema/common/create-rcu-pod.sh.mustache:98:${KUBERNETES_CLI:-kubectl} exec -n $namespace -i rcu -- bash -c 'cat > /u01/oracle/dropRepository.sh' < ${scriptDir}/dropRepository.sh || exit -5 -./create-rcu-schema/common/create-rcu-pod.sh.mustache:99:${KUBERNETES_CLI:-kubectl} exec -n $namespace -i rcu -- bash -c 'cat > /u01/oracle/createRepository.sh' < ${scriptDir}/createRepository.sh || exit -6 -./create-rcu-schema/common/create-rcu-pod.sh.mustache:101:${KUBERNETES_CLI:-kubectl} get po/rcu -n $namespace -./create-rcu-schema/create-image-pull-secret.sh.mustache:57:${KUBERNETES_CLI:-kubectl} delete secret/${secret} --ignore-not-found -./create-rcu-schema/create-image-pull-secret.sh.mustache:59:${KUBERNETES_CLI:-kubectl} create secret docker-registry ${secret} --docker-server=container-registry.oracle.com --docker-username=${username} --docker-password=${password} --docker-email=${email} -./create-rcu-credentials/README.md.mustache:36:You can check the secret with the `${KUBERNETES_CLI:-kubectl} describe secret` command. An example is shown below, -./create-rcu-credentials/README.md.mustache:40:$ ${KUBERNETES_CLI:-kubectl} -n <%namespace%> describe secret <%domainUID%>-rcu-credentials -o yaml -./create-rcu-credentials/create-rcu-credentials.sh.mustache:34:# Try to execute ${KUBERNETES_CLI:-kubectl} to see whether ${KUBERNETES_CLI:-kubectl} is available -./create-rcu-credentials/create-rcu-credentials.sh.mustache:36: if ! 
[ -x "$(command -v ${KUBERNETES_CLI:-kubectl})" ]; then -./create-rcu-credentials/create-rcu-credentials.sh.mustache:37: fail "${KUBERNETES_CLI:-kubectl} is not installed" -./create-rcu-credentials/create-rcu-credentials.sh.mustache:132:result=$(${KUBERNETES_CLI:-kubectl} get secret "${secretName}" -n "${namespace}" --ignore-not-found=true | grep "${secretName}" | wc | awk ' { print $1; }') -./create-rcu-credentials/create-rcu-credentials.sh.mustache:138:${KUBERNETES_CLI:-kubectl} -n "$namespace" create secret generic "$secretName" \ -./create-rcu-credentials/create-rcu-credentials.sh.mustache:146: ${KUBERNETES_CLI:-kubectl} label secret "${secretName}" -n "$namespace" weblogic.domainUID="$domainUID" weblogic.domainName="$domainUID" -./create-rcu-credentials/create-rcu-credentials.sh.mustache:150:SECRET=$(${KUBERNETES_CLI:-kubectl} get secret "${secretName}" -n "${namespace}" | grep "${secretName}" | wc | awk ' { print $1; }') -./logging-services/weblogic-logging-exporter/README.md.mustache:11:$ ${KUBERNETES_CLI:-kubectl} create -f https://raw.githubusercontent.com/oracle/weblogic-kubernetes-operator/master/kubernetes/samples/scripts/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml -./logging-services/weblogic-logging-exporter/README.md.mustache:35: $ ${KUBERNETES_CLI:-kubectl} cp weblogic-logging-exporter.jar <%namespace%>/<%domainUID%>-<%adminServerNameToLegal%>:/u01/oracle/user_projects/domains/<%domainUID%>/ -./logging-services/weblogic-logging-exporter/README.md.mustache:36: $ ${KUBERNETES_CLI:-kubectl} cp snakeyaml-1.27.jar <%namespace%>/<%domainUID%>-<%adminServerNameToLegal%>:/u01/oracle/user_projects/domains/<%domainUID%>/ -./logging-services/weblogic-logging-exporter/README.md.mustache:65: $ ${KUBERNETES_CLI:-kubectl} cp <%namespace%>/<%domainUID%>-<%adminServerNameToLegal%>:/u01/oracle/user_projects/domains/<%domainUID%>/bin/setDomainEnv.sh setDomainEnv.sh -./logging-services/weblogic-logging-exporter/README.md.mustache:75: $ ${KUBERNETES_CLI:-kubectl} cp setDomainEnv.sh <%namespace%>/<%domainUID%>-<%adminServerNameToLegal%>:/u01/oracle/user_projects/domains/<%domainUID%>/bin/setDomainEnv.sh -./logging-services/weblogic-logging-exporter/README.md.mustache:81: $ ${KUBERNETES_CLI:-kubectl} cp WebLogicLoggingExporter.yaml <%namespace%>/<%domainUID%>-<%adminServerNameToLegal%>:/u01/oracle/user_projects/domains/<%domainUID%>/config/ -./logging-services/weblogic-logging-exporter/README.md.mustache:104: $ ${KUBERNETES_CLI:-kubectl} get pods -n <%namespace%> -./logging-services/weblogic-logging-exporter/README.md.mustache:123: $ ${KUBERNETES_CLI:-kubectl} get pods -n <%namespace%> -./logging-services/logstash/README.md.mustache:35: $ ${KUBERNETES_CLI:-kubectl} get pvc -n <%namespace%> -./logging-services/logstash/README.md.mustache:41: $ ${KUBERNETES_CLI:-kubectl} cp logstash.conf <%namespace%>/<%domainUID%>-<%adminServerNameToLegal%>:/u01/oracle/user_projects/domains --namespace <%namespace%> -./logging-services/logstash/README.md.mustache:55: $ ${KUBERNETES_CLI:-kubectl} create -f logstash.yaml -./monitoring-service/README.md.mustache:6:- Have Docker and a Kubernetes cluster running and have `${KUBERNETES_CLI:-kubectl}` installed and configured. 
-./monitoring-service/README.md.mustache:34: $ ${KUBERNETES_CLI:-kubectl} create -f manifests/setup -./monitoring-service/README.md.mustache:35: $ until ${KUBERNETES_CLI:-kubectl} get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done -./monitoring-service/README.md.mustache:36: $ ${KUBERNETES_CLI:-kubectl} create -f manifests/ -./monitoring-service/README.md.mustache:42: $ ${KUBERNETES_CLI:-kubectl} label nodes --all kubernetes.io/os=linux -./monitoring-service/README.md.mustache:48: $ ${KUBERNETES_CLI:-kubectl} patch svc grafana -n monitoring --type=json -p '[{"op": "replace", "path": "/spec/type", "value": "NodePort" },{"op": "replace", "path": "/spec/ports/0/nodePort", "value": 32100 }]' -./monitoring-service/README.md.mustache:50: $ ${KUBERNETES_CLI:-kubectl} patch svc prometheus-k8s -n monitoring --type=json -p '[{"op": "replace", "path": "/spec/type", "value": "NodePort" },{"op": "replace", "path": "/spec/ports/0/nodePort", "value": 32101 }]' -./monitoring-service/README.md.mustache:52: $ ${KUBERNETES_CLI:-kubectl} patch svc alertmanager-main -n monitoring --type=json -p '[{"op": "replace", "path": "/spec/type", "value": "NodePort" },{"op": "replace", "path": "/spec/ports/0/nodePort", "value": 32102 }]' -./monitoring-service/README.md.mustache:100:$ ${KUBERNETES_CLI:-kubectl} cp wls-exporter-deploy /:/u01/oracle -./monitoring-service/README.md.mustache:101:$ ${KUBERNETES_CLI:-kubectl} cp deploy-weblogic-monitoring-exporter.py /:/u01/oracle/wls-exporter-deploy -./monitoring-service/README.md.mustache:102:$ ${KUBERNETES_CLI:-kubectl} exec -it -n -- /u01/oracle/oracle_common/common/bin/wlst.sh /u01/oracle/wls-exporter-deploy/deploy-weblogic-monitoring-exporter.py \ -./monitoring-service/README.md.mustache:116:$ ${KUBERNETES_CLI:-kubectl} cp wls-exporter-deploy <%namespace%>/<%domainUID%>-<%adminServerNameToLegal%>:/u01/oracle -./monitoring-service/README.md.mustache:117:$ ${KUBERNETES_CLI:-kubectl} cp deploy-weblogic-monitoring-exporter.py <%namespace%>/<%domainUID%>-<%adminServerNameToLegal%>:/u01/oracle/wls-exporter-deploy -./monitoring-service/README.md.mustache:118:$ ${KUBERNETES_CLI:-kubectl} exec -it -n <%namespace%> <%domainUID%>-<%adminServerNameToLegal%> -- /u01/oracle/oracle_common/common/bin/wlst.sh /u01/oracle/wls-exporter-deploy/deploy-weblogic-monitoring-exporter.py \ -./monitoring-service/README.md.mustache:151:$ ${KUBERNETES_CLI:-kubectl} apply -f . 
-./monitoring-service/scripts/undeploy-weblogic-monitoring-exporter.sh.mustache:21:username=`${KUBERNETES_CLI:-kubectl} get secrets ${weblogicCredentialsSecretName} -n ${domainNamespace} -o=jsonpath='{.data.username}'|base64 --decode` -./monitoring-service/scripts/undeploy-weblogic-monitoring-exporter.sh.mustache:22:password=`${KUBERNETES_CLI:-kubectl} get secrets ${weblogicCredentialsSecretName} -n ${domainNamespace} -o=jsonpath='{.data.password}'|base64 --decode` -./monitoring-service/scripts/undeploy-weblogic-monitoring-exporter.sh.mustache:36:${KUBERNETES_CLI:-kubectl} cp $scriptDir/undeploy-weblogic-monitoring-exporter.py ${domainNamespace}/${adminServerPodName}:/u01/oracle/undeploy-weblogic-monitoring-exporter.py -./monitoring-service/scripts/undeploy-weblogic-monitoring-exporter.sh.mustache:37:EXEC_UNDEPLOY="${KUBERNETES_CLI:-kubectl} exec -it -n ${domainNamespace} ${adminServerPodName} -- /u01/oracle/oracle_common/common/bin/wlst.sh /u01/oracle/undeploy-weblogic-monitoring-exporter.py ${InputParameterList}" -./monitoring-service/scripts/deploy-weblogic-server-grafana-dashboard.sh.mustache:23:grafanaEndpointIP=$(${KUBERNETES_CLI:-kubectl} get endpoints ${monitoringNamespace}-grafana -n ${monitoringNamespace} -o=jsonpath="{.subsets[].addresses[].ip}") -./monitoring-service/scripts/deploy-weblogic-server-grafana-dashboard.sh.mustache:24:grafanaEndpointPort=$(${KUBERNETES_CLI:-kubectl} get endpoints ${monitoringNamespace}-grafana -n ${monitoringNamespace} -o=jsonpath="{.subsets[].ports[].port}") -./monitoring-service/scripts/deploy-weblogic-server-grafana-dashboard.sh.mustache:26:${KUBERNETES_CLI:-kubectl} cp $scriptDir/../config/weblogic-server-dashboard.json ${domainNamespace}/${adminServerPodName}:/tmp/weblogic-server-dashboard.json -./monitoring-service/scripts/deploy-weblogic-server-grafana-dashboard.sh.mustache:27:EXEC_DEPLOY="${KUBERNETES_CLI:-kubectl} exec -it -n ${domainNamespace} ${adminServerPodName} -- curl --noproxy \"*\" -X POST -H \"Content-Type: application/json\" -d @/tmp/weblogic-server-dashboard.json http://admin:admin@${grafanaEndpoint}/api/dashboards/db" -./monitoring-service/scripts/deploy-weblogic-monitoring-exporter.sh.mustache:22:username=`${KUBERNETES_CLI:-kubectl} get secrets ${weblogicCredentialsSecretName} -n ${domainNamespace} -o=jsonpath='{.data.username}'|base64 --decode` -./monitoring-service/scripts/deploy-weblogic-monitoring-exporter.sh.mustache:23:password=`${KUBERNETES_CLI:-kubectl} get secrets ${weblogicCredentialsSecretName} -n ${domainNamespace} -o=jsonpath='{.data.password}'|base64 --decode` -./monitoring-service/scripts/deploy-weblogic-monitoring-exporter.sh.mustache:36:${KUBERNETES_CLI:-kubectl} cp $scriptDir/wls-exporter-deploy ${domainNamespace}/${adminServerPodName}:/u01/oracle -./monitoring-service/scripts/deploy-weblogic-monitoring-exporter.sh.mustache:37:${KUBERNETES_CLI:-kubectl} cp $scriptDir/deploy-weblogic-monitoring-exporter.py ${domainNamespace}/${adminServerPodName}:/u01/oracle/wls-exporter-deploy -./monitoring-service/scripts/deploy-weblogic-monitoring-exporter.sh.mustache:38:EXEC_DEPLOY="${KUBERNETES_CLI:-kubectl} exec -it -n ${domainNamespace} ${adminServerPodName} -- /u01/oracle/oracle_common/common/bin/wlst.sh /u01/oracle/wls-exporter-deploy/deploy-weblogic-monitoring-exporter.py ${InputParameterList}" -./monitoring-service/delete-monitoring.sh.mustache:106:${KUBERNETES_CLI:-kubectl} delete --ignore-not-found=true -f ${serviceMonitor} -./monitoring-service/setup-monitoring.sh.mustache:133: if test 
"$(${KUBERNETES_CLI:-kubectl} get namespace ${monitoringNamespace} --ignore-not-found | wc -l)" = 0; then -./monitoring-service/setup-monitoring.sh.mustache:135: ${KUBERNETES_CLI:-kubectl} create namespace ${monitoringNamespace} -./monitoring-service/setup-monitoring.sh.mustache:140: ${KUBERNETES_CLI:-kubectl} label nodes --all kubernetes.io/os=linux --overwrite=true -./monitoring-service/setup-monitoring.sh.mustache:149:export username=`${KUBERNETES_CLI:-kubectl} get secrets ${weblogicCredentialsSecretName} -n ${domainNamespace} -o=jsonpath='{.data.username}'|base64 --decode` -./monitoring-service/setup-monitoring.sh.mustache:150:export password=`${KUBERNETES_CLI:-kubectl} get secrets ${weblogicCredentialsSecretName} -n ${domainNamespace} -o=jsonpath='{.data.password}'|base64 --decode` -./monitoring-service/setup-monitoring.sh.mustache:170:${KUBERNETES_CLI:-kubectl} apply -f ${serviceMonitor} -./create-oracle-db-service/README.md.mustache:15:`${KUBERNETES_CLI:-kubectl} -n MYNAMESPACE create secret generic MYSECRETNAME --from-literal='password=MYSYSPASSWORD'` -./create-oracle-db-service/README.md.mustache:41:$ ${KUBERNETES_CLI:-kubectl} -n MYNAMESPACE create secret generic MYSECRETNAME --from-literal='password=MYSYSPASSWORD' -./create-oracle-db-service/start-db-service.sh.mustache:55:domns=`${KUBERNETES_CLI:-kubectl} get ns ${namespace} | grep ${namespace} | awk '{print $1}'` -./create-oracle-db-service/start-db-service.sh.mustache:58: ${KUBERNETES_CLI:-kubectl} create namespace ${namespace} -./create-oracle-db-service/start-db-service.sh.mustache:92:${KUBERNETES_CLI:-kubectl} delete service oracle-db -n ${namespace} --ignore-not-found -./create-oracle-db-service/start-db-service.sh.mustache:95:${KUBERNETES_CLI:-kubectl} apply -f ${dbYaml} -./create-oracle-db-service/start-db-service.sh.mustache:107:${KUBERNETES_CLI:-kubectl} get po -n ${namespace} -./create-oracle-db-service/start-db-service.sh.mustache:108:${KUBERNETES_CLI:-kubectl} get service -n ${namespace} -./create-oracle-db-service/start-db-service.sh.mustache:110:${KUBERNETES_CLI:-kubectl} cp ${scriptDir}/common/checkDbState.sh -n ${namespace} ${dbpod}:/home/oracle/ -./create-oracle-db-service/start-db-service.sh.mustache:112:${KUBERNETES_CLI:-kubectl} exec -it ${dbpod} -n ${namespace} -- /bin/bash /home/oracle/checkDbState.sh -./create-oracle-db-service/stop-db-service.sh.mustache:32:dbpod=`${KUBERNETES_CLI:-kubectl} get po -n ${namespace} | grep oracle-db | cut -f1 -d " " ` -./create-oracle-db-service/stop-db-service.sh.mustache:33:${KUBERNETES_CLI:-kubectl} delete -f ${scriptDir}/common/oracle.db.${namespace}.yaml --ignore-not-found -./create-oracle-db-service/stop-db-service.sh.mustache:40: ${KUBERNETES_CLI:-kubectl} delete svc/oracle-db -n ${namespace} --ignore-not-found -./create-oracle-db-service/create-image-pull-secret.sh.mustache:57:${KUBERNETES_CLI:-kubectl} delete secret/${secret} --ignore-not-found -./create-oracle-db-service/create-image-pull-secret.sh.mustache:59:${KUBERNETES_CLI:-kubectl} create secret docker-registry ${secret} --docker-server=container-registry.oracle.com --docker-username=${username} --docker-password=${password} --docker-email=${email} -./elasticsearch-and-kibana/README.md.mustache:25:$ ${KUBERNETES_CLI:-kubectl} apply -f kubernetes/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml -./elasticsearch-and-kibana/README.md.mustache:30:$ ${KUBERNETES_CLI:-kubectl} delete -f kubernetes/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml 
-./elasticsearch-and-kibana/elasticsearch_and_kibana.yaml.mustache:22:# ${KUBERNETES_CLI:-kubectl} apply -f kubernetes/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml -./elasticsearch-and-kibana/elasticsearch_and_kibana.yaml.mustache:25:# ${KUBERNETES_CLI:-kubectl} delete -f kubernetes/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml -./create-weblogic-domain-pv-pvc/README.md.mustache:23:The `create-pv-pvc.sh` script will create a subdirectory `pv-pvcs` under the given `/path/to/output-directory` directory. By default, the script generates two YAML files, namely `weblogic-sample-pv.yaml` and `weblogic-sample-pvc.yaml`, in the `/path/to/output-directory/pv-pvcs`. These two YAML files can be used to create the Kubernetes resources using the `${KUBERNETES_CLI:-kubectl} create -f` command. -./create-weblogic-domain-pv-pvc/README.md.mustache:26:$ ${KUBERNETES_CLI:-kubectl} create -f <%domainUID%>-domain-pv.yaml -./create-weblogic-domain-pv-pvc/README.md.mustache:27:$ ${KUBERNETES_CLI:-kubectl} create -f <%domainUID%>-domain-pvc.yaml -./create-weblogic-domain-pv-pvc/README.md.mustache:174:$ ${KUBERNETES_CLI:-kubectl} describe pv <%domainUID%>-domain-pv -./create-weblogic-domain-pv-pvc/README.md.mustache:195:$ ${KUBERNETES_CLI:-kubectl} describe pvc <%domainUID%>-domain-pvc -./create-weblogic-domain-pv-pvc/create-pv-pvc.sh.mustache:212: ${KUBERNETES_CLI:-kubectl} create -f ${pvOutput} -./create-weblogic-domain-pv-pvc/create-pv-pvc.sh.mustache:227: ${KUBERNETES_CLI:-kubectl} create -f ${pvcOutput} -./create-weblogic-domain-credentials/README.md.mustache:27:You can check the secret with the `${KUBERNETES_CLI:-kubectl} get secret` command. An example is shown below, -./create-weblogic-domain-credentials/README.md.mustache:31:$ ${KUBERNETES_CLI:-kubectl} -n <%namespace%> get secret <%domainUID%>-weblogic-credentials -o yaml -./create-weblogic-domain-credentials/create-weblogic-credentials.sh.mustache:34:# Try to execute ${KUBERNETES_CLI:-kubectl} to see whether ${KUBERNETES_CLI:-kubectl} is available -./create-weblogic-domain-credentials/create-weblogic-credentials.sh.mustache:36: if ! 
[ -x "$(command -v ${KUBERNETES_CLI:-kubectl})" ]; then
-./create-weblogic-domain-credentials/create-weblogic-credentials.sh.mustache:37:    fail "${KUBERNETES_CLI:-kubectl} is not installed"
-./create-weblogic-domain-credentials/create-weblogic-credentials.sh.mustache:109:result=$(${KUBERNETES_CLI:-kubectl} get secret "${secretName}" -n "${namespace}" --ignore-not-found=true | grep "${secretName}" | wc | awk ' { print $1; }')
-./create-weblogic-domain-credentials/create-weblogic-credentials.sh.mustache:115:${KUBERNETES_CLI:-kubectl} -n "$namespace" create secret generic "$secretName" \
-./create-weblogic-domain-credentials/create-weblogic-credentials.sh.mustache:121:  ${KUBERNETES_CLI:-kubectl} label secret "${secretName}" -n "$namespace" weblogic.domainUID="$domainUID" weblogic.domainName="$domainUID"
-./create-weblogic-domain-credentials/create-weblogic-credentials.sh.mustache:125:SECRET=$(${KUBERNETES_CLI:-kubectl} get secret "${secretName}" -n "${namespace}" | grep "${secretName}" | wc | awk ' { print $1; }')
diff --git a/OracleIdentityGovernance/kubernetes/logging-services/logstash/logstash.conf b/OracleIdentityGovernance/kubernetes/logging-services/logstash/logstash.conf
index b2eb51e32..015c5c84e 100755
--- a/OracleIdentityGovernance/kubernetes/logging-services/logstash/logstash.conf
+++ b/OracleIdentityGovernance/kubernetes/logging-services/logstash/logstash.conf
@@ -1,4 +1,4 @@
-# Copyright (c) 2021, 2023, Oracle and/or its affiliates.
+# Copyright (c) 2021, Oracle and/or its affiliates.
 # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
 #
diff --git a/OracleIdentityGovernance/kubernetes/logging-services/logstash/logstash.yaml b/OracleIdentityGovernance/kubernetes/logging-services/logstash/logstash.yaml
index c90bd512c..680a023f0 100755
--- a/OracleIdentityGovernance/kubernetes/logging-services/logstash/logstash.yaml
+++ b/OracleIdentityGovernance/kubernetes/logging-services/logstash/logstash.yaml
@@ -1,4 +1,4 @@
-# Copyright (c) 2021, 2023, Oracle and/or its affiliates.
+# Copyright (c) 2021, Oracle and/or its affiliates.
 # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
 #
diff --git a/OracleIdentityGovernance/kubernetes/monitoring-service/README.md b/OracleIdentityGovernance/kubernetes/monitoring-service/README.md
index 5bd0b6cba..67e1abbdb 100755
--- a/OracleIdentityGovernance/kubernetes/monitoring-service/README.md
+++ b/OracleIdentityGovernance/kubernetes/monitoring-service/README.md
@@ -5,6 +5,8 @@ Using the `WebLogic Monitoring Exporter` you can scrape runtime information from
 - Have Docker and a Kubernetes cluster running and have `${KUBERNETES_CLI:-kubectl}` installed and configured.
 - Have Helm installed.
+- Before installing kube-prometheus-stack (Prometheus, Grafana and Alertmanager), refer to [this link](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#uninstall-helm-chart) and clean up any older CRDs for monitoring services that exist in your Kubernetes cluster; a sketch of one way to do this follows this list.
+  **Note**: Make sure no existing monitoring services are running in the Kubernetes cluster before the cleanup. If you do not want to clean up the monitoring services CRDs, refer to [this link](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#upgrading-chart) for upgrading the CRDs.
 - An OracleIdentityGovernance domain deployed by `weblogic-operator` is running in the Kubernetes cluster.
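+
+The following is a minimal sketch of one way to find and remove leftover kube-prometheus-stack CRDs. The exact CRD set varies by chart version, so confirm it against the linked chart documentation before deleting anything:
+
+```
+$ ${KUBERNETES_CLI:-kubectl} get crd | grep monitoring.coreos.com
+$ ${KUBERNETES_CLI:-kubectl} delete crd alertmanagerconfigs.monitoring.coreos.com \
+    alertmanagers.monitoring.coreos.com podmonitors.monitoring.coreos.com \
+    probes.monitoring.coreos.com prometheuses.monitoring.coreos.com \
+    prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com \
+    thanosrulers.monitoring.coreos.com
+```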
 ## Set up monitoring for OracleIdentityGovernance domain
@@ -182,7 +184,7 @@ The following parameters can be provided in the inputs file.
 | `domainUID` | domainUID of the OracleIdentityGovernance domain. | `governancedomain` |
 | `domainNamespace` | Kubernetes namespace of the OracleIdentityGovernance domain. | `oigns` |
 | `setupKubePrometheusStack` | Boolean value indicating whether kube-prometheus-stack (Prometheus, Grafana and Alertmanager) to be installed | `true` |
-| `additionalParamForKubePrometheusStack` | The script install's kube-prometheus-stack with `service.type` as NodePort and values for `service.nodePort` as per the parameters defined in `monitoring-inputs.yaml`. Use `additionalParamForKubePrometheusStack` parameter to further configure with additional parameters as per [values.yaml](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml). Sample value to disable NodeExporter, Prometheus-Operator TLS support and Admission webhook support for PrometheusRules resources is `--set nodeExporter.enabled=false --set prometheusOperator.tls.enabled=false --set prometheusOperator.admissionWebhooks.enabled=false`| |
+| `additionalParamForKubePrometheusStack` | The script installs kube-prometheus-stack with `service.type` as NodePort and values for `service.nodePort` as per the parameters defined in `monitoring-inputs.yaml`. Use the `additionalParamForKubePrometheusStack` parameter to further configure the chart with additional parameters as per [values.yaml](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml). A sample value to disable NodeExporter, Prometheus-Operator TLS support, and Admission webhook support for PrometheusRules resources, and to set a custom Grafana image repository, is `--set nodeExporter.enabled=false --set prometheusOperator.tls.enabled=false --set prometheusOperator.admissionWebhooks.enabled=false --set grafana.image.repository=xxxxxxxxx/grafana/grafana`| |
 | `monitoringNamespace` | Kubernetes namespace for monitoring setup. | `monitoring` |
 | `adminServerName` | Name of the Administration Server. | `AdminServer` |
 | `adminServerPort` | Port number for the Administration Server inside the Kubernetes cluster. | `7001` |
@@ -211,7 +213,7 @@ $ ./setup-monitoring.sh \
 ```
 The script will perform the following steps:
-- Helm install `prometheus-community/kube-prometheus-stack` of version "16.5.0" if `setupKubePrometheusStack` is set to `true`.
+- Helm install `prometheus-community/kube-prometheus-stack` if `setupKubePrometheusStack` is set to `true`.
 - Deploys WebLogic Monitoring Exporter to Administration Server.
 - Deploys WebLogic Monitoring Exporter to `soaCluster` if `wlsMonitoringExporterTosoaCluster` is set to `true`.
 - Deploys WebLogic Monitoring Exporter to `oimCluster` if `wlsMonitoringExporterTooimCluster` is set to `true`.
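+
+After the script completes, a quick sanity check (a minimal sketch, assuming `setupKubePrometheusStack` is `true` and the default `monitoring` namespace) is to confirm that the Prometheus, Grafana, and Alertmanager pods are running:
+
+```
+$ ${KUBERNETES_CLI:-kubectl} get pods -n monitoring
+```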
@@ -235,7 +237,7 @@ Sample output: ```bash $ helm ls -n monitoring NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION -monitoring monitoring 1 2021-06-18 12:58:35.177221969 +0000 UTC deployed kube-prometheus-stack-16.5.0 0.48.0 +monitoring monitoring 1 2023-03-15 10:31:42.44437202 +0000 UTC deployed kube-prometheus-stack-45.7.1 v0.63.0 $ ``` diff --git a/OracleIdentityGovernance/kubernetes/monitoring-service/config/weblogic-server-dashboard-import.json b/OracleIdentityGovernance/kubernetes/monitoring-service/config/weblogic-server-dashboard-import.json index c2fa9e2eb..9ee45d900 100755 --- a/OracleIdentityGovernance/kubernetes/monitoring-service/config/weblogic-server-dashboard-import.json +++ b/OracleIdentityGovernance/kubernetes/monitoring-service/config/weblogic-server-dashboard-import.json @@ -490,7 +490,7 @@ "lineColor": "rgb(31, 120, 193)", "show": true }, - "tableColumn": "instance", + "tableColumn": "", "targets": [ { "expr": "100 - wls_jvm_heap_free_percent{weblogic_domainUID=\"$domainName\", weblogic_clusterName=\"$clusterName\", weblogic_serverName=\"$serverName\"}", @@ -582,7 +582,7 @@ "lineColor": "rgb(31, 120, 193)", "show": false }, - "tableColumn": "instance", + "tableColumn": "", "targets": [ { "expr": "wls_jvm_uptime{weblogic_domainUID=\"$domainName\", weblogic_clusterName=\"$clusterName\", weblogic_serverName=\"$serverName\"}", @@ -674,7 +674,7 @@ "lineColor": "rgb(31, 120, 193)", "show": false }, - "tableColumn": "instance", + "tableColumn": "", "targets": [ { "expr": "wls_server_open_sockets_current_count{weblogic_domainUID=\"$domainName\", weblogic_clusterName=\"$clusterName\", weblogic_serverName=\"$serverName\"}", diff --git a/OracleIdentityGovernance/kubernetes/monitoring-service/config/weblogic-server-dashboard.json b/OracleIdentityGovernance/kubernetes/monitoring-service/config/weblogic-server-dashboard.json index cf6d5f776..b7fda8e9a 100755 --- a/OracleIdentityGovernance/kubernetes/monitoring-service/config/weblogic-server-dashboard.json +++ b/OracleIdentityGovernance/kubernetes/monitoring-service/config/weblogic-server-dashboard.json @@ -491,7 +491,7 @@ "lineColor": "rgb(31, 120, 193)", "show": true }, - "tableColumn": "instance", + "tableColumn": "", "targets": [ { "expr": "100 - wls_jvm_heap_free_percent{weblogic_domainUID=\"$domainName\", weblogic_clusterName=\"$clusterName\", weblogic_serverName=\"$serverName\"}", @@ -583,7 +583,7 @@ "lineColor": "rgb(31, 120, 193)", "show": false }, - "tableColumn": "instance", + "tableColumn": "", "targets": [ { "expr": "wls_jvm_uptime{weblogic_domainUID=\"$domainName\", weblogic_clusterName=\"$clusterName\", weblogic_serverName=\"$serverName\"}", @@ -675,7 +675,7 @@ "lineColor": "rgb(31, 120, 193)", "show": false }, - "tableColumn": "instance", + "tableColumn": "", "targets": [ { "expr": "wls_server_open_sockets_current_count{weblogic_domainUID=\"$domainName\", weblogic_clusterName=\"$clusterName\", weblogic_serverName=\"$serverName\"}", diff --git a/OracleIdentityGovernance/kubernetes/monitoring-service/manifests/prometheus-roleBinding-domain-namespace.yaml b/OracleIdentityGovernance/kubernetes/monitoring-service/manifests/prometheus-roleBinding-domain-namespace.yaml index 752dc948f..8774b0c31 100755 --- a/OracleIdentityGovernance/kubernetes/monitoring-service/manifests/prometheus-roleBinding-domain-namespace.yaml +++ b/OracleIdentityGovernance/kubernetes/monitoring-service/manifests/prometheus-roleBinding-domain-namespace.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. 
+# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # apiVersion: rbac.authorization.k8s.io/v1 diff --git a/OracleIdentityGovernance/kubernetes/monitoring-service/manifests/prometheus-roleSpecific-domain-namespace.yaml b/OracleIdentityGovernance/kubernetes/monitoring-service/manifests/prometheus-roleSpecific-domain-namespace.yaml index 491e4ed7a..82cc07210 100755 --- a/OracleIdentityGovernance/kubernetes/monitoring-service/manifests/prometheus-roleSpecific-domain-namespace.yaml +++ b/OracleIdentityGovernance/kubernetes/monitoring-service/manifests/prometheus-roleSpecific-domain-namespace.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. +# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # apiVersion: rbac.authorization.k8s.io/v1 diff --git a/OracleIdentityGovernance/kubernetes/monitoring-service/manifests/wls-exporter-ServiceMonitor.yaml b/OracleIdentityGovernance/kubernetes/monitoring-service/manifests/wls-exporter-ServiceMonitor.yaml index c4152d80e..3c9cd8cfd 100755 --- a/OracleIdentityGovernance/kubernetes/monitoring-service/manifests/wls-exporter-ServiceMonitor.yaml +++ b/OracleIdentityGovernance/kubernetes/monitoring-service/manifests/wls-exporter-ServiceMonitor.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. +# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # apiVersion: v1 diff --git a/OracleIdentityGovernance/kubernetes/monitoring-service/manifests/wls-exporter-ServiceMonitor.yaml.template b/OracleIdentityGovernance/kubernetes/monitoring-service/manifests/wls-exporter-ServiceMonitor.yaml.template index 1e79b310a..503f75c45 100755 --- a/OracleIdentityGovernance/kubernetes/monitoring-service/manifests/wls-exporter-ServiceMonitor.yaml.template +++ b/OracleIdentityGovernance/kubernetes/monitoring-service/manifests/wls-exporter-ServiceMonitor.yaml.template @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. +# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # apiVersion: v1 diff --git a/OracleIdentityGovernance/kubernetes/monitoring-service/monitoring-inputs.yaml b/OracleIdentityGovernance/kubernetes/monitoring-service/monitoring-inputs.yaml index 6c0c1814d..a365a2484 100755 --- a/OracleIdentityGovernance/kubernetes/monitoring-service/monitoring-inputs.yaml +++ b/OracleIdentityGovernance/kubernetes/monitoring-service/monitoring-inputs.yaml @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. +# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # # The version of this inputs file. Do not modify. 
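Once `setup-monitoring.sh` has applied the manifests above, the monitoring wiring can be sanity-checked with standard `kubectl` queries. A quick sketch, assuming the sample `monitoring` and `oigns` namespaces (resource names will vary with your inputs file):

```bash
# Sketch: verify the monitoring stack and exporter wiring.
# Namespaces are the sample values used in this guide; adjust as required.
kubectl get pods -n monitoring          # kube-prometheus-stack components
kubectl get servicemonitors -n oigns    # WebLogic Monitoring Exporter ServiceMonitor
kubectl get role,rolebinding -n oigns   # Prometheus RBAC in the domain namespace
```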
diff --git a/OracleIdentityGovernance/kubernetes/monitoring-service/scripts/deploy-weblogic-monitoring-exporter.py b/OracleIdentityGovernance/kubernetes/monitoring-service/scripts/deploy-weblogic-monitoring-exporter.py index b4336e81e..7458b7d14 100755 --- a/OracleIdentityGovernance/kubernetes/monitoring-service/scripts/deploy-weblogic-monitoring-exporter.py +++ b/OracleIdentityGovernance/kubernetes/monitoring-service/scripts/deploy-weblogic-monitoring-exporter.py @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. +# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # import sys diff --git a/OracleIdentityGovernance/kubernetes/monitoring-service/scripts/undeploy-weblogic-monitoring-exporter.py b/OracleIdentityGovernance/kubernetes/monitoring-service/scripts/undeploy-weblogic-monitoring-exporter.py index 4e8070833..931244708 100755 --- a/OracleIdentityGovernance/kubernetes/monitoring-service/scripts/undeploy-weblogic-monitoring-exporter.py +++ b/OracleIdentityGovernance/kubernetes/monitoring-service/scripts/undeploy-weblogic-monitoring-exporter.py @@ -1,4 +1,4 @@ -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. +# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # import sys diff --git a/OracleIdentityGovernance/kubernetes/monitoring-service/scripts/utils.sh b/OracleIdentityGovernance/kubernetes/monitoring-service/scripts/utils.sh index 141610691..8b0874887 100755 --- a/OracleIdentityGovernance/kubernetes/monitoring-service/scripts/utils.sh +++ b/OracleIdentityGovernance/kubernetes/monitoring-service/scripts/utils.sh @@ -1,5 +1,5 @@ #!/bin/bash -# Copyright (c) 2021, 2023, Oracle and/or its affiliates. +# Copyright (c) 2021, Oracle and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. # diff --git a/OracleIdentityGovernance/kubernetes/monitoring-service/setup-monitoring.sh b/OracleIdentityGovernance/kubernetes/monitoring-service/setup-monitoring.sh index 1076beb3a..846950d64 100755 --- a/OracleIdentityGovernance/kubernetes/monitoring-service/setup-monitoring.sh +++ b/OracleIdentityGovernance/kubernetes/monitoring-service/setup-monitoring.sh @@ -82,14 +82,12 @@ function installKubePrometheusStack { --set prometheus.service.type=NodePort --set prometheus.service.nodePort=${prometheusNodePort} \ --set alertmanager.service.type=NodePort --set alertmanager.service.nodePort=${alertmanagerNodePort} \ --set grafana.adminPassword=admin --set grafana.service.type=NodePort --set grafana.service.nodePort=${grafanaNodePort} \ - --version "16.5.0" --values ${scriptDir}/values.yaml \ - --atomic --wait + --wait else helm install ${monitoringNamespace} prometheus-community/kube-prometheus-stack \ --namespace ${monitoringNamespace} ${additionalParamForKubePrometheusStack} \ --set grafana.adminPassword=admin \ - --version "16.5.0" --values ${scriptDir}/values.yaml \ - --atomic --wait + --wait fi exitIfError $? "ERROR: prometheus-community/kube-prometheus-stack install failed." 
} diff --git a/OracleIdentityGovernance/kubernetes/monitoring-service/values.yaml b/OracleIdentityGovernance/kubernetes/monitoring-service/values.yaml deleted file mode 100755 index 18757f394..000000000 --- a/OracleIdentityGovernance/kubernetes/monitoring-service/values.yaml +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) 2022, Oracle and/or its affiliates. -# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. -# -prometheusOperator: - admissionWebhooks: - patch: - enabled: true - image: - repository: k8s.gcr.io/ingress-nginx/kube-webhook-certgen - tag: v1.0 - sha: "f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068" - pullPolicy: IfNotPresent - diff --git a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/elk_elasticsearch-svc.yaml b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/elk_elasticsearch-svc.yaml deleted file mode 100755 index 65ce345c3..000000000 --- a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/elk_elasticsearch-svc.yaml +++ /dev/null @@ -1,59 +0,0 @@ -# -# Copyright (c) 2020, Oracle and/or its affiliates. -# -# Licensed under the Universal Permissive License v 1.0 as shown at -# https://oss.oracle.com/licenses/upl -# -# -{{- if .Values.elk.enabled }} -kind: Service -apiVersion: v1 -metadata: - name: {{ include "oud-ds-rs.fullname" . }}-elasticsearch - labels: - app: {{ include "oud-ds-rs.fullname" . }}-elasticsearch -spec: - selector: - app: {{ include "oud-ds-rs.fullname" . }}-elasticsearch - clusterIP: None - ports: - - port: {{ .Values.elk.elkPorts.rest}} - name: rest - - port: {{ .Values.elk.elkPorts.internode}} - name: inter-node -{{- end }} ---- -{{- if .Values.elk.enabled }} -apiVersion: v1 -kind: Service -metadata: - namespace: - name: {{ include "oud-ds-rs.fullname" . }}-kibana - labels: - app: kibana -spec: - type: {{ .Values.elk.kibana.service.type }} - ports: - - port: {{ .Values.elk.kibana.service.targetPort }} - targetPort: {{ .Values.elk.kibana.service.targetPort }} - nodePort: {{ .Values.elk.kibana.service.nodePort }} - selector: - app: kibana -{{- end }} ---- - -{{- if .Values.elk.enabled }} -kind: Service -apiVersion: v1 -metadata: - name: {{ include "oud-ds-rs.fullname" . }}-logstash-service -spec: - type: {{ .Values.elk.logstash.service.type }} - selector: - app: logstash - ports: - - protocol: TCP - port: {{ .Values.elk.logstash.service.targetPort }} - targetPort: {{ .Values.elk.logstash.service.targetPort }} - name: logstash -{{- end }} diff --git a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/elk_elasticsearch.yaml b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/elk_elasticsearch.yaml deleted file mode 100755 index 21db7b01d..000000000 --- a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/elk_elasticsearch.yaml +++ /dev/null @@ -1,93 +0,0 @@ -# -# Copyright (c) 2020, 2022, Oracle and/or its affiliates. -# -# Licensed under the Universal Permissive License v 1.0 as shown at -# https://oss.oracle.com/licenses/upl -# -# -{{- if .Values.elk.enabled }} -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: {{ include "oud-ds-rs.fullname" . }}-es-cluster -spec: - serviceName: {{ include "oud-ds-rs.fullname" . }}-elasticsearch - replicas: {{ .Values.elk.elasticsearch.esreplicas }} - selector: - matchLabels: - app: {{ include "oud-ds-rs.fullname" . }}-elasticsearch - template: - metadata: - labels: - app: {{ include "oud-ds-rs.fullname" . 
}}-elasticsearch - spec: - containers: - - name: elasticsearch - securityContext: - capabilities: - add: ["SYS_CHROOT"] - image: "{{ .Values.elk.elasticsearch.image.repository }}:{{ .Values.elk.elasticsearch.image.tag }}" - resources: -{{ toYaml .Values.elk.elasticsearch.resources | indent 10 }} - ports: - - containerPort: {{ .Values.elk.elkPorts.rest }} - name: rest - protocol: TCP - - containerPort: {{ .Values.elk.elkPorts.internode }} - name: inter-node - protocol: TCP - volumeMounts: - - name: data - mountPath: {{ .Values.elkVolume.mountPath }} - env: - - name: cluster.name - value: OUD-elk - - name: node.name - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: discovery.zen.ping.unicast.hosts - value: {{ include "es-discovery-hosts" . | quote }} - #value: "oud-ds-rs-es-cluster-0.oud-ds-rs-elasticsearch,oud-ds-rs-es-cluster-1.oud-ds-rs-elasticsearch,oud-ds-rs-es-cluster-2.oud-ds-rs-elasticsearch" - - name: discovery.zen.minimum_master_nodes - value: {{ .Values.elk.elasticsearch.minimumMasterNodes | quote }} - - name: ES_JAVA_OPTS - value: {{ .Values.elk.elasticsearch.esJAVAOpts | quote }} - initContainers: - {{- if (eq "filesystem" .Values.elkVolume.type) }} - - name: fix-permissions - image: {{ .Values.elk.busybox.image }} - command: ["sh", "-c", "chown -R 1000:1000 {{ .Values.elkVolume.mountPath }}"] - securityContext: - privileged: true - volumeMounts: - - name: data - mountPath: {{ .Values.elkVolume.mountPath }} - {{- end }} - - name: increase-vm-max-map - image: {{ .Values.elk.busybox.image }} - command: ["sysctl", "-w", "vm.max_map_count={{ .Values.elk.elasticsearch.sysctlVmMaxMapCount }}"] - securityContext: - privileged: true - - name: increase-fd-ulimit - image: {{ .Values.elk.busybox.image }} - command: ["sh", "-c", "ulimit -n 65536"] - securityContext: - privileged: true - {{- with .Values.elk.imagePullSecrets }} - imagePullSecrets: - {{- toYaml . | nindent 6 }} - {{- end }} - - volumeClaimTemplates: - - metadata: - name: data - labels: - app: {{ include "oud-ds-rs.fullname" . }}-elasticsearch - spec: - accessModes: [ {{ .Values.elkVolume.accessMode | quote }} ] - storageClassName: {{ .Values.elkVolume.storageClass }} - resources: - requests: - storage: {{ .Values.elkVolume.size }} -{{- end }} diff --git a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/elk_kibana.yaml b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/elk_kibana.yaml deleted file mode 100755 index 0151599f9..000000000 --- a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/elk_kibana.yaml +++ /dev/null @@ -1,40 +0,0 @@ -# -# Copyright (c) 2020, 2022, Oracle and/or its affiliates. -# -# Licensed under the Universal Permissive License v 1.0 as shown at -# https://oss.oracle.com/licenses/upl -# -# -{{- if .Values.elk.enabled }} -apiVersion: apps/v1 -kind: Deployment -metadata: - name: {{ include "oud-ds-rs.fullname" . }}-kibana - labels: - app: kibana -spec: - replicas: {{ .Values.elk.kibana.kibanaReplicas }} - selector: - matchLabels: - app: kibana - template: - metadata: - labels: - app: kibana - spec: - containers: - - name: kibana - image: "{{ .Values.elk.kibana.image.repository }}:{{ .Values.elk.kibana.image.tag }}" - resources: -{{ toYaml .Values.elk.elasticsearch.resources | indent 10 }} - env: - - name: ELASTICSEARCH_URL - value: http://{{ include "oud-ds-rs.fullname" . 
}}-elasticsearch:{{ .Values.elk.elkPorts.rest }} - ports: - - containerPort: {{ .Values.elk.kibana.service.targetPort }} - {{- with .Values.elk.imagePullSecrets }} - imagePullSecrets: - {{- toYaml . | nindent 6 }} - {{- end }} - -{{- end }} diff --git a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/elk_pv-elasticsearch.yaml b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/elk_pv-elasticsearch.yaml deleted file mode 100755 index a2f146ed9..000000000 --- a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/elk_pv-elasticsearch.yaml +++ /dev/null @@ -1,48 +0,0 @@ -# -# Copyright (c) 2020, 2022, Oracle and/or its affiliates. -# -# Licensed under the Universal Permissive License v 1.0 as shown at -# https://oss.oracle.com/licenses/upl -# -# -{{- if .Values.elk.enabled }} -# -{{- if and .Values.elkVolume.enabled (not .Values.elkVolume.pvname) }} -{{- $root := . -}} -{{- range $replicaIndex, $replicaN := until (.Values.elk.elasticsearch.esreplicas |int) }} -{{- $replicaIndx := (add $replicaIndex 1) -}} -# -apiVersion: v1 -kind: PersistentVolume -metadata: - name: {{ include "oud-ds-rs.fullname" $root }}-espv{{ $replicaIndx }} - labels: - {{- include "oud-ds-rs.labels" $root | nindent 4 }} -spec: - {{- if $root.Values.elkVolume.storageClass }} - storageClassName: {{ $root.Values.elkVolume.storageClass }} - {{- end }} - capacity: - storage: {{ $root.Values.elkVolume.size | quote }} - accessModes: - - {{ $root.Values.elkVolume.accessMode | quote }} - {{- if (eq "networkstorage" $root.Values.elkVolume.type) }} - nfs: - {{- if eq ($root.Values.elk.elasticsearch.esreplicas|int) 1 }} - path: {{ $root.Values.elkVolume.networkstorage.nfs.path }} - {{- else if gt ($root.Values.elk.elasticsearch.esreplicas|int) 1 }} - path: {{ $root.Values.elkVolume.networkstorage.nfs.path }}{{ $replicaIndx }} - {{- end }} - server: {{ $root.Values.elkVolume.networkstorage.nfs.server }} - {{- else if (eq "filesystem" $root.Values.elkVolume.type) }} - hostPath: - path: {{ $root.Values.elkVolume.filesystem.hostPath.path }}{{ $replicaIndx }} - {{- else if (eq "custom" $root.Values.elkVolume.type) }} - {{- with $root.Values.elkVolume.custom }} - {{- toYaml . | nindent 2 }} - {{- end }} - {{- end }} ---- - {{- end }} -{{- end }} -{{- end }} diff --git a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/elk_storageclass.yaml b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/elk_storageclass.yaml deleted file mode 100755 index 9285e429e..000000000 --- a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/elk_storageclass.yaml +++ /dev/null @@ -1,23 +0,0 @@ -# -# Copyright (c) 2020, 2022, Oracle and/or its affiliates. 
-# -# Licensed under the Universal Permissive License v 1.0 as shown at -# https://oss.oracle.com/licenses/upl -# -# -{{- if .Values.elk.enabled }} ---- -{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}} -apiVersion: storage.k8s.io/v1 -{{- else -}} -apiVersion: storage.k8s.io/v1beta1 -{{- end }} -kind: StorageClass -metadata: - name: {{ .Values.elkVolume.storageClass }} - annotations: - storageclass.beta.kubernetes.io/is-default-class: "true" -provisioner: kubernetes.io/is-default-class -parameters: - repl: {{ .Values.elk.elasticsearch.esreplicas | quote }} -{{- end }} diff --git a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-pv.yaml b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-pv.yaml index eb9ac82c3..2bc8f2514 100755 --- a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-pv.yaml +++ b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-pv.yaml @@ -1,9 +1,10 @@ # -# Copyright (c) 2020, 2022, Oracle and/or its affiliates. +# Copyright (c) 2020, 2023, Oracle and/or its affiliates. # # Licensed under the Universal Permissive License v 1.0 as shown at # https://oss.oracle.com/licenses/upl # +{{- if (ne "blockstorage" .Values.persistence.type) }} {{- if and .Values.persistence.enabled (not .Values.persistence.pvname) }} apiVersion: v1 kind: PersistentVolume @@ -35,3 +36,4 @@ spec: {{- end }} {{- end }} {{- end }} +{{- end }} diff --git a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-pvc.yaml b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-pvc.yaml index 0012543dd..3518590c5 100755 --- a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-pvc.yaml +++ b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-pvc.yaml @@ -1,9 +1,10 @@ # -# Copyright (c) 2020, 2022, Oracle and/or its affiliates. +# Copyright (c) 2020, 2023, Oracle and/or its affiliates. # # Licensed under the Universal Permissive License v 1.0 as shown at # https://oss.oracle.com/licenses/upl # +{{- if (ne "blockstorage" .Values.persistence.type) }} {{- if and .Values.persistence.enabled (not .Values.persistence.pvcname) }} # apiVersion: v1 @@ -32,3 +33,4 @@ spec: requests: storage: {{ .Values.persistence.size | quote }} {{- end }} +{{- end }} diff --git a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-statefulset-block.yaml b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-statefulset-block.yaml new file mode 100755 index 000000000..565c71d89 --- /dev/null +++ b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-statefulset-block.yaml @@ -0,0 +1,466 @@ +# +# Copyright (c) 2023, Oracle and/or its affiliates. +# +# Licensed under the Universal Permissive License v 1.0 as shown at +# https://oss.oracle.com/licenses/upl +# +# +{{- if (eq "blockstorage" .Values.persistence.type) }} +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: {{ include "oud-ds-rs.fullname" . }} + namespace: {{ .Release.Namespace }} + labels: + app.kubernetes.io/name: {{ include "oud-ds-rs.name" . }} + helm.sh/chart: {{ include "oud-ds-rs.chart" . }} + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/managed-by: {{ .Release.Service }} +spec: + replicas: {{ .Values.replicaCount }} + serviceName: {{ include "oud-ds-rs.fullname" . }} + podManagementPolicy: {{ .Values.podManagementPolicy }} + updateStrategy: + type: {{ .Values.updateStrategy }} + selector: + matchLabels: + app.kubernetes.io/name: {{ include "oud-ds-rs.name" . 
}} + app.kubernetes.io/instance: {{ .Release.Name }} + template: + metadata: + labels: + app.kubernetes.io/name: {{ include "oud-ds-rs.name" . }} + app.kubernetes.io/instance: {{ .Release.Name }} + spec: + securityContext: +{{- toYaml .Values.podSecurityContext | nindent 8 }} + serviceAccountName: {{ include "oud-ds-rs.serviceAccountName" . }} + terminationGracePeriodSeconds: {{ (.Values.deploymentConfig.terminationPeriodSeconds| int) }} + {{- with .Values.busybox.imagePullSecrets }} + imagePullSecrets: + {{- toYaml . | nindent 6 }} + {{- end }} + + initContainers: + - name: mount-cpv + image: {{ .Values.busybox.image }} + env: + - name: OUD_INSTANCE_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: CONFIGVOLUME_ENABLED + value: "{{ .Values.configVolume.enabled }}" + - name: CONFIGVOLUME_MOUNTPATH + value: {{ .Values.configVolume.mountPath }} + + volumeMounts: + {{- if .Values.configVolume.enabled }} + - mountPath: {{ .Values.configVolume.mountPath }} + {{- if .Values.configVolume.pvname }} + name: {{ .Values.configVolume.pvname }} + {{ else }} + name: {{ include "oud-ds-rs.fullname" . }}-pv-config + {{- end }} + - mountPath: /mnt + name: config-map + {{- end }} + command: [ "/bin/sh", "-c" ] + args: + - + ordinal=${OUD_INSTANCE_NAME##*-}; + if [[ ${CONFIGVOLUME_ENABLED} == "true" ]]; + then + if [[ "$ordinal" == "0" ]]; + then + cp "/mnt/baseOUD.props" "${CONFIGVOLUME_MOUNTPATH}/config-baseOUD.props"; + else + cp "/mnt/replOUD.props" "${CONFIGVOLUME_MOUNTPATH}/config-replOUD.props"; + fi; + fi; + + - name: mount-pv + image: {{ .Values.busybox.image }} + env: + - name: OUD_INSTANCE_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: CLEANUP_BEFORE_START + value: "{{ .Values.oudConfig.cleanupbeforeStart }}" + volumeMounts: + - mountPath: /u01/oracle/user_projects + {{- if .Values.persistence.enabled }} + {{- if .Values.persistence.pvname }} + name: {{ .Values.persistence.pvname }} + {{ else }} + name: {{ include "oud-ds-rs.fullname" . }}-pv + {{- end }} + {{- else }} + name: oud-storage + subPath: user_projects + {{- end }} + command: [ "/bin/sh", "-c" ] + args: + - + chown -R {{ .Values.usergroup }} /u01/oracle/user_projects/ + ordinal=${OUD_INSTANCE_NAME##*-}; + if [[ ${CLEANUP_BEFORE_START} == "true" ]]; + then + if [[ "$ordinal" != "0" ]]; + then + cd /u01/oracle; rm -fr /u01/oracle/user_projects/$(OUD_INSTANCE_NAME)/OUD; + fi; + fi + {{- with .Values.imagePullSecrets }} + imagePullSecrets: +{{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.affinity }} + affinity: +{{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.nodeSelector }} + nodeSelector: +{{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.tolerations }} + tolerations: +{{- toYaml . 
| nindent 8 }} + {{- end }} + containers: + - name: {{ .Chart.Name }} + securityContext: + {{- toYaml .Values.securityContext | nindent 10 }} + image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" + imagePullPolicy: {{ .Values.image.pullPolicy }} + resources: + requests: + memory: {{ .Values.oudConfig.resources.requests.memory }} + cpu: {{ .Values.oudConfig.resources.requests.cpu }} + limits: + memory: {{ .Values.oudConfig.resources.limits.memory }} + cpu: {{ .Values.oudConfig.resources.limits.cpu }} + {{- if .Values.oudConfig.disablereplicationbeforeStop }} + lifecycle: + preStop: + exec: + command: + - /bin/sh + - -c + - | + ordinal=${OUD_INSTANCE_NAME##*-} + if [[ "$ordinal" != "0" ]] + then + echo $adminPassword > /tmp/adminpassword.txt && /u01/oracle/oud/bin/dsreplication disable --hostname localhost --port $adminConnectorPort --adminUID admin --trustAll --adminPasswordFile /tmp/adminpassword.txt --no-prompt --disableAll + fi + {{- end }} + ports: + - name: adminldaps + containerPort: {{ .Values.oudPorts.adminldaps }} + protocol: TCP + - name: adminhttps + containerPort: {{ .Values.oudPorts.adminhttps }} + protocol: TCP + - name: ldap + containerPort: {{ .Values.oudPorts.ldap }} + protocol: TCP + - name: ldaps + containerPort: {{ .Values.oudPorts.ldaps }} + protocol: TCP + - name: http + containerPort: {{ .Values.oudPorts.http }} + protocol: TCP + - name: https + containerPort: {{ .Values.oudPorts.https }} + protocol: TCP + - name: replication + containerPort: {{ .Values.oudPorts.replication }} + protocol: TCP + + env: + - name: instanceType + value: DS2RS_STS + - name: OUD_INSTANCE_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: MY_NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + - name: MY_POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: sleepBeforeConfig + value: "3" + - name: sourceHost + value: {{ include "oud-ds-rs.fullname" . }}-0 + - name: baseDN + value: {{ .Values.oudConfig.baseDN }} + - name: integration + value: {{ .Values.oudConfig.integration }} + {{- if .Values.secret.enabled }} + - name: rootUserDN + valueFrom: + secretKeyRef: + {{- if .Values.secret.name }} + name: {{ .Values.secret.name }} + key: rootUserDN + {{- else }} + name: {{ include "oud-ds-rs.fullname" . }}-creds + key: rootUserDN + {{- end }} + {{- else }} + - name: rootUserDN + value: {{ .Values.oudConfig.rootUserDN }} + {{- end }} + {{- if .Values.secret.enabled }} + - name: rootUserPassword + valueFrom: + secretKeyRef: + {{- if .Values.secret.name }} + name: {{ .Values.secret.name }} + key: rootUserPassword + {{- else }} + name: {{ include "oud-ds-rs.fullname" . }}-creds + key: rootUserPassword + {{- end }} + {{- else }} + - name: rootUserPassword + value: {{ .Values.oudConfig.rootUserPassword }} + {{- end }} + {{- if .Values.secret.enabled }} + - name: adminUID + valueFrom: + secretKeyRef: + {{- if .Values.secret.name }} + name: {{ .Values.secret.name }} + key: adminUID + {{- else }} + name: {{ include "oud-ds-rs.fullname" . }}-creds + key: adminUID + {{- end }} + {{- else }} + - name: adminUID + value: {{ .Values.oudConfig.adminUID }} + {{- end }} + {{- if .Values.secret.enabled }} + - name: adminPassword + valueFrom: + secretKeyRef: + {{- if .Values.secret.name }} + name: {{ .Values.secret.name }} + key: adminPassword + {{- else }} + name: {{ include "oud-ds-rs.fullname" . 
}}-creds + key: adminPassword + {{- end }} + {{- else }} + - name: adminPassword + value: {{ .Values.oudConfig.adminPassword }} + {{- end }} + {{- if .Values.secret.enabled }} + - name: bindDN1 + valueFrom: + secretKeyRef: + {{- if .Values.secret.name }} + name: {{ .Values.secret.name }} + key: bindDN1 + {{- else }} + name: {{ include "oud-ds-rs.fullname" . }}-creds + key: bindDN1 + {{- end }} + {{- else }} + - name: bindDN1 + value: {{ .Values.oudConfig.rootUserDN }} + {{- end }} + {{- if .Values.secret.enabled }} + - name: bindPassword1 + valueFrom: + secretKeyRef: + {{- if .Values.secret.name }} + name: {{ .Values.secret.name }} + key: bindPassword1 + {{- else }} + name: {{ include "oud-ds-rs.fullname" . }}-creds + key: bindPassword1 + {{- end }} + {{- else }} + - name: bindPassword1 + value: {{ .Values.oudConfig.rootUserPassword }} + {{- end }} + {{- if .Values.secret.enabled }} + - name: bindDN2 + valueFrom: + secretKeyRef: + {{- if .Values.secret.name }} + name: {{ .Values.secret.name }} + key: bindDN2 + {{- else }} + name: {{ include "oud-ds-rs.fullname" . }}-creds + key: bindDN2 + {{- end }} + {{- else }} + - name: bindDN2 + value: {{ .Values.oudConfig.rootUserDN }} + {{- end }} + {{- if .Values.secret.enabled }} + - name: bindPassword2 + valueFrom: + secretKeyRef: + {{- if .Values.secret.name }} + name: {{ .Values.secret.name }} + key: bindPassword2 + {{- else }} + name: {{ include "oud-ds-rs.fullname" . }}-creds + key: bindPassword2 + {{- end }} + {{- else }} + - name: bindPassword2 + value: {{ .Values.oudConfig.rootUserPassword }} + {{- end }} + {{- if .Values.sourceServerPorts }} + - name: sourceServerPorts + value: {{ .Values.sourceServerPorts }} + {{ else }} + - name: sourceServerPorts + value: {{ include "oud-ds-rs.fullname" . }}-0:{{ .Values.oudPorts.adminldaps }} + {{- end }} + {{- if .Values.sourceAdminConnectorPort }} + - name: sourceAdminConnectorPort + value: {{ .Values.sourceAdminConnectorPort | quote }} + {{ else }} + - name: sourceAdminConnectorPort + value: {{ .Values.oudPorts.adminldaps | quote }} + {{- end }} + {{- if .Values.sourceReplicationPort }} + - name: sourceReplicationPort + value: {{ .Values.sourceReplicationPort | quote }} + {{ else }} + - name: sourceReplicationPort + value: {{ .Values.oudPorts.replication | quote }} + {{- end }} + - name: sampleData + value: {{ .Values.oudConfig.sampleData | quote }} + - name: adminConnectorPort + value: {{ .Values.oudPorts.adminldaps | quote }} + - name: httpAdminConnectorPort + value: {{ .Values.oudPorts.adminhttps | quote }} + - name: ldapPort + value: {{ .Values.oudPorts.ldap | quote }} + - name: ldapsPort + value: {{ .Values.oudPorts.ldaps | quote }} + - name: httpPort + value: {{ .Values.oudPorts.http | quote }} + - name: httpsPort + value: {{ .Values.oudPorts.https | quote }} + - name: replicationPort + value: {{ .Values.oudPorts.replication | quote }} + - name: dsreplication_1 + value: verify --hostname ${sourceHost} --port ${sourceAdminConnectorPort} --baseDN ${baseDN} --serverToRemove $(OUD_INSTANCE_NAME):${adminConnectorPort} --connectTimeout {{ .Values.deploymentConfig.replicationTimeout | int }} --readTimeout {{ .Values.deploymentConfig.replicationTimeout | int }} + - name: dsreplication_2 + value: enable --host1 ${sourceHost} --port1 ${sourceAdminConnectorPort} --replicationPort1 ${sourceReplicationPort} --host2 $(OUD_INSTANCE_NAME) --port2 ${adminConnectorPort} --replicationPort2 ${replicationPort} --baseDN ${baseDN} --connectTimeout {{ .Values.deploymentConfig.replicationTimeout | int }} --readTimeout 
{{ .Values.deploymentConfig.replicationTimeout | int }} + - name: dsreplication_3 + value: initialize --hostSource ${initializeFromHost} --portSource ${sourceAdminConnectorPort} --hostDestination $(OUD_INSTANCE_NAME) --portDestination ${adminConnectorPort} --baseDN ${baseDN} --connectTimeout {{ .Values.deploymentConfig.replicationTimeout | int }} --readTimeout {{ .Values.deploymentConfig.replicationTimeout | int }} + - name: dsreplication_4 + value: verify --hostname $(OUD_INSTANCE_NAME) --port ${adminConnectorPort} --baseDN ${baseDN} --connectTimeout {{ .Values.deploymentConfig.replicationTimeout | int }} --readTimeout {{ .Values.deploymentConfig.replicationTimeout | int }} + - name: post_dsreplication_dsconfig_1 + value: set-replication-domain-prop --domain-name ${baseDN} --set group-id:{{ (.Values.replOUD.groupId|int) }} + - name: post_dsreplication_dsconfig_2 + value: set-replication-server-prop --set group-id:{{ (.Values.replOUD.groupId|int) }} + volumeMounts: + - mountPath: /u01/oracle/user_projects + {{- if .Values.persistence.enabled }} + {{- if .Values.persistence.pvname }} + name: {{ .Values.persistence.pvname }} + {{ else }} + name: {{ include "oud-ds-rs.fullname" . }}-pv + {{- end }} + {{- else }} + name: oud-storage + subPath: user_projects + {{- end }} + {{- if .Values.configVolume.enabled }} + - mountPath: {{ .Values.configVolume.mountPath }} + {{- if .Values.configVolume.pvname }} + name: {{ .Values.configVolume.pvname }} + {{ else }} + name: {{ include "oud-ds-rs.fullname" . }}-pv-config + {{- end }} + - mountPath: /mnt + name: config-map + {{- end }} + + livenessProbe: + tcpSocket: + port: ldap + initialDelaySeconds: {{ (.Values.deploymentConfig.startupTime|int) }} + timeoutSeconds: {{ (.Values.deploymentConfig.timeout| int) }} + periodSeconds: {{ (.Values.deploymentConfig.period| int) }} + failureThreshold: 5 + readinessProbe: + tcpSocket: + port: ldap + initialDelaySeconds: {{ (.Values.deploymentConfig.startupTime|int) }} + periodSeconds: {{ (.Values.deploymentConfig.period| int) }} + timeoutSeconds: {{ (.Values.deploymentConfig.timeout| int) }} + failureThreshold: 5 + readinessProbe: + tcpSocket: + port: adminldaps + initialDelaySeconds: {{ (.Values.deploymentConfig.startupTime|int) }} + periodSeconds: {{ (.Values.deploymentConfig.period| int) }} + timeoutSeconds: {{ (.Values.deploymentConfig.timeout| int) }} + failureThreshold: 5 + readinessProbe: + exec: + command: [ + "/bin/sh","-c","/u01/oracle/oud/bin/ldapsearch -T -h localhost -Z -X -p $ldapsPort -b '' -s base '(objectClass=*)' '*'" + ] + initialDelaySeconds: {{ (.Values.deploymentConfig.startupTime| int) }} + periodSeconds: {{ (.Values.deploymentConfig.period| int) }} + timeoutSeconds: {{ (.Values.deploymentConfig.timeout| int) }} + failureThreshold: 5 + readinessProbe: + exec: + command: + - "/u01/oracle/container-scripts/checkOUDInstance.sh" + initialDelaySeconds: {{ (.Values.deploymentConfig.startupTime|int) }} + timeoutSeconds: {{ (.Values.deploymentConfig.timeout| int) }} + periodSeconds: {{ (.Values.deploymentConfig.period| int) }} + failureThreshold: 10 + volumes: + {{- if .Values.configVolume.enabled }} + - name: config-map + configMap: + name: {{ include "oud-ds-rs.fullname" . }}-configmap + {{- if .Values.configVolume.pvname }} + - name: {{ .Values.configVolume.pvname }} + {{ else }} + - name: {{ include "oud-ds-rs.fullname" . 
}}-pv-config + {{- end }} + persistentVolumeClaim: + {{- if .Values.configVolume.pvcname }} + claimName: {{ .Values.configVolume.pvcname }} + {{ else }} + claimName: {{ include "oud-ds-rs.fullname" . }}-pvc-config + {{- end }} + {{- end }} + volumeClaimTemplates: + - metadata: + {{- if .Values.persistence.enabled }} + {{- if .Values.persistence.pvname }} + name: {{ .Values.persistence.pvname }} + {{ else }} + name: {{ include "oud-ds-rs.fullname" . }}-pv + {{- end }} + {{- end }} + spec: + accessModes: [ {{ .Values.persistence.accessMode | quote }} ] + resources: + requests: + storage: {{ .Values.persistence.size | quote }} + storageClassName: {{ .Values.persistence.storageClass }} +{{- end }} diff --git a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-statefulset.yaml b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-statefulset.yaml index 57f02d02f..6ed174c39 100755 --- a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-statefulset.yaml +++ b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-statefulset.yaml @@ -1,10 +1,11 @@ # -# Copyright (c) 2022, Oracle and/or its affiliates. +# Copyright (c) 2022, 2023, Oracle and/or its affiliates. # # Licensed under the Universal Permissive License v 1.0 as shown at # https://oss.oracle.com/licenses/upl # # +{{- if (ne "blockstorage" .Values.persistence.type) }} apiVersion: apps/v1 kind: StatefulSet metadata: @@ -185,6 +186,8 @@ spec: value: {{ include "oud-ds-rs.fullname" . }}-0 - name: baseDN value: {{ .Values.oudConfig.baseDN }} + - name: integration + value: {{ .Values.oudConfig.integration }} {{- if .Values.secret.enabled }} - name: rootUserDN valueFrom: @@ -445,4 +448,4 @@ spec: claimName: {{ include "oud-ds-rs.fullname" . }}-pvc-config {{- end }} {{- end }} - +{{- end }} diff --git a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-storageclass-config.yaml b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-storageclass-config.yaml new file mode 100644 index 000000000..c2e0805da --- /dev/null +++ b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-storageclass-config.yaml @@ -0,0 +1,18 @@ +# +# Copyright (c) 2023, Oracle and/or its affiliates. +# +# Licensed under the Universal Permissive License v 1.0 as shown at +# https://oss.oracle.com/licenses/upl +# +# +{{- if .Values.configVolume.enabled }} +{{ if .Values.configVolume.storageClassCreate }} +kind: StorageClass +apiVersion: storage.k8s.io/v1 +metadata: + name: {{ .Values.configVolume.storageClass }} + annotations: + storageclass.kubernetes.io/is-default-class: {{ .Values.configVolume.storageClassDefault | quote }} +provisioner: {{ .Values.configVolume.provisioner }} +{{ end }} +{{- end }} diff --git a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-storageclass.yaml b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-storageclass.yaml index 0eea72c29..0e1c74bc6 100755 --- a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-storageclass.yaml +++ b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/templates/oud-storageclass.yaml @@ -1,16 +1,19 @@ # -# Copyright (c) 2022, Oracle and/or its affiliates. +# Copyright (c) 2022, 2023, Oracle and/or its affiliates. 
 #
 # Licensed under the Universal Permissive License v 1.0 as shown at
 # https://oss.oracle.com/licenses/upl
 #
 #
 {{- if .Values.persistence.enabled }}
+{{ if .Values.persistence.storageClassCreate }}
 kind: StorageClass
 apiVersion: storage.k8s.io/v1
 metadata:
   name: {{ .Values.persistence.storageClass }}
   annotations:
-    storageclass.beta.kubernetes.io/is-default-class: "true"
-provisioner: kubernetes.io/is-default-class
+    storageclass.kubernetes.io/is-default-class: {{ .Values.persistence.storageClassDefault | quote }}
+provisioner: {{ .Values.persistence.provisioner }}
+{{ end }}
 {{- end }}
+
diff --git a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/values.yaml b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/values.yaml
index 16d2b1a35..f902d08ec 100755
--- a/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/values.yaml
+++ b/OracleUnifiedDirectory/kubernetes/helm/oud-ds-rs/values.yaml
@@ -51,6 +51,9 @@ securityContext: {}
 #   runAsNonRoot: true
 #   runAsUser: 1000
 
+# pv/datadir owner:group permissions; change as required for the chosen user
+usergroup: "1000:0"
+
 service:
   # Type of Service to be created for OUD Interfaces (like LDAP, HTTP, Admin)
   type: ClusterIP
@@ -154,11 +157,18 @@ persistence:
   pvname:
   # provide the pvname to use an already created Persistent Volume Claim. If blank, will use default name from Chart
   pvcname:
+  # Specify accessMode ReadWriteMany for NFS and ReadWriteOnce for block storage
   accessMode: ReadWriteMany
   size: 20Gi
+  # If enabled, the storage class is created. If false, provide an existing storage class to be used.
+  storageClassCreate: true
+  # Storage class name to be used.
   storageClass: manual
+  # If enabled, the created storage class is marked as the default.
+  storageClassDefault: true
+  provisioner: kubernetes.io/is-default-class
   reclaimPolicy: "Delete"
-# default supported values: either filesystem or networkstorage or custom
+# default supported values: either filesystem or networkstorage or custom or blockstorage
   type: filesystem
   networkstorage:
     nfs:
@@ -186,7 +196,12 @@ configVolume:
   pvcname:
   accessMode: ReadWriteMany
   size: 10Gi
-  storageClass: manual
+  storageClassCreate: true
+  # Storage class name to be used.
+  storageClass: manual-config
+  # If enabled, the created storage class is marked as the default.
+  storageClassDefault: true
+  provisioner: kubernetes.io/is-default-class
   reclaimPolicy: "Retain"
 # default supported values: either filesystem or networkstorage or custom
   type: networkstorage
@@ -292,15 +307,16 @@ oudConfig:
   cleanupbeforeStart: false
   # This parameter is used to disable replication when a pod is restarted.
disablereplicationbeforeStop: false - + # possible values are no-integration | basic | generic | eus + integration: no-integration # memory, cpu parameters for both requests and limits for oud instances resources: requests: memory: "4Gi" - cpu: ".5" + cpu: "500m" limits: - memory: "4Gi" - cpu: "1" + memory: "8Gi" + cpu: "2" # Configuration for Base OUD instance (oud-ds-rs-0) baseOUD: @@ -331,10 +347,9 @@ replOUD: # Configuration for Logstash deployment elk: - # Enabled flag to enable the integrated ELK stack for OUD - enabled: false imagePullSecrets: - name: dockercred + # IntegrationEnabled flag to enable Logstash deployment for OUD IntegrationEnabled: false logStashImage: logstash:8.3.1 logstashConfigMap: diff --git a/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/templates/elk_elasticsearch-svc.yaml b/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/templates/elk_elasticsearch-svc.yaml deleted file mode 100755 index 985238b1f..000000000 --- a/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/templates/elk_elasticsearch-svc.yaml +++ /dev/null @@ -1,59 +0,0 @@ -# -# Copyright (c) 2020, Oracle and/or its affiliates. -# -# Licensed under the Universal Permissive License v 1.0 as shown at -# https://oss.oracle.com/licenses/upl -# -# -{{- if .Values.elk.enabled }} -kind: Service -apiVersion: v1 -metadata: - name: {{ include "oudsm.fullname" . }}-elasticsearch - labels: - app: {{ include "oudsm.fullname" . }}-elasticsearch -spec: - selector: - app: {{ include "oudsm.fullname" . }}-elasticsearch - clusterIP: None - ports: - - port: {{ .Values.elk.elkPorts.rest}} - name: rest - - port: {{ .Values.elk.elkPorts.internode}} - name: inter-node -{{- end }} ---- -{{- if .Values.elk.enabled }} -apiVersion: v1 -kind: Service -metadata: - namespace: - name: {{ include "oudsm.fullname" . }}-kibana - labels: - app: kibana -spec: - type: {{ .Values.elk.kibana.service.type }} - ports: - - port: {{ .Values.elk.kibana.service.targetPort }} - targetPort: {{ .Values.elk.kibana.service.targetPort }} - nodePort: {{ .Values.elk.kibana.service.nodePort }} - selector: - app: kibana -{{- end }} ---- - -{{- if .Values.elk.enabled }} -kind: Service -apiVersion: v1 -metadata: - name: {{ include "oudsm.fullname" . }}-logstash-service -spec: - type: {{ .Values.elk.logstash.service.type }} - selector: - app: logstash - ports: - - protocol: TCP - port: {{ .Values.elk.logstash.service.targetPort }} - targetPort: {{ .Values.elk.logstash.service.targetPort }} - name: logstash -{{- end }} diff --git a/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/templates/elk_elasticsearch.yaml b/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/templates/elk_elasticsearch.yaml deleted file mode 100755 index 6a02e80be..000000000 --- a/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/templates/elk_elasticsearch.yaml +++ /dev/null @@ -1,93 +0,0 @@ -# -# Copyright (c) 2020, 2022, Oracle and/or its affiliates. -# -# Licensed under the Universal Permissive License v 1.0 as shown at -# https://oss.oracle.com/licenses/upl -# -# -{{- if .Values.elk.enabled }} -apiVersion: apps/v1 -kind: StatefulSet -metadata: - name: {{ include "oudsm.fullname" . }}-es-cluster -spec: - serviceName: {{ include "oudsm.fullname" . }}-elasticsearch - replicas: {{ .Values.elk.elasticsearch.esreplicas }} - selector: - matchLabels: - app: {{ include "oudsm.fullname" . }}-elasticsearch - template: - metadata: - labels: - app: {{ include "oudsm.fullname" . 
}}-elasticsearch - spec: - containers: - - name: elasticsearch - securityContext: - capabilities: - add: ["SYS_CHROOT"] - image: "{{ .Values.elk.elasticsearch.image.repository }}:{{ .Values.elk.elasticsearch.image.tag }}" - resources: -{{ toYaml .Values.elk.elasticsearch.resources | indent 10 }} - ports: - - containerPort: {{ .Values.elk.elkPorts.rest }} - name: rest - protocol: TCP - - containerPort: {{ .Values.elk.elkPorts.internode }} - name: inter-node - protocol: TCP - volumeMounts: - - name: data - mountPath: {{ .Values.elkVolume.mountPath }} - env: - - name: cluster.name - value: OUD-elk - - name: node.name - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: discovery.zen.ping.unicast.hosts - value: {{ include "es-discovery-hosts" . | quote }} - #value: "oudsm-es-cluster-0.oudsm-elasticsearch,oudsm-es-cluster-1.oudsm-elasticsearch,oudsm-es-cluster-2.oudsm-elasticsearch" - - name: discovery.zen.minimum_master_nodes - value: {{ .Values.elk.elasticsearch.minimumMasterNodes | quote }} - - name: ES_JAVA_OPTS - value: {{ .Values.elk.elasticsearch.esJAVAOpts | quote }} - initContainers: - {{- if (eq "filesystem" .Values.elkVolume.type) }} - - name: fix-permissions - image: {{ .Values.elk.busybox.image }} - command: ["sh", "-c", "chown -R 1000:1000 {{ .Values.elkVolume.mountPath }}"] - securityContext: - privileged: true - volumeMounts: - - name: data - mountPath: {{ .Values.elkVolume.mountPath }} - {{- end }} - - name: increase-vm-max-map - image: {{ .Values.elk.busybox.image }} - command: ["sysctl", "-w", "vm.max_map_count={{ .Values.elk.elasticsearch.sysctlVmMaxMapCount }}"] - securityContext: - privileged: true - - name: increase-fd-ulimit - image: {{ .Values.elk.busybox.image }} - command: ["sh", "-c", "ulimit -n 65536"] - securityContext: - privileged: true - {{- with .Values.elk.imagePullSecrets }} - imagePullSecrets: - {{- toYaml . | nindent 6 }} - {{- end }} - - volumeClaimTemplates: - - metadata: - name: data - labels: - app: {{ include "oudsm.fullname" . }}-elasticsearch - spec: - accessModes: [ {{ .Values.elkVolume.accessMode | quote }} ] - storageClassName: {{ .Values.elkVolume.storageClass }} - resources: - requests: - storage: {{ .Values.elkVolume.size }} -{{- end }} diff --git a/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/templates/elk_kibana.yaml b/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/templates/elk_kibana.yaml deleted file mode 100755 index 35ba903bc..000000000 --- a/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/templates/elk_kibana.yaml +++ /dev/null @@ -1,39 +0,0 @@ -# -# Copyright (c) 2020, 2022, Oracle and/or its affiliates. -# -# Licensed under the Universal Permissive License v 1.0 as shown at -# https://oss.oracle.com/licenses/upl -# -# -{{- if .Values.elk.enabled }} -apiVersion: apps/v1 -kind: Deployment -metadata: - name: {{ include "oudsm.fullname" . }}-kibana - labels: - app: kibana -spec: - replicas: {{ .Values.elk.kibana.kibanaReplicas }} - selector: - matchLabels: - app: kibana - template: - metadata: - labels: - app: kibana - spec: - containers: - - name: kibana - image: "{{ .Values.elk.kibana.image.repository }}:{{ .Values.elk.kibana.image.tag }}" - resources: -{{ toYaml .Values.elk.elasticsearch.resources | indent 10 }} - env: - - name: ELASTICSEARCH_URL - value: http://{{ include "oudsm.fullname" . }}-elasticsearch:{{ .Values.elk.elkPorts.rest }} - ports: - - containerPort: {{ .Values.elk.kibana.service.targetPort }} - {{- with .Values.elk.imagePullSecrets }} - imagePullSecrets: - {{- toYaml . 
| nindent 6 }} - {{- end }} -{{- end }} diff --git a/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/templates/elk_pv-elasticsearch.yaml b/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/templates/elk_pv-elasticsearch.yaml deleted file mode 100755 index 71f3a88c4..000000000 --- a/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/templates/elk_pv-elasticsearch.yaml +++ /dev/null @@ -1,51 +0,0 @@ -# -# Copyright (c) 2020, 2022, Oracle and/or its affiliates. -# -# Licensed under the Universal Permissive License v 1.0 as shown at -# https://oss.oracle.com/licenses/upl -# -{{- if .Values.elk.enabled }} -# -{{- if and .Values.elkVolume.enabled (not .Values.elkVolume.pvname) }} -{{- $root := . -}} -{{- range $replicaIndex, $replicaN := until (.Values.elk.elasticsearch.esreplicas |int) }} -{{- $replicaIndx := (add $replicaIndex 1) -}} -# -apiVersion: v1 -kind: PersistentVolume -metadata: - name: {{ include "oudsm.fullname" $root }}-espv{{ $replicaIndx }} - labels: - {{- include "oudsm.labels" $root | nindent 4 }} -spec: - {{- if $root.Values.elkVolume.storageClass }} - storageClassName: {{ $root.Values.elkVolume.storageClass }} - {{- end }} - capacity: - storage: {{ $root.Values.elkVolume.size | quote }} - accessModes: - - {{ $root.Values.elkVolume.accessMode | quote }} - {{- if (eq "networkstorage" $root.Values.elkVolume.type) }} - nfs: - {{- if eq ($root.Values.elk.elasticsearch.esreplicas|int) 1 }} - path: {{ $root.Values.elkVolume.networkstorage.nfs.path }} - {{- else if gt ($root.Values.elk.elasticsearch.esreplicas|int) 1 }} - path: {{ $root.Values.elkVolume.networkstorage.nfs.path }}{{ $replicaIndx }} - {{- end }} - server: {{ $root.Values.elkVolume.networkstorage.nfs.server }} - {{- else if (eq "filesystem" $root.Values.elkVolume.type) }} - hostPath: - {{- if eq ($root.Values.elk.elasticsearch.esreplicas|int) 1 }} - path: {{ $root.Values.elkVolume.filesystem.hostPath.path }} - {{- else if gt ($root.Values.elk.elasticsearch.esreplicas|int) 1 }} - path: {{ $root.Values.elkVolume.filesystem.hostPath.path }}{{ $replicaIndx }} - {{- end }} - {{- else if (eq "custom" $root.Values.elkVolume.type) }} - {{- with $root.Values.elkVolume.custom }} - {{- toYaml . | nindent 2 }} - {{- end }} - {{- end }} ---- - {{- end }} -{{- end }} -{{- end }} diff --git a/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/templates/elk_storageclass.yaml b/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/templates/elk_storageclass.yaml deleted file mode 100755 index 9285e429e..000000000 --- a/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/templates/elk_storageclass.yaml +++ /dev/null @@ -1,23 +0,0 @@ -# -# Copyright (c) 2020, 2022, Oracle and/or its affiliates. 
-#
-# Licensed under the Universal Permissive License v 1.0 as shown at
-# https://oss.oracle.com/licenses/upl
-#
-#
-{{- if .Values.elk.enabled }}
----
-{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
-apiVersion: storage.k8s.io/v1
-{{- else -}}
-apiVersion: storage.k8s.io/v1beta1
-{{- end }}
-kind: StorageClass
-metadata:
-  name: {{ .Values.elkVolume.storageClass }}
-  annotations:
-    storageclass.beta.kubernetes.io/is-default-class: "true"
-provisioner: kubernetes.io/is-default-class
-parameters:
-  repl: {{ .Values.elk.elasticsearch.esreplicas | quote }}
-{{- end }}
diff --git a/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/values.yaml b/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/values.yaml
index 35112c3ba..9c1cf0e30 100755
--- a/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/values.yaml
+++ b/OracleUnifiedDirectorySM/kubernetes/helm/oudsm/values.yaml
@@ -159,10 +159,9 @@ oudsm:
 
 # Configuration for Logstash deployment
 elk:
-  # Enabled flag to enable the integrated ELK stack for OUD
-  enabled: false
   imagePullSecrets:
     - name: dockercred
+  # IntegrationEnabled flag to enable Logstash deployment for OUDSM
   IntegrationEnabled: false
   logStashImage: logstash:8.3.1
   logstashConfigMap:
diff --git a/docs-source/content/idm-products/_index.md b/docs-source/content/idm-products/_index.md
index 5fa5bd7be..dc61eade4 100644
--- a/docs-source/content/idm-products/_index.md
+++ b/docs-source/content/idm-products/_index.md
@@ -6,7 +6,20 @@ description= "This document lists all the Oracle Identity Management products d
 
 ### Oracle Fusion Middleware on Kubernetes
 
-Oracle supports the deployment of the following Oracle Identity Management products on Kubernetes. Click on the appropriate document link below to get started on setting up the product.
+Oracle supports the deployment of the following Oracle Identity Management products on Kubernetes. Click on the appropriate document link below to get started on configuring the product.
+
+Please note the following:
+
++ The individual product guides below for [Oracle Access Management](../idm-products/oam), [Oracle Identity Governance](../idm-products/oig), [Oracle Unified Directory](../idm-products/oud), and [Oracle Unified Directory Services Manager](../idm-products/oudsm) are for configuring that product on a Kubernetes cluster where no other Oracle Identity Management products will be deployed. For example, if you are deploying Oracle Access Management (OAM) only, then you can follow the [Oracle Access Management](../idm-products/oam) guide. If you are deploying multiple Oracle Identity Management products on the same Kubernetes cluster, then you must follow the Enterprise Deployment Guide outlined in [Enterprise Deployments](../idm-products/enterprise-deployments). Note that you also have the option to follow the Enterprise Deployment Guide even if you are only installing one product, such as OAM.
+
++ The individual product guides do not explain how to configure a Kubernetes cluster, given that the product can be deployed on any compliant Kubernetes cluster. If you need to understand how to configure a Kubernetes cluster ready for an Oracle Identity Management deployment, you should follow the Enterprise Deployment Guide in [Enterprise Deployments](../idm-products/enterprise-deployments).
+ ++ The [Enterprise Deployment Automation](../idm-products/enterprise-deployments/enterprise-deployment-automation) section also contains details on automation scripts that can: + + + Automate the creation of a Kubernetes cluster on Oracle Cloud Infrastructure (OCI), ready for the deployment of Oracle Identity Management products. + + Automate the deployment of Oracle Identity Management products on any compliant Kubernetes cluster. + + {{% children style="h3" description="true" %}} diff --git a/docs-source/content/idm-products/enterprise-deployments/enterprise-deployment-automation/_index.md b/docs-source/content/idm-products/enterprise-deployments/enterprise-deployment-automation/_index.md index a4708e072..92738f4a1 100644 --- a/docs-source/content/idm-products/enterprise-deployments/enterprise-deployment-automation/_index.md +++ b/docs-source/content/idm-products/enterprise-deployments/enterprise-deployment-automation/_index.md @@ -6,8 +6,11 @@ description: "The Enterprise Deployment Automation scripts deploy the entire Ora ### Enterprise Deployment Automation -The [Enterprise Deployment Automation scripts](https://github.com/oracle/fmw-kubernetes/tree/master/FMWKubernetesMAA/OracleEnterpriseDeploymentAutomation/OracleIdentityManagement), allow you to automatically deploy the entire Oracle Identity and Access Management suite in a production environment. +The Enterprise Deployment Automation scripts allow you to deploy the entire Oracle Identity and Access Management suite in a production environment. You can use the scripts to: + + + + Automate the creation of a Kubernetes cluster on Oracle Cloud Infrastructure (OCI), ready for the deployment of Oracle Identity and Access Management products. See [Automating the OCI Infrastructure Creation for the Identity and Access Management Kubernetes Cluster](https://docs.oracle.com/en/middleware/fusion-middleware/12.2.1.4/ikedg/automating-oci-infrastructure-creation-identity-and-access-management-kubernetes-cluster.html). + + Automate the deployment of Oracle Identity and Access Management products on any compliant Kubernetes cluster. See [Automating the Identity and Access Management Enterprise Deployment](https://docs.oracle.com/en/middleware/fusion-middleware/12.2.1.4/ikedg/automating-identity-management-deployment.html). -For more information about the use of these scripts, see [Automating the Identity and Access Management Enterprise Deployment](https://docs.oracle.com/en/middleware/fusion-middleware/12.2.1.4/ikedg/automating-identity-management-deployment.html). diff --git a/docs-source/content/idm-products/oam/configure-ingress/_index.md b/docs-source/content/idm-products/oam/configure-ingress/_index.md index a8a5f24fa..566f7e4c3 100644 --- a/docs-source/content/idm-products/oam/configure-ingress/_index.md +++ b/docs-source/content/idm-products/oam/configure-ingress/_index.md @@ -327,8 +327,8 @@ If you are using a Managed Service for your Kubernetes cluster, for example Orac The output will look similar to the following: ``` - NAME CLASS HOSTS ADDRESS PORTS AGE - access-ingress * 10.101.132.251 80 2m53s + NAME CLASS HOSTS ADDRESS PORTS AGE + accessdomain-nginx * 80 5s ``` 1. 
Find the node port of NGINX using the following command: @@ -367,6 +367,7 @@ If you are using a Managed Service for your Kubernetes cluster, for example Orac Name: accessdomain-nginx Namespace: oamns Address: 10.106.70.55 + Ingress Class: Default backend: default-http-backend:80 () Rules: Host Path Backends diff --git a/docs-source/content/idm-products/oam/create-oam-domains/_index.md b/docs-source/content/idm-products/oam/create-oam-domains/_index.md index ca071e51f..61ccf96b3 100644 --- a/docs-source/content/idm-products/oam/create-oam-domains/_index.md +++ b/docs-source/content/idm-products/oam/create-oam-domains/_index.md @@ -66,7 +66,7 @@ The sample scripts for Oracle Access Management domain deployment are available ```bash domainUID: accessdomain domainHome: /u01/oracle/user_projects/domains/accessdomain - image: container-registry.oracle.com/middleware/oam_cpu:12.2.1.4-jdk8-ol7- + image: container-registry.oracle.com/middleware/oam_cpu:12.2.1.4-jdk8-ol7- imagePullSecretName: orclcred weblogicCredentialsSecretName: accessdomain-credentials logHome: /u01/oracle/user_projects/domains/logs/accessdomain @@ -90,9 +90,9 @@ A full list of parameters in the `create-domain-inputs.yaml` file are shown belo | `createDomainFilesDir` | Directory on the host machine to locate all the files to create a WebLogic domain, including the script that is specified in the `createDomainScriptName` property. By default, this directory is set to the relative path `wlst`, and the create script will use the built-in WLST offline scripts in the `wlst` directory to create the WebLogic domain. It can also be set to the relative path `wdt`, and then the built-in WDT scripts will be used instead. An absolute path is also supported to point to an arbitrary directory in the file system. The built-in scripts can be replaced by the user-provided scripts or model files as long as those files are in the specified directory. Files in this directory are put into a Kubernetes config map, which in turn is mounted to the `createDomainScriptsMountPath`, so that the Kubernetes pod can use the scripts and supporting files to create a domain home. | `wlst` | | `createDomainScriptsMountPath` | Mount path where the create domain scripts are located inside a pod. The `create-domain.sh` script creates a Kubernetes job to run the script (specified in the `createDomainScriptName` property) in a Kubernetes pod to create a domain home. Files in the `createDomainFilesDir` directory are mounted to this location in the pod, so that the Kubernetes pod can use the scripts and supporting files to create a domain home. | `/u01/weblogic` | | `createDomainScriptName` | Script that the create domain script uses to create a WebLogic domain. The `create-domain.sh` script creates a Kubernetes job to run this script to create a domain home. The script is located in the in-pod directory that is specified in the `createDomainScriptsMountPath` property. If you need to provide your own scripts to create the domain home, instead of using the built-it scripts, you must use this property to set the name of the script that you want the create domain job to run. | `create-domain-job.sh` | -| `domainHome` | Home directory of the OAM domain. If not specified, the value is derived from the `domainUID` as `/shared/domains/`. | `/u01/oracle/user_projects/domains/accessinfra` | -| `domainPVMountPath` | Mount path of the domain persistent volume. | `/u01/oracle/user_projects` | -| `domainUID` | Unique ID that will be used to identify this particular domain. 
Used as the name of the generated WebLogic domain as well as the name of the Kubernetes domain resource. This ID must be unique across all domains in a Kubernetes cluster. This ID cannot contain any character that is not valid in a Kubernetes service name. | `accessinfra` | +| `domainHome` | Home directory of the OAM domain. If not specified, the value is derived from the `domainUID` as `/shared/domains/`. | `/u01/oracle/user_projects/domains/accessdomain` | +| `domainPVMountPath` | Mount path of the domain persistent volume. | `/u01/oracle/user_projects/domains` | +| `domainUID` | Unique ID that will be used to identify this particular domain. Used as the name of the generated WebLogic domain as well as the name of the Kubernetes domain resource. This ID must be unique across all domains in a Kubernetes cluster. This ID cannot contain any character that is not valid in a Kubernetes service name. | `accessdomain` | | `domainType` | Type of the domain. Mandatory input for OAM domains. You must provide one of the supported domain type value: `oam` (deploys an OAM domain)| `oam` | `exposeAdminNodePort` | Boolean indicating if the Administration Server is exposed outside of the Kubernetes cluster. | `false` | | `exposeAdminT3Channel` | Boolean indicating if the T3 administrative channel is exposed outside the Kubernetes cluster. | `true` | @@ -102,21 +102,21 @@ A full list of parameters in the `create-domain-inputs.yaml` file are shown belo | `includeServerOutInPodLog` | Boolean indicating whether to include the server .out to the pod's stdout. | `true` | | `initialManagedServerReplicas` | Number of Managed Servers to initially start for the domain. | `2` | | `javaOptions` | Java options for starting the Administration Server and Managed Servers. A Java option can have references to one or more of the following pre-defined variables to obtain WebLogic domain information: `$(DOMAIN_NAME)`, `$(DOMAIN_HOME)`, `$(ADMIN_NAME)`, `$(ADMIN_PORT)`, and `$(SERVER_NAME)`. | `-Dweblogic.StdoutDebugEnabled=false` | -| `logHome` | The in-pod location for the domain log, server logs, server out, and Node Manager log files. If not specified, the value is derived from the `domainUID` as `/shared/logs/`. | `/u01/oracle/user_projects/domains/logs/accessinfra` | +| `logHome` | The in-pod location for the domain log, server logs, server out, and Node Manager log files. If not specified, the value is derived from the `domainUID` as `/shared/logs/`. | `/u01/oracle/user_projects/domains/logs/accessdomain` | | `managedServerNameBase` | Base string used to generate Managed Server names. | `oam_server` | | `managedServerPort` | Port number for each Managed Server. | `8001` | | `namespace` | Kubernetes namespace in which to create the domain. | `accessns` | -| `persistentVolumeClaimName` | Name of the persistent volume claim created to host the domain home. If not specified, the value is derived from the `domainUID` as `-weblogic-sample-pvc`. | `accessinfra-domain-pvc` | +| `persistentVolumeClaimName` | Name of the persistent volume claim created to host the domain home. If not specified, the value is derived from the `domainUID` as `-weblogic-sample-pvc`. | `accessdomain-domain-pvc` | | `productionModeEnabled` | Boolean indicating if production mode is enabled for the domain. | `true` | | `serverStartPolicy` | Determines which WebLogic Server instances will be started. Legal values are `Never`, `IfNeeded`, `AdminOnly`. | `IfNeeded` | | `t3ChannelPort` | Port for the T3 channel of the NetworkAccessPoint. 
| `30012` | | `t3PublicAddress` | Public address for the T3 channel. This should be set to the public address of the Kubernetes cluster. This would typically be a load balancer address.

For development environments only: In a single server (all-in-one) Kubernetes deployment, this may be set to the address of the master, or at the very least, it must be set to the address of one of the worker nodes. | If not provided, the script will attempt to set it to the IP address of the Kubernetes cluster | -| `weblogicCredentialsSecretName` | Name of the Kubernetes secret for the Administration Server's user name and password. If not specified, then the value is derived from the `domainUID` as `-weblogic-credentials`. | `accessinfra-domain-credentials` | +| `weblogicCredentialsSecretName` | Name of the Kubernetes secret for the Administration Server's user name and password. If not specified, then the value is derived from the `domainUID` as `-weblogic-credentials`. | `accessdomain-domain-credentials` | | `weblogicImagePullSecretName` | Name of the Kubernetes secret for the container registry, used to pull the WebLogic Server image. | | | `serverPodCpuRequest`, `serverPodMemoryRequest`, `serverPodCpuCLimit`, `serverPodMemoryLimit` | The maximum amount of compute resources allowed, and minimum amount of compute resources required, for each server pod. Please refer to the Kubernetes documentation on `Managing Compute Resources for Containers` for details. | Resource requests and resource limits are not specified. | | `rcuSchemaPrefix` | The schema prefix to use in the database, for example `OAM1`. You may wish to make this the same as the domainUID in order to simplify matching domains to their RCU schemas. | `OAM1` | | `rcuDatabaseURL` | The database URL. | `oracle-db.default.svc.cluster.local:1521/devpdb.k8s` | -| `rcuCredentialsSecret` | The Kubernetes secret containing the database credentials. | `accessinfra-rcu-credentials` | +| `rcuCredentialsSecret` | The Kubernetes secret containing the database credentials. | `accessdomain-rcu-credentials` | | `datasourceType` | Type of JDBC datasource applicable for the OAM domain. Legal values are `agl` and `generic`. Choose `agl` for Active GridLink datasource and `generic` for Generic datasource. For enterprise deployments, Oracle recommends that you use GridLink data sources to connect to Oracle RAC databases. See the [Enterprise Deployment Guide](https://docs.oracle.com/en/middleware/fusion-middleware/12.2.1.4/ikedg/preparing-existing-database-enterprise-deployment.html#GUID-E3705EFF-AEF2-4F75-B5CE-1A829CDF0A1F) for further details. | `generic` | @@ -165,7 +165,7 @@ generated artifacts: export initialManagedServerReplicas="2" export managedServerNameBase="oam_server" export managedServerPort="14100" - export image="container-registry.oracle.com/middleware/oam_cpu:12.2.1.4-jdk8-ol7-" + export image="container-registry.oracle.com/middleware/oam_cpu:12.2.1.4-jdk8-ol7-" export imagePullPolicy="IfNotPresent" export imagePullSecretName="orclcred" export productionModeEnabled="true" @@ -248,14 +248,22 @@ By default, the java memory parameters assigned to the oam_server cluster are ve 1. 
Edit the `domain.yaml` file and inside `name: accessdomain-oam-cluster`, add the memory setting as below: ``` - serverPod: - env: - - name: USER_MEM_ARGS - value: "-XX:+UseContainerSupport -Djava.security.egd=file:/dev/./urandom -Xms8192m -Xmx8192m" + serverPod: + env: + - name: USER_MEM_ARGS + value: "-XX:+UseContainerSupport -Djava.security.egd=file:/dev/./urandom -Xms8192m -Xmx8192m" + resources: + limits: + cpu: "2" + memory: "8Gi" + requests: + cpu: "1000m" + memory: "4Gi" ``` - For example: + For example: + ``` apiVersion: weblogic.oracle/v1 kind: Cluster @@ -270,9 +278,26 @@ By default, the java memory parameters assigned to the oam_server cluster are ve env: - name: USER_MEM_ARGS value: "-XX:+UseContainerSupport -Djava.security.egd=file:/dev/./urandom -Xms8192m -Xmx8192m" - replicas: 2 + resources: + limits: + cpu: "2" + memory: "8Gi" + requests: + cpu: "1000m" + memory: "4Gi" + replicas: 1 + ``` + + **Note**: The above CPU and memory values are for development environments only. For Enterprise Deployments, please review the performance recommendations and sizing requirements in [Enterprise Deployment Guide for Oracle Identity and Access Management in a Kubernetes Cluster](https://docs.oracle.com/en/middleware/fusion-middleware/12.2.1.4/ikedg/procuring-resources-oracle-cloud-infrastructure-deployment.html#GUID-2E3C8D01-43EB-4691-B1D6-25B1DC2475AE). + + **Note**: Limits and requests for CPU resources are measured in CPU units. One CPU in Kubernetes is equivalent to 1 vCPU/Core for cloud providers, and 1 hyperthread on bare-metal Intel processors. An "`m`" suffix in a CPU attribute indicates ‘milli-CPU’, so 500m is 50% of a CPU. Memory can be expressed in various units, where one Mi is one IEC unit mega-byte (1024^2), and one Gi is one IEC unit giga-byte (1024^3). For more information, see [Resource Management for Pods and Containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/), [Assign Memory Resources to Containers and Pods](https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/), and [Assign CPU Resources to Containers and Pods](https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/). + + **Note**: The parameters above are also utilized by the Kubernetes Horizontal Pod Autoscaler (HPA). For more details on HPA, see [Kubernetes Horizontal Pod Autoscaler](../manage-oam-domains/hpa). + + **Note**: If required you can also set the same resources and limits for the `accessdomain-policy-cluster`. + 1. In the `domain.yaml` locate the section of the file starting with `adminServer:`. Under the `env:` tag add the following `CLASSPATH` entries. This is required for running the `idmconfigtool` from the Administration Server. 
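+
+   Once the domain is subsequently created and restarted, one way to confirm the entries took effect is to inspect the environment of the Administration Server pod. A minimal sketch, assuming the pod name and namespace used throughout this document (`accessdomain-adminserver` in `oamns`):
+
+   ```
+   $ kubectl exec -n oamns accessdomain-adminserver -- sh -c 'echo $CLASSPATH'
+   ```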
@@ -359,7 +384,7 @@ By default, the java memory parameters assigned to the oam_server cluster are ve
    For example:

-   ```bash
+   ```bash
    $ kubectl get all,domains -n oamns
    ```

@@ -370,9 +395,7 @@ By default, the java memory parameters assigned to the oam_server cluster are ve
    pod/accessdomain-adminserver 1/1 Running 0 11m
    pod/accessdomain-create-oam-infra-domain-job-7c9r9 0/1 Completed 0 18m
    pod/accessdomain-oam-policy-mgr1 1/1 Running 0 3m31s
-   pod/accessdomain-oam-policy-mgr2 1/1 Running 0 3m31s
    pod/accessdomain-oam-server1 1/1 Running 0 3m31s
-   pod/accessdomain-oam-server2 1/1 Running 0 3m31s
    pod/helper 1/1 Running 0 33m

    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
@@ -380,12 +403,12 @@ By default, the java memory parameters assigned to the oam_server cluster are ve
    service/accessdomain-cluster-oam-cluster ClusterIP 10.101.59.154 14100/TCP 3m31s
    service/accessdomain-cluster-policy-cluster ClusterIP 10.98.236.51 15100/TCP 3m31s
    service/accessdomain-oam-policy-mgr1 ClusterIP None 15100/TCP 3m31s
-   service/accessdomain-oam-policy-mgr2 ClusterIP None 15100/TCP 3m31s
+   service/accessdomain-oam-policy-mgr2 ClusterIP 10.104.92.12 15100/TCP 3m31s
    service/accessdomain-oam-policy-mgr3 ClusterIP 10.96.244.37 15100/TCP 3m31s
    service/accessdomain-oam-policy-mgr4 ClusterIP 10.105.201.23 15100/TCP 3m31s
    service/accessdomain-oam-policy-mgr5 ClusterIP 10.110.12.227 15100/TCP 3m31s
    service/accessdomain-oam-server1 ClusterIP None 14100/TCP 3m31s
-   service/accessdomain-oam-server2 ClusterIP None 14100/TCP 3m31s
+   service/accessdomain-oam-server2 ClusterIP 10.96.137.33 14100/TCP 3m31s
    service/accessdomain-oam-server3 ClusterIP 10.103.178.35 14100/TCP 3m31s
    service/accessdomain-oam-server4 ClusterIP 10.97.254.78 14100/TCP 3m31s
    service/accessdomain-oam-server5 ClusterIP 10.105.65.104 14100/TCP 3m31s
@@ -416,8 +439,8 @@ By default, the java memory parameters assigned to the oam_server cluster are ve
    * An Administration Server named `AdminServer` listening on port `7001`.
    * A configured OAM cluster named `oam_cluster` of size 5.
    * A configured Policy Manager cluster named `policy_cluster` of size 5.
-   * Two started OAM managed Servers, named `oam_server1` and `oam_server2`, listening on port `14100`.
-   * Two started Policy Manager managed servers named `oam-policy-mgr1` and `oam-policy-mgr2`, listening on port `15100`.
+   * One started OAM Managed Server, named `oam_server1`, listening on port `14100`.
+   * One started Policy Manager Managed Server, named `oam-policy-mgr1`, listening on port `15100`.
    * Log files that are located in `<persistent_volume>/logs/<domainUID>`.
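+
+   To confirm that the configured clusters exist as Kubernetes resources, you can also list the operator's cluster resources directly. A minimal sketch, assuming the `oamns` namespace used throughout this document:
+
+   ```
+   $ kubectl get clusters -n oamns
+   ```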
#### Verify the domain @@ -446,20 +469,6 @@ By default, the java memory parameters assigned to the oam_server cluster are ve Creation Timestamp: Generation: 1 Managed Fields: - API Version: weblogic.oracle/v9 - Fields Type: FieldsV1 - fieldsV1: - f:status: - .: - f:clusters: - f:conditions: - f:observedGeneration: - f:servers: - f:startTime: - Manager: Kubernetes Java Client - Operation: Update - Subresource: status - Time: API Version: weblogic.oracle/v9 Fields Type: FieldsV1 fieldsV1: @@ -506,11 +515,25 @@ By default, the java memory parameters assigned to the oam_server cluster are ve f:webLogicCredentialsSecret: .: f:name: - Manager: kubectl-client-side-apply + Manager: kubectl-client-side-apply + Operation: Update + Time: + API Version: weblogic.oracle/v9 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:clusters: + f:conditions: + f:observedGeneration: + f:servers: + f:startTime: + Manager: Kubernetes Java Client Operation: Update + Subresource: status Time: - Resource Version: 141884 - UID: b85571f1-f39d-4188-acfd-9264e5b925b5 + Resource Version: 2074089 + UID: e194d483-7383-4359-adb9-bf97de36518b Spec: Admin Server: Admin Channel Port Forwarding Enabled: true @@ -530,7 +553,7 @@ By default, the java memory parameters assigned to the oam_server cluster are ve Failure Retry Interval Seconds: 120 Failure Retry Limit Minutes: 1440 Http Access Log In Log Home: true - Image: container-registry.oracle.com/middleware/oam_cpu: + Image: container-registry.oracle.com/middleware/oam_cpu:12.2.1.4-jdk8-ol7- Image Pull Policy: IfNotPresent Image Pull Secrets: Name: orclcred @@ -572,9 +595,9 @@ By default, the java memory parameters assigned to the oam_server cluster are ve Maximum Replicas: 5 Minimum Replicas: 0 Observed Generation: 1 - Ready Replicas: 2 - Replicas: 2 - Replicas Goal: 2 + Ready Replicas: 1 + Replicas: 1 + Replicas Goal: 1 Cluster Name: policy_cluster Conditions: Last Transition Time: @@ -587,9 +610,9 @@ By default, the java memory parameters assigned to the oam_server cluster are ve Maximum Replicas: 5 Minimum Replicas: 0 Observed Generation: 1 - Ready Replicas: 2 - Replicas: 2 - Replicas Goal: 2 + Ready Replicas: 1 + Replicas: 1 + Replicas Goal: 1 Conditions: Last Transition Time: Status: True @@ -605,7 +628,7 @@ By default, the java memory parameters assigned to the oam_server cluster are ve Subsystems: Subsystem Name: ServerRuntime Symptoms: - Node Name: worker-node1 + Node Name: worker-node2 Pod Phase: Running Pod Ready: True Server Name: AdminServer @@ -617,26 +640,17 @@ By default, the java memory parameters assigned to the oam_server cluster are ve Overall Health: ok Subsystems: Subsystem Name: ServerRuntime - Symptoms: - Node Name: worker-node2 + Symptoms: + Node Name: worker-node1 Pod Phase: Running Pod Ready: True Server Name: oam_server1 State: RUNNING State Goal: RUNNING Cluster Name: oam_cluster - Health: - Activation Time: - Overall Health: ok - Subsystems: - Subsystem Name: ServerRuntime - Symptoms: - Node Name: worker-node1 - Pod Phase: Running - Pod Ready: True Server Name: oam_server2 - State: RUNNING - State Goal: RUNNING + State: SHUTDOWN + State Goal: SHUTDOWN Cluster Name: oam_cluster Server Name: oam_server3 State: SHUTDOWN @@ -656,25 +670,16 @@ By default, the java memory parameters assigned to the oam_server cluster are ve Subsystems: Subsystem Name: ServerRuntime Symptoms: - Node Name: worker-node2 + Node Name: worker-node1 Pod Phase: Running Pod Ready: True Server Name: oam_policy_mgr1 State: RUNNING State Goal: RUNNING Cluster Name: 
policy_cluster - Health: - Activation Time: - Overall Health: ok - Subsystems: - Subsystem Name: ServerRuntime - Symptoms: - Node Name: worker-node1 - Pod Phase: Running - Pod Ready: True Server Name: oam_policy_mgr2 - State: RUNNING - State Goal: RUNNING + State: SHUTDOWN + State Goal: SHUTDOWN Cluster Name: policy_cluster Server Name: oam_policy_mgr3 State: SHUTDOWN @@ -688,7 +693,12 @@ By default, the java memory parameters assigned to the oam_server cluster are ve State: SHUTDOWN State Goal: SHUTDOWN Start Time: - Events: + Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Created 15m weblogic.operator Domain accessdomain was created. + Normal Available 2m56s weblogic.operator Domain accessdomain is available: a sufficient number of its servers have reached the ready state. + Normal Completed 2m56s weblogic.operator Domain accessdomain is complete because all of the following are true: there is no failure detected, there are no pending server shutdowns, and all servers expected to be running are ready and at their target image, auxiliary images, restart version, and introspect version. ``` In the `Status` section of the output, the available servers and clusters are listed. @@ -714,9 +724,7 @@ By default, the java memory parameters assigned to the oam_server cluster are ve accessdomain-adminserver 1/1 Running 0 18m 10.244.6.63 10.250.42.252 accessdomain-create-oam-infra-domain-job-7c9r9 0/1 Completed 0 25m 10.244.6.61 10.250.42.252 accessdomain-oam-policy-mgr1 1/1 Running 0 10m 10.244.5.13 10.250.42.255 - accessdomain-oam-policy-mgr2 1/1 Running 0 10m 10.244.6.65 10.250.42.252 accessdomain-oam-server1 1/1 Running 0 10m 10.244.5.12 10.250.42.255 - accessdomain-oam-server2 1/1 Running 0 10m 10.244.6.64 10.250.42.252 helper 1/1 Running 0 40m 10.244.6.60 10.250.42.252 ``` diff --git a/docs-source/content/idm-products/oam/introduction/_index.md b/docs-source/content/idm-products/oam/introduction/_index.md index 5c9f7c64e..774c31b78 100644 --- a/docs-source/content/idm-products/oam/introduction/_index.md +++ b/docs-source/content/idm-products/oam/introduction/_index.md @@ -23,7 +23,9 @@ environment. You can: ### Current production release -The current production release for the Oracle Access Management domain deployment on Kubernetes is [23.3.1](https://github.com/oracle/fmw-kubernetes/releases). This release uses the WebLogic Kubernetes Operator version 4.0.4. +The current production release for the Oracle Access Management domain deployment on Kubernetes is [23.4.1](https://github.com/oracle/fmw-kubernetes/releases). This release uses the WebLogic Kubernetes Operator version 4.1.2. + +For 4.0.X WebLogic Kubernetes Operator refer to [Version 23.3.1](https://oracle.github.io/fmw-kubernetes/23.3.1/idm-products/oam/) For 3.4.X WebLogic Kubernetes Operator refer to [Version 23.1.1](https://oracle.github.io/fmw-kubernetes/23.1.1/idm-products/oam/) @@ -37,15 +39,23 @@ See [here](../prerequisites/#limitations) for limitations in this release. ### Getting started -This documentation explains how to configure OAM on a Kubernetes cluster where no other Oracle Identity Management products will be deployed. For detailed information about this type of deployment , start at [Prerequisites](../prerequisites) and follow this documentation sequentially. +This documentation explains how to configure OAM on a Kubernetes cluster where no other Oracle Identity Management products will be deployed. 
For detailed information about this type of deployment, start at [Prerequisites](../prerequisites) and follow this documentation sequentially. Please note that this documentation does not explain how to configure a Kubernetes cluster, because the product can be deployed on any compliant Kubernetes vendor's cluster.
+
+If you are deploying multiple Oracle Identity Management products on the same Kubernetes cluster, then you must follow the Enterprise Deployment Guide outlined in [Enterprise Deployments](../../enterprise-deployments).
+Please note, you also have the option to follow the Enterprise Deployment Guide even if you are only installing OAM and no other Oracle Identity Management products.
+
+**Note**: If you need to understand how to configure a Kubernetes cluster ready for an Oracle Access Management deployment, you should follow the Enterprise Deployment Guide referenced in [Enterprise Deployments](../../enterprise-deployments). The [Enterprise Deployment Automation](../../enterprise-deployments/enterprise-deployment-automation) section also contains details on automation scripts that can:
+
+ + Automate the creation of a Kubernetes cluster on Oracle Cloud Infrastructure (OCI), ready for the deployment of Oracle Identity Management products.
+ + Automate the deployment of Oracle Identity Management products on any compliant Kubernetes cluster.
 
-If performing an Enterprise Deployment where multiple Oracle Identity Management products are deployed, refer to the [Enterprise Deployment Guide for Oracle Identity and Access Management in a Kubernetes Cluster](https://docs.oracle.com/en/middleware/fusion-middleware/12.2.1.4/ikedg/index.html) instead.
 
 ### Documentation for earlier releases
 
 To view documentation for an earlier release, see:
 
+* [Version 23.3.1](https://oracle.github.io/fmw-kubernetes/23.3.1/idm-products/oam/)
 * [Version 23.2.1](https://oracle.github.io/fmw-kubernetes/23.2.1/idm-products/oam/)
 * [Version 23.1.1](https://oracle.github.io/fmw-kubernetes/23.1.1/idm-products/oam/)
 * [Version 22.4.1](https://oracle.github.io/fmw-kubernetes/22.4.1/oam/)
diff --git a/docs-source/content/idm-products/oam/manage-oam-domains/delete-domain-home.md b/docs-source/content/idm-products/oam/manage-oam-domains/delete-domain-home.md
index 1b79acb4b..ee1889946 100644
--- a/docs-source/content/idm-products/oam/manage-oam-domains/delete-domain-home.md
+++ b/docs-source/content/idm-products/oam/manage-oam-domains/delete-domain-home.md
@@ -1,5 +1,5 @@
 ---
-title: "e. Delete the OAM domain home"
+title: "f. Delete the OAM domain home"
 description: "Learn about the steps to cleanup the OAM domain home."
 ---
 
diff --git a/docs-source/content/idm-products/oam/manage-oam-domains/domain-lifecycle.md b/docs-source/content/idm-products/oam/manage-oam-domains/domain-lifecycle.md
index 12a499ddc..d072a5ad3 100644
--- a/docs-source/content/idm-products/oam/manage-oam-domains/domain-lifecycle.md
+++ b/docs-source/content/idm-products/oam/manage-oam-domains/domain-lifecycle.md
@@ -17,16 +17,21 @@ As OAM domains use the WebLogic Kubernetes Operator, domain lifecyle operations
 This document shows the basic operations for starting, stopping and scaling servers in the OAM domain. For more detailed information refer to [Domain Life Cycle](https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/domain-lifecycle/) in the [WebLogic Kubernetes Operator](https://oracle.github.io/weblogic-kubernetes-operator/) documentation.
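+
+In outline, each manual operation below follows the same pattern: inspect the running pods, then edit the `replicas` value on the relevant cluster resource. A minimal sketch, assuming the `oamns` namespace and the cluster resource names used throughout this document:
+
+```
+$ kubectl get pods -n oamns
+$ kubectl edit cluster accessdomain-oam-cluster -n oamns
+$ kubectl edit cluster accessdomain-policy-cluster -n oamns
+```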
- + + {{% notice note %}} Do not use the WebLogic Server Administration Console or Oracle Enterprise Manager Console to start or stop servers. {{% /notice %}} + +**Note**: The instructions below are for starting, stopping, or scaling servers manually. If you wish to use autoscaling, see [Kubernetes Horizontal Pod Autoscaler](../hpa). Please note, if you have enabled autoscaling, it is recommended to delete the autoscaler before running the commands below. + + ### View existing OAM servers -The default OAM deployment starts the Administration Server (`AdminServer`), two OAM Managed Servers (`oam_server1` and `oam_server2`) and two OAM Policy Manager servers (`oam_policy_mgr1` and `oam_policy_mgr2` ). +The default OAM deployment starts the Administration Server (`AdminServer`), one OAM Managed Server (`oam_server1`) and one OAM Policy Manager server (`oam_policy_mgr1`). -The deployment also creates, but doesn't start, three extra OAM Managed Servers (`oam-server3` to `oam-server5`) and three more OAM Policy Manager servers (`oam_policy_mgr3` to `oam_policy_mgr5`). +The deployment also creates, but doesn't start, four extra OAM Managed Servers (`oam-server2` to `oam-server5`) and four more OAM Policy Manager servers (`oam_policy_mgr2` to `oam_policy_mgr5`). All these servers are visible in the WebLogic Server Console `https://${MASTERNODE-HOSTNAME}:${MASTERNODE-PORT}/console` by navigating to *Domain Structure* > *oamcluster* > *Environment* > *Servers*. @@ -49,9 +54,7 @@ NAME READY STATUS RES accessdomain-adminserver 1/1 Running 0 3h29m accessdomain-create-oam-infra-domain-job-7c9r9 0/1 Completed 0 3h36m accessdomain-oam-policy-mgr1 1/1 Running 0 3h21m -accessdomain-oam-policy-mgr2 1/1 Running 0 3h21m accessdomain-oam-server1 1/1 Running 0 3h21m -accessdomain-oam-server2 1/1 Running 0 3h21m helper 1/1 Running 0 3h51m nginx-ingress-ingress-nginx-controller-76fb7678f-k8rhq 1/1 Running 0 55m ``` @@ -74,13 +77,13 @@ The number of OAM Managed Servers running is dependent on the `replicas` paramet **Note**: This opens an edit session for the oam-cluster where parameters can be changed using standard `vi` commands. -1. In the edit session, search for `spec:`, and then look for the `replicas` parameter under `clusterName: oam_cluster`. By default the replicas parameter is set to "2" hence two OAM Managed Servers are started (`oam_server1` and `oam_server2`): +1. In the edit session, search for `spec:`, and then look for the `replicas` parameter under `clusterName: oam_cluster`. By default the replicas parameter is set to "1" hence one OAM Managed Server is started (`oam_server1`): ``` ... spec: clusterName: oam_cluster - replicas: 2 + replicas: 1 serverPod: env: - name: USER_MEM_ARGS @@ -89,13 +92,13 @@ The number of OAM Managed Servers running is dependent on the `replicas` paramet ... ``` -1. To start more OAM Managed Servers, increase the `replicas` value as desired. In the example below, two more managed servers will be started by setting `replicas` to "4": +1. To start more OAM Managed Servers, increase the `replicas` value as desired. In the example below, two more managed servers will be started by setting `replicas` to "3": ``` ... 
spec: clusterName: oam_cluster - replicas: 4 + replicas: 3 serverPod: env: - name: USER_MEM_ARGS @@ -131,27 +134,23 @@ The number of OAM Managed Servers running is dependent on the `replicas` paramet accessdomain-adminserver 1/1 Running 0 3h33m accessdomain-create-oam-infra-domain-job-7c9r9 0/1 Completed 0 3h40m accessdomain-oam-policy-mgr1 1/1 Running 0 3h25m - accessdomain-oam-policy-mgr2 1/1 Running 0 3h25m accessdomain-oam-server1 1/1 Running 0 3h25m - accessdomain-oam-server2 1/1 Running 0 3h25m - accessdomain-oam-server3 0/1 Running 0 9s - accessdomain-oam-server4 0/1 Running 0 9s + accessdomain-oam-server2 0/1 Running 0 3h25m + accessdomain-oam-server3 0/1 Pending 0 9s helper 1/1 Running 0 3h55m nginx-ingress-ingress-nginx-controller-76fb7678f-k8rhq 1/1 Running 0 59m ``` - Two new pods (`accessdomain-oam-server3` and `accessdomain-oam-server4`) are started, but currently have a `READY` status of `0/1`. This means `oam_server3` and `oam_server4` are not currently running but are in the process of starting. The servers will take several minutes to start so keep executing the command until `READY` shows `1/1`: + Two new pods (`accessdomain-oam-server2` and `accessdomain-oam-server3`) are started, but currently have a `READY` status of `0/1`. This means `oam_server2` and `oam_server3` are not currently running but are in the process of starting. The servers will take several minutes to start so keep executing the command until `READY` shows `1/1`: ``` NAME READY STATUS RESTARTS AGE accessdomain-adminserver 1/1 Running 0 3h37m accessdomain-create-oam-infra-domain-job-7c9r9 0/1 Completed 0 3h43m accessdomain-oam-policy-mgr1 1/1 Running 0 3h29m - accessdomain-oam-policy-mgr2 1/1 Running 0 3h29m accessdomain-oam-server1 1/1 Running 0 3h29m accessdomain-oam-server2 1/1 Running 0 3h29m accessdomain-oam-server3 1/1 Running 0 3m45s - accessdomain-oam-server4 1/1 Running 0 3m45s helper 1/1 Running 0 3h59m nginx-ingress-ingress-nginx-controller-76fb7678f-k8rhq 1/1 Running 0 63m @@ -186,13 +185,13 @@ As mentioned in the previous section, the number of OAM Managed Servers running $ kubectl edit cluster accessdomain-oam-cluster -n oamns ``` -1. In the edit session, search for `spec:`, and then look for the `replicas` parameter under `clusterName: oam_cluster`. In the example below `replicas` is set to "4", hence four OAM Managed Servers are started (`access-domain-oam_server1` - `access-domain-oam_server4`): +1. In the edit session, search for `spec:`, and then look for the `replicas` parameter under `clusterName: oam_cluster`. In the example below `replicas` is set to "3", hence three OAM Managed Servers are started (`access-domain-oam_server1` - `access-domain-oam_server3`): ``` ... spec: clusterName: oam_cluster - replicas: 4 + replicas: 3 serverPod: env: - name: USER_MEM_ARGS @@ -201,12 +200,12 @@ As mentioned in the previous section, the number of OAM Managed Servers running ... ``` -1. To stop OAM Managed Servers, decrease the `replicas` value as desired. In the example below, we will stop two managed servers by setting replicas to "2": +1. To stop OAM Managed Servers, decrease the `replicas` value as desired. 
In the example below, we will stop two managed servers by setting replicas to "1": ``` spec: clusterName: oam_cluster - replicas: 2 + replicas: 1 serverPod: env: - name: USER_MEM_ARGS @@ -236,25 +235,21 @@ As mentioned in the previous section, the number of OAM Managed Servers running accessdomain-adminserver 1/1 Running 0 3h45m accessdomain-create-oam-infra-domain-job-7c9r9 0/1 Completed 0 3h51m accessdomain-oam-policy-mgr1 1/1 Running 0 3h37m - accessdomain-oam-policy-mgr2 1/1 Running 0 3h37m accessdomain-oam-server1 1/1 Running 0 3h37m accessdomain-oam-server2 1/1 Running 0 3h37m - accessdomain-oam-server3 1/1 Running 0 11m - accessdomain-oam-server4 1/1 Terminating 0 11m + accessdomain-oam-server3 1/1 Terminating 0 11m helper 1/1 Running 0 4h6m nginx-ingress-ingress-nginx-controller-76fb7678f-k8rhq 1/1 Running 0 71m ``` - One pod now has a `STATUS` of `Terminating` (`accessdomain-oam-server4`). The server will take a minute or two to stop. Once terminated the other pod (`accessdomain-oam-server3`) will move to `Terminating` and then stop. Keep executing the command until the pods have disappeared: + One pod now has a `STATUS` of `Terminating` (`accessdomain-oam-server3`). The server will take a minute or two to stop. Once terminated the other pod (`accessdomain-oam-server2`) will move to `Terminating` and then stop. Keep executing the command until the pods have disappeared: ``` NAME READY STATUS RESTARTS AGE accessdomain-adminserver 1/1 Running 0 3h48m accessdomain-create-oam-infra-domain-job-7c9r9 0/1 Completed 0 3h54m accessdomain-oam-policy-mgr1 1/1 Running 0 3h40m - accessdomain-oam-policy-mgr2 1/1 Running 0 3h40m accessdomain-oam-server1 1/1 Running 0 3h40m - accessdomain-oam-server2 1/1 Running 0 3h40m helper 1/1 Running 0 4h9m nginx-ingress-ingress-nginx-controller-76fb7678f-k8rhq 1/1 Running 0 74m ``` @@ -277,25 +272,25 @@ The number of OAM Policy Managed Servers running is dependent on the `replicas` **Note**: This opens an edit session for the policy-cluster where parameters can be changed using standard `vi` commands. -1. In the edit session, search for `spec:`, and then look for the `replicas` parameter under `clusterName: policy_cluster`. By default the replicas parameter is set to "2" hence two OAM Policy Managed Servers are started (`oam_policy_mgr1` and `oam_policy_mgr2`): +1. In the edit session, search for `spec:`, and then look for the `replicas` parameter under `clusterName: policy_cluster`. By default the replicas parameter is set to "1" hence one OAM Policy Managed Server is started (`oam_policy_mgr1`): ``` ... spec: clusterName: policy_cluster - replicas: 2 + replicas: 1 serverService: precreateService: true ... ``` -1. To start more OAM Policy Managed Servers, increase the `replicas` value as desired. In the example below, two more managed servers will be started by setting `replicas` to "4": +1. To start more OAM Policy Managed Servers, increase the `replicas` value as desired. In the example below, two more managed servers will be started by setting `replicas` to "3": ``` ... spec: clusterName: policy_cluster - replicas: 4 + replicas: 3 serverService: precreateService: true ... @@ -309,7 +304,7 @@ The number of OAM Policy Managed Servers running is dependent on the `replicas` cluster.weblogic.oracle/accessdomain-policy-cluster edited ``` - After saving the changes two new pods will be started (`accessdomain-oam-policy-mgr3` and `accessdomain-oam-policy-mgr4`). After a few minutes they will have a `READY` status of `1/1`. 
In the example below `accessdomain-oam-policy-mgr3` and `accessdomain-oam-policy-mgr4` are started: + After saving the changes two new pods will be started (`accessdomain-oam-policy-mgr2` and `accessdomain-oam-policy-mgr3`). After a few minutes they will have a `READY` status of `1/1`. In the example below `accessdomain-oam-policy-mgr2` and `accessdomain-oam-policy-mgr3` are started: ``` NAME READY STATUS RESTARTS AGE @@ -318,9 +313,7 @@ The number of OAM Policy Managed Servers running is dependent on the `replicas` accessdomain-oam-policy-mgr1 1/1 Running 0 3h35m accessdomain-oam-policy-mgr2 1/1 Running 0 3h35m accessdomain-oam-policy-mgr3 1/1 Running 0 4m18s - accessdomain-oam-policy-mgr4 1/1 Running 0 4m18s accessdomain-oam-server1 1/1 Running 0 3h35m - accessdomain-oam-server2 1/1 Running 0 3h35m helper 1/1 Running 0 4h4m nginx-ingress-ingress-nginx-controller-76fb7678f-k8rhq 1/1 Running 0 69m ``` @@ -343,19 +336,19 @@ As mentioned in the previous section, the number of OAM Policy Managed Servers r $ kubectl edit cluster accessdomain-policy-cluster -n oamns ``` -1. In the edit session, search for `spec:`, and then look for the `replicas` parameter under `clusterName: policy_cluster`. To stop OAM Policy Managed Servers, decrease the `replicas` value as desired. In the example below, we will stop two managed servers by setting replicas to "2": +1. In the edit session, search for `spec:`, and then look for the `replicas` parameter under `clusterName: policy_cluster`. To stop OAM Policy Managed Servers, decrease the `replicas` value as desired. In the example below, we will stop two managed servers by setting replicas to "1": ``` ... spec: clusterName: policy_cluster - replicas: 2 + replicas: 1 serverService: precreateService: true ... ``` - After saving the changes one pod will move to a `STATUS` of `Terminating` (`accessdomain-oam-policy-mgr4`). + After saving the changes one pod will move to a `STATUS` of `Terminating` (`accessdomain-oam-policy-mgr3`). 
    ```
    NAME READY STATUS RESTARTS AGE
@@ -363,10 +356,8 @@ As mentioned in the previous section, the number of OAM Policy Managed Servers r
    accessdomain-create-oam-infra-domain-job-7c9r9 0/1 Completed 0 3h55m
    accessdomain-oam-policy-mgr1 1/1 Running 0 3h41m
    accessdomain-oam-policy-mgr2 1/1 Running 0 3h41m
-   accessdomain-oam-policy-mgr3 1/1 Running 0 10m
-   accessdomain-oam-policy-mgr4 1/1 Terminating 0 10m
+   accessdomain-oam-policy-mgr3 1/1 Terminating 0 10m
    accessdomain-oam-server1 1/1 Running 0 3h41m
-   accessdomain-oam-server2 1/1 Running 0 3h41m
    helper 1/1 Running 0 4h11m
    nginx-ingress-ingress-nginx-controller-76fb7678f-k8rhq 1/1 Running 0 75m
    ```
@@ -378,9 +369,7 @@ As mentioned in the previous section, the number of OAM Policy Managed Servers r
    accessdomain-adminserver 1/1 Running 0 3h50m
    accessdomain-create-oam-infra-domain-job-7c9r9 0/1 Completed 0 3h57m
    accessdomain-oam-policy-mgr1 1/1 Running 0 3h42m
-   accessdomain-oam-policy-mgr2 1/1 Running 0 3h42m
    accessdomain-oam-server1 1/1 Running 0 3h42m
-   accessdomain-oam-server2 1/1 Running 0 3h42m
    helper 1/1 Running 0 4h12m
    nginx-ingress-ingress-nginx-controller-76fb7678f-k8rhq 1/1 Running 0 76m
    ```
@@ -459,9 +448,7 @@ To stop all the OAM Managed Servers and the Administration Server in one operati
    accessdomain-adminserver 1/1 Terminating 0 3h52m
    accessdomain-create-oam-infra-domain-job-7c9r9 0/1 Completed 0 3h59m
    accessdomain-oam-policy-mgr1 1/1 Terminating 0 3h44m
-   accessdomain-oam-policy-mgr2 1/1 Terminating 0 3h44m
    accessdomain-oam-server1 1/1 Terminating 0 3h44m
-   accessdomain-oam-server2 1/1 Terminating 0 3h44m
    helper 1/1 Running 0 4h14m
    nginx-ingress-ingress-nginx-controller-76fb7678f-k8rhq 1/1 Running 0 78m
    ```
@@ -520,9 +507,7 @@ To stop all the OAM Managed Servers and the Administration Server in one operati
    accessdomain-adminserver 1/1 Running 0 10m
    accessdomain-create-oam-infra-domain-job-7c9r9 0/1 Completed 0 4h12m
    accessdomain-oam-policy-mgr1 1/1 Running 0 7m35s
-   accessdomain-oam-policy-mgr2 1/1 Running 0 7m35s
    accessdomain-oam-server1 1/1 Running 0 7m35s
-   accessdomain-oam-server2 1/1 Running 0 7m35s
    helper 1/1 Running 0 4h28m
    nginx-ingress-ingress-nginx-controller-76fb7678f-k8rhq 1/1 Running 0 92m
    ```
diff --git a/docs-source/content/idm-products/oam/manage-oam-domains/hpa.md b/docs-source/content/idm-products/oam/manage-oam-domains/hpa.md
new file mode 100644
index 000000000..33694a988
--- /dev/null
+++ b/docs-source/content/idm-products/oam/manage-oam-domains/hpa.md
@@ -0,0 +1,456 @@
+---
+title: "e. Kubernetes Horizontal Pod Autoscaler"
+description: "Describes the steps for implementing the Horizontal Pod Autoscaler."
+---
+
+
+1. [Prerequisite configuration](#prerequisite-configuration)
+1. [Deploy the Kubernetes Metrics Server](#deploy-the-kubernetes-metrics-server)
+    1. [Troubleshooting](#troubleshooting)
+1. [Deploy HPA](#deploy-hpa)
+1. [Testing HPA](#testing-hpa)
+1. [Delete the HPA](#delete-the-hpa)
+1. [Other considerations](#other-considerations)
+
+
+Kubernetes Horizontal Pod Autoscaler (HPA) is supported from WebLogic Kubernetes Operator 4.0.X and later.
+
+HPA allows automatic scaling (up and down) of the OAM Managed Servers. If load increases, then extra OAM Managed Servers will be started as required, up to the value `configuredManagedServerCount` defined when the domain was created (see [Prepare the create domain script](../../create-oam-domains#prepare-the-create-domain-script)). Similarly, if load decreases, OAM Managed Servers will be automatically shut down.
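+
+As a quick sanity check once the domain is running, the scaling bounds recorded on the cluster resource can be inspected. A minimal sketch, assuming the cluster name and namespace used throughout this document:
+
+```
+$ kubectl describe cluster accessdomain-oam-cluster -n oamns | grep -i replicas
+```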
+
+For more information on HPA, see [Horizontal Pod Autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
+
+The instructions below show you how to configure and run an HPA to scale an OAM cluster (`accessdomain-oam-cluster`) resource, based on CPU utilization or memory resource metrics. If required, you can also perform the following for the `accessdomain-policy-cluster`.
+
+
+
+**Note**: If you enable HPA and then decide you want to start/stop/scale OAM Managed Servers manually as per [Domain Life Cycle](../domain-lifecycle), it is recommended to delete HPA beforehand as per [Delete the HPA](#delete-the-hpa).
+
+### Prerequisite configuration
+
+In order to use HPA, the OAM domain must have been created with the required `resources` parameter as per [Set the OAM server memory parameters](../../create-oam-domains#set-the-oam-server-memory-parameters). For example:
+
+   ```
+   serverPod:
+     env:
+     - name: USER_MEM_ARGS
+       value: "-XX:+UseContainerSupport -Djava.security.egd=file:/dev/./urandom -Xms8192m -Xmx8192m"
+     resources:
+       limits:
+         cpu: "2"
+         memory: "8Gi"
+       requests:
+         cpu: "1000m"
+         memory: "4Gi"
+   ```
+
+If you created the OAM domain without setting these parameters, then you can update the domain using the following steps:
+
+1. Run the following command to edit the cluster:
+
+   ```
+   $ kubectl edit cluster accessdomain-oam-cluster -n oamns
+   ```
+
+   **Note**: This opens an edit session for the `oam-cluster` where parameters can be changed using standard vi commands.
+
+1. In the edit session, search for `spec:`, and then look for the replicas parameter under `clusterName: oam_cluster`. Change the entry so it looks as follows:
+
+   ```
+   spec:
+     clusterName: oam_cluster
+     replicas: 1
+     serverPod:
+       env:
+       - name: USER_MEM_ARGS
+         value: -XX:+UseContainerSupport -Djava.security.egd=file:/dev/./urandom -Xms8192m -Xmx8192m
+       resources:
+         limits:
+           cpu: "2"
+           memory: 8Gi
+         requests:
+           cpu: 1000m
+           memory: 4Gi
+     serverService:
+       precreateService: true
+     ...
+   ```
+
+1. Save the file and exit (:wq!)
+
+   The output will look similar to the following:
+
+   ```
+   cluster.weblogic.oracle/accessdomain-oam-cluster edited
+   ```
+
+   The OAM Managed Server pods will then automatically be restarted.
+
+### Deploy the Kubernetes Metrics Server
+
+Before deploying HPA, you must deploy the Kubernetes Metrics Server.
+
+1. Check to see if the Kubernetes Metrics Server is already deployed:
+
+   ```
+   $ kubectl get pods -n kube-system | grep metric
+   ```
+   If a row is returned as follows, then the Kubernetes Metrics Server is deployed and you can move to [Deploy HPA](#deploy-hpa).
+
+   ```
+   metrics-server-d9694457-mf69d 1/1 Running 0 5m13s
+   ```
+
+1. If no rows are returned by the previous command, then the Kubernetes Metrics Server needs to be deployed. Run the following commands to get the `components.yaml`:
+
+   ```
+   $ mkdir $WORKDIR/kubernetes/hpa
+   $ cd $WORKDIR/kubernetes/hpa
+   $ wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
+   ```
+
+1.
Deploy the Kubernetes Metrics Server by running the following command:
+
+   ```
+   $ kubectl apply -f components.yaml
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   serviceaccount/metrics-server created
+   clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
+   clusterrole.rbac.authorization.k8s.io/system:metrics-server created
+   rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
+   clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
+   clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
+   service/metrics-server created
+   deployment.apps/metrics-server created
+   apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
+   ```
+
+1. Run the following command to check the Kubernetes Metrics Server is running:
+
+   ```
+   $ kubectl get pods -n kube-system | grep metric
+   ```
+
+   Make sure the pod has a `READY` status of `1/1`:
+
+   ```
+   metrics-server-d9694457-mf69d 1/1 Running 0 39s
+   ```
+
+
+#### Troubleshooting
+
+If the Kubernetes Metrics Server does not reach the `READY 1/1` state, run the following commands:
+
+```
+$ kubectl describe pod -n kube-system
+$ kubectl logs -n kube-system
+```
+
+If you see errors such as:
+
+```
+Readiness probe failed: HTTP probe failed with statuscode: 500
+```
+
+and:
+
+```
+E0907 13:07:50.937308 1 scraper.go:140] "Failed to scrape node" err="Get \"https://100.105.18.113:10250/metrics/resource\": x509: cannot validate certificate for 100.105.18.113 because it doesn't contain any IP SANs" node="worker-node1"
+```
+
+then you may need to install a valid cluster certificate for your Kubernetes cluster.
+
+For testing purposes, you can resolve this issue as follows:
+
+1. Delete the Kubernetes Metrics Server by running the following command:
+
+   ```
+   $ kubectl delete -f $WORKDIR/kubernetes/hpa/components.yaml
+   ```
+
+1. Edit the `$WORKDIR/kubernetes/hpa/components.yaml` and locate the `args:` section. Add `--kubelet-insecure-tls` to the arguments. For example:
+
+   ```
+   spec:
+     containers:
+     - args:
+       - --cert-dir=/tmp
+       - --secure-port=4443
+       - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
+       - --kubelet-use-node-status-port
+       - --kubelet-insecure-tls
+       - --metric-resolution=15s
+       image: registry.k8s.io/metrics-server/metrics-server:v0.6.4
+    ...
+   ```
+
+1. Deploy the Kubernetes Metrics Server using the command:
+
+   ```
+   $ kubectl apply -f components.yaml
+   ```
+
+   Run the following and make sure the READY status shows `1/1`:
+
+   ```
+   $ kubectl get pods -n kube-system | grep metric
+   ```
+
+   The output should look similar to the following:
+
+   ```
+   metrics-server-d9694457-mf69d 1/1 Running 0 40s
+   ```
+
+
+### Deploy HPA
+
+The steps below show how to configure and run an HPA to scale the `accessdomain-oam-cluster`, based on the CPU or memory utilization resource metrics.
+
+The default OAM deployment creates the cluster `accessdomain-oam-cluster` which starts one OAM Managed Server (`oam_server1`). The deployment also creates, but doesn't start, four extra OAM Managed Servers (`oam-server2` to `oam-server5`).
+
+In the following example an HPA resource is created, targeted at the cluster resource `accessdomain-oam-cluster`. This resource will autoscale OAM Managed Servers from a minimum of 1 cluster member up to 5 cluster members. Scaling up will occur when the average CPU is consistently over 70%. Scaling down will occur when the average CPU is consistently below 70%.
+
+
+1.
Navigate to the `$WORKDIR/kubernetes/hpa` directory and create an `autoscalehpa.yaml` file that contains the following.
+
+   ```
+   #
+   apiVersion: autoscaling/v2
+   kind: HorizontalPodAutoscaler
+   metadata:
+     name: accessdomain-oam-cluster-hpa
+     namespace: oamns
+   spec:
+     scaleTargetRef:
+       apiVersion: weblogic.oracle/v1
+       kind: Cluster
+       name: accessdomain-oam-cluster
+     behavior:
+       scaleDown:
+         stabilizationWindowSeconds: 60
+       scaleUp:
+         stabilizationWindowSeconds: 60
+     minReplicas: 1
+     maxReplicas: 5
+     metrics:
+     - type: Resource
+       resource:
+         name: cpu
+         target:
+           type: Utilization
+           averageUtilization: 70
+   ```
+
+   **Note**: `minReplicas` and `maxReplicas` should match your current domain settings.
+
+   **Note**: For setting HPA based on memory metrics, update the metrics block with the following content. Please note, we recommend using only CPU or memory, not both.
+
+   ```
+   metrics:
+   - type: Resource
+     resource:
+       name: memory
+       target:
+         type: Utilization
+         averageUtilization: 70
+   ```
+
+
+1. Run the following command to create the autoscaler:
+
+   ```
+   $ kubectl apply -f autoscalehpa.yaml
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   horizontalpodautoscaler.autoscaling/accessdomain-oam-cluster-hpa created
+   ```
+
+1. Verify the status of the autoscaler by running the following:
+
+   ```
+   $ kubectl get hpa -n oamns
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
+   accessdomain-oam-cluster-hpa Cluster/accessdomain-oam-cluster 5%/70% 1 5 1 21s
+   ```
+
+   In the example above, this shows that CPU is currently running at 5% for the `accessdomain-oam-cluster-hpa`.
+
+
+### Testing HPA
+
+1. Check the current status of the OAM Managed Servers:
+
+   ```
+   $ kubectl get pods -n oamns
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   NAME READY STATUS RESTARTS AGE
+   accessdomain-adminserver 0/1 Running 0 141m
+   accessdomain-create-oam-infra-domain-job-6br2j 0/1 Completed 0 5h19m
+   accessdomain-oam-policy-mgr1 0/1 Running 0 138m
+   accessdomain-oam-server1 1/1 Running 0 138m
+   helper 1/1 Running 0 21h
+   nginx-ingress-ingress-nginx-controller-5f9bdf4c9-f5trt 1/1 Running 0 4h33m
+   ```
+
+   In the above, only `accessdomain-oam-server1` is running.
+
+
+
+1. To test HPA can scale up the WebLogic cluster `accessdomain-oam-cluster`, run the following commands:
+
+   ```
+   $ kubectl exec --stdin --tty accessdomain-oam-server1 -n oamns -- /bin/bash
+   ```
+
+   This will take you inside a bash shell inside the `oam_server1` pod:
+
+   ```
+   [oracle@accessdomain-oam-server1 oracle]$
+   ```
+
+   Inside the bash shell, run the following command to increase the load on the CPU:
+
+   ```
+   [oracle@accessdomain-oam-server1 oracle]$ dd if=/dev/zero of=/dev/null
+   ```
+
+   This command will continue to run in the foreground.
+
+
+
+1. In a command window outside the bash shell, run the following command to view the current CPU usage:
+
+   ```
+   $ kubectl get hpa -n oamns
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
+   accessdomain-oam-cluster-hpa Cluster/accessdomain-oam-cluster 470%/70% 1 5 1 21s
+   ```
+
+   In the above example, the CPU has increased to 470%. As this is above the 70% limit, the autoscaler increases the replicas on the Cluster resource and the operator responds by starting additional cluster members.
+
+1.
Run the following to see if any more OAM Managed Servers are started:
+
+   ```
+   $ kubectl get pods -n oamns
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   NAME READY STATUS RESTARTS AGE
+   accessdomain-adminserver 0/1 Running 0 143m
+   accessdomain-create-oam-infra-domain-job-6br2j 0/1 Completed 0 5h21m
+   accessdomain-oam-policy-mgr1 0/1 Running 0 140m
+   accessdomain-oam-server1 1/1 Running 0 140m
+   accessdomain-oam-server2 1/1 Running 0 3m20s
+   accessdomain-oam-server3 1/1 Running 0 3m20s
+   accessdomain-oam-server4 1/1 Running 0 3m19s
+   accessdomain-oam-server5 1/1 Running 0 3m5s
+   helper 1/1 Running 0 21h
+   ```
+
+   In the example above, four more OAM Managed Servers have been started (`oam-server2` - `oam-server5`).
+
+   **Note**: It may take some time for the servers to appear and start. Once the servers are at `READY` status of `1/1`, the servers are started.
+
+
+1. To stop the load on the CPU, in the bash shell, issue a Control C, and then exit the bash shell:
+
+   ```
+   [oracle@accessdomain-oam-server1 oracle]$ dd if=/dev/zero of=/dev/null
+   ^C
+   [oracle@accessdomain-oam-server1 oracle]$ exit
+   ```
+
+1. Run the following command to view the current CPU usage:
+
+   ```
+   $ kubectl get hpa -n oamns
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
+   accessdomain-oam-cluster-hpa Cluster/accessdomain-oam-cluster 19%/70% 1 5 5 19m
+   ```
+
+   In the above example, the CPU has dropped to 19%. As this is below the 70% threshold, you should see the autoscaler scale down the servers:
+
+   ```
+   $ kubectl get pods -n oamns
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   NAME READY STATUS RESTARTS AGE
+   accessdomain-adminserver 1/1 Running 0 152m
+   accessdomain-create-oam-infra-domain-job-6br2j 0/1 Completed 0 5h30m
+   accessdomain-oam-policy-mgr1 1/1 Running 0 149m
+   accessdomain-oam-server1 1/1 Running 0 149m
+   accessdomain-oam-server2 1/1 Running 0 14m
+   accessdomain-oam-server3 0/1 Terminating 0 14m
+   helper 1/1 Running 0 21h
+   nginx-ingress-ingress-nginx-controller-5f9bdf4c9-f5trt 1/1 Running 0 4h45m
+   ```
+
+   Eventually, all the servers except `oam-server1` will disappear:
+
+   ```
+   NAME READY STATUS RESTARTS AGE
+   accessdomain-adminserver 1/1 Running 0 154m
+   accessdomain-create-oam-infra-domain-job-6br2j 0/1 Completed 0 5h32m
+   accessdomain-oam-policy-mgr1 1/1 Running 0 151m
+   accessdomain-oam-server1 1/1 Running 0 151m
+   helper 1/1 Running 0 21h
+   nginx-ingress-ingress-nginx-controller-5f9bdf4c9-f5trt 1/1 Running 0 4h47m
+   ```
+
+
+### Delete the HPA
+
+1. If you need to delete the HPA, you can do so by running the following command:
+
+   ```
+   $ cd $WORKDIR/kubernetes/hpa
+   $ kubectl delete -f autoscalehpa.yaml
+   ```
+
+### Other considerations
+
++ If HPA is deployed and you need to upgrade the OAM image, then you must delete the HPA before upgrading. Once the upgrade is successful, you can deploy HPA again.
++ If you choose to start/stop an OAM Managed Server manually as per [Domain Life Cycle](../domain-lifecycle), then it is recommended to delete the HPA before doing so, for example using the commands shown below.
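+
+A minimal sketch of that sequence, assuming the HPA name created earlier in this section and the cluster and namespace names used throughout this document:
+
+```
+$ kubectl delete hpa accessdomain-oam-cluster-hpa -n oamns
+$ kubectl edit cluster accessdomain-oam-cluster -n oamns
+```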
+ + + + + + + + + + + diff --git a/docs-source/content/idm-products/oam/manage-oam-domains/logging-and-visualization.md b/docs-source/content/idm-products/oam/manage-oam-domains/logging-and-visualization.md index 658ce6da0..f1a02f975 100644 --- a/docs-source/content/idm-products/oam/manage-oam-domains/logging-and-visualization.md +++ b/docs-source/content/idm-products/oam/manage-oam-domains/logging-and-visualization.md @@ -453,7 +453,7 @@ You will also need the BASE64 version of the Certificate Authority (CA) certific - containerPort: 5044 name: logstash volumeMounts: - - mountPath: /u01/oracle/user_projects + - mountPath: /u01/oracle/user_projects/domains name: weblogic-domain-storage-volume - name: shared-logs mountPath: /shared-logs @@ -490,7 +490,8 @@ You will also need the BASE64 version of the Certificate Authority (CA) certific persistentVolumeClaim: claimName: accessdomain-domain-pvc - name: shared-logs - emptyDir: {} ``` + emptyDir: {} + ``` 1. Deploy the `logstash` pod by executing the following command: diff --git a/docs-source/content/idm-products/oam/manage-oam-domains/monitoring-oam-domains.md b/docs-source/content/idm-products/oam/manage-oam-domains/monitoring-oam-domains.md index 5de6e40a7..c8d841d04 100644 --- a/docs-source/content/idm-products/oam/manage-oam-domains/monitoring-oam-domains.md +++ b/docs-source/content/idm-products/oam/manage-oam-domains/monitoring-oam-domains.md @@ -239,7 +239,6 @@ For usage details execute `./setup-monitoring.sh -h`. ============================================================== ``` - **Note**: If you see the warning `W0320 9968 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+` you can ignore this message. #### Prometheus service discovery diff --git a/docs-source/content/idm-products/oam/patch-and-upgrade/upgrade-an-operator-release.md b/docs-source/content/idm-products/oam/patch-and-upgrade/upgrade-an-operator-release.md index 951eaa15d..812724dba 100644 --- a/docs-source/content/idm-products/oam/patch-and-upgrade/upgrade-an-operator-release.md +++ b/docs-source/content/idm-products/oam/patch-and-upgrade/upgrade-an-operator-release.md @@ -3,7 +3,7 @@ title: "a. Upgrade an operator release" description: "Instructions on how to update the WebLogic Kubernetes Operator version." --- -These instructions apply to upgrading operators from 3.X.X to 4.X, or from within the 4.x release family as additional versions are released. +These instructions apply to upgrading operators from 3.X.X to 4.X, or from within the 4.X release family as additional versions are released. 1. On the master node, download the new WebLogic Kubernetes Operator source code from the operator github project: @@ -71,7 +71,6 @@ These instructions apply to upgrading operators from 3.X.X to 4.X, or from withi pod/weblogic-operator-webhook-7996b8b58b-frtwp 1/1 Running 0 42s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - service/internal-weblogic-operator-svc ClusterIP 10.107.3.1 8082/TCP,8083/TCP 6d service/weblogic-operator-webhook-svc ClusterIP 10.106.51.57 8083/TCP,8084/TCP 42s NAME READY UP-TO-DATE AVAILABLE AGE diff --git a/docs-source/content/idm-products/oam/post-install-config/_index.md b/docs-source/content/idm-products/oam/post-install-config/_index.md index 766f9c228..5e708f2a3 100644 --- a/docs-source/content/idm-products/oam/post-install-config/_index.md +++ b/docs-source/content/idm-products/oam/post-install-config/_index.md @@ -76,14 +76,12 @@ Follow these post install configuration steps. 
accessdomain-adminserver 1/1 Terminating 0 27m accessdomain-create-oam-infra-domain-job-7c9r9 0/1 Completed 0 4h29m accessdomain-oam-policy-mgr1 1/1 Terminating 0 24m - accessdomain-oam-policy-mgr2 1/1 Terminating 0 24m accessdomain-oam-server1 1/1 Terminating 0 24m - accessdomain-oam-server2 1/1 Terminating 0 24m helper 1/1 Running 0 4h44m nginx-ingress-ingress-nginx-controller-76fb7678f-k8rhq 1/1 Running 0 108m ``` - The Administration Server pods and Managed Server pods will move to a STATUS of `Terminating`. After a few minutes, run the command again and the pods should have disappeared: + The Administration Server pod and Managed Server pods will move to a STATUS of `Terminating`. After a few minutes, run the command again and the pods should have disappeared: ``` NAME READY STATUS RESTARTS AGE @@ -133,9 +131,7 @@ Follow these post install configuration steps. accessdomain-adminserver 1/1 Running 0 5m38s accessdomain-create-oam-infra-domain-job-7c9r9 0/1 Completed 0 4h37m accessdomain-oam-policy-mgr1 1/1 Running 0 2m51s - accessdomain-oam-policy-mgr2 1/1 Running 0 2m51s accessdomain-oam-server1 1/1 Running 0 2m50s - accessdomain-oam-server2 1/1 Running 0 2m50s helper 1/1 Running 0 4h52m nginx-ingress-ingress-nginx-controller-76fb7678f-k8rhq 1/1 Running 0 116m ``` @@ -242,14 +238,12 @@ For the above changes to take effect, you must restart the OAM domain: accessdomain-adminserver 1/1 Terminating 0 27m accessdomain-create-oam-infra-domain-job-7c9r9 0/1 Completed 0 4h29m accessdomain-oam-policy-mgr1 1/1 Terminating 0 24m - accessdomain-oam-policy-mgr2 1/1 Terminating 0 24m accessdomain-oam-server1 1/1 Terminating 0 24m - accessdomain-oam-server2 1/1 Terminating 0 24m helper 1/1 Running 0 4h44m nginx-ingress-ingress-nginx-controller-76fb7678f-k8rhq 1/1 Running 0 108m ``` - The Administration Server pods and Managed Server pods will move to a STATUS of `Terminating`. After a few minutes, run the command again and the pods should have disappeared: + The Administration Server pod and Managed Server pods will move to a STATUS of `Terminating`. After a few minutes, run the command again and the pods should have disappeared: ``` NAME READY STATUS RESTARTS AGE @@ -299,9 +293,7 @@ For the above changes to take effect, you must restart the OAM domain: accessdomain-adminserver 1/1 Running 0 5m38s accessdomain-create-oam-infra-domain-job-7c9r9 0/1 Completed 0 4h37m accessdomain-oam-policy-mgr1 1/1 Running 0 2m51s - accessdomain-oam-policy-mgr2 1/1 Running 0 2m51s accessdomain-oam-server1 1/1 Running 0 2m50s - accessdomain-oam-server2 1/1 Running 0 2m50s helper 1/1 Running 0 4h52m nginx-ingress-ingress-nginx-controller-76fb7678f-k8rhq 1/1 Running 0 116m ``` \ No newline at end of file diff --git a/docs-source/content/idm-products/oam/prepare-your-environment/_index.md b/docs-source/content/idm-products/oam/prepare-your-environment/_index.md index b5ca9c4a3..c552aa14f 100644 --- a/docs-source/content/idm-products/oam/prepare-your-environment/_index.md +++ b/docs-source/content/idm-products/oam/prepare-your-environment/_index.md @@ -38,9 +38,9 @@ Check that all the nodes in the Kubernetes cluster are running. 
``` NAME STATUS ROLES AGE VERSION - node/worker-node1 Ready 17h v1.24.5+1.el7 - node/worker-node2 Ready 17h v1.24.5+1.el7 - node/master-node Ready control-plane,master 23h v1.24.5+1.el7 + node/worker-node1 Ready 17h v1.26.6+1.el8 + node/worker-node2 Ready 17h v1.26.6+1.el8 + node/master-node Ready control-plane,master 23h v1.26.6+1.el8 NAME READY STATUS RESTARTS AGE pod/coredns-66bff467f8-fnhbq 1/1 Running 0 23h @@ -54,7 +54,7 @@ Check that all the nodes in the Kubernetes cluster are running. pod/kube-proxy-2kxv2 1/1 Running 0 17h pod/kube-proxy-82vvj 1/1 Running 0 17h pod/kube-proxy-nrgw9 1/1 Running 0 23h - pod/kube-scheduler-master 1/1 Running 0 21 + pod/kube-scheduler-master 1/1 Running 0 21h ``` ### Obtain the OAM container image @@ -67,7 +67,7 @@ The OAM Kubernetes deployment requires access to an OAM container image. The ima #### Prebuilt OAM container image -The prebuilt OAM April 2023 container image can be downloaded from [Oracle Container Registry](https://container-registry.oracle.com). This image is prebuilt by Oracle and includes Oracle Access Management 12.2.1.4.0, the April Patch Set Update (PSU) and other fixes released with the Critical Patch Update (CPU) program. +The prebuilt OAM October 2023 container image can be downloaded from [Oracle Container Registry](https://container-registry.oracle.com). This image is prebuilt by Oracle and includes Oracle Access Management 12.2.1.4.0, the October Patch Set Update (PSU) and other fixes released with the Critical Patch Update (CPU) program. **Note**: Before using this image you must login to [Oracle Container Registry](https://container-registry.oracle.com), navigate to `Middleware` > `oam_cpu` and accept the license agreement. @@ -145,17 +145,18 @@ OAM domain deployment on Kubernetes leverages the WebLogic Kubernetes Operator i No resources found ``` - If you see the following: + If you see any of the following: ``` - NAME AGE - domains.weblogic.oracle 5d + NAME AGE + clusters.weblogic.oracle 5d + domains.weblogic.oracle 5d ``` - then run the following command to delete the existing crd: + then run the following command to delete the existing crd's: ```bash + $ kubectl delete crd clusters.weblogic.oracle $ kubectl delete crd domains.weblogic.oracle - customresourcedefinition.apiextensions.k8s.io "domains.weblogic.oracle" deleted ``` @@ -203,7 +204,7 @@ OAM domain deployment on Kubernetes leverages the WebLogic Kubernetes Operator i $ cd $WORKDIR $ helm install weblogic-kubernetes-operator kubernetes/charts/weblogic-operator \ --namespace \ - --set image=ghcr.io/oracle/weblogic-kubernetes-operator:4.0.4 \ + --set image=ghcr.io/oracle/weblogic-kubernetes-operator:4.1.2 \ --set serviceAccount= \ --set “enableClusterRoleBinding=true” \ --set "domainNamespaceSelectionStrategy=LabelSelector" \ @@ -217,7 +218,7 @@ OAM domain deployment on Kubernetes leverages the WebLogic Kubernetes Operator i $ cd $WORKDIR $ helm install weblogic-kubernetes-operator kubernetes/charts/weblogic-operator \ --namespace opns \ - --set image=ghcr.io/oracle/weblogic-kubernetes-operator:4.0.4 \ + --set image=ghcr.io/oracle/weblogic-kubernetes-operator:4.1.2 \ --set serviceAccount=op-sa \ --set "enableClusterRoleBinding=true" \ --set "domainNamespaceSelectionStrategy=LabelSelector" \ @@ -257,7 +258,6 @@ OAM domain deployment on Kubernetes leverages the WebLogic Kubernetes Operator i pod/weblogic-operator-webhook-7996b8b58b-9sfhd 1/1 Running 0 40s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - service/internal-weblogic-operator-svc ClusterIP 10.101.1.198 
8082/TCP,8083/TCP 40s service/weblogic-operator-webhook-svc ClusterIP 10.100.91.237 8083/TCP,8084/TCP 47s NAME READY UP-TO-DATE AVAILABLE AGE @@ -409,7 +409,7 @@ Before following the steps in this section, make sure that the database and list For example: ```bash - $ kubectl run --image=container-registry.oracle.com/middleware/oam_cpu:12.2.1.4-jdk8-ol7- --image-pull-policy="IfNotPresent" --overrides='{"apiVersion": "v1","spec":{"imagePullSecrets": [{"name": "orclcred"}]}}' helper -n oamns -- sleep infinity + $ kubectl run --image=container-registry.oracle.com/middleware/oam_cpu:12.2.1.4-jdk8-ol7- --image-pull-policy="IfNotPresent" --overrides='{"apiVersion": "v1","spec":{"imagePullSecrets": [{"name": "orclcred"}]}}' helper -n oamns -- sleep infinity ``` If you are not using a container registry and have loaded the image on each of the master and worker nodes, run the following command: @@ -421,7 +421,7 @@ Before following the steps in this section, make sure that the database and list For example: ```bash - $ kubectl run helper --image oracle/oam:12.2.1.4-jdk8-ol7- -n oamns -- sleep infinity + $ kubectl run helper --image oracle/oam:12.2.1.4-jdk8-ol7- -n oamns -- sleep infinity ``` The output will look similar to the following: diff --git a/docs-source/content/idm-products/oam/prerequisites/_index.md b/docs-source/content/idm-products/oam/prerequisites/_index.md index ebcc975c5..7df5f2465 100644 --- a/docs-source/content/idm-products/oam/prerequisites/_index.md +++ b/docs-source/content/idm-products/oam/prerequisites/_index.md @@ -7,7 +7,7 @@ description: "System requirements and limitations for deploying and running an O ### Introduction -This document provides information about the system requirements and limitations for deploying and running OAM domains with the WebLogic Kubernetes Operator 4.0.4. +This document provides information about the system requirements and limitations for deploying and running OAM domains with the WebLogic Kubernetes Operator 4.1.2. @@ -26,7 +26,7 @@ This document provides information about the system requirements and limitations * A running Oracle Database 12.2.0.1 or later. The database must be a supported version for OAM as outlined in [Oracle Fusion Middleware 12c certifications](https://www.oracle.com/technetwork/middleware/fmw-122140-certmatrix-5763476.xlsx). It must meet the requirements as outlined in [About Database Requirements for an Oracle Fusion Middleware Installation](http://www.oracle.com/pls/topic/lookup?ctx=fmw122140&id=GUID-4D3068C8-6686-490A-9C3C-E6D2A435F20A) and in [RCU Requirements for Oracle Databases](http://www.oracle.com/pls/topic/lookup?ctx=fmw122140&id=GUID-35B584F3-6F42-4CA5-9BBB-116E447DAB83). It is recommended that the database initialization parameters are set as per [Minimum Initialization Parameters](https://docs.oracle.com/en/middleware/fusion-middleware/12.2.1.4/ikedg/preparing-existing-database-enterprise-deployment.html#GUID-4597879E-0E9C-4727-8C9F-94DE3EE6BEFB). **Note**: This documentation does not tell you how to install a Kubernetes cluster, Helm, the container engine, or how to push container images to a container registry. -Please refer to your vendor specific documentation for this information. +Please refer to your vendor specific documentation for this information. Also see [Getting Started](../introduction#getting-started). @@ -35,7 +35,7 @@ Please refer to your vendor specific documentation for this information. 
Compared to running a WebLogic Server domain in Kubernetes using the operator, the following limitations currently exist for OAM domains:

* In this release, OAM domains are supported using the “domain on a persistent volume” [model](https://oracle.github.io/weblogic-kubernetes-operator/userguide/managing-domains/choosing-a-model/) only, where the domain home is located in a persistent volume (PV). The "domain in image" model is not supported.
-* Only configured clusters are supported. Dynamic clusters are not supported for OAM domains. Note that you can still use all of the scaling features, you just need to define the maximum size of your cluster at domain creation time.
-* The [WebLogic Monitoring Exporter](https://github.com/oracle/weblogic-monitoring-exporter) currently supports the WebLogic MBean trees only. Support for JRF MBeans has not been added yet.
+* Only configured clusters are supported. Dynamic clusters are not supported for OAM domains. Note that you can still use all of the scaling features, but you need to define the maximum size of your cluster at domain creation time, using the parameter `configuredManagedServerCount`. For more details on this parameter, see [Prepare the create domain script](../create-oam-domains/#prepare-the-create-domain-script). It is recommended to pre-configure your cluster so it's sized a little larger than the maximum size you plan to expand it to. You must rigorously test at this maximum size to make sure that your system can scale as expected.
+* The [WebLogic Monitoring Exporter](https://github.com/oracle/weblogic-monitoring-exporter) currently supports the WebLogic MBean trees only. Support for JRF MBeans has not been added yet.
* We do not currently support running OAM in non-Linux containers.

diff --git a/docs-source/content/idm-products/oam/release-notes/_index.md b/docs-source/content/idm-products/oam/release-notes/_index.md
index b3eedade3..834f8b3a0 100644
--- a/docs-source/content/idm-products/oam/release-notes/_index.md
+++ b/docs-source/content/idm-products/oam/release-notes/_index.md
@@ -10,6 +10,21 @@ Review the latest changes and known issues for Oracle Access Management on Kuber

| Date | Version | Change |
| --- | --- | --- |
+| October, 2023 | 23.4.1 | Supports Oracle Access Management 12.2.1.4 domain deployment using the October 2023 container image which contains the October Patch Set Update (PSU) and other fixes released with the Critical Patch Update (CPU) program.|
+| | | This release contains the following changes:|
+| | | + Support for WebLogic Kubernetes Operator 4.1.2.|
+| | | + Ability to set resource requests and limits for CPU and memory on a cluster resource. See [Set the OAM server memory parameters](../create-oam-domains/#set-the-oam-server-memory-parameters).|
+| | | + Support for the Kubernetes Horizontal Pod Autoscaler (HPA). See [Kubernetes Horizontal Pod Autoscaler](../manage-oam-domains/hpa).|
+| | | + The default domain now only starts one OAM Managed Server (oam_server1) and one Policy Managed Server (policy_mgr1).|
+| | | If upgrading to October 23 (23.4.1) from October 22 (22.4.1) or later, you must upgrade the following in order:|
+| | | 1. WebLogic Kubernetes Operator to 4.1.2|
+| | | 2. Patch the OAM container image to October 23|
+| | | If upgrading to October 23 (23.4.1) from a release prior to October 22 (22.4.1), you must upgrade the following in order:|
+| | | 1. WebLogic Kubernetes Operator to 4.1.2|
+| | | 2. Patch the OAM container image to October 23|
+| | | 3. Upgrade the Ingress|
+| | | 4. 
Upgrade Elasticsearch and Kibana| +| | | See [Patch and Upgrade](../patch-and-upgrade) for these instructions.| | July, 2023 | 23.3.1 | Supports Oracle Access Management 12.2.1.4 domain deployment using the July 2023 container image which contains the July Patch Set Update (PSU) and other fixes released with the Critical Patch Update (CPU) program.| | | | If upgrading to July 23 (23.3.1) from April 23 (23.2.1), upgrade as follows: | | | 1. Patch the OAM container image to July 23| diff --git a/docs-source/content/idm-products/oig/configure-ingress/ingress-nginx-setup-for-oig-domain-setup-on-K8S-ssl.md b/docs-source/content/idm-products/oig/configure-ingress/ingress-nginx-setup-for-oig-domain-setup-on-K8S-ssl.md index e9b25bf73..669f83de6 100644 --- a/docs-source/content/idm-products/oig/configure-ingress/ingress-nginx-setup-for-oig-domain-setup-on-K8S-ssl.md +++ b/docs-source/content/idm-products/oig/configure-ingress/ingress-nginx-setup-for-oig-domain-setup-on-K8S-ssl.md @@ -510,4 +510,4 @@ If you are using a Managed Service for your Kubernetes cluster, for example Orac #### Verify that you can access the domain URL -After setting up the NGINX ingress, verify that the domain applications are accessible through the NGINX ingress port (for example 32033) as per [Validate Domain URLs ](../../validate-domain-urls) \ No newline at end of file +After setting up the NGINX ingress, verify that the domain applications are accessible through the NGINX ingress port (for example 32033) as per [Validate Domain URLs ](../validate-domain-urls) \ No newline at end of file diff --git a/docs-source/content/idm-products/oig/configure-ingress/ingress-nginx-setup-for-oig-domain-setup-on-K8S.md b/docs-source/content/idm-products/oig/configure-ingress/ingress-nginx-setup-for-oig-domain-setup-on-K8S.md index dc3ba26c1..5e43cb56a 100644 --- a/docs-source/content/idm-products/oig/configure-ingress/ingress-nginx-setup-for-oig-domain-setup-on-K8S.md +++ b/docs-source/content/idm-products/oig/configure-ingress/ingress-nginx-setup-for-oig-domain-setup-on-K8S.md @@ -414,5 +414,5 @@ If you are using a Managed Service for your Kubernetes cluster,for example Oracl ### Verify that you can access the domain URL -After setting up the NGINX ingress, verify that the domain applications are accessible through the NGINX ingress port (for example 31530) as per [Validate Domain URLs ](../../validate-domain-urls) +After setting up the NGINX ingress, verify that the domain applications are accessible through the NGINX ingress port (for example 31530) as per [Validate Domain URLs ](../validate-domain-urls) diff --git a/docs-source/content/idm-products/oig/create-oig-domains/_index.md b/docs-source/content/idm-products/oig/create-oig-domains/_index.md index 04bb4bb13..5e4721b8a 100644 --- a/docs-source/content/idm-products/oig/create-oig-domains/_index.md +++ b/docs-source/content/idm-products/oig/create-oig-domains/_index.md @@ -73,7 +73,7 @@ The sample scripts for Oracle Identity Governance domain deployment are availabl ``` domainUID: governancedomain domainHome: /u01/oracle/user_projects/domains/governancedomain - image: container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7- + image: container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7- imagePullSecretName: orclcred weblogicCredentialsSecretName: oig-domain-credentials logHome: /u01/oracle/user_projects/domains/logs/governancedomain @@ -177,7 +177,7 @@ generated artifacts: export initialManagedServerReplicas="1" export managedServerNameBase="oim_server" 
export managedServerPort="14000"
-   export image="container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7-"
+   export image="container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7-"
   export imagePullPolicy="IfNotPresent"
   export imagePullSecretName="orclcred"
   export productionModeEnabled="true"

@@ -274,7 +274,14 @@ generated artifacts:
      serverPod:
        env:
        - name: USER_MEM_ARGS
-         value: "-Djava.security.egd=file:/dev/./urandom -Xms2408m -Xmx8192m"
+         value: "-XX:+UseContainerSupport -Djava.security.egd=file:/dev/./urandom -Xms8192m -Xmx8192m"
+       resources:
+         limits:
+           cpu: "2"
+           memory: "8Gi"
+         requests:
+           cpu: "1000m"
+           memory: "4Gi"
    ```

    The file should look as follows:

@@ -294,9 +301,24 @@ generated artifacts:
      serverPod:
        env:
        - name: USER_MEM_ARGS
-         value: "-Djava.security.egd=file:/dev/./urandom -Xms2408m -Xmx8192m"
+         value: "-XX:+UseContainerSupport -Djava.security.egd=file:/dev/./urandom -Xms8192m -Xmx8192m"
+       resources:
+         limits:
+           cpu: "2"
+           memory: "8Gi"
+         requests:
+           cpu: "1000m"
+           memory: "4Gi"
    ...
    ```
+
+   **Note**: The above CPU and memory values are for development environments only. For Enterprise Deployments, please review the performance recommendations and sizing requirements in [Enterprise Deployment Guide for Oracle Identity and Access Management in a Kubernetes Cluster](https://docs.oracle.com/en/middleware/fusion-middleware/12.2.1.4/ikedg/procuring-resources-oracle-cloud-infrastructure-deployment.html#GUID-2E3C8D01-43EB-4691-B1D6-25B1DC2475AE).
+
+   **Note**: Limits and requests for CPU resources are measured in CPU units. One CPU in Kubernetes is equivalent to 1 vCPU/Core for cloud providers, and 1 hyperthread on bare-metal Intel processors. An "`m`" suffix in a CPU attribute indicates ‘milli-CPU’, so 500m is 50% of a CPU. Memory can be expressed in various units, where one Mi is one IEC unit mega-byte (1024^2), and one Gi is one IEC unit giga-byte (1024^3). For more information, see [Resource Management for Pods and Containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/), [Assign Memory Resources to Containers and Pods](https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/), and [Assign CPU Resources to Containers and Pods](https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/).
+
+   **Note**: The parameters above are also utilized by the Kubernetes Horizontal Pod Autoscaler (HPA). For more details on HPA, see [Kubernetes Horizontal Pod Autoscaler](../manage-oig-domains/hpa).
+
+   **Note**: If required you can also set the same resources and limits for the `governancedomain-soa-cluster`.

#### Run the create domain scripts

@@ -402,12 +424,13 @@ generated artifacts:
    governancedomain-oim-server1                                1/1     Running     0          4m25s
    governancedomain-soa-server1                                1/1     Running     0          4m
    helper                                                      1/1     Running     0          3h38m
+   ```

   **Note**: It will take several minutes before the `governancedomain-oim-server1` pod has a `READY` status of `1/1`.
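One way to follow the startup is to watch the pods in the namespace until the `READY` column changes (a simple sketch; the `-w` flag streams updates until you interrupt it with Ctrl-C):

```bash
$ kubectl get pods -n oigns -w
```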
While the pod is starting you can check the startup status in the pod log, by running the following command: ```bash $ kubectl logs governancedomain-oim-server1 -n oigns - + ``` ### Verify the results @@ -581,7 +604,7 @@ The default domain created by the script has the following characteristics: Failure Retry Interval Seconds: 120 Failure Retry Limit Minutes: 1440 Http Access Log In Log Home: true - Image: container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7- + Image: container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7- Image Pull Policy: IfNotPresent Image Pull Secrets: Name: orclcred diff --git a/docs-source/content/idm-products/oig/introduction/_index.md b/docs-source/content/idm-products/oig/introduction/_index.md index 4b17b2cb2..9d593fd47 100644 --- a/docs-source/content/idm-products/oig/introduction/_index.md +++ b/docs-source/content/idm-products/oig/introduction/_index.md @@ -22,7 +22,9 @@ environment. You can: ### Current production release -The current production release for the Oracle Identity Governance domain deployment on Kubernetes is [23.3.1](https://github.com/oracle/fmw-kubernetes/releases). This release uses the WebLogic Kubernetes Operator version 4.0.4. +The current production release for the Oracle Identity Governance domain deployment on Kubernetes is [23.4.1](https://github.com/oracle/fmw-kubernetes/releases). This release uses the WebLogic Kubernetes Operator version 4.1.2. + +For 4.0.X WebLogic Kubernetes Operator refer to [Version 23.3.1](https://oracle.github.io/fmw-kubernetes/23.3.1/idm-products/oig/) For 3.4.X WebLogic Kubernetes Operator refer to [Version 23.1.1](https://oracle.github.io/fmw-kubernetes/23.1.1/idm-products/oig/) @@ -36,14 +38,21 @@ See [here](../prerequisites#limitations) for limitations in this release. ### Getting started -This documentation explains how to configure OIG on a Kubernetes cluster where no other Oracle Identity Management products will be deployed. For detailed information about this type of deployment, start at [Prerequisites](../prerequisites) and follow this documentation sequentially. +This documentation explains how to configure OIG on a Kubernetes cluster where no other Oracle Identity Management products will be deployed. For detailed information about this type of deployment, start at [Prerequisites](../prerequisites) and follow this documentation sequentially. Please note that this documentation does not explain how to configure a Kubernetes cluster given the product can be deployed on any compliant Kubernetes vendor. + +If you are deploying multiple Oracle Identity Management products on the same Kubernetes cluster, then you must follow the Enterprise Deployment Guide outlined in [Enterprise Deployments](../../enterprise-deployments). +Please note, you also have the option to follow the Enterprise Deployment Guide even if you are only installing OIG and no other Oracle Identity Management products. + +**Note**: If you need to understand how to configure a Kubernetes cluster ready for an Oracle Identity Governance deployment, you should follow the Enterprise Deployment Guide referenced in [Enterprise Deployments](../../enterprise-deployments). 
The [Enterprise Deployment Automation](../../enterprise-deployments/enterprise-deployment-automation) section also contains details on automation scripts that can: -If performing an Enterprise Deployment, refer to the [Enterprise Deployment Guide for Oracle Identity and Access Management in a Kubernetes Cluster](https://docs.oracle.com/en/middleware/fusion-middleware/12.2.1.4/ikedg/index.html) instead. + + Automate the creation of a Kubernetes cluster on Oracle Cloud Infrastructure (OCI), ready for the deployment of Oracle Identity Management products. + + Automate the deployment of Oracle Identity Management products on any compliant Kubernetes cluster. ### Documentation for earlier releases To view documentation for an earlier release, see: +* [Version 23.3.1](https://oracle.github.io/fmw-kubernetes/23.3.1/idm-products/oig/) * [Version 23.2.1](https://oracle.github.io/fmw-kubernetes/23.2.1/idm-products/oig/) * [Version 23.1.1](https://oracle.github.io/fmw-kubernetes/23.1.1/idm-products/oig/) * [Version 22.4.1](https://oracle.github.io/fmw-kubernetes/22.4.1/oig/) diff --git a/docs-source/content/idm-products/oig/manage-oig-domains/delete-domain-home.md b/docs-source/content/idm-products/oig/manage-oig-domains/delete-domain-home.md index 53e19780b..8d7351e4e 100644 --- a/docs-source/content/idm-products/oig/manage-oig-domains/delete-domain-home.md +++ b/docs-source/content/idm-products/oig/manage-oig-domains/delete-domain-home.md @@ -1,5 +1,5 @@ --- -title: "f. Delete the OIG domain home" +title: "g. Delete the OIG domain home" description: "Learn about the steps to cleanup the OIG domain home." --- diff --git a/docs-source/content/idm-products/oig/manage-oig-domains/domain-lifecycle.md b/docs-source/content/idm-products/oig/manage-oig-domains/domain-lifecycle.md index 2f3c34fac..d61023940 100644 --- a/docs-source/content/idm-products/oig/manage-oig-domains/domain-lifecycle.md +++ b/docs-source/content/idm-products/oig/manage-oig-domains/domain-lifecycle.md @@ -18,6 +18,8 @@ For more detailed information refer to [Domain Life Cycle](https://oracle.github {{% notice note %}} Do not use the WebLogic Server Administration Console or Oracle Enterprise Manager Console to start or stop servers. {{% /notice %}} + +**Note**: The instructions below are for starting, stopping, or scaling servers manually. If you wish to use autoscaling, see [Kubernetes Horizontal Pod Autoscaler](../hpa). Please note, if you have enabled autoscaling, it is recommended to delete the autoscaler before running the commands below. ### View existing OIG Servers @@ -76,9 +78,8 @@ The number of OIG Managed Servers running is dependent on the `replicas` paramet serverPod: env: - name: USER_MEM_ARGS - value: -Djava.security.egd=file:/dev/./urandom -Xms2408m -Xmx8192m - serverService: - precreateService: true + value: -Djava.security.egd=file:/dev/./urandom -Xms8192m -Xmx8192m + ... ``` 1. To start more OIG Managed Servers, increase the `replicas` value as desired. In the example below, one more Managed Server will be started by setting `replicas` to "2": @@ -90,9 +91,8 @@ The number of OIG Managed Servers running is dependent on the `replicas` paramet serverPod: env: - name: USER_MEM_ARGS - value: -Djava.security.egd=file:/dev/./urandom -Xms2408m -Xmx8192m - serverService: - precreateService: true + value: -Djava.security.egd=file:/dev/./urandom -Xms8192m -Xmx8192m + ... ``` 1. 
Save the file and exit (:wq)

@@ -174,9 +174,8 @@ As mentioned in the previous section, the number of OIG Managed Servers running
      serverPod:
        env:
        - name: USER_MEM_ARGS
-         value: -Djava.security.egd=file:/dev/./urandom -Xms2408m -Xmx8192m
-     serverService:
-       precreateService: true
+         value: -Djava.security.egd=file:/dev/./urandom -Xms8192m -Xmx8192m
+   ...
    ```

1. To stop OIG Managed Servers, decrease the `replicas` value as desired. In the example below, we will stop one Managed Server by setting `replicas` to "1":

@@ -188,10 +187,8 @@ As mentioned in the previous section, the number of OIG Managed Servers running
      serverPod:
        env:
        - name: USER_MEM_ARGS
-         value: -Djava.security.egd=file:/dev/./urandom -Xms2408m -Xmx8192m
-     serverService:
-       precreateService: true
-
+         value: -Djava.security.egd=file:/dev/./urandom -Xms8192m -Xmx8192m
+   ...
    ```

1. Save the file and exit (:wq)

diff --git a/docs-source/content/idm-products/oig/manage-oig-domains/hpa.md b/docs-source/content/idm-products/oig/manage-oig-domains/hpa.md
new file mode 100644
index 000000000..fdb353cbe
--- /dev/null
+++ b/docs-source/content/idm-products/oig/manage-oig-domains/hpa.md
@@ -0,0 +1,453 @@
+---
+title: "f. Kubernetes Horizontal Pod Autoscaler"
+description: "Describes the steps for implementing the Horizontal Pod Autoscaler."
+---
+
+
+1. [Prerequisite configuration](#prerequisite-configuration)
+1. [Deploy the Kubernetes Metrics Server](#deploy-the-kubernetes-metrics-server)
+   1. [Troubleshooting](#troubleshooting)
+1. [Deploy HPA](#deploy-hpa)
+1. [Testing HPA](#testing-hpa)
+1. [Delete the HPA](#delete-the-hpa)
+1. [Other considerations](#other-considerations)
+
+
+Kubernetes Horizontal Pod Autoscaler (HPA) is supported from WebLogic Kubernetes Operator 4.0.X and later.
+
+HPA allows automatic scaling (up and down) of the OIG Managed Servers. If load increases, extra OIG Managed Servers will be started as required, up to the value `configuredManagedServerCount` defined when the domain was created (see [Prepare the create domain script](../../create-oig-domains#prepare-the-create-domain-script)). Similarly, if load decreases, OIG Managed Servers will be automatically shut down.
+
+For more information on HPA, see [Horizontal Pod Autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
+
+The instructions below show you how to configure and run an HPA to scale an OIG cluster (`governancedomain-oim-cluster`) resource, based on CPU utilization or memory resource metrics. If required, you can also perform the same steps for the `governancedomain-soa-cluster`.
+
+**Note**: If you enable HPA and then decide you want to start/stop/scale OIG Managed Servers manually as per [Domain Life Cycle](../domain-lifecycle), it is recommended to delete the HPA beforehand as per [Delete the HPA](#delete-the-hpa).
+
+### Prerequisite configuration
+
+In order to use HPA, the OIG domain must have been created with the required `resources` parameter as per [Set the OIM server memory parameters](../../create-oig-domains#setting-the-oim-server-memory-parameters). For example:
+
+   ```
+   serverPod:
+     env:
+     - name: USER_MEM_ARGS
+       value: "-XX:+UseContainerSupport -Djava.security.egd=file:/dev/./urandom -Xms8192m -Xmx8192m"
+     resources:
+       limits:
+         cpu: "2"
+         memory: "8Gi"
+       requests:
+         cpu: "1000m"
+         memory: "4Gi"
+   ```
+
+If you created the OIG domain without setting these parameters, then you can update the domain using the following steps:
+
+1. 
Run the following command to edit the cluster:
+
+   ```
+   $ kubectl edit cluster governancedomain-oim-cluster -n oigns
+   ```
+
+   **Note**: This opens an edit session for the `governancedomain-oim-cluster` where parameters can be changed using standard `vi` commands.
+
+1. In the edit session, search for `spec:`, and then look for the `replicas` parameter under `clusterName: oim_cluster`. Change the entry so it looks as follows:
+
+   ```
+   spec:
+     clusterName: oim_cluster
+     replicas: 1
+     serverPod:
+       env:
+       - name: USER_MEM_ARGS
+         value: -XX:+UseContainerSupport -Djava.security.egd=file:/dev/./urandom -Xms8192m -Xmx8192m
+       resources:
+         limits:
+           cpu: "2"
+           memory: 8Gi
+         requests:
+           cpu: 1000m
+           memory: 4Gi
+     serverService:
+       precreateService: true
+   ...
+   ```
+
+1. Save the file and exit (:wq!)
+
+   The output will look similar to the following:
+
+   ```
+   cluster.weblogic.oracle/governancedomain-oim-cluster edited
+   ```
+
+   The OIG Managed Server pods will then automatically be restarted.
+
+### Deploy the Kubernetes Metrics Server
+
+Before deploying HPA you must deploy the Kubernetes Metrics Server.
+
+1. Check to see if the Kubernetes Metrics Server is already deployed:
+
+   ```
+   $ kubectl get pods -n kube-system | grep metric
+   ```
+   If a row is returned as follows, then the Kubernetes Metrics Server is deployed and you can move to [Deploy HPA](#deploy-hpa).
+
+   ```
+   metrics-server-d9694457-mf69d   1/1     Running   0             5m13s
+   ```
+
+1. If no rows are returned by the previous command, then the Kubernetes Metrics Server needs to be deployed. Run the following commands to get the `components.yaml`:
+
+   ```
+   $ mkdir $WORKDIR/kubernetes/hpa
+   $ cd $WORKDIR/kubernetes/hpa
+   $ wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
+   ```
+
+1. Deploy the Kubernetes Metrics Server by running the following command:
+
+   ```
+   $ kubectl apply -f components.yaml
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   serviceaccount/metrics-server created
+   clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
+   clusterrole.rbac.authorization.k8s.io/system:metrics-server created
+   rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
+   clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
+   clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
+   service/metrics-server created
+   deployment.apps/metrics-server created
+   apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
+   ```
+
+1. Run the following command to check that the Kubernetes Metrics Server is running:
+
+   ```
+   $ kubectl get pods -n kube-system | grep metric
+   ```
+
+   Make sure the pod has a `READY` status of `1/1`:
+
+   ```
+   metrics-server-d9694457-mf69d   1/1     Running   0             39s
+   ```
+
+
+#### Troubleshooting
+
+If the Kubernetes Metrics Server does not reach the `READY 1/1` state, run the following commands:
+
+```
+$ kubectl describe pod -n kube-system
+$ kubectl logs -n kube-system
+```
+
+If you see errors such as:
+
+```
+Readiness probe failed: HTTP probe failed with statuscode: 500
+```
+
+and:
+
+```
+E0907 13:07:50.937308       1 scraper.go:140] "Failed to scrape node" err="Get \"https://100.105.18.113:10250/metrics/resource\": x509: cannot validate certificate for 100.105.18.113 because it doesn't contain any IP SANs" node="worker-node1"
+```
+
+then you may need to install a valid cluster certificate for your Kubernetes cluster.
+
+For testing purposes, you can resolve this issue as follows:
+
+1. 
Delete the Kubernetes Metrics Server by running the following command:
+
+   ```
+   $ kubectl delete -f $WORKDIR/kubernetes/hpa/components.yaml
+   ```
+
+1. Edit the `$WORKDIR/kubernetes/hpa/components.yaml` and locate the `args:` section. Add `--kubelet-insecure-tls` to the arguments. For example:
+
+   ```
+   spec:
+     containers:
+     - args:
+       - --cert-dir=/tmp
+       - --secure-port=4443
+       - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
+       - --kubelet-use-node-status-port
+       - --kubelet-insecure-tls
+       - --metric-resolution=15s
+       image: registry.k8s.io/metrics-server/metrics-server:v0.6.4
+   ...
+   ```
+
+1. Deploy the Kubernetes Metrics Server using the command:
+
+   ```
+   $ kubectl apply -f components.yaml
+   ```
+
+   Run the following and make sure the READY status shows `1/1`:
+
+   ```
+   $ kubectl get pods -n kube-system | grep metric
+   ```
+
+   The output should look similar to the following:
+
+   ```
+   metrics-server-d9694457-mf69d   1/1     Running   0             40s
+   ```
+
+
+### Deploy HPA
+
+The steps below show how to configure and run an HPA to scale the `governancedomain-oim-cluster`, based on CPU or memory utilization metrics.
+
+The default OIG deployment creates the cluster `governancedomain-oim-cluster` which starts one OIG Managed Server (`oim_server1`). The deployment also creates, but doesn’t start, four extra OIG Managed Servers (`oim-server2` to `oim-server5`).
+
+In the following example an HPA resource is created, targeted at the cluster resource `governancedomain-oim-cluster`. This resource will autoscale OIG Managed Servers from a minimum of 1 cluster member up to 5 cluster members. Scaling up will occur when the average CPU is consistently over 70%. Scaling down will occur when the average CPU is consistently below 70%.
+
+
+1. Navigate to the `$WORKDIR/kubernetes/hpa` directory and create an `autoscalehpa.yaml` file that contains the following.
+
+   ```
+   #
+   apiVersion: autoscaling/v2
+   kind: HorizontalPodAutoscaler
+   metadata:
+     name: governancedomain-oim-cluster-hpa
+     namespace: oigns
+   spec:
+     scaleTargetRef:
+       apiVersion: weblogic.oracle/v1
+       kind: Cluster
+       name: governancedomain-oim-cluster
+     behavior:
+       scaleDown:
+         stabilizationWindowSeconds: 60
+       scaleUp:
+         stabilizationWindowSeconds: 60
+     minReplicas: 1
+     maxReplicas: 5
+     metrics:
+     - type: Resource
+       resource:
+         name: cpu
+         target:
+           type: Utilization
+           averageUtilization: 70
+   ```
+
+   **Note**: `minReplicas` and `maxReplicas` should match your current domain settings.
+
+   **Note**: For setting HPA based on Memory Metrics, update the metrics block with the following content. Please note we recommend using only CPU or Memory, not both.
+
+   ```
+   metrics:
+   - type: Resource
+     resource:
+       name: memory
+       target:
+         type: Utilization
+         averageUtilization: 70
+   ```
+
+
+1. Run the following command to create the autoscaler:
+
+   ```
+   $ kubectl apply -f autoscalehpa.yaml
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   horizontalpodautoscaler.autoscaling/governancedomain-oim-cluster-hpa created
+   ```
+
+1. Verify the status of the autoscaler by running the following:
+
+   ```
+   $ kubectl get hpa -n oigns
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   NAME                               REFERENCE                              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
+   governancedomain-oim-cluster-hpa   Cluster/governancedomain-oim-cluster   16%/70%   1         5         1          20s
+   ```
+
+   In the example above, CPU is currently running at 16% for the `governancedomain-oim-cluster-hpa`.
+
+
+### Testing HPA
+
+1. 
Check the current status of the OIG Managed Servers: + + ``` + $ kubectl get pods -n oigns + ``` + + The output will look similar to the following: + + ``` + NAME READY STATUS RESTARTS AGE + governancedomain-adminserver 1/1 Running 0 20m + governancedomain-create-fmw-infra-sample-domain-job-8wd2b 0/1 Completed 0 2d18h + governancedomain-oim-server1 1/1 Running 0 17m + governancedomain-soa-server1 1/1 Running 0 17m + helper 1/1 Running 0 2d18h + ``` + + In the above only `governancedomain-oim-server1` is running. + + + +1. To test HPA can scale up the WebLogic cluster `governancedomain-oim-cluster`, run the following commands: + + ``` + $ kubectl exec --stdin --tty governancedomain-oim-server1 -n oigns -- /bin/bash + ``` + + This will take you inside a bash shell inside the `oim_server1` pod: + + ``` + [oracle@governancedomain-oim-server1 oracle]$ + ``` + + Inside the bash shell, run the following command to increase the load on the CPU: + + ``` + [oracle@governancedomain-oim-server1 oracle]$ dd if=/dev/zero of=/dev/null + ``` + + This command will continue to run in the foreground. + + + +1. In a command window outside the bash shell, run the following command to view the current CPU usage: + + ``` + $ kubectl get hpa -n oigns + ``` + + The output will look similar to the following: + + ``` + NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE + governancedomain-oim-cluster-hpa Cluster/governancedomain-oim-cluster 386%/70% 1 5 1 2m47s + ``` + + In the above example the CPU has increased to 386%. As this is above the 70% limit, the autoscaler increases the replicas on the Cluster resource and the operator responds by starting additional cluster members. + +1. Run the following to see if any more OIG Managed Servers are started: + + ``` + $ kubectl get pods -n oigns + ``` + + The output will look similar to the following: + + ``` + NAME READY STATUS RESTARTS AGE + governancedomain-adminserver 1/1 Running 0 30m + governancedomain-create-fmw-infra-sample-domain-job-8wd2b 0/1 Completed 0 2d18h + governancedomain-oim-server1 1/1 Running 0 27m + governancedomain-oim-server2 1/1 Running 0 10m + governancedomain-oim-server3 1/1 Running 0 10m + governancedomain-oim-server4 1/1 Running 0 10m + governancedomain-oim-server5 1/1 Running 0 10m + governancedomain-soa-server1 1/1 Running 0 27m + helper 1/1 Running 0 2d18h + ``` + + In the example above four more OIG Managed Servers have been started (`oim-server2` - `oim-server5`). + + **Note**: It may take some time for the servers to appear and start. Once the servers are at `READY` status of `1/1`, the servers are started. + + +1. To stop the load on the CPU, in the bash shell, issue a Control C, and then exit the bash shell: + + ``` + [oracle@governancedomain-oim-server1 oracle]$ dd if=/dev/zero of=/dev/null + ^C + [oracle@governancedomain-oim-server1 oracle]$ exit + ``` + +1. Run the following command to view the current CPU usage: + + ``` + $ kubectl get hpa -n oigns + ``` + + The output will look similar to the following: + + ``` + NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE + governancedomain-oim-cluster-hpa Cluster/governancedomain-oim-cluster 33%/70% 1 5 5 37m + ``` + + In the above example CPU has dropped to 33%. 
As this is below the 70% threshold, you should see the autoscaler scale down the servers: + + ``` + $ kubectl get pods -n oigns + ``` + + The output will look similar to the following: + + ``` + NAME READY STATUS RESTARTS AGE + governancedomain-adminserver 1/1 Running 0 43m + governancedomain-create-fmw-infra-sample-domain-job-8wd2b 0/1 Completed 0 2d18h + governancedomain-oim-server1 1/1 Running 0 40m + governancedomain-oim-server2 1/1 Running 0 13m + governancedomain-oim-server3 1/1 Running 0 13m + governancedomain-oim-server4 1/1 Running 0 13m + governancedomain-oim-server5 0/1 Terminating 0 13m + governancedomain-soa-server1 1/1 Running 0 40m + helper 1/1 Running 0 2d19h + ``` + + Eventually, all the servers except `oim-server1` will disappear: + + ``` + NAME READY STATUS RESTARTS AGE + governancedomain-adminserver 1/1 Running 0 44m + governancedomain-create-fmw-infra-sample-domain-job-8wd2b 0/1 Completed 0 2d18h + governancedomain-oim-server1 1/1 Running 0 41m + governancedomain-soa-server1 1/1 Running 0 41m + helper 1/1 Running 0 2d20h + ``` + + +### Delete the HPA + +1. If you need to delete the HPA, you can do so by running the following command: + + ``` + $ cd $WORKDIR/kubernetes/hpa + $ kubectl delete -f autoscalehpa.yaml + ``` + +### Other considerations + ++ If HPA is deployed and you need to upgrade the OIG image, then you must delete the HPA before upgrading. Once the upgrade is successful you can deploy HPA again. ++ If you choose to start/stop an OIG Managed Server manually as per [Domain Life Cycle](../domain-lifecycle), then it is recommended to delete the HPA before doing so. + + + + + + + + + + + diff --git a/docs-source/content/idm-products/oig/manage-oig-domains/monitoring-oim-domains.md b/docs-source/content/idm-products/oig/manage-oig-domains/monitoring-oim-domains.md index d1a034091..48292e6c7 100644 --- a/docs-source/content/idm-products/oig/manage-oig-domains/monitoring-oim-domains.md +++ b/docs-source/content/idm-products/oig/manage-oig-domains/monitoring-oim-domains.md @@ -23,7 +23,7 @@ For usage details execute `./setup-monitoring.sh -h`. 1. Edit the `$WORKDIR/kubernetes/monitoring-service/monitoring-inputs.yaml` and change the `domainUID`, `domainNamespace`, and `weblogicCredentialsSecretName` to correspond to your deployment. Also change `wlsMonitoringExporterTosoaCluster`, `wlsMonitoringExporterTooimCluster`, `exposeMonitoringNodePort` to `true`. For example: ``` - version: create-oimcluster-monitoring-inputs-v1 + version: create-governancedomain-monitoring-inputs-v1 # Unique ID identifying your domain. # This ID must not contain an underscope ("_"), and must be lowercase and unique across all domains in a Kubernetes cluster. @@ -110,11 +110,6 @@ For usage details execute `./setup-monitoring.sh -h`. node/worker-node1 not labeled node/worker-node2 not labeled node/master-node not labeled - W0320 9968 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ - W0320 9968 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ - W0320 9968 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ - ... - W0320 9968 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Setup prometheus-community/kube-prometheus-stack started "prometheus-community" already exists with the same configuration, skipping Hang tight while we grab the latest from your chart repositories... 
@@ -236,8 +231,6 @@ For usage details execute `./setup-monitoring.sh -h`. ============================================================== ``` - **Note**: If you see the warning `W0320 9968 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+` you can ignore this message. - #### Prometheus service discovery After the ServiceMonitor is deployed, the wls-exporter should be discovered by Prometheus and be able to collect metrics. diff --git a/docs-source/content/idm-products/oig/patch-and-upgrade/patch-an-image.md b/docs-source/content/idm-products/oig/patch-and-upgrade/patch-an-image.md index 149ba9af7..53701fb54 100644 --- a/docs-source/content/idm-products/oig/patch-and-upgrade/patch-an-image.md +++ b/docs-source/content/idm-products/oig/patch-and-upgrade/patch-an-image.md @@ -8,7 +8,7 @@ description: "Instructions on how to update your OIG Kubernetes cluster with a n The OIG domain patching script automatically performs the update of your OIG Kubernetes cluster with a new OIG container image. -**Note**: Before following the steps below, you must have upgraded to WebLogic Kubernetes Operator 4.0.4. +**Note**: Before following the steps below, you must have upgraded to WebLogic Kubernetes Operator 4.1.2. The script executes the following steps sequentially: @@ -91,7 +91,7 @@ Download the latest code repository as follows: ```bash $ cd $WORKDIR/kubernetes/domain-lifecycle $ ./patch_oig_domain.sh -h - $ ./patch_oig_domain.sh -i 12.2.1.4-jdk8-ol7- -n oigns + $ ./patch_oig_domain.sh -i 12.2.1.4-jdk8-ol7- -n oigns ``` The output will look similar to the following @@ -105,16 +105,16 @@ Download the latest code repository as follows: [INFO] Deleting pod helper pod "helper" deleted [INFO] Fetched Image Pull Secret: orclcred - [INFO] Creating new helper pod with image: container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7- + [INFO] Creating new helper pod with image: container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7- pod/helper created Checking helper Running [INFO] Stopping Admin, SOA and OIM servers in domain governancedomain. This may take some time, monitor log /scratch/OIGK8Slatest/fmw-kubernetes/OracleIdentityGovernance/kubernetes/domain-lifecycle/log/oim_patch_log-/stop_servers.log for details [INFO] All servers are now stopped successfully. Proceeding with DB Schema changes [INFO] Patching OIM schemas... [INFO] DB schema update successful. Check log /scratch/OIGK8Slatest/fmw-kubernetes/OracleIdentityGovernance/kubernetes/domain-lifecycle/log/oim_patch_log-/patch_oim_wls.log for details - [INFO] Starting Admin, SOA and OIM servers with new image container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7- + [INFO] Starting Admin, SOA and OIM servers with new image container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7- [INFO] Waiting for 3 weblogic pods to be ready..This may take several minutes, do not close the window. Check log /scratch/OIGK8Slatest/fmw-kubernetes/OracleIdentityGovernance/kubernetes/domain-lifecycle/log/oim_patch_log-/monitor_weblogic_pods.log for progress - [SUCCESS] All servers under governancedomain are now in ready state with new image: container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7- + [SUCCESS] All servers under governancedomain are now in ready state with new image: container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7- ``` The logs are available at `$WORKDIR/kubernetes/domain-lifecycle` by default. 
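For example, to follow the pod monitoring stage while the script runs, you can tail the relevant log (a sketch only; the timestamped `oim_patch_log-` directory name comes from the script output above, hence the wildcard):

```bash
$ tail -f $WORKDIR/kubernetes/domain-lifecycle/log/oim_patch_log-*/monitor_weblogic_pods.log
```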
A custom log location can also be provided to the script. diff --git a/docs-source/content/idm-products/oig/patch-and-upgrade/upgrade-an-operator-release.md b/docs-source/content/idm-products/oig/patch-and-upgrade/upgrade-an-operator-release.md index e284b0f24..6effeaf56 100644 --- a/docs-source/content/idm-products/oig/patch-and-upgrade/upgrade-an-operator-release.md +++ b/docs-source/content/idm-products/oig/patch-and-upgrade/upgrade-an-operator-release.md @@ -71,7 +71,6 @@ These instructions apply to upgrading operators from 3.X.X to 4.X, or from withi pod/weblogic-operator-webhook-7996b8b58b-frtwp 1/1 Running 0 42s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - service/internal-weblogic-operator-svc ClusterIP 10.107.3.1 8082/TCP,8083/TCP 6d service/weblogic-operator-webhook-svc ClusterIP 10.106.51.57 8083/TCP,8084/TCP 42s NAME READY UP-TO-DATE AVAILABLE AGE diff --git a/docs-source/content/idm-products/oig/prepare-your-environment/_index.md b/docs-source/content/idm-products/oig/prepare-your-environment/_index.md index b3977780d..aa986f163 100644 --- a/docs-source/content/idm-products/oig/prepare-your-environment/_index.md +++ b/docs-source/content/idm-products/oig/prepare-your-environment/_index.md @@ -34,9 +34,9 @@ As per the [Prerequisites](../prerequisites/#system-requirements-for-oig-domains ``` NAME STATUS ROLES AGE VERSION - node/worker-node1 Ready 17h v1.24.5+1.el7 - node/worker-node2 Ready 17h v1.24.5+1.el7 - node/master-node Ready master 23h v1.24.5+1.el7 + node/worker-node1 Ready 17h v1.26.6+1.el8 + node/worker-node2 Ready 17h v1.26.6+1.el8 + node/master-node Ready master 23h v1.26.6+1.el8 NAME READY STATUS RESTARTS AGE pod/coredns-66bff467f8-fnhbq 1/1 Running 0 23h @@ -50,7 +50,7 @@ As per the [Prerequisites](../prerequisites/#system-requirements-for-oig-domains pod/kube-proxy-2kxv2 1/1 Running 0 17h pod/kube-proxy-82vvj 1/1 Running 0 17h pod/kube-proxy-nrgw9 1/1 Running 0 23h - pod/kube-scheduler-master 1/1 Running 0 21$ + pod/kube-scheduler-master 1/1 Running 0 21h ``` ### Obtain the OIG container image @@ -64,7 +64,7 @@ The OIG Kubernetes deployment requires access to an OIG container image. The ima #### Prebuilt OIG container image -The latest prebuilt OIG April 2023 container image can be downloaded from [Oracle Container Registry](https://container-registry.oracle.com). This image is prebuilt by Oracle and includes Oracle Identity Governance 12.2.1.4.0, the April Patch Set Update (PSU) and other fixes released with the Critical Patch Update (CPU) program.. +The latest prebuilt OIG October 2023 container image can be downloaded from [Oracle Container Registry](https://container-registry.oracle.com). This image is prebuilt by Oracle and includes Oracle Identity Governance 12.2.1.4.0, the October Patch Set Update (PSU) and other fixes released with the Critical Patch Update (CPU) program.. **Note**: Before using this image you must login to [Oracle Container Registry](https://container-registry.oracle.com), navigate to `Middleware` > `oig_cpu` and accept the license agreement. @@ -141,19 +141,20 @@ Oracle Identity Governance domain deployment on Kubernetes leverages the WebLogi No resources found in default namespace. 
``` - If you see the following: + If you see any of the following: ``` - NAME AGE - domains.weblogic.oracle 5d + NAME AGE + clusters.weblogic.oracle 5d + domains.weblogic.oracle 5d ``` - then run the following command to delete the existing crd: + then run the following command to delete the existing crd's: ```bash + $ kubectl delete crd clusters.weblogic.oracle $ kubectl delete crd domains.weblogic.oracle - customresourcedefinition.apiextensions.k8s.io "domains.weblogic.oracle" deleted - ``` + ``` ### Install the WebLogic Kubernetes Operator @@ -199,7 +200,7 @@ Oracle Identity Governance domain deployment on Kubernetes leverages the WebLogi $ cd $WORKDIR $ helm install weblogic-kubernetes-operator kubernetes/charts/weblogic-operator \ --namespace \ - --set image=ghcr.io/oracle/weblogic-kubernetes-operator:4.0.4 \ + --set image=ghcr.io/oracle/weblogic-kubernetes-operator:4.1.2 \ --set serviceAccount= \ --set “enableClusterRoleBinding=true” \ --set "domainNamespaceSelectionStrategy=LabelSelector" \ @@ -213,7 +214,7 @@ Oracle Identity Governance domain deployment on Kubernetes leverages the WebLogi $ cd $WORKDIR $ helm install weblogic-kubernetes-operator kubernetes/charts/weblogic-operator \ --namespace opns \ - --set image=ghcr.io/oracle/weblogic-kubernetes-operator:4.0.4 \ + --set image=ghcr.io/oracle/weblogic-kubernetes-operator:4.1.2 \ --set serviceAccount=op-sa \ --set "enableClusterRoleBinding=true" \ --set "domainNamespaceSelectionStrategy=LabelSelector" \ @@ -252,7 +253,6 @@ Oracle Identity Governance domain deployment on Kubernetes leverages the WebLogi pod/weblogic-operator-webhook-7996b8b58b-68l8s 1/1 Running 0 33s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - service/internal-weblogic-operator-svc ClusterIP 10.103.112.20 8082/TCP,8083/TCP 33s service/weblogic-operator-webhook-svc ClusterIP 10.109.163.130 8083/TCP,8084/TCP 34s NAME READY UP-TO-DATE AVAILABLE AGE @@ -279,9 +279,9 @@ Oracle Identity Governance domain deployment on Kubernetes leverages the WebLogi The output will look similar to the following: ``` - {"timestamp":"2023-03-15T17:44:55.852803077Z","thread":37,"fiber":"","namespace":"","domainUID":"","level":"FINE","class":"oracle.kubernetes.operator.DeploymentLiveness","method":"run","timeInMillis":1678902295852,"message":"Liveness file last modified time set","exception":"","code":"","headers":{},"body":""} - {"timestamp":"2023-03-15T17:45:00.853833985Z","thread":42,"fiber":"","namespace":"","domainUID":"","level":"FINE","class":"oracle.kubernetes.operator.DeploymentLiveness","method":"run","timeInMillis":1678902300853,"message":"Liveness file last modified time set","exception":"","code":"","headers":{},"body":""} - {"timestamp":"2023-03-15T17:45:05.854897954Z","thread":21,"fiber":"","namespace":"","domainUID":"","level":"FINE","class":"oracle.kubernetes.operator.DeploymentLiveness","method":"run","timeInMillis":1678902305854,"message":"Liveness file last modified time set","exception":"","code":"","headers":{},"body":""} + {"timestamp":"","thread":37,"fiber":"","namespace":"","domainUID":"","level":"FINE","class":"oracle.kubernetes.operator.DeploymentLiveness","method":"run","timeInMillis":1678902295852,"message":"Liveness file last modified time set","exception":"","code":"","headers":{},"body":""} + {"timestamp":"","thread":42,"fiber":"","namespace":"","domainUID":"","level":"FINE","class":"oracle.kubernetes.operator.DeploymentLiveness","method":"run","timeInMillis":1678902300853,"message":"Liveness file last modified time 
set","exception":"","code":"","headers":{},"body":""} + {"timestamp":"","thread":21,"fiber":"","namespace":"","domainUID":"","level":"FINE","class":"oracle.kubernetes.operator.DeploymentLiveness","method":"run","timeInMillis":1678902305854,"message":"Liveness file last modified time set","exception":"","code":"","headers":{},"body":""} ``` ### Create a namespace for Oracle Identity Governance @@ -401,7 +401,7 @@ Before following the steps in this section, make sure that the database and list For example: ```bash - $ kubectl run --image=container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7- --image-pull-policy="IfNotPresent" --overrides='{"apiVersion": "v1","spec":{"imagePullSecrets": [{"name": "orclcred"}]}}' helper -n oigns -- sleep infinity + $ kubectl run --image=container-registry.oracle.com/middleware/oig_cpu:12.2.1.4-jdk8-ol7- --image-pull-policy="IfNotPresent" --overrides='{"apiVersion": "v1","spec":{"imagePullSecrets": [{"name": "orclcred"}]}}' helper -n oigns -- sleep infinity ``` If you are not using a container registry and have loaded the image on each of the master and worker nodes, run the following command: @@ -413,7 +413,7 @@ Before following the steps in this section, make sure that the database and list For example: ```bash - $ kubectl run helper --image oracle/oig:12.2.1.4-jdk8-ol7- -n oigns -- sleep infinity + $ kubectl run helper --image oracle/oig:12.2.1.4-jdk8-ol7- -n oigns -- sleep infinity ``` The output will look similar to the following: @@ -660,7 +660,8 @@ Before following the steps in this section, make sure that the database and list [sql] Executing resource: /u01/oracle/idm/server/db/oim/oracle/Upgrade/oim12cps4/list/oim12cps4_upg_ent_trg_bkp.sql [sql] Executing resource: /u01/oracle/idm/server/db/oim/oracle/Upgrade/oim12cps4/list/oim12cps4_upg_ent_trg_fix.sql [sql] Executing resource: /u01/oracle/idm/server/db/oim/oracle/Upgrade/oim12cps4/list/oim12cps4_upg_ent_trg_restore_bkp.sql - [sql] 61 of 61 SQL statements executed successfully + [sql] Executing resource: /u01/oracle/idm/server/db/oim/oracle/Upgrade/oim12cps4/list/oim12cps4_ddl_alter_pwr_add_column.sql + [sql] 67 of 67 SQL statements executed successfully BUILD SUCCESSFUL Total time: 6 seconds @@ -746,7 +747,7 @@ In this section you prepare the environment for the OIG domain creation. This in type: Opaque ``` -1. Create a Kubernetes secret for RCU in the same Kubernetes namespace as the domain, using the `create-weblogic-credentials.sh` script: +1. Create a Kubernetes secret for RCU in the same Kubernetes namespace as the domain, using the `create-rcu-credentials.sh` script: ```bash $ cd $WORKDIR/kubernetes/create-rcu-credentials diff --git a/docs-source/content/idm-products/oig/prerequisites/_index.md b/docs-source/content/idm-products/oig/prerequisites/_index.md index 82bdfc5ac..6be3da8bd 100644 --- a/docs-source/content/idm-products/oig/prerequisites/_index.md +++ b/docs-source/content/idm-products/oig/prerequisites/_index.md @@ -7,7 +7,7 @@ description: "System requirements and limitations for deploying and running an O ### Introduction -This document provides information about the system requirements and limitations for deploying and running OIG domains with the WebLogic Kubernetes Operator 4.0.4. +This document provides information about the system requirements and limitations for deploying and running OIG domains with the WebLogic Kubernetes Operator 4.1.2. 
### System requirements for OIG domains @@ -24,7 +24,7 @@ This document provides information about the system requirements and limitations * A running Oracle Database 12.2.0.1 or later. The database must be a supported version for OIG as outlined in [Oracle Fusion Middleware 12c certifications](https://www.oracle.com/technetwork/middleware/fmw-122140-certmatrix-5763476.xlsx). It must meet the requirements as outlined in [About Database Requirements for an Oracle Fusion Middleware Installation](http://www.oracle.com/pls/topic/lookup?ctx=fmw122140&id=GUID-4D3068C8-6686-490A-9C3C-E6D2A435F20A) and in [RCU Requirements for Oracle Databases](http://www.oracle.com/pls/topic/lookup?ctx=fmw122140&id=GUID-35B584F3-6F42-4CA5-9BBB-116E447DAB83). **Note**: This documentation does not tell you how to install a Kubernetes cluster, Helm, the container engine, or how to push container images to a container registry. -Please refer to your vendor specific documentation for this information. +Please refer to your vendor specific documentation for this information. Also see [Getting Started](../introduction#getting-started). ### Limitations @@ -33,7 +33,7 @@ Compared to running a WebLogic Server domain in Kubernetes using the operator, t * In this release, OIG domains are supported using the “domain on a persistent volume” [model](https://oracle.github.io/weblogic-kubernetes-operator/userguide/managing-domains/choosing-a-model/) only, where the domain home is located in a persistent volume (PV). * The "domain in image" model is not supported. -* Only configured clusters are supported. Dynamic clusters are not supported for OIG domains. Note that you can still use all of the scaling features, you just need to define the maximum size of your cluster at domain creation time. +* Only configured clusters are supported. Dynamic clusters are not supported for OIG domains. Note that you can still use all of the scaling features, you just need to define the maximum size of your cluster at domain creation time. * The [WebLogic Monitoring Exporter](https://github.com/oracle/weblogic-monitoring-exporter) currently supports the WebLogic MBean trees only. Support for JRF MBeans has not been added yet. * We do not currently support running OIG in non-Linux containers. diff --git a/docs-source/content/idm-products/oig/release-notes/_index.md b/docs-source/content/idm-products/oig/release-notes/_index.md index 7ff0b0c6e..ae70f832d 100644 --- a/docs-source/content/idm-products/oig/release-notes/_index.md +++ b/docs-source/content/idm-products/oig/release-notes/_index.md @@ -10,6 +10,20 @@ Review the latest changes and known issues for Oracle Identity Governance on Kub | Date | Version | Change | | --- | --- | --- | +| October, 2023 | 23.4.1 | Supports Oracle Identity Governance 12.2.1.4 domain deployment using the October 2023 container image which contains the October Patch Set Update (PSU) and other fixes released with the Critical Patch Update (CPU) program.| +| | | This release contains the following changes: +| | | + Support for WebLogic Kubernetes Operator 4.1.2.| +| | | + Ability to set resource requests and limits for CPU and memory on a cluster resource. See, [Setting the OIM server memory parameters](../create-oig-domains/#setting-the-oim-server-memory-parameters). | +| | | + Support for the Kubernetes Horizontal Pod Autoscaler (HPA). 
See, [Kubernetes Horizontal Pod Autoscaler](../manage-oig-domains/hpa).| +| | | If upgrading to October 23 (23.4.1) from October 22 (22.4.1) or later, you must upgrade the following in order: +| | | 1. WebLogic Kubernetes Operator to 4.1.2| +| | | 2. Patch the OIG container image to October 23| +| | | If upgrading to October 23 (23.4.1) from a release prior to October 22 (22.4.1), you must upgrade the following in order: +| | | 1. WebLogic Kubernetes Operator to 4.1.2| +| | | 2. Patch the OIG container image to October 23| +| | | 3. Upgrade the Ingress| +| | | 4. Upgrade Elasticsearch and Kibana| +| | | See [Patch and Upgrade](../patch-and-upgrade) for these instructions.| | July, 2023 | 23.3.1 | Supports Oracle Identity Governance 12.2.1.4 domain deployment using the July 2023 container image which contains the July Patch Set Update (PSU) and other fixes released with the Critical Patch Update (CPU) program.| | | | If upgrading to July 23 (23.3.1) from April 23 (23.2.1), upgrade as follows: | | | 1. Patch the OIG container image to July 23| diff --git a/docs-source/content/idm-products/oud/create-oud-instances/_index.md b/docs-source/content/idm-products/oud/create-oud-instances/_index.md index 410292ad7..33004ebe8 100644 --- a/docs-source/content/idm-products/oud/create-oud-instances/_index.md +++ b/docs-source/content/idm-products/oud/create-oud-instances/_index.md @@ -168,6 +168,14 @@ You can create OUD instances using one of the following methods: imagePullSecrets: - name: orclcred oudConfig: + # memory, cpu parameters for both requests and limits for oud instances + resources: + limits: + cpu: "1" + memory: "4Gi" + requests: + cpu: "500m" + memory: "4Gi" rootUserPassword: sampleData: "200" persistence: @@ -190,11 +198,19 @@ You can create OUD instances using one of the following methods: ```yaml image: repository: container-registry.oracle.com/middleware/oud_cpu - tag: 12.2.1.4-jdk8-ol7- + tag: 12.2.1.4-jdk8-ol7- pullPolicy: IfNotPresent imagePullSecrets: - name: orclcred oudConfig: + # memory, cpu parameters for both requests and limits for oud instances + resources: + limits: + cpu: "1" + memory: "8Gi" + requests: + cpu: "500m" + memory: "4Gi" rootUserPassword: sampleData: "200" persistence: @@ -205,7 +221,7 @@ You can create OUD instances using one of the following methods: cronJob: kubectlImage: repository: bitnami/kubectl - tag: 1.24.5 + tag: 1.26.6 pullPolicy: IfNotPresent imagePullSecrets: @@ -228,7 +244,7 @@ You can create OUD instances using one of the following methods: ``` - * The `` in *kubectlImage* `tag:` should be set to the same version as your Kubernetes version (`kubectl version`). For example if your Kubernetes version is `1.24.5` set to `1.24.5`. + * The `` in *kubectlImage* `tag:` should be set to the same version as your Kubernetes version (`kubectl version`). For example if your Kubernetes version is `1.26.6` set to `1.26.6`. 
* If you are not using Oracle Container Registry or your own container registry for your OUD container image, then you can remove the following: ``` @@ -242,7 +258,7 @@ You can create OUD instances using one of the following methods: cronJob: kubectlImage: repository: container-registry.example.com/bitnami/kubectl - tag: 1.24.5 + tag: 1.26.6 pullPolicy: IfNotPresent busybox: @@ -250,6 +266,10 @@ You can create OUD instances using one of the following methods: ``` * If using NFS for your persistent volume then change the `persistence` section as follows: + + **Note**: If you want to use NFS you should ensure that you have a default Kubernetes storage class defined for your environment that allows network storage. + + For more information on storage classes, see [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/). ```yaml persistence: @@ -258,9 +278,70 @@ You can create OUD instances using one of the following methods: nfs: path: /oud_user_projects server: + # if true, it will create the storageclass. if value is false, please provide existing storage class (storageClass) to be used. + storageClassCreate: true + storageClass: oud-sc + # if storageClassCreate is true, please provide the custom provisioner if any to use. If you do not have a custom provisioner, delete this line, and it will use the default class kubernetes.io/is-default-class. + provisioner: kubernetes.io/is-default-class ``` + + The following caveats exist: + + * If you want to create your own storage class, set `storageClassCreate: true`. If `storageClassCreate: true` it is recommended to set `storageClass` to a value of your choice, and `provisioner` to the provisioner supported by your cloud vendor. + * If you have an existing storageClass that supports network storage, set `storageClassCreate: false` and `storageClass` to the NAME value returned in "`kubectl get storageclass`". The `provisioner` can be ignored. + + + * If using Block Device storage for your persistent volume then change the `persistence` section as follows: + + **Note**: If you want to use block devices you should ensure that you have a default Kubernetes storage class defined for your environment that allows dynamic storage. Each vendor has its own storage provider but it may not be configured to provide dynamic storage allocation. + + For more information on storage classes, see [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/). + + ```yaml + persistence: + type: blockstorage + # Specify Accessmode ReadWriteMany for NFS and for block ReadWriteOnce + accessMode: ReadWriteOnce + # if true, it will create the storageclass. if value is false, please provide existing storage class (storageClass) to be used. + storageClassCreate: true + storageClass: oud-sc + # if storageClassCreate is true, please provide the custom provisioner if any to use or else it will use default. + provisioner: oracle.com/oci + ``` + + The following caveats exist: + + * If you want to create your own storage class, set `storageClassCreate: true`. If `storageClassCreate: true` it is recommended to set `storageClass` to a value of your choice, and `provisioner` to the provisioner supported by your cloud vendor. + * If you have an existing storageClass that supports dynamic storage, set `storageClassCreate: false` and `storageClass` to the NAME value returned in "`kubectl get storageclass`". The `provisioner` can be ignored. 
+ + * For `resources`, `limits` and `requests`, the example CPU and memory values shown are for development environments only. For Enterprise Deployments, please review the performance recommendations and sizing requirements in [Enterprise Deployment Guide for Oracle Identity and Access Management in a Kubernetes Cluster](https://docs.oracle.com/en/middleware/fusion-middleware/12.2.1.4/ikedg/procuring-resources-oracle-cloud-infrastructure-deployment.html#GUID-2E3C8D01-43EB-4691-B1D6-25B1DC2475AE). + **Note**: Limits and requests for CPU resources are measured in CPU units. One CPU in Kubernetes is equivalent to 1 vCPU/Core for cloud providers, and 1 hyperthread on bare-metal Intel processors. An "`m`" suffix in a CPU attribute indicates ‘milli-CPU’, so 500m is 50% of a CPU. Memory can be expressed in various units, where one Mi is one IEC unit mega-byte (1024^2), and one Gi is one IEC unit giga-byte (1024^3). For more information, see [Resource Management for Pods and Containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/), [Assign Memory Resources to Containers and Pods](https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/), and [Assign CPU Resources to Containers and Pods](https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/). + + **Note**: The parameters above are also utilized by the Kubernetes Horizontal Pod Autoscaler (HPA). For more details on HPA, see [Kubernetes Horizontal Pod Autoscaler](../manage-oud-containers/hpa). + * If you plan on integrating OUD with other Oracle components then you must specify the following under the `oudConfig:` section: + + ``` + integration: + ``` + + For example: + ``` + oudConfig: + etc... + integration: + ``` + + It is recommended to choose the option covering your minimal requirements. Allowed values include: `no-integration` (no integration), `basic` (Directory Integration Platform), `generic` (Directory Integration Platform, Database Net Services and E-Business Suite integration), `eus` (Directory Integration Platform, Database Net Services, E-Business Suite and Enterprise User Security integration). The default value is `no-integration` + + + **Note**: This will enable the integration type only. To integrate OUD with the Oracle component referenced, refer to the relevant product component documentation. + + * If you want to enable Assured Replication, see [Enabling Assured Replication (Optional)](#enabling-assured-replication-optional). + + + 1. 
Run the following command to deploy OUD:

   ```bash
@@ -291,8 +372,11 @@ You can create OUD instances using one of the following methods:
   ```bash
   $ helm install --namespace \
-  --set oudConfig.rootUserPassword=,persistence.filesystem.hostPath.path=/oud_user_projects,image.repository=,image.tag= \
+  --set oudConfig.rootUserPassword= \
+  --set persistence.filesystem.hostPath.path=/oud_user_projects \
+  --set image.repository=,image.tag= \
   --set oudConfig.sampleData="200" \
+  --set oudConfig.resources.limits.cpu="1",oudConfig.resources.limits.memory="8Gi",oudConfig.resources.requests.cpu="500m",oudConfig.resources.requests.memory="4Gi" \
   --set cronJob.kubectlImage.repository=bitnami/kubectl,cronJob.kubectlImage.tag= \
   --set cronJob.imagePullSecrets[0].name="dockercred" \
   --set imagePullSecrets[0].name="orclcred" \
@@ -303,9 +387,12 @@ You can create OUD instances using one of the following methods:
   ```bash
   $ helm install --namespace oudns \
-  --set oudConfig.rootUserPassword=,persistence.filesystem.hostPath.path=/scratch/shared/oud_user_projects,image.repository=container-registry.oracle.com/middleware/oud_cpu,image.tag=12.2.1.4-jdk8-ol7- \
+  --set oudConfig.rootUserPassword= \
+  --set persistence.filesystem.hostPath.path=/scratch/shared/oud_user_projects \
+  --set image.repository=container-registry.oracle.com/middleware/oud_cpu,image.tag=12.2.1.4-jdk8-ol7- \
   --set oudConfig.sampleData="200" \
-  --set cronJob.kubectlImage.repository=bitnami/kubectl,cronJob.kubectlImage.tag=1.24.5 \
+  --set oudConfig.resources.limits.cpu="1",oudConfig.resources.limits.memory="8Gi",oudConfig.resources.requests.cpu="500m",oudConfig.resources.requests.memory="4Gi" \
+  --set cronJob.kubectlImage.repository=bitnami/kubectl,cronJob.kubectlImage.tag=1.26.6 \
   --set cronJob.imagePullSecrets[0].name="dockercred" \
   --set imagePullSecrets[0].name="orclcred" \
   oud-ds-rs oud-ds-rs
@@ -315,12 +402,103 @@ You can create OUD instances using one of the following methods:
   * Replace `` with the relevant password.
   * `sampleData: "200"` will load 200 sample users into the default baseDN `dc=example,dc=com`. If you do not want sample data, remove this entry. If `sampleData` is set to `1,000,000` users or greater, then you must add the following entries to the yaml file to prevent inconsistencies in dsreplication: `--set deploymentConfig.startupTime=720,deploymentConfig.period=120,deploymentConfig.timeout=60`.
-  * The `` in *kubectlImage* `tag:` should be set to the same version as your Kubernetes version (`kubectl version`). For example if your Kubernetes version is `1.24.5` set to `1.24.5`.
-  * If using using NFS for your persistent volume then use `persistence.networkstorage.nfs.path=/oud_user_projects,persistence.networkstorage.nfs.server:`.
+  * The `` in *kubectlImage* `tag:` should be set to the same version as your Kubernetes version (`kubectl version`). For example if your Kubernetes version is `1.26.6` set to `1.26.6`.
+  * If using NFS for your persistent volume then use:
+
+   ```
+   --set persistence.networkstorage.nfs.path=/oud_user_projects,persistence.networkstorage.nfs.server= \
+   --set persistence.storageClassCreate="true",persistence.storageClass="oud-sc",persistence.provisioner="kubernetes.io/is-default-class" \
+   ```
+  * If you want to create your own storage class, set `storageClassCreate: true`. If `storageClassCreate: true`, it is recommended to set `storageClass` to a value of your choice, and `provisioner` to the provisioner supported by your cloud vendor.
+  * If you have an existing storageClass that supports dynamic storage, set `storageClassCreate: false` and `storageClass` to the NAME value returned in "`kubectl get storageclass`". The `provisioner` can be ignored.
+
+  * If using block storage for your persistent volume then use:
+
+   ```
+   --set persistence.type="blockstorage",persistence.accessMode="ReadWriteOnce" \
+   --set persistence.storageClassCreate="true",persistence.storageClass="oud-sc",persistence.provisioner="oracle.com/oci" \
+   ```
+  * If you want to create your own storage class, set `storageClassCreate: true`. If `storageClassCreate: true`, it is recommended to set `storageClass` to a value of your choice, and `provisioner` to the provisioner supported by your cloud vendor.
+  * If you have an existing storageClass that supports dynamic storage, set `storageClassCreate: false` and `storageClass` to the NAME value returned in "`kubectl get storageclass`". The `provisioner` can be ignored.
+  * If you are not using Oracle Container Registry or your own container registry for your OUD container image, then you can remove the following: `--set imagePullSecrets[0].name="orclcred"`.
+  * For `resources`, `limits` and `requests`, the example CPU and memory values shown are for development environments only. For Enterprise Deployments, please review the performance recommendations and sizing requirements in [Enterprise Deployment Guide for Oracle Identity and Access Management in a Kubernetes Cluster](https://docs.oracle.com/en/middleware/fusion-middleware/12.2.1.4/ikedg/procuring-resources-oracle-cloud-infrastructure-deployment.html#GUID-2E3C8D01-43EB-4691-B1D6-25B1DC2475AE).
+
+   **Note**: Limits and requests for CPU resources are measured in CPU units. One CPU in Kubernetes is equivalent to 1 vCPU/Core for cloud providers, and 1 hyperthread on bare-metal Intel processors. An "`m`" suffix in a CPU attribute indicates ‘milli-CPU’, so 500m is 50% of a CPU. Memory can be expressed in various units, where one Mi is one IEC unit mega-byte (1024^2), and one Gi is one IEC unit giga-byte (1024^3). For more information, see [Resource Management for Pods and Containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/), [Assign Memory Resources to Containers and Pods](https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/), and [Assign CPU Resources to Containers and Pods](https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/).
+
+   **Note**: The parameters above are also utilized by the Kubernetes Horizontal Pod Autoscaler (HPA). For more details on HPA, see [Kubernetes Horizontal Pod Autoscaler](../manage-oud-containers/hpa).
+
+  * If you plan on integrating OUD with other Oracle components then you must specify the following:
+
+   ```
+   --set oudConfig.integration=
+   ```
+
+   It is recommended to choose the option covering your minimal requirements. Allowed values include: `no-integration` (no integration), `basic` (Directory Integration Platform), `generic` (Directory Integration Platform, Database Net Services and E-Business Suite integration), `eus` (Directory Integration Platform, Database Net Services, E-Business Suite and Enterprise User Security integration). The default value is `no-integration`.
+
+   **Note**: This will enable the integration type only. To integrate OUD with the Oracle component referenced, refer to the relevant product component documentation.
+
+  * If you want to enable Assured Replication, see [Enabling Assured Replication (Optional)](#enabling-assured-replication-optional).

1. Check the OUD deployment as per [Verify the OUD deployment](#verify-the-oud-deployment) and [Verify the OUD replication](#verify-the-oud-replication).
+
+### Enabling Assured Replication (Optional)
+
+If you want to enable assured replication, perform the following steps:
+
+1. Create a directory on the persistent volume as follows:
+
+   ```
+   $ cd
+   $ mkdir oud-repl-config
+   $ sudo chown -R 1000:0 oud-repl-config
+   ```
+
+   For example:
+
+   ```
+   $ cd /scratch/shared
+   $ mkdir oud-repl-config
+   $ sudo chown -R 1000:0 oud-repl-config
+   ```
+
+1. Add the following section to the `oud-ds-rs-values-override.yaml`:
+
+   ```
+   replOUD:
+     envVars:
+       - name: post_dsreplication_dsconfig_3
+         value: set-replication-domain-prop --domain-name ${baseDN} --advanced --set assured-type:safe-read --set assured-sd-level:2 --set assured-timeout:5s
+       - name: execCmd_1
+         value: /u01/oracle/user_projects/${OUD_INSTANCE_NAME}/OUD/bin/dsconfig --no-prompt --hostname ${sourceHost} --port ${adminConnectorPort} --bindDN "${rootUserDN}" --bindPasswordFile /u01/oracle/user_projects/${OUD_INSTANCE_NAME}/admin/rootPwdFile.txt --trustAll set-replication-domain-prop --domain-name ${baseDN} --advanced --set assured-type:safe-read --set assured-sd-level:2 --set assured-timeout:5s --provider-name "Multimaster Synchronization"
+   configVolume:
+     enabled: true
+     type: networkstorage
+     storageClassCreate: true
+     storageClass: oud-config
+     provisioner: kubernetes.io/is-default-class
+     networkstorage:
+       nfs:
+         server:
+         path: /oud-repl-config
+     mountPath: /u01/oracle/config-input
+   ```
+
+   For more information on OUD Assured Replication, and other options and levels, see, [Understanding the Oracle Unified Directory Replication Model](https://docs.oracle.com/en/middleware/idm/unified-directory/12.2.1.4/oudag/understanding-oracle-unified-directory-replication-model.html#GUID-A2438E61-D4DB-4B3B-8E2D-AE5921C3CF8C).
+
+   The following caveats exist:
+
+   * `post_dsreplication_dsconfig_N` and `execCmd_N` should each be a unique key, so change the suffix accordingly. For more information on the environment variables and respective keys, see, [Appendix B: Environment Variables](#appendix-b-environment-variables).
+
+   * For `configVolume`, the storage can be networkstorage (NFS) or filesystem (hostPath), as the config volume path has to be accessible from all the Kubernetes nodes. Please note that block storage is not supported for configVolume.
+
+   * If you want to create your own storage class, set `storageClassCreate: true`. If `storageClassCreate: true`, it is recommended to set `storageClass` to a value of your choice, and `provisioner` to the provisioner supported by your cloud vendor.
+
+   * If you have an existing storageClass that supports network storage, set `storageClassCreate: false` and `storageClass` to the NAME value returned in "`kubectl get storageclass`". Please note that the storage class should not be the one you used for the persistent volume earlier. The `provisioner` can be ignored.
+
+
### Helm command output

In all the examples above, the following output is shown following a successful execution of the `helm install` command.
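+
+The release status can also be re-displayed at any time after installation. A minimal sketch, assuming the release name `oud-ds-rs` and namespace `oudns` used in the examples above:
+
+```bash
+# Show the status of the deployed release
+$ helm status oud-ds-rs -n oudns
+
+# List all releases in the namespace with their chart versions
+$ helm list -n oudns
+```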
@@ -432,6 +610,20 @@ ingress.networking.k8s.io/oud-ds-rs-http-ingress-nginx oud-ds-rs-htt
 ```

+**Note**: If you are using block storage you will see slightly different entries for PV and PVC, for example:
+
+```
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
+persistentvolume/ocid1.volume.oc1.iad. 50Gi RWO Delete Bound oudns/oud-ds-rs-pv-oud-ds-rs-2 oud-sc 60m Filesystem
+persistentvolume/ocid1.volume.oc1.iad. 50Gi RWO Delete Bound oudns/oud-ds-rs-pv-oud-ds-rs-1 oud-sc 67m Filesystem
+persistentvolume/ocid1.volume.oc1.iad. 50Gi RWO Delete Bound oudns/oud-ds-rs-pv-oud-ds-rs-3 oud-sc 45m Filesystem
+
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
+persistentvolumeclaim/oud-ds-rs-pv-oud-ds-rs-1 Bound ocid1.volume.oc1.iad. 50Gi RWO oud-sc 67m Filesystem
+persistentvolumeclaim/oud-ds-rs-pv-oud-ds-rs-2 Bound ocid1.volume.oc1.iad. 50Gi RWO oud-sc 60m Filesystem
+persistentvolumeclaim/oud-ds-rs-pv-oud-ds-rs-3 Bound ocid1.volume.oc1.iad. 50Gi RWO oud-sc 45m Filesystem
+```
+
**Note**: Initially `pod/oud-ds-rs-0` will appear with a `STATUS` of `0/1` and it will take approximately 5 minutes before OUD is started (`1/1`). Once `pod/oud-ds-rs-0` has a `STATUS` of `1/1`, `pod/oud-ds-rs-1` will appear with a `STATUS` of `0/1`. Once `pod/oud-ds-rs-1` is started (`1/1`), `pod/oud-ds-rs-2` will appear. It will take around 15 minutes for all the pods to fully start.

While the oud-ds-rs pods have a `STATUS` of `0/1` the pod is running but the OUD server associated with it is currently starting. While the pod is starting, you can check the startup status in the pod logs, by running the following command:

@@ -603,6 +795,44 @@ Once all the PODs created are visible as `READY` (i.e. `1/1`), you can verify yo

 The output will be the same as per [Run dsreplication inside the pod](#run-dresplication-inside-the-pod).

+#### Verify OUD assured replication status
+
+**Note**: This section only needs to be followed if you enabled assured replication as per [Enabling Assured Replication (Optional)](#enabling-assured-replication-optional).
+
+1. Run the following command to create a bash shell in the pod:
+
+   ```bash
+   $ kubectl --namespace exec -it -c -- bash
+   ```
+
+   For example:
+
+   ```bash
+   $ kubectl --namespace oudns exec -it -c oud-ds-rs oud-ds-rs-0 -- bash
+   ```
+
+   This will take you into the pod:
+
+   ```bash
+   [oracle@oud-ds-rs-0 oracle]$
+   ```
+
+1. At the prompt, enter the following commands:
+
+   ```bash
+   $ echo $bindPassword1 > /tmp/pwd.txt
+   $ /u01/oracle/user_projects/${OUD_INSTANCE_NAME}/OUD/bin/dsconfig --no-prompt --hostname ${OUD_INSTANCE_NAME} --port ${adminConnectorPort} --bindDN "${rootUserDN}" --bindPasswordFile /tmp/pwd.txt --trustAll get-replication-domain-prop --domain-name ${baseDN} --advanced --property assured-type --property assured-sd-level --property assured-timeout --provider-name "Multimaster Synchronization"
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   Property         : Value(s)
+   -----------------:----------
+   assured-sd-level : 2
+   assured-timeout  : 5 s
+   assured-type     : safe-read
+   ```

### Verify the cronjob

@@ -641,7 +871,7 @@ Once all the PODs created are visible as `READY` (i.e. `1/1`), you can verify yo

   ```bash
   NAME COMPLETIONS DURATION AGE CONTAINERS IMAGES SELECTOR
-  oud-pod-cron-job-27586680 1/1 1s 5m36s cron-kubectl bitnami/kubectl:1.24.5 controller-uid=700ab9f7-6094-488a-854d-f1b914de5f61
+  oud-pod-cron-job-27586680 1/1 1s 5m36s cron-kubectl bitnami/kubectl:1.26.6 controller-uid=700ab9f7-6094-488a-854d-f1b914de5f61
   ```

@@ -739,7 +969,7 @@ With an OUD instance now deployed you are now ready to configure an ingress cont
   release "oud-ds-rs" uninstalled
   ```

-1. Run the following command to view the status: 
+1. Run the following command to view the status:

   ```bash
   $ kubectl --namespace oudns get pod,service,secret,pv,pvc,ingress -o wide

@@ -768,8 +998,21 @@ With an OUD instance now deployed you are now ready to configure an ingress cont

   Run the command again until the pods, PV and PVC disappear.

+1. If the PV or PVCs do not delete, remove them manually:
+
+   ```
+   $ kubectl delete pvc oud-ds-rs-pvc -n oudns
+   $ kubectl delete pv oud-ds-rs-pv -n oudns
+   ```
+
+   **Note**: If using block storage, you will see a PV and PVC for each pod. Delete all of the PVCs and PVs using the above commands.
+
+
 #### Delete the persistent volume contents

+**Note**: The steps below are not relevant for block storage.
+
 1. Delete the contents of the `oud_user_projects` directory in the persistent volume:

   ```bash
@@ -824,14 +1067,16 @@ The following table lists the configurable parameters of the `oud-ds-rs` chart a
 | persistence.enabled | If enabled, it will use the persistent volume. if value is false, PV and PVC would not be used and pods would be using the default emptyDir mount volume. | true |
 | persistence.pvname | pvname to use an already created Persistent Volume , If blank will use the default name | oud-ds-rs-< fullname >-pv |
 | persistence.pvcname | pvcname to use an already created Persistent Volume Claim , If blank will use default name |oud-ds-rs-< fullname >-pvc |
-| persistence.type | supported values: either filesystem or networkstorage or custom | filesystem |
+| persistence.type | supported values: either filesystem or networkstorage or blockstorage or custom | filesystem |
 | persistence.filesystem.hostPath.path | The path location mentioned should be created and accessible from the local host provided with necessary privileges for the user. | /scratch/shared/oud_user_projects |
 | persistence.networkstorage.nfs.path | Path of NFS Share location | /scratch/shared/oud_user_projects |
 | persistence.networkstorage.nfs.server | IP or hostname of NFS Server | 0.0.0.0 |
 | persistence.custom.* | Based on values/data, YAML content would be included in PersistenceVolume Object | |
-| persistence.accessMode | Specifies the access mode of the location provided | ReadWriteMany |
+| persistence.accessMode | Specifies the access mode of the location provided. ReadWriteMany for Filesystem/NFS, ReadWriteOnce for block storage. | ReadWriteMany |
 | persistence.size | Specifies the size of the storage | 10Gi |
+| persistence.storageClassCreate | If true, it will create the storageclass. If false, provide an existing storage class (storageClass) to be used. | empty |
 | persistence.storageClass | Specifies the storageclass of the persistence volume. | empty |
+| persistence.provisioner | If storageClassCreate is true, provide the custom provisioner, if any. | kubernetes.io/is-default-class |
 | persistence.annotations | specifies any annotations that will be used| { } |
 | configVolume.enabled | If enabled, it will use the persistent volume. 
If value is false, PV and PVC would not be used and pods would be using the default emptyDir mount volume. | true | | configVolume.mountPath | If enabled, it will use the persistent volume. If value is false, PV and PVC would not be used and there would not be any mount point available for config | false | @@ -845,7 +1090,9 @@ The following table lists the configurable parameters of the `oud-ds-rs` chart a | configVolume.accessMode | Specifies the access mode of the location provided | ReadWriteMany | | configVolume.size | Specifies the size of the storage | 10Gi | | configVolume.storageClass | Specifies the storageclass of the persistence volume. | empty | -| configVolume.annotations | specifies any annotations that will be used| { } | +| configVolume.annotations | Specifies any annotations that will be used| { } | +| configVolume.storageClassCreate | If true, it will create the storageclass. if value is false, provide existing storage class (storageClass) to be used. | true | +| configVolume.provisioner | If configVolume.storageClassCreate is true, please provide the custom provisioner if any. | kubernetes.io/is-default-class | | oudPorts.adminldaps | Port on which Oracle Unified Directory Instance in the container should listen for Administration Communication over LDAPS Protocol | 1444 | | oudPorts.adminhttps | Port on which Oracle Unified Directory Instance in the container should listen for Administration Communication over HTTPS Protocol. | 1888 | | oudPorts.ldap | Port on which Oracle Unified Directory Instance in the container should listen for LDAP Communication. | 1389 | @@ -881,49 +1128,15 @@ The following table lists the configurable parameters of the `oud-ds-rs` chart a | oudPorts.nodePorts.ldaps | Public port on which the OUD instance in the container should listen for LDAPS communication. The port number should be between 30000-32767. No duplicate values are allowed. **Note**: Set only if service.lbrtype is set as NodePort. If left blank then k8s will assign random ports in between 30000 and 32767. | | | oudPorts.nodePorts.http | Public port on which the OUD instance in the container should listen for HTTP communication. The port number should be between 30000-32767. No duplicate values are allowed. **Note**: Set only if service.lbrtype is set as NodePort. If left blank then k8s will assign random ports in between 30000 and 32767. | | | oudPorts.nodePorts.https | Public port on which the OUD instance in the container should listen for HTTPS communication. The port number should be between 30000-32767. No duplicate values are allowed. **Note**: Set only if service.lbrtype is set as NodePort. If left blank then k8s will assign random ports in between 30000 and 32767. | | -| elk.elasticsearch.enabled | If enabled it will create the elastic search statefulset deployment | false | -| elk.elasticsearch.image.repository | Elastic Search Image name/Registry/Repository . Based on this elastic search instances will be created | docker.elastic.co/elasticsearch/elasticsearch | -| elk.elasticsearch.image.tag | Elastic Search Image tag .Based on this, image parameter would be configured for Elastic Search pods/instances | 6.4.3 | -| elk.elasticsearch.image.pullPolicy | policy to pull the image | IfnotPresent | -| elk.elasticsearch.esreplicas | Number of Elastic search Instances will be created | 3 | -| elk.elasticsearch.minimumMasterNodes | The value for discovery.zen.minimum_master_nodes. Should be set to (esreplicas / 2) + 1. 
| 2 | -| elk.elasticsearch.esJAVAOpts | Java options for Elasticsearch. This is where you should configure the jvm heap size | -Xms512m -Xmx512m | -| elk.elasticsearch.sysctlVmMaxMapCount | Sets the sysctl vm.max_map_count needed for Elasticsearch | 262144 | -| elk.elasticsearch.resources.requests.cpu | cpu resources requested for the elastic search | 100m | -| elk.elasticsearch.resources.limits.cpu | total cpu limits that are configures for the elastic search | 1000m | -| elk.elasticsearch.esService.type | Type of Service to be created for elastic search | ClusterIP | -| elk.elasticsearch.esService.lbrtype | Type of load balancer Service to be created for elastic search | ClusterIP | -| elk.kibana.enabled | If enabled it will create a kibana deployment | false | -| elk.kibana.image.repository | Kibana Image Registry/Repository and name. Based on this Kibana instance will be created | docker.elastic.co/kibana/kibana | -| elk.kibana.image.tag | Kibana Image tag. Based on this, Image parameter would be configured. | 6.4.3 | -| elk.kibana.image.pullPolicy | policy to pull the image | IfnotPresent | -| elk.kibana.kibanaReplicas | Number of Kibana instances will be created | 1 | -| elk.kibana.service.tye | Type of service to be created | NodePort | -| elk.kibana.service.targetPort | Port on which the kibana will be accessed | 5601 | -| elk.kibana.service.nodePort | nodePort is the port on which kibana service will be accessed from outside | 31119 | -| elk.logstash.enabled | If enabled it will create a logstash deployment | false | -| elk.logstash.image.repository | logstash Image Registry/Repository and name. Based on this logstash instance will be created | logstash | -| elk.logstash.image.tag | logstash Image tag. Based on this, Image parameter would be configured. | 6.6.0 | -| elk.logstash.image.pullPolicy | policy to pull the image | IfnotPresent | -| elk.logstash.containerPort | Port on which the logstash container will be running | 5044 | -| elk.logstash.service.tye | Type of service to be created | NodePort | -| elk.logstash.service.targetPort | Port on which the logstash will be accessed | 9600 | -| elk.logstash.service.nodePort | nodePort is the port on which logstash service will be accessed from outside | 32222 | -| elk.logstash.logstashConfigMap | Provide the configmap name which is already created with the logstash conf. if empty default logstash configmap will be created and used | | -| elk.elkPorts.rest | Port for REST | 9200 | -| elk.elkPorts.internode | port used for communication between the nodes | 9300 | -| elk.busybox.image | busy box image name. Used for initcontianers | busybox | -| elk.elkVolume.enabled | If enabled, it will use the persistent volume. if value is false, PV and pods would be using the default emptyDir mount volume. | true | -| elk.elkVolume.pvname | pvname to use an already created Persistent Volume , If blank will use the default name | oud-ds-rs-< fullname >-espv | -| elk.elkVolume.type | supported values: either filesystem or networkstorage or custom | filesystem | -| elk.elkVolume.filesystem.hostPath.path | The path location mentioned should be created and accessible from the local host provided with necessary privileges for the user. 
| /scratch/shared/oud_elk/data |
-| elk.elkVolume.networkstorage.nfs.path | Path of NFS Share location | /scratch/shared/oud_elk/data |
-| elk.elkVolume.networkstorage.nfs.server | IP or hostname of NFS Server | 0.0.0.0 |
-| elk.elkVolume.custom.* | Based on values/data, YAML content would be included in PersistenceVolume Object | |
-| elk.elkVolume.accessMode | Specifies the access mode of the location provided | ReadWriteMany |
-| elk.elkVolume.size | Specifies the size of the storage | 20Gi |
-| elk.elkVolume.storageClass | Specifies the storageclass of the persistence volume. | elk |
-| elk.elkVolume.annotations | specifies any annotations that will be used| { } |
+| oudConfig.integration | Specifies which Oracle components the server can be integrated with. It is recommended to choose the option covering your minimal requirements. Allowed values: no-integration (no integration), basic (Directory Integration Platform), generic (Directory Integration Platform, Database Net Services and E-Business Suite integration), eus (Directory Integration Platform, Database Net Services, E-Business Suite and Enterprise User Security integration)| no-integration |
+| elk.logStashImage | The version of logstash you want to install | logstash:8.3.1 |
+| elk.sslenabled | If SSL is enabled for ELK set the value to true, or if NON-SSL set to false. This value must be lowercase | true |
+| elk.eshosts | The URL for sending logs to Elasticsearch. HTTP if NON-SSL is used | https://elasticsearch.example.com:9200 |
+| elk.esuser | The name of the user for logstash to access Elasticsearch | logstash_internal |
+| elk.espassword | The password for ELK_USER | password |
+| elk.esapikey | The API key details | apikey |
+| elk.esindex | The log name | oudlogs-00001 |
+| elk.imagePullSecrets | Secret to be used for pulling the logstash image | dockercred |

### Appendix B: Environment Variables

diff --git a/docs-source/content/idm-products/oud/introduction/_index.md b/docs-source/content/idm-products/oud/introduction/_index.md
index 09adb2ddd..60c4ea991 100644
--- a/docs-source/content/idm-products/oud/introduction/_index.md
+++ b/docs-source/content/idm-products/oud/introduction/_index.md
@@ -12,7 +12,7 @@ This project supports deployment of Oracle Unified Directory (OUD) container ima

 This project has several key features to assist you with deploying and managing Oracle Unified Directory in a Kubernetes environment. You can:

-* Create Oracle Unified Directory instances in a Kubernetes persistent volume (PV). This PV can reside in an NFS file system or other Kubernetes volume types.
+* Create Oracle Unified Directory instances in a Kubernetes persistent volume (PV). This PV can reside in an NFS file system, block storage device, or other Kubernetes volume types.
 * Start servers based on declarative startup parameters and desired states.
 * Expose the Oracle Unified Directory services for external access.
 * Scale Oracle Unified Directory by starting and stopping servers on demand.

@@ -21,7 +21,7 @@ This project has several key features to assist you with deploying and managing

 ### Current production release

-The current production release for the Oracle Unified Directory 12c PS4 (12.2.1.4.0) deployment on Kubernetes is [23.3.1](https://github.com/oracle/fmw-kubernetes/releases).
+The current production release for the Oracle Unified Directory 12c PS4 (12.2.1.4.0) deployment on Kubernetes is [23.4.1](https://github.com/oracle/fmw-kubernetes/releases).
### Recent changes and known issues

@@ -29,14 +29,21 @@ See the [Release Notes](../release-notes) for recent changes and known issues fo

 ### Getting started

-This documentation explains how to configure OUD on a Kubernetes cluster where no other Oracle Identity Management products will be deployed. For detailed information about this type of deployment, start at [Prerequisites](../prerequisites) and follow this documentation sequentially.
+This documentation explains how to configure OUD on a Kubernetes cluster where no other Oracle Identity Management products will be deployed. For detailed information about this type of deployment, start at [Prerequisites](../prerequisites) and follow this documentation sequentially. Please note that this documentation does not explain how to configure a Kubernetes cluster, given that the product can be deployed on any compliant Kubernetes vendor's distribution.

-If performing an Enterprise Deployment, refer to the [Enterprise Deployment Guide for Oracle Identity and Access Management in a Kubernetes Cluster](https://docs.oracle.com/en/middleware/fusion-middleware/12.2.1.4/ikedg/index.html) instead.
+If you are deploying multiple Oracle Identity Management products on the same Kubernetes cluster, then you must follow the Enterprise Deployment Guide outlined in [Enterprise Deployments](../../enterprise-deployments).
+Please note, you also have the option to follow the Enterprise Deployment Guide even if you are only installing OUD and no other Oracle Identity Management products.
+
+**Note**: If you need to understand how to configure a Kubernetes cluster ready for an Oracle Unified Directory deployment, you should follow the Enterprise Deployment Guide referenced in [Enterprise Deployments](../../enterprise-deployments). The [Enterprise Deployment Automation](../../enterprise-deployments/enterprise-deployment-automation) section also contains details on automation scripts that can:
+
+ + Automate the creation of a Kubernetes cluster on Oracle Cloud Infrastructure (OCI), ready for the deployment of Oracle Identity Management products.
+ + Automate the deployment of Oracle Identity Management products on any compliant Kubernetes cluster.

 ### Documentation for earlier releases

 To view documentation for an earlier release, see:

+* [Version 23.3.1](https://oracle.github.io/fmw-kubernetes/23.3.1/idm-products/oud/)
 * [Version 23.2.1](https://oracle.github.io/fmw-kubernetes/23.2.1/idm-products/oud/)
 * [Version 23.1.1](https://oracle.github.io/fmw-kubernetes/23.1.1/idm-products/oud/)
 * [Version 22.4.1](https://oracle.github.io/fmw-kubernetes/22.4.1/oud/)

diff --git a/docs-source/content/idm-products/oud/manage-oud-containers/hpa.md b/docs-source/content/idm-products/oud/manage-oud-containers/hpa.md
new file mode 100644
index 000000000..3d6c54d7f
--- /dev/null
+++ b/docs-source/content/idm-products/oud/manage-oud-containers/hpa.md
@@ -0,0 +1,439 @@
+---
+title: "d. Kubernetes Horizontal Pod Autoscaler"
+description: "Describes the steps for implementing the Horizontal Pod Autoscaler."
+---
+
+
+1. [Prerequisite configuration](#prerequisite-configuration)
+1. [Deploy the Kubernetes Metrics Server](#deploy-the-kubernetes-metrics-server)
+    1. [Troubleshooting](#troubleshooting)
+1. [Deploy HPA](#deploy-hpa)
+1. [Testing HPA](#testing-hpa)
+1. [Delete the HPA](#delete-the-hpa)
+1. [Other considerations](#other-considerations)
+
+
+Kubernetes Horizontal Pod Autoscaler (HPA) allows automatic scaling (up and down) of the OUD servers.
If load increases then extra OUD servers will be started as required. Similarly, if load decreases, OUD servers will be automatically shut down.
+
+For more information on HPA, see [Horizontal Pod Autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
+
+The instructions below show you how to configure and run an HPA to scale OUD servers, based on CPU utilization or memory resource metrics.
+
+**Note**: If you enable HPA and then decide you want to start/stop/scale OUD servers manually as per [Scaling Up/Down OUD Pods](../scaling-up-down), it is recommended to delete HPA beforehand as per [Delete the HPA](#delete-the-hpa).
+
+### Prerequisite configuration
+
+In order to use HPA, OUD must have been created with the required `resources` parameter as per [Create OUD instances](../../create-oud-instances#create-oud-instances). For example:
+
+   ```
+   oudConfig:
+   # memory, cpu parameters for both requests and limits for oud instances
+     resources:
+       limits:
+         cpu: "1"
+         memory: "8Gi"
+       requests:
+         cpu: "500m"
+         memory: "4Gi"
+   ```
+
+If you created the OUD servers at any point since July 22 (22.3.1) then these values are the defaults. You can check using the following command:
+
+   ```
+   $ helm show values oud-ds-rs -n oudns
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   ...
+   # memory, cpu parameters for both requests and limits for oud instances
+   resources:
+     requests:
+       memory: "4Gi"
+       cpu: "500m"
+     limits:
+       memory: "8Gi"
+       cpu: "2"
+   ...
+   ```
+
+### Deploy the Kubernetes Metrics Server
+
+Before deploying HPA you must deploy the Kubernetes Metrics Server.
+
+1. Check to see if the Kubernetes Metrics Server is already deployed:
+
+   ```
+   $ kubectl get pods -n kube-system | grep metric
+   ```
+   If a row is returned as follows, then the Kubernetes Metrics Server is deployed and you can move to [Deploy HPA](#deploy-hpa).
+
+   ```
+   metrics-server-d9694457-mf69d     1/1     Running   0     5m13s
+   ```
+
+1. If no rows are returned by the previous command, then the Kubernetes Metrics Server needs to be deployed. Run the following commands to get the `components.yaml`:
+
+   ```
+   $ mkdir $WORKDIR/kubernetes/hpa
+   $ cd $WORKDIR/kubernetes/hpa
+   $ wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
+   ```
+
+1. Deploy the Kubernetes Metrics Server by running the following command:
+
+   ```
+   $ kubectl apply -f components.yaml
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   serviceaccount/metrics-server created
+   clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
+   clusterrole.rbac.authorization.k8s.io/system:metrics-server created
+   rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
+   clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
+   clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
+   service/metrics-server created
+   deployment.apps/metrics-server created
+   apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
+   ```
+
+1. Run the following command to check the Kubernetes Metrics Server is running:
+
+   ```
+   $ kubectl get pods -n kube-system | grep metric
+   ```
+
+   Make sure the pod has a `READY` status of `1/1`:
+
+   ```
+   metrics-server-d9694457-mf69d     1/1     Running   0     39s
+   ```
+
+
+#### Troubleshooting
+
+If the Kubernetes Metrics Server does not reach the `READY 1/1` state, run the following commands:
+
+```
+$ kubectl describe pod -n kube-system
+$ kubectl logs -n kube-system
+```
+
+If you see errors such as:
+
+```
+Readiness probe failed: HTTP probe failed with statuscode: 500
+```
+
+and:
+
+```
+E0907 13:07:50.937308       1 scraper.go:140] "Failed to scrape node" err="Get \"https://X.X.X.X:10250/metrics/resource\": x509: cannot validate certificate for 100.105.18.113 because it doesn't contain any IP SANs" node="worker-node1"
+```
+
+then you may need to install a valid cluster certificate for your Kubernetes cluster.
+
+For testing purposes, you can resolve this issue as follows:
+
+1. Delete the Kubernetes Metrics Server by running the following command:
+
+   ```
+   $ kubectl delete -f $WORKDIR/kubernetes/hpa/components.yaml
+   ```
+
+1. Edit the `$WORKDIR/kubernetes/hpa/components.yaml` and locate the `args:` section. Add `--kubelet-insecure-tls` to the arguments. For example:
+
+   ```
+   spec:
+     containers:
+     - args:
+       - --cert-dir=/tmp
+       - --secure-port=4443
+       - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
+       - --kubelet-use-node-status-port
+       - --kubelet-insecure-tls
+       - --metric-resolution=15s
+       image: registry.k8s.io/metrics-server/metrics-server:v0.6.4
+    ...
+   ```
+
+1. Deploy the Kubernetes Metrics Server using the command:
+
+   ```
+   $ kubectl apply -f components.yaml
+   ```
+
+   Run the following and make sure the READY status shows `1/1`:
+
+   ```
+   $ kubectl get pods -n kube-system | grep metric
+   ```
+
+   The output should look similar to the following:
+
+   ```
+   metrics-server-d9694457-mf69d     1/1     Running   0       40s
+   ```
+
+
+### Deploy HPA
+
+The steps below show how to configure and run an HPA to scale OUD, based on the CPU or memory utilization resource metrics.
+
+Assuming the example OUD configuration in [Create OUD instances](../../create-oud-instances#create-oud-instances), three OUD servers are started by default (`oud-ds-rs-0`, `oud-ds-rs-1`, `oud-ds-rs-2`).
+
+In the following example an HPA resource is created, targeted at the statefulset `oud-ds-rs`. This resource will autoscale OUD servers from a minimum of 3 OUD servers up to 5 OUD servers. Scaling up will occur when the average CPU is consistently over 70%. Scaling down will occur when the average CPU is consistently below 70%.
+
+
+1. Navigate to the `$WORKDIR/kubernetes/hpa` directory and create an `autoscalehpa.yaml` file that contains the following.
+
+   ```
+   #
+   apiVersion: autoscaling/v2
+   kind: HorizontalPodAutoscaler
+   metadata:
+     name: oud-sts-hpa
+     namespace: oudns
+   spec:
+     scaleTargetRef:
+       apiVersion: apps/v1
+       kind: StatefulSet
+       name: oud-ds-rs #statefulset name of oud
+     behavior:
+       scaleDown:
+         stabilizationWindowSeconds: 60
+       scaleUp:
+         stabilizationWindowSeconds: 60
+     minReplicas: 3
+     maxReplicas: 5
+     metrics:
+     - type: Resource
+       resource:
+         name: cpu
+         target:
+           type: Utilization
+           averageUtilization: 70
+   ```
+
+   **Note**: `minReplicas` should match the number of OUD servers started by default. Set `maxReplicas` to the maximum number of OUD servers that can be started.
+
+   **Note**: To find the statefulset name, in this example `oud-ds-rs`, run "`kubectl get statefulset -n oudns`".
+
+   **Note**: For setting HPA based on Memory Metrics, update the metrics block with the following content. Please note we recommend using only CPU or Memory, not both.
+
+   ```
+   metrics:
+   - type: Resource
+     resource:
+       name: memory
+       target:
+         type: Utilization
+         averageUtilization: 70
+   ```
+
+
+1. Run the following command to create the autoscaler:
+
+   ```
+   $ kubectl apply -f autoscalehpa.yaml
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   horizontalpodautoscaler.autoscaling/oud-sts-hpa created
+   ```
+
+1. Verify the status of the autoscaler by running the following:
+
+   ```
+   $ kubectl get hpa -n oudns
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   NAME          REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
+   oud-sts-hpa   StatefulSet/oud-ds-rs   5%/70%    3         5         3          33s
+   ```
+
+   In the example above, this shows that CPU is currently running at 5% for the `oud-sts-hpa`.
+
+
+### Testing HPA
+
+1. Check the current status of the OUD servers:
+
+   ```
+   $ kubectl get pods -n oudns
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   NAME                              READY   STATUS      RESTARTS   AGE
+   oud-ds-rs-0                       1/1     Running     0          5h15m
+   oud-ds-rs-1                       1/1     Running     0          5h9m
+   oud-ds-rs-2                       1/1     Running     0          5h2m
+   oud-pod-cron-job-28242120-bwtcz   0/1     Completed   0          61m
+   oud-pod-cron-job-28242150-qf8fg   0/1     Completed   0          31m
+   oud-pod-cron-job-28242180-q69lm   0/1     Completed   0          92s
+   ```
+
+   In the above, `oud-ds-rs-0`, `oud-ds-rs-1`, and `oud-ds-rs-2` are running.
+
+
+
+1. To test that HPA can scale up the OUD servers, run the following commands:
+
+   ```
+   $ kubectl exec --stdin --tty oud-ds-rs-0 -n oudns -- /bin/bash
+   ```
+
+   This will take you inside a bash shell inside the `oud-ds-rs-0` pod:
+
+   ```
+   [oracle@oud-ds-rs-0 oracle]$
+   ```
+
+   Inside the bash shell, run the following command to increase the load on the CPU:
+
+   ```
+   [oracle@oud-ds-rs-0 oracle]$ dd if=/dev/zero of=/dev/null
+   ```
+
+   This command will continue to run in the foreground.
+
+1. Repeat the step above for the oud-ds-rs-1 pod:
+
+   ```
+   $ kubectl exec --stdin --tty oud-ds-rs-1 -n oudns -- /bin/bash
+   [oracle@oud-ds-rs-1 oracle]$
+   [oracle@oud-ds-rs-1 oracle]$ dd if=/dev/zero of=/dev/null
+   ```
+
+
+
+1. In a command window outside the bash shells, run the following command to view the current CPU usage:
+
+   ```
+   $ kubectl get hpa -n oudns
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   NAME          REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
+   oud-sts-hpa   StatefulSet/oud-ds-rs   125%/70%   3         5         3          5m15s
+   ```
+
+   In the above example the CPU has increased to 125%. As this is above the 70% limit, the autoscaler increases the replicas by starting additional OUD servers.
+
+1. Run the following to see if any more OUD servers are started:
+
+   ```
+   $ kubectl get pods -n oudns
+   ```
+
+   The output will look similar to the following:
+
+   ```
+   NAME                              READY   STATUS      RESTARTS   AGE
+   oud-ds-rs-0                       1/1     Running     0          5h50m
+   oud-ds-rs-1                       1/1     Running     0          5h44m
+   oud-ds-rs-2                       1/1     Running     0          5h37m
+   oud-ds-rs-3                       1/1     Running     0          9m29s
+   oud-ds-rs-4                       1/1     Running     0          5m17s
+   oud-pod-cron-job-28242150-qf8fg   0/1     Completed   0          66m
+   oud-pod-cron-job-28242180-q69lm   0/1     Completed   0          36m
+   oud-pod-cron-job-28242210-kn7sv   0/1     Completed   0          6m28s
+   ```
+
+   In the example above, one more OUD server has started (`oud-ds-rs-4`).
+
+   **Note**: It may take some time for the server to appear and start. Once the server is at `READY` status of `1/1`, the server is started.
+
+
+1. 
To stop the load on the CPU, in both bash shells, issue a Control C, and then exit the bash shell: + + ``` + [oracle@oud-ds-rs-0 oracle]$ dd if=/dev/zero of=/dev/null + ^C + [oracle@oud-ds-rs-0 oracle]$ exit + ``` + +1. Run the following command to view the current CPU usage: + + ``` + $ kubectl get hpa -n oudns + ``` + + The output will look similar to the following: + + ``` + NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE + oud-sts-hpa StatefulSet/oud-ds-rs 4%/70% 3 5 5 40m + ``` + + In the above example CPU has dropped to 4%. As this is below the 70% threshold, you should see the autoscaler scale down the servers: + + ``` + $ kubectl get pods -n oudns + ``` + + The output will look similar to the following: + + ``` + NAME READY STATUS RESTARTS AGE + oud-ds-rs-0 1/1 Running 0 5h54m + oud-ds-rs-1 1/1 Running 0 5h48m + oud-ds-rs-2 1/1 Running 0 5h41m + oud-ds-rs-3 1/1 Running 0 13m + oud-ds-rs-4 1/1 Terminating 0 8m27s + oud-pod-cron-job-28242150-qf8fg 0/1 Completed 0 70m + oud-pod-cron-job-28242180-q69lm 0/1 Completed 0 40m + oud-pod-cron-job-28242210-kn7sv 0/1 Completed 0 10m + ``` + + Eventually, the extra server will disappear: + + ``` + NAME READY STATUS RESTARTS AGE + oud-ds-rs-0 1/1 Running 0 5h57m + oud-ds-rs-1 1/1 Running 0 5h51m + oud-ds-rs-2 1/1 Running 0 5h44m + oud-ds-rs-3 1/1 Running 0 16m + oud-pod-cron-job-28242150-qf8fg 0/1 Completed 0 73m + oud-pod-cron-job-28242180-q69lm 0/1 Completed 0 43m + oud-pod-cron-job-28242210-kn7sv 0/1 Completed 0 13m + ``` + + +### Delete the HPA + +1. If you need to delete the HPA, you can do so by running the following command: + + ``` + $ cd $WORKDIR/kubernetes/hpa + $ kubectl delete -f autoscalehpa.yaml + ``` + +### Other considerations + ++ If HPA is deployed and you need to upgrade the OUD image, then you must delete the HPA before upgrading. Once the upgrade is successful you can deploy HPA again. ++ If you choose to scale up or scale down an OUD server manually as per [Scaling Up/Down OUD Pods](../scaling-up-down), then it is recommended to delete the HPA before doing so. + + + + + + + + + + + diff --git a/docs-source/content/idm-products/oud/manage-oud-containers/scaling-up-down.md b/docs-source/content/idm-products/oud/manage-oud-containers/scaling-up-down.md index 93965f070..d9aa828a5 100644 --- a/docs-source/content/idm-products/oud/manage-oud-containers/scaling-up-down.md +++ b/docs-source/content/idm-products/oud/manage-oud-containers/scaling-up-down.md @@ -7,6 +7,8 @@ description: "Describes the steps for scaling up/down for OUD pods." This section describes how to increase or decrease the number of OUD pods in the Kubernetes deployment. +**Note**: The instructions below are for scaling servers up or down manually. If you wish to use autoscaling, see [Kubernetes Horizontal Pod Autoscaler](../hpa). Please note, if you have enabled autoscaling, it is recommended to delete the autoscaler before running the commands below. + ### View existing OUD pods diff --git a/docs-source/content/idm-products/oud/patch-and-upgrade/index.md b/docs-source/content/idm-products/oud/patch-and-upgrade/index.md index 03cbed809..6d8de38bf 100644 --- a/docs-source/content/idm-products/oud/patch-and-upgrade/index.md +++ b/docs-source/content/idm-products/oud/patch-and-upgrade/index.md @@ -7,16 +7,18 @@ description= "This document provides steps to patch or upgrade an OUD image" In this section you learn how to upgrade OUD from a previous version. Follow the section relevant to the version you are upgrading from. -1. 
[Upgrading to July 23 (23.3.1) from April 23 (23.2.1)](#upgrading-to-july-23-2331-from-april-23-2321)
-1. [Upgrading to July 23 (23.3.1) from October 22 (22.4.1) or January 23 (23.1.1)](#upgrading-to-july-23-2331-from-october-22-2241-or-january-23-2311)
-1. [Upgrading to July 23 (23.3.1) from July 22 (22.3.1)](#upgrading-to-july-23-2331-from-july-22-2231)
-1. [Upgrading to July 23 (23.3.1) from releases prior to July 22 (22.3.1)](#upgrading-to-july-23-2331-from-releases-prior-to-july-22-2231)
+1. [Upgrading to October 23 (23.4.1) from April 23 (23.2.1) or later](#upgrading-to-october-23-2341-from-april-23-2321-or-later)
+1. [Upgrading to October 23 (23.4.1) from October 22 (22.4.1) or January 23 (23.1.1)](#upgrading-to-october-23-2341-from-october-22-2241-or-january-23-2311)
+1. [Upgrading to October 23 (23.4.1) from July 22 (22.3.1)](#upgrading-to-october-23-2341-from-july-22-2231)
+1. [Upgrading to October 23 (23.4.1) from releases prior to July 22 (22.3.1)](#upgrading-to-october-23-2341-from-releases-prior-to-july-22-2231)
 1. [Upgrading Elasticsearch and Kibana](#upgrading-elasticsearch-and-kibana)

+**Note**: If you are on July 22 (22.3.1) or later and have the [Kubernetes Horizontal Pod Autoscaler](../manage-oud-containers/hpa) (HPA) enabled, you must disable HPA before performing the steps in the relevant upgrade section. See [Delete the HPA](../manage-oud-containers/hpa#delete-the-hpa).

-### Upgrading to July 23 (23.3.1) from April 23 (23.2.1)

-The instructions below are for upgrading from April 23 ([23.2.1](https://github.com/oracle/fmw-kubernetes/releases)) to July 23 ([23.3.1](https://github.com/oracle/fmw-kubernetes/releases)).
+### Upgrading to October 23 (23.4.1) from April 23 (23.2.1) or later
+
+The instructions below are for upgrading from April 23 ([23.2.1](https://github.com/oracle/fmw-kubernetes/releases)) or later to October 23 ([23.4.1](https://github.com/oracle/fmw-kubernetes/releases)).

 **Note**: If you are not using Oracle Container Registry or your own container registry, then you must first load the new container image on all nodes in your Kubernetes cluster.
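+
+How you load the image depends on your container engine. A minimal sketch, assuming you have downloaded the October 23 image as a tar archive (the filename below is hypothetical) and copied it to each node:
+
+```bash
+# On each master and worker node, load the image with your container engine
+$ docker load -i oud-cpu-12.2.1.4-jdk8-ol7-new.tar      # Docker
+$ podman load -i oud-cpu-12.2.1.4-jdk8-ol7-new.tar      # CRI-O/Podman
+
+# Verify the image is now available locally
+$ docker images | grep oud
+```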
@@ -162,7 +164,7 @@ The instructions below are for upgrading from October 22 ([22.4.1](https://githu $ helm upgrade -n oudns --set replicaCount=1 oud-ds-rs oud-ds-rs --reuse-values ``` - **Note**: The `$WORKDIR` is the directory for your existing release, not July 23. + **Note**: The `$WORKDIR` is the directory for your existing release, not October 23. The output will be similar to the following: @@ -225,7 +227,7 @@ The instructions below are for upgrading from October 22 ([22.4.1](https://githu -#### Setup the July 23 code repository to deploy OUD +#### Setup the October 23 code repository to deploy OUD 1. Create a working directory on the persistent volume to setup the latest source code: @@ -294,7 +296,7 @@ The instructions below are for upgrading from October 22 ([22.4.1](https://githu ```yaml image: repository: container-registry.oracle.com/middleware/oud_cpu - tag: 12.2.1.4-jdk8-ol7- + tag: 12.2.1.4-jdk8-ol7- pullPolicy: IfNotPresent imagePullSecrets: - name: orclcred @@ -386,7 +388,7 @@ The instructions below are for upgrading from October 22 ([22.4.1](https://githu ```bash ... - Image: container-registry.oracle.com/middleware/oud_cpu:12.2.1.4-jdk8-ol7- + Image: container-registry.oracle.com/middleware/oud_cpu:12.2.1.4-jdk8-ol7- Image ID: container-registry.oracle.com/middleware/oud_cpu@sha256: ``` @@ -452,15 +454,15 @@ The instructions below are for upgrading from October 22 ([22.4.1](https://githu ``` -### Upgrading to July 23 (23.3.1) from July 22 (22.3.1) +### Upgrading to October 23 (23.4.1) from July 22 (22.3.1) -The instructions below are for upgrading from July 22 ([22.3.1](https://github.com/oracle/fmw-kubernetes/releases)) to July 23 ([23.3.1](https://github.com/oracle/fmw-kubernetes/releases)). +The instructions below are for upgrading from July 22 ([22.3.1](https://github.com/oracle/fmw-kubernetes/releases)) to October 23 ([23.4.1](https://github.com/oracle/fmw-kubernetes/releases)). -1. Follow [Upgrading to July 23 (23.3.1) from October 22 (22.4.1) or January 23 (23.1.1)](#upgrading-to-july-23-2331-from-october-22-2241-or-january-23-2311) to upgrade the image. +1. Follow [Upgrading to October 23 (23.4.1) from October 22 (22.4.1) or January 23 (23.1.1)](#upgrading-to-october-23-2341-from-october-22-2241-or-january-23-2311) to upgrade the image. 1. Once the image is upgraded, follow [Upgrading Elasticsearch and Kibana](#upgrading-elasticsearch-and-kibana). -### Upgrading to July 23 (23.3.1) from releases prior to July 22 (22.3.1) +### Upgrading to October 23 (23.4.1) from releases prior to July 22 (22.3.1) In releases prior to July 22 ([22.3.1](https://github.com/oracle/fmw-kubernetes/releases)) OUD used pod based deployment. From July 22 ([22.3.1](https://github.com/oracle/fmw-kubernetes/releases)) onwards OUD is deployed using [StatefulSets](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/). @@ -615,7 +617,7 @@ If you are upgrading from a release prior to July 22 ([22.3.1](https://github.co ```yaml image: repository: container-registry.oracle.com/middleware/oud_cpu - tag: 12.2.1.4-jdk8-ol7- + tag: 12.2.1.4-jdk8-ol7- pullPolicy: IfNotPresent imagePullSecrets: - name: orclcred @@ -689,7 +691,7 @@ If you are upgrading from a release prior to July 22 ([22.3.1](https://github.co This section shows how to upgrade Elasticsearch and Kibana. From October 22 (22.4.1) onwards, OUD logs should be stored on a centralized Elasticsearch and Kibana stack. 
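+
+Before proceeding, you can check whether a local Elasticsearch/Kibana deployment exists. A minimal sketch, assuming any local ELK components were deployed into the same `oudns` namespace as OUD:
+
+```bash
+# Look for locally deployed Elasticsearch, Kibana, or Logstash resources
+$ kubectl get pods,deployments,services -n oudns | grep -iE 'elasticsearch|kibana|logstash'
+```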
-***Note***: This section should only be followed if upgrading from July 22 (22.3.1) or earlier to July 23 (23.3.1). If you are upgrading from October 22 or later to July 23 do not follow this section. +***Note***: This section should only be followed if upgrading from July 22 (22.3.1) or earlier to October 23 (23.4.1). If you are upgrading from October 22 or later to October 23 do not follow this section. #### Undeploy Elasticsearch and Kibana @@ -698,7 +700,7 @@ From October 22 (22.4.1) onwards, OUD logs should be stored on a centralized Ela Deployments prior to October 22 (22.4.1) used local deployments of Elasticsearch and Kibana. -If you are upgrading from July 22 (22.3.1) or earlier, to July 23 (23.3.1), you must first undeploy Elasticsearch and Kibana using the steps below: +If you are upgrading from July 22 (22.3.1) or earlier, to October 23 (23.4.1), you must first undeploy Elasticsearch and Kibana using the steps below: 1. Navigate to the `$WORKDIR/kubernetes/helm` directory and create a `logging-override-values-uninstall.yaml` with the following: diff --git a/docs-source/content/idm-products/oud/prepare-your-environment/_index.md b/docs-source/content/idm-products/oud/prepare-your-environment/_index.md index 505a7a0c9..2d5af1e45 100644 --- a/docs-source/content/idm-products/oud/prepare-your-environment/_index.md +++ b/docs-source/content/idm-products/oud/prepare-your-environment/_index.md @@ -25,23 +25,23 @@ As per the [Prerequisites](../prerequisites/#system-requirements-for-oracle-unif ``` NAME STATUS ROLES AGE VERSION - node/worker-node1 Ready 17h v1.24.5+1.el7 - node/worker-node2 Ready 17h v1.24.5+1.el7 - node/master-node Ready control-plane,master 23h v1.24.5+1.el7 - - NAME READY STATUS RESTARTS AGE - pod/coredns-66bff467f8-slxdq 1/1 Running 1 67d - pod/coredns-66bff467f8-v77qt 1/1 Running 1 67d - pod/etcd-10.89.73.42 1/1 Running 1 67d - pod/kube-apiserver-10.89.73.42 1/1 Running 1 67d - pod/kube-controller-manager-10.89.73.42 1/1 Running 27 67d - pod/kube-flannel-ds-amd64-r2m8r 1/1 Running 2 48d - pod/kube-flannel-ds-amd64-rdhrf 1/1 Running 2 6d1h - pod/kube-flannel-ds-amd64-vpcbj 1/1 Running 3 66d - pod/kube-proxy-jtcxm 1/1 Running 1 67d - pod/kube-proxy-swfmm 1/1 Running 1 66d - pod/kube-proxy-w6x6t 1/1 Running 1 66d - pod/kube-scheduler-10.89.73.42 1/1 Running 29 67d + node/worker-node1 Ready 17h v1.26.6+1.el8 + node/worker-node2 Ready 17h v1.26.6+1.el8 + node/master-node Ready control-plane,master 23h v1.26.6+1.el8 + + NAME READY STATUS RESTARTS AGE + pod/coredns-66bff467f8-fnhbq 1/1 Running 0 23h + pod/coredns-66bff467f8-xtc8k 1/1 Running 0 23h + pod/etcd-master 1/1 Running 0 21h + pod/kube-apiserver-master-node 1/1 Running 0 21h + pod/kube-controller-manager-master-node 1/1 Running 0 21h + pod/kube-flannel-ds-amd64-lxsfw 1/1 Running 0 17h + pod/kube-flannel-ds-amd64-pqrqr 1/1 Running 0 17h + pod/kube-flannel-ds-amd64-wj5nh 1/1 Running 0 17h + pod/kube-proxy-2kxv2 1/1 Running 0 17h + pod/kube-proxy-82vvj 1/1 Running 0 17h + pod/kube-proxy-nrgw9 1/1 Running 0 23h + pod/kube-scheduler-master 1/1 Running 0 21h ``` ### Obtain the OUD container image @@ -54,7 +54,7 @@ The OUD Kubernetes deployment requires access to an OUD container image. The ima #### Prebuilt OUD container image -The prebuilt OUD April 2023 container image can be downloaded from [Oracle Container Registry](https://container-registry.oracle.com). 
This image is prebuilt by Oracle and includes Oracle Unified Directory 12.2.1.4.0, the April Patch Set Update (PSU) and other fixes released with the Critical Patch Update (CPU) program..
+The prebuilt OUD October 2023 container image can be downloaded from [Oracle Container Registry](https://container-registry.oracle.com). This image is prebuilt by Oracle and includes Oracle Unified Directory 12.2.1.4.0, the October Patch Set Update (PSU) and other fixes released with the Critical Patch Update (CPU) program.
 **Note**: Before using this image you must login to [Oracle Container Registry](https://container-registry.oracle.com), navigate to `Middleware` > `oud_cpu` and accept the license agreement.
@@ -78,6 +78,8 @@ You can use an image built with WebLogic Image Tool in the following ways:
 ### Create a persistent volume directory
+**Note**: This section should not be followed if using block storage.
+
 As referenced in [Prerequisites](../prerequisites) the nodes in the Kubernetes cluster must have access to a persistent volume such as a Network File System (NFS) mount or a shared file system.
 In this example `/scratch/shared/` is a shared directory accessible from all nodes.
@@ -127,12 +129,12 @@ In this example `/scratch/shared/` is a shared directory accessible from all nod
 ### Setup the code repository to deploy OUD
-Oracle Unified Directory deployment on Kubernetes leverages deployment scripts provided by Oracle for creating Oracle Unified Directory containers using the Helm charts provided. To deploy Oracle Unified Directory on Kubernetes you should set up the deployment scripts on the persistent volume as below:
+Oracle Unified Directory deployment on Kubernetes leverages deployment scripts provided by Oracle for creating Oracle Unified Directory containers using the Helm charts provided. To deploy Oracle Unified Directory on Kubernetes you should set up the deployment scripts as below:
-1. Create a working directory on the persistent volume to setup the source code.
+1. Create a working directory to set up the source code.
 ```bash
-   $ mkdir /
+   $ mkdir
 ```
   For example:
@@ -144,7 +146,7 @@ Oracle Unified Directory deployment on Kubernetes leverages deployment scripts p
 1. Download the latest OUD deployment scripts from the OUD repository:
 ```bash
-   $ cd /
+   $ cd
    $ git clone https://github.com/oracle/fmw-kubernetes.git
 ```
diff --git a/docs-source/content/idm-products/oud/prerequisites/_index.md b/docs-source/content/idm-products/oud/prerequisites/_index.md
index 38eb36743..7df8f8a28 100644
--- a/docs-source/content/idm-products/oud/prerequisites/_index.md
+++ b/docs-source/content/idm-products/oud/prerequisites/_index.md
@@ -16,7 +16,7 @@ This document provides information about the system requirements for deploying a
 * An installation of Helm is required on the Kubernetes cluster. Helm is used to create and deploy the necessary resources on the Kubernetes cluster.
 * A supported container engine must be installed and running on the Kubernetes cluster.
 * The Kubernetes cluster and container engine must meet the minimum version requirements outlined in document ID 2723908.1 on [My Oracle Support](https://support.oracle.com).
- * The nodes in the Kubernetes cluster must have access to a persistent volume such as a Network File System (NFS) mount or a shared file system.
+ * The nodes in the Kubernetes cluster must have access to a persistent volume such as a Network File System (NFS) mount, a shared file system, or block storage.
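As an illustration of this prerequisite only, a minimal NFS-backed PersistentVolume might look as follows. The server address, path, capacity, and resource name are placeholders, not values mandated by this guide:

```bash
# Hypothetical NFS-backed persistent volume; adjust server, path, and size to your environment.
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: oud-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.example.com
    path: /scratch/shared
EOF
```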
If you intend to use assured replication in OUD, you must have a persistent volume available that uses a Network File System (NFS) mount, or a shared file system for the config volume. See [Enabling Assured Replication](../create-oud-instances/#enabling-assured-replication-optional).
 **Note**: This documentation does not tell you how to install a Kubernetes cluster, Helm, the container engine, or how to push container images to a container registry.
-Please refer to your vendor specific documentation for this information.
\ No newline at end of file
+Please refer to your vendor-specific documentation for this information. Also see [Getting Started](../introduction#getting-started).
\ No newline at end of file
diff --git a/docs-source/content/idm-products/oud/release-notes/_index.md b/docs-source/content/idm-products/oud/release-notes/_index.md
index bc6face5e..a57765e95 100644
--- a/docs-source/content/idm-products/oud/release-notes/_index.md
+++ b/docs-source/content/idm-products/oud/release-notes/_index.md
@@ -10,6 +10,13 @@ Review the latest changes and known issues for Oracle Unified Directory on Kuber
 | Date | Version | Change |
 | --- | --- | --- |
+| October, 2023 | 23.4.1 | Supports Oracle Unified Directory 12.2.1.4 domain deployment using the October 2023 container image which contains the October Patch Set Update (PSU) and other fixes released with the Critical Patch Update (CPU) program.|
+| | | + Support for Block Device Storage. See [Create OUD Instances](../create-oud-instances#using-a-yaml-file).|
+| | | + Ability to set resource requests and limits for CPU and memory on an OUD instance. See [Create OUD Instances](../create-oud-instances#using-a-yaml-file). |
+| | | + Support for Assured Replication. See [Create OUD Instances](../create-oud-instances#using-a-yaml-file).|
+| | | + Support for the Kubernetes Horizontal Pod Autoscaler (HPA). See [Kubernetes Horizontal Pod Autoscaler](../manage-oud-containers/hpa).|
+| | | + Supports integration options such as Enterprise User Security (EUS), E-Business Suite (EBS), and Directory Integration Platform (DIP).
+| | | To upgrade to October 23 (23.4.1) you must follow the instructions in [Patch and Upgrade](../patch-and-upgrade).|
 | July, 2023 | 23.3.1 | Supports Oracle Unified Directory 12.2.1.4 domain deployment using the July 2023 container image which contains the July Patch Set Update (PSU) and other fixes released with the Critical Patch Update (CPU) program.|
 | | | To upgrade to July 23 (23.3.1) you must follow the instructions in [Patch and Upgrade](../patch-and-upgrade).|
 | April, 2023 | 23.2.1 | Supports Oracle Unified Directory 12.2.1.4 domain deployment using the April 2023 container image which contains the April Patch Set Update (PSU) and other fixes released with the Critical Patch Update (CPU) program.|
diff --git a/docs-source/content/idm-products/oudsm/create-oudsm-instances/_index.md b/docs-source/content/idm-products/oudsm/create-oudsm-instances/_index.md
index 54fb9151b..36c3969f5 100644
--- a/docs-source/content/idm-products/oudsm/create-oudsm-instances/_index.md
+++ b/docs-source/content/idm-products/oudsm/create-oudsm-instances/_index.md
@@ -198,7 +198,7 @@ You can create OUDSM instances using one of the following methods:
 ```yaml
 image:
   repository: container-registry.oracle.com/middleware/oudsm_cpu
-  tag: 12.2.1.4-jdk8-ol7-
+  tag: 12.2.1.4-jdk8-ol7-
   pullPolicy: IfNotPresent
 imagePullSecrets:
   - name: orclcred
@@ -272,7 +272,7 @@ You can create OUDSM instances using one of the following methods:
 ```bash
 $ helm install --namespace oudsmns \
- --set oudsm.adminUser=weblogic,oudsm.adminPass=,persistence.filesystem.hostPath.path=/scratch/shared/oudsm_user_projects,image.repository=container-registry.oracle.com/middleware/oudsm_cpu,image.tag=12.2.1.4-jdk8-ol7- \
+ --set oudsm.adminUser=weblogic,oudsm.adminPass=,persistence.filesystem.hostPath.path=/scratch/shared/oudsm_user_projects,image.repository=container-registry.oracle.com/middleware/oudsm_cpu,image.tag=12.2.1.4-jdk8-ol7- \
 --set imagePullSecrets[0].name="orclcred" \
 oudsm oudsm
 ```
@@ -344,7 +344,7 @@ ingress.extensions/oudsm-ingress-nginx oudsm-1,oudsm-2,oudsm + 1 more... 100
 $ kubectl logs oudsm-1 -n oudsmns
 ```
-**Note** : If the OUD deployment fails additionally refer to [Troubleshooting](../troubleshooting) for instructions on how describe the failing pod(s).
+**Note** : If the OUDSM deployment fails, additionally refer to [Troubleshooting](../troubleshooting) for instructions on how to describe the failing pod(s).
 Once the problem is identified follow [Undeploy an OUDSM deployment](#undeploy-an-oudsm-deployment) to clean down the deployment before deploying again.
@@ -470,47 +470,12 @@ The following table lists the configurable parameters of the 'oudsm' chart and t
 | oudsm.adminUser | Weblogic Administration User | weblogic |
 | oudsm.adminPass | Password for Weblogic Administration User | |
 | oudsm.startupTime | Expected startup time. After specified seconds readinessProbe would start | 900 |
-| oudsm.livenessProbeInitialDelay | Paramter to decide livenessProbe initialDelaySeconds | 1200
-| elk.elasticsearch.enabled | If enabled it will create the elastic search statefulset deployment | false |
-| elk.elasticsearch.image.repository | Elastic Search Image name/Registry/Repository .
Based on this elastic search instances will be created | docker.elastic.co/elasticsearch/elasticsearch | -| elk.elasticsearch.image.tag | Elastic Search Image tag .Based on this, image parameter would be configured for Elastic Search pods/instances | 6.4.3 | -| elk.elasticsearch.image.pullPolicy | policy to pull the image | IfnotPresent | -| elk.elasticsearch.esreplicas | Number of Elastic search Instances will be created | 3 | -| elk.elasticsearch.minimumMasterNodes | The value for discovery.zen.minimum_master_nodes. Should be set to (esreplicas / 2) + 1. | 2 | -| elk.elasticsearch.esJAVAOpts | Java options for Elasticsearch. This is where you should configure the jvm heap size | -Xms512m -Xmx512m | -| elk.elasticsearch.sysctlVmMaxMapCount | Sets the sysctl vm.max_map_count needed for Elasticsearch | 262144 | -| elk.elasticsearch.resources.requests.cpu | cpu resources requested for the elastic search | 100m | -| elk.elasticsearch.resources.limits.cpu | total cpu limits that are configures for the elastic search | 1000m | -| elk.elasticsearch.esService.type | Type of Service to be created for elastic search | ClusterIP | -| elk.elasticsearch.esService.lbrtype | Type of load balancer Service to be created for elastic search | ClusterIP | -| elk.kibana.enabled | If enabled it will create a kibana deployment | false | -| elk.kibana.image.repository | Kibana Image Registry/Repository and name. Based on this Kibana instance will be created | docker.elastic.co/kibana/kibana | -| elk.kibana.image.tag | Kibana Image tag. Based on this, Image parameter would be configured. | 6.4.3 | -| elk.kibana.image.pullPolicy | policy to pull the image | IfnotPresent | -| elk.kibana.kibanaReplicas | Number of Kibana instances will be created | 1 | -| elk.kibana.service.tye | Type of service to be created | NodePort | -| elk.kibana.service.targetPort | Port on which the kibana will be accessed | 5601 | -| elk.kibana.service.nodePort | nodePort is the port on which kibana service will be accessed from outside | 31119 | -| elk.logstash.enabled | If enabled it will create a logstash deployment | false | -| elk.logstash.image.repository | logstash Image Registry/Repository and name. Based on this logstash instance will be created | logstash | -| elk.logstash.image.tag | logstash Image tag. Based on this, Image parameter would be configured. | 6.6.0 | -| elk.logstash.image.pullPolicy | policy to pull the image | IfnotPresent | -| elk.logstash.containerPort | Port on which the logstash container will be running | 5044 | -| elk.logstash.service.tye | Type of service to be created | NodePort | -| elk.logstash.service.targetPort | Port on which the logstash will be accessed | 9600 | -| elk.logstash.service.nodePort | nodePort is the port on which logstash service will be accessed from outside | 32222 | -| elk.logstash.logstashConfigMap | Provide the configmap name which is already created with the logstash conf. if empty default logstash configmap will be created and used | | -| elk.elkPorts.rest | Port for REST | 9200 | -| elk.elkPorts.internode | port used for communication between the nodes | 9300 | -| elk.busybox.image | busy box image name. Used for initcontianers | busybox | -| elk.elkVolume.enabled | If enabled, it will use the persistent volume. if value is false, PV and pods would be using the default emptyDir mount volume. 
| true |
-| elk.elkVolume.pvname | pvname to use an already created Persistent Volume , If blank will use the default name | oudsm-< fullname >-espv |
-| elk.elkVolume.type | supported values: either filesystem or networkstorage or custom | filesystem |
-| elk.elkVolume.filesystem.hostPath.path | The path location mentioned should be created and accessible from the local host provided with necessary privileges for the user. | /scratch/shared/oud_elk/data |
-| elk.elkVolume.networkstorage.nfs.path | Path of NFS Share location | /scratch/shared/oudsm_elk/data |
-| elk.elkVolume.networkstorage.nfs.server | IP or hostname of NFS Server | 0.0.0.0 |
-| elk.elkVolume.custom.* | Based on values/data, YAML content would be included in PersistenceVolume Object | |
-| elk.elkVolume.accessMode | Specifies the access mode of the location provided | ReadWriteMany |
-| elk.elkVolume.size | Specifies the size of the storage | 20Gi |
-| elk.elkVolume.storageClass | Specifies the storageclass of the persistence volume. | elk |
-| elk.elkVolume.annotations | specifies any annotations that will be used| { } |
+| oudsm.livenessProbeInitialDelay | Parameter to decide livenessProbe initialDelaySeconds | 1200 |
+| elk.logStashImage | The version of logstash you want to install | logstash:8.3.1 |
+| elk.sslenabled | If SSL is enabled for ELK set the value to true, or if NON-SSL set to false. This value must be lowercase | true |
+| elk.eshosts | The URL for sending logs to Elasticsearch. HTTP if NON-SSL is used | https://elasticsearch.example.com:9200 |
+| elk.esuser | The name of the user for logstash to access Elasticsearch | logstash_internal |
+| elk.espassword | The password for ELK_USER | password |
+| elk.esapikey | The API key details | apikey |
+| elk.esindex | The log name | oudsmlogs-00001 |
+| elk.imagePullSecrets | secret to be used for pulling logstash image | dockercred |
\ No newline at end of file
diff --git a/docs-source/content/idm-products/oudsm/introduction/_index.md b/docs-source/content/idm-products/oudsm/introduction/_index.md
index 18ba7e16d..8de49e256 100644
--- a/docs-source/content/idm-products/oudsm/introduction/_index.md
+++ b/docs-source/content/idm-products/oudsm/introduction/_index.md
@@ -13,7 +13,7 @@ Follow the instructions in this guide to set up Oracle Unified Directory Service
 ### Current production release
-The current production release for the Oracle Unified Directory 12c PS4 (12.2.1.4.0) deployment on Kubernetes is [23.3.1](https://github.com/oracle/fmw-kubernetes/releases).
+The current production release for the Oracle Unified Directory 12c PS4 (12.2.1.4.0) deployment on Kubernetes is [23.4.1](https://github.com/oracle/fmw-kubernetes/releases).
 ### Recent changes and known issues
@@ -21,14 +21,21 @@ See the [Release Notes](../release-notes) for recent changes and known issues fo
 ### Getting started
-This documentation explains how to configure OUDSM on a Kubernetes cluster where no other Oracle Identity Management products will be deployed. For detailed information about this type of deployment, start at [Prerequisites](../prerequisites) and follow this documentation sequentially.
+This documentation explains how to configure OUDSM on a Kubernetes cluster where no other Oracle Identity Management products will be deployed. For detailed information about this type of deployment, start at [Prerequisites](../prerequisites) and follow this documentation sequentially.
Please note that this documentation does not explain how to configure a Kubernetes cluster given the product can be deployed on any compliant Kubernetes vendor. -If performing an Enterprise Deployment, refer to the [Enterprise Deployment Guide for Oracle Identity and Access Management in a Kubernetes Cluster](https://docs.oracle.com/en/middleware/fusion-middleware/12.2.1.4/ikedg/index.html) instead. +If you are deploying multiple Oracle Identity Management products on the same Kubernetes cluster, then you must follow the Enterprise Deployment Guide outlined in [Enterprise Deployments](../../enterprise-deployments). +Please note, you also have the option to follow the Enterprise Deployment Guide even if you are only installing OUDSM and no other Oracle Identity Management products. + +**Note**: If you need to understand how to configure a Kubernetes cluster ready for an Oracle Unified Directory Services Manager deployment, you should follow the Enterprise Deployment Guide referenced in [Enterprise Deployments](../../enterprise-deployments). The [Enterprise Deployment Automation](../../enterprise-deployments/enterprise-deployment-automation) section also contains details on automation scripts that can: + + + Automate the creation of a Kubernetes cluster on Oracle Cloud Infrastructure (OCI), ready for the deployment of Oracle Identity Management products. + + Automate the deployment of Oracle Identity Management products on any compliant Kubernetes cluster. ### Documentation for earlier releases To view documentation for an earlier release, see: +* [Version 23.3.1](https://oracle.github.io/fmw-kubernetes/23.3.1/idm-products/oudsm/) * [Version 23.2.1](https://oracle.github.io/fmw-kubernetes/23.2.1/idm-products/oudsm/) * [Version 23.1.1](https://oracle.github.io/fmw-kubernetes/23.1.1/idm-products/oudsm/) * [Version 22.4.1](https://oracle.github.io/fmw-kubernetes/22.4.1/oudsm/) diff --git a/docs-source/content/idm-products/oudsm/patch-and-upgrade/patch-an-oudsm-image.md b/docs-source/content/idm-products/oudsm/patch-and-upgrade/patch-an-oudsm-image.md index 9b5d09abc..76cb14242 100644 --- a/docs-source/content/idm-products/oudsm/patch-and-upgrade/patch-an-oudsm-image.md +++ b/docs-source/content/idm-products/oudsm/patch-and-upgrade/patch-an-oudsm-image.md @@ -38,7 +38,7 @@ You can update the deployment with a new OUDSM container image using one of the ```yaml image: repository: container-registry.oracle.com/middleware/oudsm_cpu - tag: 12.2.1.4-jdk8-ol7- + tag: 12.2.1.4-jdk8-ol7- imagePullSecrets: - name: orclcred ``` @@ -89,7 +89,7 @@ You can update the deployment with a new OUDSM container image using one of the ```bash $ helm upgrade --namespace oudsmns \ - --set image.repository=container-registry.oracle.com/middleware/oudsm_cpu,image.tag=12.2.1.4-jdk8-ol7- \ + --set image.repository=container-registry.oracle.com/middleware/oudsm_cpu,image.tag=12.2.1.4-jdk8-ol7- \ --set imagePullSecrets[0].name="orclcred" \ oudsm oudsm --reuse-values ``` @@ -163,6 +163,6 @@ You can update the deployment with a new OUDSM container image using one of the ---- ------ ---- ---- ------- Normal Killing 22m kubelet Container oudsm definition changed, will be restarted Normal Created 21m (x2 over 61m) kubelet Created container oudsm - Normal Pulling 21m kubelet Container image "container-registry.oracle.com/middleware/oudsm_cpu:12.2.1.4-jdk8-ol7-" + Normal Pulling 21m kubelet Container image "container-registry.oracle.com/middleware/oudsm_cpu:12.2.1.4-jdk8-ol7-" Normal Started 21m (x2 over 61m) kubelet Started 
container oudsm ``` diff --git a/docs-source/content/idm-products/oudsm/patch-and-upgrade/upgrade-elk.md b/docs-source/content/idm-products/oudsm/patch-and-upgrade/upgrade-elk.md index e891301f2..fa4f7b821 100644 --- a/docs-source/content/idm-products/oudsm/patch-and-upgrade/upgrade-elk.md +++ b/docs-source/content/idm-products/oudsm/patch-and-upgrade/upgrade-elk.md @@ -18,7 +18,7 @@ Download the latest code repository as follows: For example: ```bash - $ mkdir /scratch/OUDSMK8SApril23 + $ mkdir /scratch/OUDSMK8SOctober23 ``` 1. Download the latest OUDSM deployment scripts from the OUDSM repository. @@ -31,7 +31,7 @@ Download the latest code repository as follows: For example: ```bash - $ cd /scratch/OUDSMK8SApril23 + $ cd /scratch/OUDSMK8SOctober23 $ git clone https://github.com/oracle/fmw-kubernetes.git ``` @@ -44,7 +44,7 @@ Download the latest code repository as follows: For example: ```bash - $ export WORKDIR=/scratch/OUDSMK8SApril23/fmw-kubernetes/OracleUnifiedDirectorySM + $ export WORKDIR=/scratch/OUDSMK8SOctober23/fmw-kubernetes/OracleUnifiedDirectorySM ``` ### Undeploy Elasticsearch and Kibana @@ -53,7 +53,7 @@ From October 22 (22.4.1) onwards, OUDSM logs should be stored on a centralized E Deployments prior to October 22 (22.4.1) used local deployments of Elasticsearch and Kibana. -If you are upgrading from July 22 (22.3.1) or earlier, to April 23 (23.2.1), you must first undeploy Elasticsearch and Kibana using the steps below: +If you are upgrading from July 22 (22.3.1) or earlier, to October 23 (23.4.1), you must first undeploy Elasticsearch and Kibana using the steps below: 1. Navigate to the `$WORKDIR/kubernetes/helm` directory and create a `logging-override-values-uninstall.yaml` with the following: diff --git a/docs-source/content/idm-products/oudsm/prepare-your-environment/_index.md b/docs-source/content/idm-products/oudsm/prepare-your-environment/_index.md index f89900de6..ad2251808 100644 --- a/docs-source/content/idm-products/oudsm/prepare-your-environment/_index.md +++ b/docs-source/content/idm-products/oudsm/prepare-your-environment/_index.md @@ -24,24 +24,24 @@ As per the [Prerequisites](../prerequisites/#system-requirements-for-oracle-unif The output will look similar to the following: ``` - NAME STATUS ROLES AGE VERSION - node/worker-node1 Ready 17h v1.24.5+1.el7 - node/worker-node2 Ready 17h v1.24.5+1.el7 - node/master-node Ready control-plane,master 23h v1.24.5+1.el7 - - NAME READY STATUS RESTARTS AGE - pod/coredns-66bff467f8-slxdq 1/1 Running 1 67d - pod/coredns-66bff467f8-v77qt 1/1 Running 1 67d - pod/etcd-10.89.73.42 1/1 Running 1 67d - pod/kube-apiserver-10.89.73.42 1/1 Running 1 67d - pod/kube-controller-manager-10.89.73.42 1/1 Running 27 67d - pod/kube-flannel-ds-amd64-r2m8r 1/1 Running 2 48d - pod/kube-flannel-ds-amd64-rdhrf 1/1 Running 2 6d1h - pod/kube-flannel-ds-amd64-vpcbj 1/1 Running 3 66d - pod/kube-proxy-jtcxm 1/1 Running 1 67d - pod/kube-proxy-swfmm 1/1 Running 1 66d - pod/kube-proxy-w6x6t 1/1 Running 1 66d - pod/kube-scheduler-10.89.73.42 1/1 Running 29 67d + NAME STATUS ROLES AGE VERSION + node/worker-node1 Ready 17h v1.26.6+1.el8 + node/worker-node2 Ready 17h v1.26.6+1.el8 + node/master-node Ready master 23h v1.26.6+1.el8 + + NAME READY STATUS RESTARTS AGE + pod/coredns-66bff467f8-fnhbq 1/1 Running 0 23h + pod/coredns-66bff467f8-xtc8k 1/1 Running 0 23h + pod/etcd-master 1/1 Running 0 21h + pod/kube-apiserver-master-node 1/1 Running 0 21h + pod/kube-controller-manager-master-node 1/1 Running 0 21h + pod/kube-flannel-ds-amd64-lxsfw 1/1 Running 0 
17h
+  pod/kube-flannel-ds-amd64-pqrqr          1/1     Running   0          17h
+  pod/kube-flannel-ds-amd64-wj5nh          1/1     Running   0          17h
+  pod/kube-proxy-2kxv2                     1/1     Running   0          17h
+  pod/kube-proxy-82vvj                     1/1     Running   0          17h
+  pod/kube-proxy-nrgw9                     1/1     Running   0          23h
+  pod/kube-scheduler-master                1/1     Running   0          21h
 ```
 ### Obtain the OUDSM container image
diff --git a/docs-source/content/idm-products/oudsm/prerequisites/_index.md b/docs-source/content/idm-products/oudsm/prerequisites/_index.md
index c8d5d0ae9..f4f149113 100644
--- a/docs-source/content/idm-products/oudsm/prerequisites/_index.md
+++ b/docs-source/content/idm-products/oudsm/prerequisites/_index.md
@@ -19,4 +19,4 @@ This document provides information about the system requirements for deploying a
 * The nodes in the Kubernetes cluster must have access to a persistent volume such as a Network File System (NFS) mount or a shared file system.
 **Note**: This documentation does not tell you how to install a Kubernetes cluster, Helm, the container engine, or how to push container images to a container registry.
-Please refer to your vendor specific documentation for this information.
\ No newline at end of file
+Please refer to your vendor-specific documentation for this information. Also see [Getting Started](../introduction#getting-started).
\ No newline at end of file
diff --git a/docs-source/content/idm-products/oudsm/release-notes/_index.md b/docs-source/content/idm-products/oudsm/release-notes/_index.md
index 54cbf61c5..1698c6114 100644
--- a/docs-source/content/idm-products/oudsm/release-notes/_index.md
+++ b/docs-source/content/idm-products/oudsm/release-notes/_index.md
@@ -10,6 +10,13 @@ Review the latest changes and known issues for Oracle Unified Directory Services
 | Date | Version | Change |
 | --- | --- | --- |
+| October, 2023 | 23.4.1 | Supports Oracle Unified Directory Services Manager 12.2.1.4 domain deployment using the October 2023 container image which contains the October Patch Set Update (PSU) and other fixes released with the Critical Patch Update (CPU) program.|
+| | | If upgrading to October 23 (23.4.1) from October 22 (22.4.1) or later, upgrade as follows:|
+| | | 1. Patch the OUDSM container image to October 23|
+| | | If upgrading to October 23 (23.4.1) from July 22 (22.3.1) or earlier, you must upgrade the following in order:|
+| | | 1. Patch the OUDSM container image to October 23|
+| | | 2. Upgrade Elasticsearch and Kibana.|
+| | | To upgrade to October 23 (23.4.1) you must follow the instructions in [Patch and Upgrade](../patch-and-upgrade).|
 | July, 2023 | 23.3.1 | Supports Oracle Unified Directory Services Manager 12.2.1.4 domain deployment using the July 2023 container image which contains the July Patch Set Update (PSU) and other fixes released with the Critical Patch Update (CPU) program.|
 | | | If upgrading to July 23 (23.3.1) from October 22 (22.4.1) or later, upgrade as follows:|
 | | | 1.
Patch the OUDSM container image to July 23| diff --git a/docs-source/content/idm-products/oudsm/troubleshooting/_index.md b/docs-source/content/idm-products/oudsm/troubleshooting/_index.md index efb19c8cc..1cfe4c2ec 100644 --- a/docs-source/content/idm-products/oudsm/troubleshooting/_index.md +++ b/docs-source/content/idm-products/oudsm/troubleshooting/_index.md @@ -105,7 +105,7 @@ IPs: Containers: oudsm: Container ID: cri-o://37dbe00257095adc0a424b8841db40b70bbb65645451e0bc53718a0fd7ce22e4 - Image: container-registry.oracle.com/middleware/oudsm_cpu:12.2.1.4-jdk8-ol7- + Image: container-registry.oracle.com/middleware/oudsm_cpu:12.2.1.4-jdk8-ol7- Image ID: container-registry.oracle.com/middleware/oudsm_cpu@sha256:47960d36d502d699bfd8f9b1be4c9216e302db95317c288f335f9c8a32974f2c Ports: 7001/TCP, 7002/TCP Host Ports: 0/TCP, 0/TCP @@ -151,7 +151,7 @@ Events: ---- ------ ---- ---- ------- Warning FailedScheduling 39m default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims. Normal Scheduled 39m default-scheduler Successfully assigned oudsmns/oudsm-1 to - Normal Pulled 39m kubelet Container image "container-registry.oracle.com/middleware/oudsm_cpu:12.2.1.4-jdk8-ol7-" already present on machine + Normal Pulled 39m kubelet Container image "container-registry.oracle.com/middleware/oudsm_cpu:12.2.1.4-jdk8-ol7-" already present on machine Normal Created 39m kubelet Created container oudsm Normal Started 39m kubelet Started container oudsm
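As a general triage sketch for a failing OUDSM pod, using the `oudsm-1` pod and `oudsmns` namespace from the examples above:

```bash
# Show pod status, the image in use, and recent events (for example, unbound PVC warnings).
$ kubectl describe pod oudsm-1 -n oudsmns

# Tail the container logs; --previous shows the prior container after a restart.
$ kubectl logs oudsm-1 -n oudsmns
$ kubectl logs oudsm-1 -n oudsmns --previous

# If events report unbound PersistentVolumeClaims, check the claim and volume state.
$ kubectl get pvc -n oudsmns
$ kubectl get pv
```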