Changed release tag for WDT releases #1843
Merged
Conversation
CarolynRountree approved these changes on Jul 31, 2020.

rjeberhard reviewed on Aug 4, 2020.
@@ -410,7 +410,7 @@
  "title": "Model in image",
  "tags": [],
  "description": "Sample for supplying a WebLogic Deploy Tooling (WDT) model that the operator expands into a full domain home during runtime.",
This feature is supported only in 3.0.0-rc1.

Contents

- Introduction
- Model in Image domain types (WLS, JRF, and Restricted JRF)
- Use cases
- Sample directory structure
- Prerequisites for all domain types
- Additional prerequisites for JRF domains
- Initial use case: An initial WebLogic domain
- Update1 use case: Dynamically adding a data source using a model ConfigMap
- Cleanup
- References

Introduction

This sample demonstrates deploying a Model in Image domain home source type. Unlike Domain in PV and Domain in Image, Model in Image eliminates the need to pre-create your WebLogic domain home prior to deploying your domain resource. Instead, Model in Image uses a WebLogic Deploy Tooling (WDT) model to specify your WebLogic configuration.

WDT models are a convenient and simple alternative to WebLogic WLST configuration scripts and templates. They compactly define a WebLogic domain using YAML files and support including application archives in a ZIP file. The WDT model format is described in the open source WebLogic Deploy Tooling GitHub project, and the required directory structure for a WDT archive is specifically discussed here.

For more information on Model in Image, see the Model in Image user guide. For a comparison of Model in Image to other domain home source types, see Choose a domain home source type.

Model in Image domain types (WLS, JRF, and Restricted JRF)

Model in Image supports three domain types: a standard WLS domain, an Oracle Fusion Middleware Infrastructure Java Required Files (JRF) domain, and a RestrictedJRF domain.
This sample demonstrates the WLS and JRF types.

The JRF domain path through the sample includes additional steps required for JRF: deploying an infrastructure database, initializing the database using the Repository Creation Utility (RCU) tool, referencing the infrastructure database from the WebLogic configuration, setting an Oracle Platform Security Services (OPSS) wallet password, and exporting/importing an OPSS wallet file. JRF domains may be used by Oracle products that layer on top of WebLogic Server, such as SOA and OSB. Similarly, RestrictedJRF domains may be used by Oracle layered products, such as Oracle Communications products.

Use cases

This sample demonstrates two Model in Image use cases:

- Initial: An initial WebLogic domain with the following characteristics:
  - Image model-in-image:WLS-v1 with:
    - A WebLogic installation
    - A WebLogic Deploy Tooling (WDT) installation
    - A WDT archive with version v1 of an exploded Java EE web application
    - A WDT model with:
      - A WebLogic Administration Server
      - A WebLogic cluster
      - A reference to the web application
  - Kubernetes Secrets:
    - WebLogic credentials
    - Required WDT runtime password
  - A domain resource with:
    - spec.domainHomeSourceType: FromModel
    - spec.image: model-in-image:WLS-v1
    - References to the secrets
- Update1: Demonstrates updating the initial domain by dynamically adding a data source using a model ConfigMap:
  - Image model-in-image:WLS-v1: Same image as the Initial use case
  - Kubernetes Secrets: Same as the Initial use case, plus secrets for the data source credentials and URL
  - Kubernetes ConfigMap with:
    - A WDT model for a data source targeted to the cluster
  - A domain resource with:
    - Same as the Initial use case, plus:
    - spec.model.configMap referencing the ConfigMap
    - References to the data source secrets

Sample directory structure

The sample contains the following files and directories:

| Location | Description |
| --- | --- |
| domain-resources | JRF and WLS domain resources. |
| archives | Source code location for WebLogic Deploy Tooling application ZIP archives. |
| model-images | Staging for each model image's WDT YAML, WDT properties, and WDT archive ZIP files. The directories in model images are named for their respective images. |
| model-configmaps | Staging files for a model ConfigMap that configures a data source. |
| ingresses | Ingress resources. |
| utils/wl-pod-wait.sh | Utility for watching the pods in a domain reach their expected restartVersion, image name, and ready state. |
| utils/patch-restart-version.sh | Utility for updating a running domain's spec.restartVersion field (which causes it to 're-introspect' and 'roll'). |
| utils/opss-wallet.sh | Utility for exporting or importing a JRF domain's OPSS wallet file. |

Prerequisites for all domain types

Choose the type of domain you're going to use throughout the sample, WLS or JRF.

The first time you try this sample, we recommend that you choose WLS even if you're familiar with JRF, because WLS is simpler and will more easily familiarize you with Model in Image concepts. We recommend choosing JRF only if you are already familiar with JRF, you have already tried the WLS path through this sample, and you have a definite use case where you need to use JRF.
The JAVA_HOME environment variable must be set and must reference a valid JDK 8 or 11 installation.

Get the operator source from the release/3.0.0-rc1 branch and put it in /tmp/operator-source. For example:

$ git clone https://github.com/oracle/weblogic-kubernetes-operator.git /tmp/operator-source
$ cd /tmp/operator-source
$ git checkout release/3.0.0-rc1

Note: We will refer to the top directory of the operator source tree as /tmp/operator-source; however, you can use a different location. For additional information about obtaining the operator source, see the Developer Guide Requirements.

Copy the sample to a new directory; for example, use directory /tmp/mii-sample.

$ mkdir /tmp/mii-sample
$ cp -r /tmp/operator-source/kubernetes/samples/scripts/create-weblogic-domain/model-in-image/* /tmp/mii-sample

Note: We will refer to this working copy of the sample as /tmp/mii-sample; however, you can use a different location.

Make sure an operator is set up to manage namespace sample-domain1-ns.
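Before continuing, you can optionally verify the operator prerequisite just mentioned. The following is a hedged sketch that requires a configured cluster; the operator namespace name below follows the Quick Start guide and is an assumption, not something this sample mandates:

```shell
# Sketch only (needs a running cluster): confirm the managed namespace exists
# and that an operator pod is running. The operator namespace is assumed.
kubectl get namespace sample-domain1-ns
kubectl get pods -n sample-weblogic-operator-ns
```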
Also, make sure a Traefik ingress controller is managing the same namespace and listening on port 30305. For example, follow the same steps as the Quick Start guide from the beginning through the "Prepare for a domain" step. Make sure you stop when you complete the "Prepare for a domain" step and then resume following these instructions.

Set up ingresses that will redirect HTTP from Traefik port 30305 to the clusters in this sample's WebLogic domains.

Option 1: To create the ingresses, use the following YAML to create a file called /tmp/mii-sample/ingresses/myingresses.yaml and then call kubectl apply -f /tmp/mii-sample/ingresses/myingresses.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress-sample-domain1-admin-server
  namespace: sample-domain1-ns
  labels:
    weblogic.domainUID: sample-domain1
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host:
    http:
      paths:
      - path: /console
        backend:
          serviceName: sample-domain1-admin-server
          servicePort: 7001
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress-sample-domain1-cluster-cluster-1
  namespace: sample-domain1-ns
  labels:
    weblogic.domainUID: sample-domain1
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: sample-domain1-cluster-cluster-1.mii-sample.org
    http:
      paths:
      - path:
        backend:
          serviceName: sample-domain1-cluster-cluster-1
          servicePort: 8001
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress-sample-domain1-cluster-cluster-2
  namespace: sample-domain1-ns
  labels:
    weblogic.domainUID: sample-domain1
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: sample-domain1-cluster-cluster-2.mii-sample.org
    http:
      paths:
      - path:
        backend:
          serviceName: sample-domain1-cluster-cluster-2
          servicePort: 8001
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress-sample-domain2-cluster-cluster-1
  namespace: sample-domain1-ns
  labels:
    weblogic.domainUID: sample-domain2
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: sample-domain2-cluster-cluster-1.mii-sample.org
    http:
      paths:
      - path:
        backend:
          serviceName: sample-domain2-cluster-cluster-1
          servicePort: 8001

Option 2: Run kubectl apply -f on each of the ingress YAML files that are already included in the sample source /tmp/mii-sample/ingresses directory:

$ cd /tmp/mii-sample/ingresses
$ kubectl apply -f traefik-ingress-sample-domain1-admin-server.yaml
$ kubectl apply -f traefik-ingress-sample-domain1-cluster-cluster-1.yaml
$ kubectl apply -f traefik-ingress-sample-domain1-cluster-cluster-2.yaml
$ kubectl apply -f traefik-ingress-sample-domain2-cluster-cluster-1.yaml
$ kubectl apply -f traefik-ingress-sample-domain2-cluster-cluster-2.yaml

NOTE: We give each cluster ingress a different host name that is decorated using both its operator domain UID and its cluster name. This makes each cluster uniquely addressable even when cluster names are the same across different clusters. When using curl to access the WebLogic domain through the ingress, you will need to supply a host name header that matches the host names in the ingress.

For more information on ingresses and load balancers, see Ingress.

Obtain the WebLogic 12.2.1.4 image that is required to create the sample's model images.

a. Use a browser to access Oracle Container Registry.
b. Choose an image location: for JRF domains, select Middleware, then fmw-infrastructure; for WLS domains, select Middleware, then weblogic.
c. Select Sign In and accept the license agreement.
d. Use your terminal to log in to Docker locally: docker login container-registry.oracle.com.
e. Later in this sample, when you run WebLogic Image Tool commands, the tool will use the image as a base image for creating model images.
Specifically, the tool will implicitly call docker pull for one of the above licensed images as specified in the tool's command line using the --fromImage parameter. For JRF, this sample specifies container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.4, and for WLS, the sample specifies container-registry.oracle.com/middleware/weblogic:12.2.1.4.

If you prefer, you can create your own base image and then substitute this image name in the WebLogic Image Tool --fromImage parameter throughout this sample. See Preparing a Base Image.

Download the latest WebLogic Deploy Tooling (WDT) and WebLogic Image Tool (WIT) installer ZIP files to your /tmp/mii-sample/model-images directory. Both WDT and WIT are required to create your Model in Image Docker images.

For example, visit the GitHub WebLogic Deploy Tooling Releases and WebLogic Image Tool Releases web pages to determine the latest release version for each, and then, assuming the version numbers are 1.8.0 and 1.8.4 respectively, call:

$ curl -m 30 -fL https://github.com/oracle/weblogic-deploy-tooling/releases/download/weblogic-deploy-tooling-1.8.0/weblogic-deploy.zip \
    -o /tmp/mii-sample/model-images/weblogic-deploy.zip

$ curl -m 30 -fL https://github.com/oracle/weblogic-image-tool/releases/download/release-1.8.4/imagetool.zip \
    -o /tmp/mii-sample/model-images/imagetool.zip

Set up the WebLogic Image Tool. Run the following commands:

$ cd /tmp/mii-sample/model-images
$ unzip imagetool.zip
$ ./imagetool/bin/imagetool.sh cache addInstaller \
    --type wdt \
    --version latest \
    --path /tmp/mii-sample/model-images/weblogic-deploy.zip

These steps install WIT in the /tmp/mii-sample/model-images/imagetool directory and put a wdt_latest entry in the tool's cache which points to the WDT ZIP installer.
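To double-check the cache entry just described, you can list the WIT cache. This is a hedged sketch that assumes WIT was unzipped to the path above:

```shell
# Sketch: list WIT cache entries; a wdt_latest entry pointing at
# /tmp/mii-sample/model-images/weblogic-deploy.zip should appear.
cd /tmp/mii-sample/model-images
./imagetool/bin/imagetool.sh cache listItems
```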
We will use WIT later in the sample for creating model images.

Additional prerequisites for JRF domains

NOTE: If you're using a WLS domain type, skip this section and continue here.

JRF Prerequisites Contents

- Introduction to JRF setups
- Set up and initialize an infrastructure database
- Increase introspection job timeout
- Important considerations for RCU model attributes, domain resource attributes, and secrets

Introduction to JRF setups

NOTE: The requirements in this section are in addition to Prerequisites for all domain types.

A JRF domain requires an infrastructure database, initializing this database with RCU, and configuring your domain to access this database. All of these steps must occur before you create your domain.

Set up and initialize an infrastructure database

A JRF domain requires an infrastructure database and also requires initializing this database with a schema and a set of tables. The following example shows how to set up a database and use the RCU tool to create the infrastructure schema for a JRF domain. The database is set up with the following attributes:

| Attribute | Value |
| --- | --- |
| database Kubernetes namespace | default |
| database Kubernetes pod | oracle-db |
| database image | container-registry.oracle.com/database/enterprise:12.2.0.1-slim |
| database password | Oradoc_db1 |
| infrastructure schema prefix | FMW1 |
| infrastructure schema password | Oradoc_db1 |
| database URL | oracle-db.default.svc.cluster.local:1521/devpdb.k8s |

Ensure that you have access to the database image, and then create a deployment using it:

- Use a browser to log in to https://container-registry.oracle.com, select database -> enterprise, and accept the license agreement.
- Get the database image:
  - In the local shell, docker login container-registry.oracle.com.
  - In the local shell, docker pull container-registry.oracle.com/database/enterprise:12.2.0.1-slim.
Use the sample script in /tmp/operator-source/kubernetes/samples/scripts/create-oracle-db-service to create an Oracle database running in the pod, oracle-db.

$ cd /tmp/operator-source/kubernetes/samples/scripts/create-oracle-db-service
$ ./start-db-service.sh

This script will deploy a database in the default namespace with the connect string oracle-db.default.svc.cluster.local:1521/devpdb.k8s, and administration password Oradoc_db1. This step is based on the steps documented in Run a Database.

WARNING: The Oracle Database Docker images are supported only for non-production use. For more details, see My Oracle Support note: Oracle Support for Database Running on Docker (Doc ID 2216342.1).

Use the sample script in /tmp/operator-source/kubernetes/samples/scripts/create-rcu-schema to create the RCU schema with the schema prefix FMW1. Note that this script assumes Oradoc_db1 is the DBA password, Oradoc_db1 is the schema password, and that the database URL is oracle-db.default.svc.cluster.local:1521/devpdb.k8s.

$ cd /tmp/operator-source/kubernetes/samples/scripts/create-rcu-schema
$ ./create-rcu-schema.sh -s FMW1 -i container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.4

NOTE: If you need to drop the repository, use this command:

$ ./drop-rcu-schema.sh -s FMW1

Increase introspection job timeout

JRF domain home creation can take longer than the introspection job's default timeout, so you should increase that timeout. Use configuration.introspectorJobActiveDeadlineSeconds in your domain resource to override the default with a value of at least 300 seconds (the default is 120 seconds).
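For reference, the timeout override just described can be sketched as a domain resource fragment (only the relevant stanza is shown; all surrounding fields are elided):

```yaml
# Sketch: raise the introspector job timeout for JRF domain creation.
# Field path as described in the text; 300 is the recommended minimum.
spec:
  configuration:
    introspectorJobActiveDeadlineSeconds: 300  # default is 120
```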
Note that the JRF versions of the domain resource files that are provided in /tmp/mii-sample/domain-resources already set this value.

Important considerations for RCU model attributes, domain resource attributes, and secrets

To allow Model in Image to access the database and OPSS wallet, before deploying your domain you must create an RCU access secret (containing the database connect string, user name, and password) that's referenced from your model, and an OPSS wallet password secret that's referenced from your domain resource. It's also necessary to define an RCUDbInfo stanza in your model.

The sample includes examples of JRF models and domain resources in the /tmp/mii-sample/model-images and /tmp/mii-sample/domain-resources directories, and the instructions in the following sections describe setting up the RCU and OPSS secrets.

When you follow the instructions later in this sample, avoid instructions that are WLS only, and substitute JRF for WLS in the corresponding model image tags and domain resource file names.

For example:

- JRF domain resources in this sample have an opss.walletPasswordSecret field that references a secret named sample-domain1-opss-wallet-password-secret, with password=welcome1.
- JRF image models in this sample have a domainInfo -> RCUDbInfo stanza that references a sample-domain1-rcu-access secret with appropriate values for the rcu_prefix, rcu_schema_password, and rcu_db_conn_string attributes for accessing the Oracle database that you deployed to the default namespace as one of the prerequisite steps.

Important considerations for reusing or sharing OPSS tables

We do not recommend that most users share OPSS tables.
Extreme caution is required when sharing OPSS tables between domains.

When you successfully deploy your JRF domain resource for the first time, the introspector job will initialize the OPSS tables for the domain using the domainInfo -> RCUDbInfo stanza in the WDT model plus the configuration.opss.walletPasswordSecret specified in the domain resource. The job will also create a new domain home. Finally, the operator will capture an OPSS wallet file from the new domain's local directory and place this file in a new Kubernetes ConfigMap.

There are scenarios in which the domain needs to be recreated between updates, such as when WebLogic credentials are changed, security roles defined in the WDT model have been changed, or you want to share the same infrastructure tables with different domains. In these scenarios, the operator needs the walletPasswordSecret as well as the OPSS wallet file, together with the exact information in domainInfo -> RCUDbInfo, so that the domain can be recreated and access the same set of tables. Without the wallet file and wallet password, you will not be able to recreate a domain that accesses the same set of tables; therefore, we strongly recommend that you back up the wallet file.

To recover a domain's OPSS tables between domain restarts, or to share an OPSS schema between different domains, it is necessary to extract this wallet file from the domain's automatically deployed introspector ConfigMap and save the OPSS wallet password secret that was used for the original domain.
The wallet password and wallet file are needed again when you recreate the domain or share the database with other domains.

To save the wallet file, assuming that your namespace is sample-domain1-ns and your domain UID is sample-domain1:

$ kubectl -n sample-domain1-ns \
    get configmap sample-domain1-weblogic-domain-introspect-cm \
    -o jsonpath='{.data.ewallet\.p12}' \
    > ./ewallet.p12

Alternatively, you can save the file using the sample's wallet utility:

$ /tmp/mii-sample/utils/opss-wallet.sh -n sample-domain1-ns -d sample-domain1 -wf ./ewallet.p12
  # For help: /tmp/mii-sample/utils/opss-wallet.sh -?

Important! Back up your wallet file to a safe location that can be retrieved later.

To reuse the wallet file in subsequent redeployments, or to share the domain's OPSS tables between different domains:

Load the saved wallet file into a secret with a key named walletFile (again, assuming that your domain UID is sample-domain1 and your namespace is sample-domain1-ns):

$ kubectl -n sample-domain1-ns create secret generic sample-domain1-opss-walletfile-secret \
    --from-file=walletFile=./ewallet.p12
$ kubectl -n sample-domain1-ns label secret sample-domain1-opss-walletfile-secret \
    weblogic.domainUID=sample-domain1

Alternatively, use the sample's wallet utility:

$ /tmp/mii-sample/utils/opss-wallet.sh -n sample-domain1-ns -d sample-domain1 -wf ./ewallet.p12 -ws sample-domain1-opss-walletfile-secret
  # For help: /tmp/mii-sample/utils/opss-wallet.sh -?
Modify your domain resource JRF YAML files to provide the wallet file secret name, for example:

configuration:
  opss:
    # Name of secret with walletPassword for extracting the wallet
    walletPasswordSecret: sample-domain1-opss-wallet-password-secret
    # Name of secret with walletFile containing base64 encoded opss wallet
    walletFileSecret: sample-domain1-opss-walletfile-secret

Note: The sample JRF domain resource files included in /tmp/mii-sample/domain-resources already have the above YAML stanza.

Initial use case

Contents

- Overview
- Image creation
  - Image creation - Introduction
  - Understanding our first archive
  - Staging a ZIP file of the archive
  - Staging model files
  - Creating the image with WIT
- Deploy resources
  - Deploy resources - Introduction
  - Secrets
  - Domain resource

Overview

In this use case, we set up an initial WebLogic domain. This involves:

- A WDT archive ZIP file that contains your applications.
- A WDT model that describes your WebLogic configuration.
- A Docker image that contains your WDT model files and archive.
- Creating secrets for the domain.
- Creating a domain resource for the domain that references your secrets and image.

After the domain resource is deployed, the WebLogic operator will start an 'introspector job' that converts your models into a WebLogic configuration, and then the operator will pass this configuration to each WebLogic Server in the domain.

Perform the steps in Prerequisites for all domain types before performing the steps in this use case. If you are taking the JRF path through the sample, then substitute JRF for WLS in your image names and directory paths.
Also note that the JRF-v1 model YAML differs from the WLS-v1 YAML file (it contains an additional domainInfo -> RCUDbInfo stanza).

Image creation - Introduction

The goal of the initial use case 'image creation' is to demonstrate using the WebLogic Image Tool to create an image named model-in-image:WLS-v1 from files that we will stage to /tmp/mii-sample/model-images/model-in-image__WLS-v1/. The staged files will contain a web application in a WDT archive, and WDT model configuration for a WebLogic Administration Server called admin-server and a WebLogic cluster called cluster-1.

Overall, a Model in Image image must contain a WebLogic installation and also a WebLogic Deploy Tooling installation in its /u01/wdt/weblogic-deploy directory. In addition, if you have WDT model archive files, then the image must also contain these files in its /u01/wdt/models directory. Finally, an image may optionally also contain your WDT model YAML and properties files in the same /u01/wdt/models directory. If you do not specify WDT model YAML in your /u01/wdt/models directory, then the model YAML must be supplied dynamically using a Kubernetes ConfigMap that is referenced by your domain resource's spec.model.configMap attribute. We will provide an example of using a model ConfigMap later in this sample.

Let's walk through the steps for creating the image model-in-image:WLS-v1:

- Understanding our first archive
- Staging a ZIP file of the archive
- Staging model files
- Creating the image with WIT

Understanding our first archive

The sample includes a predefined archive directory in /tmp/mii-sample/archives/archive-v1 that we will use to create an archive ZIP file for the image.

The archive top directory, named wlsdeploy, contains a directory named applications, which includes an 'exploded' sample JSP web application in the directory myapp-v1.
Three useful aspects to remember about WDT archives are:

- A model image can contain multiple WDT archives.
- WDT archives can contain multiple applications, libraries, and other components.
- WDT archives have a well defined directory structure, which always has wlsdeploy as the top directory.

If you are interested in the web application source, click here to see the JSP code.

<%-- Copyright (c) 2019, 2020, Oracle Corporation and/or its affiliates. --%>
<%-- Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. --%>
<%@ page import="javax.naming.InitialContext" %>
<%@ page import="javax.management.*" %>
<%@ page import="java.io.*" %>
<%@ page import="java.util.*" %>
<%
  InitialContext ic = null;
  try {
    ic = new InitialContext();

    String srName=System.getProperty("weblogic.Name");
    String domainUID=System.getenv("DOMAIN_UID");
    String domainName=System.getenv("CUSTOM_DOMAIN_NAME");

    out.println("<html><body><pre>");
    out.println("*****************************************************************");
    out.println();
    out.println("Hello World! This is version 'v1' of the mii-sample JSP web-app.");
    out.println();
    out.println("Welcome to WebLogic server '" + srName + "'!");
    out.println();
    out.println(" domain UID  = '" + domainUID +"'");
    out.println(" domain name = '" + domainName +"'");
    out.println();

    MBeanServer mbs = (MBeanServer)ic.lookup("java:comp/env/jmx/runtime");

    // display the current server's cluster name
    Set<ObjectInstance> clusterRuntimes = mbs.queryMBeans(new ObjectName("*:Type=ClusterRuntime,*"), null);
    out.println("Found " + clusterRuntimes.size() + " local cluster runtime" + (String)((clusterRuntimes.size()!=1)?"s:":":"));
    for (ObjectInstance clusterRuntime : clusterRuntimes) {
      String cName = (String)mbs.getAttribute(clusterRuntime.getObjectName(), "Name");
      out.println("  Cluster '" + cName + "'");
    }
    out.println();

    // display local data sources
    ObjectName jdbcRuntime = new ObjectName("com.bea:ServerRuntime=" + srName + ",Name=" + srName + ",Type=JDBCServiceRuntime");
    ObjectName[] dataSources = (ObjectName[])mbs.getAttribute(jdbcRuntime, "JDBCDataSourceRuntimeMBeans");
    out.println("Found " + dataSources.length + " local data source" + (String)((dataSources.length!=1)?"s:":":"));
    for (ObjectName dataSource : dataSources) {
      String dsName  = (String)mbs.getAttribute(dataSource, "Name");
      String dsState = (String)mbs.getAttribute(dataSource, "State");
      out.println("  Datasource '" + dsName + "': State='" + dsState +"'");
    }
    out.println();

    out.println("*****************************************************************");
  } catch (Throwable t) {
    t.printStackTrace(new PrintStream(response.getOutputStream()));
  } finally {
    out.println("</pre></body></html>");
    if (ic != null) ic.close();
  }
%>

The application displays important details about the WebLogic Server instance that it's running on: namely its domain name, cluster name, and server name, as well as the names of any data sources that are targeted to the server. You can also see that the application output reports that it's at version v1; we will update this to v2 in a later use case to demonstrate upgrading the application.

Staging a ZIP file of the archive

When we create our image, we will use the files in staging directory /tmp/mii-sample/model-images/model-in-image__WLS-v1. In preparation, we need it to contain a ZIP file of the WDT application archive.

Run the following commands to create your application archive ZIP file and put it in the expected directory:

# Delete existing archive.zip in case we have an old leftover version
$ rm -f /tmp/mii-sample/model-images/model-in-image__WLS-v1/archive.zip

# Move to the directory which contains the source files for our archive
$ cd /tmp/mii-sample/archives/archive-v1

# Zip the archive to the location we will later use when we run the WebLogic Image Tool
$ zip -r /tmp/mii-sample/model-images/model-in-image__WLS-v1/archive.zip wlsdeploy

Staging model files

In this step, we explore the staged WDT model YAML file and properties in directory /tmp/mii-sample/model-images/model-in-image__WLS-v1. The model in this directory references the web application in our archive, configures a WebLogic Administration Server, and configures a WebLogic cluster.
It consists of only two files: model.10.properties, a file with a single property, and model.10.yaml, a YAML file with our WebLogic configuration.

Here is the model.10.properties file:

CLUSTER_SIZE=5

Here is the WLS model.10.yaml:

domainInfo:
  AdminUserName: '@@SECRET:__weblogic-credentials__:username@@'
  AdminPassword: '@@SECRET:__weblogic-credentials__:password@@'
  ServerStartMode: 'prod'

topology:
  Name: '@@ENV:CUSTOM_DOMAIN_NAME@@'
  AdminServerName: 'admin-server'
  Cluster:
    'cluster-1':
      DynamicServers:
        ServerTemplate: 'cluster-1-template'
        ServerNamePrefix: 'managed-server'
        DynamicClusterSize: '@@PROP:CLUSTER_SIZE@@'
        MaxDynamicClusterSize: '@@PROP:CLUSTER_SIZE@@'
        MinDynamicClusterSize: '0'
        CalculatedListenPorts: false
  Server:
    'admin-server':
      ListenPort: 7001
  ServerTemplate:
    'cluster-1-template':
      Cluster: 'cluster-1'
      ListenPort: 8001

appDeployments:
  Application:
    myapp:
      SourcePath: 'wlsdeploy/applications/myapp-v1'
      ModuleType: ear
      Target: 'cluster-1'

Click here to expand the JRF model.10.yaml, and note the RCUDbInfo stanza and its references to a DOMAIN_UID-rcu-access secret.

domainInfo:
  AdminUserName: '@@SECRET:__weblogic-credentials__:username@@'
  AdminPassword: '@@SECRET:__weblogic-credentials__:password@@'
  ServerStartMode: 'prod'
  RCUDbInfo:
    rcu_prefix: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-access:rcu_prefix@@'
    rcu_schema_password: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-access:rcu_schema_password@@'
    rcu_db_conn_string: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-access:rcu_db_conn_string@@'

topology:
  AdminServerName: 'admin-server'
  Name: '@@ENV:CUSTOM_DOMAIN_NAME@@'
  Cluster:
    'cluster-1':
  Server:
    'admin-server':
      ListenPort: 7001
    'managed-server1-c1-':
      Cluster: 'cluster-1'
      ListenPort: 8001
    'managed-server2-c1-':
      Cluster: 'cluster-1'
      ListenPort: 8001
    'managed-server3-c1-':
      Cluster: 'cluster-1'
      ListenPort: 8001
    'managed-server4-c1-':
      Cluster: 'cluster-1'
      ListenPort: 8001

appDeployments:
  Application:
    myapp:
      SourcePath: 'wlsdeploy/applications/myapp-v1'
      ModuleType: ear
      Target: 'cluster-1'

The model files:

- Define a WebLogic domain with:
  - Cluster cluster-1
  - Administration Server admin-server
  - A cluster-1 targeted ear application that's located in the WDT archive ZIP file at wlsdeploy/applications/myapp-v1
- Leverage macros to inject external values:
  - The property file CLUSTER_SIZE property is referenced in the model YAML DynamicClusterSize and MaxDynamicClusterSize fields using a PROP macro.
  - The model file domain name is injected using a custom environment variable named CUSTOM_DOMAIN_NAME using an ENV macro. We set this environment variable later in this sample using an env field in its domain resource. This conveniently provides a simple way to deploy multiple differently named domains using the same model image.
  - The model file administrator user name and password are set using a weblogic-credentials secret macro reference to the WebLogic credential secret. This secret is in turn referenced using the weblogicCredentialsSecret field in the domain resource.
The weblogic-credentials is a reserved name that always dereferences to the owning domain resource\u0026rsquo;s actual WebLogic credentials secret name. A Model in Image image can contain multiple properties files, archive ZIP files, and YAML files, but in this sample we use just one of each. For a full discussion of Model in Image\u0026rsquo;s model file naming conventions, file loading order, and macro syntax, see Model files in the Model in Image user documentation.\nCreating the image with WIT Note: If you are using JRF in this sample, substitute JRF for each occurrence of WLS in the imagetool command line below, plus substitute container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.4 for the --fromImage value.\n At this point, we have staged all of the files needed for image model-in-image:WLS-v1; they include:\n /tmp/mii-sample/model-images/weblogic-deploy.zip /tmp/mii-sample/model-images/model-in-image__WLS-v1/model.10.yaml /tmp/mii-sample/model-images/model-in-image__WLS-v1/model.10.properties /tmp/mii-sample/model-images/model-in-image__WLS-v1/archive.zip If you don\u0026rsquo;t see the weblogic-deploy.zip file, then it means that you missed a step in the prerequisites.\nNow let\u0026rsquo;s use the Image Tool to create an image named model-in-image:WLS-v1 that\u0026rsquo;s layered on a base WebLogic image. 
We\u0026rsquo;ve already set up this tool during the prerequisite steps at the beginning of this sample.\nRun the following commands to create the model image and verify that it worked:\n$ cd /tmp/mii-sample/model-images $ ./imagetool/bin/imagetool.sh update \\ --tag model-in-image:WLS-v1 \\ --fromImage container-registry.oracle.com/middleware/weblogic:12.2.1.4 \\ --wdtModel ./model-in-image__WLS-v1/model.10.yaml \\ --wdtVariables ./model-in-image__WLS-v1/model.10.properties \\ --wdtArchive ./model-in-image__WLS-v1/archive.zip \\ --wdtModelOnly \\ --wdtDomainType WLS If you don\u0026rsquo;t see the imagetool directory, then it means that you missed a step in the prerequisites.\nThis command runs the WebLogic Image Tool in its Model in Image mode, and does the following:\n Builds the final Docker image as a layer on the container-registry.oracle.com/middleware/weblogic:12.2.1.4 base image. Copies the WDT ZIP file that\u0026rsquo;s referenced in the WIT cache into the image. Note that we cached WDT in WIT using the keyword latest when we set up the cache during the sample prerequisite steps. This lets WIT implicitly assume it\u0026rsquo;s the desired WDT version and removes the need to pass a --wdtVersion flag. Copies the specified WDT model, properties, and application archives to image location /u01/wdt/models. When the command succeeds, it should end with output like:\n[INFO ] Build successful. Build time=36s. Image tag=model-in-image:WLS-v1 Also, if you run the docker images command, then you should see a Docker image named model-in-image:WLS-v1.\nDeploy resources - Introduction In this section we will deploy our new image to namespace sample-domain1-ns, including the following steps:\n Create a secret containing your WebLogic administrator user name and password. Create a secret containing your Model in Image runtime encryption password: All Model in Image domains must supply a runtime encryption secret with a password value. 
It is used to encrypt configuration that is passed around internally by the operator. The value must be kept private but can be arbitrary; you can optionally supply a different secret value every time you restart the domain. If your domain type is JRF, create secrets containing your RCU access URL, credentials, and prefix. Deploy a domain resource YAML file that references the new image. Wait for the domain\u0026rsquo;s pods to start and reach their ready state. Secrets First, create the secrets needed by both WLS and JRF type model domains. In this case, we have two secrets.\nRun the following kubectl commands to deploy the required secrets:\n$ kubectl -n sample-domain1-ns create secret generic \\ sample-domain1-weblogic-credentials \\ --from-literal=username=weblogic --from-literal=password=welcome1 $ kubectl -n sample-domain1-ns label secret \\ sample-domain1-weblogic-credentials \\ weblogic.domainUID=sample-domain1 $ kubectl -n sample-domain1-ns create secret generic \\ sample-domain1-runtime-encryption-secret \\ --from-literal=password=my_runtime_password $ kubectl -n sample-domain1-ns label secret \\ sample-domain1-runtime-encryption-secret \\ weblogic.domainUID=sample-domain1 Some important details about these secrets:\n The WebLogic credentials secret:\n It is required and must contain username and password fields. It must be referenced by the spec.weblogicCredentialsSecret field in your domain resource. It also must be referenced by macros in the domainInfo.AdminUserName and domainInfo.AdminPassword fields in your model YAML file. The Model WDT runtime secret:\n This is a special secret required by Model in Image. It must contain a password field. It must be referenced using the spec.model.runtimeEncryptionSecret attribute in its domain resource. It must remain the same for as long as the domain is deployed to Kubernetes, but can be changed between deployments. 
It is used to encrypt data as it\u0026rsquo;s internally passed using log files from the domain\u0026rsquo;s introspector job and on to its WebLogic Server pods. Deleting and recreating the secrets:\n We delete a secret before creating it, otherwise the create command will fail if the secret already exists. This allows us to change the secret when using the kubectl create secret command. We name and label secrets using their associated domain UID for two reasons:\n To make it obvious which secrets belong to which domains. To make it easier to clean up a domain. Typical cleanup scripts use the weblogic.domainUID label as a convenience for finding all resources associated with a domain. If you\u0026rsquo;re following the JRF path through the sample, then you also need to deploy the additional secret referenced by macros in the JRF model RCUDbInfo clause, plus an OPSS wallet password secret. For details about the uses of these secrets, see the Model in Image user documentation.\n Click here for the commands for deploying additional secrets for JRF. $ kubectl -n sample-domain1-ns create secret generic \\ sample-domain1-rcu-access \\ --from-literal=rcu_prefix=FMW1 \\ --from-literal=rcu_schema_password=Oradoc_db1 \\ --from-literal=rcu_db_conn_string=oracle-db.default.svc.cluster.local:1521/devpdb.k8s $ kubectl -n sample-domain1-ns label secret \\ sample-domain1-rcu-access \\ weblogic.domainUID=sample-domain1 $ kubectl -n sample-domain1-ns create secret generic \\ sample-domain1-opss-wallet-password-secret \\ --from-literal=walletPassword=welcome1 $ kubectl -n sample-domain1-ns label secret \\ sample-domain1-opss-wallet-password-secret \\ weblogic.domainUID=sample-domain1 Domain resource Now let\u0026rsquo;s create a domain resource. 
A domain resource is the key resource that tells the operator how to deploy a WebLogic domain.\nCopy the following to a file called /tmp/mii-sample/mii-initial.yaml or similar, or use the file /tmp/mii-sample/domain-resources/WLS/mii-initial-d1-WLS-v1.yaml that is included in the sample source.\n Click here to expand the WLS domain resource YAML. # # This is an example of how to define a Domain resource. # # If you are using 3.0.0-rc1, then the version on the following line # should be `v7` not `v6`. apiVersion: \u0026quot;weblogic.oracle/v6\u0026quot; kind: Domain metadata: name: sample-domain1 namespace: sample-domain1-ns labels: weblogic.resourceVersion: domain-v2 weblogic.domainUID: sample-domain1 spec: # Set to 'FromModel' to indicate 'Model in Image'. domainHomeSourceType: FromModel # The WebLogic Domain Home, this must be a location within # the image for 'Model in Image' domains. domainHome: /u01/domains/sample-domain1 # The WebLogic Server Docker image that the Operator uses to start the domain image: \u0026quot;model-in-image:WLS-v1\u0026quot; # Defaults to \u0026quot;Always\u0026quot; if image tag (version) is ':latest' imagePullPolicy: \u0026quot;IfNotPresent\u0026quot; # Identify which Secret contains the credentials for pulling an image #imagePullSecrets: #- name: regsecret # Identify which Secret contains the WebLogic Admin credentials, # the secret must contain 'username' and 'password' fields. webLogicCredentialsSecret: name: sample-domain1-weblogic-credentials # Whether to include the WebLogic server stdout in the pod's stdout, default is true includeServerOutInPodLog: true # Whether to enable overriding your log file location, see also 'logHome' #logHomeEnabled: false # The location for domain log, server logs, server out, and Node Manager log files # see also 'logHomeEnabled', 'volumes', and 'volumeMounts'. 
#logHome: /shared/logs/sample-domain1 # Set which WebLogic servers the Operator will start # - \u0026quot;NEVER\u0026quot; will not start any server in the domain # - \u0026quot;ADMIN_ONLY\u0026quot; will start up only the administration server (no managed servers will be started) # - \u0026quot;IF_NEEDED\u0026quot; will start all non-clustered servers, including the administration server, and clustered servers up to their replica count. serverStartPolicy: \u0026quot;IF_NEEDED\u0026quot; # Settings for all server pods in the domain including the introspector job pod serverPod: # Optional new or overridden environment variables for the domain's pods # - This sample uses CUSTOM_DOMAIN_NAME in its image model file # to set the Weblogic domain name env: - name: CUSTOM_DOMAIN_NAME value: \u0026quot;domain1\u0026quot; - name: JAVA_OPTIONS value: \u0026quot;-Dweblogic.StdoutDebugEnabled=false\u0026quot; - name: USER_MEM_ARGS value: \u0026quot;-XX:+UseContainerSupport -Djava.security.egd=file:/dev/./urandom \u0026quot; # Optional volumes and mounts for the domain's pods. See also 'logHome'. #volumes: #- name: weblogic-domain-storage-volume # persistentVolumeClaim: # claimName: sample-domain1-weblogic-sample-pvc #volumeMounts: #- mountPath: /shared # name: weblogic-domain-storage-volume # The desired behavior for starting the domain's administration server. 
adminServer: # The serverStartState legal values are \u0026quot;RUNNING\u0026quot; or \u0026quot;ADMIN\u0026quot; # \u0026quot;RUNNING\u0026quot; means the listed server will be started up to \u0026quot;RUNNING\u0026quot; mode # \u0026quot;ADMIN\u0026quot; means the listed server will be started up to \u0026quot;ADMIN\u0026quot; mode serverStartState: \u0026quot;RUNNING\u0026quot; # Setup a Kubernetes node port for the administration server default channel #adminService: # channels: # - channelName: default # nodePort: 30701 # The number of managed servers to start for unlisted clusters replicas: 1 # The desired behavior for starting a specific cluster's member servers clusters: - clusterName: cluster-1 serverStartState: \u0026quot;RUNNING\u0026quot; replicas: 2 # Change the `restartVersion` to force the introspector job to rerun # and apply any new model configuration, to also force a subsequent # roll of your domain's WebLogic pods. restartVersion: '1' configuration: # Settings for domainHomeSourceType 'FromModel' model: # Valid model domain types are 'WLS', 'JRF', and 'RestrictedJRF', default is 'WLS' domainType: \u0026quot;WLS\u0026quot; # Optional configmap for additional models and variable files #configMap: sample-domain1-wdt-config-map # All 'FromModel' domains require a runtimeEncryptionSecret with a 'password' field runtimeEncryptionSecret: sample-domain1-runtime-encryption-secret # Secrets that are referenced by model yaml macros # (the model yaml in the optional configMap or in the image) #secrets: #- sample-domain1-datasource-secret Click here to expand the JRF domain resource YAML. # Copyright (c) 2020, Oracle Corporation and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl. # # This is an example of how to define a Domain resource. # # If you are using 3.0.0-rc1, then the version on the following line # should be `v7` not `v6`. 
apiVersion: \u0026quot;weblogic.oracle/v6\u0026quot; kind: Domain metadata: name: sample-domain1 namespace: sample-domain1-ns labels: weblogic.resourceVersion: domain-v2 weblogic.domainUID: sample-domain1 spec: # Set to 'FromModel' to indicate 'Model in Image'. domainHomeSourceType: FromModel # The WebLogic Domain Home, this must be a location within # the image for 'Model in Image' domains. domainHome: /u01/domains/sample-domain1 # The WebLogic Server Docker image that the Operator uses to start the domain image: \u0026quot;model-in-image:JRF-v1\u0026quot; # Defaults to \u0026quot;Always\u0026quot; if image tag (version) is ':latest' imagePullPolicy: \u0026quot;IfNotPresent\u0026quot; # Identify which Secret contains the credentials for pulling an image #imagePullSecrets: #- name: regsecret # Identify which Secret contains the WebLogic Admin credentials, # the secret must contain 'username' and 'password' fields. webLogicCredentialsSecret: name: sample-domain1-weblogic-credentials # Whether to include the WebLogic server stdout in the pod's stdout, default is true includeServerOutInPodLog: true # Whether to enable overriding your log file location, see also 'logHome' #logHomeEnabled: false # The location for domain log, server logs, server out, and Node Manager log files # see also 'logHomeEnabled', 'volumes', and 'volumeMounts'. #logHome: /shared/logs/sample-domain1 # Set which WebLogic servers the Operator will start # - \u0026quot;NEVER\u0026quot; will not start any server in the domain # - \u0026quot;ADMIN_ONLY\u0026quot; will start up only the administration server (no managed servers will be started) # - \u0026quot;IF_NEEDED\u0026quot; will start all non-clustered servers, including the administration server, and clustered servers up to their replica count. 
serverStartPolicy: \u0026quot;IF_NEEDED\u0026quot; # Settings for all server pods in the domain including the introspector job pod serverPod: # Optional new or overridden environment variables for the domain's pods # - This sample uses CUSTOM_DOMAIN_NAME in its image model file # to set the Weblogic domain name env: - name: CUSTOM_DOMAIN_NAME value: \u0026quot;domain1\u0026quot; - name: JAVA_OPTIONS value: \u0026quot;-Dweblogic.StdoutDebugEnabled=false\u0026quot; - name: USER_MEM_ARGS value: \u0026quot;-XX:+UseContainerSupport -Djava.security.egd=file:/dev/./urandom \u0026quot; # Optional volumes and mounts for the domain's pods. See also 'logHome'. #volumes: #- name: weblogic-domain-storage-volume # persistentVolumeClaim: # claimName: sample-domain1-weblogic-sample-pvc #volumeMounts: #- mountPath: /shared # name: weblogic-domain-storage-volume # The desired behavior for starting the domain's administration server. adminServer: # The serverStartState legal values are \u0026quot;RUNNING\u0026quot; or \u0026quot;ADMIN\u0026quot; # \u0026quot;RUNNING\u0026quot; means the listed server will be started up to \u0026quot;RUNNING\u0026quot; mode # \u0026quot;ADMIN\u0026quot; means the listed server will be started up to \u0026quot;ADMIN\u0026quot; mode serverStartState: \u0026quot;RUNNING\u0026quot; # Setup a Kubernetes node port for the administration server default channel #adminService: # channels: # - channelName: default # nodePort: 30701 # The number of managed servers to start for unlisted clusters replicas: 1 # The desired behavior for starting a specific cluster's member servers clusters: - clusterName: cluster-1 serverStartState: \u0026quot;RUNNING\u0026quot; replicas: 2 # Change the restartVersion to force the introspector job to rerun # and apply any new model configuration, to also force a subsequent # roll of your domain's WebLogic pods. 
restartVersion: '1' configuration: # Settings for domainHomeSourceType 'FromModel' model: # Valid model domain types are 'WLS', 'JRF', and 'RestrictedJRF', default is 'WLS' domainType: \u0026quot;JRF\u0026quot; # Optional configmap for additional models and variable files #configMap: sample-domain1-wdt-config-map # All 'FromModel' domains require a runtimeEncryptionSecret with a 'password' field runtimeEncryptionSecret: sample-domain1-runtime-encryption-secret # Secrets that are referenced by model yaml macros # (the model yaml in the optional configMap or in the image) secrets: #- sample-domain1-datasource-secret - sample-domain1-rcu-access # Increase the introspector job active timeout value for JRF use cases introspectorJobActiveDeadlineSeconds: 300 opss: # Name of secret with walletPassword for extracting the wallet, used for JRF domains walletPasswordSecret: sample-domain1-opss-wallet-password-secret # Name of secret with walletFile containing base64 encoded opss wallet, used for JRF domains #walletFileSecret: sample-domain1-opss-walletfile-secret Run the following command to create the domain custom resource:\n$ kubectl apply -f /tmp/mii-sample/domain-resources/WLS/mii-initial-d1-WLS-v1.yaml Note: If you are choosing not to use the predefined domain resource YAML file and instead created your own domain resource file earlier, then substitute your custom file name in the above command. You might recall that we suggested naming it /tmp/mii-sample/mii-initial.yaml.\n If you run kubectl get pods -n sample-domain1-ns --watch, then you should see the introspector job run and your WebLogic Server pods start. The output should look something like this:\n Click here to expand. 
$ kubectl get pods -n sample-domain1-ns --watch NAME READY STATUS RESTARTS AGE sample-domain1-introspect-domain-job-lqqj9 0/1 Pending 0 0s sample-domain1-introspect-domain-job-lqqj9 0/1 ContainerCreating 0 0s sample-domain1-introspect-domain-job-lqqj9 1/1 Running 0 1s sample-domain1-introspect-domain-job-lqqj9 0/1 Completed 0 65s sample-domain1-introspect-domain-job-lqqj9 0/1 Terminating 0 65s sample-domain1-admin-server 0/1 Pending 0 0s sample-domain1-admin-server 0/1 ContainerCreating 0 0s sample-domain1-admin-server 0/1 Running 0 1s sample-domain1-admin-server 1/1 Running 0 32s sample-domain1-managed-server1 0/1 Pending 0 0s sample-domain1-managed-server2 0/1 Pending 0 0s sample-domain1-managed-server1 0/1 ContainerCreating 0 0s sample-domain1-managed-server2 0/1 ContainerCreating 0 0s sample-domain1-managed-server1 0/1 Running 0 2s sample-domain1-managed-server2 0/1 Running 0 2s sample-domain1-managed-server1 1/1 Running 0 43s sample-domain1-managed-server2 1/1 Running 0 42s Alternatively, you can run /tmp/mii-sample/utils/wl-pod-wait.sh -p 3. This is a utility script that provides useful information about a domain\u0026rsquo;s pods and waits for them to reach a ready state, reach their target restartVersion, and reach their target image before exiting.\n Click here to expand the `wl-pod-wait.sh` usage. $ ./wl-pod-wait.sh -? Usage: wl-pod-wait.sh [-n mynamespace] [-d mydomainuid] \\ [-p expected_pod_count] \\ [-t timeout_secs] \\ [-q] Exits non-zero if 'timeout_secs' is reached before 'pod_count' is reached. Parameters: -d \u0026lt;domain_uid\u0026gt; : Defaults to 'sample-domain1'. -n \u0026lt;namespace\u0026gt; : Defaults to 'sample-domain1-ns'. pod_count \u0026gt; 0 : Wait until exactly 'pod_count' WebLogic server pods for a domain all (a) are ready, (b) have the same 'domainRestartVersion' label value as the current domain resource's 'spec.restartVersion, and (c) have the same image as the current domain resource's image. 
pod_count = 0 : Wait until there are no running WebLogic server pods for a domain. The default. -t \u0026lt;timeout\u0026gt; : Timeout in seconds. Defaults to '600'. -q : Quiet mode. Show only a count of wl pods that have reached the desired criteria. -? : This help. Click here to expand sample output from `wl-pod-wait.sh`. @@ [2020-04-30T13:50:42][seconds=0] Info: Waiting up to 600 seconds for exactly '3' WebLogic server pods to reach the following criteria: @@ [2020-04-30T13:50:42][seconds=0] Info: ready='true' @@ [2020-04-30T13:50:42][seconds=0] Info: image='model-in-image:WLS-v1' @@ [2020-04-30T13:50:42][seconds=0] Info: domainRestartVersion='1' @@ [2020-04-30T13:50:42][seconds=0] Info: namespace='sample-domain1-ns' @@ [2020-04-30T13:50:42][seconds=0] Info: domainUID='sample-domain1' @@ [2020-04-30T13:50:42][seconds=0] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:50:42][seconds=0] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------------------- ------- ----- ----- --------- 'sample-domain1-introspect-domain-job-rkdkg' '' '' '' 'Pending' @@ [2020-04-30T13:50:45][seconds=3] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:50:45][seconds=3] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------------------- ------- ----- ----- --------- 'sample-domain1-introspect-domain-job-rkdkg' '' '' '' 'Running' @@ [2020-04-30T13:51:50][seconds=68] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:51:50][seconds=68] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE ---- ------- ----- ----- ----- @@ [2020-04-30T13:51:59][seconds=77] Info: '0' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:51:59][seconds=77] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE ----------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'false' 'Pending' @@ [2020-04-30T13:52:02][seconds=80] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:52:02][seconds=80] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE ----------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'false' 'Running' @@ [2020-04-30T13:52:32][seconds=110] Info: '1' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:52:32][seconds=110] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'false' 'Pending' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'false' 'Pending' @@ [2020-04-30T13:52:34][seconds=112] Info: '1' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:52:34][seconds=112] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'false' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'false' 'Running' @@ [2020-04-30T13:53:14][seconds=152] Info: '3' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:53:14][seconds=152] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:53:14][seconds=152] Info: Success! If you see an error, then consult Debugging in the Model in Image user guide.\nInvoke the web application Now that all the initial use case resources have been deployed, you can invoke the sample web application through the Traefik ingress controller\u0026rsquo;s NodePort. Note: The web application will display a list of any data sources it finds, but we don\u0026rsquo;t expect it to find any because the model doesn\u0026rsquo;t contain any at this point.\nSend a web application request to the load balancer:\n$ curl -s -S -m 10 -H 'host: sample-domain1-cluster-cluster-1.mii-sample.org' \\ http://localhost:30305/myapp_war/index.jsp Or, if Traefik is unavailable and your Administration Server pod is running, you can use kubectl exec:\n$ kubectl exec -n sample-domain1-ns sample-domain1-admin-server -- bash -c \\ \u0026quot;curl -s -S -m 10 http://sample-domain1-cluster-cluster-1:8001/myapp_war/index.jsp\u0026quot; You should see output like the following:\n$ curl -s -S -m 10 -H 'host: sample-domain1-cluster-cluster-1.mii-sample.org' \\ http://localhost:30305/myapp_war/index.jsp \u0026lt;html\u0026gt;\u0026lt;body\u0026gt;\u0026lt;pre\u0026gt; ***************************************************************** Hello World! This is version 'v1' of the mii-sample JSP web-app. Welcome to WebLogic server 'managed-server2'! 
domain UID = 'sample-domain1' domain name = 'domain1' Found 1 local cluster runtime: Cluster 'cluster-1' Found 0 local data sources: ***************************************************************** \u0026lt;/pre\u0026gt;\u0026lt;/body\u0026gt;\u0026lt;/html\u0026gt; Note: If you\u0026rsquo;re running your curl commands on a remote machine, then substitute localhost with an external address suitable for contacting your Kubernetes cluster. A Kubernetes cluster address that often works can be obtained by using the address just after https:// in the KubeDNS line of the output from the kubectl cluster-info command.\nIf you want to continue to the next use case, then leave your domain running.\nUpdate1 use case This use case demonstrates dynamically adding a data source to your running domain. It demonstrates several features of WDT and Model in Image:\n The syntax used for updating a model is exactly the same syntax you use for creating the original model. A domain\u0026rsquo;s model can be updated dynamically by supplying a model update in a file in a Kubernetes ConfigMap. Model updates can be as simple as changing the value of a single attribute, or more complex, such as adding a JMS Server. For a detailed discussion of model updates, see Runtime Updates in the Model in Image user guide.\nThe operator does not support all possible dynamic model updates. For model update limitations, consult Runtime Updates in the Model in Image user docs, and carefully test any model update before attempting a dynamic update in production.\n Here are the steps:\n Ensure that you have a running domain.\nMake sure you have deployed the domain from the Initial use case.\n Create a data source model YAML file.\nCreate a WDT model snippet for a data source (or use the example provided). 
Make sure that its target is set to cluster-1, and that its initial capacity is set to 0.\nThe reason for the latter is to prevent the data source from causing a WebLogic Server startup failure if it can\u0026rsquo;t find the database, which would be likely to happen because we haven\u0026rsquo;t deployed one (unless you\u0026rsquo;re using the JRF path through the sample).\nHere\u0026rsquo;s an example data source model configuration that meets these criteria:\nresources: JDBCSystemResource: mynewdatasource: Target: 'cluster-1' JdbcResource: JDBCDataSourceParams: JNDIName: [ jdbc/mydatasource1, jdbc/mydatasource2 ] GlobalTransactionsProtocol: TwoPhaseCommit JDBCDriverParams: DriverName: oracle.jdbc.xa.client.OracleXADataSource URL: '@@SECRET:@@ENV:DOMAIN_UID@@-datasource-secret:url@@' PasswordEncrypted: '@@SECRET:@@ENV:DOMAIN_UID@@-datasource-secret:password@@' Properties: user: Value: 'sys as sysdba' oracle.net.CONNECT_TIMEOUT: Value: 5000 oracle.jdbc.ReadTimeout: Value: 30000 JDBCConnectionPoolParams: InitialCapacity: 0 MaxCapacity: 1 TestTableName: SQL ISVALID TestConnectionsOnReserve: true Place the above model snippet in a file named /tmp/mii-sample/mydatasource.yaml and then use it in the later step where we deploy the model ConfigMap, or alternatively, use the same data source that\u0026rsquo;s provided in /tmp/mii-sample/model-configmaps/datasource/model.20.datasource.yaml.\n Create the data source secret.\nThe data source references a new secret that needs to be created. 
Run the following commands to create the secret:\n$ kubectl -n sample-domain1-ns create secret generic \\ sample-domain1-datasource-secret \\ --from-literal=password=Oradoc_db1 \\ --from-literal=url=jdbc:oracle:thin:@oracle-db.default.svc.cluster.local:1521/devpdb.k8s $ kubectl -n sample-domain1-ns label secret \\ sample-domain1-datasource-secret \\ weblogic.domainUID=sample-domain1 We name and label secrets using their associated domain UID for two reasons:\n To make it obvious which secrets belong to which domains. To make it easier to clean up a domain. Typical cleanup scripts use the weblogic.domainUID label as a convenience for finding all the resources associated with a domain. Create a ConfigMap with the WDT model that contains the data source definition.\nRun the following commands:\n$ kubectl -n sample-domain1-ns create configmap sample-domain1-wdt-config-map \\ --from-file=/tmp/mii-sample/model-configmaps/datasource $ kubectl -n sample-domain1-ns label configmap sample-domain1-wdt-config-map \\ weblogic.domainUID=sample-domain1 If you\u0026rsquo;ve created your own data source file, then substitute the file name in the --from-file= parameter (we suggested /tmp/mii-sample/mydatasource.yaml earlier). Note that the --from-file= parameter can reference a single file, in which case it puts the designated file in the ConfigMap, or it can reference a directory, in which case it populates the ConfigMap with all of the files in the designated directory. We name and label the ConfigMap using its associated domain UID for two reasons:\n To make it obvious which ConfigMaps belong to which domains. To make it easier to clean up a domain. Typical cleanup scripts use the weblogic.domainUID label as a convenience for finding all resources associated with a domain. 
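A note on naming, since it matters once a ConfigMap supplies additional model files: ConfigMap model files are merged on top of the model files in the image, and within each source the file names determine the merge order, which is why this sample uses numeric prefixes (model.10.yaml in the image, model.20.datasource.yaml in the ConfigMap). The following trivial sketch only illustrates that name ordering; the operator and WDT perform the real merge (see the Model files documentation for the authoritative rules):

```shell
# Illustration only: file-name ordering determines model merge order.
# These file names come from this sample; the sort is just demonstrative.
image_models="model.10.yaml"
configmap_models="model.20.datasource.yaml"

# model.10.yaml sorts before model.20.datasource.yaml, so the
# data source model layers on top of the base model.
printf '%s\n%s\n' "$image_models" "$configmap_models" | sort
```

If you add more model files to the ConfigMap later, keeping the numeric-prefix convention makes the intended layering obvious at a glance.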
Update your domain resource to refer to the ConfigMap and secret.\n Option 1: Update your current domain resource file from the \u0026ldquo;Initial\u0026rdquo; use case.\n Add the secret to its spec.configuration.secrets stanza:\nspec: ... configuration: ... secrets: - sample-domain1-datasource-secret (Leave any existing secrets in place.)\n Change its spec.configuration.model.configMap to look like:\nspec: ... configuration: ... model: ... configMap: sample-domain1-wdt-config-map Apply your changed domain resource:\n$ kubectl apply -f your-domain-resource.yaml Option 2: Use the updated domain resource file that is supplied with the sample:\n$ kubectl apply -f /tmp/mii-sample/domain-resources/mii-update1-d1-WLS-v1-ds.yaml Restart (\u0026lsquo;roll\u0026rsquo;) the domain.\nNow that the data source is deployed in a ConfigMap and its secret is also deployed, and we have applied an updated domain resource with its spec.configuration.model.configMap and spec.configuration.secrets referencing the ConfigMap and secret, let\u0026rsquo;s tell the operator to roll the domain.\nWhen a model domain restarts, it will rerun its introspector job in order to regenerate its configuration, and it will also pass the configuration changes found by the introspector to each restarted server. One way to cause a running domain to restart is to change the domain\u0026rsquo;s spec.restartVersion. To do this:\n Option 1: Edit your domain custom resource.\n Call kubectl -n sample-domain1-ns edit domain sample-domain1. Edit the value of the spec.restartVersion field and save. The field is a string; typically, you use a number in this field and increment it with each restart. 
Option 2: Dynamically change your domain using kubectl patch.\n To get the current restartVersion call:\n$ kubectl -n sample-domain1-ns get domain sample-domain1 '-o=jsonpath={.spec.restartVersion}' Choose a new restart version that\u0026rsquo;s different from the current restart version.\n The field is a string; typically, you use a number in this field and increment it with each restart. Use kubectl patch to set the new value. For example, assuming the new restart version is 2:\n$ kubectl -n sample-domain1-ns patch domain sample-domain1 --type=json '-p=[{\u0026quot;op\u0026quot;: \u0026quot;replace\u0026quot;, \u0026quot;path\u0026quot;: \u0026quot;/spec/restartVersion\u0026quot;, \u0026quot;value\u0026quot;: \u0026quot;2\u0026quot; }]' Option 3: Use the sample helper script.\n Call /tmp/mii-sample/utils/patch-restart-version.sh -n sample-domain1-ns -d sample-domain1. This will perform the same kubectl get and kubectl patch commands as Option 2. Wait for the roll to complete.\nNow that you\u0026rsquo;ve started a domain roll, you\u0026rsquo;ll need to wait for it to complete if you want to verify that the data source was deployed.\n One way to do this is to call kubectl get pods -n sample-domain1-ns --watch and wait for the pods to cycle back to their ready state.\n Alternatively, you can run /tmp/mii-sample/utils/wl-pod-wait.sh -p 3. This is a utility script that provides useful information about a domain\u0026rsquo;s pods and waits for them to reach a ready state, reach their target restartVersion, and reach their target image before exiting.\n Click here to expand the `wl-pod-wait.sh` usage. $ ./wl-pod-wait.sh -? Usage: wl-pod-wait.sh [-n mynamespace] [-d mydomainuid] \\ [-p expected_pod_count] \\ [-t timeout_secs] \\ [-q] Exits non-zero if 'timeout_secs' is reached before 'pod_count' is reached. Parameters: -d \u0026lt;domain_uid\u0026gt; : Defaults to 'sample-domain1'. -n \u0026lt;namespace\u0026gt; : Defaults to 'sample-domain1-ns'. 
pod_count \u0026gt; 0 : Wait until exactly 'pod_count' WebLogic server pods for a domain all (a) are ready, (b) have the same 'domainRestartVersion' label value as the current domain resource's 'spec.restartVersion, and (c) have the same image as the current domain resource's image. pod_count = 0 : Wait until there are no running WebLogic server pods for a domain. The default. -t \u0026lt;timeout\u0026gt; : Timeout in seconds. Defaults to '600'. -q : Quiet mode. Show only a count of wl pods that have reached the desired criteria. -? : This help. Click here to expand sample output from `wl-pod-wait.sh` that shows a rolling domain. @@ [2020-04-30T13:53:19][seconds=0] Info: Waiting up to 600 seconds for exactly '3' WebLogic server pods to reach the following criteria: @@ [2020-04-30T13:53:19][seconds=0] Info: ready='true' @@ [2020-04-30T13:53:19][seconds=0] Info: image='model-in-image:WLS-v1' @@ [2020-04-30T13:53:19][seconds=0] Info: domainRestartVersion='2' @@ [2020-04-30T13:53:19][seconds=0] Info: namespace='sample-domain1-ns' @@ [2020-04-30T13:53:19][seconds=0] Info: domainUID='sample-domain1' @@ [2020-04-30T13:53:19][seconds=0] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:53:19][seconds=0] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-introspect-domain-job-wlkpr' '' '' '' 'Pending' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:53:20][seconds=1] Info: '0' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:53:20][seconds=1] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-introspect-domain-job-wlkpr' '' '' '' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:54:18][seconds=59] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:54:18][seconds=59] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------------------- ------- ----------------------- ------ ----------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-introspect-domain-job-wlkpr' '' '' '' 'Succeeded' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:54:19][seconds=60] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:54:19][seconds=60] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:54:31][seconds=72] Info: '0' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:54:31][seconds=72] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'false' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:54:40][seconds=81] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:54:40][seconds=81] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:54:52][seconds=93] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:54:52][seconds=93] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:54:58][seconds=99] Info: '0' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:54:58][seconds=99] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'false' 'Pending' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:55:00][seconds=101] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:55:00][seconds=101] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'false' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:55:12][seconds=113] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:55:12][seconds=113] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'false' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:55:24][seconds=125] Info: '0' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:55:24][seconds=125] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'false' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:55:33][seconds=134] Info: '1' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:55:33][seconds=134] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:55:34][seconds=135] Info: '1' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:55:34][seconds=135] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'false' 'Pending' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:55:40][seconds=141] Info: '1' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:55:40][seconds=141] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:55:44][seconds=145] Info: '1' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:55:44][seconds=145] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '2' 'model-in-image:WLS-v1' 'false' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:56:25][seconds=186] Info: '2' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:56:25][seconds=186] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:56:26][seconds=187] Info: '2' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:56:26][seconds=187] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'false' 'Pending' @@ [2020-04-30T13:56:30][seconds=191] Info: '2' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:56:30][seconds=191] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '2' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:56:34][seconds=195] Info: '2' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:56:34][seconds=195] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '2' 'model-in-image:WLS-v1' 'false' 'Pending' @@ [2020-04-30T13:57:09][seconds=230] Info: '3' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:57:09][seconds=230] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '2' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:57:09][seconds=230] Info: Success! After your domain is running, you can call the sample web application to determine if the data source was deployed.\nSend a web application request to the ingress controller:\n$ curl -s -S -m 10 -H 'host: sample-domain1-cluster-cluster-1.mii-sample.org' \\ http://localhost:30305/myapp_war/index.jsp Or, if Traefik is unavailable and your Administration Server pod is running, you can run kubectl exec:\n$ kubectl exec -n sample-domain1-ns sample-domain1-admin-server -- bash -c \\ \u0026quot;curl -s -S -m 10 http://sample-domain1-cluster-cluster-1:8001/myapp_war/index.jsp\u0026quot; You should see something like the following:\n Click here to see the expected web application output. $ curl -s -S -m 10 -H 'host: sample-domain1-cluster-cluster-1.mii-sample.org' \\ http://localhost:30305/myapp_war/index.jsp \u0026lt;html\u0026gt;\u0026lt;body\u0026gt;\u0026lt;pre\u0026gt; ***************************************************************** Hello World! This is version 'v1' of the mii-sample JSP web-app. Welcome to WebLogic server 'managed-server1'! 
domain UID = 'sample-domain1' domain name = 'domain1' Found 1 local cluster runtime: Cluster 'cluster-1' Found 1 local data source: Datasource 'mynewdatasource': State='Running' ***************************************************************** \u0026lt;/pre\u0026gt;\u0026lt;/body\u0026gt;\u0026lt;/html\u0026gt; If you see an error, then consult Debugging in the Model in Image user guide.\nThis completes the sample scenarios.\nCleanup To remove the resources you have created in these samples:\n Delete the domain resources.\n$ /tmp/operator-source/kubernetes/samples/scripts/delete-domain/delete-weblogic-domain-resources.sh -d sample-domain1 $ /tmp/operator-source/kubernetes/samples/scripts/delete-domain/delete-weblogic-domain-resources.sh -d sample-domain2 This deletes the domain and any related resources that are labeled with the domain UID sample-domain1 and sample-domain2.\nIt leaves the namespace intact, the operator running, the load balancer running (if installed), and the database running (if installed).\n Note: When you delete a domain, the operator should detect your domain deletion and shut down its pods. Wait for these pods to exit before deleting the operator that monitors the sample-domain1-ns namespace. 
You can monitor this process using the command kubectl get pods -n sample-domain1-ns --watch (ctrl-c to exit).\n If you set up the Traefik ingress controller:\n$ helm delete --purge traefik-operator $ kubectl delete namespace traefik If you set up a database for JRF:\n$ /tmp/operator-source/kubernetes/samples/scripts/create-oracle-db-service/stop-db-service.sh Delete the operator and its namespace:\n$ helm delete --purge sample-weblogic-operator $ kubectl delete namespace sample-weblogic-operator-ns Delete the domain\u0026rsquo;s namespace:\n$ kubectl delete namespace sample-domain1-ns Delete the images you may have created in this sample:\n$ docker image rm model-in-image:WLS-v1 $ docker image rm model-in-image:WLS-v2 $ docker image rm model-in-image:JRF-v1 $ docker image rm model-in-image:JRF-v2 References For references to the relevant user documentation, see:\n Model in Image user documentation Oracle WebLogic Server Deploy Tooling Oracle WebLogic Image Tool " | |||
| "content": " This feature is supported only in 3.0.0-rc1.\n Contents Introduction Model in Image domain types (WLS, JRF, and Restricted JRF) Use cases Sample directory structure Prerequisites for all domain types Additional prerequisites for JRF domains Initial use case: An initial WebLogic domain Update1 use case: Dynamically adding a data source using a model ConfigMap Cleanup References Introduction This sample demonstrates deploying a Model in Image domain home source type. Unlike Domain in PV and Domain in Image, Model in Image eliminates the need to pre-create your WebLogic domain home prior to deploying your domain resource. Instead, Model in Image uses a WebLogic Deploy Tooling (WDT) model to specify your WebLogic configuration.\nWDT models are a convenient and simple alternative to WebLogic WLST configuration scripts and templates. They compactly define a WebLogic domain using YAML files and support including application archives in a ZIP file. The WDT model format is described in the open source, WebLogic Deploy Tooling GitHub project, and the required directory structure for a WDT archive is specifically discussed here.\nFor more information on Model in Image, see the Model in Image user guide. For a comparison of Model in Image to other domain home source types, see Choose a domain home source type.\nModel in Image domain types (WLS, JRF, and Restricted JRF) There are three types of domains supported by Model in Image: a standard WLS domain, an Oracle Fusion Middleware Infrastructure Java Required Files (JRF) domain, and a RestrictedJRF domain. 
This sample demonstrates the WLS and JRF types.\nThe JRF domain path through the sample includes additional steps required for JRF: deploying an infrastructure database, initializing the database using the Repository Creation Utility (RCU) tool, referencing the infrastructure database from the WebLogic configuration, setting an Oracle Platform Security Services (OPSS) wallet password, and exporting/importing an OPSS wallet file. JRF domains may be used by Oracle products that layer on top of WebLogic Server, such as SOA and OSB. Similarly, RestrictedJRF domains may be used by Oracle layered products, such as Oracle Communications products.\nUse cases This sample demonstrates two Model in Image use cases:\n Initial: An initial WebLogic domain with the following characteristics:\n Image model-in-image:WLS-v1 with: A WebLogic installation A WebLogic Deploy Tooling (WDT) installation A WDT archive with version v1 of an exploded Java EE web application A WDT model with: A WebLogic Administration Server A WebLogic cluster A reference to the web application Kubernetes Secrets: WebLogic credentials Required WDT runtime password A domain resource with: spec.domainHomeSourceType: FromModel spec.image: model-in-image:WLS-v1 References to the secrets Update1: Demonstrates updating the initial domain by dynamically adding a data source using a model ConfigMap:\n Image model-in-image:WLS-v1: Same image as Initial use case Kubernetes Secrets: Same as Initial use case plus secrets for data source credentials and URL Kubernetes ConfigMap with: A WDT model for a data source targeted to the cluster A domain resource with: Same as Initial use case plus: spec.model.configMap referencing the ConfigMap References to data source secrets Sample directory structure The sample contains the following files and directories:\n Location Description domain-resources JRF and WLS domain resources. archives Source code location for WebLogic Deploy Tooling application ZIP archives. 
model-images Staging for each model image\u0026rsquo;s WDT YAML, WDT properties, and WDT archive ZIP files. The directories in model-images are named for their respective images. model-configmaps Staging files for a model ConfigMap that configures a data source. ingresses Ingress resources. utils/wl-pod-wait.sh Utility for watching the pods in a domain reach their expected restartVersion, image name, and ready state. utils/patch-restart-version.sh Utility for updating a running domain\u0026rsquo;s spec.restartVersion field (which causes it to \u0026lsquo;re-introspect\u0026rsquo; and \u0026lsquo;roll\u0026rsquo;). utils/opss-wallet.sh Utility for exporting or importing a JRF domain OPSS wallet file. Prerequisites for all domain types Choose the type of domain you\u0026rsquo;re going to use throughout the sample, WLS or JRF.\n The first time you try this sample, we recommend that you choose WLS even if you\u0026rsquo;re familiar with JRF. This is because WLS is simpler and will more easily familiarize you with Model in Image concepts. We recommend choosing JRF only if you are already familiar with JRF, you have already tried the WLS path through this sample, and you have a definite use case where you need to use JRF. 
The JAVA_HOME environment variable must be set and must reference a valid JDK 8 or 11 installation.\n Get the operator source from the release/3.0.0-rc1 branch and put it in /tmp/operator-source.\nFor example:\n$ mkdir /tmp/operator-source $ cd /tmp/operator-source $ git clone --branch release/3.0.0-rc1 https://github.com/oracle/weblogic-kubernetes-operator.git . Note: We will refer to the top directory of the operator source tree as /tmp/operator-source; however, you can use a different location.\n For additional information about obtaining the operator source, see the Developer Guide Requirements.\n Copy the sample to a new directory; for example, use directory /tmp/mii-sample.\n$ mkdir /tmp/mii-sample $ cp -r /tmp/operator-source/kubernetes/samples/scripts/create-weblogic-domain/model-in-image/* /tmp/mii-sample Note: We will refer to this working copy of the sample as /tmp/mii-sample; however, you can use a different location. Make sure an operator is set up to manage namespace sample-domain1-ns. 
Also, make sure a Traefik ingress controller is managing the same namespace and listening on port 30305.\nFor example, follow the same steps as the Quick Start guide from the beginning through to the Prepare for a domain step.\nMake sure you stop when you complete the \u0026ldquo;Prepare for a domain\u0026rdquo; step and then resume following these instructions.\n Set up ingresses that will redirect HTTP from Traefik port 30305 to the clusters in this sample\u0026rsquo;s WebLogic domains.\n Option 1: To create the ingresses, use the following YAML to create a file called /tmp/mii-sample/ingresses/myingresses.yaml and then call kubectl apply -f /tmp/mii-sample/ingresses/myingresses.yaml:\napiVersion: extensions/v1beta1 kind: Ingress metadata: name: traefik-ingress-sample-domain1-admin-server namespace: sample-domain1-ns labels: weblogic.domainUID: sample-domain1 annotations: kubernetes.io/ingress.class: traefik spec: rules: - host: http: paths: - path: /console backend: serviceName: sample-domain1-admin-server servicePort: 7001 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: traefik-ingress-sample-domain1-cluster-cluster-1 namespace: sample-domain1-ns labels: weblogic.domainUID: sample-domain1 annotations: kubernetes.io/ingress.class: traefik spec: rules: - host: sample-domain1-cluster-cluster-1.mii-sample.org http: paths: - path: backend: serviceName: sample-domain1-cluster-cluster-1 servicePort: 8001 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: traefik-ingress-sample-domain1-cluster-cluster-2 namespace: sample-domain1-ns labels: weblogic.domainUID: sample-domain1 annotations: kubernetes.io/ingress.class: traefik spec: rules: - host: sample-domain1-cluster-cluster-2.mii-sample.org http: paths: - path: backend: serviceName: sample-domain1-cluster-cluster-2 servicePort: 8001 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: traefik-ingress-sample-domain2-cluster-cluster-1 namespace: sample-domain1-ns labels: 
weblogic.domainUID: sample-domain2 annotations: kubernetes.io/ingress.class: traefik spec: rules: - host: sample-domain2-cluster-cluster-1.mii-sample.org http: paths: - path: backend: serviceName: sample-domain2-cluster-cluster-1 servicePort: 8001 Option 2: Run kubectl apply -f on each of the ingress YAML files that are already included in the sample source /tmp/mii-sample/ingresses directory:\n $ cd /tmp/mii-sample/ingresses $ kubectl apply -f traefik-ingress-sample-domain1-admin-server.yaml $ kubectl apply -f traefik-ingress-sample-domain1-cluster-cluster-1.yaml $ kubectl apply -f traefik-ingress-sample-domain1-cluster-cluster-2.yaml $ kubectl apply -f traefik-ingress-sample-domain2-cluster-cluster-1.yaml $ kubectl apply -f traefik-ingress-sample-domain2-cluster-cluster-2.yaml NOTE: We give each cluster ingress a different host name that is decorated using both its operator domain UID and its cluster name. This makes each cluster uniquely addressable even when cluster names are the same across different clusters. When using curl to access the WebLogic domain through the ingress, you will need to supply a host name header that matches the host names in the ingress.\n For more information on ingresses and load balancers, see Ingress.\n Obtain the WebLogic 12.2.1.4 image that is required to create the sample\u0026rsquo;s model images.\na. Use a browser to access Oracle Container Registry.\nb. Choose an image location: for JRF domains, select Middleware, then fmw-infrastructure; for WLS domains, select Middleware, then weblogic.\nc. Select Sign In and accept the license agreement.\nd. Use your terminal to log in to Docker locally: docker login container-registry.oracle.com.\ne. Later in this sample, when you run WebLogic Image Tool commands, the tool will use the image as a base image for creating model images. 
Specifically, the tool will implicitly call docker pull for one of the above licensed images as specified in the tool\u0026rsquo;s command line using the --fromImage parameter. For JRF, this sample specifies container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.4, and for WLS, the sample specifies container-registry.oracle.com/middleware/weblogic:12.2.1.4.\nIf you prefer, you can create your own base image and then substitute this image name in the WebLogic Image Tool --fromImage parameter throughout this sample. See Preparing a Base Image.\n Download the latest WebLogic Deploy Tooling and WebLogic Image Tool installer ZIP files to your /tmp/mii-sample/model-images directory.\nBoth WDT and WIT are required to create your Model in Image Docker images. Download the latest version of each tool\u0026rsquo;s installer ZIP file to the /tmp/mii-sample/model-images directory.\nFor example, visit the GitHub WebLogic Deploy Tooling Releases and WebLogic Image Tool Releases web pages to determine the latest release version for each, and then, assuming the version numbers are 1.9.3 and 1.8.4 respectively, call:\n$ curl -m 30 -fL https://github.com/oracle/weblogic-deploy-tooling/releases/download/release-1.9.3/weblogic-deploy.zip \\ -o /tmp/mii-sample/model-images/weblogic-deploy.zip $ curl -m 30 -fL https://github.com/oracle/weblogic-image-tool/releases/download/release-1.8.4/imagetool.zip \\ -o /tmp/mii-sample/model-images/imagetool.zip Set up the WebLogic Image Tool.\nRun the following commands:\n$ cd /tmp/mii-sample/model-images $ unzip imagetool.zip $ ./imagetool/bin/imagetool.sh cache addInstaller \\ --type wdt \\ --version latest \\ --path /tmp/mii-sample/model-images/weblogic-deploy.zip These steps will install WIT to the /tmp/mii-sample/model-images/imagetool directory, plus put a wdt_latest entry in the tool\u0026rsquo;s cache which points to the WDT ZIP installer. 
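The installer download URLs above follow the GitHub `release-<version>` tag convention for both tools. As a sketch, a pair of hypothetical helper functions (not part of the sample) can derive the URLs for any version:

```shell
# Sketch (hypothetical helpers, not part of the sample): build the WDT and
# WIT installer download URLs for a given release version. GitHub release
# assets for both tools are published under a "release-<version>" tag.
wdt_url() {
  echo "https://github.com/oracle/weblogic-deploy-tooling/releases/download/release-$1/weblogic-deploy.zip"
}
wit_url() {
  echo "https://github.com/oracle/weblogic-image-tool/releases/download/release-$1/imagetool.zip"
}

# Example usage (downloads commented out):
# curl -m 30 -fL "$(wdt_url 1.9.3)" -o /tmp/mii-sample/model-images/weblogic-deploy.zip
# curl -m 30 -fL "$(wit_url 1.8.4)" -o /tmp/mii-sample/model-images/imagetool.zip
```

This keeps the version number in one place if you script the download for a newer release.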
We will use WIT later in the sample for creating model images.\n Additional prerequisites for JRF domains NOTE: If you\u0026rsquo;re using a WLS domain type, skip this section and continue here.\n JRF Prerequisites Contents Introduction to JRF setups Set up and initialize an infrastructure database Increase introspection job timeout Important considerations for RCU model attributes, domain resource attributes, and secrets Introduction to JRF setups NOTE: The requirements in this section are in addition to Prerequisites for all domain types.\n A JRF domain requires an infrastructure database, initializing this database with RCU, and configuring your domain to access this database. All of these steps must occur before you create your domain.\nSet up and initialize an infrastructure database A JRF domain requires an infrastructure database and also requires initializing this database with a schema and a set of tables. The following example shows how to set up a database and use the RCU tool to create the infrastructure schema for a JRF domain. The database is set up with the following attributes:\n Attribute Value database Kubernetes namespace default database Kubernetes pod oracle-db database image container-registry.oracle.com/database/enterprise:12.2.0.1-slim database password Oradoc_db1 infrastructure schema prefix FMW1 infrastructure schema password Oradoc_db1 database URL oracle-db.default.svc.cluster.local:1521/devpdb.k8s Ensure that you have access to the database image, and then create a deployment using it:\n Use a browser to log in to https://container-registry.oracle.com, select database-\u0026gt;enterprise and accept the license agreement.\n Get the database image:\n In the local shell, docker login container-registry.oracle.com. In the local shell, docker pull container-registry.oracle.com/database/enterprise:12.2.0.1-slim. 
Use the sample script in /tmp/operator-source/kubernetes/samples/scripts/create-oracle-db-service to create an Oracle database running in the pod, oracle-db.\n$ cd /tmp/operator-source/kubernetes/samples/scripts/create-oracle-db-service $ ./start-db-service.sh This script will deploy a database in the default namespace with the connect string oracle-db.default.svc.cluster.local:1521/devpdb.k8s, and administration password Oradoc_db1.\nThis step is based on the steps documented in Run a Database.\nWARNING: The Oracle Database Docker images are supported only for non-production use. For more details, see My Oracle Support note: Oracle Support for Database Running on Docker (Doc ID 2216342.1).\n Use the sample script in /tmp/operator-source/kubernetes/samples/scripts/create-rcu-schema to create the RCU schema with the schema prefix FMW1.\nNote that this script assumes Oradoc_db1 is the DBA password, Oradoc_db1 is the schema password, and that the database URL is oracle-db.default.svc.cluster.local:1521/devpdb.k8s.\n$ cd /tmp/operator-source/kubernetes/samples/scripts/create-rcu-schema $ ./create-rcu-schema.sh -s FMW1 -i container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.4 NOTE: If you need to drop the repository, use this command:\n$ ./drop-rcu-schema.sh -s FMW1 Increase introspection job timeout JRF domain home creation can take longer than the introspection job\u0026rsquo;s default timeout, so you should increase the timeout. Use the configuration.introspectorJobActiveDeadlineSeconds field in your domain resource to override the default with a value of at least 300 seconds (the default is 120 seconds). 
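For an already-deployed domain resource, one way to apply this timeout override is a JSON merge patch. A minimal sketch, assuming the sample's domain name sample-domain1 in namespace sample-domain1-ns (the kubectl command is shown commented out):

```shell
# Sketch: build a JSON merge patch that sets
# spec.configuration.introspectorJobActiveDeadlineSeconds to 300 seconds.
PATCH='{"spec":{"configuration":{"introspectorJobActiveDeadlineSeconds":300}}}'
echo "$PATCH"

# Apply it to the running domain resource (assumes the sample's names):
# kubectl -n sample-domain1-ns patch domain sample-domain1 \
#   --type=merge -p "$PATCH"
```

A merge patch is convenient here because it only needs the path being changed; the rest of the domain spec is left untouched.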
Note that the JRF versions of the domain resource files that are provided in /tmp/mii-sample/domain-resources already set this value.\nImportant considerations for RCU model attributes, domain resource attributes, and secrets Before deploying your domain, to allow Model in Image to access the database and OPSS wallet, you must create an RCU access secret (containing the database connect string, user name, and password) that\u0026rsquo;s referenced from your model, and an OPSS wallet password secret that\u0026rsquo;s referenced from your domain resource. It\u0026rsquo;s also necessary to define an RCUDbInfo stanza in your model.\nThe sample includes examples of JRF models and domain resources in the /tmp/mii-sample/model-images and /tmp/mii-sample/domain-resources directories, and instructions in the following sections will describe setting up the RCU and OPSS secrets.\nWhen you follow the instructions later in this sample, avoid instructions that are WLS only, and substitute JRF for WLS in the corresponding model image tags and domain resource file names.\nFor example:\n JRF domain resources in this sample have an opss.walletPasswordSecret field that references a secret named sample-domain1-opss-wallet-password-secret, with password=welcome1.\n JRF image models in this sample have a domainInfo -\u0026gt; RCUDbInfo stanza that references a sample-domain1-rcu-access secret with appropriate values for attributes rcu_prefix, rcu_schema_password, and rcu_db_conn_string for accessing the Oracle database that you deployed to the default namespace as one of the prerequisite steps.\n Important considerations for reusing or sharing OPSS tables We do not recommend that most users share OPSS tables. 
Extreme caution is required when sharing OPSS tables between domains.\n When you successfully deploy your JRF domain resource for the first time, the introspector job will initialize the OPSS tables for the domain using the domainInfo -\u0026gt; RCUDbInfo stanza in the WDT model plus the configuration.opss.walletPasswordSecret specified in the domain resource. The job will also create a new domain home. Finally, the operator will also capture an OPSS wallet file from the new domain\u0026rsquo;s local directory and place this file in a new Kubernetes ConfigMap.\nThere are scenarios when the domain needs to be recreated between updates, such as when WebLogic credentials are changed, security roles defined in the WDT model have been changed, or you want to share the same infrastructure tables with different domains. In these scenarios, the operator needs the walletPasswordSecret as well as the OPSS wallet file, together with the exact information in domainInfo -\u0026gt; RCUDbInfo so that the domain can be recreated and access the same set of tables. Without the wallet file and wallet password, you will not be able to recreate a domain accessing the same set of tables, therefore we strongly recommend that you back up the wallet file.\nTo recover a domain\u0026rsquo;s OPSS tables between domain restarts or to share an OPSS schema between different domains, it is necessary to extract this wallet file from the domain\u0026rsquo;s automatically deployed introspector ConfigMap and save the OPSS wallet password secret that was used for the original domain. 
The wallet password and wallet file are needed again when you recreate the domain or share the database with other domains.\nTo save the wallet file, assuming that your namespace is sample-domain1-ns and your domain UID is sample-domain1:\n $ kubectl -n sample-domain1-ns \\ get configmap sample-domain1-weblogic-domain-introspect-cm \\ -o jsonpath='{.data.ewallet\.p12}' \\ \u0026gt; ./ewallet.p12 Alternatively, you can save the file using the sample\u0026rsquo;s wallet utility:\n $ /tmp/mii-sample/utils/opss-wallet.sh -n sample-domain1-ns -d sample-domain1 -wf ./ewallet.p12 # For help: /tmp/mii-sample/utils/opss-wallet.sh -? Important! Back up your wallet file to a safe location that can be retrieved later.\nTo reuse the wallet file in subsequent redeployments or to share the domain\u0026rsquo;s OPSS tables between different domains:\n Load the saved wallet file into a secret with a key named walletFile (again, assuming that your domain UID is sample-domain1 and your namespace is sample-domain1-ns): $ kubectl -n sample-domain1-ns create secret generic sample-domain1-opss-walletfile-secret \\ --from-file=walletFile=./ewallet.p12 $ kubectl -n sample-domain1-ns label secret sample-domain1-opss-walletfile-secret \\ weblogic.domainUID=sample-domain1 Alternatively, use the sample\u0026rsquo;s wallet utility:\n $ /tmp/mii-sample/utils/opss-wallet.sh -n sample-domain1-ns -d sample-domain1 -wf ./ewallet.p12 -ws sample-domain1-opss-walletfile-secret # For help: /tmp/mii-sample/utils/opss-wallet.sh -? 
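Before backing up the extracted wallet or loading it into a secret, it is worth a quick sanity check that the extraction produced a non-empty file (an empty ewallet.p12 usually means the jsonpath key did not match). This is a minimal sketch, assuming the wallet was saved to ./ewallet.p12 as shown above:

```shell
# Sanity-check the extracted wallet file before backing it up or
# loading it into the walletFile secret (path from the steps above).
WALLET=./ewallet.p12
if [ -s "$WALLET" ]; then
  echo "wallet ok: $(wc -c < "$WALLET") bytes"
else
  echo "wallet missing or empty: $WALLET" >&2
fi
```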
Modify your domain resource JRF YAML files to provide the wallet file secret name, for example: configuration: opss: # Name of secret with walletPassword for extracting the wallet walletPasswordSecret: sample-domain1-opss-wallet-password-secret # Name of secret with walletFile containing base64 encoded opss wallet walletFileSecret: sample-domain1-opss-walletfile-secret Note: The sample JRF domain resource files included in /tmp/mii-sample/domain-resources already have the above YAML stanza.\n Initial use case Contents Overview Image creation Image creation - Introduction Understanding our first archive Staging a ZIP file of the archive Staging model files Creating the image with WIT Deploy resources Deploy resources - Introduction Secrets Domain resource Overview In this use case, we set up an initial WebLogic domain. This involves:\n A WDT archive ZIP file that contains your applications. A WDT model that describes your WebLogic configuration. A Docker image that contains your WDT model files and archive. Creating secrets for the domain. Creating a domain resource for the domain that references your secrets and image. After the domain resource is deployed, the WebLogic operator will start an \u0026lsquo;introspector job\u0026rsquo; that converts your models into a WebLogic configuration, and then the operator will pass this configuration to each WebLogic Server in the domain.\nPerform the steps in Prerequisites for all domain types before performing the steps in this use case.\nIf you are taking the JRF path through the sample, then substitute JRF for WLS in your image names and directory paths. 
Also note that the JRF-v1 model YAML differs from the WLS-v1 YAML file (it contains an additional domainInfo -\u0026gt; RCUDbInfo stanza).\n Image creation - Introduction The goal of the initial use case \u0026lsquo;image creation\u0026rsquo; is to demonstrate using the WebLogic Image Tool to create an image named model-in-image:WLS-v1 from files that we will stage to /tmp/mii-sample/model-images/model-in-image:WLS-v1/. The staged files will contain a web application in a WDT archive, and WDT model configuration for a WebLogic Administration Server called admin-server and a WebLogic cluster called cluster-1.\nOverall, a Model in Image image must contain a WebLogic installation and also a WebLogic Deploy Tooling installation in its /u01/wdt/weblogic-deploy directory. In addition, if you have WDT model archive files, then the image must also contain these files in its /u01/wdt/models directory. Finally, an image may optionally also contain your WDT model YAML and properties files in the same /u01/wdt/models directory. If you do not specify WDT model YAML in your /u01/wdt/models directory, then the model YAML must be supplied dynamically using a Kubernetes ConfigMap that is referenced by your domain resource spec.model.configMap attribute. We will provide an example of using a model ConfigMap later in this sample.\nLet\u0026rsquo;s walk through the steps for creating the image model-in-image:WLS-v1:\n Understanding our first archive Staging a ZIP file of the archive Staging model files Creating the image with WIT Understanding our first archive The sample includes a predefined archive directory in /tmp/mii-sample/archives/archive-v1 that we will use to create an archive ZIP file for the image.\nThe archive top directory, named wlsdeploy, contains a directory named applications, which includes an \u0026lsquo;exploded\u0026rsquo; sample JSP web application in the directory, myapp-v1. 
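For orientation, here is a sketch of listing the staged archive directory tree (directory names are from the sample source staged under /tmp; adjust the path if you staged the sample elsewhere):

```shell
# List the directories of the staged archive source; per the WDT archive
# structure, the top directory must be 'wlsdeploy'.
ARCHIVE_SRC=/tmp/mii-sample/archives/archive-v1
find "$ARCHIVE_SRC" -maxdepth 3 -type d 2>/dev/null \
  || echo "archive source not found: $ARCHIVE_SRC"
# Among others, expect to see:
#   .../archive-v1/wlsdeploy
#   .../archive-v1/wlsdeploy/applications
#   .../archive-v1/wlsdeploy/applications/myapp-v1
```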
Three useful aspects to remember about WDT archives are:\n A model image can contain multiple WDT archives. WDT archives can contain multiple applications, libraries, and other components. WDT archives have a well defined directory structure, which always has wlsdeploy as the top directory. If you are interested in the web application source, click here to see the JSP code. \u0026lt;%-- Copyright (c) 2019, 2020, Oracle Corporation and/or its affiliates. --%\u0026gt; \u0026lt;%-- Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl. --%\u0026gt; \u0026lt;%@ page import=\u0026quot;javax.naming.InitialContext\u0026quot; %\u0026gt; \u0026lt;%@ page import=\u0026quot;javax.management.*\u0026quot; %\u0026gt; \u0026lt;%@ page import=\u0026quot;java.io.*\u0026quot; %\u0026gt; \u0026lt;% InitialContext ic = null; try { ic = new InitialContext(); String srName=System.getProperty(\u0026quot;weblogic.Name\u0026quot;); String domainUID=System.getenv(\u0026quot;DOMAIN_UID\u0026quot;); String domainName=System.getenv(\u0026quot;CUSTOM_DOMAIN_NAME\u0026quot;); out.println(\u0026quot;\u0026lt;html\u0026gt;\u0026lt;body\u0026gt;\u0026lt;pre\u0026gt;\u0026quot;); out.println(\u0026quot;*****************************************************************\u0026quot;); out.println(); out.println(\u0026quot;Hello World! 
This is version 'v1' of the mii-sample JSP web-app.\u0026quot;); out.println(); out.println(\u0026quot;Welcome to WebLogic server '\u0026quot; + srName + \u0026quot;'!\u0026quot;); out.println(); out.println(\u0026quot; domain UID = '\u0026quot; + domainUID +\u0026quot;'\u0026quot;); out.println(\u0026quot; domain name = '\u0026quot; + domainName +\u0026quot;'\u0026quot;); out.println(); MBeanServer mbs = (MBeanServer)ic.lookup(\u0026quot;java:comp/env/jmx/runtime\u0026quot;); // display the current server's cluster name Set\u0026lt;ObjectInstance\u0026gt; clusterRuntimes = mbs.queryMBeans(new ObjectName(\u0026quot;*:Type=ClusterRuntime,*\u0026quot;), null); out.println(\u0026quot;Found \u0026quot; + clusterRuntimes.size() + \u0026quot; local cluster runtime\u0026quot; + (String)((clusterRuntimes.size()!=1)?\u0026quot;s:\u0026quot;:\u0026quot;:\u0026quot;)); for (ObjectInstance clusterRuntime : clusterRuntimes) { String cName = (String)mbs.getAttribute(clusterRuntime.getObjectName(), \u0026quot;Name\u0026quot;); out.println(\u0026quot; Cluster '\u0026quot; + cName + \u0026quot;'\u0026quot;); } out.println(); // display local data sources ObjectName jdbcRuntime = new ObjectName(\u0026quot;com.bea:ServerRuntime=\u0026quot; + srName + \u0026quot;,Name=\u0026quot; + srName + \u0026quot;,Type=JDBCServiceRuntime\u0026quot;); ObjectName[] dataSources = (ObjectName[])mbs.getAttribute(jdbcRuntime, \u0026quot;JDBCDataSourceRuntimeMBeans\u0026quot;); out.println(\u0026quot;Found \u0026quot; + dataSources.length + \u0026quot; local data source\u0026quot; + (String)((dataSources.length!=1)?\u0026quot;s:\u0026quot;:\u0026quot;:\u0026quot;)); for (ObjectName dataSource : dataSources) { String dsName = (String)mbs.getAttribute(dataSource, \u0026quot;Name\u0026quot;); String dsState = (String)mbs.getAttribute(dataSource, \u0026quot;State\u0026quot;); out.println(\u0026quot; Datasource '\u0026quot; + dsName + \u0026quot;': State='\u0026quot; + dsState +\u0026quot;'\u0026quot;); } 
out.println(); out.println(\u0026quot;*****************************************************************\u0026quot;); } catch (Throwable t) { t.printStackTrace(new PrintStream(response.getOutputStream())); } finally { out.println(\u0026quot;\u0026lt;/pre\u0026gt;\u0026lt;/body\u0026gt;\u0026lt;/html\u0026gt;\u0026quot;); if (ic != null) ic.close(); } %\u0026gt; The application displays important details about the WebLogic Server that it\u0026rsquo;s running on: namely its domain name, cluster name, and server name, as well as the names of any data sources that are targeted to the server. You can also see that the application output reports that it\u0026rsquo;s at version v1; we will update this to v2 in a future use case to demonstrate upgrading the application.\nStaging a ZIP file of the archive When we create our image, we will use the files in staging directory /tmp/mii-sample/model-images/model-in-image__WLS-v1. In preparation, we need it to contain a ZIP file of the WDT application archive.\nRun the following commands to create your application archive ZIP file and put it in the expected directory:\n# Delete existing archive.zip in case we have an old leftover version $ rm -f /tmp/mii-sample/model-images/model-in-image__WLS-v1/archive.zip # Move to the directory which contains the source files for our archive $ cd /tmp/mii-sample/archives/archive-v1 # Zip the archive to the location we will later use when we run the WebLogic Image Tool $ zip -r /tmp/mii-sample/model-images/model-in-image__WLS-v1/archive.zip wlsdeploy Staging model files In this step, we explore the staged WDT model YAML file and properties in directory /tmp/mii-sample/model-images/model-in-image__WLS-v1. The model in this directory references the web application in our archive, configures a WebLogic Administration Server, and configures a WebLogic cluster. 
It consists of only two files: model.10.properties, a properties file with a single property, and model.10.yaml, a YAML file with our WebLogic configuration model.\nHere is the WLS model.10.properties:\nCLUSTER_SIZE=5 Here is the WLS model.10.yaml:\ndomainInfo: AdminUserName: '@@SECRET:__weblogic-credentials__:username@@' AdminPassword: '@@SECRET:__weblogic-credentials__:password@@' ServerStartMode: 'prod' topology: Name: '@@ENV:CUSTOM_DOMAIN_NAME@@' AdminServerName: 'admin-server' Cluster: 'cluster-1': DynamicServers: ServerTemplate: 'cluster-1-template' ServerNamePrefix: 'managed-server' DynamicClusterSize: '@@PROP:CLUSTER_SIZE@@' MaxDynamicClusterSize: '@@PROP:CLUSTER_SIZE@@' MinDynamicClusterSize: '0' CalculatedListenPorts: false Server: 'admin-server': ListenPort: 7001 ServerTemplate: 'cluster-1-template': Cluster: 'cluster-1' ListenPort: 8001 appDeployments: Application: myapp: SourcePath: 'wlsdeploy/applications/myapp-v1' ModuleType: ear Target: 'cluster-1' Click here to expand the JRF `model.10.yaml`, and note the RCUDbInfo stanza and its references to a DOMAIN_UID-rcu-access secret. 
domainInfo: AdminUserName: '@@SECRET:__weblogic-credentials__:username@@' AdminPassword: '@@SECRET:__weblogic-credentials__:password@@' ServerStartMode: 'prod' RCUDbInfo: rcu_prefix: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-access:rcu_prefix@@' rcu_schema_password: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-access:rcu_schema_password@@' rcu_db_conn_string: '@@SECRET:@@ENV:DOMAIN_UID@@-rcu-access:rcu_db_conn_string@@' topology: AdminServerName: 'admin-server' Name: '@@ENV:CUSTOM_DOMAIN_NAME@@' Cluster: 'cluster-1': Server: 'admin-server': ListenPort: 7001 'managed-server1-c1-': Cluster: 'cluster-1' ListenPort: 8001 'managed-server2-c1-': Cluster: 'cluster-1' ListenPort: 8001 'managed-server3-c1-': Cluster: 'cluster-1' ListenPort: 8001 'managed-server4-c1-': Cluster: 'cluster-1' ListenPort: 8001 appDeployments: Application: myapp: SourcePath: 'wlsdeploy/applications/myapp-v1' ModuleType: ear Target: 'cluster-1' The model files:\n Define a WebLogic domain with:\n Cluster cluster-1 Administration Server admin-server A cluster-1 targeted ear application that\u0026rsquo;s located in the WDT archive ZIP file at wlsdeploy/applications/myapp-v1 Leverage macros to inject external values:\n The property file CLUSTER_SIZE property is referenced in the model YAML DynamicClusterSize and MaxDynamicClusterSize fields using a PROP macro. The model file domain name is injected using a custom environment variable named CUSTOM_DOMAIN_NAME using an ENV macro. We set this environment variable later in this sample using an env field in its domain resource. This conveniently provides a simple way to deploy multiple differently named domains using the same model image. The model file administrator user name and password are set using a weblogic-credentials secret macro reference to the WebLogic credential secret. This secret is in turn referenced using the weblogicCredentialsSecret field in the domain resource. 
The weblogic-credentials is a reserved name that always dereferences to the owning domain resource\u0026rsquo;s actual WebLogic credentials secret name. A Model in Image image can contain multiple properties files, archive ZIP files, and YAML files, but in this sample we use just one of each. For a full discussion of Model in Image\u0026rsquo;s model file naming conventions, file loading order, and macro syntax, see Model files in the Model in Image user documentation.\nCreating the image with WIT Note: If you are using JRF in this sample, substitute JRF for each occurrence of WLS in the imagetool command line below, plus substitute container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.4 for the --fromImage value.\n At this point, we have staged all of the files needed for image model-in-image:WLS-v1; they include:\n /tmp/mii-sample/model-images/weblogic-deploy.zip /tmp/mii-sample/model-images/model-in-image__WLS-v1/model.10.yaml /tmp/mii-sample/model-images/model-in-image__WLS-v1/model.10.properties /tmp/mii-sample/model-images/model-in-image__WLS-v1/archive.zip If you don\u0026rsquo;t see the weblogic-deploy.zip file, then it means that you missed a step in the prerequisites.\nNow let\u0026rsquo;s use the Image Tool to create an image named model-in-image:WLS-v1 that\u0026rsquo;s layered on a base WebLogic image. 
We\u0026rsquo;ve already set up this tool during the prerequisite steps at the beginning of this sample.\nRun the following commands to create the model image and verify that it worked:\n$ cd /tmp/mii-sample/model-images $ ./imagetool/bin/imagetool.sh update \\ --tag model-in-image:WLS-v1 \\ --fromImage container-registry.oracle.com/middleware/weblogic:12.2.1.4 \\ --wdtModel ./model-in-image__WLS-v1/model.10.yaml \\ --wdtVariables ./model-in-image__WLS-v1/model.10.properties \\ --wdtArchive ./model-in-image__WLS-v1/archive.zip \\ --wdtModelOnly \\ --wdtDomainType WLS If you don\u0026rsquo;t see the imagetool directory, then it means that you missed a step in the prerequisites.\nThis command runs the WebLogic Image Tool in its Model in Image mode and does the following:\n Builds the final Docker image as a layer on the container-registry.oracle.com/middleware/weblogic:12.2.1.4 base image. Copies the WDT ZIP file that\u0026rsquo;s referenced in the WIT cache into the image. Note that we cached WDT in WIT using the keyword latest when we set up the cache during the sample prerequisite steps. This lets WIT implicitly assume it\u0026rsquo;s the desired WDT version and removes the need to pass a --wdtVersion flag. Copies the specified WDT model, properties, and application archives to image location /u01/wdt/models. When the command succeeds, it should end with output like:\n[INFO ] Build successful. Build time=36s. Image tag=model-in-image:WLS-v1 Also, if you run the docker images command, then you should see a Docker image named model-in-image:WLS-v1.\nDeploy resources - Introduction In this section we will deploy our new image to namespace sample-domain1-ns, including the following steps:\n Create a secret containing your WebLogic administrator user name and password. Create a secret containing your Model in Image runtime encryption password: All Model in Image domains must supply a runtime encryption secret with a password value. 
It is used to encrypt configuration that is passed around internally by the operator. The value must be kept private but can be arbitrary; you can optionally supply a different secret value every time you restart the domain. If your domain type is JRF, create secrets containing your RCU access URL, credentials, and prefix. Deploy a domain resource YAML file that references the new image. Wait for the domain\u0026rsquo;s pods to start and reach their ready state. Secrets First, create the secrets needed by both WLS and JRF type model domains. In this case, we have two secrets.\nRun the following kubectl commands to deploy the required secrets:\n$ kubectl -n sample-domain1-ns create secret generic \\ sample-domain1-weblogic-credentials \\ --from-literal=username=weblogic --from-literal=password=welcome1 $ kubectl -n sample-domain1-ns label secret \\ sample-domain1-weblogic-credentials \\ weblogic.domainUID=sample-domain1 $ kubectl -n sample-domain1-ns create secret generic \\ sample-domain1-runtime-encryption-secret \\ --from-literal=password=my_runtime_password $ kubectl -n sample-domain1-ns label secret \\ sample-domain1-runtime-encryption-secret \\ weblogic.domainUID=sample-domain1 Some important details about these secrets:\n The WebLogic credentials secret:\n It is required and must contain username and password fields. It must be referenced by the spec.weblogicCredentialsSecret field in your domain resource. It also must be referenced by macros in the domainInfo.AdminUserName and domainInfo.AdminPassWord fields in your model YAML file. The Model WDT runtime secret:\n This is a special secret required by Model in Image. It must contain a password field. It must be referenced using the spec.model.runtimeEncryptionSecret attribute in its domain resource. It must remain the same for as long as the domain is deployed to Kubernetes, but can be changed between deployments. 
It is used to encrypt data as it\u0026rsquo;s internally passed using log files from the domain\u0026rsquo;s introspector job and on to its WebLogic Server pods. Deleting and recreating the secrets:\n We delete a secret before creating it; otherwise, the kubectl create secret command would fail if the secret already exists. This also allows us to change the secret when using the kubectl create secret command. We name and label secrets using their associated domain UID for two reasons:\n To make it obvious which secrets belong to which domains. To make it easier to clean up a domain. Typical cleanup scripts use the weblogic.domainUID label as a convenience for finding all resources associated with a domain. If you\u0026rsquo;re following the JRF path through the sample, then you also need to deploy the additional secret referenced by macros in the JRF model RCUDbInfo clause, plus an OPSS wallet password secret. For details about the uses of these secrets, see the Model in Image user documentation.\n Click here for the commands for deploying additional secrets for JRF. $ kubectl -n sample-domain1-ns create secret generic \\ sample-domain1-rcu-access \\ --from-literal=rcu_prefix=FMW1 \\ --from-literal=rcu_schema_password=Oradoc_db1 \\ --from-literal=rcu_db_conn_string=oracle-db.default.svc.cluster.local:1521/devpdb.k8s $ kubectl -n sample-domain1-ns label secret \\ sample-domain1-rcu-access \\ weblogic.domainUID=sample-domain1 $ kubectl -n sample-domain1-ns create secret generic \\ sample-domain1-opss-wallet-password-secret \\ --from-literal=walletPassword=welcome1 $ kubectl -n sample-domain1-ns label secret \\ sample-domain1-opss-wallet-password-secret \\ weblogic.domainUID=sample-domain1 Domain resource Now let\u0026rsquo;s create a domain resource. 
A domain resource is the key resource that tells the operator how to deploy a WebLogic domain.\nCopy the following to a file called /tmp/mii-sample/mii-initial.yaml or similar, or use the file /tmp/mii-sample/domain-resources/WLS/mii-initial-d1-WLS-v1.yaml that is included in the sample source.\n Click here to expand the WLS domain resource YAML. # # This is an example of how to define a Domain resource. # # If you are using 3.0.0-rc1, then the version on the following line # should be `v7` not `v6`. apiVersion: \u0026quot;weblogic.oracle/v6\u0026quot; kind: Domain metadata: name: sample-domain1 namespace: sample-domain1-ns labels: weblogic.resourceVersion: domain-v2 weblogic.domainUID: sample-domain1 spec: # Set to 'FromModel' to indicate 'Model in Image'. domainHomeSourceType: FromModel # The WebLogic Domain Home, this must be a location within # the image for 'Model in Image' domains. domainHome: /u01/domains/sample-domain1 # The WebLogic Server Docker image that the Operator uses to start the domain image: \u0026quot;model-in-image:WLS-v1\u0026quot; # Defaults to \u0026quot;Always\u0026quot; if image tag (version) is ':latest' imagePullPolicy: \u0026quot;IfNotPresent\u0026quot; # Identify which Secret contains the credentials for pulling an image #imagePullSecrets: #- name: regsecret # Identify which Secret contains the WebLogic Admin credentials, # the secret must contain 'username' and 'password' fields. webLogicCredentialsSecret: name: sample-domain1-weblogic-credentials # Whether to include the WebLogic server stdout in the pod's stdout, default is true includeServerOutInPodLog: true # Whether to enable overriding your log file location, see also 'logHome' #logHomeEnabled: false # The location for domain log, server logs, server out, and Node Manager log files # see also 'logHomeEnabled', 'volumes', and 'volumeMounts'. 
#logHome: /shared/logs/sample-domain1 # Set which WebLogic servers the Operator will start # - \u0026quot;NEVER\u0026quot; will not start any server in the domain # - \u0026quot;ADMIN_ONLY\u0026quot; will start up only the administration server (no managed servers will be started) # - \u0026quot;IF_NEEDED\u0026quot; will start all non-clustered servers, including the administration server, and clustered servers up to their replica count. serverStartPolicy: \u0026quot;IF_NEEDED\u0026quot; # Settings for all server pods in the domain including the introspector job pod serverPod: # Optional new or overridden environment variables for the domain's pods # - This sample uses CUSTOM_DOMAIN_NAME in its image model file # to set the Weblogic domain name env: - name: CUSTOM_DOMAIN_NAME value: \u0026quot;domain1\u0026quot; - name: JAVA_OPTIONS value: \u0026quot;-Dweblogic.StdoutDebugEnabled=false\u0026quot; - name: USER_MEM_ARGS value: \u0026quot;-XX:+UseContainerSupport -Djava.security.egd=file:/dev/./urandom \u0026quot; # Optional volumes and mounts for the domain's pods. See also 'logHome'. #volumes: #- name: weblogic-domain-storage-volume # persistentVolumeClaim: # claimName: sample-domain1-weblogic-sample-pvc #volumeMounts: #- mountPath: /shared # name: weblogic-domain-storage-volume # The desired behavior for starting the domain's administration server. 
adminServer: # The serverStartState legal values are \u0026quot;RUNNING\u0026quot; or \u0026quot;ADMIN\u0026quot; # \u0026quot;RUNNING\u0026quot; means the listed server will be started up to \u0026quot;RUNNING\u0026quot; mode # \u0026quot;ADMIN\u0026quot; means the listed server will be start up to \u0026quot;ADMIN\u0026quot; mode serverStartState: \u0026quot;RUNNING\u0026quot; # Setup a Kubernetes node port for the administration server default channel #adminService: # channels: # - channelName: default # nodePort: 30701 # The number of managed servers to start for unlisted clusters replicas: 1 # The desired behavior for starting a specific cluster's member servers clusters: - clusterName: cluster-1 serverStartState: \u0026quot;RUNNING\u0026quot; replicas: 2 # Change the `restartVersion` to force the introspector job to rerun # and apply any new model configuration, to also force a subsequent # roll of your domain's WebLogic pods. restartVersion: '1' configuration: # Settings for domainHomeSourceType 'FromModel' model: # Valid model domain types are 'WLS', 'JRF', and 'RestrictedJRF', default is 'WLS' domainType: \u0026quot;WLS\u0026quot; # Optional configmap for additional models and variable files #configMap: sample-domain1-wdt-config-map # All 'FromModel' domains require a runtimeEncryptionSecret with a 'password' field runtimeEncryptionSecret: sample-domain1-runtime-encryption-secret # Secrets that are referenced by model yaml macros # (the model yaml in the optional configMap or in the image) #secrets: #- sample-domain1-datasource-secret Click here to expand the JRF domain resource YAML. # Copyright (c) 2020, Oracle Corporation and/or its affiliates. # Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl. # # This is an example of how to define a Domain resource. # # If you are using 3.0.0-rc1, then the version on the following line # should be `v7` not `v6`. 
apiVersion: \u0026quot;weblogic.oracle/v6\u0026quot; kind: Domain metadata: name: sample-domain1 namespace: sample-domain1-ns labels: weblogic.resourceVersion: domain-v2 weblogic.domainUID: sample-domain1 spec: # Set to 'FromModel' to indicate 'Model in Image'. domainHomeSourceType: FromModel # The WebLogic Domain Home, this must be a location within # the image for 'Model in Image' domains. domainHome: /u01/domains/sample-domain1 # The WebLogic Server Docker image that the Operator uses to start the domain image: \u0026quot;model-in-image:JRF-v1\u0026quot; # Defaults to \u0026quot;Always\u0026quot; if image tag (version) is ':latest' imagePullPolicy: \u0026quot;IfNotPresent\u0026quot; # Identify which Secret contains the credentials for pulling an image #imagePullSecrets: #- name: regsecret # Identify which Secret contains the WebLogic Admin credentials, # the secret must contain 'username' and 'password' fields. webLogicCredentialsSecret: name: sample-domain1-weblogic-credentials # Whether to include the WebLogic server stdout in the pod's stdout, default is true includeServerOutInPodLog: true # Whether to enable overriding your log file location, see also 'logHome' #logHomeEnabled: false # The location for domain log, server logs, server out, and Node Manager log files # see also 'logHomeEnabled', 'volumes', and 'volumeMounts'. #logHome: /shared/logs/sample-domain1 # Set which WebLogic servers the Operator will start # - \u0026quot;NEVER\u0026quot; will not start any server in the domain # - \u0026quot;ADMIN_ONLY\u0026quot; will start up only the administration server (no managed servers will be started) # - \u0026quot;IF_NEEDED\u0026quot; will start all non-clustered servers, including the administration server, and clustered servers up to their replica count. 
serverStartPolicy: \u0026quot;IF_NEEDED\u0026quot; # Settings for all server pods in the domain including the introspector job pod serverPod: # Optional new or overridden environment variables for the domain's pods # - This sample uses CUSTOM_DOMAIN_NAME in its image model file # to set the Weblogic domain name env: - name: CUSTOM_DOMAIN_NAME value: \u0026quot;domain1\u0026quot; - name: JAVA_OPTIONS value: \u0026quot;-Dweblogic.StdoutDebugEnabled=false\u0026quot; - name: USER_MEM_ARGS value: \u0026quot;-XX:+UseContainerSupport -Djava.security.egd=file:/dev/./urandom \u0026quot; # Optional volumes and mounts for the domain's pods. See also 'logHome'. #volumes: #- name: weblogic-domain-storage-volume # persistentVolumeClaim: # claimName: sample-domain1-weblogic-sample-pvc #volumeMounts: #- mountPath: /shared # name: weblogic-domain-storage-volume # The desired behavior for starting the domain's administration server. adminServer: # The serverStartState legal values are \u0026quot;RUNNING\u0026quot; or \u0026quot;ADMIN\u0026quot; # \u0026quot;RUNNING\u0026quot; means the listed server will be started up to \u0026quot;RUNNING\u0026quot; mode # \u0026quot;ADMIN\u0026quot; means the listed server will be start up to \u0026quot;ADMIN\u0026quot; mode serverStartState: \u0026quot;RUNNING\u0026quot; # Setup a Kubernetes node port for the administration server default channel #adminService: # channels: # - channelName: default # nodePort: 30701 # The number of managed servers to start for unlisted clusters replicas: 1 # The desired behavior for starting a specific cluster's member servers clusters: - clusterName: cluster-1 serverStartState: \u0026quot;RUNNING\u0026quot; replicas: 2 # Change the restartVersion to force the introspector job to rerun # and apply any new model configuration, to also force a subsequent # roll of your domain's WebLogic pods. 
restartVersion: '1' configuration: # Settings for domainHomeSourceType 'FromModel' model: # Valid model domain types are 'WLS', 'JRF', and 'RestrictedJRF', default is 'WLS' domainType: \u0026quot;JRF\u0026quot; # Optional configmap for additional models and variable files #configMap: sample-domain1-wdt-config-map # All 'FromModel' domains require a runtimeEncryptionSecret with a 'password' field runtimeEncryptionSecret: sample-domain1-runtime-encryption-secret # Secrets that are referenced by model yaml macros # (the model yaml in the optional configMap or in the image) secrets: #- sample-domain1-datasource-secret - sample-domain1-rcu-access # Increase the introspector job active timeout value for JRF use cases introspectorJobActiveDeadlineSeconds: 300 opss: # Name of secret with walletPassword for extracting the wallet, used for JRF domains walletPasswordSecret: sample-domain1-opss-wallet-password-secret # Name of secret with walletFile containing base64 encoded opss wallet, used for JRF domains #walletFileSecret: sample-domain1-opss-walletfile-secret Run the following command to create the domain custom resource:\n$ kubectl apply -f /tmp/mii-sample/domain-resources/WLS/mii-initial-d1-WLS-v1.yaml Note: If you are choosing not to use the predefined domain resource YAML file and instead created your own domain resource file earlier, then substitute your custom file name in the above command. You might recall that we suggested naming it /tmp/mii-sample/mii-initial.yaml.\n If you run kubectl get pods -n sample-domain1-ns --watch, then you should see the introspector job run and your WebLogic Server pods start. The output should look something like this:\n Click here to expand. 
$ kubectl get pods -n sample-domain1-ns --watch NAME READY STATUS RESTARTS AGE sample-domain1-introspect-domain-job-lqqj9 0/1 Pending 0 0s sample-domain1-introspect-domain-job-lqqj9 0/1 ContainerCreating 0 0s sample-domain1-introspect-domain-job-lqqj9 1/1 Running 0 1s sample-domain1-introspect-domain-job-lqqj9 0/1 Completed 0 65s sample-domain1-introspect-domain-job-lqqj9 0/1 Terminating 0 65s sample-domain1-admin-server 0/1 Pending 0 0s sample-domain1-admin-server 0/1 ContainerCreating 0 0s sample-domain1-admin-server 0/1 Running 0 1s sample-domain1-admin-server 1/1 Running 0 32s sample-domain1-managed-server1 0/1 Pending 0 0s sample-domain1-managed-server2 0/1 Pending 0 0s sample-domain1-managed-server1 0/1 ContainerCreating 0 0s sample-domain1-managed-server2 0/1 ContainerCreating 0 0s sample-domain1-managed-server1 0/1 Running 0 2s sample-domain1-managed-server2 0/1 Running 0 2s sample-domain1-managed-server1 1/1 Running 0 43s sample-domain1-managed-server2 1/1 Running 0 42s Alternatively, you can run /tmp/mii-sample/utils/wl-pod-wait.sh -p 3. This is a utility script that provides useful information about a domain\u0026rsquo;s pods and waits for them to reach a ready state, reach their target restartVersion, and reach their target image before exiting.\n Click here to expand the `wl-pod-wait.sh` usage. $ ./wl-pod-wait.sh -? Usage: wl-pod-wait.sh [-n mynamespace] [-d mydomainuid] \\ [-p expected_pod_count] \\ [-t timeout_secs] \\ [-q] Exits non-zero if 'timeout_secs' is reached before 'pod_count' is reached. Parameters: -d \u0026lt;domain_uid\u0026gt; : Defaults to 'sample-domain1'. -n \u0026lt;namespace\u0026gt; : Defaults to 'sample-domain1-ns'. pod_count \u0026gt; 0 : Wait until exactly 'pod_count' WebLogic server pods for a domain all (a) are ready, (b) have the same 'domainRestartVersion' label value as the current domain resource's 'spec.restartVersion, and (c) have the same image as the current domain resource's image. 
pod_count = 0 : Wait until there are no running WebLogic server pods for a domain. The default. -t \u0026lt;timeout\u0026gt; : Timeout in seconds. Defaults to '600'. -q : Quiet mode. Show only a count of wl pods that have reached the desired criteria. -? : This help. Click here to expand sample output from `wl-pod-wait.sh`. @@ [2020-04-30T13:50:42][seconds=0] Info: Waiting up to 600 seconds for exactly '3' WebLogic server pods to reach the following criteria: @@ [2020-04-30T13:50:42][seconds=0] Info: ready='true' @@ [2020-04-30T13:50:42][seconds=0] Info: image='model-in-image:WLS-v1' @@ [2020-04-30T13:50:42][seconds=0] Info: domainRestartVersion='1' @@ [2020-04-30T13:50:42][seconds=0] Info: namespace='sample-domain1-ns' @@ [2020-04-30T13:50:42][seconds=0] Info: domainUID='sample-domain1' @@ [2020-04-30T13:50:42][seconds=0] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:50:42][seconds=0] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------------------- ------- ----- ----- --------- 'sample-domain1-introspect-domain-job-rkdkg' '' '' '' 'Pending' @@ [2020-04-30T13:50:45][seconds=3] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:50:45][seconds=3] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------------------- ------- ----- ----- --------- 'sample-domain1-introspect-domain-job-rkdkg' '' '' '' 'Running' @@ [2020-04-30T13:51:50][seconds=68] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:51:50][seconds=68] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE ---- ------- ----- ----- ----- @@ [2020-04-30T13:51:59][seconds=77] Info: '0' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:51:59][seconds=77] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE ----------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'false' 'Pending' @@ [2020-04-30T13:52:02][seconds=80] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:52:02][seconds=80] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE ----------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'false' 'Running' @@ [2020-04-30T13:52:32][seconds=110] Info: '1' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:52:32][seconds=110] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'false' 'Pending' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'false' 'Pending' @@ [2020-04-30T13:52:34][seconds=112] Info: '1' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:52:34][seconds=112] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'false' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'false' 'Running' @@ [2020-04-30T13:53:14][seconds=152] Info: '3' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:53:14][seconds=152] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:53:14][seconds=152] Info: Success! If you see an error, then consult Debugging in the Model in Image user guide.\nInvoke the web application Now that all the initial use case resources have been deployed, you can invoke the sample web application through the Traefik ingress controller\u0026rsquo;s NodePort. Note: The web application will display a list of any data sources it finds, but we don\u0026rsquo;t expect it to find any because the model doesn\u0026rsquo;t contain any at this point.\nSend a web application request to the load balancer:\n$ curl -s -S -m 10 -H 'host: sample-domain1-cluster-cluster-1.mii-sample.org' \\ http://localhost:30305/myapp_war/index.jsp Or, if Traefik is unavailable and your Administration Server pod is running, you can use kubectl exec:\n$ kubectl exec -n sample-domain1-ns sample-domain1-admin-server -- bash -c \\ \u0026quot;curl -s -S -m 10 http://sample-domain1-cluster-cluster-1:8001/myapp_war/index.jsp\u0026quot; You should see output like the following:\n$ curl -s -S -m 10 -H 'host: sample-domain1-cluster-cluster-1.mii-sample.org' \\ http://localhost:30305/myapp_war/index.jsp \u0026lt;html\u0026gt;\u0026lt;body\u0026gt;\u0026lt;pre\u0026gt; ***************************************************************** Hello World! This is version 'v1' of the mii-sample JSP web-app. Welcome to WebLogic server 'managed-server2'! 
domain UID = 'sample-domain1' domain name = 'domain1' Found 1 local cluster runtime: Cluster 'cluster-1' Found 0 local data sources: ***************************************************************** \u0026lt;/pre\u0026gt;\u0026lt;/body\u0026gt;\u0026lt;/html\u0026gt; Note: If you\u0026rsquo;re running your curl commands on a remote machine, then substitute localhost with an external address suitable for contacting your Kubernetes cluster. A Kubernetes cluster address that often works can be obtained by using the address just after https:// in the KubeDNS line of the output from the kubectl cluster-info command.\nIf you want to continue to the next use case, then leave your domain running.\nUpdate1 use case This use case demonstrates dynamically adding a data source to your running domain. It demonstrates several features of WDT and Model in Image:\n The syntax used for updating a model is exactly the same syntax you use for creating the original model. A domain\u0026rsquo;s model can be updated dynamically by supplying a model update in a file in a Kubernetes ConfigMap. Model updates can be as simple as changing the value of a single attribute, or more complex, such as adding a JMS Server. For a detailed discussion of model updates, see Runtime Updates in the Model in Image user guide.\nThe operator does not support all possible dynamic model updates. For model update limitations, consult Runtime Updates in the Model in Image user docs, and carefully test any model update before attempting a dynamic update in production.\n Here are the steps:\n Ensure that you have a running domain.\nMake sure you have deployed the domain from the Initial use case.\n Create a data source model YAML file.\nCreate a WDT model snippet for a data source (or use the example provided). 
Make sure that its target is set to cluster-1, and that its initial capacity is set to 0.\nThe reason for the latter is to prevent the data source from causing a WebLogic Server startup failure if it can\u0026rsquo;t find the database, which would be likely to happen because we haven\u0026rsquo;t deployed one (unless you\u0026rsquo;re using the JRF path through the sample).\nHere\u0026rsquo;s an example data source model configuration that meets these criteria:\nresources: JDBCSystemResource: mynewdatasource: Target: 'cluster-1' JdbcResource: JDBCDataSourceParams: JNDIName: [ jdbc/mydatasource1, jdbc/mydatasource2 ] GlobalTransactionsProtocol: TwoPhaseCommit JDBCDriverParams: DriverName: oracle.jdbc.xa.client.OracleXADataSource URL: '@@SECRET:@@ENV:DOMAIN_UID@@-datasource-secret:url@@' PasswordEncrypted: '@@SECRET:@@ENV:DOMAIN_UID@@-datasource-secret:password@@' Properties: user: Value: 'sys as sysdba' oracle.net.CONNECT_TIMEOUT: Value: 5000 oracle.jdbc.ReadTimeout: Value: 30000 JDBCConnectionPoolParams: InitialCapacity: 0 MaxCapacity: 1 TestTableName: SQL ISVALID TestConnectionsOnReserve: true Place the above model snippet in a file named /tmp/mii-sample/mydatasource.yaml and then use it in the later step where we deploy the model ConfigMap, or alternatively, use the same data source that\u0026rsquo;s provided in /tmp/mii-sample/model-configmaps/datasource/model.20.datasource.yaml.\n Create the data source secret.\nThe data source references a new secret that needs to be created. 
Run the following commands to create the secret:\n$ kubectl -n sample-domain1-ns create secret generic \\ sample-domain1-datasource-secret \\ --from-literal=password=Oradoc_db1 \\ --from-literal=url=jdbc:oracle:thin:@oracle-db.default.svc.cluster.local:1521/devpdb.k8s $ kubectl -n sample-domain1-ns label secret \\ sample-domain1-datasource-secret \\ weblogic.domainUID=sample-domain1 We name and label secrets using their associated domain UID for two reasons:\n To make it obvious which secrets belong to which domains. To make it easier to clean up a domain. Typical cleanup scripts use the weblogic.domainUID label as a convenience for finding all the resources associated with a domain. Create a ConfigMap with the WDT model that contains the data source definition.\nRun the following commands:\n$ kubectl -n sample-domain1-ns create configmap sample-domain1-wdt-config-map \\ --from-file=/tmp/mii-sample/model-configmaps/datasource $ kubectl -n sample-domain1-ns label configmap sample-domain1-wdt-config-map \\ weblogic.domainUID=sample-domain1 If you\u0026rsquo;ve created your own data source file, then substitute the file name in the --from-file= parameter (we suggested /tmp/mii-sample/mydatasource.yaml earlier). Note that the --from-file= parameter can reference a single file, in which case it puts the designated file in the ConfigMap, or it can reference a directory, in which case it populates the ConfigMap with all of the files in the designated directory. We name and label ConfigMaps using their associated domain UID for two reasons:\n To make it obvious which ConfigMaps belong to which domains. To make it easier to clean up a domain. Typical cleanup scripts use the weblogic.domainUID label as a convenience for finding all resources associated with a domain. 
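The domain-UID labeling convention described above can be sketched as a small shell helper. This is a hypothetical helper, not part of the sample; the resource names match the sample, and the kubectl query needs cluster access, so it is guarded to be inert when kubectl is unavailable:

```shell
#!/bin/sh
# Hypothetical helper: build the label selector that cleanup scripts use
# to find every resource belonging to a domain.
domain_selector() {
  # $1 = domain UID; echoes the selector string.
  echo "weblogic.domainUID=$1"
}

NAMESPACE="sample-domain1-ns"
SELECTOR="$(domain_selector sample-domain1)"
echo "Selector: $SELECTOR"

# Requires cluster access; guarded so the sketch does nothing without kubectl.
kubectl -n "$NAMESPACE" get secret,configmap -l "$SELECTOR" 2>/dev/null || true
```

With the secret and ConfigMap labeled as shown above, this selector finds both in one query.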
Update your domain resource to refer to the ConfigMap and secret.\n Option 1: Update your current domain resource file from the \u0026ldquo;Initial\u0026rdquo; use case.\n Add the secret to its spec.configuration.secrets stanza:\nspec: ... configuration: ... secrets: - sample-domain1-datasource-secret (Leave any existing secrets in place.)\n Change its spec.configuration.model.configMap to look like:\nspec: ... configuration: ... model: ... configMap: sample-domain1-wdt-config-map Apply your changed domain resource:\n$ kubectl apply -f your-domain-resource.yaml Option 2: Use the updated domain resource file that is supplied with the sample:\n$ kubectl apply -f /tmp/mii-sample/domain-resources/mii-update1-d1-WLS-v1-ds.yaml Restart (\u0026lsquo;roll\u0026rsquo;) the domain.\nNow that the data source is deployed in a ConfigMap and its secret is also deployed, and we have applied an updated domain resource with its spec.configuration.model.configMap and spec.configuration.secrets referencing the ConfigMap and secret, let\u0026rsquo;s tell the operator to roll the domain.\nWhen a model domain restarts, it will rerun its introspector job in order to regenerate its configuration, and it will also pass the configuration changes found by the introspector to each restarted server. One way to cause a running domain to restart is to change the domain\u0026rsquo;s spec.restartVersion. To do this:\n Option 1: Edit your domain custom resource.\n Call kubectl -n sample-domain1-ns edit domain sample-domain1. Edit the value of the spec.restartVersion field and save. The field is a string; typically, you use a number in this field and increment it with each restart. 
Option 2: Dynamically change your domain using kubectl patch.\n To get the current restartVersion call:\n$ kubectl -n sample-domain1-ns get domain sample-domain1 '-o=jsonpath={.spec.restartVersion}' Choose a new restart version that\u0026rsquo;s different from the current restart version.\n The field is a string; typically, you use a number in this field and increment it with each restart. Use kubectl patch to set the new value. For example, assuming the new restart version is 2:\n$ kubectl -n sample-domain1-ns patch domain sample-domain1 --type=json '-p=[{\u0026quot;op\u0026quot;: \u0026quot;replace\u0026quot;, \u0026quot;path\u0026quot;: \u0026quot;/spec/restartVersion\u0026quot;, \u0026quot;value\u0026quot;: \u0026quot;2\u0026quot; }]' Option 3: Use the sample helper script.\n Call /tmp/mii-sample/utils/patch-restart-version.sh -n sample-domain1-ns -d sample-domain1. This will perform the same kubectl get and kubectl patch commands as Option 2. Wait for the roll to complete.\nNow that you\u0026rsquo;ve started a domain roll, you\u0026rsquo;ll need to wait for it to complete if you want to verify that the data source was deployed.\n One way to do this is to call kubectl get pods -n sample-domain1-ns --watch and wait for the pods to cycle back to their ready state.\n Alternatively, you can run /tmp/mii-sample/utils/wl-pod-wait.sh -p 3. This is a utility script that provides useful information about a domain\u0026rsquo;s pods and waits for them to reach a ready state, reach their target restartVersion, and reach their target image before exiting.\n Click here to expand the `wl-pod-wait.sh` usage. $ ./wl-pod-wait.sh -? Usage: wl-pod-wait.sh [-n mynamespace] [-d mydomainuid] \\ [-p expected_pod_count] \\ [-t timeout_secs] \\ [-q] Exits non-zero if 'timeout_secs' is reached before 'pod_count' is reached. Parameters: -d \u0026lt;domain_uid\u0026gt; : Defaults to 'sample-domain1'. -n \u0026lt;namespace\u0026gt; : Defaults to 'sample-domain1-ns'. 
pod_count \u0026gt; 0 : Wait until exactly 'pod_count' WebLogic server pods for a domain all (a) are ready, (b) have the same 'domainRestartVersion' label value as the current domain resource's 'spec.restartVersion, and (c) have the same image as the current domain resource's image. pod_count = 0 : Wait until there are no running WebLogic server pods for a domain. The default. -t \u0026lt;timeout\u0026gt; : Timeout in seconds. Defaults to '600'. -q : Quiet mode. Show only a count of wl pods that have reached the desired criteria. -? : This help. Click here to expand sample output from `wl-pod-wait.sh` that shows a rolling domain. @@ [2020-04-30T13:53:19][seconds=0] Info: Waiting up to 600 seconds for exactly '3' WebLogic server pods to reach the following criteria: @@ [2020-04-30T13:53:19][seconds=0] Info: ready='true' @@ [2020-04-30T13:53:19][seconds=0] Info: image='model-in-image:WLS-v1' @@ [2020-04-30T13:53:19][seconds=0] Info: domainRestartVersion='2' @@ [2020-04-30T13:53:19][seconds=0] Info: namespace='sample-domain1-ns' @@ [2020-04-30T13:53:19][seconds=0] Info: domainUID='sample-domain1' @@ [2020-04-30T13:53:19][seconds=0] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:53:19][seconds=0] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-introspect-domain-job-wlkpr' '' '' '' 'Pending' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:53:20][seconds=1] Info: '0' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:53:20][seconds=1] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-introspect-domain-job-wlkpr' '' '' '' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:54:18][seconds=59] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:54:18][seconds=59] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------------------- ------- ----------------------- ------ ----------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-introspect-domain-job-wlkpr' '' '' '' 'Succeeded' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:54:19][seconds=60] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:54:19][seconds=60] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:54:31][seconds=72] Info: '0' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:54:31][seconds=72] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '1' 'model-in-image:WLS-v1' 'false' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:54:40][seconds=81] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:54:40][seconds=81] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:54:52][seconds=93] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:54:52][seconds=93] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:54:58][seconds=99] Info: '0' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:54:58][seconds=99] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'false' 'Pending' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:55:00][seconds=101] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:55:00][seconds=101] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'false' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:55:12][seconds=113] Info: '0' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:55:12][seconds=113] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'false' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:55:24][seconds=125] Info: '0' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:55:24][seconds=125] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'false' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:55:33][seconds=134] Info: '1' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:55:33][seconds=134] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:55:34][seconds=135] Info: '1' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:55:34][seconds=135] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '1' 'model-in-image:WLS-v1' 'false' 'Pending' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:55:40][seconds=141] Info: '1' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:55:40][seconds=141] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:55:44][seconds=145] Info: '1' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:55:44][seconds=145] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '2' 'model-in-image:WLS-v1' 'false' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:56:25][seconds=186] Info: '2' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:56:25][seconds=186] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:56:26][seconds=187] Info: '2' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:56:26][seconds=187] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '1' 'model-in-image:WLS-v1' 'false' 'Pending' @@ [2020-04-30T13:56:30][seconds=191] Info: '2' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:56:30][seconds=191] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '2' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:56:34][seconds=195] Info: '2' WebLogic pods currently match all criteria, expecting '3'. @@ [2020-04-30T13:56:34][seconds=195] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------- --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '2' 'model-in-image:WLS-v1' 'false' 'Pending' @@ [2020-04-30T13:57:09][seconds=230] Info: '3' WebLogic pods currently match all criteria, expecting '3'. 
@@ [2020-04-30T13:57:09][seconds=230] Info: Introspector and WebLogic pods with same namespace and domain-uid: NAME VERSION IMAGE READY PHASE -------------------------------- ------- ----------------------- ------ --------- 'sample-domain1-admin-server' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server1' '2' 'model-in-image:WLS-v1' 'true' 'Running' 'sample-domain1-managed-server2' '2' 'model-in-image:WLS-v1' 'true' 'Running' @@ [2020-04-30T13:57:09][seconds=230] Info: Success! After your domain is running, you can call the sample web application to determine if the data source was deployed.\nSend a web application request to the ingress controller:\n$ curl -s -S -m 10 -H 'host: sample-domain1-cluster-cluster-1.mii-sample.org' \\ http://localhost:30305/myapp_war/index.jsp Or, if Traefik is unavailable and your Administration Server pod is running, you can run kubectl exec:\n$ kubectl exec -n sample-domain1-ns sample-domain1-admin-server -- bash -c \\ \u0026quot;curl -s -S -m 10 http://sample-domain1-cluster-cluster-1:8001/myapp_war/index.jsp\u0026quot; You should see something like the following:\n Click here to see the expected web application output. $ curl -s -S -m 10 -H 'host: sample-domain1-cluster-cluster-1.mii-sample.org' \\ http://localhost:30305/myapp_war/index.jsp \u0026lt;html\u0026gt;\u0026lt;body\u0026gt;\u0026lt;pre\u0026gt; ***************************************************************** Hello World! This is version 'v1' of the mii-sample JSP web-app. Welcome to WebLogic server 'managed-server1'! 
domain UID = 'sample-domain1' domain name = 'domain1' Found 1 local cluster runtime: Cluster 'cluster-1' Found 1 local data source: Datasource 'mynewdatasource': State='Running' ***************************************************************** \u0026lt;/pre\u0026gt;\u0026lt;/body\u0026gt;\u0026lt;/html\u0026gt; If you see an error, then consult Debugging in the Model in Image user guide.\nThis completes the sample scenarios.\nCleanup To remove the resources you have created in these samples:\n Delete the domain resources.\n$ /tmp/operator-source/kubernetes/samples/scripts/delete-domain/delete-weblogic-domain-resources.sh -d sample-domain1 $ /tmp/operator-source/kubernetes/samples/scripts/delete-domain/delete-weblogic-domain-resources.sh -d sample-domain2 This deletes the domain and any related resources that are labeled with the domain UID sample-domain1 and sample-domain2.\nIt leaves the namespace intact, the operator running, the load balancer running (if installed), and the database running (if installed).\n Note: When you delete a domain, the operator should detect your domain deletion and shut down its pods. Wait for these pods to exit before deleting the operator that monitors the sample-domain1-ns namespace. 
You can monitor this process using the command kubectl get pods -n sample-domain1-ns --watch (ctrl-c to exit).\n If you set up the Traefik ingress controller:\n$ helm delete --purge traefik-operator $ kubectl delete namespace traefik If you set up a database for JRF:\n$ /tmp/operator-source/kubernetes/samples/scripts/create-oracle-db-service/stop-db-service.sh Delete the operator and its namespace:\n$ helm delete --purge sample-weblogic-operator $ kubectl delete namespace sample-weblogic-operator-ns Delete the domain\u0026rsquo;s namespace:\n$ kubectl delete namespace sample-domain1-ns Delete the images you may have created in this sample:\n$ docker image rm model-in-image:WLS-v1 $ docker image rm model-in-image:WLS-v2 $ docker image rm model-in-image:JRF-v1 $ docker image rm model-in-image:JRF-v2 References For references to the relevant user documentation, see:\n Model in Image user documentation Oracle WebLogic Server Deploy Tooling Oracle WebLogic Image Tool " | |||
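The restart-version flow that the sample's patch-restart-version.sh helper performs (read the current spec.restartVersion, bump it, patch it back) can be sketched in a few lines of shell. The next_version function is illustrative, not taken from the sample; the kubectl calls mirror the ones shown in the Update1 use case above and are guarded so the sketch is inert without a cluster:

```shell
#!/bin/sh
# Sketch of the get-then-patch flow for spec.restartVersion.
next_version() {
  # Echo the incremented version; fall back to 1 if unset or non-numeric.
  case "$1" in
    ''|*[!0-9]*) echo 1 ;;
    *) echo $(( $1 + 1 )) ;;
  esac
}

DOMAIN_UID="sample-domain1"
NAMESPACE="sample-domain1-ns"

# Requires cluster access; both calls are guarded so the sketch runs anywhere.
CURRENT="$(kubectl -n "$NAMESPACE" get domain "$DOMAIN_UID" \
  -o=jsonpath='{.spec.restartVersion}' 2>/dev/null || true)"
NEW="$(next_version "$CURRENT")"
echo "Patching restartVersion: '$CURRENT' -> '$NEW'"
kubectl -n "$NAMESPACE" patch domain "$DOMAIN_UID" --type=json \
  "-p=[{\"op\": \"replace\", \"path\": \"/spec/restartVersion\", \"value\": \"$NEW\" }]" \
  2>/dev/null || true
```

Because the field is a string, the increment-a-number convention is just that, a convention; any changed value triggers the roll.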
Was it intentional to update the documentation for earlier releases? (I'm not opposed; it's just not obvious that it was intentional). Also, I'm having a hard time finding the actual diff.
rjeberhard approved these changes Aug 11, 2020
rjeberhard added a commit that referenced this pull request Nov 13, 2020
* Add Javadoc * Clarify 2.6.0 upgrade instructions (#1818) * Clarify 2.6.0 upgrade instructions * Review comments * Update Javadoc * Fix broken hugo doc relrefs using a link formatting workaround: .../foo.md#bar --> .../foo/_index.md#bar (#1823) * update java url in Dockerfile * Added timeout and debugger to fix hanging issue on kind-new Jenkins jenkins-ignore (#1825) * Mirror introspector log to a rotating file in 'log home' (if configured) (#1827) * Mirror introspector log to a rotating file in 'log home' (if configured) * minor fix * remove comment * doc that logHome includes introspector out * minor doc update * minor doc update * OWLS-81928 - JUnit5: Convert ItManagedCoherence (testCreateCoherenceDomainInImageUsingWdt) test (#1822) * Converted ItManagedCoherence to use JUnit5 jenkins-ignore * Converted ItManagedCoherence to use JUnit5 jenkins-ignore * Removed unnecessary fields in model file jenkins-ignore * Test support: don't update status on replace if defined as subresource (#1830) * Test support: don't update status on replace if defined as subresource * Test support: don't update status on patch if defined as subresource * remove unused method * Support configurable model home (#1828) * initial change * work in progress * Change domain schema * Fix a typo * Minor fix * Unit test fix * Minor doc update * address a review comment * More changes * Minor change * update hashCode/toString/equals, and address review comments * Use kubectl exec to fix a test hanging issue (#1834) * use kubectl exec * cleanup * cleanup extra artifacts for prom and grafana (#1835) * Changes for OWLS-82011 to reflect introspector status in domain status (#1832) * Changes for OWLS-82011 to reflect introspector status in domain status * change method name * Code refactoring * cleanup debug message * Remove unused method * Added check to terminate fiber when intro job fails with BackoffLimitExceeded (which happens when intro job times out due to DeadlineExceeded). 
* implement review comment suggestions * added unit test for introspector pod phase failed * Added restPort, https tests, extra cleanup for prom and grafana (#1824) * added testcases for https and restport * changed norestport file * fixed dependencies * fixed style * switched to master * disable unittest build * fixed typo * fixed typo * added extra cleaning * added cleanup * addressed all review comments * checkstyle * switched to master branch * switched to new monexp release * added extra clean * added check if clusterrole or clusterrolebinding exists before delete * fixed typo * fixed typos * change change portnumbers to fix parallel run * styling * styling1 * added fix for paralell run * Update chart build * Update the Traefik Version to 2.x (#1814) * Initial check-in for traefik version update * Modified the doc and sample artifacts files * Foramt the document * More doc changes * Renamed setup.sh to setupLoadBalancer.sh to be more specific as per suggestion on OWLS-77806 * Addressed doc comments on PR review * More doc review comments * Minor hypelink name change * More doc changes * Fixed the broken links * Updated copyright inforrmation * Doc review comments * More doc changes * Minor typo * Modified the mii sample wraper scripts * Update Mii Sample script * Resync develop branch. 
Modified more scripts and yaml files for mii-sample * update MII sample generation/test instructions * Missing modification * Modified traefik ingress syntax * Modifoe md files from docs-source directory * More doc review comment * Fixed the indention issue in inline ingress file * More doc review resolution * Minor doc update * Move the istio istallatiion to RESULTS_ROOT directory Co-authored-by: ANTARYAMI.PANIGRAHI@ORACLE.COM <anpanigr@anpanigr-2.subnet1ad3phx.devweblogicphx.oraclevcn.com> Co-authored-by: Tom Barnes <tom.barnes@oracle.com> * Add tests to verify loadbalancing with treafik lb for multiple domains (#1831) * Adding utilities for creating Traefik load balancer * Log the assertion errors in IT*.out files * Create ingress resource for traefik * Fix the traefik pod name * do kind repo check * uninstall traefik * Add application building * Deploy application using REST and access it through traefik LB * create secret for WebLogic domain * remove the testwebapp * Print JVMID from clusterview app * Remove the domain creation in PV * cleanup code * cleanup code * remove the domainUid from url * verify loadbalancing after creating 2 domains * cleanup javadocs * Add verification checks to determine host routing is done correctly * fix comments * Add a delay before accessing the app * Fix the curl command * iAddressed the review comments * Fix service name * fix namespace * add more wait time * Adding TLS test * Fix CN * Add https usecase * Encode it as normal String * Fix file names * include -k option to ignore host name verification * Add cert and key as String * wip * use same namespace for traefik * Added more tests * Fix the ingress rules creation command * Fix getNodePort * Access console in a loop * Adding voyager tests * Renamed testclass to be generic for all loadbalancers * order the tests * Add break statement once reponse is found * bind domain names * Adding Junit5 Operator upgrade tests (#1841) * adding operator upgrade tests * adding 
parameterized test * fix javadoc * adding individual tests * comment out 250 upgrade * change release name * adding order for testing * commenting cleanup to debug * commenting cleanup to debug * adding retry for scale * adding cleanup back * adding individual tests * change test names * check operator image name after upgrade * use 0 for external rest port * adding jira in comments * javadoc changes * exclude upgrade test in full test run * address review comments * Resolution to Jenkin log archive issue (#1845) Co-authored-by: ANTARYAMI.PANIGRAHI@ORACLE.COM <anpanigr@anpanigr-2.subnet1ad3phx.devweblogicphx.oraclevcn.com> * Update the external client FAQ to discuss 'unknown host' handling, plus... (#1842) * Update the external client FAQ to discuss 'unknown host' handling. Update the domain-resource doc to try make it easier to read. * review and edit * delete extraneous file * minor doc tweaks Co-authored-by: Rosemary Marano <rosemary.marano@oracle.com> * Parameterize tests with different domain home source types (#1776) * parameterize domain type initial commit * cleanup * create domains in initAll * parameterize scale domain tests * cleanup * remove old ItPodsRestart and ItScaleMiiDomainNginx * change spec.ImagePullSecrets for domainInPV * verify pv exists * mv KIND_REPO check before creating domain-in-pv domain * debug wldf on jenkins run * debug wldf on jenkins run 2 * set wldf alarm type to manual * add more wldf debug info * try scale with wldf in different test methods * add more debug info for wldf * debug wldf issue * enable mii domain * add more debug info in scalingAction.sh * add more wait time for debugging * add more debug info in scalingAction.sh * add longer wait time for debug * add domainNameSpace in clusterrolebinding name * enable all tests * debug domaininpv app access in parallel run * add more debug info for domaininpv parallel run * enable verbose for curl command * update JAVA_URL in Dockerfile * address proxy client hanging issue * use 
httpclient to access sample app * debug domaininpv 404 issue in parallel run * consolidate test classes * address review comments * make admin server routing optional * address Vanaja's review comments * clean up * address Marina's review comments * Fix build * Update Apache doc for helm 3.x (#1847) Co-authored-by: ANTARYAMI.PANIGRAHI@ORACLE.COM <anpanigr@anpanigr-2.subnet1ad3phx.devweblogicphx.oraclevcn.com> * Revert backofflimit check (#1852) * revert backoff limit check * ignore unit test for backoff limit check * Make sure domainUid is in operator log messages - part 1 (#1844) * Add domainUID to watcher related log messages * Minor changes * Move a common method into KubernetesUtils * minor javadoc update * minor change * revert an unnecessary change * lookup and save internalOperatorkey, if already exists (#1846) * Add WDT & WIT download url options (#1851) * adding wdt/wit download url options * run upgrade tests in separate mvn command * run upgrade tests in separate mvn command * exclude parameterized domain test from parallel runs and include in sequential run * use ! 
to exclude a test * remove mvn command which runs upgrade test * Disable ItCoherenceTests jenkins-ignore (#1859) * changed WDT release URL for latest release (#1843) * JUnit5 Create Infrastructure for ELK Stack (#1839) * JUnit5 Create Infrastructure for ELK stack jenkins-ignore * JUnit5 Create Infrastructure for ELK stack jenkins-ignore * Changes based on comments jenkins-ignore * Minor change in installAndVerifyOperator jenkins-ignore * Changes based on the comments jenkins-ignore * Corrected a typo jenkins-ignore * Load Balance doc update for SSL termination * Updated the heading font * Modified the hostname :wq * Missing quote * Add domainUID to operator log messages in async call request/response code path (#1856) * Add domainUID to watcher related log messages * Minor changes * Move a common method into KubernetesUtils * minor javadoc update * minor change * revert an unnecessary change * Add domainUid to call requestParams * Work in progress * Work in progress * Work in progress * merge * cleanup * Handle two exception log messages * fix operator external REST http port conflicts in integration tests (#1858) * debug install operator regression * set externalRestHttpsPort to 0 * re-enable lookup method in operator templates * fix NullPointerException in isPodReady method (#1862) * Changes for OWLS-83431 (#1855) * changes for OWLS-83431 * changes for owls-83431 * Chnages to address PR review comments * changes to suppresserror from synchronous call * cleanup changes based on PR comments * changes to fix javadoc and variable name * changes for latest review comments * Namespace management enhancements (#1860) * Work in progress * Work in progess * Work in progress * List, Dedicated working * Update chart build * Use enableClusterRoleBinding * Use lookup * Preserve debugging * Complete label selector * Correct typos * Debugging * More working * Debugging more complicated label selectors * Update chart build * Update chart build * Begin updating samples * 
Documentation work * Update charts * Complete doc. updates * Additional unit tests * Add additional mementos * Test code to diagnose build failure on Jenkins * More debug code * More debug code * Hopefully fixed unit tests * Review comment * Review comments * Review comments * ItPodTemplates test conversion to Junit5 (#1850) * added tests for pod templates * fixed typo * fixed styling * fixed styling1 * removed junit4 test * removed t3port * fixed domainhome dir * fixed domainhome dir1 * revert to 5ca7d8a704fdfc0b5395b80e327a520b97b33a6e * addressed the review comments * put back Lenny's commit * added more comments * addressed comments, move to use default wdt image * removed mii test * Junit5: Convert two ELK Stack test cases ( testLogLevelSearch and testOperatorLogSearch ) (#1848) * JUnit5 Create Infrastructure for ELK stack jenkins-ignore * JUnit5 Create Infrastructure for ELK stack jenkins-ignore * Changes based on comments jenkins-ignore * Minor change in installAndVerifyOperator jenkins-ignore * Converted two ELK Stack test cases to use JUnit5 jenkins-ignore * Converted two ELK Stack test cases to use JUnit5 jenkins-ignore * Changes based on the comments jenkins-ignore * Added detaied test steps jenkins-ignore * Upgraded ELK Stack to version 7.8.1 and delete old test suites jenkins-ignore * Verify fields that cause servers to be restarted (#1866) * First cut for pod restart * keep the change in ItDomainInImageWdt * move the test to ItPodsRestart * revert some ealier change * Move the field change verification right after patch domain * remove the order * minor change * address the review comments * minor change * Debugcdttest (#1863) * debug cdt test * printing exec stdout and stderr before assertion Co-authored-by: BHAVANI.RAVICHANDRAN@ORACLE.COM <bravicha@bravicha-1.subnet1ad1phx.devweblogicphx.oraclevcn.com> * addressed review comments * remove duplicate parameter srcstorepass in keytool command (#1867) Co-authored-by: Johnny Shum <cbdream99@gmail.com> * 
Kubernetes Exec API intermittently hangs when executing curl command for ItMiiDomain test (#1871) * change to reproduce hang * additional debug to Kubernetes.exec * enable verbose of curl command * read stderr * log exec result * add max-time to curl command * fix curl command option * enable detailed trace * Add --max-time flag * Check exit code 7 and sleep 10 seconds * More debug info when calling checkAppIsRunning * Add thread info to log messages * checkstyle fixes to debug info * Add debug for retry * Update thread info for log messages * addition debug statements along exec call path * Fix NPE * Try programmatic thread dump to debug exec * Debug AppIsRunning awaitility * Read error stream in separate thread * Remove some debug and join without timeout * revert ItMiiDomain.java * Revert TestActions.java * Remove debug info * modify test to use appAccessibleInPod method * refactor stream readers * Addreseed new doc review comments * Owls 83534 - Changes to allow setting nodeAffinity and nodeSelector values in operator Helm chart (#1869) * changes to allow setting nodeAffinity and nodeSelector values in operator Helm chart and related doc change * minor doc change * changes based on review comments * Make tests fail fast when initialization fails (#1874) * check initialization success * modified log message * modified log message * log rotation enhancements and doc (#1872) * Add log rotation for NM .out/.log, enhance log rotation for introspector .out, and document log rotation for WLS .out and .log. 
* minor doc edits * Update charts * Owls 83538 (#1876) * Eliminate http call from Watcher tests * cleanup, remove unit test thread dependencies * Backout change to chart * minor cleanup Co-authored-by: Ryan Eberhard <ryan.eberhard@oracle.com> * Adding assertions in upgrade test (#1880) * adding missing assert * adding assert for uninstall operator * Add a test to change WebLogic credentials (#1853) * Adding test to change the admin credentials for domain running in PV * Fx the test name * fix secret name * add restartVersion instead of replace * fix json * change the method order * fix managed server names * Fix assertionerror class for invalid login * Fix comments * Address review comments * use default channel port for accessing application * Lookup domain runtime only if it is admin server * change max-message-size to a large value change the t3channel port to some arbitrary number * fix the max-message-size * log response from managed servers * increasing the max iterations * Fix the JAVA_OPTIONS * Fix the java options * Add debug flags to servers * remove the system property maxmessagesize * Change the implmentation of cluster communication check * fix iterations * fix replicaCount * Adding more debug messages * Fix server names * Refactor the server health checks * fix comment * Fix the server count * Enable cmo.setResolveDNSName(true) for custom nap * log dns resolv.conf file as well * Add more debug flags * Add a random objects to JNDI tree * Fix dns logging * Use MBean server connection instead of heartbeats to detect server health * Fix the curl request url * Refactored the server communication verification by MBeanServerConnection to the individual servers instead of relying on cluster heartbeats * Fix the URL * Remove DNS entries logging * Undoing the changes for ItLoadbalancer.java * Checking if the server is running * Deleted ItLoadbalancer.java * Deploy application before accessing it * PR to add sample running Oracle WLS Kubernetes Operator on Azure 
Kubernetes Service. Thanks to Johnny Shum, Ryan Eberhart and Monica Ricelli. Merge from branch created for https://github.com/oracle/weblogic-kubernetes-operator/pull/1804 Update _index.md On branch edburns-msft-180-01-wls-aks modified: docs-source/content/samples/simple/azure-kubernetes-service/_index.md - Version numbers in prerequisites. - Additional "Successful output looks like" blocks. - When a code block defines an env var, export it. - Before running the script to create the yaml, rm -rf ~/azure. modified: kubernetes/samples/scripts/create-kuberetes-secrets/create-azure-storage-credentials-secret.sh modified: kubernetes/samples/scripts/create-kuberetes-secrets/create-docker-credentials-secret.sh - chmod ugo+x modified: kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/create-domain-on-aks-inputs.yaml - Readability. - Move the "prefix" stuff to the "must change" section. On branch edburns-msft-180-01-wls-aks modified: docs-source/content/samples/simple/azure-kubernetes-service/_index.md - Working toward 1163875 Apply disambiguation prefix on additional items. modified: kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/create-domain-on-aks.sh - Correct spelling error in comment. Task 1163875: Apply disambiguation prefix on additional items Changes after reviewing commit 705ab338ab4647c2af963ed58b74872d4fb1de6b with Ed. Fix check points and check length of namePrefix. Create validate.sh to validate resources before creating domain manually. Typos On branch edburns-msft-180-01-wls-aks typos modified: docs-source/content/samples/simple/azure-kubernetes-service/_index.md On branch edburns-msft-180-01-wls-aks modified: docs-source/content/samples/simple/azure-kubernetes-service/_index.md - Spelling. - Additional validation: kubectl logs -f. - Mention health checks. 
modified: kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/create-domain-on-aks.sh - Make it so the script can be run from an absolute path. modified: kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/validate.sh - chmod ugo+x Modified in kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/create-domain-on-aks-inputs.yaml Update _index.md and all related samples script and yaml files to remove all mention of Docker Hub Modified in kubernetes/samples/scripts/create-kuberetes-secrets/create-docker-credentials-secret.sh Update dockerServer=container-registry.oracle.com On branch edburns-msft-180-01-wls-aks modified: docs-source/content/samples/simple/azure-kubernetes-service/_index.md - Fix link to GET IMAGES. - Fix lower case l. - Update heading. - Correct wording. - Give hint about ImagePullBackoff. modified: kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/create-domain-on-aks-inputs.yaml - Adjust comments to make it clear that it's Oracle SSO credentials. modified: kubernetes/samples/scripts/create-weblogic-domain/domain-home-on-pv/create-domain.sh - Increased retries to 30. On branch edburns-msft-180-01-wls-aks modified: kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/azure-file-pv-template.yaml modified: kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/azure-file-pvc-template.yaml - Changes suggested by Johnny Shum to get past the cluster distribution problem. Revert "On branch edburns-msft-180-01-wls-aks" This reverts commit b52b466ab5b8eb2a7493e829e125be876dc516a1. Name vp/pvc, file share with unique name. Add testwebapp.war for testing. 
Modified in kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/create-domain-on-aks.sh Change file share name with "prefix-weblogic-time" Change pv, pvc name with "prefix-azurefile-time" Output status during waiting for job completed. Modified in docs-source/content/samples/simple/azure-kubernetes-service/_index.md Update text with pv/pvc, file share unique name. Modified in kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/create-domain-on-aks-inputs.yaml Change name structure of pvc and file share. Modified in kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/validate.sh Fix validate.sh with pv/pvc, file share unique name. On branch edburns-msft-180-02-wls-aks forward slashes only. modified: docs-source/content/samples/simple/azure-kubernetes-service/_index.md On branch edburns-msft-180-02-wls-aks Verified manual execution of steps works on Oracle Enterprise Java subscription. modified: kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/azure-file-pv-template.yaml modified: kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/azure-file-pvc-template.yaml - increase capacity to 10Gi. - Set on pv: ``` persistentVolumeReclaimPolicy: Retain ``` - Remove nobrl. - Set on pvc: + selector: + matchLabels: + usage: %PERSISTENT_VOLUME_CLAIM_NAME% On branch edburns-msft-180-02-wls-aks In table for automation, update description for docker related parameters. modified: docs-source/content/samples/simple/azure-kubernetes-service/_index.md * Update _index.md Address comments from @rosemarymarano. * Update README.md Address @rosemarymarano comment. * Update _index.md * On branch edburns-msft-180-02-wls-aks modified: docs-source/content/samples/simple/azure-kubernetes-service/_index.md - Copyedits. 
- Remove ClusterRoleBinding modified: kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/create-domain-on-aks.sh - Remove ClusterRoleBinding * On branch edburns-msft-180-02-wls-aks deleted: kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/testwebapp.war - "security policy doesn't let us merge changes with non-image binary files." - This deleted file has the same checksum as `kubernetes/samples/charts/application/testwebapp.war` so let's just use that. modified: docs-source/content/samples/simple/azure-kubernetes-service/_index.md - Use `kubernetes/samples/charts/application/testwebapp.war` * Change default VMSize and node number, as Standard_D4s_v3 and 3 node exceed quota on free azure account. Modified on docs-source/content/samples/simple/azure-kubernetes-service/_index.md Change VM size to Standard_D4s_v3 and node number to 2 in document. Modified in kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/create-domain-on-aks-inputs.yaml Change default value of VM size to Standard_D4s_v3 and node number to 2. Tested in Oracle Enterprise Java and a free azure account. * On branch edburns-msft-180-02-wls-aks Add Clean Up section. modified: docs-source/content/samples/simple/azure-kubernetes-service/_index.md modified: kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/create-domain-on-aks.sh * On branch edburns-msft-180-02-wls-aks Apply changes suggested by @mriccell. 
modified: docs-source/content/samples/simple/azure-kubernetes-service/_index.md modified: kubernetes/samples/scripts/create-weblogic-domain-on-azure-kubernetes-service/create-domain-on-aks.sh * Consolidate multiple Mii test classes to a single ItClass (#1875) * Consolidate Mii Domains and remove junit4 tests * removed more Junit4 Mii tests * Modify the logic to check SystemResources * Updated the initAll() with installAndVerifyOperator replacing the old code * Modify the assertion for delete resources * Adding upgrade test for 3.0.1 to latest(develop) (#1882) * adding missing assert * adding assert for uninstall operator * adding upgrade test from 3.0.1 * convert testTwoDomainsManagedByOneOperatorSharingPV in ItOperatorTwoDomains.java to Junit5 (#1849) * convert test from junit4 to junit5 * change deleteJob to use GenericKubernetes API * remove ItDomainInPV * remove junit4 test ItOperatorTwoDomains.java * add loadbalancer tests * revert lookup in _operator-secret.tpl * remove ItLoadBalancer.java ItVouagerSample.java * re-enable lookup method in operator templates * add more debug info * get the clusterview from credential-change-pv-domain branch * collect logs for default namespace * fix intermittent issue in ClusterView App * add retry in admin console login * use unique domainuid * get new clusterview app * use non-default namespace * fix ItSimpleValidation PV hostPath * remove ItSimpleValidation * get the latest clusterview app * Retry docker login, image push/pull from/to repos (#1885) * Using standard retry logic to retry the build * Use retry for docker login and image pull and push * increase timeout to 30 minutes * Retry the push * wip * use iterator to push * check SKIP_BASIC_IMAGE_BUILD is set before pushing it to repo * Bring back the test jenkins-ignore (#1883) * add NGINX for production ICs (#1878) * Verify shutdown rules when shutdown options are defined at different domain levels (#1870) * test for ItPodsShutdown * remote JUnit4 ItPodsShutdown * 
address the review comment * refactor the code * add javadoc * rename the test class name as ItPodsShutdownOption * Modified py scripts * correct the errors (#1887) * Adding operator restart use cases from Junit4 tests (#1884) * adding operator pod restart tests from Junit4 * renaming file * code refactor * code refactor * log exception * code refactor * delete Junit4 test class * fix refactored method * deleting Junit4 test classes which are converted * addressing review comments * fix logic to wait for rolling restart to start in existing test * address review comments * fix log message * Created infra for WLS Logging Exporter and converted related tests to use JUnit5 (#1877) * Created infra for WLS Logging Exporter and converted related tests to use JUnit5 jenkins-ignore * upgraded ELK Stack back to v7 and fixed a filter issue in RESTful API to query Elasticsearch repos jenkins-ignore * Changes based on comments jenkins-ignore * More doc review change * Converted Usability Tests to Junit5 (#1888) * added usability tests * fixed some typos * added cleanup * fixed update * fixed upgrade test * cherrypick fix from Maggie * removed old class * merge from develop * merge * addressed review comments * addressed more comments * addressed comments from Pani * addressed comments from Pani1 * styling * corrected java doc * corrected java doc with typo * create test infrastructure for Apache load balancer (#1891) * initial commit * apache lb * add apache tests to ItTwoDomainsLoadBalancers * fix apache-webtier chart imagePullSecret * get the ItPodsShutdownOption from shutdown3 branch * change imagePullPolicy to Always * push the apache image to Kind repo * add debug info for REPO_REGISTRY etc in Kind new * remoe pull image from ocir and push to kind repo * enable pull apache image from ocir and push to kind new * cleanup * move OCR login to test * address Vanaja's comments * address Pani's comments * address Vanaja's comments Co-authored-by: vanajamukkara 
<vanajakshi.mukkara@oracle.com> * add debug info in scaling cluster with WLDF methods (#1893) * add debug infor in scaling cluster with WLDF methods * address Vanaja's comments * owls-83918 an idle domain's resource version should stay unchanged (#1879) * In recheck code path only continue if spec changes * Minor change * only add progressing condition when something is really happening to the domain * cleanup * populate state and healht when needed * Work in progress * debug * debug * more debugging * remove debugging * remove debugging * cleanup * Fix patchPod handling and change HashMap to ConcurrentHashMap * Add ProgressingStep on scaling down * Address review comments * Review comment * sample changes needed to work in openshift with default scc * Adding WLDF and JMS system override tests (#1896) * Adding situational config overrides tests for JMS and WLDF resources * Remove clusterview app and DB * Fix file name * Add sitconfig application * fix url * wip * Wait until response code is 200 * fix response string * change wait time to 5 min * Improve comments and javadoc * Removing old tests * Use Kubernetes Java client 9.0.2 (#1898) * Uptake k8s Java client 9.0.1 * Review corrections * Use new version of GenericKubernetesApi * Changes for OWLS-83136 - Limit concurrent pod shutdowns during a cluster shrink (#1892) * changes for OWLS-83136 - Limit concurrent pod shutdowns during a cluster shrink * Minor code cleanup for OWLS-83136 * minor change to avoid duplicate step * fixed javadoc for deletePodAsyncWithRetryStrategy method * fix for integration test and added maxConcurrentShutdown in index.html * fix unit test failure and shutdown servers concurrently when domain serverStartPolicy is NEVER * Address PR review comments. * Changes to address PR review comments. * Changes to address PR review comments. 
* Changes to address PR review comments * Resolve merge conflict * Enable unit tests * Use Dongbo's change in unit test Co-authored-by: Ryan Eberhard <ryan.eberhard@oracle.com> * Added sample support for Ngnix Load Balancer (#1886) * Initial check-in * Updated doc with SSL termination * Review comments on nginx/README.md * Review comments * More review comments * Removed the Path Routing Section * More doc review change * Minor doc modification * More review comments * Remove path routing yaml file * Update README.md * Update README.md * Update setupLoadBalancer.sh Co-authored-by: ANTARYAMI.PANIGRAHI@ORACLE.COM <anpanigr@anpanigr-1.subnet1ad2phx.devweblogicphx.oraclevcn.com> Co-authored-by: Ryan Eberhard <ryan.eberhard@oracle.com> * New Tests for ServerStartPolicy (#1895) * Added new tests for ServerStartPolicy * Minor typo modification * Adddressed review comments * Added logic to check the managed server timestamp * Resolved few typos * Resolved more review comments, removed duplicate codes, used common utility methods * More review comments Co-authored-by: ANTARYAMI.PANIGRAHI@ORACLE.COM <anpanigr@anpanigr-2.subnet1ad3phx.devweblogicphx.oraclevcn.com> * fix build application not to assert exec false exit value (#1904) * fix build application not to assert exec false exit valuae * Asserting that file is available after build Co-authored-by: BHAVANI.RAVICHANDRAN@ORACLE.COM <bravicha@bravicha-1.subnet1ad1phx.devweblogicphx.oraclevcn.com> Co-authored-by: sankar <sankarpn@gmail.com> * temporarily adding chown option to imagetool command (#1905) * Adding test to use PV for logHome in MII domain (#1903) * logs on PV * use PV for logs * Revert to develop branch code * fix pv path * code refactor * fixing path * fix comments * look for string RUNNING in server log * modify success/failure criteria * fix indentation * adding pipefail * ItInitContainers conversion to Junit5 (#1907) * added testcase for initcontainers * added testcase for initcontainers1 * fixed init check * 
Commits in this merge:
- addressed review comments
- addressed the comments
- sync to develop branch
- Added annotation to remove client header on Traefik Ingress
- add NGINX path routing doc
- List continuation and watch bookmark support (#1881)
- Allow watch bookmarks
- Work in progress
- Work in progress
- Work in progress
- Work in progress
- AsyncRequestStep tests
- Fix tests
- Bookmark tests
- Clarify names
- Work in progress
- Better support for namespace lists spanning REST calls
- Revert changes to charts
- Bug fixes
- Disassociate CRD creation from namespace startup
- Make unit-test more generic
- Clarify continue pattern
- Save continue for reinvoke of async request
- Correct method name
- Add security disclaimer statements
- fixing the mii domain and sample test after images have been updated (#1906)
- fixing the mii domain and sample test after images have been updated
- Modified group name used to copy file to server pod jenkins-ignore
- fix jrf test in mii samples not to change gid to root
- adding the fix I had removed by mistake
- update after Tom's review comments
  Co-authored-by: BHAVANI.RAVICHANDRAN@ORACLE.COM <bravicha@bravicha-1.subnet1ad1phx.devweblogicphx.oraclevcn.com>
  Co-authored-by: huiling_zhao <huiling.zhao@oracle.com>
- Added doc to eliminate client proxy header
- Added nginx link
- added ngnix ref
- update doc with review comments
- Change REST API query to handle hyphen, WebLogic Logging Exporter (#1897)
- Change REST API query to handle hyphen, WebLogic Logging Exporter jenkins-ignore
- Removed some comments jenkins-ignore
- Syncup with latest develop branch jenkins-ignore
- Merge with develop
  Co-authored-by: Ryan Eberhard <ryan.eberhard@oracle.com>
- OWLS-84517: Scaling failed when setting Dedicated to true (#1921)
- Rest authentication of requests should use namespace scope for Dedicated selection strategy
- remove System.out.println from unit test
- Retry failure fix (#1854)
- update from develop
- update from develop
- change log message
- Initial check in for JRF Fatal error fix
- remove test for now
- Fix logic
- add info for create jrf domain and remove obsolete constants
- add comment
- relax retry and update comments
- change comments
- change info text
- minor text change
- Add retry count to domain status and support retryCountMax
- increment retry count only if there is an error message
- doc update
- use existing error for increment
- Log message change
- rename retryCount to introspectJobFailureRetryCount
- Add logging for retry counter
- Fix NPE in unit test
- Minor refactoring
- refactor code
- update description of field
- changing description texts
- missed files
- change "MII Fatal Error" to "FatalIntrospectorError"
- default failure retry count should be 0
- change comparing failure retry count gte
- rename introspectJobFailureRetryCount field, reset counter if succcessful, and log before retry.
- Internationalize message and move logging to start of retry
- update message
- update message text
  Co-authored-by: Johnny Shum <cbdream99@gmail.com>
  Co-authored-by: Ryan Eberhard <ryan.eberhard@oracle.com>
- Configure JMS Server logs on PV and verify (#1924)
- configure jms server logs on PV and verify
- undo jms logs on pv
- fixing rolling restart assertions
- OWLS-80038 and OWLS-80090: fix mountPath validation and token substitution (#1911)
- Skip volume mount path validation if it contains valid tokens
- Check admin serverPod's additional volume mount paths too in domain validation
- Check cluster serverPod's additional mount paths too in domain validation
- check mountpath validation after token substitution is performed
- cleanup
- fine tuning
- minor cleanup
- In progress
- WIP
- minor change
- Modify a test name
- Pod shutdown tests porting (#1914)
- Fix the shutdown options
- Adding shutdown option tests
- Fix log location
- fix shutdown object assignment
- Add ind ms
- fix start policy
- wip
- wip
- wip
- wip
- wip
- Add tests without env override
- add debug messages
- refactor code
- check for in ms 2
- Fix pod name
- fix pod name
- Cleaned up comments and updated javadocs
- Removed JUnit4 test class
- Restore updatedomainconfig it class
- fix the yaml formatting
- Remove left over files
- update comments
- fix array size
- address review comments from Vanaja
- remove throws clause
- fix javadoc
- replace StepTest with FiberTest, remove unused Fiber code (#1926)
- Correct Ingress documentation (#1923)
- remove outdated sample doc references to weblogicImagePullSecretName (use imagePullSecretName) (#1920)
- Correct enableClusterRoleBinding
- Removal of Junit4 Integration tests (#1934)
- Removal of old Junit4 test
- Remove reference to junit4 integration test from pom.xml
- Remove ref to junit4 test from buildtime-reports
  Co-authored-by: ANTARYAMI.PANIGRAHI@ORACLE.COM <anpanigr@anpanigr-1.subnet1ad2phx.devweblogicphx.oraclevcn.com>
  Co-authored-by: ANTARYAMI.PANIGRAHI@ORACLE.COM <anpanigr@anpanigr-2.subnet1ad3phx.devweblogicphx.oraclevcn.com>
- OWLS-84141 fix operator upgrade issues related to domainNamespaceSelectionStrategy (#1930)
- Fix operator helm upgrade issues related to domainnamespaceSelectionStrategy
- minor fixes
- minor change
- make helm chart behavior matches what the operator runtime has
- Owls83813 fix a scaling issue when upgrade from 2.5.0 (#1933)
- comment out workaround for upgrade from 2.5.0
- Fix NPE and remove workaround in test
- cleanup
- minor modification
- MII jrf - improve wallet password handling (#1919)
- Improve opss wallet password handling.
- miijrf-remove-pwd-echo: add comments, tracing, doc fixes, and two fixes.
- fix comment
  Co-authored-by: Ryan Eberhard <ryan.eberhard@oracle.com>
- Mob refactoring (#1929)
- Expand unit test coverage
- startPodWatcher
- more watcher methods
- ConfigMapAfter refactor
- WIP
- finish start watcher methods
- resolve merge conflicts
- DomainPresenceInfos and DefaultResponseStep changes
- isolated domain presence info map
- refactoring changes
- PodListStep refactor
- generify
- refactoring
- move common methods up
- refactoring and bug fix
- Add namespace to DomainPresenceInfos
- refactoring WIP
- fix checkstyle errors
- refactoring changes
- start refactor to biconsumer
- continue processList refactor
- continue refactoring
- refactor PodListStep
- implementing NamespaceProcessor
- refactor readExistingResources to eliminate duplication
- Add refactorings from previous chunked list attempt
- add files not previously added
  Co-authored-by: Lenny Phan <lenny.phan@oracle.com>
  Co-authored-by: Dongbo Xiao <dongbo.xiao@oracle.com>
  Co-authored-by: ANIL.KEDIA@ORACLE.COM <anil.kedia@oracle.com>
- Rename new-integration-test directory (#1935)
  Co-authored-by: ANTARYAMI.PANIGRAHI@ORACLE.COM <anpanigr@anpanigr-2.subnet1ad3phx.devweblogicphx.oraclevcn.com>
- wait till admin pod has restarted instead of pod deleted (#1938)
- check existence of sa in helm templates (#1939)
- Kubernetes Java Client 10.0.0 (#1937)
- Kubernetes Java Client 10.0.0
- Rebuild charts
- Update charts
- Revert "check existence of sa in helm templates (#1939)"
  This reverts commit fcfd8855fc50759d3cc780c51757df424b27c235.
- Revert "Kubernetes Java Client 10.0.0 (#1937)"
  This reverts commit 8c9d57208938e055fa6eb79156fb6f590292fa6f.
- Supercedes #1900 (#1940)
- Fix text parsing issue modified: utility.sh Fix text parsing error on Ubuntu 18.04.5 LTS modified: validate.sh Fix yaml parsing error on MacOS. Wait for resource ready on MacOS and remove code for helm less than 3.0 Add text for domain status troubleshooting Use tag that includes AKS docs. Per Reza, inline OCR authentication Update _index.md Update create-domain-on-aks-inputs.yaml Update _index.md Update _index.md Update _index.md Update _index.md Update _index.md Update _index.md Remove UNIX Update _index.md Use AKS addon name Use alias as Rosemary suggested in our last PR.
- Update _index.md
- Update _index.md
  Co-authored-by: Galia <haixia.cheng@mircosoft.com>
  Co-authored-by: Ryan Eberhard <ryan.eberhard@oracle.com>
- Model and application archives In persistent persistent volume (#1936)
- Adding tests for model in pv for MII domain
- add docker login
- use oracle image instead of wls image
- wip
- fix wdt model file location
- fix the model home to point to a directory rather than a file
- add a application archive
- fix model file name
- wip
- create different directories for application and model file
- wip
- verify servers health
- fix checkstyle violations
- add admin-server in app target
- fix user names
- moved the tests to integration-tests directory
- address Pani's comments
- use CommonMiiTestUtils.createDomainResource
- add pv to server pod
- fix javadoc
- add public doc in test method javadoc
- Mii doc: update runtime update doc (#1942)
- Improve opss wallet password handling.
- miijrf-remove-pwd-echo: add comments, tracing, doc fixes, and two fixes.
- fix comment
- Update MII runtime update doc, including documenting embedded LDAP and credential mapping runtime updates as unsupported.
- review and edit
  Co-authored-by: Rosemary Marano <rosemary.marano@oracle.com>
- Owls 84594 (#1941)
- Test for reproducing bug owls-84594
- clean up cluster comparison
- Fix checkstyle issues
- Verify logs from ms1
- fix the expected ignore sessions attribute for ms1
- Simplify unit tests
  Co-authored-by: sankar <sankarpn@gmail.com>
- Integration test for secure nodeport service (#1931)
- Initial check-in
- Review comments
- Sync up develop branch
- Review comments resolution
- Add check the availability of exteranal service
- Resolve more review comments
- Consolidate the test doamin and rename the class
- Fixed few typos
  Co-authored-by: ANTARYAMI.PANIGRAHI@ORACLE.COM <anpanigr@anpanigr-2.subnet1ad3phx.devweblogicphx.oraclevcn.com>
  Co-authored-by: ANTARYAMI.PANIGRAHI@ORACLE.COM <anpanigr@anpanigr-1.subnet1ad2phx.devweblogicphx.oraclevcn.com>
- document altering WebLogic Server CLASSPATH and PRE_CLASSPATH (#1948)
- document altering WebLogic Server CLASSPATH and PRE_CLASSPATH
- minor doc edit
- Modified doc to remove client headers
- Updated doc/utility to download custom version of Load Balancer release
- fix broken link (#1952)
- added synchronized to execCommand to fix intermitten failures (#1947)
- change domain name to be different than the one used in other IT tests (#1943)
- fixed cleanup order to uninstall operator before cleaning sa to fix intermittent failures (#1946)
- fixed cleanup order to uninstall operator before cleaning sa
- replaced hardcoded secret name with var
- corrected secret name
- style
- Add TLS and Path Routing Tests for Nginx, Voyager and Traefik (#1910)
- add NGINX tls and two domains tests
- add tls ingress for Voyager
- add path routing for three lbs
- fix nginx service name in other tests
- cleanup
- use NGINX chart version 2.16.0
- uninstall nginx first in AfterAll
- add stable repo for Prometheus
- address Pani's comments
- Added Upgrade Test from Release 3.0.2 (#1956)
  Co-authored-by: ANTARYAMI.PANIGRAHI@ORACLE.COM <anpanigr@anpanigr-2.subnet1ad3phx.devweblogicphx.oraclevcn.com>
- Support for persistentVolumeClaimName, support for both non privilege… (#1916)
- Support for persistentVolumeClaimName, support for both non privileged port 8080 to listen and default priviledged port, imagePullSecrets and readme updates
- Incorporated doc review comments
- Owls-84294 handle missing sa in operator install/upgrade (#1957)
- check existence of sa in helm templates
- minor product change plus test changes
- minor changes
- more changes
- more doc change
- minor doc change
- fine tune doc text
- add error messagewq
- minor doc edit
- enable disabled test cases (#1949)
- Fix missing ODL configuration that may presents in the model (#1950)
- Fix missing ODL configuration that may presents in the model
- refine archive for fmw logging.xml
  Co-authored-by: Johnny Shum <cbdream99@gmail.com>
- Unit test and fix detection of stranded namespaces (#1953)
- Unit test and fix detection of stranded namespaces
- Some code simplification
- put some code back where it started from
- Correct method name
- Cache compiled pattern, use explicit constant for call limit.
- Fix retry regression (#1954)
- Fix retry regression
- simplify reset failurecount in podstepcontext
  Co-authored-by: Johnny Shum <cbdream99@gmail.com>
- Develop owls 84334 (#1902)
- support domain's secure mode
- Use the domain's administration port if the server's admin port is 0
- Fix the NPE when the WLS pod's listen port passed to Prometheus annotations is null - Derive the defaults for a few other MBean attributes that depend on the domain secure mode
- Fix error in getNAPProtocol for ServerTemplate
- Adding test to verify image update for WLS domain (#1959)
- first cut for image update
- minor change
- refactor the code after syncing to the latest develop branch
- edit the log message
- address the review comments
- remove the extra line
- disable ItTwoDomainsLoadBalancers.testApacheLoadBalancingCustomSample (#1962)
- Adding flexibility to integration tests to pull the base images from OCIR or OCR and more (#1951)
- use secret based on base images repo
- fix secrets
- fix secrets
- fix more secrets
- more secret fixes
- fixing image name
- fix compilation errors
- fix condition for exec
- fix merge conflict
- fix log messages in ItServerStartPolicy
- some more refactoring
- refactoring -move useSecret and deriving base image name logic to TestConstants
- fix checkstyle
- fix domain images repo for multi node cluster
- set image pull secret always
- fix domain images repo
- fix indentation
- change REPO env var to OCIR
- fixed grafana install (#1965)
- Add a testcase for -wdtModelHome option to the imagetool (#1961)
- add wdtModelHome parameter
- Adding testcase for custom wdt model home
- fix model home
- fix wdtmodelhome location
- remove @Test annotation
- log domain uid and image
- fix image name
- use wls pod for pv manipulation
- change pv name
- wip
- change pv permission to oracle:root
- use variable to store location model home
- wip
- add modelfile to the image
- supply modelfile in the image building process
- fix image push
- fix comments and javadocs
- fix log message
- fix image check
- list images
- fix image name
- address review comments
- add the pod exec and copy commands to common util file
- use file util from FileUtils
- fix default OCR image names (#1968)
- Add Tests for DataHome Override in Domain Custom Resource (#1964)
- add tests for dataHome override
- address Pani's comments
- re-enable ItTwoDomainsLoadBalancers.testApacheLoadBalancingCustomSample (#1969)
- initial commit for apache custom sample update
- re-enable ItTwoDomainsLoadBalancers.testApacheLoadBalancingCustomSample
- address Vanaja's comments
- Add ClientFactoryStub memento to watcher tests (#1967)
- OWLS-84786 - Use Kubernetes Java Client 10.0.0 (#1972)
- Second attempt at using the Kubernetes Java Client 10.0.0
  This reverts commit 9a33d3bf4f69f48470205a980322eedf867105fe.
- Diagnostics
- Remove duplicated dependencies
- More diagnostics
- Update charts
- changes for OWLS-84786
- removed diagnostics messages and code cleanup for OWLS-84786
- minor cleanup
- Back out changes to docs/charts dir.
  Co-authored-by: Ryan Eberhard <ryan.eberhard@oracle.com>
- Added integration test cases for Dedicated namespace scenarios (#1913)
- Added integration test cases for Dedicated namespace scenarios jenkins-ignore
- Syncup with JIRA OWLS-84517 and changes based on comments jenkins-ignore
- Delete index.yaml
- Remove binary files that are wrongly checked in jenkins-ignore
- Changes based on the comments jenkins-ignore
- Syncup with develop branch jenkins-ignore
- Changed javadoc jenkins-ignore
- Syncup w develop and changes based on the comments jenkins-ignore
- Added creating CRD jenkins-ignore
- Syncup with develop branch jenkins-ignore
- Syncup with develop branch jenkins-ignore
- Added this test suite to sequential run only jenkins-ignore
- Fixed an error in kindtest.sh jenkins-ignore
- OWLS-84562 - added tests for Namespace management enhancements (#1955)
- added tests
- updated test loc
- more tests
- fixed default secrests management
- fixed test logic
- corrected java docs
- fixed domainns
- fixed secret creation
- fixed secret dependencies
- fixed default domain crd dependencies
- fixed check pod creation
- style
- added rbac test
- added rbac test, corrected ns label
- addressed comments from review
- addressed comments from review1
- addressed comments from review2
- addressed comments from review3
- addressed more comments
- style
- removed commented out methods
- addressed review comments5
- correct image name construction of DB and FMW (#1974)
- Moving SOA deployment samples to a different repo (#1973)
- removed -t flag for create and drop scripts
- Deleted SOA deployment samples and updated README pointing to SOA external repo
- rephrased soa doc reference
- Reduce job delete timeout value. (#1978)
- add retry when scaling cluster with WLDF (#1986)
- add retry when scale cluster with WLDF
- cleanup
- Added automation to test StickySession using latest Traefik Version (#1976)
- Added automation for StickySession using latest Traefik Version jenkins-ignore
- Corrected a typo jenkins-ignore
- Improve nightly stability for ItIntrospectVersion.testUpdateImageName (#1988)
- improve nightly stability
- address review comment
- copy out scalingAction.log from admin pod when scaling cluster with WLDF (#1989)
- copy scalingAction.sh from admin server pod
- add retry when copyfile from pod
- cleanup
- cleanup
- update javadoc
- add copyFileFromPod to FileUtils
- address Lenny's comments
- OWLS-84881 better handle long domainUID (#1979)
- add limits to generated resource names
- work in progress
- get server and cluster info from introspector's results
- new configuration, validation, and unit tests
- minor change
- fix operator chart validation
- fix chart validation
- fix hard-coded external service name suffix in integration tests
- fix more integration test failures
- clean up
- minor doc fixwq
- Revert "minor doc fixwq"
  This reverts commit 56b5656e82acd01c5602fc0293312b749258a7c4.
- minor doc fix
- minor fix to test
- clean up test constants
- minor doc edits based on review comments
- improve error msg and remove hard-coded suffixes from doc and samples
- cleanup
- change the legal checks based on Ryan's suggestion
- add cluster size padding validation
- minor changes to helm validation
- only reserve spaces for cluster size wqsmaller than 100
- one more unit test case
- minor changes
- minor doc change
- change a method name
- Minor doc update on upgrading operator using helm upgrade (#1994)
- helm upgrade to new operator image should be issue from same github repository branch
- minor edits
- Reduce jrfmii map size (#1980)
- remove em.ear and sysman/log/EmSetup*.log backup_config to reduce configmap size
- compress merged model before encryption to reduce size
- update show modelscript
- minor changes
  Co-authored-by: Johnny Shum <cbdream99@gmail.com>
- use old default suffix when running older release (#1995)
- cross domain transaction recovery (#1993)
- test for crossdomaintransaction with TMAfterTLogBeforeCommitExit
- update image name
- update after Alex's review comment and develop
- update after review comments
- fix checkstyle violation
  Co-authored-by: BHAVANI.RAVICHANDRAN@ORACLE.COM <bravicha@bravicha-1.subnet1ad1phx.devweblogicphx.oraclevcn.com>
- External JMS client with LoadBalancer Tunneling (#1975)
- Added test for external JMS client
- separate methods for http and https tunneling
- Modified the test method name
- Added ssl debug
- Modify keytool command line args
- Added SAN extension to the ssl cert
- Resolved typo in openssl command
- Modified K8S_NODEPORT_HOST to return IP Address
- Review comments (a) modified method scope to private (b) usage of utility method to copy files from Pod
- Addressed more review comments
- Added more description and modified the assettion to check command line execution
  Co-authored-by: ANTARYAMI.PANIGRAHI@ORACLE.COM <anpanigr@anpanigr-2.subnet1ad3phx.devweblogicphx.oraclevcn.com>
- OWLS 84660 document the new resource name Helm configurations and resource name limits (#1997)
- add doc for length limits to resource names
- more doc changes
- more changes to the doc
- fix reference links
- changing some of the wording
- minor fixes
- move the main new section from domain-resource.md to its parent _index.md
- minor changes
- more edits
- more doc edits
- more edits
- address review comments
- minor change
- address more review comments
- minor edit
- Refactoring: extract domain namespaces code from main (#1999)
- Extract common domain namespace processing
- Ensure script config map exists when namespace started
- Address race condition in tuning parameters instantiation
- Use function to get current namespace list
- Remove obsolete code
- Automate domain in pv samples (#1996)
- Adding samples
- wip
- fix file paths
- correct pv pvc name
- wip
- fix managed server namebase
- delete pvc
- change the pv reclaim policy to recycle
- fix ms base name
- fix domain namespace in service check
- parameterize test to use wlst and wdt
- fix test type
- add javadoc
- wip
- delete domain and verify it is removed
- change domain name
- create credentials secret for each domain
- add t3publicaddress in input file
- wip
- Fix javadocs and comments
- delete pv and pvc and wait for it to terminate
- wip
- wip
- address review comments
- Add image secrets
- Change the domain name to be unique (#2005)
- Fix ENV variable setting in JRF domain in PV test class (#1998)
- debug JAVA_HOME issue for fmw image
- fix hard coded env var
- debugging
- debugging
- refactor the code
- delete the extra space line
- address the review comment
- address the review comment
- address the review comment
- synchronize startOracleDB method (#2006)
- add tests for terminating SSL at LB to access console and servers (#1977)
- add tests for terminating ssl at LB to access console and servers
- address Pani's comments
- fix traefik annotation
- add header in traefik ingress rules
- cleanup
- remove if __name__=main clause in python script (#2004)
- Add instructions for creating a custom Security Context Constraint (#2003)
- update openshift security docs to include custom scc instructions
  Signed-off-by: Mark Nelson <mark.x.nelson@oracle.com>
- update based on review comments
  Signed-off-by: Mark Nelson <mark.x.nelson@oracle.com>
- updates after review with ryan
  Signed-off-by: Mark Nelson <mark.x.nelson@oracle.com>
- updates after review with ryan
  Signed-off-by: Mark Nelson <mark.x.nelson@oracle.com>
- OWLS 84741: Scaling failed on Jenkins when setting Dedicated to true & io.kubernetes.client.openapi.ApiException: Not Found (#1990)
- Use REST client's access token for authentication and authorization
- Enable testDedicatedModeSameNamespaceScale
- Add patch permissions to rolebinding
- Code cleanup
- Changes from initial code review
- Use TuningParameters to acccess property to control Operator's REST API authentiction and authorization implementation
- Code review changes
- documentation updates
- document patch verb
- documentation changes based on review
- use code font for appropriate parameters
- Owls 84815 (#2009)
- Remove dependency of job processing on DomainNamespaces class
- Add unit tests for Namespace watcher
- work in progress
- start converting main to use instance methods
- Extract operator startup into instance method
- Define K8s version in main delegate
- refactoring: move fullRecheckFlag out of Namespaces
- Refactoring: extract Namespaces class
- test for ability to list domains when dedicated namespace strategy
- Handle null value for watch tuning in unit tests
- Correct chart
- Update test dependencies and POM and change Dockerfile to use JDK 15 (#2008)
- OWLS85461 add introspect version to server pod label (#2012)
- initial changes to add introspect version to pod labels
- work in progress
- work in progress
- cleanup
- fix unit test failure in PodWatcherTest not related to this PR
- minor changes
- doc changes
- add an example in the doc
- doc edits to address review comments
- address review comments
- cleanup
- add domainRestartVersion to the example
- move the patch part into the existing patch step and remove log messages
- minor doc edit
- refactored a little
- Correct overrideDistributionStrategy other places
- Add correct the other misspelled field name
  Co-authored-by: Ryan Eberhard <ryan.eberhard@oracle.com>
- OWLS 85530: OPERATOR INTROSPECTOR THROWS VALIDATION ERRORS FOR STATIC CLUSTER (#2014)
- getDynamicServersOrNone doesn't throw exception with 14.1.1.0
- check for ServerTemplate for Dynamic Servers
- Check if DynamicServers mbean exist
- Add test to introspect configured cluster created by online WLST
- Document configured cluster introspection test
- documentation updates to README and referencing JIRA OWLS-85530
- modify istio installation script (#2015)
  Co-authored-by: ANTARYAMI.PANIGRAHI@ORACLE.COM <anpanigr@anpanigr-2.subnet1ad3phx.devweblogicphx.oraclevcn.com>
- Added automations to verify domain in image samples using wlst and wdt (#2016)
- Added DII sample tests jenkins-ignore
- Changes based on the comments
- Use spec.containers[].image instead of status.containerStatuses[].image (#2018)
- Fix dedicated mode test (#2019)
- Detect missing CRD during domain checks
- refactoring: convert DomainNamespaces to use instance variables rather than statics
- Avoid creating namespace watchers when using dedicate mode
- remove obsolete fields and methods
- Xc owls85579 (#2024)
- fix intermittent error in Jenkins
- assert not null for admin pod log
- add retry to get admin server pod log
- cleanup
- set sinceSeconds when get admin server pod log
- print out pod log
- increase sinceSeconds when getting pod log
- return previous terminated pod log
- fix error
- remove some debug flag
- remove commented out lines
- Owls85582 take all ALWAYS servers before considering rest of the servers when meet cluster replicas requirement (#2020)
- add all Always servers before consider IfNeeded servers
- add unit test cases for NEVER policy
- clean up unit tests
- minor changes to the unit tests
- resort the final server startup list
- Release note updates (#2026)
- Release note updates
- Review comments
- Review comments
- Detect and shut down stuck server pods (#2027)
- Detect and shut down stuck server pods
- Send 0 grace period seconds to force delete
- Log message after deleting stuck pod
- Ignore testUpdateImageName if the image tag is 14.1.1.0-11 (#2017)
- abort the test if the image tag is 14.1.1.0-11
- minor change
- implement the custom annotation @AssumeWebLogicImage
- checkstyple
- adding the review comments
- Owls83995 - Sample scripts to shutdown and start a specific managed server/cluster/domain (#2002)
- owls-83995 - Scripts to start/stop a managed server/cluster/domain
- fix method comments
- Minor changes
- Address review comments and fix script comments/usages.
- Added integration tests, made few doc changes based on review comments and minor fix.
- Clarify script usage, updated README file and minor changes.
- Changes to add script usage details
- Address PR review comments
- Review comment and cleanup.
- Documentation changes based on PR review comments.
- Fully qualified replica value as per review comments
- edit docs
- edit README
- Address PR review comments
- Changes to address PR review comments and removed ItLifecycleSampleScripts class by adding methods in ItSamples
- fix indentation
- fix comment and typo
- Added validation as per review comment.
- changes to address review comment and minor cleanup
- PR review comment - changes to assume default policy is IF_NEEDED if policy is not set at domain level.
- changes for new algorithm as documented on http://aseng-wiki.us.oracle.com/asengwiki/pages/viewpage.action?pageId=5280694898
- More changes for new algorithm.
- code refactoring and minor doc update.
- Minor change for dynamic server name validation
- Changes to address review comments.
- More review comment changes and cleanup.
- Unset policy to start independent (stadalone) maanged server instead of ALWAYS.
- Latest review comment changes.
- More changes based on review comments.
- Chnages for latest review comments.
- Remove unused action variable and assignments.
- Fix the logic to display error when config map not found and return policy without quotes.
- Changes for latest review comments.
- Changes for latest round of review comments.
- use printError instead of echo
- Changes to remove integration tests and doc review comments.
  Co-authored-by: Rosemary Marano <rosemary.marano@oracle.com>
- Use oracle:root to support running in the OpenShift restrictive SCC (#2007)
- Use oracle:root to support running in the OpenShift restrictive SCC
- Fix issues found in test
- Clean-up
- Add SECURITY.md
- JRF mii Domain test class/infra for the mii RCU functionality testing (#2011)
- first cut for ItJrfMiiDomain
- minor change
- addressing the review comments, adding em console verification
- use default -ext as service suffix
- minor change
- minor change
- OWLS85912 - Integration tests for domain lifecycle scripts added as part of OWLS-83995. (#2032)
- Added integration tests for domain lifecycle scripts.
- minor changes.
- fix K8s setup doc (#2028)
- owls-85910 - display correct minimumReplicas status for dynamic clusters (#2035)
- Additional release note updates (#2034)
- Response from pod delete REST call can be Pod or Status (#2038)
- Ignore response value from delete operations
- Correct api group
- Switch from custom objects
- Remove unnecessary whitespace changes
- Owls85476 attempt to fix intermittent integration test issues in nightlies (#2037)
- switch to kubectl and improve the test code
- change testAddSecondApp to use kubectl as well
- more changes
- check main thread done as well
- minor update
- minor change
- Added anti affinity to the wls pods (#2033)
- added nfs
- added antiaffity
- added syncronized
- fixed typo
- fixed typo1
- fixed domain crd for itpodtemplate
- style
- added storageclass to pv
- added storageclass to pvc
- added storageclass to pvc1
- style
- fixed hostpath
- removed fss related code
- style
- removed unneeded file
- Add comment to liveness probe (#2041)
- Add com…
Release tag changed to match Image Tool.
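The point of this change is that WDT release tags now follow the same naming convention as the WebLogic Image Tool, so tooling can build a release download URL from a version number alone. As a rough illustration only (the `release-<version>` tag format, the version value, and the asset name below are assumptions for this sketch, not details taken from this PR), a script could construct the URL like this:

```shell
#!/bin/sh
# Hypothetical sketch: build the GitHub release-asset URL for a given
# WDT version, assuming the tag format is "release-<version>".
WDT_VERSION="1.9.1"   # placeholder version, not from this PR
URL="https://github.com/oracle/weblogic-deploy-tooling/releases/download/release-${WDT_VERSION}/weblogic-deploy.zip"
echo "$URL"
# A real script would then download it, e.g.:
#   curl -fL -o weblogic-deploy.zip "$URL"
```

Because the tag is derived mechanically from the version, the same function works for any release, which is what lets the Image Tool (and the operator's utility scripts) fetch a user-specified WDT version without a per-release lookup table.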
(pani) Started a run https://build.weblogick8s.org:8443/job/weblogic-kubernetes-operator-kind-new/954/