Sample WebLogic 12.2.1.3 domain image orchestrated in Kubernetes #665

Closed · wants to merge 2 commits into master · +739 −0

Sample WebLogic 12.2.1.3 domain image orchestrated in Kubernetes

mriccell committed Nov 20, 2017
commit 676b878abcf4887b374759a8d6a767f1b7b85197
@@ -0,0 +1,36 @@
# Pull base image
# ---------------
FROM weblogic-12.2.1.3-developer:latest
MAINTAINER Lily He <lily.he@oracle.com>
# Environment variables required for this build (do NOT change)
# -------------------------------------------------------------
ENV MW_HOME="$ORACLE_HOME" \
PATH="$ORACLE_HOME/wlserver/server/bin:$ORACLE_HOME/wlserver/../oracle_common/modules/org.apache.ant_1.9.2/bin:$JAVA_HOME/jre/bin:$JAVA_HOME/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$ORACLE_HOME/oracle_common/common/bin:$ORACLE_HOME/wlserver/common/bin:$ORACLE_HOME/wlserver/samples/server/" \
WLST="$ORACLE_HOME/oracle_common/common/bin/wlst.sh" \
DOMAIN_NAME=wlsdomain \
SAMPLE_DOMAIN_HOME=/u01/wlsdomain \
ADMIN_PORT=8001
USER root
# Copy scripts and install python http lib
# --------------------------------
COPY container-scripts/ /u01/oracle/
RUN chmod +x /u01/oracle/*.sh /u01/oracle/*.py
# The following installation needs internet access. If you are behind a proxy you may need to set http_proxy and https_proxy
# ENV http_proxy=foo \
# https_proxy=foo
# Install the Python 'requests' module, which the setup scripts use to call the WebLogic REST API
RUN python /u01/oracle/get-pip.py && \
pip install requests
# unset proxy
# ENV https_proxy="" \
# http_proxy=""
USER oracle
WORKDIR $SAMPLE_DOMAIN_HOME
@@ -0,0 +1,137 @@
WebLogic Sample on Kubernetes with Shared Domain Home
=========================================
This sample extends the Oracle WebLogic developer install image by creating a sample WLS 12.2.1.3 domain and cluster to run in Kubernetes. The WebLogic domain consists of an Administration Server and several Managed Servers running in a WebLogic cluster. All WebLogic servers share the same domain home, which is mapped to an external volume.
## Prerequisites
1. You need to have a Kubernetes cluster up and running with kubectl installed.
2. You have built the oracle/weblogic:12.2.1.3-developer image locally from the Dockerfile and scripts here: https://github.com/oracle/docker-images/tree/master/OracleWebLogic/dockerfiles/12.2.1.3/
3. The username and password for the WebLogic domain are stored in k8s/secrets.yml, encoded with base64. The default values are weblogic/weblogic1.
If you want to customize them, first generate the encoded values by running `echo -n <username> | base64` and `echo -n <password> | base64`, then update k8s/secrets.yml with the new encoded data.
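For example, the default credentials encode as follows (`echo -n` matters: a trailing newline would change the encoding):

```shell
# Encode the default username/password for k8s/secrets.yml.
echo -n weblogic | base64    # d2VibG9naWM=
echo -n weblogic1 | base64   # d2VibG9naWMx
# Sanity check: decoding round-trips to the original value.
echo -n d2VibG9naWM= | base64 --decode   # weblogic
```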
## How to Build and Run
### 1. Build the WebLogic Image for This Sample Domain
Before building the image:
1. Download get-pip.py from https://bootstrap.pypa.io/get-pip.py and save it to the 'container-scripts' folder.
2. If you run `docker build` behind a proxy, set the http and https proxies in the Dockerfile.
Then build the image:
```
$ docker build -t wls-k8s-domain .
```
Or you can build the image by running `build.sh` directly.
### 2. Prepare Volume Directories
Three volumes are defined in k8s/pv.yml, each referring to an external directory. You can use host paths or shared NFS directories; change the paths accordingly. The external directories must initially be empty.
**NOTE:** The first two persistent volumes, 'pv1' and 'pv2', are used by the WebLogic server pods. All processes in the WebLogic server pods run with UID 1000 and GID 1000 by default, so set permissions on these two external directories such that UID 1000 or GID 1000 can read and write them. The third persistent volume, 'pv3', is reserved for later use; we assume the root user will access it, so no particular permissions need to be set on its directory.
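A minimal sketch of preparing the host directories (the paths are placeholders; use whatever paths k8s/pv.yml actually declares, and run the commented `chown` as root):

```shell
# Placeholder paths; point these at the host directories referenced by pv1/pv2 in k8s/pv.yml.
PV1_DIR="${PV1_DIR:-$(mktemp -d)}"
PV2_DIR="${PV2_DIR:-$(mktemp -d)}"
mkdir -p "$PV1_DIR" "$PV2_DIR"
# The WebLogic pods run as UID/GID 1000, so as root you would hand the directories over:
#   chown -R 1000:1000 "$PV1_DIR" "$PV2_DIR"
# At minimum, make sure that UID/GID can read and write them:
chmod 0770 "$PV1_DIR" "$PV2_DIR"
stat -c '%a %n' "$PV1_DIR" "$PV2_DIR"
```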
### 3. Deploy All the Kubernetes Resources
Run the script deploy.sh to deploy all resources to your Kubernetes cluster. You can also deploy the resources individually by running the following commands:
```
$ kubectl create -f k8s/secrets.yml
$ kubectl create -f k8s/pv.yml
$ kubectl create -f k8s/pvc.yml
$ kubectl create -f k8s/wls-admin.yml
$ kubectl create -f k8s/wls-stateful.yml
```
### 4. Check Resources Deployed to Kubernetes
#### 4.1 Check Pods and Controllers
List all pods and controllers:
```
$ kubectl get all
NAME READY STATUS RESTARTS AGE
po/admin-server-1238998015-f932w 1/1 Running 0 11m
po/managed-server-0 1/1 Running 0 11m
po/managed-server-1 1/1 Running 0 8m
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/admin-server 10.102.160.123 <nodes> 8001:30007/TCP 11m
svc/kubernetes 10.96.0.1 <none> 443/TCP 39d
svc/wls-service 10.96.37.152 <nodes> 8011:30009/TCP 11m
svc/wls-subdomain None <none> 8011/TCP 11m
NAME DESIRED CURRENT AGE
statefulsets/managed-server 2 2 11m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/admin-server 1 1 1 1 11m
NAME DESIRED CURRENT READY AGE
rs/admin-server-1238998015 1 1 1 11m
```
#### 4.2 Check PV and PVC
List all pv and pvc:
```
$ kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
pv1 10Gi RWX Recycle Available manual 17m
pv2 10Gi RWX Recycle Bound default/wlserver-pvc-1 manual 17m
pv3 10Gi RWX Recycle Bound default/wlserver-pvc-2 manual 17m
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
wlserver-pvc-1 Bound pv2 10Gi RWX manual 18m
wlserver-pvc-2 Bound pv3 10Gi RWX manual 18m
```
Three PVs and two PVCs are defined; one PV is reserved for later use.
#### 4.3 Check Secrets
List all secrets:
```
$ kubectl get secrets
NAME TYPE DATA AGE
default-token-m93m1 kubernetes.io/service-account-token 3 39d
wlsecret Opaque 2 19m
```
### 5. Check WebLogic Server Status via the Administration Console
The admin console URL is 'http://[hostIP]:30007/console'.
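Because the admin service is exposed as a NodePort, any node address works for `[hostIP]`. A small sketch of constructing the URL (the jsonpath query is an assumption about a typical cluster; substitute your node's IP if kubectl is not at hand):

```shell
# Look up a node address if kubectl is available; otherwise fill NODE_IP in by hand.
NODE_IP="${NODE_IP:-$(kubectl get nodes \
  -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}' 2>/dev/null || true)}"
NODE_IP="${NODE_IP:-[hostIP]}"
CONSOLE_URL="http://${NODE_IP}:30007/console"
echo "$CONSOLE_URL"
```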
### 6. Troubleshooting
You can trace WebLogic server output and logs for troubleshooting.
Trace the WebLogic server output (replace $serverPod with the actual pod name of a WebLogic server):
```
$ kubectl logs -f $serverPod
```
You can look at the WebLogic server logs by running:
```
$ kubectl exec managed-server-0 -- tail -f /u01/wlsdomain/servers/managed-server-0/logs/managed-server-0.log
$ kubectl exec managed-server-0 -- tail -f /u01/wlsdomain/servers/managed-server-1/logs/managed-server-1.log
$ kubectl exec managed-server-0 -- tail -f /u01/wlsdomain/servers/AdminServer/logs/AdminServer.log
```
### 7. Restart All Pods
#### 7.1 Shut Down the Managed Server Pods Gracefully
```
$ kubectl exec -it managed-server-0 -- /u01/wlsdomain/bin/stopManagedWebLogic.sh managed-server-0 t3://admin-server:8001
$ kubectl exec -it managed-server-1 -- /u01/wlsdomain/bin/stopManagedWebLogic.sh managed-server-1 t3://admin-server:8001
```
#### 7.2 Shut Down the Administration Server Pod Gracefully
First, gracefully shut down the Administration Server process. Note that you need to replace $adminPod with the actual admin server pod name.
```
$ kubectl exec -it $adminPod -- /u01/wlsdomain/bin/stopWebLogic.sh <username> <password> t3://localhost:8001
```
Next, manually delete the admin pod.
```
$ kubectl delete pod/$adminPod
```
After the pods are stopped, each pod's controller restarts it automatically.
Wait until all pods are running and ready again; monitor pod status via `kubectl get pod`.
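The waiting step can be scripted. A minimal sketch that polls container readiness in the default namespace (the function name is ours; call it after the pods have been restarted):

```shell
# Poll until every container in every pod reports ready.
wait_for_ready() {
  while true; do
    # One "true"/"false" token per container across all pods.
    ready=$(kubectl get pods -o jsonpath='{range .items[*]}{.status.containerStatuses[*].ready}{" "}{end}')
    case "$ready" in
      *false*) sleep 5 ;;   # at least one container is not ready yet
      *true*)  return 0 ;;  # everything listed is ready
      *)       sleep 5 ;;   # no pods reported yet
    esac
  done
}
# wait_for_ready   # uncomment to block until all pods are Ready again
```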
### 8. Cleanup
Run the script clean.sh to remove all resources from your Kubernetes cluster. You can also do the cleanup individually by running the following commands:
```
$ kubectl delete -f k8s/wls-stateful.yml
$ kubectl delete -f k8s/wls-admin.yml
$ kubectl delete -f k8s/pvc.yml
$ kubectl delete -f k8s/pv.yml
$ kubectl delete -f k8s/secrets.yml
```
You also need to clean up all data in the volume directories, e.g. via `rm -rf *` inside each directory.
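A hedged sketch of that volume cleanup (the directory variables are placeholders for the host paths behind pv1 and pv2; `find -delete` empties each directory, including dotfiles, while keeping the directory itself, which the PVs expect to exist):

```shell
# Placeholder paths; substitute the host directories from k8s/pv.yml.
PV1_DIR="${PV1_DIR:-$(mktemp -d)}"
PV2_DIR="${PV2_DIR:-$(mktemp -d)}"
for d in "$PV1_DIR" "$PV2_DIR"; do
  # Remove everything inside the directory, but keep the directory.
  find "$d" -mindepth 1 -delete
done
```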
## COPYRIGHT
Copyright (c) 2014-2017 Oracle and/or its affiliates. All rights reserved.
@@ -0,0 +1,2 @@

@Djelibeybi

Djelibeybi Nov 20, 2017

Member

This script needs a #!/bin/bash shebang line.

docker build -t wls-k8s-domain .
@@ -0,0 +1,6 @@
kubectl delete -f k8s/wls-stateful.yml --now=true

@Djelibeybi

Djelibeybi Nov 20, 2017

Member

This needs a #!/bin/bash shebang line.

kubectl delete -f k8s/wls-admin.yml --now=true
kubectl delete -f k8s/pvc.yml --now=true
kubectl delete -f k8s/pv.yml --now=true
kubectl delete -f k8s/secrets.yml --now=true
@@ -0,0 +1,104 @@
#Copyright (c) 2014-2017 Oracle and/or its affiliates. All rights reserved.
#
#Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl.
#
# Author Lily He
import requests
from requests.auth import HTTPBasicAuth
import json
import shutil
import sys
import os
from time import time
from time import sleep
from collections import OrderedDict
import os
adminPort=os.environ["ADMIN_PORT"]
prefix='http://localhost:' + adminPort + '/management/weblogic/latest/'
domainDir=os.environ["SAMPLE_DOMAIN_HOME"]
jmsFileName="mymodule-jms.xml"
jdbcFileName="ds1-jdbc.xml"
print (prefix, domainDir)
user = os.environ["WLUSER"]
pwd = os.environ["WLPASSWORD"]
auth = HTTPBasicAuth(user, pwd)
header1 = {'X-Requested-By': 'pythonclient','Accept':'application/json','Content-Type':'application/json'}
header2 = {'X-Requested-By': 'pythonclient','Accept':'application/json'}
def delete(tail):
myResponse = requests.delete(prefix+tail, auth=auth, headers=header2, verify=True)
result(myResponse, 'delete', 'false')
def get(tail):
myResponse = requests.get(prefix+tail, auth=auth, headers=header2, verify=True)
result(myResponse, 'get', 'false')
return myResponse
def result(res, opt, fail):
print (res.status_code)
if(res.content and not res.content.isspace()):
print (res.content)
if(res.ok):
print opt, 'succeed.'
else:
print opt, 'failed.'
if(fail == 'true'):
res.raise_for_status()
def waitAdmin():
print("wait until admin started")
tail='domainRuntime/serverRuntimes/'
fail = True
while(fail):
sleep(2)
try:
res = requests.get(prefix+tail, auth=auth, headers=header2, verify=True)
print res.status_code
if(res.ok):
fail=False
except Exception:
print "waiting admin started..."
def cpJMSResource(modulefile):
print("cpJMSResource", modulefile)
destdir=domainDir+'/config/jms/'
try:
os.makedirs(destdir)
except OSError:
if not os.path.isdir(destdir):
raise
destfile=destdir + jmsFileName
shutil.copyfile(modulefile, destfile)
print('copy jms resource finished.')
def cpJDBCResource(modulefile):
print("cpJDBCResource", modulefile)
destdir=domainDir+'/config/jdbc/'
try:
os.makedirs(destdir)
except OSError:
if not os.path.isdir(destdir):
raise
destfile=destdir + jdbcFileName
shutil.copyfile(modulefile, destfile)
print('copy jdbc resource finished. from', modulefile, 'to', destfile)
def createOne(name, tail, data):
#print("create", name, tail, data)
jData = json.dumps(data, ensure_ascii=False)
print(jData)
myResponse = requests.post(prefix+"edit/"+tail, auth=auth, headers=header1, data=jData, verify=True)
result(myResponse, 'create ' + name, 'true')
def createAll(inputfile):
jdata = json.loads(open(inputfile, 'r').read(), object_pairs_hook=OrderedDict)
for tkey in jdata.keys():
ss =jdata.get(tkey)
for key in ss.keys():
oneRes = ss.get(key)
print(oneRes)
createOne(key, oneRes['url'], oneRes['data'])
@@ -0,0 +1,51 @@
{ "resources": {
"myCluster": {
"url": "clusters",
"data": {
"clusterMessagingMode": "unicast",
"name": "myCluster"
}
},
"managed-server-0": {
"url": "servers",
"data": {
"listenPort": 8011,
"listenAddress": "managed-server-0.wls-subdomain.default.svc.cluster.local",
"cluster": ["clusters", "myCluster"],
"name": "managed-server-0"
}
},
"managed-server-1": {
"url": "servers",
"data": {
"listenPort": 8011,
"listenAddress": "managed-server-1.wls-subdomain.default.svc.cluster.local",
"cluster": ["clusters", "myCluster"],
"name": "managed-server-1"
}
},
"managed-server-2": {
"url": "servers",
"data": {
"listenPort": 8011,
"listenAddress": "managed-server-2.wls-subdomain.default.svc.cluster.local",
"cluster": ["clusters", "myCluster"],
"name": "managed-server-2"
}
},
"managed-server-3": {
"url": "servers",
"data": {
"listenPort": 8011,
"listenAddress": "managed-server-3.wls-subdomain.default.svc.cluster.local",
"cluster": ["clusters", "myCluster"],
"name": "managed-server-3"
}
}
}}
@@ -0,0 +1,54 @@
#Copyright (c) 2014-2017 Oracle and/or its affiliates. All rights reserved.
#
#Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl.
#
# Author Lily He
import requests
from requests.auth import HTTPBasicAuth
import json
import shutil
import sys
import os
from time import time
from time import sleep
from collections import OrderedDict
import base
clusterData='cluster.json'
defaultDSModule='ds1-jdbc.xml'
defaultDSJson='ds.json'
defaultJMSModule='mymodule-jms.xml'
defaultJMSJson='jmsres.json'
def createAll():
createDomain()
createDS(defaultDSModule, defaultDSJson)
createJMS(defaultJMSModule, defaultJMSJson)
def createDomain():
base.waitAdmin()
base.createAll(clusterData)
def createDS(DSModule, DSJson):
base.cpJDBCResource(DSModule)
base.createAll(DSJson)
def createJMS(JMSModule, JMSJson):
base.cpJMSResource(JMSModule)
base.createAll(JMSJson)
print 'url:', base.prefix
start=time()
option=sys.argv[1]
if(option == 'createDomain'):
createDomain()
elif(option == 'createJMS'):
createJMS(sys.argv[2], sys.argv[3])
elif(option == 'createDS'):
createDS(sys.argv[2], sys.argv[3])
elif(option == 'createAll'):
createAll()
end=time()
print option, "spent", (end-start), "seconds"