v3.14.9 Deployment Instructions_draft
The OpenWIS Installation Guide aims at providing the various steps to install and configure the OpenWIS components. This guide is intended for system administrators, as most of the operations require root access.
The Installation Guide describes the installation and configuration process for each OpenWIS component in dedicated sections.
A full distribution of OpenWIS consists of the following artefacts. These are either available from Jenkins or can be built from source:
- openwis-securityservice.war: Security service endpoint.
- openwis-management-service.ear: Management services
- openwis-dataservice.ear: Data Services
- openwis-user-portal.war: Public user portal
- openwis-admin-portal.war: Administration portal
- openwis-portal-solr.war: Solr search index
- stagingPost.war: Staging post
- PopulateLDAP.zip: A tool used during installation to populate users and groups in OpenAM.
- openwis-dependencies.zip: An archive containing various dependencies used by OpenWIS.
OpenWIS requires the following dependencies.
- OpenAM
- OpenDJ
In addition to these, the following system and 3rd party dependencies may be required for specific components:
- OpenJDK 1.8
- Wildfly 15.0.1 Final
- Tomcat 9.0.33
- vsftpd
- httpd
From an installation point of view, deploying a GISC, a DCPC or an NC is equivalent. The only differences reside in configuration (described below).
Each deployment consists of deploying the following components:
- Security Service (OpenDJ, OpenAM and OpenWIS WebServices)
- Database (PostgreSQL)
- Data Service + Cache replication (GISC only)
- Management Service
- Portals and front-end
- Staging Post
OpenWIS v3.14.10 installation structure example:
It requires 4 Tomcat instances. If all Tomcat instances run on the same OS host, each must run under a different user.
For maintenance or migration purposes, the whole system will need to be restarted. To avoid losing any data or requests, the following process may be performed:
- On the Admin portal: stop all local services (in Backup / Availability of Local System). This prevents users from performing new requests and stops the system from ingesting and processing new subscriptions.
- Wait for all JMS queues to drain to 0 (see section 5.6.1). This lets current processing complete.
- Stop servers (only if necessary) in this order:
- User Portal(s)
- Admin Portal
- Solr
- Security Service
- Data Service
- Postgres
- Perform the maintenance operations
- Start the servers in reverse order (only for servers that have been stopped):
- Postgres
- Data Service
- Security Service
- Solr
- Admin Portal
- User Portal(s)
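The stop/start ordering above can be captured in a small helper, so the start order is always the exact reverse of the stop order and the two lists can never drift apart. This is a sketch only: the component names and the echoed commands are placeholders, not real OpenWIS stop/start commands.

```shell
# Stop order from the list above; names are illustrative placeholders.
STOP_ORDER="user-portal admin-portal solr security-service data-service postgres"

reverse_words() {
    # Print the given words in reverse order.
    out=""
    for w in "$@"; do
        out="$w $out"
    done
    echo "$out"
}

stop_all() {
    for svc in $STOP_ORDER; do
        echo "stopping $svc"    # replace with the real stop command
    done
}

start_all() {
    # The start order is derived by reversing the stop order.
    for svc in $(reverse_words $STOP_ORDER); do
        echo "starting $svc"    # replace with the real start command
    done
}
```

Deriving the start order from the stop order means a new component only has to be added in one place.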
All OpenWIS components can be installed on RedHat 5.5 or newer. The following system configuration is required:
- Time set in UTC and synchronized with NTP on an external time server
- English language (required by OpenAM installer)
- Firewall settings to allow communication between components
The first point is to ensure that the hostname is properly set.
Make sure you are logged in as root.
cd /etc/sysconfig
vi network
Look for the HOSTNAME line and replace it with the new hostname you want to use.
HOSTNAME=<HOSTNAME>
Next we will edit the /etc/hosts file and set the new hostname.
vi /etc/hosts
The changes to /etc/hosts and /etc/sysconfig/network are necessary to make your changes persistent (in the event of an unscheduled reboot). Now we use the hostname program to change the hostname that is currently set.
hostname <HOSTNAME>
Run it again without any parameters to verify that the hostname has changed:
hostname
Finally, restart the network to apply the changes made to /etc/hosts and /etc/sysconfig/network.
service network restart
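The edit to /etc/sysconfig/network can also be scripted. `set_hostname_in_file` below is a hypothetical helper, written to take the file as an argument so it can be tried safely on a copy before touching the real file:

```shell
# Hypothetical helper: rewrite (or add) the HOSTNAME= line in a
# sysconfig-style file such as /etc/sysconfig/network.
set_hostname_in_file() {
    file=$1
    new=$2
    if grep -q '^HOSTNAME=' "$file"; then
        # Replace the existing HOSTNAME line in place.
        sed -i "s/^HOSTNAME=.*/HOSTNAME=$new/" "$file"
    else
        # No HOSTNAME line yet: append one.
        echo "HOSTNAME=$new" >> "$file"
    fi
}

# Usage (as root; gisc1.example.org is a placeholder hostname):
#   set_hostname_in_file /etc/sysconfig/network gisc1.example.org
#   hostname gisc1.example.org
```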
As root, create a user “openwis” with the default home directory (/home/openwis):
useradd openwis
passwd openwis
Changing password for user openwis…
The following Java version is required: OpenJDK 1.8.0
Java installation with yum:
As root:
yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel
If Java is already installed, check that java-1.8.0-openjdk-devel.x86_64 is installed (the JDK is required for Maven compilation) and, as root, select the Java 1.8 version using:
alternatives --config java
As openwis,
set the Java environment variables. Add these lines to the .bash_profile file:
JAVA_HOME=/etc/alternatives/jre_1.8.0_openjdk
export JAVA_HOME
Apply:
source .bash_profile
Verify:
echo $JAVA_HOME
and
java -version
- In this document, the Java 1.8 release used is 1.8.0_242
If git is not installed, run as root:
yum install git-all
Note: some OpenWIS versions may require an older version of Maven. See: Getting-Started#installing-apache-maven
As root:
yum install -y wget unzip
cd /opt/
wget https://downloads.apache.org/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.zip
unzip apache-maven-3.6.3-bin.zip
rm apache-maven-3.6.3-bin.zip
and as user:
cd ~
vim .bash_profile
Add:
export M2_HOME=/opt/apache-maven-3.6.3
export M2=$M2_HOME/bin
export PATH=$M2:$PATH
If needed:
export JAVA_OPTS="-Xms1g -Xmx2g -XX:MaxPermSize=2g"
Apply:
source .bash_profile
Verify:
mvn -version
Outputs:
Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
Maven home: /opt/apache-maven-3.6.3
Java version: 1.8.0_242, vendor: Oracle Corporation, runtime: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.242.b08-0.el7_7.x86_64/jre
Default locale: fr_FR, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-957.21.3.el7.x86_64", arch: "amd64", family: "unix"
Maximum Number of Open File Descriptors
OpenWIS system makes intensive use of file access and network connections.
By default, RedHat limits the number of open file descriptors to 1024 per user, which may prove insufficient during bursts of files or services to process.
Beyond this limit, errors such as “Too many open files” or “cannot open connection” to the database occur, which may cause unexpected and unpredictable problems.
All the components run as the ‘openwis’ user or the ‘postgres’ user (database only).
It is therefore recommended to increase this limit on all component hosts.
Test:
ulimit -n
If the limit is below 8192, then as root configure the file /etc/security/limits.conf (the system needs to be restarted):
openwis - nofile 8192
postgres - nofile 8192
To check that the limit has been applied, log in as the openwis or postgres user and run:
ulimit -n
It should show: 8192
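The two limits.conf entries can be added idempotently with a small helper. This is a sketch: `ensure_nofile_limit` is a hypothetical function, and the file argument lets you test it on a copy before touching /etc/security/limits.conf.

```shell
# Hypothetical helper: add "<user> - nofile <limit>" to a limits file
# only if that exact line is not already present.
ensure_nofile_limit() {
    user=$1
    limit=$2
    conf=$3
    line="$user - nofile $limit"
    grep -qxF "$line" "$conf" || echo "$line" >> "$conf"
}

# Usage (as root):
#   ensure_nofile_limit openwis  8192 /etc/security/limits.conf
#   ensure_nofile_limit postgres 8192 /etc/security/limits.conf
```

Because the helper checks for the exact line first, it is safe to run again on an already-configured host.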
Clone OpenWIS in ~/maven_projects directory:
As openwis,
mkdir maven_projects
cd maven_projects
git clone https://github.com/OpenWIS/openwis.git
cd openwis
git checkout openwis-3.14.10
mvn clean install -Pdependencies,openwis,admin
mvn clean install -Puser
Tests can be skipped with the option:
-DskipTests=true
Copy and unzip openwis-dependencies/target/openwis-dependencies.zip to /home/openwis:
cp maven_projects/openwis/openwis-dependencies/target/openwis-dependencies.zip ~
unzip openwis-dependencies.zip
cd ~/openwis-dependencies/data-management-services/
unzip openwis-dataservice-config-files.zip
*For a Maven build on Windows, see the Getting-Started wiki.
Note: for this installation, a minimum of 3 Tomcat instances is required:
- one for the Security service and utilities
- one for the Admin portal
- one for the User portal
Each must be installed *under a different user, with a different home directory and path. The following chapter details the installation of one instance of Apache Tomcat.
You can download a copy of Tomcat 9.0.36 from the project website (or one of the mirrors):
As openwis,
wget https://downloads.apache.org/tomcat/tomcat-9/v9.0.36/bin/apache-tomcat-9.0.36.zip
Once downloaded, perform the following commands in the openwis home directory:
As openwis,
unzip apache-tomcat-9.0.36.zip
chmod a+x apache-tomcat-9.0.36/bin/*.sh
If the server is installed as a plain application: the provided scripts (in ~/openwis-dependencies/portals) can be used by the user to start and stop Tomcat:
As openwis
cp ~/openwis-dependencies/portals/*.sh .
chmod a+x *.sh
Note: the start/stop scripts may need updating; $CATALINA_HOME must point to the Tomcat instance home. Start script example:
#!/bin/sh
#
# Start OpenWIS Tomcat
#
# Settings
export CATALINA_OPTS="-Xmx512m -XX:MaxPermSize=256m"
export CATALINA_HOME=/home/openwis/apache-tomcat-9.0.36
export CATALINA_PID=$CATALINA_HOME/openwis-tomcat.pid
# Check if Tomcat did not crash (with PID file)
if [ -e $CATALINA_PID ]
then
ps -p `cat $CATALINA_PID` &> /dev/null
if [ $? = 0 ]
then
echo "Tomcat still running"
exit 0
else
echo "Tomcat crashed, cleaning remaining PID file"
rm $CATALINA_PID
fi
fi
# Start Tomcat
cd $CATALINA_HOME/bin
./startup.sh
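A matching stop routine might look like the following. This is a sketch only, assuming the same CATALINA_HOME/CATALINA_PID conventions as the start script above; the real stop_openwis_tomcat.sh shipped in openwis-dependencies may differ.

```shell
# Sketch of a stop routine mirroring the start script above.
# CATALINA_HOME/CATALINA_PID fall back to the paths used in this guide.
stop_openwis_tomcat() {
    CATALINA_HOME=${CATALINA_HOME:-/home/openwis/apache-tomcat-9.0.36}
    CATALINA_PID=${CATALINA_PID:-$CATALINA_HOME/openwis-tomcat.pid}

    if [ ! -e "$CATALINA_PID" ]; then
        echo "Tomcat is not running (no PID file)"
        return 0
    fi

    # Ask Tomcat to shut down, then clean up the PID file.
    "$CATALINA_HOME/bin/shutdown.sh"
    rm -f "$CATALINA_PID"
}
```

Keeping the PID-file handling symmetric with the start script avoids the "Tomcat crashed, cleaning remaining PID file" branch on the next start.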
- Memory adjustment
By default, Tomcat is configured to use a maximum of 512 Mbytes of memory for heap space.
If the host allows more memory allocation, it is recommended to increase this maximum value.
Edit start_openwis_tomcat.sh:
export CATALINA_OPTS="-Xmx512m -XX:MaxPermSize=256m"
The following parameter may be adjusted: -Xmx sets the maximum heap size (‘m’ means Mbytes).
- Modify the script "tail_tomcat_log.sh" with the correct Tomcat version:
tail -f ~/apache-tomcat-9.0.36/logs/catalina.out
- Stop/start Tomcat:
As openwis
~/start_openwis_tomcat.sh
~/stop_openwis_tomcat.sh
- To verify the Tomcat state, check the catalina.out file:
less /home/openwis/apache-tomcat-9.0.36/logs/catalina.out
or:
~/tail_tomcat_log.sh
The Tomcat application server may be added as a system service.
To add Tomcat as a service to start at boot, edit as root:
cd /etc/systemd/system/
vim tomcat.service
Create the 'tomcat.service' file with the following lines:
[Unit]
Description=Apache Tomcat Web Application Server
After=syslog.target network.target
[Service]
Type=forking
User=openwis
Environment="JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk/"
Environment="CATALINA_HOME=/home/openwis/apache-tomcat-9.0.36"
Environment="CATALINA_PID=/home/openwis/apache-tomcat-9.0.36/openwis-tomcat.pid"
Environment="CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC"
ExecStart=/home/openwis/apache-tomcat-9.0.36/bin/startup.sh
ExecStop=/home/openwis/apache-tomcat-9.0.36/bin/shutdown.sh
[Install]
WantedBy=multi-user.target
*CATALINA_HOME, JAVA_HOME and the process name may differ depending on the server's purpose.
Then:
systemctl daemon-reload
- Tomcat stop/start as service
As root
systemctl start tomcat
systemctl stop tomcat
- Enable the service to be automatically started at boot:
systemctl enable tomcat
If more than one instance of Tomcat must be installed on the same machine:
whenever multiple Tomcats are added as services, the user, process name and paths must all differ.
Follow section 2.7 and set a different $CATALINA_HOME (and $JAVA_HOME if needed),
then copy the scripts and rename them accordingly.
Ex: start_openwis_portals.sh
Edit the $CATALINA_HOME value to the path of the new Tomcat.
- Locate server.xml in {Tomcat home}/conf/
- Find a Connector element similar to the following:
<Connector port="8080" maxHttpHeaderSize="8192" maxThreads="150" minSpareThreads="25" maxSpareThreads="75" enableLookups="false" redirectPort="8443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true"/>
- By default, Tomcat's server.xml configures its connectors on port 8080. Change each Connector port="8080" to another unused port number.
- Save the server.xml file and restart Tomcat.
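The port change can also be applied with sed. This is a sketch; `change_connector_port` is a hypothetical helper, and server.xml should be backed up before editing it.

```shell
# Hypothetical helper: change every Connector port="OLD" to port="NEW"
# in the given server.xml. Operates in place, so work on a backup copy.
change_connector_port() {
    file=$1
    old=$2
    new=$3
    sed -i "s/port=\"$old\"/port=\"$new\"/g" "$file"
}

# Usage (8081 is an example replacement port):
#   cp "$CATALINA_HOME/conf/server.xml" "$CATALINA_HOME/conf/server.xml.bak"
#   change_connector_port "$CATALINA_HOME/conf/server.xml" 8080 8081
```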
If a Tomcat instance must run under a different Java than the default, edit its start/stop scripts to use the other Java home.
As openwis user:
Download the version of WildFly 15.0.1 from the JBoss website:
cd ~
wget https://download.jboss.org/wildfly/15.0.1.Final/wildfly-15.0.1.Final.tar.gz
Unzip the file in the openwis home directory:
tar -xzf wildfly-15.0.1.Final.tar.gz
Edit ~/.bash_profile and add the following lines:
export JBOSS_HOME=/home/openwis/wildfly-15.0.1.Final
Save and execute
source ~/.bash_profile
Create link to JBOSS_HOME:
ln -s /home/openwis/wildfly-15.0.1.Final wildfly
JBoss is configured to use a maximum of 512 Mbytes of heap space by default. If the host allows more memory allocation, it is recommended to increase this maximum value.
The following parameters may be adjusted:
- -Xmx represents the maximum of memory usage (‘m’ means Mbytes)
Edit $JBOSS_HOME/bin/standalone.conf to adjust $JAVA_OPTS values:
JAVA_OPTS="-Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m"
Example values:
JAVA_OPTS="-Xms1024m -Xmx2048m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=768m"
*-XX:MaxMetaspaceSize should be increased in case of "ERROR: java.lang.OutOfMemoryError: Metadata space"
The time zone used by the JVM needs to be explicitly set to UTC. Also add the following line at the end of the file:
JAVA_OPTS="$JAVA_OPTS -Duser.timezone=UTC"
The server logs will be written to $JBOSS_HOME/standalone/log/ . If you want the logs written to another location, you can replace the empty log directory with a symbolic link to the desired log directory (with ln -s, the target directory comes first and the link name second):
ln -s <target log dir> $JBOSS_HOME/standalone/log
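The relocation can be sketched as a small helper. Note that with ln -s the target directory comes first and the link name second; `relocate_wildfly_logs` is a hypothetical name.

```shell
# Sketch: replace WildFly's standalone/log directory with a symlink
# pointing at another log location.
relocate_wildfly_logs() {
    jboss_home=$1
    target=$2
    mkdir -p "$target"
    rm -rf "$jboss_home/standalone/log"   # remove the (empty) original dir
    ln -s "$target" "$jboss_home/standalone/log"
}
```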
As the openwis user, copy the WildFly stop/start scripts to the user home:
cd ~
cp ~/openwis-dependencies/data-management-services/openwis-dataservice-config/*.sh ~
chmod a+x *.sh
The following WildFly configuration values are set in the start script and can be changed if necessary: open start_openwis_jboss.sh in a text editor and change the appropriate value.
- bindingAddress: the address WildFly will be bound to. Default = "0.0.0.0"
- multicastAddress: the multicast address used for communication between cluster members. All cluster members must have the same multicast address. Default = "239.255.100.100"
The provided scripts are used by the openwis user to start and stop WildFly:
As openwis
./start_openwis_jboss.sh
./stop_openwis_jboss.sh
To verify that WildFly is running, check the server.log file:
tail -f $JBOSS_HOME/standalone/log/server.log
The WildFly application server may be added as a systemd service.
- To add Wildfly as a service, there are 3 files to create/configure.
As root:
wildfly.conf
mkdir -p /etc/wildfly
vim /etc/wildfly/wildfly.conf
Add:
# The configuration you want to run
WILDFLY_CONFIG=standalone-full.xml
# The mode you want to run
WILDFLY_MODE=standalone
# The address to bind to
WILDFLY_BIND=0.0.0.0
launch.sh
vim /home/openwis/wildfly-15.0.1.Final/bin/launch.sh
Add:
#!/bin/bash
echo "Start Wildfly"
echo "Configuration file: $2"
echo "Bind address: $3"
CONF_FILE="$2"
if [ "x$CONF_FILE" = "x" ]; then
CONF_FILE="standalone-full.xml"
fi
if [ "x$OPENWIS_HOME" = "x" ]; then
OPENWIS_HOME="/home/openwis"
fi
if [ "x$WILDFLY_HOME" = "x" ]; then
WILDFLY_HOME="$OPENWIS_HOME/wildfly"
fi
if [[ "$1" == "domain" ]]; then
$WILDFLY_HOME/bin/domain.sh -c $2 -b $3
else
$WILDFLY_HOME/bin/standalone.sh -c $CONF_FILE -b $3
fi
Make the file executable:
chmod a+x /home/openwis/wildfly-15.0.1.Final/bin/launch.sh
wildfly.service
vim /etc/systemd/system/wildfly.service
Add:
[Unit]
Description=The WildFly Application Server
After=syslog.target network.target
[Service]
Environment=LAUNCH_JBOSS_IN_BACKGROUND=1
EnvironmentFile=-/etc/wildfly/wildfly.conf
User=openwis
LimitNOFILE=102642
PIDFile=/var/run/wildfly/wildfly.pid
ExecStart=/home/openwis/wildfly/bin/launch.sh $WILDFLY_MODE $WILDFLY_CONFIG $WILDFLY_BIND
StandardOutput=journal
[Install]
WantedBy=multi-user.target
Notify systemd for the new unit file:
sudo systemctl daemon-reload
Start the wildfly service:
systemctl start wildfly
Check the service status:
systemctl status wildfly
Status output:
[root@linux]# systemctl status wildfly
● wildfly.service - The WildFly Application Server
Loaded: loaded (/etc/systemd/system/wildfly.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2020-06-11 20:01:18 CEST; 18min ago
Main PID: 4124 (launch.sh)
CGroup: /system.slice/wildfly.service
├─4124 /bin/bash /home/openwis/wildfly/bin/launch.sh standalone standalone-full.xml 0.0.0.0
├─4125 /bin/sh /home/openwis/wildfly/bin/standalone.sh -c standalone-full.xml -b 0.0.0.0
└─4182 java -D[Standalone] -server -Xms1024m -Xmx2048m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=768m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.a...
server.log output:
...
INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 15.0.1.Final (WildFly Core 7.0.0.Final) started in 16621ms - Started 2536 of 2725 services (403 services are lazy, passive or on-demand)
Stop the wildfly service:
systemctl stop wildfly
- Enable the service to be automatically started at boot:
systemctl enable wildfly
Check wildfly systemd log:
journalctl -u wildfly
Or, in order to see the last info:
journalctl -u wildfly --since "5 minutes ago"
To check how much disk space is currently taken up by the journal, use
journalctl --disk-usage
Output:
Archived and active journals take up 104.0M on disk.
To delete archived journal entries manually, use either the --vacuum-size or the --vacuum-time option.
In the example below, archived journal files are deleted, so the journal size shrinks from 104 to 72 MB (depending on the size and number of journal files in "/run/log/journal/").
journalctl --vacuum-size=50M
Output:
Deleted archived journal /run/log/journal/5d2575b95e0c432dac2a14103fc1ca5f/system@23ab0e7cb99f49c28f2714139d8bdb48-0000000000000001-0005a7ce8ad18545.journal (32.0M).
Vacuuming done, freed 32.0M of archived journals on disk.
To delete the archived journals (the only active journal is kept):
journalctl -m --vacuum-time=1s
Output:
Deleted archived journal /run/log/journal/5d2575b95e0c432dac2a14103fc1ca5f/system@23ab0e7cb99f49c28f2714139d8bdb48-000000000000784f-0005a888c3f15e2a.journal (24.0M).
Deleted archived journal /run/log/journal/5d2575b95e0c432dac2a14103fc1ca5f/system@23ab0e7cb99f49c28f2714139d8bdb48-000000000000de9b-0005a922dbc5631e.journal (24.0M).
Vacuuming done, freed 48.0M of archived journals on disk.
Check the size of the active journal:
journalctl --disk-usage
Output:
Archived and active journals take up 24.0M on disk.
Optional: redirect output of systemd service to a file
Replace in /etc/systemd/system/wildfly.service
StandardOutput=journal
with the 3 following lines:
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=wildfly
Then:
sudo systemctl daemon-reload
Create the file /etc/rsyslog.d/wildfly.conf:
vim /etc/rsyslog.d/wildfly.conf
Add:
if $programname == 'wildfly' then /var/log/wildfly/wildfly.log
& stop
Then:
systemctl restart rsyslog
Then, a copy of the wildfly log is located in /var/log/wildfly/ :
ll /var/log/wildfly/
total 224
-rw-r--r-- 1 root root 11190 Apr 9 10:50 console.log
-rw------- 1 root root 216544 Jun 11 20:26 wildfly.log
⚠️ Notice: /var/log/wildfly/wildfly.log is another log file for WildFly. The data and management service log file is heavy, especially in the GISC case, so it is necessary to carefully monitor the evolution of the size of the different file systems and to adjust crontab if necessary.
Data service deployment is described in chapter 5.
The Security Service deployment is composed of:
- OpenDJ (LDAP server)
- OpenAM
- OpenWIS Security Services (User & Group management). OpenJDK 7 is required.
Note: this has been developed and tested using OpenJDK version "1.8.0_252"
The following part involves a significant amount of configuration.
- Tomcat logs (OpenDJ, OpenAM, OpenWIS Security Services)
- JBoss logs (openwis-dataservice, openwis-management-service)
- OpenDJ logs located at:
/home/openwis/opendj/logs
- Openam logs located at:
/home/openwis/openam/log
or /home/openwis/openam/openam/debug
The following sections describe how to install OpenDJ 3.0.0.
Download OpenDJ-3.0.0.zip:
cd ~
wget https://github.com/OpenRock/OpenDJ/releases/download/3.0.0/OpenDJ-3.0.0.zip
As openwis user, unzip this Archive in /home/openwis directory.
unzip OpenDJ-3.0.0.zip
Run OpenDJ setup:
cd opendj
for a GUI installation
./setup
for a command line installation
./setup --cli
The following steps depend on how the installation is done: with graphical capabilities (GUI) or via the command line.
The splash screen is displayed.
The Welcome page is displayed, click Next
Enter the hostname, the ldap listener port, the administration connector port and login/password for root User DN
Select Standalone server (default)
Set Directory Base DN to dc=opensso,dc=java,dc=net and select in Directory Data menu, "Only create Base Entry"
Leave default runtime settings, click Next.
Click on Finish
Installation is processing
Installation completed.
⚠️ With Java version 1.7.0_191, 1.8.0_181 or later (including Oracle® JDK and OpenJDK), the OpenDJ status and control-panel commands fail to launch.
Apply the note "OpenDJ 3.x Java upgrade causes certificate exceptions with control-panel/dsreplication/status commands" as a workaround:
vim ~/opendj/config/java.properties
add "-Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true" at the end of the 3 following lines:
...
control-panel.java-args=-Xms64m -Xmx128m -client -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true
...
dsreplication.java-args=-Xms8m -client -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true
...
status.java-args=-Xms8m -client -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true
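The three edits can be scripted. This is a sketch; `add_endpoint_flag` is a hypothetical helper, shown taking the properties file as an argument so it can be tested on a copy of ~/opendj/config/java.properties first.

```shell
# Hypothetical helper: append the disableEndpointIdentification flag to
# the three java-args lines of opendj/config/java.properties, skipping
# any line that already carries it.
add_endpoint_flag() {
    props=$1
    flag="-Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true"
    for key in control-panel.java-args dsreplication.java-args status.java-args; do
        grep -q "^$key=.*disableEndpointIdentification" "$props" || \
            sed -i "s|^$key=.*|& $flag|" "$props"
    done
}
```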
Apply the modification:
cd ~/opendj/bin
./dsjavaproperties
Output:
The operation was successful. The server commands will use the java arguments
and java home specified in the properties file located in
/home/openwis/opendj/config/java.properties
Then start opendj server:
./start-ds
See 3.1.3 section for more information about OpenDJ stop/start
OpenDJ 3.0.0 is now installed and ready for OpenAM installation.
Please take note that the installation script has some problems with:
- copy-pasting
- using backspace
- using space
# OpenDJ installation
cd /home/openwis/
wget https://github.com/OpenRock/OpenDJ/releases/download/3.0.0/OpenDJ-3.0.0.zip
unzip -qq OpenDJ-3.0.0.zip
cd ~/opendj
export PORT=1389
export ADMIN_PORT=4444
export LDAPS_PORT=1689
export BASE_DN="dc=opensso,dc=java,dc=net"
export ROOT_PASSWORD="password"
export ADD_BASE_ENTRY="--addBaseEntry"
cd /home/openwis/opendj
./setup --cli -p $PORT --ldapsPort $LDAPS_PORT --adminConnectorPort $ADMIN_PORT --enableStartTLS --generateSelfSignedCertificate --baseDN $BASE_DN -h localhost --rootUserPassword "$ROOT_PASSWORD" --acceptLicense --no-prompt $ADD_BASE_ENTRY
bin/start-ds
bin/status -D "cn=Directory Manager" -w "$ROOT_PASSWORD"
⚠️ With Java version 1.7.0_191, 1.8.0_181 or later (including Oracle® JDK and OpenJDK), the OpenDJ status and control-panel commands fail to launch.
Apply the note "OpenDJ 3.x Java upgrade causes certificate exceptions with control-panel/dsreplication/status commands" as a workaround:
vim ~/opendj/config/java.properties
add "-Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true" at the end of the 3 following lines:
...
control-panel.java-args=-Xms64m -Xmx128m -client -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true
...
dsreplication.java-args=-Xms8m -client -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true
...
status.java-args=-Xms8m -client -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true
Apply the modification:
cd ~/opendj/bin
./dsjavaproperties
Output:
The operation was successful. The server commands will use the java arguments
and java home specified in the properties file located in
/home/openwis/opendj/config/java.properties
Then stop and start opendj server:
./stop-ds && ./start-ds
Run again OpenDJ status:
./status -D "cn=Directory Manager" -w <ROOT_PASSWORD>
See /tmp/opendj-setup-XXXXX.log for a detailed log of this operation.
To start OpenDJ:
cd opendj/bin
./start-ds
or
~/opendj/bin/start-ds
To stop OpenDJ:
cd opendj/bin
./stop-ds
or
~/opendj/bin/stop-ds
The OpenDJ server may be added as a system service.
To add Opendj as a service to start at boot, edit as root:
cd /etc/systemd/system/
vim opendj.service
Create the 'opendj.service' file like the following example:
[Unit]
SourcePath=/home/openwis/opendj/bin
Description=OpenDJ Server (systemd init)
Before=runlevel2.target runlevel3.target runlevel4.target runlevel5.target shutdown.target display-manager.service
After=
Conflicts=shutdown.target
[Service]
Type=simple
User=openwis
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
GuessMainPID=no
RemainAfterExit=yes
ExecStart=/home/openwis/opendj/bin/start-ds
ExecStop=/home/openwis/opendj/bin/stop-ds
[Install]
WantedBy=multi-user.target
*SourcePath and the process name may differ depending on installation choices.
Then:
systemctl daemon-reload
- opendj stop/start as service
As root
systemctl stop opendj
systemctl start opendj
- Enable the service to be automatically started at boot:
As root
systemctl enable opendj
via GUI console:
~/opendj/bin/control-panel&
via command line:
~/opendj/bin/status
Warning: in the status results, the Data Sources "Entries" count must not be 0.
--- Data Sources ---
Base DN: dc=opensso,dc=java,dc=net
Backend ID: userRoot
Entries: 0
Replication:
If it is 0, create a new file named make-basedn.ldif:
Add:
dn: dc=opensso,dc=java,dc=net
objectClass: domain
objectClass: top
dc: opensso
Then the Data Source entry must be added manually; execute:
./ldapmodify -p 1389 -D "cn=Directory Manager" -w openwis -f make-basedn.ldif -a
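The LDIF file can also be generated with a heredoc (a sketch producing the same content as shown above; `write_basedn_ldif` is a hypothetical name):

```shell
# Write make-basedn.ldif with the base entry for the opensso suffix.
write_basedn_ldif() {
    cat > "$1" <<'EOF'
dn: dc=opensso,dc=java,dc=net
objectClass: domain
objectClass: top
dc: opensso
EOF
}

# Usage:
#   write_basedn_ldif make-basedn.ldif
```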
OpenDJ status results at the end of OpenWIS installation
Import SchemaOpenWIS.ldif
As user:
cd /home/openwis/opendj/bin
./ldapmodify -h localhost -p 4444 -D "cn=Directory Manager" -w <LDAP_PASSWORD> -X --useSSL -f /home/openwis/openwis-dependencies/security/SchemaOpenWIS.ldif
Output:
Note: If replication is active the schema must be replicated on each instance (to have all the attributes of OPENWIS)
The following sections describe how to install OpenAM release 13.0.0.
For more information, refer to the ForgeRock documentation:
https://backstage.forgerock.com/docs/openam/13/getting-started/
https://backstage.forgerock.com/docs/openam/13/install-guide/
https://backstage.forgerock.com/docs/openam/13/OpenAM-13-Install-Guide.pdf
OpenAM-13.0.0 is deployed in Tomcat 7. Refer to section 2.7 for Tomcat installation details.
Configure Tomcat 7 for your environment:
- Start/Stop scripts to adapt
- Include UTF-8 configuration
- Include VM memory settings
- Tomcat failover consideration:
If failover is configured for a component deployed in Tomcat 7 (such as OpenAM), the component is deployed on two Tomcat instances that run simultaneously.
Web access is then done via a load balancer, such as an Apache front-end with the mod_proxy module (described in a dedicated section).
To enable session affinity on the load balancer, the Tomcat instances need to be configured:
vi /home/openwis/apache-tomcat-7.0.59/conf/server.xml
Set the jvmRoute attribute on Engine element:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
Download OpenAM war from the middleware repository: https://github.com/OpenIdentityPlatform/OpenAM/releases
For the OpenAM versions < 14.0, Maven compilation has to be performed.
Deploy openam.war to the Tomcat container:
mkdir /home/openwis/apache-tomcat-7.0.59/webapps/openam
cp openam.war /home/openwis/apache-tomcat-7.0.59/webapps/openam/
cd /home/openwis/apache-tomcat-7.0.59/webapps/openam
jar xvf openam.war
Note: a specific JSP (spGetToken.jsp), used by the OpenWIS portal to retrieve tokens, is not included in openam.war. Create it with the following content:
<%@ page import="javax.servlet.ServletException" %>
<%@ page import="javax.servlet.http.Cookie" %>
<%--
spGetTokenId.jsp
Patch for get the token.
- receives the get token request.
- returns the token which is stored in the cookie named "iPlanetDirectoryPro".
--%>
<%
String token = "";
if (request.getCookies() != null) {
for (Cookie cookie : request.getCookies()) {
if ("iPlanetDirectoryPro".equals(cookie.getName())) {
token = cookie.getValue();
}
}
}
response.sendRedirect(request.getParameter("spTokenAddress") + token);
%>
Copy the JSP file into webapps/openam:
cp spGetToken.jsp /home/openwis/apache-tomcat-7.0.59/webapps/openam/
Pre-requisite: Tomcat and OpenDJ are started.
Go on OpenAM Installation Page
http://<HOST_NAME>:<PORT>/openam/
- WARNING: do not use localhost or an IP address for the OpenSSO URL. You have to use the fully qualified hostname.
- WARNING: the system time of each entity must be synchronized with UTC time.
- WARNING: when configuring an installation with 2 OpenAM instances, each OpenAM instance must be installed with its own distinct hostname. The front-end is only set up during “site configuration”.
Using HTTPS: to set up OpenAM using HTTPS, configure the Apache front-end first, and perform the whole installation process via HTTPS instead of HTTP (IDP Discovery included).
Create a new custom configuration and accept the license.
Fill the password of the amAdmin user (OpenAM administrator)
Keep default values and press next
Leave default values.
Select OpenDJ as data store type -- WARNING: uncheck OpenDJ and check again OpenDJ (bug OpenAM)
Change only:
- The port, to 1389
- The root suffix, to: dc=opensso,dc=java,dc=net
- The password: fill in the password used during the OpenDJ installation.
Leave default values.
Set the default policy agent password (not used in OpenWIS).
Summary details before installation. If it's ok, "Create Configuration"
Configuration processing...
Configuration passed.
After installation is complete:
Connect to the OpenAM console with the user name amAdmin (or amadmin).
The password is the one set during the previous step (step 1).
*Login page timeout configuration
OpenAM defines a timeout on the login page itself (before the user is logged in); on timeout, OpenAM loses the context, preventing the redirect back to the portal after a successful login.
This is related to an OpenAM bug.
The bug should be fixed in later versions of OpenAM, but the fix does not apply to the current OpenAM version.
If needed, the following configuration increases the login page timeout to one week.
On OpenAM console:
- Go to Configuration / Servers and Sites
- Edit server configuration and go to Session Tab
- Click on Inheritance Settings, uncheck Invalidate Session Max Time, then Save and go Back to Server profile
- Set Invalidate Session Max Time to 10081 and save
After installation, go to:
cd ~/openam/openam/log/
and create the missing log file:
> amAuthentication.error
This file appears to be missing from the current OpenAM version and must be added manually.
Start the OpenDJ control panel to check that OpenAM and OpenDJ are linked. The following steps depend on whether the installation has graphical capabilities (GUI) or not (command line).
cd ~/opendj/bin
./control-panel&
Click on Directory Data / Manage Entries
Check the tree structure is equivalent to this screen.
cd ~/opendj/bin
Replace “HOST-NAME” with your server’s hostname and execute:
./ldapsearch --hostname HOST-NAME --port 1389 --baseDN "dc=opensso,dc=java,dc=net" "(objectclass=*)" @groupOfUniqueNames
Note: This section is not mandatory
As the openwis user:
cd ~/opendj/bin
Copy opendj_user_schema.ldif from the OpenAM deployment, e.g.:
cp ~/apache-tomcat-7.0.59/webapps/openam/WEB-INF/template/ldif/opendj/opendj_user_schema.ldif .
Then, after replacing HOSTNAME and PASSWORD, execute:
./ldapmodify -h HOSTNAME -p 4444 -D "cn=Directory Manager" -w PASSWORD -X --useSSL -f ./opendj_user_schema.ldif
Output:
Processing MODIFY request for cn=schema
MODIFY operation successful for DN cn=schema
As the openwis user, install the SSO Admin Tools as follows:
cd /home/openwis
mkdir ssoAdminTools
cd ssoAdminTools
Download the SSO Admin Tools from the middleware repository if available; otherwise, perform a Maven build.
Unzip the tools and run the setup:
unzip ssoAdminTools.zip
./setup
After license acceptance, you will be prompted for:
Do you accept the license?
Write yes
Path to config files of OpenAM server [/home/openwis/openam]:
Enter
Debug Directory [/home/openwis/ssoAdminTools/debug]:
Enter
Log Directory [/home/openwis/ssoAdminTools/log]:
Enter
The scripts are properly setup under directory: /home/openwis/ssoAdminTools/openam
Debug directory is /home/openwis/ssoAdminTools/debug.
Log directory is /home/openwis/ssoAdminTools/log.
The version of this tools.zip is: OpenAM 13.0.0
The version of your server instance is: OpenAM 13.0.0 Build 5d4589530d (2016-January-14 21:15)
cd /home/openwis/ssoAdminTools/openam/bin
Create a file named passwd containing the amAdmin user's password:
echo "YOUR_amAdmin_PASSWORD" > passwd
Restrict passwd so that it is readable only by its owner:
chmod 400 passwd
Reload iPlanetAMUserService. First, delete the service:
./ssoadm delete-svc -u amadmin -f passwd -s iPlanetAMUserService
Service was deleted.
Then, add it back again
./ssoadm create-svc -u amAdmin -f passwd --xmlfile /home/openwis/openwis-dependencies/security/amUser.xml
Output:
Service was added.
cd /home/openwis/ssoAdminTools/openam/bin
cp /home/openwis/openwis-dependencies/security/attrs.properties .
Edit attrs.properties and set the values:
- idrepo-ldapv3-config-ldap-server=YOUR_HOSTNAME:1389
- sun-idrepo-ldapv3-config-authpw to the LDAP password for cn=Directory Manager
Remove deprecated entry:
- sun-idrepo-ldapv3-config-ssl-enabled
Next update the data store configuration:
./ssoadm update-datastore -u amadmin -f passwd -e / -m OpenDJ -D attrs.properties
Datastore profile was updated.
Check OpenAM / Data Store connection
In OpenAM:
Go to Access Control, select the / Realm
Go to Data Stores, select OpenDJ
The “LDAP User Attributes” section must contain the OpenWIS attributes (e.g. OpenWISNeedUserAccount).
The “Attribute Name for Group Membership” field must contain isMemberOf.
Select “Load schema when saved” and click Save.
If the message Information “Profile was updated” appears, the connection is OK.
Otherwise, check the configuration on this page (LDAP host, port and password)
The following sections describe how to define a Circle of Trust in OpenAM, which will include:
- The hosted Identity Provider (IdP): the local IdP provided by this OpenAM
- An optional remote IdP, when IdP federation is required
- The IdP Discovery service, which allows a portal to discover its IdP
- The Service Providers (SPs), which are the OpenWIS portals
Connect to OpenAM
http://<HOST_NAME>:<PORT>/openam
Click on “Create SAMLv2 Providers”.
Select “Create Hosted Identity Provider”.
Enter the IdP name: IDP1
Select “test” as the signing key.
Enter the Circle of Trust name, e.g. “cot_openwis”.
Click Configure.
Note: At this step, it’s possible to configure the attribute mapper from this page or later as described in section 6.3.4.
Then click on Finish
Check the result under Federation > Entity Providers.
Download idpdiscovery.war from the middleware repository and deploy it into Tomcat. As the openwis user:
cd ~
cp idpdiscovery.war /home/openwis/apache-tomcat-7.0.105/webapps/
Connect to the IdP Discovery configuration page:
http://<IDP_DISCOVERY_HOST_NAME>:<PORT>/idpdiscovery/Configurator.jsp
Set:
- Debug directory: /home/openwis/idpdiscovery
- Cookie type: Persistent
Then click Configure.
Go to OpenAM
http://<IDP_HOST-NAME>:<IDP_PORT>/openam
Click on “Federation” tab.
Edit Circle Of trust by clicking on its name ex. cot_openwis
Add the SAML2 reader and writer URLs:
SAML2 Writer Service URL: http://IDP_DISCOVERY_HOST_NAME:PORT/idpdiscovery/saml2writer
SAML2 Reader Service URL: http://IDP_DISCOVERY_HOST_NAME:PORT/idpdiscovery/saml2reader
Note: If an Apache frontend is used to access OpenAM / IdpDiscovery, set the external host name / port configured on the front-end for these services.
The OpenWIS Security Service WAR and its configuration file are deployed in the Tomcat instance hosting OpenAM. As the openwis user:
cd ~
mkdir /home/openwis/apache-tomcat-7.0.59/webapps/openwis-securityservice
cd /home/openwis/apache-tomcat-7.0.59/webapps/openwis-securityservice
cp /home/openwis/maven_projects/openwis/openwis-securityservice/openwis-securityservice-war/target/openwis-securityservice.war .
jar xvf openwis-securityservice.war
Edit the configuration file located at:
vim ./WEB-INF/classes/openwis-securityservice.properties
The following parameters need to be set; replace each '@LABEL@' placeholder with the value appropriate to your system:
- ldap_host: set the hostname where OpenDJ has been installed
- ldap_port: keep 1389
- ldap_user: keep cn=Directory Manager
- ldap_password: set the LDAP admin password configured during OpenDJ installation
- @managementServiceServer@: set the Management Service server and port number (the JBoss/Wildfly instance), e.g. http://openwis.server.com:8180/openwis-management-service-ejb/AlertService/AlertService?wsdl
- @openam.baseDir@: set the OpenAM home, e.g. /home/openwis/openam/openam/log/amAuthentication.error
The following optional parameters may also be set:
- register_users_threshold: number of registered users beyond which an alarm is raised
- global_groups: the default Global groups (comma separated list)
- log.timer.file: the OpenAM file to check for authentication failure
- openwis.management.alertservice.wsdl: the WSDL location of the alert service
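As an illustration, a filled-in fragment of openwis-securityservice.properties might look like the following (the hostname and password are placeholders, not values from a real deployment):

```
ldap_host=opendj.example.org
ldap_port=1389
ldap_user=cn=Directory Manager
ldap_password=<LDAP admin password>
openwis.management.alertservice.wsdl=http://openwis.server.com:8180/openwis-management-service-ejb/AlertService/AlertService?wsdl
```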
The OpenWIS Security Web Services will be accessible at:
http://<host>:<port>/openwis-securityservice/services/<SERVICE>?wsdl
For example:
http://your_host:8080/openwis-securityservice/services/UserManagementService?wsdl
When deploying a new Centre, the LDAP must be initialized. The provided OpenWIS distribution contains a tool called “PopulateLDAP”, which can be used to perform the LDAP initialization when defining a new Centre.
The zip file is located at:
~/maven_projects/openwis/openwis-securityservice/openwis-securityservice-utils/PopulateLDAP/target/PopulateLDAP.zip
As the openwis user, install PopulateLDAP (for example on the IdP host):
cd ~
mkdir PopulateLDAP
cd PopulateLDAP
cp ~/maven_projects/openwis/openwis-securityservice/openwis-securityservice-utils/PopulateLDAP/target/PopulateLDAP.zip .
unzip PopulateLDAP.zip
chmod a+x populateLDAP.sh
Edit ./populateLDAP.sh to match your system:
- Update CLASSPATH so that it references your current version of openwis-securityservice-utils-populate-ldap-@VERSION@.jar. The jar version can be found by executing:
find ~ -name 'openwis-securityservice-utils-populate-ldap*'
- Replace @OPENWIS_SECURITY_HOST@:@PORT@ with your current system settings.
Run the script as the openwis user:
./populateLDAP.sh
- Enter 1 to create a new Centre
- Enter the Centre name (the deployment name, e.g. GiscMF)
- Enter the administrator login, password, email, first name and last name
./populateLDAP.sh
------ Populate LDAP ------
[1] Initialize Deployment
[2] Populate LDAP
[3] Reset LDAP Users
[4] Reset LDAP Groups
[5] Reset LDAP Users And Groups
Choose one of the above values. (any other value to exit)
1
Initialize Deployment:
Deployment Name: openwis
Administrator Login: openwis
Administrator Password: <admin password>
Administrator Email: <Email>
Administrator First name: <admin first name>
Administrator Last name: <admin last name>
Continue with the portals' configuration in section 6.3.3.
PostgreSQL 10 (10.0 is the tested and supported version) is required, along with PostGIS 2.4 or 2.5 and the CIText extension. These are provided as RPM repositories by PostgreSQL. It is recommended to use these repositories, as they simplify installing and enabling the extensions required by OpenWIS.
For additional help, see postgresonline.com.
Select the repository appropriate to the database version and the OS distribution you are installing on. The applicable PostgreSQL 10 repository is listed below:
PostgreSQL Version | Operating System | URL |
---|---|---|
10.0 | RHEL 7+ | https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm |
URLs of other repositories can be found at: http://yum.postgresql.org/
Install the repository using rpm. Use the URL specific to your OS (e.g. the following is for RHEL7 x86_64):
As root:
rpm -ivh https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
Install the PostgreSQL server packages and the PostGIS packages:
As root:
yum install -y epel-release
yum install -y postgresql10-server postgis24_10 postgis24_10-client postgis24_10-utils
... Complete!
Additional packages (hdf5.x86_64 and json-c.x86_64 may already be installed at their latest versions):
yum install proj.x86_64
yum install gdal.x86_64
... Complete!
Optionally, install "pspg", a useful pager for psql:
yum install pspg
As root:
systemctl enable postgresql-10
The process of initializing the database depends on whether you are using a custom data directory. If you are using a custom directory, run the initdb script as the “postgres” user, passing the full path of the directory as the argument to the “-D” option:
As root:
su - postgres -c '/usr/pgsql-10/bin/initdb --data-checksums -D ~/10/data'
... Success.
After initdb, the data and log directories are:
PGDATA=/var/lib/pgsql/10/data
PGLOG=/var/lib/pgsql/10/data/log
systemctl start postgresql-10
Verify that PostgreSQL is up and running:
systemctl status postgresql-10
[root@centos76 ~]# systemctl status postgresql-10
● postgresql-10.service - PostgreSQL 10 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-10.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2020-05-05 12:01:43 CEST; 2 days ago
Docs: https://www.postgresql.org/docs/10/static/
Process: 13311 ExecStartPre=/usr/pgsql-10/bin/postgresql-10-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
Main PID: 13317 (postmaster)
CGroup: /system.slice/postgresql-10.service
├─13317 /usr/pgsql-10/bin/postmaster -D /var/lib/pgsql/10/data/
├─13319 postgres: logger process
├─13321 postgres: checkpointer process
├─13322 postgres: writer process
├─13323 postgres: wal writer process
├─13324 postgres: autovacuum launcher process
├─13325 postgres: stats collector process
├─13326 postgres: bgworker: logical replication launcher
├─19055 postgres: openwis OpenWIS 127.0.0.1(36236) idle
├─19064 postgres: openwis OpenWIS 127.0.0.1(36242) idle
└─19067 postgres: openwis OpenWIS 127.0.0.1(36248) idle
May 05 12:01:42 centos76 systemd[1]: Starting PostgreSQL 10 database server...
May 05 12:01:43 centos76 postmaster[13317]: 2020-05-05 12:01:43.444 CEST [13317] LOG: listening on IPv4 address "0.0.0.0", port 5432
May 05 12:01:43 centos76 postmaster[13317]: 2020-05-05 12:01:43.445 CEST [13317] LOG: listening on IPv6 address "::", port 5432
May 05 12:01:43 centos76 postmaster[13317]: 2020-05-05 12:01:43.456 CEST [13317] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
May 05 12:01:43 centos76 postmaster[13317]: 2020-05-05 12:01:43.461 CEST [13317] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
May 05 12:01:43 centos76 postmaster[13317]: 2020-05-05 12:01:43.688 CEST [13317] LOG: redirecting log output to logging collector process
May 05 12:01:43 centos76 postmaster[13317]: 2020-05-05 12:01:43.688 CEST [13317] HINT: Future log output will appear in directory "log".
May 05 12:01:43 centos76 systemd[1]: Started PostgreSQL 10 database server.
Note: **for the rest of the installation instructions, to log in as the postgres user**, log in as root and then execute:
su - postgres
As the “postgres” user
Create the "openwis" postgres user. The "openwis" user should not be a super user and should not need to create any separate databases or user roles. The "openwis" user, however, may need to create new tables or modify existing tables:
createuser -P openwis
Enter password for new role: <enter password>
Enter it again: <enter password again>
createdb -O openwis OpenWIS
$ psql -d OpenWIS -c 'create extension postgis;'
CREATE EXTENSION
$ psql -d OpenWIS -c 'create extension postgis_topology;'
CREATE EXTENSION
$ psql -d OpenWIS -c 'create extension citext;'
CREATE EXTENSION
cd /usr/pgsql-10/share/contrib/postgis-2.4/
psql -d OpenWIS -U postgres -f legacy.sql
CREATE FUNCTION
...
...
CREATE AGGREGATE
psql -d OpenWIS -U openwis
OpenWIS=> select postgis_full_version();
NOTICE: Topology support cannot be inspected. Is current user granted USAGE on schema "topology" ?
postgis_full_version
----------------------
POSTGIS="2.4.8 r17696" PGSQL="100" GEOS="3.8.1-CAPI-1.13.3" PROJ="Rel. 7.0.0, March 1st, 2020" GDAL="GDAL 3.0.4, released 2020/01/28" LIBXML="2.9.1" LIBJSON="0.11" LIBPROTOBUF="1.0.2" RASTER
(1 row)
OpenWIS=> select * from pg_available_extensions where name like '%postgis%';
postgis_sfcgal         | 2.4.8 |       | PostGIS SFCGAL functions
postgis                | 2.4.8 | 2.4.8 | PostGIS geometry, geography, and raster spatial types and functions
postgis_tiger_geocoder | 2.4.8 |       | PostGIS tiger geocoder and reverse geocoder
postgis_topology       | 2.4.8 | 2.4.8 | PostGIS topology spatial types and functions
(4 rows)
OpenWIS=> select name, installed_version as ver, comment from pg_available_extensions where installed_version is not null;
citext           | 1.4   | data type for case-insensitive character strings
postgis          | 2.4.8 | PostGIS geometry, geography, and raster spatial types and functions
postgis_topology | 2.4.8 | PostGIS topology spatial types and functions
plpgsql          | 1.0   | PL/pgSQL procedural language
(4 rows)
Quit the OpenWIS DB:
OpenWIS=> \q
With large catalogues and spatial searches, very large SQL queries can be generated on the PostGIS spatial index table. This can cause PostgreSQL to exceed the default process stack size limit. You will know when this happens because a very long SQL query will be output to the SolR log file prefixed with a cryptic message something along the lines of “java.util.NoSuchElementException: Could not acquire feature:org.geotools.data.DataSourceException: Error Performing SQL query: SELECT .........”. It is therefore necessary to raise the stack size limits of the PostgreSQL database:
Get the permitted stack size of the postgresql user. The result will be returned in kilobytes (1,024 bytes).
ulimit -s
10240 # KB = 10 MB
As root, open “/var/lib/pgsql/10/data/postgresql.conf” in a text editor and search for the key “max_stack_depth”. Uncomment the setting if necessary, and raise it to about 80% of the value reported by ulimit:
max_stack_depth = 8MB
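The 80% figure can be computed directly from the ulimit output; a small sketch using the 10240 KB value reported above:

```shell
# ulimit -s reports the stack limit in KB; max_stack_depth should be ~80% of it.
stack_kb=10240                                  # value reported by `ulimit -s`
suggested_mb=$(( stack_kb * 80 / 100 / 1024 ))  # 10240 KB -> 8 MB
echo "max_stack_depth = ${suggested_mb}MB"
```

On a live system, replace the hard-coded value with stack_kb=$(ulimit -s).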
As root, restart PostgreSQL for the changes to take effect.
systemctl restart postgresql-10
Verify as root :
systemctl status postgresql-10
If the error still appears these values need to be increased.
The OpenWIS distribution comes with two scripts: schema.ddl and purge.sql. The schema.ddl script is used to create the OpenWIS database tables used by the data services (the tables used by the user portals are created automatically on first start). Both of these scripts are located in “openwis-dependencies.zip”:
/home/openwis/openwis-dependencies/database/purge.sql
/home/openwis/openwis-dependencies/database/schema.ddl
As the “openwis” user:
- OpenWIS DB schema
The “schema.ddl” file from “openwis-dependencies/database” needs to be loaded into the OpenWIS database:
psql -d OpenWIS -f /home/openwis/openwis-dependencies/database/schema.ddl
- Purge script
A purge script, purge.sql, is provided in openwis-dependencies/database. It purges the blacklisting entries of users with no activity (request/subscription) for 30 days (the default value, which can be changed). This is needed in particular because the Data Service is not strictly tied to a local user referential and can manage requests/subscriptions of users belonging to a remote Centre (via Remote Authentication); the user lifecycle (creation/deletion) is therefore not correlated with the Data Service. To execute the purge:
psql -d OpenWIS -f /home/openwis/openwis-dependencies/database/purge.sql
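Because user creation/deletion is not pushed to the Data Service, the purge is typically run on a schedule. A hypothetical crontab entry (for a user allowed to connect to the database) running the purge daily at 02:30:

```
30 2 * * * psql -d OpenWIS -f /home/openwis/openwis-dependencies/database/purge.sql
```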
As the postgres user, locate pg_hba.conf:
- Connect to the postgres DB:
$ psql
psql (10.12)
Type "help" for help.
- Execute:
postgres=# show hba_file ;
hba_file
------------------------------------
/var/lib/pgsql/10/data/pg_hba.conf
(1 row)
- Exit the postgres console:
postgres=# \q
- Edit pg_hba.conf to allow access to these components according to the installation's needs:
vim /var/lib/pgsql/10/data/pg_hba.conf
The available authentication methods are:
trust Allow the connection unconditionally. This method allows anyone that can connect to the PostgreSQL database server to login as any PostgreSQL user they like, without the need for a password.
reject Reject the connection unconditionally. This is useful for "filtering out" certain hosts from a group.
md5 Require the client to supply an MD5-encrypted password for authentication.
password Require the client to supply an unencrypted password for authentication. Since the password is sent in clear text over the network, this should not be used on untrusted networks.
gss Use GSSAPI to authenticate the user. This is only available for TCP/IP connections.
sspi Use SSPI to authenticate the user. This is only available on Windows.
krb5 Use Kerberos V5 to authenticate the user. This is only available for TCP/IP connections.
ident Obtain the operating system user name of the client (for TCP/IP connections by contacting the ident server on the client, for local connections by getting it from the operating system) and check if it matches the requested database user name.
ldap Authenticate using an LDAP server.
cert Authenticate using SSL client certificates.
pam Authenticate using the Pluggable Authentication Modules (PAM) service provided by the operating system.
Peer Obtain the client's operating system user name from the operating system and check if it matches the requested database user name. This is only available for local connections.
- For more info about pg_hba.conf file, see : https://www.postgresql.org/docs/10/auth-pg-hba-conf.html
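For illustration only, entries allowing the “openwis” role to connect with an md5 password from localhost and from a hypothetical application host could look like:

```
# TYPE  DATABASE  USER     ADDRESS           METHOD
host    OpenWIS   openwis  127.0.0.1/32      md5
host    OpenWIS   openwis  192.168.1.10/32   md5
```

The address 192.168.1.10 is a placeholder; use the hosts where the portals and Wildfly actually run.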
- Edit postgresql.conf:
vim /var/lib/pgsql/10/data/postgresql.conf
Set listen_addresses to the host IP address (or ‘*’) if the database needs to be accessible from outside this host.
- Restart PostgreSQL:
systemctl restart postgresql-10
The Data Service and Management Service run in Wildfly 15.0.1.
Note: the OpenWIS application may also be installed in the /var/opt/openwis/ directory. In this wiki, the middleware and OpenWIS components are located in /home/openwis/.
As the openwis user:
- Create a directory for the Data Service properties files:
mkdir /home/openwis/conf
- Create the data directories:
mkdir /home/openwis/harness
mkdir /home/openwis/harness/incoming
mkdir /home/openwis/harness/ingesting
mkdir /home/openwis/harness/ingesting/fromReplication
mkdir /home/openwis/harness/outgoing
mkdir /home/openwis/harness/working
mkdir /home/openwis/harness/working/fromReplication
mkdir /home/openwis/cache
mkdir /home/openwis/stagingPost
mkdir /home/openwis/temp
mkdir /home/openwis/replication
mkdir /home/openwis/replication/sending
mkdir /home/openwis/replication/sending/local
mkdir /home/openwis/replication/sending/destinations
mkdir /home/openwis/replication/receiving
mkdir /home/openwis/status
These folders correspond to:
- Incoming: folder on which incoming GTS files are dropped and consumed by OpenWIS dispatch process
- Ingesting: folder containing files to ingest (after dispatch)
- Outgoing: folder on which OpenWIS feeds files to the GTS
- Working: internal working folder
- Cache: the cache root directory
- StagingPost : the staging post root directory
- Temp : the directory for temporarily stored files
- Replication: the directory for the replication folders, with the following sub-structure:
  - sending/local: contains a 24-hour buffer of data received from the GTS that needs to be sent to replication destinations
  - sending/destinations/<DESTINATION_NAME>: contains links to sending/local data currently being sent to the given destination
  - receiving/<SOURCE_NAME>: contains files received via replication from the given source
- Status: the directory containing status files (the status of each service)
Depending on the deployment, all these directories, and in particular the Cache and Staging Post directories, may correspond to mounted folders, targeting network shared locations (NFS, Samba…).
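The same layout can be created in one pass with mkdir -p, which creates parent directories automatically. In this sketch, BASE defaults to a scratch directory; set BASE=/home/openwis for a real deployment:

```shell
# Create the Data Service directory layout under $BASE.
BASE="${BASE:-./openwis-demo}"
for d in conf \
         harness/incoming harness/ingesting/fromReplication \
         harness/outgoing harness/working/fromReplication \
         cache stagingPost temp status \
         replication/sending/local replication/sending/destinations \
         replication/receiving; do
  mkdir -p "$BASE/$d"   # -p also creates harness/, replication/sending/, etc.
done
```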
The components and config files are available for deployment after the Maven compilation described in section 2.4.3.
- Copy the property files into /home/openwis/conf/:
cp ~/openwis-dependencies/data-management-services/openwis-dataservice-config/config/localdatasourceservice.properties ~/conf/
cp ~/openwis-dependencies/data-management-services/openwis-dataservice-config/config/openwis-dataservice.properties ~/conf/
Make appropriate changes to the configuration files located in ~/conf. The following files can be configured:
localdatasourceservice.properties:
For a GISC deployment, leave the values commented
For a DCPC, configure the LocalDataSource.
- Identification of the local data source:
  - Key: logical name of the data source
  - Value: URL of the WSDL corresponding to the Local Data Source Web Service
- Polling mechanism (to detect availability of new products):
  - Key: <logical name of the data source>.polling
  - Value: true or false to enable/disable the polling of the data source
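A hypothetical DCPC entry (the data source name, host, port and WSDL path below are examples, not values from a real deployment) would look like:

```
# Logical data source name -> WSDL of its Local Data Source Web Service
MyDataSource=http://dcpc.example.org:8080/localdatasource/services/LocalDataSourceService?wsdl
# Enable polling for new products on this data source
MyDataSource.polling=true
```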
openwis-dataservice.properties:
- For both GISC and DCPC:
Adapt the file "openwis-dataservice.properties" to your deployment.
This configuration section contains many parameters that can be configured. Placeholders surrounded with “@” will need to be replaced.
Most typical deployments only need to configure the basic parameters, shown in the table below.
Note: EJB JNDI names in "openwis-dataservice.properties"
In each line containing an "ejb:" JNDI name, replace the "ejb:" prefix with "java:global/", as in the following example:
extraction.timer.url=ejb:openwis-dataservice/openwis-dataservice-server-ejb/ExtractionTimerService!org.openwis.dataservice.common.timer.ExtractionTimerService
Becomes
extraction.timer.url=java:global/openwis-dataservice/openwis-dataservice-server-ejb/ExtractionTimerService!org.openwis.dataservice.common.timer.ExtractionTimerService
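This substitution can also be done mechanically with sed; a sketch demonstrated on the sample line above (back up the properties file before applying it for real):

```shell
# Rewrite the "ejb:" JNDI prefix to "java:global/" on a sample property line.
line='extraction.timer.url=ejb:openwis-dataservice/openwis-dataservice-server-ejb/ExtractionTimerService!org.openwis.dataservice.common.timer.ExtractionTimerService'
echo "$line" | sed 's|=ejb:|=java:global/|'
# To apply the same rewrite to the real file (creates a .bak backup):
#   sed -i.bak 's|=ejb:|=java:global/|g' ~/conf/openwis-dataservice.properties
```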
The following table describes the parameters
Data Service Directories Locations | description |
---|---|
dataService.baseLocation | Base location of the data services data directory (e.g. “/home/openwis”). |
cache.dir.stagingPost | The path to the staging post directory. (same as ‘staging.post.uri’ value) |
cache.dir.temp | The path to the temporary directory |
cache.dir.cache | The path to the cache root directory |
cache.dir.harness.incoming | The path to the incoming directory (folder in which GTS flow is dropped) |
cache.dir.harness.ingesting | The path to the folder containing files to ingest (after dispatch) |
cache.dir.harness.working | The path to the working directory (batch files being ingested are split in this folder, after ingesting) |
cache.dir.harness.outgoing | The path to the outgoing directory for feeding |
dataservice.service.status.folder | The path to the folder containing status files for each sub-services. |
cache.replication.config.folder | The path to the folder containing replication sub-folders (sending/receiving) |
FTP Replication Settings | |
cache.replication.config.fromReplication.folder | Name of sub-folder of ingesting and working folders where files are dropped when received from replication |
Cache Feed/Ingest Settings | |
cache.config.numberOfChecksumBytes | The number of bytes of a file on which the checksum will be calculated |
cache.config.location.sendingCentre | The 4-letter-location-code of the sending (the local) centre (used as a prefix of outgoing WMO FTP files). |
Dissemination Configuration | |
staging.post.uri | The path to the staging post directory. |
cache.dissemination.stagingPost.purgeTime | [minutes] The entries in the “StagingPostEntry” database table and the files on the staging post, referenced in those entries, which are older than this value will be deleted. |
cache.config.stagingPostMaximumSize | The maximum number of bytes in the staging post. |
cache.config.cacheMaximumSize | The maximum number of bytes in the cache. |
cache.dissemination.disseminationHarness.public.url | The URL of the public dissemination harness. |
cache.dissemination.disseminationHarness.rmdcn.url | The URL of the RMDCN dissemination harness. |
cache.dissemination.threshold.mail | The maximum number of bytes to send via email |
cache.dissemination.threshold.ftp | The maximum number of bytes to send via ftp |
mail.from | Sender of the blacklisting emailing |
mail.smtp.host | SMTP host for the blacklisting emailing |
mail.smtp.port | SMTP port for the blacklisting emailing |
blacklist.default.nb.warn | Default blacklisting warning threshold regarding the number of files |
blacklist.default.nb.blacklist | Default blacklisting threshold regarding the number of files |
blacklist.default.vol.warn | Default blacklisting warning threshold regarding the volume |
blacklist.default.vol.blacklist | Default blacklisting threshold for the volume |
Management WebService URLs | |
openwis.management.alertservice.wsdl | The WSDL URL of the alert service. (update Host/Port with the location of the Management service) |
openwis.management.controlservice.wsdl | The WSDL URL of the control service. (update Host/Port with the location of the Management service) |
openwis.management.controlservice.defaultFeedingFilterLocation | The full path and name (e.g. ‘defaultFeedingFilters.config’) of a file containing the default feeding filters. The content of this file must be one regular expression per line, each line thus defines a filter. |
openwis.management.disseminateddatastatistics.wsdl | The WSDL URL of the disseminated data statistics. (update Host/Port with the location of the Management service) |
openwis.management.exchangeddatastatistics.wsdl | The WSDL URL of the exchanged data statistics. (update Host/Port with the location of the Management service) |
openwis.management.replicateddatastatistics.wsdl | The WSDL URL of the replicated data statistics. (update Host/Port with the location of the Management service) |
openwis.management.ingesteddatastatistics.wsdl | The WSDL URL of the ingested data statistics. (update Host/Port with the location of the Management service) |
Other Settings | |
cache.cacheManager.housekeepingTimer.expirationWindow | [days] The number of days the cache content will be valid before it gets removed |
cache.cacheManager.purgingTimer.expirationWindow | [ms] The number of milliseconds a file will be kept in temporary directory. |
Table 1: Data Service configuration properties
Note: advanced parameters should not be modified in a typical deployment.
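The file referenced by openwis.management.controlservice.defaultFeedingFilterLocation holds one regular expression per line, each line defining one filter. A hypothetical example (the patterns below are placeholders, not a recommended filter set):

```
.*GRIB.*
^SM.*
```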
- Download the JDBC driver for PostgreSQL 10 and Java 1.8:
wget https://jdbc.postgresql.org/download/postgresql-42.2.6.jar
- Check that JBOSS_HOME is properly configured:
$ echo $JBOSS_HOME
/home/openwis/wildfly-15.0.1.Final
It is convenient to install the OpenWIS Data and Management Services using jboss-cli.
Assuming that OpenWIS is built as described in section 2.6 and that JBOSS_HOME is correctly configured, execute a script with the following commands:
Note:
Wildfly must be started
As openwis user:
#!/bin/bash
#
echo "---Deploy Openwis Data and Management Services via CLI Tools---"
echo ""
echo "-- JBOSS_HOME setting check";
if [ "x$JBOSS_HOME" = "x" ]; then
echo "JBOSS_HOME undefined -> exit !";
exit;
else
echo "JBOSS_HOME ok : $JBOSS_HOME";
fi
echo "--Set the HTTP port to 8180"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="/socket-binding-group="standard-sockets"/socket-binding="http":write-attribute(name="port",value=8180)"
sleep 5
echo "--Set the Management Inet Listen Address"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="/interface=management:write-attribute(name=inet-address,value=127.0.0.1)"
echo "--Set the Public Inet Listen Address"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="/interface=public:write-attribute(name=inet-address,value=0.0.0.0)"
echo "--Set the WSDL Inet Listen Address"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="/subsystem=webservices:write-attribute(name="wsdl-host",value=127.0.0.1)"
echo "--Set the WSDL port to 8180"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="/subsystem=webservices:write-attribute(name="wsdl-port",value=8180)"
sleep 5
echo "--Configure Deployment Scanner scan-interval"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="/subsystem=deployment-scanner/scanner=default:write-attribute(name=scan-interval,value=500)"
echo "--Configure Deployment Scanner auto-deploy true"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="/subsystem=deployment-scanner/scanner=default:write-attribute(name=auto-deploy-exploded,value="true")"
echo "--Reload WildFly..."
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 --command=":reload"
sleep 5
echo "--Setup logging"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/periodic-rotating-file-handler="CollectionHandler":add(level=INFO, file={"relative-to"=>"jboss.server.log.dir", "path"=>"collection.log"}, formatter="%d %-5p [%c] %m%n", append=true, autoflush=true, suffix="yyyy-MM-dd")'
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/periodic-rotating-file-handler="RequestHandler":add(level=INFO, file={"relative-to"=>"jboss.server.log.dir", "path"=>"request.log"}, formatter="%d %-5p [%c] %m%n", append=true, autoflush=true, suffix="yyyy-MM-dd")'
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/periodic-rotating-file-handler="AlertsHandler":add(level=INFO, file={"relative-to"=>"jboss.server.log.dir", "path"=>"alerts.log"}, formatter="%d %-5p [%c] %m%n", append=true, autoflush=true, suffix="yyyy-MM-dd")'
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/logger=org.openwis.dataservice.util.WMOFTP:add(use-parent-handlers=true,handlers=["CollectionHandler"])'
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/logger=org.openwis.dataservice.gts.collection:add(use-parent-handlers=true,handlers=["CollectionHandler"])'
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/logger=org.openwis.dataservice.dissemination:add(use-parent-handlers=true,handlers=["RequestHandler"])'
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/logger=org.openwis.datasource:add(use-parent-handlers=true,handlers=["RequestHandler"])'
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/logger=org.openwis.management.service:add(use-parent-handlers=true,handlers=["AlertsHandler"])'
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/periodic-rotating-file-handler=FILE:write-attribute(name="level",value="INFO")'
sleep 5
echo "--Reload WildFly..."
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 --command=':reload'
sleep 5
echo "--Deploy the postgresql driver"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 --command='deploy postgresql-42.2.6.jar'
echo "--Setup the OpenwisDS data source"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command=' data-source add --name=OpenwisDS --jndi-name="java:/OpenwisDS" --connection-url="jdbc:postgresql://localhost:5432/OpenWIS?stringtype=unspecified" \
--user-name="openwis" --password="$openwis" --driver-name="postgresql-42.2.6.jar" --driver-class="org.postgresql.Driver" \
--min-pool-size=10 --max-pool-size=40 --idle-timeout-minutes=15 --blocking-timeout-wait-millis=15000 --background-validation-millis=50000'
sleep 10
echo "--Setup the JMS queues"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="jms-queue add --queue-address=CollectionQueue --entries=[java:/jms/queue/CollectionQueue]"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="jms-queue add --queue-address=IncomingDataQueue --entries=[java:/jms/queue/IncomingDataQueue]"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="jms-queue add --queue-address=RequestQueue --entries=[java:/jms/queue/RequestQueue]"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="jms-queue add --queue-address=DisseminationQueue --entries=[java:/jms/queue/DisseminationQueue]"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="jms-queue add --queue-address=PackedFeedingQueue --entries=[java:/jms/queue/PackedFeedingQueue]"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="jms-queue add --queue-address=UnpackedFeedingQueue --entries=[java:/jms/queue/UnpackedFeedingQueue]"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="jms-queue add --queue-address=StatisticsQueue --entries=[java:/jms/queue/StatisticsQueue]"
sleep 10
echo "--Reload WildFly..."
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 --command=":reload"
sleep 10
echo "--Deploying Management Service ear file"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="deploy /home/openwis/maven_projects/openwis/openwis-management/openwis-management-service/openwis-management-service-ear/target/openwis-management-service.ear"
sleep 10
echo "--Deploying Data Service ear file"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="deploy /home/openwis/maven_projects/openwis/openwis-dataservice/openwis-dataservice-server/openwis-dataservice-server-ear/target/openwis-dataservice.ear"
sleep 10
echo "--checking deployments"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 --command="ls deployment"
sleep 5
echo ""
echo "*** Installation COMPLETE ***"
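Once the script completes, a few CLI spot checks can confirm the setup. A minimal sketch, assuming WildFly is up on 127.0.0.1:9990 and $JBOSS_HOME is set (`test-connection-in-pool` is a standard operation of the datasources subsystem):

```shell
# Hypothetical post-install checks; skipped gracefully if the CLI is absent.
CLI="$JBOSS_HOME/bin/jboss-cli.sh"
DS_TEST='/subsystem=datasources/data-source=OpenwisDS:test-connection-in-pool'
if [ -x "$CLI" ]; then
  # Validate that the OpenwisDS pool can open a connection to PostgreSQL
  "$CLI" --connect --controller=127.0.0.1:9990 --command="$DS_TEST"
  # List the deployed applications; both ear files should appear
  "$CLI" --connect --controller=127.0.0.1:9990 --command="ls deployment"
else
  echo "jboss-cli.sh not found; skipping post-install checks"
fi
```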
Note: If needed, Solr can be deployed in WildFly with the following command:
#Deploying SolR war
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="deploy /home/openwis/maven_projects/openwis/openwis-metadataportal/openwis-portal-solr/target/openwis-portal-solr.war"
- Verify that WildFly is listening on port 8180:
$ netstat -an | grep LISTEN | grep -v LISTENING
tcp        0      0 127.0.0.1:9990          0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:3528          0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:8180            0.0.0.0:*               LISTEN
- Verify that the OpenWIS Data Service is working:
tail -f $JBOSS_HOME/standalone/log/server.log
...
2020-06-08 16:06:15,884 INFO [org.openwis.dataservice.cache.CacheManagerImpl] (EJB default - 8) +++ purging temporary directory
2020-06-08 16:06:15,885 INFO [org.openwis.dataservice.cache.CacheManagerImpl] (EJB default - 7) +++ Start housekeeping of cache since Sun Jun 07 16:06:15 CEST 2020
2020-06-08 16:06:15,885 INFO [org.openwis.dataservice.cache.CacheManagerImpl] (EJB default - 7) +++ expiration +++ deleting mapped metadata
2020-06-08 16:06:15,886 INFO [org.openwis.dataservice.cache.CacheManagerImpl] (EJB default - 7) +++ expiration +++ deleting cached files
2020-06-08 16:06:15,887 INFO [org.openwis.dataservice.cache.CacheManagerImpl] (EJB default - 7) +++ expiration +++ deleted : 0
2020-06-08 16:06:15,888 INFO [org.openwis.management.service.IngestedDataStatisticsImpl] (EJB default - 7) No ingested data found at Mon Jun 08 02:00:00 CEST 2020
2020-06-08 16:06:15,888 INFO [org.openwis.dataservice.cache.CacheManagerImpl] (EJB default - 7) +++ Finished housekeeping of cache since Sun Jun 07 16:06:15 CEST 2020
...
As the openwis user, create the Solr home folder and the Solr data folder:
mkdir /home/openwis/solr
mkdir /home/openwis/solr/data
The SolR war is deployed as follows:
mkdir <TOMCAT_HOME>/webapps/openwis-portal-solr
cd <TOMCAT_HOME>/webapps/openwis-portal-solr
Copy openwis-portal-solr.war to <TOMCAT_HOME>/webapps/openwis-portal-solr, e.g.:
cp ~/maven_projects/openwis/openwis-metadataportal/openwis-portal-solr/target/openwis-portal-solr.war .
unzip openwis-portal-solr.war
The configuration is done in WEB-INF/classes/openwis.properties of the SolR webapp:
###
### OpenWIS SolR Configuration
###
### SolR data folder
openwis.solr.data=/home/openwis/solr/data
### Spatial folder (only used in case PostGIS is not available)
openwis.solr.spatial=/home/openwis/solr/spatial
### Database configuration
jdbc.driver=org.postgresql.Driver
jdbc.url=jdbc:postgresql://127.0.0.1:5432/OpenWIS?stringtype=unspecified
jdbc.user=openwis
jdbc.password=DB_PASSWORD
Verify that the version of the postgresql JDBC driver jar (postgresql-XXX-jdbc4.jar) located at:
<TOMCAT_HOME>/webapps/openwis-portal-solr/WEB-INF/lib/
matches the installed PostgreSQL version. If it does not, replace it with a compatible one.
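The comparison can be scripted; a sketch assuming TOMCAT_HOME points at the Solr Tomcat instance (adjust the default path below) and that psql is on the PATH when PostgreSQL is local:

```shell
# Report the bundled JDBC driver (version is encoded in the jar file name)
# against the locally installed PostgreSQL version.
TOMCAT_HOME="${TOMCAT_HOME:-$HOME/apache-tomcat-9.0.36}"
LIB_DIR="$TOMCAT_HOME/webapps/openwis-portal-solr/WEB-INF/lib"
for jar in "$LIB_DIR"/postgresql-*.jar; do
  [ -f "$jar" ] || continue
  echo "bundled driver: $(basename "$jar")"
done
if command -v psql >/dev/null 2>&1; then
  psql --version
fi
```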
Two possibilities exist for spatial indexing:
- Shape file: When using shape files (and not PostGIS), local files in a dedicated directory will be used to store the spatial information. Create the SolR spatial directory:
mkdir /home/openwis/solr/spatial
Adjust the related spatial folder in openwis.properties described above (if needed).
Difference with Iteration 2: the shared folder between SolR and the portals is no longer needed.
- PostGIS: when using PostGIS, the spatial information is stored in the common OpenWIS database. Adjust the database configuration in openwis.properties described above.
Note: At startup, SolR first tries to connect to the database and checks whether a valid PostGIS installation exists; otherwise it automatically switches to shape file indexing.
Finally, restart Tomcat: service tomcat restart (or use the provided Tomcat start/stop scripts).
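After the restart, the Solr webapp can be probed over HTTP; a sketch assuming Tomcat listens on localhost:8080 (adjust host and port to your instance):

```shell
# Expect an HTTP 200 from openwis-portal-solr once Tomcat is back up.
SOLR_URL="http://localhost:8080/openwis-portal-solr/"
if command -v curl >/dev/null 2>&1; then
  code=$(curl -s -o /dev/null -w '%{http_code}' "$SOLR_URL" || true)
  echo "openwis-portal-solr returned HTTP $code"
fi
```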
The Staging Post is a file system accessible via a Web interface. This can be configured using the provided StagingPost web application deployed on a Tomcat server or alternatively deployed on JBoss. As mentioned in data service section, the Staging Post file system is configured to be in /home/openwis/stagingPost.
As openwis, deploy the Staging Post web app:
mkdir /home/openwis/stagingPost (if needed)
cd /home/openwis/stagingPost
cp ~/maven_projects/openwis/openwis-stagingpost/target/stagingPost.war .
jar xvf stagingPost.war
In case the Staging Post is deployed elsewhere, adjust the Staging Post globalXsltFile locations in WEB-INF/web.xml.
To deploy the stagingPost on Tomcat: Edit the server.xml file of Tomcat:
vi ~/apache-tomcat-9.0.36/conf/server.xml
Add the following element at the end of the Host element:
<Context docBase="/home/openwis/stagingPost" path="/stagingPost" reloadable="false"/>
Note: The Admin Portal, User Portal and OpenAM cannot share the same Tomcat instance, nor run under the same OS user.
Deployment of the admin portal:
As a new user (e.g. openwisAdmin):
mkdir apache-tomcat-9.0.36/webapps/openwis-admin-portal
cd apache-tomcat-9.0.36/webapps/openwis-admin-portal
Find openwis-admin-portal.war or openwis-admin-portal-admin.war (in later versions of OpenWIS) and copy it to apache-tomcat-9.0.36/webapps/openwis-admin-portal:
cp ~/maven_projects/openwis/openwis-metadataportal/openwis-portal/openwis-admin-portal/openwis-admin-portal-admin.war .
jar xvf openwis-admin-portal-admin.war
The user portal is deployed likewise, as another new user (e.g. openwisUser):
mkdir apache-tomcat-9.0.36/webapps/openwis-user-portal
cd apache-tomcat-9.0.36/webapps/openwis-user-portal
cp ~/maven_projects/openwis/openwis-metadataportal/openwis-portal/openwis-user-portal/openwis-user-portal-user.war .
jar xvf openwis-user-portal-user.war
The configuration uses the same files and parameters on both the user and admin portals.
⚠️ This configuration must be done for both deployments: the User Portal and the Admin Portal.
Perform the edits as the OS user you defined for each deployment: the portal admin user for the Admin Portal and the portal user for the User Portal.
- config.xml: database connection (GeoNetwork)
vi WEB-INF/config.xml
Edit at section postgresql, set the user, password and URL used to connect the PostgreSQL instance.
- openwis-metadataportal.properties: OpenWIS specific configuration
vi WEB-INF/classes/openwis-metadataportal.properties
Property | Description |
---|---|
openwis.metadataportal.dataservice.*.wsdl | WSDL location of the Data Services |
openwis.metadataportal.securityservice.*.wsdl | WSDL location of the Security Web Services; update with the Security Service host/port |
openwis.metadataportal.harness.subselectionparameters.wsdl | WSDL location of the SubselectionParameters Harness Web Service |
openwis.metadataportal.harness.mssfss.wsdl | WSDL location of the MSS/FSS Harness Web Service |
openwis.management.*.wsdl | WSDL location of the Management Web Services; update with the Management Service host/port |
openwis.metadataportal.mssfss.support | Whether the MSS/FSS is supported by the current deployment |
openwis.metadataportal.url.staging.post | Base URL of the Staging Post |
openwis.metadataportal.cache.enable | Whether the current deployment has a Cache facility |
openwis.metadataportal.solr.url | SolR location: http://server:port/openwis-portal-solr |
openwis.metadataportal.date.format | Date format (test only) |
openwis.metadataportal.datetime.format | Date time format (test only) |
openwis.metadataportal.deploy.name | Deployment name as defined with PopulateLDAP, described in section 3.5 (e.g. GiscMeteoFrance) |
openwis.metadataportal.datapolicy.default.name | Default data policy that will be applied to newly created groups |
openwis.metadataportal.datapolicy.default.operations | Default data policy operation that will be applied to newly created groups |
openwis.metadataportal.sso | OpenSSO URL (used only on Administration portal) |
openwis.metadataportal.oai.maxRecords | Max records processed by OAIPMH in one page |
openwis.metadataportal.acceptedFileExtensions | Accepted list of file extensions (deduced from the metadata and used during file unpacking) |
openwis.metadataportal.monitoring.userportal.url | URL of user portal used to test availability (used on admin portal) |
openwis.metadataportal.monitoring.synchro.warn.limit | Percentage beyond which the availability of synchronization process will be marked in error |
openwis.metadataportal.monitoring.harvesting.warn.limit | Percentage beyond which the availability of harvesting process will be marked in error |
openwis.metadataportal.session.securityservice.tooManyActiveUsers | Number of connected users beyond which an alarm is raised |
openwis.metadataportal.securityservice.tooManyActiveAnonymousUsers | Number of anonymous users beyond which an alarm is raised |
openwis.metadataportal.extract.xpath | XPath of the GTS Category / Data Policy in ISO19139 schema |
openwis.metadataportal.extract.gtsCategoryAdditionalRegexp | Regular expression to interpret a GTS Category found in metadata as WMOAdditional |
openwis.metadataportal.extract.gtsCategoryEssentialRegexp | Regular expression to interpret a GTS Category found in metadata as WMOEssential |
openwis.metadataportal.extract.gtsPriorityRegexp | Regular expression to interpret a GTS Priority found in metadata |
openwis.metadataportal.extract.urnPatternForIgnoredFncPattern | The pattern applied to the URN to determine if an FNC Pattern found in metadata should be ignored when inserting the metadata (by default, the FNC Pattern is ignored when the URN is TTAAiiCCCC based) |
openwis.metadataportal.catalogsize.alarm.period | Period at which the catalog size is checked |
openwis.metadataportal.catalogsize.alarm.limit | Catalog size limit beyond which raise an alarm |
openwis.metadataportal.lang.list | Available languages: list of <lang_value/lang_label> |
openwis.metadataportal.report.file.path | Directory where harvesting reports are stored on the admin portal server |
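For orientation only, a few of these properties might be set as follows (the host names and values below are illustrative placeholders, not shipped defaults):

```
openwis.metadataportal.solr.url=http://localhost:8080/openwis-portal-solr
openwis.metadataportal.url.staging.post=http://<frontend-host>/stagingPost
openwis.metadataportal.deploy.name=GiscMeteoFrance
openwis.metadataportal.cache.enable=true
openwis.metadataportal.sso=http://<external-idp-hostname>/openam
```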
- openwis-deployments.properties: Multi-deployments configuration
vi WEB-INF/classes/openwis-deployments.properties
Property | Description |
---|---|
openwis.cots | List of deployment names that can be used to consult the remote requests/subscriptions. These deployments must be in the same circle of trust. |
openwis.backups | List of backup deployment names |
openwis.deployment.url.DEPLOYMENT_NAME | The User portal URL of the backup deployment (for each backup deployment, including the current one) |
openwis.deployment.url.DEPLOYMENT_NAME.admin | The administrator email of the backup deployment (for each backup deployment, including the current one) that will be notified in case of availability error detection |
openwis.backup.warn.rate | The rate of available functions (in %) below which the deployment is considered unavailable |
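As an illustrative sketch (the deployment names, URLs, email and threshold below are placeholders), a minimal openwis-deployments.properties could look like:

```
openwis.cots=GiscA,GiscB
openwis.backups=GiscB
openwis.deployment.url.GiscB=http://<giscb-frontend>/openwis-user-portal
openwis.deployment.url.GiscB.admin=admin@giscb.example
openwis.backup.warn.rate=50
```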
Each OpenWIS portal (User or Admin) will be a Service Provider (SP), and will need to be configured before being registered in the Circle of Trust defined previously. This phase requires the OpenWIS portal to be deployed.
GenerateSPConfFiles is a tool provided to ease the configuration process of a Service Provider. The tool can be downloaded from the middleware repository:
https://repository.../GenerateSPConfFiles.zip
Unzip GenerateSPConfFiles tool
Assuming that /home/openwis/ is the PORTAL USER or PORTAL ADMIN HOME:
cd /home/openwis/
mkdir GenerateSPConfFiles
cd GenerateSPConfFiles
wget https://repository.../GenerateSPConfFiles.zip
unzip GenerateSPConfFiles.zip
Edit configuration.properties in conf directory:
#Home Directory (Windows : c:/Documents and Settings/<USER-NAME>, Other /home/<USER-NAME>)
directory=<PORTAL USER or PORTAL ADMIN HOME>
#Circle Of Trust Name
cot-name =cot_openwis # as defined in step 3.3.1
# Service Provider Name
sp-name =GiscA # logical name of the portal: GiscMF, GiscMFAdmin; name to identify the Service Provider in OpenAM
# Service Provider URL (example: http://<HOST-NAME>:<PORT>/<SP-NAME>)
sp-url =http://<external-sp-hostname>/openwis-user-portal # for the user portal
# IDP Discovery URL (example: http://<HOST-NAME>:<PORT>/idpdiscovery)
idp-discovery=http://<external-idpdiscovery-hostname>/idpdiscovery
# IDPs and SPs of the circle of trust. (comma separated)
trusted-providers =IDP1 # at least the IdP, as defined in step 3.3.1
#All IDPS names
idps-names =IDP1 # at least the IdP, as defined in step 3.3.1
#For each IDP name, create a variable <IDP-NAME>.url=http://<HOST-NAME>:<PORT>/openam
#<IDP-NAME>.url=
IDP1.url=http://<external-idp-hostname>/openam
# at least the IdP, as defined in step **3.3.1**
Run GenerateSPConfFiles
cd ~/GenerateSPConfFiles/apache-ant-1.8.1/bin
chmod a+x *
cd ~/GenerateSPConfFiles
chmod a+x Generate-SP-Conf-Files.sh apache-ant-1.8.1/bin/ant
./Generate-SP-Conf-Files.sh
The tool generates files in /fedlet, which allow the portals (service providers) to manage SAML2 communications with the other entities.
⚠️ If Generate-SP-Conf-Files.sh must be run more than once, first manually delete all files in the fedlet folder: the script won't replace existing files, so re-execution would otherwise have no effect.
After each fedlet generation, restart Tomcat.
⚠️ If any Fedlet error occurs, see Fedlet log files located at
/fedlet/debug
Restart the portal server and then verify via a browser:
For admin Portal:
<Admin-PORTAL-HOST-NAME>:<PORT>/openwis-admin-portal/saml2/jsp/exportmetadata.jsp
For user Portal:
<User-PORTAL-HOST-NAME>:<PORT>/openwis-user-portal/saml2/jsp/exportmetadata.jsp
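These checks can be scripted; a sketch assuming curl is available and the placeholder hosts/ports are substituted with real values:

```shell
# Each portal should serve its SAML2 metadata XML once the fedlet is in place.
PORTALS="http://<User-PORTAL-HOST-NAME>:<PORT>/openwis-user-portal \
http://<Admin-PORTAL-HOST-NAME>:<PORT>/openwis-admin-portal"
for portal in $PORTALS; do
  url="$portal/saml2/jsp/exportmetadata.jsp"
  if command -v curl >/dev/null 2>&1; then
    curl -s -o /dev/null -w "$url -> HTTP %{http_code}\n" "$url" || true
  fi
done
```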
To add the portals to the Circle of Trust, they must be registered as remote Service Providers in OpenAM:
Connect to OpenAM.
Click on “Top Level Realm” button.
In "Common Tasks", click on "Create SAML2 Providers"
Select “Register Remote Service Provider”.
In the field "URL where metadata is located", register the URL:
http://<SP_HOST_NAME>:<PORT>/<SP_NAME>/saml2/jsp/exportmetadata.jsp
This URL is the URL of the User or Administration portal to register.
For example: http://openwis-oai.akka.eu/openwis-user-portal/saml2/jsp/exportmetadata.jsp
The portal must be running to complete this step.
If needed, select the previously created circle of trust named "cot_openwis".
Click on Configure.
The SP is now configured.
For each Identity Provider in the Federation tab, select “Assertion Processing” tab.
Add the following lines one by one as attributes to the Attribute Mapper (the attribute names are case sensitive and must contain no spaces) and save:
OpenWISAddress=OpenWISAddress
OpenWISAddressCity=OpenWISAddressCity
OpenWISAddressZip=OpenWISAddressZip
OpenWISAddressState=OpenWISAddressState
OpenWISAddressCountry=OpenWISAddressCountry
OpenWISClassOfService=OpenWISClassOfService
OpenWISBackUps=OpenWISBackUps
OpenWISProfile=OpenWISProfile
isMemberOf=isMemberOf
OpenWISNeedUserAccount=OpenWISNeedUserAccount
cn=cn
sn=sn
givenname=givenname
mail=mail
After installation is complete, the portals must be accessible via:
http://<USER_PORTAL_SERVER>:<PORT>/openwis-user-portal
http://<ADMIN_PORTAL_SERVER>:<PORT>/openwis-admin-portal
At the login prompt (served by OpenAM), enter the credentials of a user created via PopulateLDAP (see 3.5).
If any error occurs, information may be found at:
- Tomcat logs (OpenDJ, OpenAM, OpenWIS Security Services)
- OpenWIS portal log files located at /home/<Admin or User>/logs
- JBoss logs (openwis-dataservice, openwis-management-service)
An Apache server is used as a front-end to access:
- the user and admin portals
- OpenAM
- OpenWIS Security Services (load balancing/failover)
- Data Service (load balancing/failover)
- the Staging Post
Apache is installed by default on Red Hat 5.5+.
The configuration of Apache is done via the configuration file: /etc/httpd/conf/httpd.conf.
Configuration details are given in the next paragraphs.
Start Apache Web Server:
service httpd start
service httpd status
Notes:
To allow Apache to act as a proxy with SELinux enabled, as root run:
setsebool httpd_can_network_connect on
If getsebool reports "SELinux is disabled", edit /etc/sysconfig/selinux, change SELINUX=disabled to SELINUX=permissive, and reboot the OS.
To test the value:
getsebool httpd_can_network_connect
httpd_can_network_connect --> on
Warning:
- If failover and site are not configured: the host name of the proxied OpenAM needs to be exactly the same as used during OpenAM installation (hostname, not IP address), or the whole OpenAM installation needs to be done via the front-end host name.
- In failover and site configuration: the frontend hostname will be the frontend of the configured site.
For a simple front-end configuration:
ProxyPreserveHost On
# Proxy for OpenAM
ProxyPass /opensso http://<HOST_OPENAM>:8080/openam
ProxyPassReverse /opensso http://<HOST_OPENAM>:8080/openam
For a failover configuration with load balancing between 2 OpenAM instances:
ProxyRequests Off
# Load balancer for OpenAM
<Proxy balancer://cluster-opensso>
Order deny,allow
Allow from all
BalancerMember http://<HOST_OPENAM_1>:8080/opensso route=node1
BalancerMember http://<HOST_OPENAM_2>:8080/opensso route=node2
ProxySet lbmethod=byrequests
ProxySet stickysession=APLBCOOKIE
</Proxy>
Header add Set-Cookie "APLBCOOKIE=APACHE.%{BALANCER_WORKER_ROUTE}e;path=/;" env=BALANCER_ROUTE_CHANGED
ProxyPass /opensso balancer://cluster-opensso
ProxyPassReverse /opensso http://<HOST_OPENAM_1>:8080/openam
ProxyPassReverse /opensso http://<HOST_OPENAM_2>:8080/openam
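The directives above depend on mod_proxy, mod_proxy_http, mod_proxy_balancer and mod_headers being loaded. A sketch of the corresponding LoadModule lines (module paths vary by distribution; on Apache 2.4 the byrequests scheduler additionally requires mod_lbmethod_byrequests):

```
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule headers_module modules/mod_headers.so
```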
For a simple front-end configuration:
# Proxy for IDP Discovery
ProxyPass /idpdiscovery http://<HOST_IDP_DISCOVERY>:8080/idpdiscovery
ProxyPassReverse /idpdiscovery http://<HOST_IDP_DISCOVERY>:8080/idpdiscovery
For a failover configuration with load balancing between 2 IdpDiscovery instances:
# IDP Discovery
<Proxy balancer://cluster-idpdiscovery>
Order deny,allow
Allow from all
BalancerMember http://<HOST_IDPDISCOVERY_1>:8080/idpdiscovery route=node1
BalancerMember http://<HOST_IDPDISCOVERY_2>:8080/idpdiscovery route=node2
</Proxy>
ProxyPass /idpdiscovery balancer://cluster-idpdiscovery lbmethod=byrequests
ProxyPassReverse /idpdiscovery http://<HOST_IDPDISCOVERY_1>:8080/idpdiscovery
ProxyPassReverse /idpdiscovery http://<HOST_IDPDISCOVERY_2>:8080/idpdiscovery
When configuring failover for the OpenWIS Security Web Services, the Apache front-end is used as a load balancer between the two Tomcat instances (used for OpenAM). The Apache configuration is as follows:
# Security service Loadbalancer
<Proxy balancer://cluster-securityservice>
Order deny,allow
Allow from all
BalancerMember http://<HOST_OPENAM_1>:8080/openwis-securityservice route=node1
BalancerMember http://<HOST_OPENAM_2>:8080/openwis-securityservice route=node2
</Proxy>
ProxyPass /openwis-securityservice balancer://cluster-securityservice lbmethod=byrequests
ProxyPassReverse /openwis-securityservice http://<HOST_OPENAM_1>:8080/openwis-securityservice
ProxyPassReverse /openwis-securityservice http://<HOST_OPENAM_2>:8080/openwis-securityservice
ProxyPreserveHost On
# Proxy for User portal
ProxyPass /openwis-user-portal http://<HOST_USER_PORTAL>:8080/openwis-user-portal
ProxyPassReverse /openwis-user-portal http://<HOST_USER_PORTAL>:8080/openwis-user-portal
# Proxy for Admin portal (if admin portal is available on this zone)
ProxyPass /openwis-admin-portal http://<HOST_ADMIN_PORTAL>:8080/openwis-admin-portal
ProxyPassReverse /openwis-admin-portal http://<HOST_ADMIN_PORTAL>:8080/openwis-admin-portal
Replace HOST_USER_PORTAL and HOST_ADMIN_PORTAL by the virtual address used for the Active/Passive configuration.
Redirect /
It is generally useful to be able to access the application without having to specify the name of the web application, giving just the hostname; for example, using http://wispi.meteo.fr instead of http://wispi.meteo.fr/openwis-user-portal to access the user portal. This can be configured with the following Redirect directive:
Redirect / http://<HOST_USER_PORTAL:PORT>/openwis-user-portal
Or similarly for Admin portal:
Redirect / http://<HOST_ADMIN_PORTAL:PORT>/openwis-admin-portal
ProxyPass /stagingPost http://<HOST_STAGINGPOST_WEBAPP>:8080/stagingPost
ProxyPassReverse /stagingPost http://<HOST_STAGINGPOST_WEBAPP>:8080/stagingPost
[OpenWIS Association](http://openwis.github.io/openwis-documentation/)
[OpenWIS repository](https://github.com/OpenWIS)
[IG-OpenWIS-1-v02 9-MigrationOpenAM-v3](https://github.com/OpenWIS/openwis/blob/master/docs/IG-OpenWIS-1-v02%209-MigrationOpenAM-v3.doc)