Merge pull request #209 from GlobalDataverseCommunityConsortium/develop
Release v4.20
poikilotherm committed Oct 23, 2020
2 parents 4d00ec2 + 1ea178a commit 33b0695
Showing 101 changed files with 2,077 additions and 394 deletions.
27 changes: 15 additions & 12 deletions README.rst
@@ -1,7 +1,7 @@
Deploying, Running and Using Dataverse on Kubernetes
====================================================

.. image:: https://raw.githubusercontent.com/IQSS/dataverse-kubernetes/master/docs/img/title-composition.png
.. image:: docs/img/title-composition.png

|Dataverse badge|
|Validation badge|
@@ -11,25 +11,28 @@ Deploying, Running and Using Dataverse on Kubernetes
|Docs badge|
|IRC badge|

This community-supported project aims to provide simple to re-use Kubernetes
objects on how to run Dataverse on a Kubernetes cluster.
This community-supported project aims to offer a new way to deploy, run, and
maintain a Dataverse installation for any purpose on any kind of
Kubernetes-based cloud infrastructure.

It targets both day-1 deployments and day-2 operations.
You can use it on your laptop, in your on-premises datacentre, or in a public cloud.
With the power of `Kubernetes <http://kubernetes.io>`_, many scenarios are possible.

* Documentation: https://dataverse-k8s.rtfd.io
* Support: https://github.com/IQSS/dataverse-kubernetes/issues
* Roadmap: https://dataverse-k8s.rtfd.io/en/latest/roadmap.html
* Support and new ideas: https://github.com/IQSS/dataverse-kubernetes/issues

If you would like to contribute, you are most welcome. Head over to the
`contribution guide <https://dataverse-k8s.rtfd.io/en/latest/contribute.html>`_
for details.
If you would like to contribute, you are most welcome.

This project follows the same branching strategy as the upstream Dataverse
project, using a ``release`` branch for stable releases plus a ``develop``
branch, in which unexpected or breaking changes may happen.


.. |Dataverse badge| image:: https://img.shields.io/badge/Dataverse-v4.19-important.svg

.. |Dataverse badge| image:: https://img.shields.io/badge/Dataverse-v4.20-important.svg
:target: https://dataverse.org
.. |Validation badge| image:: https://jenkins.dataverse.org/job/dataverse-k8s/job/Kubeval%20Linting/job/master/badge/icon?subject=kubeval&status=valid&color=purple
:target: https://jenkins.dataverse.org/blue/organizations/jenkins/dataverse-k8s%2FKubeval%20Linting/activity?branch=master
.. |Validation badge| image:: https://jenkins.dataverse.org/job/dataverse-k8s/job/Kubeval%20Linting/job/release/badge/icon?subject=kubeval&status=valid&color=purple
:target: https://jenkins.dataverse.org/blue/organizations/jenkins/dataverse-k8s%2FKubeval%20Linting/activity?branch=release
.. |DockerHub dataverse-k8s badge| image:: https://img.shields.io/static/v1.svg?label=image&message=dataverse-k8s&logo=docker
:target: https://hub.docker.com/r/iqss/dataverse-k8s
.. |DockerHub solr-k8s badge| image:: https://img.shields.io/static/v1.svg?label=image&message=solr-k8s&logo=docker
2 changes: 1 addition & 1 deletion dataverse
Submodule dataverse updated 189 files
32 changes: 32 additions & 0 deletions docker-compose.yaml
@@ -0,0 +1,32 @@
---
version: '3.5'
services:

postgresql:
image: postgres:9.6
expose:
- 5432
environment:
- POSTGRES_USER=dataverse
- POSTGRES_PASSWORD=changeme

solr:
image: iqss/solr-k8s
expose:
- 8983

dataverse:
build:
context: .
dockerfile: ./docker/dataverse-k8s/glassfish-dev/Dockerfile
image: iqss/dataverse-k8s:dev
depends_on:
- postgresql
- solr
ports:
- 8080:8080
volumes:
- type: bind
source: ./personas/docker-compose/secrets
target: /secrets
read_only: true
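The compose file above wires a locally built Dataverse image to PostgreSQL and Solr, with a deliberately weak default database password. A hypothetical override file (the file name and values below are illustrative, not part of this commit) sketches how such defaults are typically replaced without editing the tracked file, since `docker-compose` merges `docker-compose.override.yml` into `docker-compose.yaml` automatically:

```yaml
# docker-compose.override.yml -- hypothetical override, not part of the commit.
version: '3.5'
services:
  postgresql:
    environment:
      # Replace the insecure default from docker-compose.yaml
      - POSTGRES_PASSWORD=a-better-secret
  dataverse:
    ports:
      # Expose Dataverse on a different host port if 8080 is taken
      - 8081:8080
```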
4 changes: 2 additions & 2 deletions docker/dataverse-k8s/Jenkinsfile
@@ -74,7 +74,7 @@ pipeline {
}
stage('latest') {
when {
branch 'master'
branch 'release'
}
environment {
// credentials() will magically add DOCKER_HUB_USR and DOCKER_HUB_PSW
@@ -83,7 +83,7 @@
}
steps {
script {
// Push master image to latest tag
// Push release image to latest tag
docker.withRegistry("${env.DOCKER_REGISTRY}", "${env.DOCKER_HUB_CRED}") {
gf_docker_image.push("latest")
pyr_docker_image.push("payara")
5 changes: 3 additions & 2 deletions docker/dataverse-k8s/bin/bootstrap-job.sh
@@ -17,7 +17,8 @@ DATAVERSE_URL=${DATAVERSE_URL:-"http://${DATAVERSE_SERVICE_HOST}:${DATAVERSE_SER
# The Solr Service IP is always available under its name within the same namespace.
# If people want to use a different Solr than we normally deploy, they have the
# option to override.
SOLR_K8S_HOST=${SOLR_K8S_HOST:-"solr"}
SOLR_SERVICE_HOST=${SOLR_SERVICE_HOST:-"solr"}
SOLR_SERVICE_PORT=${SOLR_SERVICE_PORT:-"8983"}

# Check postgres and API key secrets are available
if [ ! -s "${SECRETS_DIR}/db/password" ]; then
@@ -53,7 +54,7 @@ sed -i -e "s#dataverse@mailinator.com#${CONTACT_MAIL}#" data/user-admin.json
./setup-all.sh --insecure -p="${ADMIN_PASSWORD:-admin}"

# 4.) Configure Solr location
curl -sS -X PUT -d "${SOLR_K8S_HOST}:8983" "${DATAVERSE_URL}/api/admin/settings/:SolrHostColonPort"
curl -sS -X PUT -d "${SOLR_SERVICE_HOST}:${SOLR_SERVICE_PORT}" "${DATAVERSE_URL}/api/admin/settings/:SolrHostColonPort"

# 5.) Provision builtin users key to enable creation of more builtin users
if [ -s "${SECRETS_DIR}/api/userskey" ]; then
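The change above swaps the hard-coded `SOLR_K8S_HOST` for the `SOLR_SERVICE_HOST`/`SOLR_SERVICE_PORT` variables that Kubernetes injects for a Service in the same namespace, with POSIX parameter expansion supplying defaults when they are absent. A minimal sketch of that fallback:

```shell
# Sketch of the ${VAR:-default} fallback used in bootstrap-job.sh.
# With no cluster-injected values, the defaults apply.
unset SOLR_SERVICE_HOST SOLR_SERVICE_PORT

SOLR_SERVICE_HOST=${SOLR_SERVICE_HOST:-"solr"}
SOLR_SERVICE_PORT=${SOLR_SERVICE_PORT:-"8983"}

# This is the value handed to the :SolrHostColonPort setting.
echo "${SOLR_SERVICE_HOST}:${SOLR_SERVICE_PORT}"   # prints solr:8983
```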
8 changes: 8 additions & 0 deletions docker/dataverse-k8s/bin/default.config
@@ -16,6 +16,14 @@ JMX_EXPORTER_CONFIG=${JMX_EXPORTER_CONFIG:-"${HOME}/jmx_exporter_config.yaml"}
# (Exporting needed as they cannot be seen by `env` otherwise)

export dataverse_files_directory=${dataverse_files_directory:-/data}
export dataverse_files_storage__driver__id=${dataverse_files_storage__driver__id:-local}

if [ "${dataverse_files_storage__driver__id}" = "local" ]; then
export dataverse_files_local_type=${dataverse_files_local_type:-file}
export dataverse_files_local_label=${dataverse_files_local_label:-Local}
export dataverse_files_local_directory=${dataverse_files_local_directory:-/data}
fi

export dataverse_rserve_host=${dataverse_rserve_host:-rserve}
export dataverse_rserve_port=${dataverse_rserve_port:-6311}
export dataverse_rserve_user=${dataverse_rserve_user:-rserve}
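Names such as `dataverse_files_storage__driver__id` appear to encode JVM option names in environment-variable-safe form: a double underscore stands for a dash and a single underscore for a dot. A sketch of that mapping (an assumption about the images' convention, not code from this commit):

```shell
# Hypothetical helper illustrating the apparent naming convention:
# '__' -> '-', then '_' -> '.', turning an env var into a JVM option name.
env_to_option() {
  echo "$1" | sed -e 's/__/-/g' -e 's/_/./g'
}

env_to_option "dataverse_files_storage__driver__id"   # prints dataverse.files.storage-driver-id
env_to_option "dataverse_files_local_directory"       # prints dataverse.files.local.directory
```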
29 changes: 18 additions & 11 deletions docker/dataverse-k8s/glassfish/Dockerfile
@@ -8,9 +8,9 @@ FROM centos:7

LABEL maintainer="FDM FZJ <forschungsdaten@fz-juelich.de>"

ARG TINI_VERSION=v0.18.0
ARG TINI_VERSION=v0.19.0
ARG JMX_EXPORTER_VERSION=0.12.0
ARG VERSION=4.19
ARG VERSION=4.20
ARG DOMAIN=domain1

ENV HOME_DIR=/opt/dataverse\
@@ -21,11 +21,12 @@ ENV HOME_DIR=/opt/dataverse\
DOCROOT_DIR=/docroot\
METADATA_DIR=/metadata\
SECRETS_DIR=/secrets\
DUMPS_DIR=/dumps\
GLASSFISH_PKG=http://download.java.net/glassfish/4.1/release/glassfish-4.1.zip\
GLASSFISH_SHA1=704a90899ec5e3b5007d310b13a6001575827293\
WELD_PKG=https://repo1.maven.org/maven2/org/jboss/weld/weld-osgi-bundle/2.2.10.SP1/weld-osgi-bundle-2.2.10.SP1-glassfish4.jar\
GRIZZLY_PKG=http://guides.dataverse.org/en/latest/_downloads/glassfish-grizzly-extra-all.jar\
PGDRIVER_PKG=https://jdbc.postgresql.org/download/postgresql-42.2.10.jar\
GRIZZLY_PKG=http://guides.dataverse.org/en/${VERSION}/_downloads/glassfish-grizzly-extra-all.jar\
PGDRIVER_PKG=https://jdbc.postgresql.org/download/postgresql-42.2.12.jar\
DATAVERSE_VERSION=${VERSION}\
DATAVERSE_PKG=https://github.com/IQSS/dataverse/releases/download/v${VERSION}/dvinstall.zip\
JMX_EXPORTER_PKG=https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/${JMX_EXPORTER_VERSION}/jmx_prometheus_javaagent-${JMX_EXPORTER_VERSION}.jar\
@@ -43,15 +44,13 @@ RUN groupadd -g 1000 glassfish && \
useradd -u 1000 -M -s /bin/bash -d ${HOME_DIR} glassfish -g glassfish && \
echo glassfish:glassfish | chpasswd && \
mkdir -p ${HOME_DIR} ${SCRIPT_DIR} ${SECRETS_DIR} && \
mkdir -p ${DATA_DIR} ${METADATA_DIR} ${DOCROOT_DIR} && \
chown -R glassfish: ${HOME_DIR} ${DATA_DIR} ${METADATA_DIR} ${DOCROOT_DIR}
mkdir -p ${DATA_DIR} ${METADATA_DIR} ${DOCROOT_DIR} ${DUMPS_DIR} && \
chown -R glassfish: ${HOME_DIR} ${DATA_DIR} ${METADATA_DIR} ${DOCROOT_DIR} ${DUMPS_DIR}

# Install tini as minimized init system
RUN wget --no-verbose -O /tini https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini && \
wget --no-verbose -O /tini.asc https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini.asc && \
gpg --batch --keyserver "hkp://p80.pool.sks-keyservers.net:80" --recv-keys 595E85A6B1B4779EA4DAAEC70B588DFF0527A9B7 && \
gpg --batch --verify /tini.asc /tini && \
chmod +x /tini
RUN wget --no-verbose -O tini-amd64 https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-amd64 && \
echo '93dcc18adc78c65a028a84799ecf8ad40c936fdfc5f2a57b1acda5a8117fa82c tini-amd64' | sha256sum -c - && \
mv tini-amd64 /tini && chmod +x /tini

# Install esh template engine from Github
RUN wget --no-verbose -O esh https://raw.githubusercontent.com/jirutka/esh/v0.3.0/esh && \
@@ -94,6 +93,14 @@ RUN ${GLASSFISH_DIR}/bin/asadmin start-domain && \
for MEMORY_JVM_OPTION in $(${GLASSFISH_DIR}/bin/asadmin list-jvm-options | grep "Xm[sx]"); do\
${GLASSFISH_DIR}/bin/asadmin delete-jvm-options $MEMORY_JVM_OPTION;\
done && \
${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-XX\:+HeapDumpOnOutOfMemoryError" && \
${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-XX\:HeapDumpPath=${DUMPS_DIR}" && \
${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-XX\:+UseG1GC" && \
${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-XX\:+UseStringDeduplication" && \
${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-XX\:MaxGCPauseMillis=500" && \
${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-XX\:MetaspaceSize=256m" && \
${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-XX\:MaxMetaspaceSize=2g" && \
${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-XX\:+IgnoreUnrecognizedVMOptions" && \
${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-server" && \
${GLASSFISH_DIR}/bin/asadmin stop-domain && \
mkdir -p ${DOMAIN_DIR}/autodeploy && \
33 changes: 23 additions & 10 deletions docker/dataverse-k8s/glassfish/bin/init_1_conf_glassfish.sh
@@ -30,16 +30,29 @@ do
done

# 1b. Create AWS access credentials when storage driver is set to s3
# See IQSS/dataverse-kubernetes#28 for details of this workaround.
if [ "s3" = "${dataverse_files_storage__driver__id}" ]; then
if [ -f ${SECRETS_DIR}/s3/access-key ] && [ -f ${SECRETS_DIR}/s3/secret-key ]; then
mkdir -p ${HOME_DIR}/.aws
echo "[default]" > ${HOME_DIR}/.aws/credentials
cat ${SECRETS_DIR}/s3/access-key | sed -e "s#^#aws_access_key_id = #" -e "s#\$#\n#" >> ${HOME_DIR}/.aws/credentials
cat ${SECRETS_DIR}/s3/secret-key | sed -e "s#^#aws_secret_access_key = #" -e "s#\$#\n#" >> ${HOME_DIR}/.aws/credentials
else
echo "WARNING: Could not find all S3 access secrets in ${SECRETS_DIR}/s3/(access-key|secret-key). Check your Kubernetes Secrets and their mounting!"
fi
# Find all access keys
if [ -d "${SECRETS_DIR}/s3" ]; then
S3_KEYS=`find "${SECRETS_DIR}/s3" -readable -type f -iname '*access-key'`
S3_CRED_FILE=${HOME_DIR}/.aws/credentials
mkdir -p `dirname "${S3_CRED_FILE}"`
rm -f ${S3_CRED_FILE}
# Iterate keys
while IFS= read -r S3_ACCESS_KEY; do
echo "Loading S3 key ${S3_ACCESS_KEY}"
# Try to find the secret key, parse for profile and add to the credentials file.
S3_PROFILE=`echo "${S3_ACCESS_KEY}" | sed -ne "s#.*/\(.*\)-access-key#\1#p"`
S3_SECRET_KEY=`echo "${S3_ACCESS_KEY}" | sed -ne "s#\(.*/\|.*/.*-\)access-key#\1secret-key#p"`

if [ -r ${S3_SECRET_KEY} ]; then
[ -z "${S3_PROFILE}" ] && echo "[default]" >> "${S3_CRED_FILE}" || echo "[${S3_PROFILE}]" >> "${S3_CRED_FILE}"
cat "${S3_ACCESS_KEY}" | sed -e "s#^#aws_access_key_id = #" -e "s#\$#\n#" >> "${S3_CRED_FILE}"
cat "${S3_SECRET_KEY}" | sed -e "s#^#aws_secret_access_key = #" -e "s#\$#\n#" >> "${S3_CRED_FILE}"
echo "" >> "${S3_CRED_FILE}"
else
echo "ERROR: Could not find or read matching \"$S3_SECRET_KEY\"."
exit 1
fi
done <<< "${S3_KEYS}"
fi

# 2. Domain-spaced resources (JDBC, JMS, ...)
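The new loop above derives an AWS profile name and the matching secret-key path from each `*-access-key` file with two sed expressions. A small sketch of just that parsing step, using illustrative paths (note the `\|` alternation is a GNU sed extension, which fits the CentOS base image):

```shell
# Sketch of the profile parsing in init_1_conf_glassfish.sh (paths are examples).
parse_profile() { echo "$1" | sed -ne "s#.*/\(.*\)-access-key#\1#p"; }
parse_secret()  { echo "$1" | sed -ne "s#\(.*/\|.*/.*-\)access-key#\1secret-key#p"; }

parse_profile "/secrets/s3/myprofile-access-key"  # prints myprofile
parse_secret  "/secrets/s3/myprofile-access-key"  # prints /secrets/s3/myprofile-secret-key

# A bare "access-key" yields an empty profile, so the script writes [default].
parse_profile "/secrets/s3/access-key"            # prints nothing
parse_secret  "/secrets/s3/access-key"            # prints /secrets/s3/secret-key
```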
27 changes: 8 additions & 19 deletions docker/dataverse-k8s/payara/Dockerfile
@@ -4,44 +4,33 @@
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0

FROM payara/server-full:5.201
FROM payara/server-full:5.2020.3
LABEL maintainer="FDM FZJ <forschungsdaten@fz-juelich.de>"

ARG VERSION=4.19
ARG VERSION=4.20
ARG DOMAIN=domain1

ENV DATA_DIR=/data\
DOCROOT_DIR=/docroot\
METADATA_DIR=/metadata\
SECRETS_DIR=/secrets\
DUMPS_DIR=/dumps\
DOMAIN_DIR=${PAYARA_DIR}/glassfish/domains/${DOMAIN_NAME}\
DATAVERSE_VERSION=${VERSION}\
DATAVERSE_PKG=https://github.com/IQSS/dataverse/releases/download/v${VERSION}/dvinstall.zip\
PGDRIVER_PKG=https://jdbc.postgresql.org/download/postgresql-42.2.12.jar\
MEM_MAX_RAM_PERCENTAGE=70.0\
MEM_XSS=512k
# Make heap dumps on OOM appear in DUMPS_DIR
JVM_ARGS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=\${ENV=DUMPS_DIR}"

# Create basic pathes
USER root
RUN mkdir -p ${HOME_DIR} ${SCRIPT_DIR} ${SECRETS_DIR} && \
mkdir -p ${DATA_DIR} ${METADATA_DIR} ${DOCROOT_DIR} && \
chown -R payara: ${DATA_DIR} ${METADATA_DIR} ${DOCROOT_DIR} ${SECRETS_DIR}

# WORKAROUND MEMORY ISSUES UNTIL UPSTREAM FIXES THEM IN NEW RELEASE
RUN ${PAYARA_DIR}/bin/asadmin --user=${ADMIN_USER} --passwordfile=${PASSWORD_FILE} start-domain ${DOMAIN_NAME} && \
${PAYARA_DIR}/bin/asadmin --user=${ADMIN_USER} --passwordfile=${PASSWORD_FILE} delete-jvm-options \
'-XX\:+UnlockExperimentalVMOptions:-XX\:+UseCGroupMemoryLimitForHeap:-XX\:MaxRAMFraction=1' && \
${PAYARA_DIR}/bin/asadmin --user=${ADMIN_USER} --passwordfile=${PASSWORD_FILE} create-jvm-options \
'-XX\:+UseContainerSupport:-XX\:MaxRAMPercentage=${ENV=MEM_MAX_RAM_PERCENTAGE}:-Xss${ENV=MEM_XSS}' && \
${PAYARA_DIR}/bin/asadmin --user=${ADMIN_USER} --passwordfile=${PASSWORD_FILE} stop-domain ${DOMAIN_NAME} && \
# Cleanup after initialization
rm -rf \
${PAYARA_DIR}/glassfish/domains/${DOMAIN_NAME}/osgi-cache \
${PAYARA_DIR}/glassfish/domains/${DOMAIN_NAME}/logs
mkdir -p ${DATA_DIR} ${METADATA_DIR} ${DOCROOT_DIR} ${DUMPS_DIR} && \
chown -R payara: ${DATA_DIR} ${METADATA_DIR} ${DOCROOT_DIR} ${SECRETS_DIR} ${DUMPS_DIR}

# Install prerequisites
RUN apt-get -qq update && \
apt-get -qqy install postgresql-client jq imagemagick curl
apt-get -qqy install postgresql-client jq imagemagick curl wget unzip

# Install esh template engine from Github
RUN wget --no-verbose -O esh https://raw.githubusercontent.com/jirutka/esh/v0.3.0/esh && \
34 changes: 23 additions & 11 deletions docker/dataverse-k8s/payara/bin/init_2_conf_payara.sh
@@ -34,17 +34,29 @@ do
done

# 1b. Create AWS access credentials when storage driver is set to s3
# See IQSS/dataverse-kubernetes#28 for details of this workaround.
if [ "s3" = "${dataverse_files_storage__driver__id}" ]; then
if [ -f ${SECRETS_DIR}/s3/access-key ] && [ -f ${SECRETS_DIR}/s3/secret-key ]; then
echo "INFO: Deploying AWS credentials."
mkdir -p ${HOME_DIR}/.aws
echo "[default]" > ${HOME_DIR}/.aws/credentials
cat ${SECRETS_DIR}/s3/access-key | sed -e "s#^#aws_access_key_id = #" -e "s#\$#\n#" >> ${HOME_DIR}/.aws/credentials
cat ${SECRETS_DIR}/s3/secret-key | sed -e "s#^#aws_secret_access_key = #" -e "s#\$#\n#" >> ${HOME_DIR}/.aws/credentials
else
echo "WARNING: Could not find all S3 access secrets in ${SECRETS_DIR}/s3/(access-key|secret-key). Check your Kubernetes Secrets and their mounting!"
fi
# Find all access keys
if [ -d "${SECRETS_DIR}/s3" ]; then
S3_KEYS=`find "${SECRETS_DIR}/s3" -readable -type f -iname '*access-key'`
S3_CRED_FILE=${HOME_DIR}/.aws/credentials
mkdir -p `dirname "${S3_CRED_FILE}"`
rm -f ${S3_CRED_FILE}
# Iterate keys
while IFS= read -r S3_ACCESS_KEY; do
echo "Loading S3 key ${S3_ACCESS_KEY}"
# Try to find the secret key, parse for profile and add to the credentials file.
S3_PROFILE=`echo "${S3_ACCESS_KEY}" | sed -ne "s#.*/\(.*\)-access-key#\1#p"`
S3_SECRET_KEY=`echo "${S3_ACCESS_KEY}" | sed -ne "s#\(.*/\|.*/.*-\)access-key#\1secret-key#p"`

if [ -r ${S3_SECRET_KEY} ]; then
[ -z "${S3_PROFILE}" ] && echo "[default]" >> "${S3_CRED_FILE}" || echo "[${S3_PROFILE}]" >> "${S3_CRED_FILE}"
cat "${S3_ACCESS_KEY}" | sed -e "s#^#aws_access_key_id = #" -e "s#\$#\n#" >> "${S3_CRED_FILE}"
cat "${S3_SECRET_KEY}" | sed -e "s#^#aws_secret_access_key = #" -e "s#\$#\n#" >> "${S3_CRED_FILE}"
echo "" >> "${S3_CRED_FILE}"
else
echo "ERROR: Could not find or read matching \"$S3_SECRET_KEY\"."
exit 1
fi
done <<< "${S3_KEYS}"
fi

# 2. Domain-spaced resources (JDBC, JMS, ...)
2 changes: 1 addition & 1 deletion docker/solr-k8s/Dockerfile
@@ -10,7 +10,7 @@ LABEL maintainer="FDM FZJ <forschungsdaten@fz-juelich.de>"

ARG WEBHOOK_VERSION=2.6.11
ARG TINI_VERSION=v0.18.0
ARG VERSION=4.19
ARG VERSION=4.20
ARG COLLECTION=collection1
ENV SOLR_OPTS="-Dsolr.jetty.request.header.size=102400"\
COLLECTION_DIR=/opt/solr/server/solr/${COLLECTION}\
4 changes: 2 additions & 2 deletions docker/solr-k8s/Jenkinsfile
@@ -68,7 +68,7 @@ pipeline {
}
stage('latest') {
when {
branch 'master'
branch 'release'
}
environment {
// credentials() will magically add DOCKER_HUB_USR and DOCKER_HUB_PSW
@@ -77,7 +77,7 @@
}
steps {
script {
// Push master image to latest tag
// Push release image to latest tag
docker.withRegistry("${env.DOCKER_REGISTRY}", "${env.DOCKER_HUB_CRED}") {
docker_image.push("latest")
}
1 change: 1 addition & 0 deletions docs/.gitignore
@@ -1 +1,2 @@
.build
_build
3 changes: 2 additions & 1 deletion docs/conf.py
@@ -25,7 +25,7 @@
author = u'Oliver Bertuch'

# The short X.Y version
version = u'4.19'
version = u'4.20'
# The full version, including alpha/beta/rc tags
release = version

@@ -87,6 +87,7 @@
autosectionlabel_prefix_document = True

extlinks = {
'tree': ('https://github.com/IQSS/dataverse-kubernetes/tree/master/%s', 'folder of master branch '),
'issue': ('https://github.com/IQSS/dataverse-kubernetes/issues/%s', 'issue '),
'issue_dv': ('https://github.com/IQSS/dataverse/issues/%s', 'issue '),
'guide_dv': ('http://guides.dataverse.org/en/'+version+'/%s', 'upstream docs ')
