Commit

[spcgeonode] integrate in main geonode repo
olivierdalang committed Oct 20, 2018
1 parent a1b125d commit 0dfa9cc
Showing 38 changed files with 1,647 additions and 0 deletions.
78 changes: 78 additions & 0 deletions .circleci/config.yml
@@ -0,0 +1,78 @@
version: 2

jobs:
build:
machine: true
steps:

- checkout

- run:
name: Install Docker Compose
command: |
sudo curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
- run:
name: Build and start the stack
command: docker-compose -f docker-compose.yml up -d --build
working_directory: scripts/spcgeonode/

- run:
name: Wait for everything to start...
command: |
n=1
m=60
until [ $n -gt $m ]
do
echo "Waiting 60 seconds..."
sleep 60
DJANGO_STATUS=$(docker inspect --format="{{json .State.Health.Status}}" spcgeonode_django_1)
GEOSERVER_STATUS=$(docker inspect --format="{{json .State.Health.Status}}" spcgeonode_geoserver_1)
echo "Waited $n min (out of $m min)"
echo "Django: $DJANGO_STATUS"
echo "Geoserver: $GEOSERVER_STATUS"
if [[ $DJANGO_STATUS == '"healthy"' ]] && [[ $GEOSERVER_STATUS == '"healthy"' ]]; then
break
fi
echo "Not healthy yet..."
docker ps
n=$[$n+1]
done
[[ $DJANGO_STATUS == '"healthy"' ]] && [[ $GEOSERVER_STATUS == '"healthy"' ]];
- run:
name: Show state (debug)
command: docker ps
when: always

- run:
name: Geoserver logs (debug)
command: docker logs spcgeonode_geoserver_1 --tail 500
when: always

- run:
name: Django logs (debug)
command: docker logs spcgeonode_django_1 --tail 500
when: always

# - run: Run the Geonode integration test suite # TODO : reenable this if we manage to have them pass
# - run: docker-compose -f docker-compose.yml exec postgres psql -U postgres -c "SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity;"
# - run: docker-compose -f docker-compose.yml exec postgres psql -U postgres -c "CREATE DATABASE test_postgres WITH TEMPLATE postgres;"
# - run: docker-compose -f docker-compose.yml exec django python manage.py test geonode.tests.integration

workflows:
version: 2
commit:
jobs:
- build
nightly:
triggers:
- schedule:
cron: "0 0 * * *"
filters:
branches:
only:
- spcgeonode-release
jobs:
- build
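
For reference, the health probe used by the wait loop above can be run by hand against a local stack. This is a minimal sketch only; the container names assume the default `spcgeonode` compose project name set in `.env`:

```sh
# From the spcgeonode directory, build and start the stack
cd scripts/spcgeonode/
docker-compose -f docker-compose.yml up -d --build

# Same probe the CI loop uses: both should eventually report "healthy"
docker inspect --format='{{json .State.Health.Status}}' spcgeonode_django_1
docker inspect --format='{{json .State.Health.Status}}' spcgeonode_geoserver_1
```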
2 changes: 2 additions & 0 deletions .dockerignore
@@ -7,3 +7,5 @@ geonode/static/node_modules
docs
.coverage
.celerybeat-*

scripts/spcgeonode/_volume_*
2 changes: 2 additions & 0 deletions .gitignore
@@ -84,3 +84,5 @@ geonode\.tests\.bdd\.e2e\.test_login/

/celerybeat.pid
/celerybeat-schedule

scripts/spcgeonode/_volume_*
1 change: 1 addition & 0 deletions _secrets/admin_password
@@ -0,0 +1 @@
duper
1 change: 1 addition & 0 deletions _secrets/admin_username
@@ -0,0 +1 @@
super
31 changes: 31 additions & 0 deletions _secrets/rclone.backup.conf
@@ -0,0 +1,31 @@
############################################
# Example using Amazon S3
############################################

# To configure backups using Amazon S3, replace the following variables:
#
# The Access Key for your account:
# YOUR_S3_ACCESS_KEY_HERE
# The Secret Key for your account:
# YOUR_S3_SECRET_KEY_HERE
# The Amazon region you want to use (looks like us-east-1, eu-west-1, ap-southeast-2, etc.):
# YOUR_S3_REGION_HERE
# The name of the bucket (if it doesn't exist, it will be created):
# THE_NAME_OF_YOUR_BUCKET_HERE
#
# Note that it may be a good idea to enable versioning on the Amazon bucket, as rclone will just mirror the current directory state.


[spcgeonode_base]
type = s3
acl = private
access_key_id = YOUR_S3_ACCESS_KEY_HERE
secret_access_key = YOUR_S3_SECRET_KEY_HERE
region = YOUR_S3_REGION_HERE
env_auth = false

[spcgeonode]
type = alias
remote = spcgeonode_base:THE_NAME_OF_YOUR_BUCKET_HERE

# TODO : add some other examples (FTP, dropbox...)
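
Once the placeholders above are replaced with real values, the remote can be sanity-checked from the host. This is a sketch only, assuming rclone is installed locally and the config file sits in `_secrets/`:

```sh
# List the top-level entries of the aliased remote to verify credentials and bucket access
rclone --config _secrets/rclone.backup.conf lsd spcgeonode:
```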
7 changes: 7 additions & 0 deletions scripts/spcgeonode/.dockerignore
@@ -0,0 +1,7 @@
*.pyc
Thumbs.db
_volume_*
_service_*
*~
celerybeat-schedule
celeryev.pid
38 changes: 38 additions & 0 deletions scripts/spcgeonode/.env
@@ -0,0 +1,38 @@
##############################################################
# #
# SPCgeonode Settings #
# #
# The default settings are suited for testing on localhost. #
# If you're deploying SPCgeonode for production, you need to #
# adapt the following settings #
# #
# DO NOT FORGET to also modify values in _secrets ! #
# #
##############################################################

# Name of the setup (you only need to change this if you run several instances of the stack)
COMPOSE_PROJECT_NAME=spcgeonode

# IP or domain name and port where the server can be reached on HTTPS (leave HOST empty if you want to use HTTP only)
HTTPS_HOST=
HTTPS_PORT=443

# IP or domain name and port where the server can be reached on HTTP (leave HOST empty if you want to use HTTPS only)
HTTP_HOST=127.0.0.1
HTTP_PORT=80

# Email where alerts should be sent. This will be used by Let's Encrypt and as the django admin email.
ADMIN_EMAIL=admin@example.com

# Let's Encrypt certificates for https encryption. You must have a domain name as HTTPS_HOST (doesn't work
# with an ip) and it must be reachable from the outside. This can be one of the following:
# disabled : we do not get a certificate at all (a placeholder certificate will be used)
# staging : we get staging certificates (these are invalid, but they let us test the whole process and have much higher rate limits)
# production : we get a normal certificate (default)
LETSENCRYPT_MODE=disabled

# Choose from https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
TIME_ZONE=Pacific/Fiji

# Whether users should be able to create accounts themselves
REGISTRATION_OPEN=True
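
As an illustration only, a production deployment served over HTTPS with Let's Encrypt might override the defaults roughly as follows (following the comments above, HTTP_HOST is left empty for an HTTPS-only setup; the domain name and email are placeholders):

```sh
HTTPS_HOST=geonode.example.com
HTTPS_PORT=443
HTTP_HOST=
HTTP_PORT=80
ADMIN_EMAIL=ops@example.com
LETSENCRYPT_MODE=production
```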
103 changes: 103 additions & 0 deletions scripts/spcgeonode/CHANGELOG.md
@@ -0,0 +1,103 @@
# Changelog

## Version 2.x

### 2.10rc4.0

- adopted Geonode's version numbers (with an additional level for subreleases)
- moved the setup to the main Geonode repo under `scripts/spcgeonode`, which makes it easier to use as a development setup for Geonode
- use CircleCI (mostly to avoid interfering with the existing travis setup)

## Version 0.1.x (Geonode 2.10)

**WARNING** YOU CANNOT UPGRADE FROM 0.0.x to 0.1.x
YOU NEED TO DO A FRESH INSTALL AND MANUALLY TRANSFER THE DATA

### 0.1.1

- improved nginx config (gzip and expiration header)

### 0.1.0

- targeting future 2.10
- removed elastic search container (it was unused anyway)
- removed the postgres login hack, using the Geonode-Geoserver OAuth mechanism instead
- prebuilt geodatadir used again and master password procedure simplified
- added django healthcheck
- if https is enabled, force redirection to https host (as geonode doesn't support multiple domain names/relative installs)
- django secret generated automatically

## Version 0.0.x (Geonode 2.6)

### 0.0.25

- undid the disabling of django admin users
- reverted use of the 2.6.x branch (because of a side effect: login taking ages)

### 0.0.24

- use Geonode's Geoserver .war build instead of starting from vanilla
- fix thumbnail generation (uses a custom release of Geonode)
- django admin users are again disabled on restart (so we can keep only 1 superuser)
- added a travis integration test (deploys django, then tries to create a user, upload a layer, get the thumbnail and get a tile of the layer)
- changed rclone configuration (you must now provide an rclone conf file)
- removed syncthing
- make http(s) ports configurable in case a port is already busy

### 0.0.23

- various fixes (broken pip dependencies, wrong fix for geoserver proxy, ssl certificate refreshing)

### 0.0.22

- siteurl set using HTTPS_HOST or HTTP_HOST (instead of "/" which isn't supported)

### 0.0.21

- use custom build of geonode (with some fixes not upstreamed yet)

### 0.0.18

- geoserver master password reset is cleaner (the password is programmatically reset from the initial datadir before first launch)
- support empty HTTP_HOST or HTTPS_HOST
- geoserver 2.12.1 => 2.12.2
- cleaned up env vars
- upgrade should work

### 0.0.17

- improve nginx<->letsencrypt integration (nginx can work without the letsencrypt service)

### 0.0.16

- put django in the main directory (so it's clearer for deploy builds)

### 0.0.15

- removed the rancher template from the repo
- removed entrypoints and command from the django image to work around what looks like a bug in rancher where an empty entrypoint in docker-compose isn't taken into account

### 0.0.11

- added a second backup service using rclone (the idea is to test both syncthing and rclone, then choose one)

### 0.0.10

- we don't rely on an initial geodatadir anymore; instead we start from scratch, launch geoserver once, then apply our modifications
- added a backup service using Syncthing

### 0.0.9

- fix bug with rancher resolver on rancher

### 0.0.8

- allow disabling/testing let's encrypt using env variables
- we use geonode's users/groups tables directly for geoserver's authentication

### 0.0.7

- have ssl working online
- use env variables / secrets where applicable
- publish on git and autobuild images
- make docker deploy work again
