
Commit

🔀 Merge pull request #7 from kaplanPRO/development
version 0.3.0
csengor committed Mar 20, 2022
2 parents 19a3bdf + 13457c4 commit 2540739
Showing 57 changed files with 1,610 additions and 355 deletions.
50 changes: 48 additions & 2 deletions .docker/.env.template
@@ -1,20 +1,66 @@
# Rename this file to .env by running:
# cp .env.template .env

# https://docs.djangoproject.com/en/latest/ref/settings/#std:setting-DEBUG
# https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-DEBUG
# Uncomment to set to False
#DEBUG=False

# https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
#ALLOWED_HOSTS=

# https://docs.djangoproject.com/en/dev/ref/settings/#csrf-trusted-origins
# This setting is mandatory for Django>=4.0
#CSRF_TRUSTED_ORIGINS=

# Database credentials
DB_NAME=
DB_USER=
DB_PASSWORD=

# https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-SECRET_KEY
# If you do not set one yourself, it will be configured automatically
# every time the container is started.
#SECRET_KEY=

# Set to the name of the Docker network
# https://hub.docker.com/r/nginxproxy/nginx-proxy is on
NETWORK_NAME=nginx-proxy
# Defaults to nginx-proxy
#NETWORK_NAME=

# Optional Settings

# S3 Storage Settings
#STATICFILES_STORAGE=storages.backends.s3boto3.S3StaticStorage
#FILE_STORAGE=storages.backends.s3boto3.S3Boto3Storage
#S3_ACCESS_KEY_ID=
#S3_SECRET_ACCESS_KEY=
#S3_REGION_NAME=

# Public bucket for javascript, css, and other static files
#S3_PUBLIC_BUCKET=

# Defaults to 'static'. No need to change it unless you are using
# one bucket for multiple sites/apps. In which case, you might want
# to set this to something like 'project-name/static' so that the
# projects do not interfere with one another
#S3_PUBLIC_BUCKET_LOCATION=

# If you want to serve your static files stored in the public bucket
# via Cloudfront, change {0} to your Cloudfront subdomain, and uncomment
#S3_CUSTOM_DOMAIN={0}.cloudfront.net

# Private bucket for files to be translated
#S3_PRIVATE_BUCKET=

# Defaults to the root directory. No need to change it unless you
# are using one bucket for multiple sites/apps. In which case, you
# might want to set this to something like 'project-name' so that the
# projects do not interfere with one another
#S3_PRIVATE_BUCKET_LOCATION=

# No need to change the two below for AWS S3
#S3_ENDPOINT_URL=
#S3_USE_SSL=

# Sets the prefix for container names
#PROJECT_NAME=
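
For reference, a filled-in .env for a basic deployment might look like the sketch below. Every value here is a placeholder chosen for illustration, not a value taken from this commit:

```
# Example values only; substitute your own
DEBUG=False
ALLOWED_HOSTS=cloud.example.com
CSRF_TRUSTED_ORIGINS=https://cloud.example.com
DB_NAME=kaplancloud
DB_USER=kaplancloud
DB_PASSWORD=change-me
NETWORK_NAME=nginx-proxy
```
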
2 changes: 1 addition & 1 deletion .docker/web.env.template → .docker/.env.web.template
@@ -1,5 +1,5 @@
# Rename this file to .env by running:
# cp web.env.template web.env
# cp .env.web.template .env.web

# Use comma to specify multiple domains. Eg:
# VIRTUAL_HOST=subdomain.yourdomain.tld,subdomain2.yourdomain.tld
2 changes: 1 addition & 1 deletion .docker/README.md
@@ -8,7 +8,7 @@ optionally, [nginxproxy/acme-companion](https://hub.docker.com/r/nginxproxy/acme
running on the same machine. Please make sure to connect the
nginxproxy/nginx-proxy container to a network, the name of which you'll need
to set to the NETWORK_NAME environment variable in .env. Please see the
.env.template and .db.env.template files for instructions.
.env.template and .env.web.template files for instructions.

When you're done, run `docker-compose up -d` while in the same directory as docker-compose.yml

5 changes: 0 additions & 5 deletions .docker/db.env.template

This file was deleted.

17 changes: 10 additions & 7 deletions .docker/docker-compose.yml
@@ -3,7 +3,7 @@ version: "3.9"
networks:
customnetwork:
external: true
name: ${NETWORK_NAME}
name: ${NETWORK_NAME:-nginx-proxy}

volumes:
staticfiles:
@@ -15,8 +15,10 @@ services:
container_name: ${PROJECT_NAME:-kaplan-cloud}_db
volumes:
- ./db/data/db:/var/lib/postgresql/data
env_file:
- db.env
environment:
- POSTGRES_DB=${DB_NAME}
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASSWORD}
expose:
- 5432
restart: unless-stopped
@@ -28,9 +30,10 @@ services:
- staticfiles:/code/staticfiles
env_file:
- .env
- db.env
environment:
- POSTGRES_HOST=db
- DB_ENGINE=django.db.backends.postgresql
- DB_HOST=db
- DB_PORT=5432
- USE_GUNICORN=True
depends_on:
- db
@@ -41,10 +44,10 @@
image: nginx
container_name: ${PROJECT_NAME:-kaplan-cloud}_web
env_file:
- web.env
- .env.web
environment:
- VIRTUAL_PORT=8080
links:
depends_on:
- app
networks:
- default
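
The `${NETWORK_NAME:-nginx-proxy}` form is standard Compose variable substitution: if NETWORK_NAME is unset or empty in .env, the network name falls back to nginx-proxy. A quick way to check what the file resolves to, sketched here under the assumption that the external proxy network does not exist yet:

```
# Create the external network shared by the proxy and Kaplan Cloud
docker network create nginx-proxy

# Print the fully resolved configuration, with defaults substituted in
docker-compose config
```
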
1 change: 1 addition & 0 deletions .dockerignore
@@ -5,5 +5,6 @@
.tmp
.venv
__pycache__
db.sqlite3
projects
Dockerfile
1 change: 1 addition & 0 deletions .gitignore
@@ -103,6 +103,7 @@ celerybeat.pid

# Environments
.env
.env.web
.venv
db.env
web.env
3 changes: 3 additions & 0 deletions Dockerfile
@@ -5,6 +5,9 @@ ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY . /code/

RUN apt-get update && \
apt-get install -y postgresql-client

RUN pip install -U pip && \
pip install -r requirements.txt && \
pip install gunicorn
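
Installing postgresql-client gives the app image the standard Postgres command-line tools. Purely as an illustration (not something this commit adds), pg_isready from that package can confirm the database container is accepting connections; the container name below is an assumption, following the `${PROJECT_NAME:-kaplan-cloud}` prefix pattern of the db and web services:

```
# Hypothetical readiness check; "db" is the Compose service name for Postgres
docker exec -it kaplan-cloud_app pg_isready -h db -p 5432
```
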
30 changes: 16 additions & 14 deletions README.md
@@ -1,30 +1,32 @@
# Kaplan Cloud

Hello and thank you for giving Kaplan Cloud a try!
Kaplan Cloud is a cloud-based translation management system.

If you would like help getting set up or would like to try the demo available at [clouddemo.kaplan.pro](https://clouddemo.kaplan.pro), please reach out to contact@kaplan.pro.
The official documentation is available at https://docs.kaplan.pro/projects/kaplan-cloud

## Deploy with Docker locally
## Local installation with Docker

First, we need a [Postgres container](https://hub.docker.com/_/postgres) up and running:
Please see [here](https://docs.kaplan.pro/projects/kaplan-cloud/en/latest/installation.html#local-installation-with-docker)
for instructions; however, for testing purposes, all you need to do is first
start a [Kaplan Cloud container](https://hub.docker.com/r/kaplanpro/cloud):

```
docker run -d --expose 5432 -e POSTGRES_PASSWORD=postgres -v kaplan-postgres:/var/lib/postgresql/data --restart always --name kaplan-postgres postgres
docker run -d \
-p 8080:8080 \
--restart always \
--name kaplan-cloud \
kaplanpro/cloud
```

Now, let's start a [Kaplan Cloud container](https://hub.docker.com/r/kaplanpro/cloud):
And then create a superuser account:

```
docker run -d -p 8080:8080 --link kaplan-postgres -e POSTGRES_HOST=kaplan-postgres -e POSTGRES_PASSWORD=postgres -v kaplan-cloud:/code/kaplancloudapp/projects --restart always --name kaplan-cloud kaplanpro/cloud
```

Finally, we need to create a superuser (admin) account for you:
```
docker exec -it kaplan-cloud python manage.py createsuperuser
```

We're done! Head on over to http://0.0.0.0:8080 and explore Kaplan Cloud.
That's it! Head on over to http://0.0.0.0:8080 and explore Kaplan Cloud.
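
If the page is not up right away, the container's startup output can be followed with a standard Docker command (a generic suggestion, not part of this README):

```
# Follow the Kaplan Cloud container's logs
docker logs -f kaplan-cloud
```
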

## Deploy with Docker Compose in a production environment
## Production installation with Docker Compose

There is a sample Docker Compose configuration available [here](https://github.com/kaplanPRO/kaplan-cloud/tree/main/.docker).
Please see [here](https://docs.kaplan.pro/projects/kaplan-cloud/en/latest/installation.html#production-installation-with-docker-compose)
for instructions.
Binary file added docs/source/_static/img/aws-iam-add-user.png
Binary file added docs/source/_static/img/aws-iam-credentials.png
Binary file added docs/source/_static/img/aws-iam-policy.png
Binary file added docs/source/_static/img/gcp-cloud-shell.png
2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -22,7 +22,7 @@
author = 'Kaplan'

# The full version, including alpha/beta/rc tags
release = '0.2.2'
release = '0.3.0'


# -- General configuration ---------------------------------------------------
4 changes: 3 additions & 1 deletion docs/source/index.rst
@@ -7,8 +7,10 @@ Kaplan Cloud
============

.. toctree::
:maxdepth: 2
:maxdepth: 3

try-for-free-online
pre-installation-configuration
installation
users
language-profiles
44 changes: 27 additions & 17 deletions docs/source/installation.rst
@@ -6,43 +6,53 @@ Local installation with Docker
==============================
1. Follow `these instructions <https://docs.docker.com/get-docker>`_ to install Docker.

2. Deploy a `Postgres container <https://hub.docker.com/_/postgres>`_:
2. Deploy a `Kaplan Cloud container <https://hub.docker.com/r/kaplanpro/cloud>`_:

Please note that with Docker containers, storage is ephemeral. This means
that when you upgrade to a newer version, you will essentially remove the
container and its contents along with any work you may have done. Docker
solves this with
`mounts/volumes <https://docs.docker.com/storage/volumes/>`_. If you would
like to just fiddle around with Kaplan Cloud, the invocation below is all
you need:

.. code-block::
docker run -d \
--expose 5432 \
-e POSTGRES_PASSWORD=postgres \
-v kaplan-postgres:/var/lib/postgresql/data \
-p 8080:8080 \
--restart always \
--name kaplan-postgres \
postgres
--name kaplan-cloud \
kaplanpro/cloud
"postgres" is your password for the Postgres instance. It'd be best to change it.
However, if you'd like your locally stored data to persist, you'll first
need to create some directories and files, which we will attach (or bind)
to the container:

.. code-block::
mkdir kaplan-cloud && \
mkdir kaplan-cloud/projects && \
touch kaplan-cloud/db.sqlite3
3. Deploy a `Kaplan Cloud container <https://hub.docker.com/r/kaplanpro/cloud>`_:
Now, let's start the container with them attached:

.. code-block::
docker run -d \
-p 8080:8080 \
--link kaplan-postgres \
-e POSTGRES_HOST=kaplan-postgres \
-e POSTGRES_PASSWORD=postgres \
-v kaplan-cloud:/code/kaplancloudapp/projects \
--mount type=bind,source=${PWD}/kaplan-cloud/db.sqlite3,target=/code/db.sqlite3 \
--mount type=bind,source=${PWD}/kaplan-cloud/projects,target=/code/kaplancloudapp/projects \
--restart always \
--name kaplan-cloud \
kaplanpro/cloud``
Change "postgres" to the password you input in the previous step.
kaplanpro/cloud
4. Create an admin (superuser) account:
3. Create an admin (superuser) account:

.. code-block::
docker exec -it kaplan-cloud python manage.py createsuperuser
5. We're done! Head on over to http://0.0.0.0:8080 and explore Kaplan Cloud.
4. We're done! Head on over to http://0.0.0.0:8080 and explore Kaplan Cloud.
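
Because db.sqlite3 and the projects directory live on the host in this setup, upgrading to a newer image should amount to replacing the container and reattaching the same mounts. A rough sketch using standard Docker commands, not taken from these docs:

```
# Pull the newer image, then replace the container; data stays in ./kaplan-cloud on the host
docker pull kaplanpro/cloud
docker stop kaplan-cloud && docker rm kaplan-cloud
# Re-run the same `docker run` invocation from step 2 with both --mount flags
```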

===========================================
Production installation with Docker Compose