Commit 5298bd4: Merge 56c8009 into 5f6ab64
rstorey committed Jan 28, 2019 (2 parents: 5f6ab64 + 56c8009)
Showing 10 changed files with 50 additions and 62 deletions.
2 changes: 1 addition & 1 deletion Pipfile
@@ -7,7 +7,7 @@ name = "pypi"
 "psycopg2" = "*"
 "psycopg2-binary" = "*"
 gunicorn = "*"
-celery = "*"
+celery = {extras = ["redis"],version = "*"}
 coreapi = "*"
 django-haystack = "*"
 "boto3" = ">=1.9.16"
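Note: the `extras` syntax above is the Pipfile equivalent of `pip install "celery[redis]"`; it pulls in the Redis client library so Celery can talk to a `redis://` broker. A minimal sketch of what this enables, using the broker URL this commit sets in `settings_template.py` (the `ping` task is hypothetical, for illustration only):

```python
from celery import Celery

# Broker and result-backend URLs match the new defaults in settings_template.py.
app = Celery(
    "concordia",
    broker="redis://redis:6379/0",
    backend="redis://redis:6379/0",
)

@app.task
def ping():
    # Hypothetical task used only to verify the worker round trip.
    return "pong"
```

With a worker running, `ping.delay().get(timeout=10)` exercises both the broker and the result backend in one call.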
52 changes: 31 additions & 21 deletions Pipfile.lock

Some generated files are not rendered by default.

6 changes: 0 additions & 6 deletions build_containers.sh
@@ -48,12 +48,6 @@ if [ $BUILD_ALL -eq 1 ]; then
 docker push "${AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/concordia/celerybeat:${VERSION_NUMBER}"
 docker push "${AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/concordia/celerybeat:${TAG}"

-docker pull rabbitmq:latest
-docker tag rabbitmq:latest "${AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/rabbitmq:${VERSION_NUMBER}"
-docker tag rabbitmq:latest "${AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/rabbitmq:${TAG}"
-docker push "${AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/rabbitmq:${VERSION_NUMBER}"
-docker push "${AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/rabbitmq:${TAG}"
-
 docker build -t concordia/indexer --file indexer/Dockerfile .
 docker tag concordia/indexer:latest "${AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/concordia/indexer:${VERSION_NUMBER}"
 docker tag concordia/indexer:latest "${AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/concordia/indexer:${TAG}"
2 changes: 1 addition & 1 deletion cloudformation/README.md
@@ -13,7 +13,7 @@ cd cloudformation
 ./sync_templates.sh
 ```

-2. Read [how to get started with AWS ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_GetStarted.html) and follow the instructions to create three ECR repositories named `concordia`, `concordia/importer` and `rabbitmq`.
+2. Read [how to get started with AWS ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_GetStarted.html) and follow the instructions to create an ECR repository for each docker image that will be deployed.
 3. Set a BUILD_NUMBER in your environment and run `./build_containers.sh`
 4. Create a KMS key for this project.
 5. Populate the secrets in `create_secrets.sh` and run that script to create a new set of secrets.
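Note: since the repository list is no longer fixed, creating the repositories can be scripted instead of done by hand. A hedged sketch using boto3, which is already a project dependency; the names below are illustrative and should match whatever `build_containers.sh` actually pushes:

```python
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

# Illustrative names: one repository per image pushed by build_containers.sh.
for name in ["concordia", "concordia/importer", "concordia/celerybeat", "concordia/indexer"]:
    ecr.create_repository(repositoryName=name)
```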
20 changes: 4 additions & 16 deletions cloudformation/infrastructure/fargate-cluster.yaml
@@ -29,7 +29,7 @@ Parameters:

   ConcordiaVersion:
     Type: String
-    Description: version of concordia, concordia/importer, and rabbitmq docker images to pull and deploy
+    Description: version of concordia docker images to pull and deploy
     Default: latest

   EnvName:
Expand Down Expand Up @@ -272,24 +272,12 @@ Resources:
- Name: HOST_NAME
Value: !Ref CanonicalHostName
- Name: DJANGO_SETTINGS_MODULE
Value: concordia.settings_ecs
Value: concordia.settings_ecs
MountPoints:
- SourceVolume: images_volume
ContainerPath: /concordia_images
PortMappings:
- ContainerPort: 80
- Name: rabbit
Cpu: 1024
Memory: 2048
Image: !Sub '${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/rabbitmq:${ConcordiaVersion}'
PortMappings:
- ContainerPort: 5672
LogConfiguration:
LogDriver: awslogs
Options:
awslogs-group: !Ref 'ConcordiaAppLogsGroup'
awslogs-region: !Ref 'AWS::Region'
awslogs-stream-prefix: ConcordiaCron
- Name: importer
Cpu: 1024
Memory: 2048
@@ -378,8 +366,8 @@ Resources:
             - Name: HOST_NAME
               Value: !Ref CanonicalHostName
             - Name: DJANGO_SETTINGS_MODULE
-              Value: concordia.settings_ecs
+              Value: concordia.settings_ecs

   ConcordiaExternalService:
     Type: AWS::ECS::Service
     DependsOn: ExternalLoadBalancerListener
4 changes: 2 additions & 2 deletions concordia/settings_docker.py
@@ -19,8 +19,8 @@

 EMAIL_BACKEND = "django.core.mail.backends.dummy.EmailBackend"

-CELERY_BROKER_URL = os.getenv("CELERY_BROKER_URL", "pyamqp://guest@rabbit:5672")
-CELERY_RESULT_BACKEND = "rpc://"
+# CELERY_BROKER_URL = os.getenv("CELERY_BROKER_URL", "pyamqp://guest@rabbit:5672")
+# CELERY_RESULT_BACKEND = "rpc://"

 S3_BUCKET_NAME = os.getenv("S3_BUCKET_NAME")
4 changes: 2 additions & 2 deletions concordia/settings_ecs.py
@@ -49,8 +49,8 @@

 CSRF_COOKIE_SECURE = True

-CELERY_BROKER_URL = os.getenv("CELERY_BROKER_URL", "pyamqp://guest@rabbit:5672")
-CELERY_RESULT_BACKEND = "rpc://"
+# CELERY_BROKER_URL = os.getenv("CELERY_BROKER_URL", "pyamqp://guest@rabbit:5672")
+# CELERY_RESULT_BACKEND = "rpc://"

 S3_BUCKET_NAME = os.getenv("S3_BUCKET_NAME")
 EXPORT_S3_BUCKET_NAME = os.getenv("EXPORT_S3_BUCKET_NAME")
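Note: with the overrides commented out here and in `settings_docker.py`, both deployment settings modules fall through to the Redis defaults this commit adds to `settings_template.py`. If a per-environment override were still wanted, the old env-var pattern would carry over like this (a sketch, not part of the commit; the variable names mirror the originals):

```python
import os

# Falls back to the new settings_template.py default when the variable is unset.
CELERY_BROKER_URL = os.getenv("CELERY_BROKER_URL", "redis://redis:6379/0")
CELERY_RESULT_BACKEND = os.getenv("CELERY_RESULT_BACKEND", CELERY_BROKER_URL)
```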
4 changes: 2 additions & 2 deletions concordia/settings_template.py
@@ -152,8 +152,8 @@
 }

 # Celery settings
-CELERY_BROKER_URL = "pyamqp://guest@rabbit"
-CELERY_RESULT_BACKEND = "rpc://"
+CELERY_BROKER_URL = "redis://redis:6379/0"
+CELERY_RESULT_BACKEND = "redis://redis:6379/0"

 CELERY_ACCEPT_CONTENT = ["json"]
 CELERY_TASK_SERIALIZER = "json"
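Note: broker and result backend now share database 0 on the `redis` host; Celery prefixes its result keys, so the two uses should not collide in one keyspace. A quick connectivity check before starting a worker, assuming the `redis` client that the `celery[redis]` extra installs:

```python
import redis

# Same host, port, and database as CELERY_BROKER_URL above.
conn = redis.Redis(host="redis", port=6379, db=0)
conn.ping()  # raises redis.exceptions.ConnectionError if the broker is unreachable
```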
14 changes: 5 additions & 9 deletions docker-compose.yml
@@ -15,12 +15,8 @@ services:
       - ./postgresql:/docker-entrypoint-initdb.d
       - db_volume:/var/lib/postgresl/data/

-  rabbit:
-    hostname: rabbit
-    image: rabbitmq:latest
-    ports:
-      - 5672:5672
-      - 15672:15672
+  redis:
+    image: redis

   app:
     build: .
@@ -40,7 +36,7 @@ services:
       - .:/app
       - images_volume:/concordia_images
     links:
-      - rabbit
+      - redis

     ports:
       - 80:80
@@ -53,7 +49,7 @@ services:
       POSTGRESQL_HOST: db
       POSTGRESQL_PW: ${POSTGRESQL_PW}
     depends_on:
-      - rabbit
+      - redis
       - db
     volumes:
       - images_volume:/concordia_images
@@ -76,7 +72,7 @@ services:
       POSTGRESQL_HOST: db
       POSTGRESQL_PW: ${POSTGRESQL_PW}
     depends_on:
-      - rabbit
+      - redis
       - db

 volumes:
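Note: the new `redis` service is deliberately minimal: no published ports, since only the containers that link to it or list it in `depends_on` need to reach it over the compose network. A hedged smoke test, runnable from inside any of those containers with kombu (installed as a Celery dependency):

```python
from kombu import Connection

# "redis" resolves via the compose network; nothing is exposed on the host.
with Connection("redis://redis:6379/0") as conn:
    conn.ensure_connection(max_retries=3)  # fails loudly if the broker is unreachable
```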
4 changes: 2 additions & 2 deletions docs/for-developers.md
@@ -65,7 +65,7 @@ same package versions which you used during development.
 Instead of doing `docker-compose up` as above, instead start everything except the app:

 ```bash
-$ docker-compose up -d db rabbit importer
+$ docker-compose up -d db redis importer
 ```

 This will run the database in a container to ensure that it always matches the
@@ -135,7 +135,7 @@ virtualenv environment:

 #### Import Data

-Once the database, rabbitMQ service, importer and the application
+Once the database, redis service, importer and the application
 are running, you're ready to import data.
 First, [create a Django admin user](https://docs.djangoproject.com/en/2.1/intro/tutorial02/#creating-an-admin-user)
 and log in as that user.
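Note: the linked Django tutorial creates the admin user interactively via `manage.py createsuperuser`; for scripted setups the same result can come from a Django shell. A sketch with hypothetical credentials:

```python
# Run inside `python manage.py shell`; the credentials are placeholders.
from django.contrib.auth import get_user_model

User = get_user_model()
User.objects.create_superuser("admin", "admin@example.com", "change-me")
```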
