
Commit

Merge 228ba92 into 5f6ab64
rstorey committed Jan 28, 2019
2 parents 5f6ab64 + 228ba92 commit 4c8870e
Showing 12 changed files with 56 additions and 68 deletions.
2 changes: 1 addition & 1 deletion Pipfile
@@ -7,7 +7,7 @@ name = "pypi"
"psycopg2" = "*"
"psycopg2-binary" = "*"
gunicorn = "*"
celery = "*"
celery = {extras = ["redis"],version = "*"}
coreapi = "*"
django-haystack = "*"
"boto3" = ">=1.9.16"
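The new `extras` table is Pipfile syntax for the `celery[redis]` extra, which pulls in the Redis client library that the new broker URLs require. An illustrative comparison (sketch, not a file in this repo):

```toml
# Pipfile form (as in this diff):
celery = {extras = ["redis"], version = "*"}

# The requirements.txt equivalent would be:
#   celery[redis]
```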
52 changes: 31 additions & 21 deletions Pipfile.lock

Some generated files are not rendered by default.

6 changes: 0 additions & 6 deletions build_containers.sh
@@ -48,12 +48,6 @@ if [ $BUILD_ALL -eq 1 ]; then
docker push "${AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/concordia/celerybeat:${VERSION_NUMBER}"
docker push "${AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/concordia/celerybeat:${TAG}"

docker pull rabbitmq:latest
docker tag rabbitmq:latest "${AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/rabbitmq:${VERSION_NUMBER}"
docker tag rabbitmq:latest "${AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/rabbitmq:${TAG}"
docker push "${AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/rabbitmq:${VERSION_NUMBER}"
docker push "${AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/rabbitmq:${TAG}"

docker build -t concordia/indexer --file indexer/Dockerfile .
docker tag concordia/indexer:latest "${AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/concordia/indexer:${VERSION_NUMBER}"
docker tag concordia/indexer:latest "${AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/concordia/indexer:${TAG}"
2 changes: 1 addition & 1 deletion cloudformation/README.md
@@ -13,7 +13,7 @@ cd cloudformation
./sync_templates.sh
```

2. Read [how to get started with AWS ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_GetStarted.html) and follow the instructions to create three ECR repositories named `concordia`, `concordia/importer` and `rabbitmq`.
2. Read [how to get started with AWS ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_GetStarted.html) and follow the instructions to create an ECR repository for each docker image that will be deployed.
3. Set a BUILD_NUMBER in your environment and run `./build_containers.sh`
4. Create a KMS key for this project.
5. Populate the secrets in `create_secrets.sh` and run that script to create a new set of secrets.
26 changes: 7 additions & 19 deletions cloudformation/infrastructure/fargate-cluster.yaml
@@ -29,7 +29,7 @@ Parameters:

ConcordiaVersion:
Type: String
Description: version of concordia, concordia/importer, and rabbitmq docker images to pull and deploy
Description: version of concordia docker images to pull and deploy
Default: latest

EnvName:
@@ -252,7 +252,7 @@ Resources:
- Name: EXPORT_S3_BUCKET_NAME
Value: !Ref ExportS3BucketName
- Name: CELERY_BROKER_URL
Value: pyamqp://guest@localhost:5672
Value: !Sub 'redis://${RedisAddress}:${RedisPort}/0'
- Name: AWS_DEFAULT_REGION
Value: !Ref AWS::Region
- Name: SENTRY_BACKEND_DSN
@@ -272,24 +272,12 @@
- Name: HOST_NAME
Value: !Ref CanonicalHostName
- Name: DJANGO_SETTINGS_MODULE
Value: concordia.settings_ecs
MountPoints:
- SourceVolume: images_volume
ContainerPath: /concordia_images
PortMappings:
- ContainerPort: 80
- Name: rabbit
Cpu: 1024
Memory: 2048
Image: !Sub '${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/rabbitmq:${ConcordiaVersion}'
PortMappings:
- ContainerPort: 5672
LogConfiguration:
LogDriver: awslogs
Options:
awslogs-group: !Ref 'ConcordiaAppLogsGroup'
awslogs-region: !Ref 'AWS::Region'
awslogs-stream-prefix: ConcordiaCron
- Name: importer
Cpu: 1024
Memory: 2048
@@ -312,7 +300,7 @@ Resources:
- Name: EXPORT_S3_BUCKET_NAME
Value: !Ref ExportS3BucketName
- Name: CELERY_BROKER_URL
Value: pyamqp://guest@localhost:5672
Value: !Sub 'redis://${RedisAddress}:${RedisPort}/0'
- Name: AWS_DEFAULT_REGION
Value: !Ref AWS::Region
- Name: SENTRY_BACKEND_DSN
@@ -358,7 +346,7 @@ Resources:
- Name: EXPORT_S3_BUCKET_NAME
Value: !Ref ExportS3BucketName
- Name: CELERY_BROKER_URL
Value: pyamqp://guest@localhost:5672
Value: !Sub 'redis://${RedisAddress}:${RedisPort}/0'
- Name: AWS_DEFAULT_REGION
Value: !Ref AWS::Region
- Name: SENTRY_BACKEND_DSN
@@ -378,8 +366,8 @@
- Name: HOST_NAME
Value: !Ref CanonicalHostName
- Name: DJANGO_SETTINGS_MODULE
Value: concordia.settings_ecs

ConcordiaExternalService:
Type: AWS::ECS::Service
DependsOn: ExternalLoadBalancerListener
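The `!Sub` expressions above interpolate the Redis endpoint into the broker URL at deploy time; `RedisAddress` and `RedisPort` are assumed to be parameters or attributes defined elsewhere in this template (e.g. an ElastiCache endpoint), since that part of the file is not shown in the diff. A minimal Python sketch of what the substitution produces:

```python
# Hedged sketch: emulate CloudFormation's !Sub string interpolation for the
# CELERY_BROKER_URL value. The endpoint values are placeholders, not real
# resources from this stack.
def sub(template: str, **values: str) -> str:
    out = template
    for key, value in values.items():
        out = out.replace("${%s}" % key, value)
    return out

broker_url = sub(
    "redis://${RedisAddress}:${RedisPort}/0",
    RedisAddress="example.cache.amazonaws.com",
    RedisPort="6379",
)
print(broker_url)  # redis://example.cache.amazonaws.com:6379/0
```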
2 changes: 1 addition & 1 deletion cloudformation/infrastructure/fargate-featurebranch.yaml
@@ -145,7 +145,7 @@ Resources:
- Name: EXPORT_S3_BUCKET_NAME
Value: !Ref ExportS3BucketName
- Name: CELERY_BROKER_URL
Value: pyamqp://guest@localhost:5672
Value: !Sub 'redis://${RedisAddress}:${RedisPort}/0'
- Name: AWS_DEFAULT_REGION
Value: !Ref AWS::Region
- Name: SENTRY_BACKEND_DSN
4 changes: 2 additions & 2 deletions concordia/settings_dev.py
@@ -17,8 +17,8 @@

ALLOWED_HOSTS = ["127.0.0.1", "0.0.0.0", "*"]

CELERY_BROKER_URL = "pyamqp://guest@localhost"
CELERY_RESULT_BACKEND = "rpc://"
CELERY_BROKER_URL = "redis://localhost:63791/0"
CELERY_RESULT_BACKEND = CELERY_BROKER_URL

EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
EMAIL_FILE_PATH = "/tmp/concordia-messages" # change this to a proper location
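Note the non-standard port in the dev broker URL: 63791 is the host-side port that docker-compose maps onto the container's standard Redis port 6379. A quick sanity check of the URL's parts using only the standard library:

```python
# Verify the shape of the dev broker URL from settings_dev.py.
# Port 63791 matches the host mapping in docker-compose.yml ("63791:6379").
from urllib.parse import urlparse

CELERY_BROKER_URL = "redis://localhost:63791/0"

parts = urlparse(CELERY_BROKER_URL)
assert parts.scheme == "redis"
assert parts.hostname == "localhost"
assert parts.port == 63791   # host port, not the container's 6379
assert parts.path == "/0"    # Redis logical database 0
```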
3 changes: 0 additions & 3 deletions concordia/settings_docker.py
@@ -19,9 +19,6 @@

EMAIL_BACKEND = "django.core.mail.backends.dummy.EmailBackend"

CELERY_BROKER_URL = os.getenv("CELERY_BROKER_URL", "pyamqp://guest@rabbit:5672")
CELERY_RESULT_BACKEND = "rpc://"

S3_BUCKET_NAME = os.getenv("S3_BUCKET_NAME")

DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
4 changes: 2 additions & 2 deletions concordia/settings_ecs.py
@@ -49,8 +49,8 @@

CSRF_COOKIE_SECURE = True

CELERY_BROKER_URL = os.getenv("CELERY_BROKER_URL", "pyamqp://guest@rabbit:5672")
CELERY_RESULT_BACKEND = "rpc://"
CELERY_BROKER_URL = os.getenv("CELERY_BROKER_URL")
CELERY_RESULT_BACKEND = CELERY_BROKER_URL

S3_BUCKET_NAME = os.getenv("S3_BUCKET_NAME")
EXPORT_S3_BUCKET_NAME = os.getenv("EXPORT_S3_BUCKET_NAME")
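With the hard-coded default removed, `os.getenv("CELERY_BROKER_URL")` silently yields `None` if the ECS task definition omits the variable, and Celery will only fail later at connect time. A stricter variant would fail fast at startup; this is an illustrative sketch (the `require_env` helper is hypothetical, not part of the repo):

```python
import os

def require_env(name: str) -> str:
    # Fail at settings-import time with a clear message instead of
    # passing a None broker URL through to Celery.
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Required environment variable {name} is not set")
    return value

# Demo: simulate the ECS task definition providing the variable.
os.environ.setdefault("CELERY_BROKER_URL", "redis://redis.example.internal:6379/0")

CELERY_BROKER_URL = require_env("CELERY_BROKER_URL")
CELERY_RESULT_BACKEND = CELERY_BROKER_URL
```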
4 changes: 2 additions & 2 deletions concordia/settings_template.py
@@ -152,8 +152,8 @@
}

# Celery settings
CELERY_BROKER_URL = "pyamqp://guest@rabbit"
CELERY_RESULT_BACKEND = "rpc://"
CELERY_BROKER_URL = "redis://redis:6379/0"
CELERY_RESULT_BACKEND = "redis://redis:6379/0"

CELERY_ACCEPT_CONTENT = ["json"]
CELERY_TASK_SERIALIZER = "json"
15 changes: 7 additions & 8 deletions docker-compose.yml
@@ -15,12 +15,11 @@ services:
- ./postgresql:/docker-entrypoint-initdb.d
- db_volume:/var/lib/postgresql/data/

rabbit:
hostname: rabbit
image: rabbitmq:latest
redis:
hostname: redis
image: redis:latest
ports:
- 5672:5672
- 15672:15672
- 63791:6379

app:
build: .
@@ -40,7 +39,7 @@ services:
- .:/app
- images_volume:/concordia_images
links:
- rabbit
- redis

ports:
- 80:80
@@ -53,7 +52,7 @@
POSTGRESQL_HOST: db
POSTGRESQL_PW: ${POSTGRESQL_PW}
depends_on:
- rabbit
- redis
- db
volumes:
- images_volume:/concordia_images
@@ -76,7 +75,7 @@
POSTGRESQL_HOST: db
POSTGRESQL_PW: ${POSTGRESQL_PW}
depends_on:
- rabbit
- redis
- db

volumes:
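After `docker-compose up`, the Redis service should be reachable on the host at port 63791 (mapped to the container's 6379). A small stdlib-only reachability probe, purely illustrative; it returns `False` when no Redis container is running:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    # TCP connect check only; does not speak the Redis protocol.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

reachable = port_open("localhost", 63791)
```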
4 changes: 2 additions & 2 deletions docs/for-developers.md
@@ -65,7 +65,7 @@ same package versions which you used during development.
Instead of doing `docker-compose up` as above, instead start everything except the app:

```bash
$ docker-compose up -d db rabbit importer
$ docker-compose up -d db redis importer
```

This will run the database in a container to ensure that it always matches the
@@ -135,7 +135,7 @@ virtualenv environment:

#### Import Data

Once the database, Redis service, importer and the application
are running, you're ready to import data.
are running, you're ready to import data.
First, [create a Django admin user](https://docs.djangoproject.com/en/2.1/intro/tutorial02/#creating-an-admin-user)
and log in as that user.
