Conversation
@pedrommone I added you as a contributor to hub.docker.com/u/datasciencebr. Have you generated the keys yet?
Many, many thanks, @pedrommone!
I left some inline comments. Please don't take me wrong, I'm not a dick (I guess).
I was just trying to learn from your files. I'm a newbie to Docker and most of the lines were pretty straightforward, so I just asked about the bits I had doubts about, purely out of curiosity ; )
Finally, one general question: what's the advantage of having separate containers for migration and seeding? Why not just run `python manage.py migrate` (at every deploy) and `python manage.py loaddatasets` etc. (on provision) within the Jarbas main container?
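The pattern under discussion can be sketched in a Compose file like this (a hypothetical sketch; the service names and commands here are assumptions for illustration, not taken from the actual PR):

```yaml
# Hypothetical sketch: one-off helper containers beside the main service
jarbas:
  image: jarbas
  command: python manage.py runserver 0.0.0.0:8000
migrate:
  image: jarbas
  command: python manage.py migrate      # run at every deploy
populate:
  image: jarbas
  command: python manage.py loaddatasets # run on provision
```

The trade-off being asked about: separate containers let each step be run (and fail) independently, at the cost of more moving parts.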
```yaml
env:
  global:
    - secure: "" # DOCKER_EMAIL
```
How does that `secure` work? Should I set these variables in Travis CI? Drop a line at telegram.me/cuducos so we can sort that out.
I just discovered this feature too; Travis has something to encrypt sensitive data! You can learn more here.
That's awesome, though I'm not sure if this encryption is really safe hahaha… I still tend to prefer leaving credentials in the Settings panel ; )
Of course, that's a lot better. We can change that.
```yaml
    - secure: "" # DOCKER_USER
    - secure: "" # DOCKER_PASS
    - REPOSITORY: "datasciencebr/jarbas"
    - COMMIT=${TRAVIS_COMMIT::8}
```
Just out of curiosity: what does this line do?
Just to keep things DRY.
But what is this exactly? The name/hash of the commit?
Yep, it just puts some useful information into the environment.
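Since `${TRAVIS_COMMIT::8}` is plain Bash substring expansion, it can be tried outside Travis too; `TRAVIS_COMMIT` is set by Travis CI to the full commit hash, and the value below is a made-up example:

```shell
# Simulate Travis CI's built-in variable with a made-up commit hash
TRAVIS_COMMIT="4f2d9c1ab7e35d8f0c6a1b2e3d4f5a6b7c8d9e0f"
# ${TRAVIS_COMMIT::8} expands to the first 8 characters of the hash
COMMIT=${TRAVIS_COMMIT::8}
echo "$COMMIT"   # prints 4f2d9c1a
```

The short hash makes a handy Docker image tag without repeating the full 40-character string everywhere.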
```diff
@@ -28,3 +39,9 @@ script:

 after_success:
   - coveralls
   - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
```
Sorry, n00b question again: what's the point of building and pushing the image (?) after the tests?
Actually this is something I want to improve: build an image every time and run the tests on top of it.
Got it. Then maybe it should be in the `before_script` section, so that when you get to the `script` section (the tests), the Docker image is ready… If these commands take a while to exit, take a look at `travis_wait`.
It's just a login procedure; we need to call it before pushing images. Anyway, I'll move it because we'll need it in the future.
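The ordering discussed above could look something like this in `.travis.yml` (a sketch under assumptions: the test command and tagging scheme are guesses, not the PR's actual file):

```yaml
# Sketch: build the image before the tests, push it only after they pass
before_script:
  - docker build -t $REPOSITORY:$COMMIT .

script:
  - docker run $REPOSITORY:$COMMIT python manage.py test  # assumed test command

after_success:
  - coveralls
  - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
  - docker push $REPOSITORY:$COMMIT
```

This way a failing test suite never produces a pushed image, and the tested artifact is the same one that gets published.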
```diff
@@ -0,0 +1,17 @@
 FROM python:3.5

 MAINTAINER Pedro Maia <pedro@pedromm.com>
```
Yay! Thanks for that ; )
You're welcome.
```yaml
postgres:
  image: postgres:9.6
  container_name: jarbas-postgres
  environment:
```
What is good practice when it comes to sensitive info like this? Does Docker read it from server environment variables or something?
Actually we need to improve that too; I'll spend more time on it when we get an infrastructure to work on.
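One common approach (an assumption about the direction, not something this PR implements) is to keep the secret out of the Compose file and let Compose substitute it from the host environment or a git-ignored `.env` file:

```yaml
# Sketch: the password comes from the shell environment or a .env file
postgres:
  image: postgres:9.6
  container_name: jarbas-postgres
  environment:
    POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
```

With this, the credential never gets committed; each machine provides its own value.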
```yaml
populate:
  image: jarbas
  container_name: jarbas-populate
  command: python manage.py loaddatasets
```
Actually there are two extra commands to populate the database, making a total of three:

```
python manage.py loaddatasets
python manage.py loadsuppliers
python manage.py ceapdatasets
```

Is either of these syntaxes valid?

```yaml
command:
  - python manage.py loaddatasets
  - python manage.py loadsuppliers
  - python manage.py ceapdatasets
```

Or (worse, since if one command fails the following ones will not be executed):

```yaml
command: python manage.py loaddatasets && python manage.py loadsuppliers && python manage.py ceapdatasets
```
Actually appending them with `&&` doesn't work (something related to Django). I'll improve it.
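A common workaround, assuming the image ships a POSIX shell, is to wrap the chain in `sh -c`: Compose hands `command` to the container without a shell, so a bare `&&` is passed as a literal argument instead of being interpreted:

```yaml
populate:
  image: jarbas
  container_name: jarbas-populate
  command: sh -c "python manage.py loaddatasets && python manage.py loadsuppliers && python manage.py ceapdatasets"
```

The quoted string is then interpreted by `sh` inside the container, so the `&&` chaining behaves as expected.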
@cuducos I saw your request, I'll try to code more today.
Just some feedback. I've faced some issues:
How to run what I've done? Clone this branch and run
Actually this is a good thing. We can create another container to run NodeJS, and this is the first step to address Issue #18 (splitting front-end and back-end, cc @leomeloxp). What I can do is edit the repo in a separate branch and send a PR to this branch here, making the Django and the Elm commands independent of each other. The only link between both worlds will be that the output file will be saved directly into the
Does that help?
I have no idea about this limitation, but I don't mind leaving Travis CI either. Any ideas? cc @luiz-simples @vitallan @gwmoura @ayr-ton
Does the separation I mentioned in my first point help? IMHO it's worth it to have a working local version before depending on a CI tool. So if we fix the NodeJS thing, the only pending issue will be the CI, am I right?
Yep, it helps a lot.
Yep, we'll have some work to be done here. :)
Done in
Is there anything I might have left behind? I felt that if I isolated it more I'd be splitting the repo, and I'd like to do that with Docker linking front-end and back-end. Otherwise I don't know how to locally have a static front-end (generated by NodeJS) at root (e.g.
@cuducos we can create a simple pipeline on Travis CI like: test > build > push to Docker Hub > deploy container on server. One script for each step; the big problem is the deploy on the server.
@cuducos about this, we can create two folders in the project and use docker-compose to manage these environments:

```
web/
|__ Dockerfile
|__ files to front-end app
api/
|__ Dockerfile
|__ files to back-end app
docker-compose.yml
```

```yaml
# docker-compose.yml content, something like this
version: '2'
services:
  db:
    image: postgresql
  web:
    build: ./web
    ...
  api:
    build: ./api
    ...
```
@gwmoura the main problem with this approach is Travis itself: it's made for Continuous Integration, not Continuous Delivery. We can't build a pipeline with Travis. I'm sorry for holding this PR for so long, I've been busy with work and college (I need to finish my thesis).
Closing this PR since we got a working
This is a partial PR of #12.

Dockerfile

This is just a simple image with the official Python 3.5 and all dependencies installed.

docker-compose.yml

I've split the whole application into four containers, one of them being the official postgres 9.6. This is just a base to work from.

.travis.yml

I've added some new cool stuff into the CI pipeline. Now we generate the Docker images too, yay! I hope to improve the CI pipeline even more toward full CI/CD.