
how will cookiecutter handle Database driven projects #17

Closed
jbrambleDC opened this issue Apr 29, 2016 · 4 comments

Comments

@jbrambleDC
Contributor

I see there is S3 syncing, but what about people using SQL databases or HDFS? A few useful thoughts:

  1. There should be a place for database connection strings and for connections to be established.
  2. Inside of src/data we should store Python scripts, but we could have a subdirectory, database_scripts, for .sql, .hql, etc. This would cover all database insertion, ETL, in-database data munging, and so on.

Does this seem sensible?

@isms
Contributor

isms commented Apr 29, 2016

Here's how we've done this so far, just as one data point:

  • A script (or scripts) in src/data/ will do all of the ETL. Right now in the template there is an example stub called make_dataset.py, but it could be anything: you could have multiple files for interfacing with various systems, or even multiple directories if things start to get too busy in src/. As you mentioned, adding a folder for saved SQL queries might be a good way to keep things tidy.
  • That script dumps its output somewhere in the data/ directory.
  • Optional: a rule in the Makefile specifying the target (output) and its dependencies (input files, such as the .py and .sql/.hql files).
  • The project secrets (including database credentials) live in a file called .env in the top-level directory that exists only on your machine; it is .gitignored by default to keep it out of version control. If you look at the stub make_dataset.py, it uses a package called python-dotenv to load all the entries in this file as environment variables, so they are accessible with os.environ.get or whatever is language-appropriate.
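The optional Makefile rule mentioned above might look something like this (the target and dependency filenames here are hypothetical, not part of the template; note the recipe line starts with a tab):

```make
# Hypothetical rule: rebuild the processed dataset whenever the ETL
# script or a saved SQL query changes.
data/processed/output.csv: src/data/make_dataset.py src/data/database_scripts/query.sql
	python src/data/make_dataset.py
```

Running make then re-executes the ETL only when an input is newer than the output.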

Here's an example .env:

DATABASE_URL=postgres://username:password@localhost:5432/dbname
AWS_ACCESS_KEY=myaccesskey
AWS_SECRET_ACCESS_KEY=mysecretkey
...
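As one data point on the mechanics, here is a rough stdlib-only approximation of what python-dotenv does with a file like the one above (the real package also handles quoting, comments, and variable interpolation; the load_env helper and the temporary file are just for illustration):

```python
import os
import tempfile

def load_env(path):
    """Rough stand-in for python-dotenv's load_dotenv(): read
    KEY=VALUE lines from a .env file into os.environ.
    (Unlike load_dotenv's default, this overwrites existing values.)"""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()

# Demonstrate with a throwaway .env matching the example above.
with tempfile.TemporaryDirectory() as tmp:
    env_path = os.path.join(tmp, ".env")
    with open(env_path, "w") as f:
        f.write("DATABASE_URL=postgres://username:password@localhost:5432/dbname\n")
    load_env(env_path)

# Secrets are now available the same way make_dataset.py reads them.
db_url = os.environ.get("DATABASE_URL")
```

The stub itself relies on python-dotenv for this, so in practice you only need to call load_dotenv() and then read os.environ.get as usual.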

Meta discussion:

  • One of the core "opinions" of the project is that all code lives in src/ and everything in the data/ directory is just data.
  • Another opinion is that end users should be liberal in adding folders to suit their needs but the template should be conservative in making those choices permanent.

Thoughts?

@jbrambleDC
Contributor Author

completely agree with that approach. I think that works quite well.

@pjbull
Member

pjbull commented Apr 29, 2016

I agree this approach has worked well. I think the .env workflow may need some explanation in the docs so I'll open a separate issue for that.

@isms
Contributor

isms commented Apr 29, 2016

See #18 for .env issue - closing this issue for now but still very open to receiving comments.

@isms isms closed this as completed Apr 29, 2016