command to populate db with test data in diesel-cli #420

Closed
azerupi opened this Issue Aug 27, 2016 · 8 comments

azerupi commented Aug 27, 2016

It's often useful to fill the database with some test data while you are developing. Could the CLI make this easier? I don't work with databases often, so I don't know how it's normally done, but if the CLI could make this an easy process, that would be great!

It could be as simple as diesel fill test-data.sql, or something more elaborate, for example a folder with CSV files, one for each table.

killercup (Member) commented Aug 27, 2016

While I think this could be a nice CLI feature, since it would use the same DB interface as diesel, you should note that every database I know of comes with simple tools to load data from a file, e.g. psql --dbname=testdb --file=test-data.sql for Postgres.

Another idea might be seeding with Rust code instead of a static SQL file. Just create an (additional) binary like src/bin/seed.rs.
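
A minimal sketch of what such a binary could look like, assuming a users table with name and email columns, a crate named myapp, and Postgres (all placeholders, and written against diesel's current insert API):

// src/bin/seed.rs
use diesel::prelude::*;
use myapp::schema::users;

fn main() {
    // Connect using the same DATABASE_URL the rest of the app uses.
    let url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");
    let mut conn = PgConnection::establish(&url).expect("error connecting to the database");

    // Insert a couple of hand-written rows as test data.
    diesel::insert_into(users::table)
        .values(&vec![
            (users::name.eq("Alice"), users::email.eq("alice@example.com")),
            (users::name.eq("Bob"), users::email.eq("bob@example.com")),
        ])
        .execute(&mut conn)
        .expect("error inserting seed data");
}

Seeding is then just cargo run --bin seed.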

azerupi commented Aug 27, 2016

Yeah, I figured there would be an easy way to load SQL files into the db. Maybe that wouldn't add enough value to be worth implementing.

What about the CSV idea? Would that make any sense? In my head it seems easier to maintain a CSV file with a couple of entries than an SQL file when your database evolves often. But then again, I have very little experience 😉

killercup (Member) commented Aug 27, 2016

IMHO, CSV is nice when you first look at it, but then you try to import a text/date/foreign key/enum/decimal field and everything explodes… ;)

sgrif (Member) commented Aug 28, 2016

I'm definitely against something like CSV for this.

azerupi commented Aug 28, 2016

@sgrif, would there be another format more suitable for this task, or is it just not worth the effort in general?

theduke (Contributor) commented Dec 22, 2016

A lot of other ORM CLIs have dump and load commands to export and import data (Django ORM, Doctrine, ...), usually supporting CSV and JSON as formats.

It's a very nice feature to have during development. It's significantly more convenient to create test data via your web interface and then dump it than to constantly write and update Rust code that does the same.

It also lets non-devs create realistic data, and other devs can just load it up without having to worry or know about database-specific dump/restore tooling (like pg_dump).

diesel dump -f json --all --out-dir=./data
diesel database reset
diesel load data

When dumping all tables, a file for each table would be created.

The load command would have to work out a foreign-key dependency graph so the tables can be loaded in a valid order, as sketched below.
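
One way to derive that order is a plain topological sort (Kahn's algorithm) over the foreign-key graph. A self-contained sketch, where the table names and dependency map are illustrative and nothing here is an existing diesel API:

use std::collections::{HashMap, VecDeque};

// `deps` maps each table to the tables it references via foreign keys.
// Returns a load order with referenced tables first, or None on a cycle.
fn load_order<'a>(deps: &HashMap<&'a str, Vec<&'a str>>) -> Option<Vec<&'a str>> {
    // A table's indegree is the number of tables it still waits on.
    let mut indegree: HashMap<&str, usize> =
        deps.iter().map(|(&t, refs)| (t, refs.len())).collect();
    // Reverse edges: referenced table -> tables that reference it.
    let mut dependents: HashMap<&str, Vec<&str>> = HashMap::new();
    for (&table, refs) in deps {
        for &referenced in refs {
            dependents.entry(referenced).or_default().push(table);
        }
    }
    // Start with tables that reference nothing.
    let mut queue: VecDeque<&str> = indegree
        .iter()
        .filter(|&(_, &d)| d == 0)
        .map(|(&t, _)| t)
        .collect();
    let mut order = Vec::new();
    while let Some(table) = queue.pop_front() {
        order.push(table);
        for &dep in dependents.get(table).into_iter().flatten() {
            let d = indegree.get_mut(dep).unwrap();
            *d -= 1;
            if *d == 0 {
                queue.push_back(dep);
            }
        }
    }
    // Ending up with fewer tables than we started with means a cycle.
    (order.len() == deps.len()).then_some(order)
}

fn main() {
    let mut deps = HashMap::new();
    deps.insert("users", vec![]);
    deps.insert("posts", vec!["users"]);
    deps.insert("comments", vec!["posts", "users"]);
    // Prints Some(["users", "posts", "comments"]).
    println!("{:?}", load_order(&deps));
}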

sgrif (Member) commented Dec 16, 2017

Closing this since I don't think there's anything actionable right now.

If we were to do this, I think it should be a thin wrapper around the dumping tools that various databases provide. I'm happy to consider the feature, but we need a new issue with a more concrete proposal on what the API would look like and what specifically it would do. I'd be fine with the proposal focusing on one backend as an example; we can figure out how to make that work with the tools for other backends from there.

sgrif closed this Dec 16, 2017

mjanda commented Apr 13, 2018

What about just adding support for a seed.sql alongside the up.sql/down.sql files, run after up.sql when a --seed argument is used?

Having it live together with the migrations would mean less time spent keeping the seed data up to date with the current db structure, and it would help test migrations when altering the db, etc.
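
For illustration, a migration directory under that proposal might look like this (the migration name is just an example):

migrations/20180413000000_create_users/
    up.sql      -- creates the table
    down.sql    -- drops it
    seed.sql    -- proposed: run after up.sql only when --seed is passed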
