
[RFC 0002] MVP #2

Closed · wants to merge 7 commits

Conversation

@kiwicopple (Member) commented Jan 29, 2021

We would like to create a basic CLI for Supabase. At this stage we can assume that the user won't need to create orgs/projects from the CLI, but they will need to:

- develop locally
- push database migrations to their Postgres database

@kiwicopple changed the title from RFC0002: MVP to [RFC 0002] MVP on Jan 29, 2021

### Other questions
- What is the prefix? Should it be `supabase` (clearer) or `supa` (which might have a few people calling it "supper/suppa")?
- Which CLI tool/framework/libraries should we use?
Contributor

If we use golang (static binaries, easy to distribute), https://github.com/spf13/cobra is used by most (all?) of the nicer modern CLIs (e.g. kubectl, and I think also flyctl under the hood).
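To make that concrete, here's a minimal sketch of what a cobra command tree for the CLI could look like. The `supa`/`db dump` names are just the draft commands from this RFC, and the command body is a placeholder, not a real implementation:

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	// Root command: "supa" is the shorter prefix debated in this RFC.
	rootCmd := &cobra.Command{
		Use:   "supa",
		Short: "Supabase CLI (sketch)",
	}

	// "supa db ..." groups the database commands proposed below.
	dbCmd := &cobra.Command{Use: "db", Short: "Manage the local database"}
	dbCmd.AddCommand(&cobra.Command{
		Use:   "dump",
		Short: "Dump the currently active (local) database to disk",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("TODO: dump the local database") // placeholder body
		},
	})

	rootCmd.AddCommand(dbCmd)
	if err := rootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}
```

Each subcommand is its own `cobra.Command`, so nesting (`supa db dump`, `supa migrate`, ...) falls out naturally.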

Member Author

Looks great, although I suspect we will need to use postgres-meta/pg-api for a lot of this functionality.

Unfortunately, that creates a dependency on NodeJS.

Member

https://github.com/vadimdemedes/ink looks good. Apparently Prisma used to use it, but I couldn't figure out why they migrated away from it.


@soedirgo It had issues on Windows for us (Prisma) and we simplified things a lot so it was not needed anymore.

### Generators (bonus)


This is a fantastic idea that I think will help a lot of people!

For some inspiration, you can take a look at the GraphQL Code Generator project!
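To illustrate the shape of such a generator, here's a toy Go sketch (assuming we go with a Go CLI) that renders column metadata into a TypeScript interface. The `Column` type, the hardcoded data, and the type mapping are all invented for illustration; a real generator would pull this from postgres-meta:

```go
package main

import (
	"os"
	"text/template"
)

// Column is hypothetical metadata, roughly what postgres-meta could report
// after Postgres types have been mapped to TypeScript ones.
type Column struct{ Name, TSType string }

var model = template.Must(template.New("model").Parse(
	`export interface {{.Table}} {
{{- range .Columns}}
  {{.Name}}: {{.TSType}};
{{- end}}
}
`))

func main() {
	// A "supa gen typescript store" command could render real table metadata;
	// this data is hardcoded purely for illustration.
	data := struct {
		Table   string
		Columns []Column
	}{
		Table:   "User",
		Columns: []Column{{"id", "number"}, {"email", "string"}},
	}
	if err := model.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```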

```sh
# Models:
# This is my suggestion for a "local first" database which has all of the rules/relationships we need to build a realtime user store with automatic synchronization via our Realtime engine.
# A bit like https://github.com/mobxjs/mst-gql (but hopefully much simpler)
supa gen typescript store
```
Member Author

@joshnuss's ActiveRecord library could be a good starting place for JS:
https://github.com/joshnuss/supabase-active-record

```sh
├── models
│   ├── public.table # Personal preference to explicitly add the schema, and we should prefer dot notation over nesting.
│   │   ├── data.{sql|csv|json} # For convenience, table-specific data can be stored here too.
│   │   ├── model.sql # This is the core. We should also figure out a nice, readable format for SQL files. Should include constraints, indexes, and table/column comments.
```
Member Author

I did a small mock of this to figure out how to structure the file. Some initial thoughts here:
https://github.com/supabase/cli/pull/3/files#r575771489

### Dump

```sh
supa db dump # dumps the currently active (local) database to disk
```
Member Author

I used postgres-meta to create a small mock here: #3

I'm not entirely sure if it's the right approach. On the one hand, we have full control to manage the layout/structure of the dump. On the other hand, we will need to cover all the nuances of Postgres.

Perhaps it's better to rely on pg_dump from the outset, using some sort of system where we either:

Option 1:

  1. Run pg_dump on the entire database
  2. Run some sort of AST parser on the full dump
  3. Clean it up and process it into separate files

or Option 2:

  1. Run pg_dump on individual tables/functions/types etc. (eg: `pg_dump -Fp -t $table -f $table.dump;`). We can use postgres-meta to fetch the "structure" of the database
  2. Run some sort of "prettier" formatter to make it look readable

I'm leaning towards option 2, as it also gives the developer better control. For example they might want to dump just their "functions". A sketch of this per-table approach follows below.
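A rough Go sketch of Option 2, assuming the table list comes from a plain catalog query (the real CLI would ask postgres-meta instead) and that the connection string is a local placeholder:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	db := "postgresql://postgres@localhost:5432/postgres" // placeholder local database

	// List user tables with a plain catalog query; the real CLI would fetch
	// the database "structure" from postgres-meta instead.
	out, err := exec.Command("psql", "-Atc",
		"select schemaname || '.' || tablename from pg_tables where schemaname = 'public'",
		db).Output()
	if err != nil {
		panic(err)
	}

	// Dump each table to its own plain-format file, as in the example above.
	for _, table := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if table == "" {
			continue // no tables found
		}
		file := table + ".sql"
		if err := exec.Command("pg_dump", "-Fp", "-t", table, "-f", file, db).Run(); err != nil {
			panic(err)
		}
		fmt.Println("wrote", file)
	}
}
```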

Member

> For example they might want to dump just their "functions".

AFAICT, pg_dump cannot dump only functions (SO ref).

So if we go the above way, we'd need a combination of 1 and 2.


Maybe there's another way.

If you look at the Schema Diff output image in #2 (comment), you'll notice that it also gives the source DDL. And all the database objects are neatly separated.

The Flask endpoint of the pgadmin schema diff process gives this big json as its response. The diff related to the above image is located here.

Notice that the ddl output also has COMMENTs included and is pretty printed (has newlines).

Checking the big json, all the different types of db objects (event triggers, tables, functions, etc.) are there. If we could (ab)use that output, we'd have most of what we need already organized.

```sh
supa db dump # dumps the currently active (local) database to disk
supa db restore # restores the (local) database from disk
supa migrate # generate the migrations file
```
Member Author

I'm still not sure how we will handle going from a declarative structure into a set of migration scripts.

There are tools like migra which work on a schema comparison - this might be the way to do it, as long as the limitations aren't too restrictive.

e.g.: "custom types/domains Basic support (drop-and-create only, no alter)". ref

Member

> I'm still not sure how we will handle going from a declarative structure into a set of migration scripts.

Ideally the developer should not look at migration scripts and should instead focus on declarative* SQL. So perhaps we could keep the migrations off git? (Perhaps in a table.)

\* Of course there's a limit to how declarative SQL can be. Column renames, for example, always have to use ALTER.

Member

metagration also has an interesting approach. It allows generating up/down migration scripts with stored procedures. It also has its own table for tracking applied migrations.

Maybe we can use metagration plus migra for providing a template migration that can be edited.

Member

metagration also creates a restore point for down migrations, so it's possible to recover dropped columns, for example (no need to restore from a backup).

Member Author

OK cool - metagration looks interesting.

I think it's the declarative structure -> up/down scripts that's the tricky part here though. The tools I found:

Or we can of course code this directly into postgres-meta, perhaps using migra as an example, since it seems the most feature-complete. It will be a lot of work though.

Member

Unfortunately they're not open to providing a CLI mode: https://redmine.postgresql.org/issues/6304 (you'll need to sign in to see this). They mention that it would take a lot of work.

I still think the pgadmin diff is the best choice. It's already the most complete and has dedicated support. It would provide the best DX.

I've already managed to run it from source. It's a Flask application plus a Backbone.js frontend (seems they're migrating to React).

The functionality for the schema diff was started here: pgadmin-org/pgadmin4@45f2e35. That gives me an idea of the modules involved for making it work.

How about if I try isolating that python code from the GUI and create a package of our own?

Member

> How about if I try isolating that python code from the GUI

Succeeded in doing that. Code here: https://gist.github.com/steve-chavez/8c6825abee788394cb5424380c06487c.

Now I would need to wrap the code in a CLI.

Member Author

Nice job Steve - that's an awesome effort.

> Now I would need to wrap the code in a CLI

Yeah, it would be very cool if we can use it here somehow.

Member

I have the CLI ready on this branch: https://github.com/steve-chavez/pgadmin4/blob/cli/web/cli.py.

I'm now wrapping it in a docker container so it can be tried without building the whole pgadmin4 package.

Member

Docker container here: https://hub.docker.com/r/supabase/pgadmin-schema-diff/

Usage:

```sh
docker run supabase/pgadmin-schema-diff \
  'postgres://postgres@host:5432/diff_source' \
  'postgres://postgres@host:5432/diff_target' \
  > diff_db.sql
```

```sh
│ # Data: database fixed/seed data.
├── data
│   ├── some_data.{sql|csv|json}
```
Member

Regarding branching, this would allow us to just rely on different hardcoded master data for test/staging/preview environments, right?

It does seem easier compared to something like snapshotting the db with ZFS. Much lighter on resources as well.

Member Author

Yeah exactly - this would give "reproducible builds". It's not as powerful as ZFS cloning, but it is less "magic".

ZFS cloning might solve a different use-case in the future for managing terabyte-scale databases.
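A small sketch of what this "reproducible build" step could look like in the Go CLI: apply every seed file under `data/` to a branch database. The connection string and directory layout are the ones proposed in this RFC; everything else (and the restriction to `.sql` files, ignoring the csv/json variants) is an assumption for brevity:

```go
package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Placeholder: one database per branch, as proposed below.
	db := "postgresql://postgres@localhost:5432/chore_add_types"

	// Glob returns the seed files in lexical order, so every environment
	// replays the same fixed data in the same order.
	files, err := filepath.Glob("data/*.sql")
	if err != nil {
		panic(err)
	}
	for _, f := range files {
		cmd := exec.Command("psql", "-v", "ON_ERROR_STOP=1", "-f", f, db)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}
```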

Comment on lines +78 to +80
- `postgresql://postgres@localhost:5432/master`
- `postgresql://postgres@localhost:5432/develop`
- `postgresql://postgres@localhost:5432/chore_add_types`
Member

On these different envs, would the users also get different realtime/postgrest instances?

For preview workflows, the users would like to interact with the full stack, right?

Member Author

This completely slipped my mind. Yes, they are going to need it. Hmm, how will we keep the costs down?

Perhaps it will have to be "local only" for now? It should be manageable with the docker files.

Member

> Hmm, how will we keep the costs down?

A temporary postgrest with db-pool = 1 (one connection max) should be lightweight; it would fit on the same instance. Realtime would need to limit its pool as well, but we'd definitely need to set its systemd MemoryLimit to be really low. But I'm not sure how low it can get.

> Perhaps it will have to be "local only" for now? It should be manageable with the docker files.

Yes, local-only is a good start. Other envs can be done in a later RFC - they need more thought.

Member

We can also take advantage of systemd socket activation, which essentially lets a service sleep until a request comes in. That, coupled with a systemd timer to stop the service after some idle time (SO question), would keep resource usage to a minimum.

@soedirgo (Member)

Closing this as we're transitioning to the Go CLI.

@soedirgo closed this Oct 13, 2021
@soedirgo deleted the rfc/mvp branch on December 3, 2021