createList not working properly when using Postgres over Knex #1898
Describe the bug
Experimenting with my first Keystone v5 app using Postgres, I'm getting an error when trying to use `createList`.
The app creates the Todo table and CRUD operations work as expected, so there's communication between the app and the DB.
I have tried setting up the Knex Adapter manually as described in the documentation with the same unsuccessful results.
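For reference, this is roughly the setup I tried; the connection string and list fields are placeholders for my actual config:

```javascript
const { Keystone } = require('@keystonejs/keystone');
const { KnexAdapter } = require('@keystonejs/adapter-knex');
const { Text, Checkbox } = require('@keystonejs/fields');

const keystone = new Keystone({
  adapter: new KnexAdapter({
    // Placeholder connection string; mine points at a local Postgres database.
    knexOptions: {
      connection: 'postgres://localhost:5432/keystone',
    },
  }),
});

keystone.createList('Todo', {
  fields: {
    name: { type: Text },
    isComplete: { type: Checkbox },
  },
});

module.exports = { keystone };
```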
By using MongoDB instead, the problem does not occur.
Thanks for submitting this. I think I can explain what's happening.
The first thing to mention is that Keystone cannot do automated migrations. We'd like to, but it's a very difficult problem because we really can't anticipate the developer's intentions while keeping Keystone as flexible and content-agnostic as it currently is.
When Keystone is first run it will create database tables for you. Currently this will either be in Mongo or in Postgres. In theory Keystone doesn't care where the data ends up and we want to develop adapters for JSON and in-memory as well as others. I mention this just for context about why we keep the adapters a completely independent concern.
So, each storage mechanism has its own quirks. Mongo is happy to store any data thrown at it. This is sometimes good: it makes it really easy to add a List like you have here, but it can also result in a mess of mismatched data when you modify field types. That becomes a problem when Keystone/GraphQL expects a string but sometimes gets a number or something else.
Postgres has different quirks. It wants to know the schema up front. So when we first run Keystone the adapter checks the schema from the lists and creates tables as required. We only do this initialisation step once. When you modify the list or add a new one the adapter won't attempt to create new fields or new tables because there could be existing data that would be lost if we did this.
At the moment our development philosophy for adapters is to keep them as simple as possible and not attempt to polyfill features to make them behave alike. That means, for example, that a JSON adapter would not have unique IDs, and that Postgres will expect data to match the initial schema. And at the moment it also means we're not attempting to handle seeding or data migrations.
I wrote a little guide about this here: https://www.keystonejs.com/guides/migrations
It has some basic strategies, and the simplest one for initial development is to drop your database and allow Keystone to re-create it for you when you make changes.
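As a rough sketch of that drop-and-recreate approach using the `knex` package directly (the table name and connection string here are assumptions, not what Keystone necessarily generates):

```javascript
// reset-db.js — a development-only sketch; adjust names to your project.
const knex = require('knex')({
  client: 'pg',
  // Placeholder connection string for a local Postgres database.
  connection: 'postgres://localhost:5432/keystone',
});

async function reset() {
  // Drop the table(s) Keystone created; on the next start the adapter
  // will recreate them from your current list definitions.
  await knex.schema.dropTableIfExists('Todo');
  await knex.destroy();
}

reset().catch(err => {
  console.error(err);
  process.exit(1);
});
```

Obviously this destroys data, so it's only appropriate while prototyping.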
In future I think maybe we can do more. If we can recognise that a list is new and that it is safe to create tables (which it usually is for new lists), perhaps the adapter should create them without the need for migrations. If we decide to do this, I think we just need to be clear about when and where, because I would not like to create confusion around this, or limit options on larger projects that have very strict and explicit processes for rolling out changes, perhaps to multiple databases. It's a challenge to balance this for every type of user.
To help further, I've got a little personal project here with an example that uses knex migrations, integrated so that they are automatically run on build and start. It also uses Keystone within the migrations to make sure hooks are run, passwords hashed, etc. when creating data.
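As an illustration, a knex migration file looks something like this (the table and column names are examples only, not Keystone's actual generated schema):

```javascript
// migrations/20191001_create_todo.js — an illustrative knex migration.
exports.up = async knex => {
  // Create the table on the way "up".
  await knex.schema.createTable('Todo', table => {
    table.increments('id').primary();
    table.text('name').notNullable();
  });
};

exports.down = async knex => {
  // Undo the change on the way "down".
  await knex.schema.dropTableIfExists('Todo');
};
```

Running `knex migrate:latest` before Keystone boots (for example, as part of your start script) applies any pending migrations.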
Finally, we have this issue where we discuss how Keystone should help manage migrations: #299
Thanks a lot for your time and detailed explanation @MadeByMike!
From my limited knowledge of MongoDB, I would have sworn it was the other way around: that you need to predefine a schema in order to handle the data, while in Postgres you can create a `jsonb` column into which you can throw whatever you want.
Independently of this, I totally see the limitations and risks you mention regarding code changes that could potentially end in unexpected data loss.
I'll check the documentation you pointed to and give MongoDB a chance otherwise.
Thanks again for your time, and congrats on the work done on Keystone so far!