Issues with Querying Associated Documents and REST Parameter Mapping in Supabase Integration #429
Ok good! Now we are finally getting into the real problems with integration. Again, I appreciate your persistence and patience here.

orderBy

I'm very curious about this. It shouldn't have worked fine, because the problem would be in the … I've added some additional documentation in #430: "The …"

SQLite Exception

This is exactly what I would've expected from the …

REST query

Now this is what I call a bug. This should actually be …
I've published a fix for …
orderBy

Using …

SQLite Exception

Yes, I am using …

REST query

I would assume so too, as it seems to make sense to include the … What works is using …
@devj3ns could you please include the full URL with sanitized values for this association query?
This is wild stuff. The SQL provider should have caught this. I just added a few unit tests to the … However, the association querying AFAIK has never been attempted in a project I've managed. SQL shouldn't have been able to order by that table…unless it ignored the qualifier in the order by and skipped straight to the column name. I don't know enough about SQL to understand why this happened or why it was still valid for you, but I guess it's valid.
Looking more closely at this. Based on your initial comment on this issue, …

```dart
const Where('customer').isExactly(
  const Where('id').isExactly(customer.id.value),
),
```

It should be

```dart
const Where('customer').isExactly(customer.id.value)
```

This is a collision, because Brick is trying to map the … What is the Supabase column type of `customer`? Is there any chance you can rename this column to `customer_id`?
That's actually wild with the association querying, thanks for adding these tests.
Oh, yes, good catch @tshedor. I unintentionally removed the …

```dart
@OfflineFirst(where: {'id': "data['customer']['id']"})
@Supabase(foreignKey: 'customer')
@Sqlite(index: true)
final Customer customer;
```

The only difference to before migrating to the Supabase integration is …

In Supabase, the type of the `customer` column is …

I agree that storing the id of the customer in a column named `customer` is not ideal and leads to confusion and the collision. I will change that. This means the annotation of the `customer` field becomes:

```dart
@OfflineFirst(where: {'id': "data['customer']['id']"})
@Supabase(foreignKey: 'customer_id')
@Sqlite(index: true)
final Customer customer;
```

Is this the correct setup? Because the generated code now contains:

```dart
@override
final fieldsToSupabaseColumns = {
  ...
  'customer': const RuntimeSupabaseColumnDefinition(
    association: true,
    columnName: 'customer',
    associationForeignKey: 'customer_id',
    associationType: Customer,
  ),
  ...
};
```

And what should the where query look like after the rename of the Supabase db column, to get all projects of a customer?
@devj3ns This looks mostly right, but the Brick API may be incorrect because you're right that the …

Let me know what option you think is best.

```dart
await repository.get<Project>(query: Query.where('customer', Where.exact('id', id)));
```
I would personally prefer option 2, as it's the most explicit, but I am open to trying option 1 first.

I just tried out #432, and it looks good so far. There is just the problem with the REST API query left. When using the following query:

```dart
await repository.getBatched<Project>(query: Query(
  where: [
    Where('customer', value: Where.exact('id', "xyz"))
  ],
  providerArgs: {
    'orderByReferencedTable': 'project.createdAt DESC'
  },
));
```

this URL is sent to the PostgREST API:

…
API Return JSON:

```json
[
{
"id": "e11d5ce4-07ae-4c90-b9e2-942f2ca3057e",
"customer": null
},
{
"id": "c1797c69-0929-445d-aaa0-b8d2abbfdfcd",
"customer": null
},
{
"id": "943a01a2-2fc6-4f83-9440-c0e5f9328207",
"customer": null
},
{
"id": "e0e2074b-d5dd-40a5-9245-a167493ea1e6",
"customer": null
},
{
"id": "86b4b666-da3a-4770-8896-574354d391f4",
"customer": null
},
{
"id": "5b78f082-b0dd-4229-8d44-5e384d7bc486",
"customer": null
},
{
"id": "8de4c1c3-2c70-48fb-b4c0-bd2b2cffcc63",
"customer": null
},
{
"id": "8317dbf4-2194-49d2-882e-bfb642b6e4af",
"customer": null
},
{
"id": "e45e7450-4fe4-4ea9-801b-a5b366060c39",
"customer": {
"id": "de8a7f34-1b22-4fd4-a26d-eb17f7e8c6f2",
"first_name": "a"
}
},
{
"id": "9438365f-4aac-445f-8375-aad54d313e2c",
"customer": null
},
{
"id": "f37438e7-cbd2-4ef8-a30f-0ce24701c5d4",
"customer": null
},
{
"id": "01a92e7e-fd16-4d1f-93be-ba50e0dd8efd",
"customer": null
},
{
"id": "78a151b0-617d-4ca9-90a3-a0d5d265dcc2",
"customer": null
},
{
"id": "782c88e2-9e7a-4722-83b9-9c0197fcad68",
"customer": null
},
{
"id": "93632d93-1853-4a4c-8f4f-b878d9361d58",
"customer": null
},
{
"id": "410772d7-1d6a-43db-aca2-713ab3a5f838",
"customer": null
},
{
"id": "416d976e-3639-42bb-b606-11426c0ca03c",
"customer": null
},
{
"id": "f2384e95-6af8-4092-ab87-dc683eb88f91",
"customer": null
},
{
"id": "8278fbad-6d8d-4173-bec7-1337f2c66c62",
"customer": null
},
{
"id": "40494088-443f-484e-be15-7c10f8d44822",
"customer": {
"id": "de8a7f34-1b22-4fd4-a26d-eb17f7e8c6f2",
"first_name": "a"
}
}
]
```
As seen above, it also returns the projects where no matching customer could be found. This is the case because an outer join is performed, meaning that projects without a matching customer are still returned with "customer": null. To only return projects matching the query, we could use an inner join like this:

…
API Return JSON:

```json
[
{
"id": "e45e7450-4fe4-4ea9-801b-a5b366060c39",
"customer": {
"id": "de8a7f34-1b22-4fd4-a26d-eb17f7e8c6f2",
"first_name": "a"
}
},
{
"id": "40494088-443f-484e-be15-7c10f8d44822",
"customer": {
"id": "de8a7f34-1b22-4fd4-a26d-eb17f7e8c6f2",
"first_name": "a"
}
}
]
```
The PostgREST docs have a section covering this: https://postgrest.org/en/v12/references/api/resource_embedding.html#top-level-filtering

@tshedor I think we should always use an inner join or a null filter when joining with a filter. What do you think?
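For illustration, here is a minimal sketch of the two embed styles using the supabase Dart client directly, assuming supabase-dart v2 (where awaiting the builder returns the rows) and the `projects`/`customers` tables from this thread; the exact query strings are assumptions, not what Brick generates:

```dart
import 'package:supabase/supabase.dart';

Future<void> compareEmbeds(SupabaseClient supabase, String customerId) async {
  // Left (outer) join embed: every project is returned, and projects without
  // a matching customer come back with "customer": null.
  final outer = await supabase
      .from('projects')
      .select('id, customer:customers(id, first_name)')
      .eq('customer.id', customerId);

  // Inner join embed (`!inner`): only projects with a matching customer are
  // returned. A disambiguating foreign-key hint (as in the customers!customer(...)
  // embed Brick generates) may also be needed if more than one relationship
  // links the tables.
  final inner = await supabase
      .from('projects')
      .select('id, customer:customers!inner(id, first_name)')
      .eq('customer.id', customerId);

  print('outer rows: ${outer.length}, inner rows: ${inner.length}');
}
```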
@devj3ns if your Supabase column is nullable, then your Dart definition should also be nullable - it should be `final Customer?` instead. Or, if you only want Projects with Customers, there should be a null filter in the query. When I provided the example, I wasn't aware that some projects could have no customer.
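As a minimal sketch, the nullable field could look like this (annotations carried over from the setup earlier in this thread; an illustration rather than the exact final definition):

```dart
// Nullable association, mirroring a nullable Supabase column. Note that the
// @OfflineFirst(where:) lookup may also need to tolerate a null 'customer'
// in the remote payload.
@OfflineFirst(where: {'id': "data['customer']['id']"})
@Supabase(foreignKey: 'customer_id')
@Sqlite(index: true)
final Customer? customer;
```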
Or maybe there's another option like …
@tshedor, my … https://postgrest.org/en/v12/references/api/resource_embedding.html#top-level-filtering
@devj3ns Ah ok, sorry, now I understand. Let me play with this today. Before I go further down the option 1 path, do you have any other arguments against it or any more support for option 2?
@tshedor, great, thanks! No, I don't have any more arguments for option 2 except for the increased explicitness. I am totally fine with option 1.
Now I have only one question left and then I will close this issue. It is related to Unexpected Remote Provider Calls for Associated Data #399.

Currently, when I fetch a list of projects with …, Brick fetches all the projects with their associated customers in one REST request:

…

But unfortunately, for each project, a separate REST request is sent to fetch the customer:

…

This means, when I batch fetch 50 projects, 50 REST requests to the customers endpoint are sent. But because we already joined the customers in the REST request in which the projects were fetched, this is redundant.

The generated adapter code:

```dart
Future<Project> _$ProjectFromSupabase(Map<String, dynamic> data,
    {required SupabaseProvider provider,
    OfflineFirstWithSupabaseRepository? repository}) async {
  return Project(
    ...
    customer: await repository!
        .getAssociation<Customer>(Query(
            where: [Where.exact('id', data['customer']['id'])],
            providerArgs: {'limit': 1}))
        .then((r) => r!.first),
    ...
  );
}
```

```dart
Future<Project> _$ProjectFromSqlite(Map<String, dynamic> data,
    {required SqliteProvider provider,
    OfflineFirstWithSupabaseRepository? repository}) async {
  return Project(
    ...
    customer: (await repository!.getAssociation<Customer>(
      Query.where(
          'primaryKey', data['customer_Customer_brick_id'] as int,
          limit1: true),
    ))!.first,
    ...
  );
}
```

The annotation of the `customer` field:

```dart
@OfflineFirst(where: {'id': "data['customer']['id']"})
@Supabase(name: 'customer_id', toGenerator: '%INSTANCE_PROPERTY%.id.value')
@Sqlite(index: true)
final Customer customer;
```

@tshedor with the new Supabase integration in place, is it possible for Brick not to make these redundant requests by default?
@devj3ns What a relief. I went to bed wondering if I'd done enough for your overnight test. Great news.
Done, not sure why that didn't publish
This is a hunch. Forget my advice to include …
@tshedor when I remove the …

```dart
Future<Project> _$ProjectFromSupabase(Map<String, dynamic> data,
    {required SupabaseProvider provider,
    OfflineFirstWithSupabaseRepository? repository}) async {
  return Project(
    ...
    customer: await CustomerAdapter().fromSupabase(data['customer'],
        provider: provider, repository: repository),
    ...
  );
}
```

This means the … However, the …

```dart
Future<Project> _$ProjectFromSqlite(Map<String, dynamic> data,
    {required SqliteProvider provider,
    OfflineFirstWithSupabaseRepository? repository}) async {
  return Project(
    ...
    customer: (await repository!.getAssociation<Customer>(
      Query.where(
          'primaryKey', data['customer_Customer_brick_id'] as int,
          limit1: true),
    ))!.first,
    ...
  );
}
```

Because of the … If I manually change this adapter code to only fetch the local data as follows, the redundant REST API requests are gone:

```dart
Future<Project> _$ProjectFromSqlite(Map<String, dynamic> data,
    {required SqliteProvider provider,
    OfflineFirstWithSupabaseRepository? repository}) async {
  return Project(
    ...
    customer: (await repository!.get<Customer>(
      query: Query.where(
          'primaryKey', data['customer_Customer_brick_id'] as int,
          limit1: true),
      policy: OfflineFirstGetPolicy.localOnly,
    ))!.first,
    ...
  );
}
```

To me, this means that the …
@devj3ns Unless I'm missing something, that's the same code block for both the generated and the manually changed version. However, I think I know the problem. You're in an esoteric part of Brick, so I'm going to explain with probably too much context for clarity. tl;dr use …

For the first app that used Brick, we discovered this same problem - our API was overwhelmed and often failed on the first fetch of a model with any associations (for this example, let's say that Customer has an Address association and Project has a Customer association). We initially resolved this by creating an eager loading class that carefully loaded the lowest associations first and then loaded the models that held those associations - load all Addresses, then load all Customers, then load all Projects. That way, the API would never do duplicate fetching.

This was fine until we realized that SQLite was now overwhelmed by fetching each association of each association of each association individually. By that point, we were nearing a key beta milestone and didn't have time to rewrite the adapter and SQL layer to do queries that fetched multiple models at once instead of fetching them by their primary key. So the parameter …

After beta passed, we found the performance of SQLite to still be good without refactoring to do batch deserializations. We also found Brick's encapsulation to be acceptable - maybe the memory provider would have the instance, and fetching it from the SQLite provider would actually be more expensive than just fetching the missing models individually. So we decided not to revisit the batch SQL and retained the existing …

For your case, I believe that SQLite is deserializing needlessly, so …

There is a chance that this theory is wrong and that the problem is that associations are not being serialized into SQLite. If that is the case, the solution will be in changing storeRemoteResults to serialize associations first. Flutter's Dart doesn't support reflection, which is what we'd need here, so I'll have to think on this solution. Let me know if …
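A rough sketch of the eager-loading order described above, using the hypothetical Address/Customer/Project models from that example (this is an illustration of the idea, not Brick's internal eager loader):

```dart
// Warm the local cache from the bottom of the association graph up, so that
// later deserializations can resolve their associations locally instead of
// triggering one remote request per association.
Future<void> warmAssociations(OfflineFirstWithSupabaseRepository repository) async {
  await repository.get<Address>();  // lowest-level association first
  await repository.get<Customer>(); // Customer -> Address now resolves locally
  await repository.get<Project>();  // Project -> Customer now resolves locally
}
```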
Oh, my mistake, I updated my message above. I wanted to use …

Thanks for the detailed explanation @tshedor! I will read through it later and test it out.
Thank you for the detailed explanation @tshedor! I created a separate issue with some more context: #433. I appreciate your work and time @tshedor. I will close this issue as it is resolved.
In my project, I have a model Project and a model Customer:

…

On one page of my app, I want to query all projects that are associated with a user. Before migrating to the Supabase integration, using the following Brick query worked fine:

…

With the new Supabase integration, there are two problems:

1. Problem: DatabaseException

When querying, the following exception is thrown:

…
2. Problem: Incorrect select in the REST query

(I removed the `orderBy` from `providerArgs` to test if the rest is working, and noticed the following.) The query contains the following select:

`?select=id,customer:customers!customer(id)&id=eq.customer.de8a7f34-1b22-4fd4-a26d-eb17f7e8c6f2`

The issue is with the `id=eq.customer.de8...` part. This should be `customer=eq.de8...` instead.

I hope this helps, unfortunately I do not have time to create a PR for this today...
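To make the intended fix concrete, here are the two filter segments side by side as plain Dart strings; the "expected" value is an assumption inferred from the fragment above, not confirmed library output:

```dart
// Filter segment Brick currently generates (copied from the select above):
const generatedFilter = 'id=eq.customer.de8a7f34-1b22-4fd4-a26d-eb17f7e8c6f2';

// Filter segment PostgREST presumably expects for this query (assumption):
const expectedFilter = 'customer=eq.de8a7f34-1b22-4fd4-a26d-eb17f7e8c6f2';
```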
I just found another, maybe related issue: when upserting a Project, there is another exception thrown:

…

Upserting all other models works fine, I guess the problem is the association.