Delayed tag invalidations #3116
@@ -151,6 +151,16 @@ export interface CreateApiOptions<
   * Note: requires [`setupListeners`](./setupListeners) to have been called.
   */
  refetchOnReconnect?: boolean
  /**
   * Defaults to `'delayed'`. This setting allows you to control when tags are invalidated after a mutation.
   *
   * - `'immediately'`: Queries are invalidated instantly after the mutation finishes, even if they are still running.
   *   If a query provides tags that were invalidated while it ran, it won't be re-fetched.
   * - `'delayed'`: Invalidation only happens after all queries and mutations are settled.
   *   This ensures that queries are always invalidated correctly and automatically "batches" invalidations of concurrent mutations.
   *   Note that if you constantly have some queries (or mutations) running, this can delay tag invalidations indefinitely.
   */
  invalidationBehavior?: 'delayed' | 'immediately'
❓ So this is configured at the full API level, not just per-endpoint?

Yes. I think this is a sane solution to a rare-ish case. To even notice the delay, you would need an application that constantly has very long-running queries, with mutations firing while those queries run. That smells unhealthy for performance to begin with, and I would strongly suggest the developer fix the queries, but if that's not possible, there's this escape hatch.

Or maybe I'm wrong. Do you have a particular use case in mind where a per-endpoint setting would have a strong advantage?
  /**
   * A function that is passed every dispatched action. If this returns something other than `undefined`,
   * that return value will be used to rehydrate fulfilled & errored queries.
@@ -255,6 +265,7 @@ export function buildCreateApi<Modules extends [Module<any>, ...Module<any>[]]>(
      refetchOnMountOrArgChange: false,
      refetchOnFocus: false,
      refetchOnReconnect: false,
      invalidationBehavior: 'delayed',
      ...options,
      extractRehydrationInfo,
      serializeQueryArgs(queryArgsApi) {
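Since the option lives on `CreateApiOptions` with a store-wide default, it is set once on `createApi` and applies to every endpoint. A hedged usage sketch (the `Post` endpoints, URLs, and types below are purely illustrative, not from this PR):

```typescript
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query'

const api = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  tagTypes: ['Post'],
  // 'delayed': wait until all in-flight queries and mutations have settled
  // before invalidating tags, batching concurrent mutations into one refetch.
  invalidationBehavior: 'delayed',
  endpoints: (build) => ({
    getPosts: build.query<unknown[], void>({
      query: () => '/posts',
      providesTags: ['Post'],
    }),
    addPost: build.mutation<unknown, { title: string }>({
      query: (body) => ({ url: '/posts', method: 'POST', body }),
      invalidatesTags: ['Post'],
    }),
  }),
})
```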
@@ -0,0 +1,109 @@
import { createApi, QueryStatus } from '@reduxjs/toolkit/query'
import { getLog } from 'console-testing-library'
import { actionsReducer, setupApiStore, waitMs } from './helpers'

// We need to be able to control when which query resolves to simulate race
// conditions properly; that's the purpose of this factory.
const createPromiseFactory = () => {
  const resolveQueue: (() => void)[] = []
  const createPromise = () =>
    new Promise<void>((resolve) => {
      resolveQueue.push(resolve)
    })
  const resolveOldest = () => {
    resolveQueue.shift()?.()
  }
  return { createPromise, resolveOldest }
}

const getEatenBananaPromises = createPromiseFactory()
const eatBananaPromises = createPromiseFactory()

let eatenBananas = 0
const api = createApi({
  invalidationBehavior: 'delayed',
  baseQuery: () => undefined as any,
  tagTypes: ['Banana'],
  endpoints: (build) => ({
    // Eat a banana.
    eatBanana: build.mutation<unknown, void>({
      queryFn: async () => {
        await eatBananaPromises.createPromise()
        eatenBananas += 1
        return { data: null, meta: {} }
      },
      invalidatesTags: ['Banana'],
    }),

    // Get the number of eaten bananas.
    getEatenBananas: build.query<number, void>({
      queryFn: async () => {
        const result = eatenBananas
        await getEatenBananaPromises.createPromise()
        return { data: result }
      },
      providesTags: ['Banana'],
    }),
  }),
})
const { getEatenBananas, eatBanana } = api.endpoints

const storeRef = setupApiStore(api, {
  ...actionsReducer,
})

it('invalidates a query after a corresponding mutation', async () => {
  eatenBananas = 0

  const query = storeRef.store.dispatch(getEatenBananas.initiate())
  const getQueryState = () =>
    storeRef.store.getState().api.queries[query.queryCacheKey]
  getEatenBananaPromises.resolveOldest()
  await waitMs(2)

  expect(getQueryState()?.data).toBe(0)
  expect(getQueryState()?.status).toBe(QueryStatus.fulfilled)

  const mutation = storeRef.store.dispatch(eatBanana.initiate())
  const getMutationState = () =>
    storeRef.store.getState().api.mutations[mutation.requestId]
  eatBananaPromises.resolveOldest()
  await waitMs(2)

  expect(getMutationState()?.status).toBe(QueryStatus.fulfilled)
  expect(getQueryState()?.data).toBe(0)
  expect(getQueryState()?.status).toBe(QueryStatus.pending)

  getEatenBananaPromises.resolveOldest()
  await waitMs(2)

  expect(getQueryState()?.data).toBe(1)
  expect(getQueryState()?.status).toBe(QueryStatus.fulfilled)
})

it('invalidates a query whose corresponding mutation finished while the query was in flight', async () => {
  eatenBananas = 0

  const query = storeRef.store.dispatch(getEatenBananas.initiate())
  const getQueryState = () =>
    storeRef.store.getState().api.queries[query.queryCacheKey]

  const mutation = storeRef.store.dispatch(eatBanana.initiate())
  const getMutationState = () =>
    storeRef.store.getState().api.mutations[mutation.requestId]
  eatBananaPromises.resolveOldest()
  await waitMs(2)
  expect(getMutationState()?.status).toBe(QueryStatus.fulfilled)

  getEatenBananaPromises.resolveOldest()
  await waitMs(2)
  expect(getQueryState()?.data).toBe(0)
  expect(getQueryState()?.status).toBe(QueryStatus.pending)

  // should already be refetching
  getEatenBananaPromises.resolveOldest()
  await waitMs(2)

  expect(getQueryState()?.status).toBe(QueryStatus.fulfilled)
  expect(getQueryState()?.data).toBe(1)
})
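The `createPromiseFactory` helper is what lets these tests order resolution deterministically: each call to `createPromise()` queues a resolver, and `resolveOldest()` releases waiters in FIFO order. A minimal standalone sketch (plain TypeScript, no RTK Query needed):

```typescript
// FIFO promise gate: callers await createPromise(); the test driver
// releases them one at a time with resolveOldest().
const createPromiseFactory = () => {
  const resolveQueue: (() => void)[] = []
  const createPromise = () =>
    new Promise<void>((resolve) => {
      resolveQueue.push(resolve)
    })
  const resolveOldest = () => {
    resolveQueue.shift()?.()
  }
  return { createPromise, resolveOldest }
}

const main = async () => {
  const { createPromise, resolveOldest } = createPromiseFactory()
  const order: string[] = []
  const a = createPromise().then(() => order.push('a'))
  const b = createPromise().then(() => order.push('b'))
  resolveOldest() // releases the oldest pending promise ('a')
  resolveOldest() // then 'b'
  await Promise.all([a, b])
  console.log(order.join(',')) // a,b
}
main()
```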
✋ This looks like we could end up with the same tags showing up in `pendingTagInvalidations` multiple times, which I think also means we'd end up running some extra logic when we try to invalidate. Can `pendingTagInvalidations` be a `Set` instead?

No, since tags are objects and not strings in general. The set of invalidated endpoints is already deduplicated in `selectInvalidatedBy`. We could try de-duplicating here again by converting the tag objects into canonical string forms, but I think that creates more overhead than it alleviates, except in extreme cases.

Oh, and while it's true that tags can end up in `pendingTagInvalidations` multiple times, the amount of processing required is no different than if they were invalidated immediately, so this is not a performance regression. (It's not like this code will show up in the profiler anyway.)
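To illustrate the behavior under discussion, here is a deliberately simplified model of delayed invalidation. This is NOT the actual RTK Query middleware; the `DelayedInvalidator` class, its `start`/`finish` methods, and the `Tag` shape are all invented for illustration. Only the array name `pendingTagInvalidations` and the "flush once everything has settled" rule follow the thread above.

```typescript
// Simplified model of 'delayed' invalidation: pending tags accumulate in an
// array and are flushed in one batch once nothing is in flight. Duplicate
// tags in the array are harmless because they go out in a single flush.
type Tag = { type: string; id?: string | number }

class DelayedInvalidator {
  private pendingTagInvalidations: Tag[] = []
  private inFlight = 0

  constructor(private onInvalidate: (tags: Tag[]) => void) {}

  // A query or mutation started running.
  start() {
    this.inFlight += 1
  }

  // A query or mutation settled, possibly invalidating some tags.
  finish(invalidates: Tag[] = []) {
    this.inFlight -= 1
    this.pendingTagInvalidations.push(...invalidates)
    // Flush only once *everything* has settled; duplicates are fine
    // because endpoint selection dedupes downstream anyway.
    if (this.inFlight === 0 && this.pendingTagInvalidations.length > 0) {
      const tags = this.pendingTagInvalidations
      this.pendingTagInvalidations = []
      this.onInvalidate(tags)
    }
  }
}

// Two mutations settle while a query is still in flight: their tag
// invalidations are batched into a single flush when the query settles.
const flushes: Tag[][] = []
const inv = new DelayedInvalidator((tags) => flushes.push(tags))
inv.start() // long-running query
inv.start() // mutation A
inv.finish([{ type: 'Banana' }]) // A settles; query still running, no flush
inv.start() // mutation B
inv.finish([{ type: 'Banana' }]) // B settles; still no flush
inv.finish() // query settles: one batched flush with both Banana tags
console.log(flushes.length) // 1
```

This also shows why an array beats a `Set` here: object-valued tags would defeat `Set` identity anyway, and one batched flush already bounds the work.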