Cache filtering #2
I think it's a desirable feature, but I'm not sure the current APIs support it. To my understanding, the values returned and consumed by the cache's `extract` and `restore` methods are treated as opaque data by this library. That said, I think we could go four ways with this:

What do you think? cc: @jbaxleyiii
I think option #3 is the best way to start the filter exploration before we land on any formal placement for it. In fact, between the two teams (hermes + inmemory) we could even include them as part of this package for now. All other cache impls I have seen piggyback on inmemory. I'd be happy to help write this for inmemory. So tl;dr: let's start with helper functions based on whatever kind of filter API we want to start with, and work back towards formal additions if needed?
The initial API may be something as simple as an array of type names to include, persisting everything in the cache matching those types?
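As a strawman, such a type-name filter could be a small helper over the extracted, normalized store. This is a hedged sketch: `filterByTypenames` and the sample data below are hypothetical, not part of any released API.

```javascript
// Hypothetical helper: keep only normalized entries whose __typename is
// in an allowlist (plus ROOT_QUERY, so top-level references still resolve).
const filterByTypenames = (typenames) => (data) => {
  const allowed = new Set(typenames);
  return Object.fromEntries(
    Object.entries(data).filter(
      ([key, value]) =>
        key === 'ROOT_QUERY' || (value && allowed.has(value.__typename))
    )
  );
};

// Tiny illustrative "extracted" store:
const extracted = {
  ROOT_QUERY: { user: { type: 'id', id: 'User:1' } },
  'User:1': { __typename: 'User', name: 'Ada' },
  'Session:1': { __typename: 'Session', token: 'abc' }
};

const persisted = filterByTypenames(['User'])(extracted);
// keeps ROOT_QUERY and 'User:1'; drops 'Session:1'
```

Note that the filter operates on flat keys, not on the nested query shape, which matters for the discussion below.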
Hmm... I'm fine proceeding with option 3; i.e., exposing lifecycle hooks and delegating to other packages. Again, just going off our use case, this feature isn't going to be useful for us because we cannot express the interesting subset of the cache to persist in the absence of additional metadata, such as time initially stored or time last accessed. Could you elaborate on the query-based filtering you have in mind?
Thanks for writing up all of the options, @jamesreggio! I agree that option #1 would probably be the most beneficial in terms of avoiding breaking changes in the long term, but I do worry that it would take too long to execute. I'd like to eventually move toward option #2 once we learn more about how users want to filter the cache, but it seems like it would be easier to experiment with #3 first. I really like @jbaxleyiii's suggestion of using a query to filter down the extracted cache. Parameterized fields might be a good way to offer more fine-grained control in the future, but for now I think we can get close to what we want just by filtering on the type.
Sounds good, @peggyrayzis. I can add some hooks later tonight, and then we can consider Lerna-fying this repo to include a basic inmemory-compatible query filter?
Sounds good to me @jamesreggio! 😀 Would you be down to try Yarn workspaces? Also, I think your use case of filtering on metadata is a really interesting one that we definitely need to consider for the next iteration of cache filtering. I can see it being applied to more than just this project as a way to implement cache garbage collection.
Happy to try Yarn workspaces!
I would be looking to use this as an engine to persist/restore my cache. Therefore, I strongly support this issue as a high-priority feature!
Thanks for the concrete use case, @fbartho. We're definitely going to get this feature in ASAP!
Hey, why not use directives for this?
@giautm, that's something the Apollo team is considering for the longer term; however, it requires deep adjustments to the architecture of the Apollo cache. This module is designed to work with the existing APIs to provide a simple solution in the interim.
Not quite sure if this is one of the pieces I'm looking for or not: I have a list of Cars (for example) that I'd like to have periodically updated from the backend, but will want to use the on-device cache to service requests like Search and Filter (Ford, yellow, etc). It's important for the app to be offline-friendly, and somehow there will have to be a queue for offline actions like Messages. Event logging will be in the Link mix somewhere. What's awesome is I know Apollo Links can do it all, I just have to figure out how!
I experimented with cache filtering, but after talking to @jamesreggio, we don't think it's the best solution. Brain dumping everything so I don't forget!

Original API proposal: add a `filter` property to the `persistCache` options:

```js
import { filterBySize, filterByType } from 'apollo-cache-inmemory-filter';

const filterQuery = gql`
  {
    user {
      posts {
        title
        body
      }
    }
  }
`;

persistCache({
  storage,
  cache,
  filter: compose(filterBySize(500000), filterByType(filterQuery))
});
```

where `filterByType` runs the filter query over the extracted data with `graphql-anywhere`:

```js
export const filterByType = query => {
  const resolver = (fieldName, root) => root[fieldName];
  return data => graphql(resolver, query, data);
};
```

This is problematic because the data returned from `extract` is already flattened by the normalization process. If we were to run the filter query over it, the nested selection set wouldn't match the flat, reference-based shape of the store.

New proposal: hold off on type filters for now and add a size-based option (e.g. `maxSize`) instead.
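To make the flattening problem concrete, here is a hedged sketch of roughly what a normalized store looks like after extraction. The IDs and the reference format are illustrative and vary by cache version; this is not the exact output of any particular release.

```javascript
// Illustrative only: after normalization, nested objects are replaced by
// references, so a nested filter query like `{ user { posts { title } } }`
// cannot be matched against this flat map without dereferencing by hand.
const extracted = {
  ROOT_QUERY: {
    user: { type: 'id', id: 'User:1', generated: false }
  },
  'User:1': {
    __typename: 'User',
    posts: [{ type: 'id', id: 'Post:1', generated: false }]
  },
  'Post:1': { __typename: 'Post', title: 'Hello', body: 'World' }
};

const userRef = extracted.ROOT_QUERY.user;       // a reference, not a User
const user = extracted[userRef.id];              // manual dereference
const title = extracted[user.posts[0].id].title; // manual dereference again
```

A resolver that simply reads `root[fieldName]` would stop at `ROOT_QUERY` and return references instead of the nested objects the query expects.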
I still really would need a filter mode to prevent certain "ephemeral" types from ending up in the output. I'm perfectly fine making my filter rule apply to flattened keys. (I was planning on having all the "private" types share a common prefix.) Is there a reason this would be no good? (I think I may be making an unfair assumption about the shape of the serialized data.)
I did some thinking about this over the weekend. Is the current cache architecture composable? If I could declare that some models only get stored in MemCacheA, while others only get stored in MemCacheB, then I could configure:

```js
new apollo.Client({
  cache: composeCaches(memCacheA, memCacheB)
});
```

This would make it possible for me to "pre-filter" my caches by object type, and only persist one of the two memory caches to disk. -- Thoughts @peggyrayzis?
The cache API is synchronous, so unfortunately we wouldn't be able to compose async caches without a significant refactor. I am looking into passing a custom store to the in-memory cache, though.
Hi @allpwrfulroot — your search and filter use case can probably be handled by Apollo Client right now by choosing an appropriate `fetchPolicy`.

Hey @fbartho, I think I follow your use case, but I don't think we're going to be able to solve this problem until Apollo adds finer-grained caching metadata to the client itself. A couple of thoughts, though:

I'm sorry if it feels like I'm just making excuses for a missing feature. Trust me, I'd love to have better control over what gets cached. I just wanted to share what we're using today to make our app better (i.e., this repo) — the Apollo folks will continue working on the sophisticated stuff :)
Thanks for your response @jamesreggio -- I think you did hit the nail on the head with my concerns. I am particularly interested in your 3rd point.
Yeah, that's definitely a larger concern. Maybe @jbaxleyiii has thought about strategies for dealing with local state like this. And yeah, to expand upon our app's startup process, we basically do this:

1. On launch, increment a persisted crash counter before restoring the cache.
2. If the counter exceeds a threshold, purge the persisted cache instead of restoring it.
3. Once the app renders successfully, reset the counter to zero.

I've been meaning to write a blog post about this strategy, but hopefully that was clear enough. In the end, you're just using a counter to selectively jettison local data in hopes that the crashes eventually go away.
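The counter strategy described above can be sketched as follows. This is a hedged sketch: `CRASH_KEY`, `MAX_CRASHES`, and the storage/persistor shapes are assumptions for illustration, though `restore()` and `purge()` mirror the methods `CachePersistor` exposes.

```javascript
// Hedged sketch of crash-counter-guarded cache restoration.
// `storage` is any AsyncStorage-like key/value store; `persistor` is
// assumed to expose restore() and purge(), as CachePersistor does.
const CRASH_KEY = 'startupCrashCount'; // illustrative key name
const MAX_CRASHES = 3;                 // illustrative threshold

async function restoreCacheGuarded(storage, persistor) {
  // Count this launch as a crash until we learn otherwise.
  const crashes = Number(await storage.getItem(CRASH_KEY)) || 0;
  await storage.setItem(CRASH_KEY, String(crashes + 1));

  if (crashes >= MAX_CRASHES) {
    // Too many consecutive failed launches: jettison persisted data.
    await persistor.purge();
  } else {
    await persistor.restore();
  }
}

// Once the app has rendered successfully, clear the counter:
async function markLaunchHealthy(storage) {
  await storage.setItem(CRASH_KEY, '0');
}
```

Calling `markLaunchHealthy` only after a successful render is what makes repeated crashes accumulate and eventually trigger the purge.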
Any updates on this?
It's being considered a high-priority feature in Apollo Client 3.0. Roadmap is here: apollographql/apollo-feature-requests#33 (comment)
I did put some work into a side project which might interest you folks: https://github.com/TallerWebSolutions/apollo-cache-instorage
@lucasconstantino I love this!
Maybe it is a necrobump, but how exactly was this completed? I can't find any mention of a persistence filtering capability anywhere, neither in this project nor in apollo-client. Maybe marking it as completed was a mistake, @wtrocki? If it was indeed implemented, I would be willing to document it in this repo.
Linked issue lists all docs: #513 (comment)
One thing I would love to implement before we ship this officially is a cache filtering mechanism. I was talking about this with James and he suggested using `graphql-anywhere` to filter down the extracted store. What do you think?